Dataset columns: repo_name (string, 6-77 chars), path (string, 8-215 chars), license (string, 15 classes), cells (sequence), types (sequence)
metpy/MetPy
dev/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Natural Neighbor Verification\nWalks through the steps of Natural Neighbor interpolation to validate that the algorithmic\napproach taken in MetPy is correct.\nFind natural neighbors visual test\nA triangle is a natural neighbor for a point if the\ncircumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>_ of the\ntriangle contains that point. It is important that we correctly grab the correct triangles\nfor each point before proceeding with the interpolation.\nAlgorithmically:\n\n\nWe place all of the grid points in a KDTree. These provide worst-case O(n) time\n complexity for spatial searches.\n\n\nWe generate a Delaunay Triangulation <https://docs.scipy.org/doc/scipy/\n reference/tutorial/spatial.html#delaunay-triangulations>_\n using the locations of the provided observations.\n\n\nFor each triangle, we calculate its circumcenter and circumradius. Using\n KDTree, we then assign each grid a triangle that has a circumcenter within a\n circumradius of the grid's location.\n\n\nThe resulting dictionary uses the grid index as a key and a set of natural\n neighbor triangles in the form of triangle codes from the Delaunay triangulation.\n This dictionary is then iterated through to calculate interpolation values.\n\n\nWe then traverse the ordered natural neighbor edge vertices for a particular\n grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate\n proportional polygon areas.\n\n\nCircumcenter of (n - 1), n, grid_location\n Circumcenter of (n + 1), n, grid_location\nDetermine what existing circumcenters (ie, Delaunay circumcenters) are associated\n with vertex n, and add those as polygon vertices. Calculate the area of this polygon.\n\n\nIncrement the current edges to be checked, i.e.:\n n - 1 = n, n = n + 1, n + 1 = n + 2\n\n\nRepeat steps 5 & 6 until all of the edge combinations of 3 have been visited.\n\n\nRepeat steps 4 through 7 for each grid cell.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d\nfrom scipy.spatial.distance import euclidean\n\nfrom metpy.interpolate import geometry\nfrom metpy.interpolate.points import natural_neighbor_point", "For a test case, we generate 10 random points and observations, where the\nobservation values are just the x coordinate value times the y coordinate\nvalue divided by 1000.\nWe then create two test points (grid 0 & grid 1) at which we want to\nestimate a value using natural neighbor interpolation.\nThe locations of these observations are then used to generate a Delaunay triangulation.", "np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = (pts[:, 0] * pts[:, 0]) / 1000\n\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\n\nfor i, zval in enumerate(zp):\n ax.annotate(f'{zval} F', xy=(pts[i, 0] + 2, pts[i, 1]))\n\nsim_gridx = [30., 60.]\nsim_gridy = [30., 60.]\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\nax.set_aspect('equal', 'datalim')\nax.set_title('Triangulation of observations and test grid cell '\n 'natural neighbor interpolation values')\n\nmembers, circumcenters = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0],\n circumcenters)\nax.annotate(f'grid 0: {val:.3f}', xy=(sim_gridx[0] + 
2, sim_gridy[0]))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1],\n circumcenters)\nax.annotate(f'grid 1: {val:.3f}', xy=(sim_gridx[1] + 2, sim_gridy[1]))", "Using the circumcenter and circumcircle radius information from\n:func:metpy.interpolate.geometry.find_natural_neighbors, we can visually\nexamine the results to see if they are correct.", "def draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)\n\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\nax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)\n\nfor i, (x_t, y_t) in enumerate(circumcenters):\n r = geometry.circumcircle_radius(*tri.points[tri.simplices[i]])\n if i in members[1] and i in members[0]:\n draw_circle(ax, x_t, y_t, r, 'm-', str(i) + ': grid 1 & 2')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[0]:\n draw_circle(ax, x_t, y_t, r, 'r-', str(i) + ': grid 0')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[1]:\n draw_circle(ax, x_t, y_t, r, 'b-', str(i) + ': grid 1')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n else:\n draw_circle(ax, x_t, y_t, r, 'k:', str(i) + ': no match')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)\n\nax.set_aspect('equal', 'datalim')\nax.legend()", "What?....the circle from triangle 8 looks pretty darn close. Why isn't\ngrid 0 included in that circle?", "x_t, y_t = circumcenters[8]\nr = geometry.circumcircle_radius(*tri.points[tri.simplices[8]])\n\nprint('Distance between grid0 and Triangle 8 circumcenter:',\n euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))\nprint('Triangle 8 circumradius:', r)", "Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)\nGrab the circumcenters and radii for natural neighbors", "cc = np.array(circumcenters)\nr = np.array([geometry.circumcircle_radius(*tri.points[tri.simplices[m]]) for m in members[0]])\n\nprint('circumcenters:\\n', cc)\nprint('radii\\n', r)", "Draw the natural neighbor triangles and their circumcenters. 
Also plot a Voronoi diagram\n<https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams>_\nwhich serves as a complementary (but not necessary)\nspatial data structure that we use here simply to show areal ratios.\nNotice that the two natural neighbor triangle circumcenters are also vertices\nin the Voronoi plot (green dots), and the observations are in the polygons (blue dots).", "vor = Voronoi(list(zip(xp, yp)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\nvoronoi_plot_2d(vor, ax=ax)\n\nnn_ind = np.array([0, 5, 7, 8])\nz_0 = zp[nn_ind]\nx_0 = xp[nn_ind]\ny_0 = yp[nn_ind]\n\nfor x, y, z in zip(x_0, y_0, z_0):\n ax.annotate(f'{x}, {y}: {z:.3f} F', xy=(x, y))\n\nax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)\nax.annotate(f'{sim_gridx[0]}, {sim_gridy[0]}', xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',\n label='natural neighbor\\ncircumcenters')\n\nfor center in cc:\n ax.annotate(f'{center[0]:.3f}, {center[1]:.3f}', xy=(center[0] + 1, center[1] + 1))\n\ntris = tri.points[tri.simplices[members[0]]]\nfor triangle in tris:\n x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]\n y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]\n ax.plot(x, y, ':', linewidth=2)\n\nax.legend()\nax.set_aspect('equal', 'datalim')\n\n\ndef draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):\n \"\"\"Draw one of the natural neighbor polygons with some information.\"\"\"\n pts = np.array(polygon)[ConvexHull(polygon).vertices]\n for i, pt in enumerate(pts):\n ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],\n [pt[1], pts[(i + 1) % len(pts)][1]], 'k-')\n\n avex, avey = np.mean(pts, axis=0)\n ax.annotate(f'area: {geometry.area(pts):.3f}', xy=(avex + off_x, avey + off_y),\n fontsize=12)\n\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc1, cc2])\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])", "Put all of the generated polygon areas and their affiliated values in arrays.\nCalculate the total area of all of the generated polygons.", "areas = np.array([60.434, 448.296, 25.916, 70.647])\nvalues = np.array([0.064, 1.156, 2.809, 0.225])\ntotal_area = np.sum(areas)\nprint(total_area)", "For each polygon area, calculate its percent of total area.", "proportions = areas / total_area\nprint(proportions)", "Multiply the percent of total area by the respective values.", "contributions = proportions * values\nprint(contributions)", "The sum of this array is the interpolation value!", "interpolation_value = np.sum(contributions)\nfunction_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri,\n members[0], circumcenters)\n\nprint(interpolation_value, function_output)", "The values are slightly different due to truncating the area values in\nthe above visual example to the 3rd decimal place.", "plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
panoptes/PIAA
notebooks/ProcessObservation.ipynb
mit
[ "%load_ext autotime\n\nimport tempfile\nfrom pathlib import Path\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sb\nfrom IPython.display import display, Markdown, JSON\nfrom astropy import stats\nfrom astropy.io import fits\nfrom google.cloud import firestore\nfrom loguru import logger\nfrom matplotlib.figure import Figure\nfrom panoptes.utils.images import bayer\nfrom panoptes.utils.images import fits as fits_utils\nfrom panoptes.utils.serializers import from_json\nfrom panoptes.utils.time import current_time\n\nfrom panoptes.pipeline.observation import make_stamps\nfrom panoptes.pipeline.utils import plot\n\nfirestore_db = firestore.Client()\n\nsb.set_theme()\n\nlogger.remove()\n\ncurrent_time()", "Process an observation\nSetup the processing", "# Default input parameters (replaced in next cell)\nsequence_id = '' # e.g. PAN012_358d0f_20191005T112325\n\n# Unused option for now. See below.\n# vmag_min = 6\n# vmag_max = 14\n\nposition_column_x = 'catalog_wcs_x'\nposition_column_y = 'catalog_wcs_y'\n\ninput_bucket = 'panoptes-images-processed'\n\n# JSON string of additional settings.\nobservation_settings = '{}'\noutput_dir = tempfile.TemporaryDirectory().name\n\nimage_status = 'MATCHED'\nbase_url = 'https://storage.googleapis.com'\n\n# Set up output directory and filenames.\noutput_dir = Path(output_dir)\noutput_dir.mkdir(parents=True, exist_ok=True)\n\nobservation_store_path = output_dir / 'observation.h5'\n\nobservation_settings = from_json(observation_settings)\nobservation_settings['output_dir'] = output_dir", "Fetch all the image documents from the metadata store. We then filter based off image status and measured properties.", "unit_id, camera_id, sequence_time = sequence_id.split('_')\n\n# Get sequence information\nsequence_doc_path = f'units/{unit_id}/observations/{sequence_id}'\nsequence_doc_ref = firestore_db.document(sequence_doc_path)\n\nsequence_info = sequence_doc_ref.get().to_dict()\n\nexptime = sequence_info['total_exptime'] / sequence_info['num_images']\nsequence_info['exptime'] = int(exptime)\n\npd.json_normalize(sequence_info, sep='_').T\n\n# Get and show the metadata about the observation.\nmatched_query = sequence_doc_ref.collection('images').where('status', '==', image_status)\nmatched_docs = [d.to_dict() for d in matched_query.stream()]\nimages_df = pd.json_normalize(matched_docs, sep='_')\n\n# Set a time index.\nimages_df.time = pd.to_datetime(images_df.time)\nimages_df = images_df.set_index(['time']).sort_index()\n\nnum_frames = len(images_df)\nprint(f'Found {num_frames} images in observation')", "Filter frames\nFilter some of the frames based on the image properties as a whole.", "# Sigma filtering of certain stats\nmask_columns = [\n 'camera_colortemp',\n 'sources_num_detected',\n 'sources_photutils_fwhm_mean'\n]\n\nfor mask_col in mask_columns:\n images_df[f'mask_{mask_col}'] = stats.sigma_clip(images_df[mask_col]).mask\n display(plot.filter_plot(images_df, mask_col, sequence_id))\n \n\nimages_df['is_masked'] = False\nimages_df['is_masked'] = images_df.filter(regex='mask_*').any(1)\n\npg = sb.pairplot(images_df[['is_masked', *mask_columns]], hue='is_masked')\npg.fig.suptitle(f'Masked image properties for {sequence_id}', y=1.01)\npg.fig.set_size_inches(9, 8);\n\n# Get the unfiltered frames\nimages_df = images_df.query('is_masked==False')\n\nnum_frames = len(images_df)\nprint(f'Frames after filtering: {num_frames}')\n\nif num_frames < 10:\n raise RuntimeError(f'Cannot process with less than 10 frames,have {num_frames}')\n\npg = 
sb.pairplot(images_df[mask_columns])\npg.fig.suptitle(f'Image properties w/ clipping for {sequence_id}', y=1.01)\n\npg.fig.set_size_inches(9, 8);\n\n# Save (most of) the images info to the observation store.\nimages_df.select_dtypes(exclude='object').to_hdf(observation_store_path, key='images', format='table', errors='ignore')", "Load metadata for images", "# Build the joined metadata file.\nsources = list()\nfor image_id in images_df.uid:\n blob_path = f'gcs://{input_bucket}/{image_id.replace(\"_\", \"/\")}/sources.parquet'\n try:\n sources.append(pd.read_parquet(blob_path))\n except FileNotFoundError:\n print(f'Error finding {blob_path}, skipping')\n\nsources_df = pd.concat(sources).sort_index()\ndel sources", "Filter stars\nNow that we have images of a sufficient quality, filter the star detections themselves.\nWe get the mean metadata values for each star and use that to filter any stellar outliers based on a few properties of the observation as a whole.", "# Use the mean value for the observation for each source.\nsample_source_df = sources_df.groupby('picid').mean()\n\nnum_sources = len(sample_source_df)\nprint(f'Sources before filtering: {num_sources}')\n\nframe_count = sources_df.groupby('picid').catalog_vmag.count()\nexptime = images_df.camera_exptime.mean()\n\n# Mask sources that don't appear in all (filtered) frames.\nsample_source_df['frame_count'] = frame_count\n\nsample_source_df.eval('mask_frame_count = frame_count!=frame_count.max()', inplace=True)\n\nfig = Figure()\nfig.set_dpi(100)\nax = fig.subplots()\n\nsb.histplot(data=sample_source_df, x='frame_count', hue=f'mask_frame_count', ax=ax, legend=False)\nax.set_title(f'{sequence_id} {num_frames=}')\n\nfig.suptitle(f'Frame star detection')\nfig", "See gini coefficient info here.", "# Sigma clip columns.\nclip_columns = [\n 'catalog_vmag',\n 'photutils_gini',\n 'photutils_fwhm',\n]\n\n# Display in pair plot columns.\npair_columns = [\n 'catalog_sep',\n 'photutils_eccentricity',\n 'photutils_background_mean',\n 'catalog_wcs_x_int',\n 'catalog_wcs_y_int',\n 'is_masked',\n]\n\nfor mask_col in clip_columns:\n sample_source_df[f'mask_{mask_col}'] = stats.sigma_clip(sample_source_df[mask_col]).mask\n \n# sample_source_df.eval('mask_catalog_vmag = catalog_vmag > @vmag_max or catalog_vmag < @vmag_min', inplace=True)\n\nsample_source_df['is_masked'] = False\nsample_source_df['is_masked'] = sample_source_df.filter(regex='mask_*').any(1)\n\ndisplay(Markdown('Number of stars filtered by type (with overlap):'))\ndisplay(Markdown(sample_source_df.filter(regex='mask_').sum(0).sort_values(ascending=False).to_markdown()))\n\nfig = Figure()\nfig.set_dpi(100)\nfig.set_size_inches(10, 3)\naxes = fig.subplots(ncols=len(clip_columns), sharey=True)\nfor i, col in enumerate(clip_columns):\n sb.histplot(data=sample_source_df, x=col, hue=f'mask_{col}', ax=axes[i], legend=False)\n\nfig.suptitle(f'Filter properties for {sequence_id}')\nfig\n\npp = sb.pairplot(sample_source_df[clip_columns + ['is_masked']], hue='is_masked', plot_kws=dict(alpha=0.5))\npp.fig.suptitle(f'Filter properties for {sequence_id}', y=1.01)\npp.fig.set_dpi(100);\n\npp = sb.pairplot(sample_source_df[clip_columns + pair_columns], hue='is_masked', plot_kws=dict(alpha=0.5))\npp.fig.suptitle(f'Catalog vs detected properties for {sequence_id}', y=1.01);\n\npp = sb.pairplot(sample_source_df.query('is_masked==False')[clip_columns + pair_columns], hue='is_masked', plot_kws=dict(alpha=0.5))\npp.fig.suptitle(f'Catalog vs detected for filtered sources of {sequence_id}', y=1.01);\n\nfig = 
Figure()\nfig.set_dpi(100)\nax = fig.add_subplot()\n\nplot_data = sample_source_df.query('is_masked == True')\nsb.scatterplot(data=plot_data, \n x='catalog_wcs_x_int', \n y='catalog_wcs_y_int', \n marker='*', \n hue='photutils_fwhm',\n palette='Reds',\n edgecolor='k',\n linewidth=0.2,\n size='catalog_vmag_bin', sizes=(100, 5),\n ax=ax\n )\nax.set_title(f'Location of {len(plot_data)} outlier stars in {exptime:.0f}s for {sequence_id}')\n\nfig.set_size_inches(12, 8)\n\nfig\n\nfig = Figure()\nfig.set_dpi(100)\nax = fig.add_subplot()\n\nplot_data = sample_source_df.query('is_masked == False')\nsb.scatterplot(data=plot_data, \n x='catalog_wcs_x_int', \n y='catalog_wcs_y_int', \n marker='*', \n hue='photutils_fwhm',\n palette='Blues',\n edgecolor='k',\n linewidth=0.2,\n size='catalog_vmag_bin', sizes=(100, 5),\n ax=ax\n )\nax.set_title(f'Location of {len(plot_data)} detected stars in {exptime:.0f}s for {sequence_id}')\n\nfig.set_size_inches(12, 8)\n\nfig\n\n# Get the sources that aren't filtered.\nsources_df = sources_df.loc[sample_source_df.query('is_masked == False').index]\n\nnum_sources = len(sources_df.index.get_level_values('picid').unique())\nprint(f'Detected stars after filtering: {num_sources}')\n\n# Filter based on mean x and y movement of stars.\nposition_diffs = sources_df[['catalog_wcs_x_int', 'catalog_wcs_y_int']].groupby('picid').apply(lambda grp: grp - grp.mean())\npixel_diff_mask = stats.sigma_clip(position_diffs.groupby('time').mean()).mask\n\nx_mask = pixel_diff_mask[:, 0]\ny_mask = pixel_diff_mask[:, 1]\n\nprint(f'Filtering {sum(x_mask | y_mask)} of {num_frames} frames based on pixel movement.')\n\nfiltered_time_index = sources_df.index.get_level_values('time').unique()[~(x_mask | y_mask)]\n\n# Filter sources\nsources_df = sources_df.reset_index('picid').loc[filtered_time_index].reset_index().set_index(['picid', 'time']).sort_index()\n# Filter images\nimages_df = images_df.loc[filtered_time_index]\nnum_frames = len(filtered_time_index)\nprint(f'Now have {num_frames}')\n\nfig = Figure()\nfig.set_dpi(100)\nfig.set_size_inches(8, 4)\nax = fig.add_subplot()\nposition_diffs.groupby('time').mean().plot(marker='.', ax=ax)\n\n# Mark outliers\ntime_mean = position_diffs.groupby('time').mean()\npd.DataFrame(time_mean[x_mask]['catalog_wcs_x_int']).plot(marker='o', c='r', ls='', ax=ax, legend=False)\npd.DataFrame(time_mean[y_mask]['catalog_wcs_y_int']).plot(marker='o', c='r', ls='', ax=ax, legend=False)\n\nax.hlines(1, time_mean.index[0], time_mean.index[-1], ls='--', color='grey', alpha=0.5)\nax.hlines(-1, time_mean.index[0], time_mean.index[-1], ls='--', color='grey', alpha=0.5)\n\nif time_mean.max().max() < 6:\n ax.set_ylim([-6, 6])\n \nax.set_title(f'Mean xy pixel movement for {num_sources} stars {sequence_id}')\nax.set_xlabel('Time [utc]')\nax.set_ylabel('Difference from mean [pixel]')\nfig\n\n# Save sources to observation hdf5 file.\nsources_df.to_hdf(observation_store_path, key='sources', format='table')\ndel sources_df", "Make stamp locations", "xy_catalog = pd.read_hdf(observation_store_path, \n key='sources', \n columns=[position_column_x, position_column_y]).reset_index().groupby('picid')\n\n# Get max diff in xy positions.\nx_catalog_diff = (xy_catalog.catalog_wcs_x.max() - xy_catalog.catalog_wcs_x.min()).max()\ny_catalog_diff = (xy_catalog.catalog_wcs_y.max() - xy_catalog.catalog_wcs_y.min()).max()\n\nif x_catalog_diff >= 18 or y_catalog_diff >= 18:\n raise RuntimeError(f'Too much drift! 
{x_catalog_diff=} {y_catalog_diff}')\n\nstamp_width = 10 if x_catalog_diff < 10 else 18\nstamp_height = 10 if y_catalog_diff < 10 else 18\n\n# Determine stamp size\nstamp_size = (stamp_width, stamp_height)\nprint(f'Using {stamp_size=}.')\n\n# Get the mean positions\nxy_mean = xy_catalog.mean()\nxy_std = xy_catalog.std()\n\nxy_mean = xy_mean.rename(columns=dict(\n catalog_wcs_x=f'{position_column_x}_mean',\n catalog_wcs_y=f'{position_column_y}_mean')\n)\nxy_std = xy_std.rename(columns=dict(\n catalog_wcs_x=f'{position_column_x}_std',\n catalog_wcs_y=f'{position_column_y}_std')\n)\n\nxy_mean = xy_mean.join(xy_std)\n\nstamp_positions = xy_mean.apply(\n lambda row: bayer.get_stamp_slice(row[f'{position_column_x}_mean'],\n row[f'{position_column_y}_mean'],\n stamp_size=stamp_size,\n as_slices=False,\n ), axis=1, result_type='expand')\n\nstamp_positions[f'{position_column_x}_mean'] = xy_mean[f'{position_column_x}_mean']\nstamp_positions[f'{position_column_y}_mean'] = xy_mean[f'{position_column_y}_mean']\nstamp_positions[f'{position_column_x}_std'] = xy_mean[f'{position_column_x}_std']\nstamp_positions[f'{position_column_y}_std'] = xy_mean[f'{position_column_y}_std']\n\nstamp_positions.rename(columns={0: 'stamp_y_min',\n 1: 'stamp_y_max',\n 2: 'stamp_x_min',\n 3: 'stamp_x_max'}, inplace=True)\n\nstamp_positions.to_hdf(observation_store_path, key='positions', format='table')", "Extract stamps", "# Get list of FITS file urls\nfits_urls = [f'{base_url}/{input_bucket}/{image_id.replace(\"_\", \"/\")}/image.fits.fz' for image_id in images_df.uid]\n\n# Build the joined metadata file.\nreference_image = None\ndiff_image = None\nstack_image = None\nfor image_time, fits_url in zip(images_df.index, fits_urls):\n try:\n data = fits_utils.getdata(fits_url)\n if reference_image is None:\n reference_image = data\n diff_image = np.zeros_like(data)\n stack_image = np.zeros_like(data)\n \n # Get the diff and stack images.\n diff_image = diff_image + (data - reference_image)\n stack_image = stack_image + data\n \n # Get stamps data from positions.\n stamps = make_stamps(stamp_positions, data)\n \n # Add the time stamp to this index.\n time_index = [image_time] * num_sources\n stamps.index = pd.MultiIndex.from_arrays([stamps.index, time_index], names=('picid', 'time'))\n \n # Append directly to the observation store.\n stamps.to_hdf(observation_store_path, key='stamps', format='table', append=True)\n except Exception as e:\n print(f'Problem with {fits_url}: {e!r}')\n\nfits.HDUList([\n fits.PrimaryHDU(diff_image),\n fits.ImageHDU(stack_image / num_frames),\n ]).writeto(str(output_dir / f'stack-and-diff.fits'), overwrite=True)\n\nimage_title = sequence_id\nif 'field_name' in sequence_info:\n image_title = f'{sequence_id} \\\"{sequence_info[\"field_name\"]}\\\"'\n\nplot.image_simple(stack_image, title=f'Stack image for {image_title}')\n\nplot.image_simple(diff_image, title=f'Diff image for {image_title}')\n\n# Results\nJSON(sequence_info, expanded=True)", "Notebook environment info", "!jupyter --version\n\ncurrent_time()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nicofarr/eeg4sounds
oddball/ERP_grav_statistics-univariate.ipynb
apache-2.0
[ "import scipy.io as sio\nfrom matplotlib import pyplot as plt \n%matplotlib inline\nimport numpy as np \nimport os \n\nimport mne\nimport numpy as np\nimport scipy.io as sio\n \n\n# coding: utf-8\n\ndef _loadftfile(path):\n\n filecontents = sio.whosmat(path)\n \n strucname = filecontents[0][0]\n\n mat = sio.loadmat(path, squeeze_me=True, struct_as_record=False)\n matstruct = mat[strucname]\n return matstruct \n\n\ndef _matstruc2mne(matstruct,ch_names=None):\n \n if ch_names is None:\n ch_names=list(matstruct.label)\n \n myinfo = mne.create_info(ch_names=ch_names,sfreq=1/(matstruct.time[1] - matstruct.time[0]),ch_types='eeg')\n ev_arr = mne.EvokedArray(matstruct.individual.mean(axis=0),myinfo,tmin=-0.2) ### Specific to this dataset, 200ms baseline\n ev_arr.set_montage(mne.channels.read_montage(\"EGI_256\"))\n return ev_arr\n\n\ndef _matstruc2latency(matstruct,peak_tmin,peak_tmax,ch_names=None):\n \n if ch_names is None:\n ch_names=list(matstruct.label)\n \n myinfo = mne.create_info(ch_names=ch_names,sfreq=1/(matstruct.time[1] - matstruct.time[0]),ch_types='eeg')\n \n all_chpeaks = []\n all_lat = []\n all_amp = []\n for mat in matstruct.individual:\n \n ev_arr = mne.EvokedArray(mat,myinfo,tmin=-0.2) ### Specific to this dataset, 200ms baseline\n ev_arr.set_montage(mne.channels.read_montage(\"EGI_256\"))\n chpeak,lat,amp = ev_arr.get_peak(tmin=peak_tmin,tmax=peak_tmax,return_amplitude=True,mode='neg')\n all_lat.append(lat)\n all_chpeaks.append(chpeak)\n all_amp.append(amp)\n return all_chpeaks,all_lat,all_amp\n\n\ndef _matstruc2mne_epochs(matstruct,ch_names=None):\n \n if ch_names is None:\n ch_names=list(matstruct.label)\n \n myinfo = mne.create_info(ch_names=ch_names,sfreq=1/(matstruct.time[1] - matstruct.time[0]),ch_types='eeg')\n ev_arr = mne.EpochsArray(matstruct.individual,myinfo,tmin=-0.2) ### Specific to this dataset, 200ms baseline\n ev_arr.set_montage(mne.channels.read_montage(\"EGI_256\"))\n return ev_arr\n\ntcrop = 0.7\n\nmatfile = '/home/nfarrugi/datasets/eeg4sounds/result-eeg4sounds/oddball/grav/grav_bin_dev.mat'\n\nmat_bin_dev = _loadftfile(matfile)\n\nmatfile = '/home/nfarrugi/datasets/eeg4sounds/result-eeg4sounds/oddball/grav/grav_bin_std.mat'\n\nmat_bin_std = _loadftfile(matfile)\n\n\n\nmatfile = '/home/nfarrugi/datasets/eeg4sounds/result-eeg4sounds/oddball/grav/grav_ste_dev.mat'\n\nmat_ste_dev = _loadftfile(matfile)\n\nmatfile = '/home/nfarrugi/datasets/eeg4sounds/result-eeg4sounds/oddball/grav/grav_ste_std.mat'\n\nmat_ste_std = _loadftfile(matfile)\n\n\nev_bin_dev = _matstruc2mne(mat_bin_dev).crop(tmax=tcrop)\nev_bin_std = _matstruc2mne(mat_bin_std).crop(tmax=tcrop)\n\nev_ste_dev = _matstruc2mne(mat_ste_dev).crop(tmax=tcrop)\nev_ste_std = _matstruc2mne(mat_ste_std).crop(tmax=tcrop)\n\nmne.equalize_channels([ev_bin_dev,ev_ste_dev,ev_bin_std,ev_ste_std])\n\nepochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)\nepochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)\n\nepochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)\nepochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)\n\nmne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std])\n\nX_bin = [epochs_bin_dev.get_data().transpose(0, 2, 1),\n epochs_bin_std.get_data().transpose(0, 2, 1)]\n\nX_ste = [epochs_ste_dev.get_data().transpose(0, 2, 1),\n epochs_ste_std.get_data().transpose(0, 2, 1)]\n", "Analyse de clusters", "nperm = 1000\n\nT_obs_bin,clusters_bin,clusters_pb_bin,H0_bin = 
mne.stats.spatio_temporal_cluster_test(X_bin,threshold=None,n_permutations=nperm,out_type='mask')\n\nT_obs_ste,clusters_ste,clusters_pb_ste,H0_ste = mne.stats.spatio_temporal_cluster_test(X_ste,threshold=None,n_permutations=nperm,out_type='mask')", "On récupère les channels trouvés grace a l'analyse de clusters", "def extract_electrodes_times(clusters,clusters_pb,tmin_ind=500,tmax_ind=640,alpha=0.005,evoked = ev_bin_dev):\n\n ch_list_temp = []\n time_list_temp = []\n\n for clust,pval in zip(clusters,clusters_pb):\n if pval < alpha:\n\n for j,curline in enumerate(clust[tmin_ind:tmax_ind]):\n\n for k,el in enumerate(curline):\n if el: \n ch_list_temp.append(evoked.ch_names[k])\n time_list_temp.append(evoked.times[j+tmin_ind])\n\n return np.unique(ch_list_temp),np.unique(time_list_temp)\n\n\nchannels_deviance_ste,times_deviance_ste=extract_electrodes_times(clusters_ste,clusters_pb_ste)\n\nchannels_deviance_bin,times_deviance_bin=extract_electrodes_times(clusters_bin,clusters_pb_bin)\n\nprint(channels_deviance_bin),print(times_deviance_bin)\n\nprint(channels_deviance_ste),print(times_deviance_ste)\n\ntimes_union = np.union1d(times_deviance_bin,times_deviance_ste)\n\nch_union = np.unique(np.hstack([channels_deviance_bin,channels_deviance_ste]))\n\n\n\nprint(ch_union)\n\n#Selecting channels \nepochs_bin_dev_ch = epochs_bin_dev.pick_channels(ch_union)\nepochs_bin_std_ch = epochs_bin_std.pick_channels(ch_union)\nepochs_ste_dev_ch = epochs_ste_dev.pick_channels(ch_union)\nepochs_ste_std_ch = epochs_ste_std.pick_channels(ch_union)\n\nX_diff = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),\n epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]\n\nX_diff_ste_bin = X_diff[1]-X_diff[0]\n\n\nepochs_bin_dev_ch.plot_sensors(show_names=True)\nplt.show()\n\nroi = ['E117','E116','E108','E109','E151','E139','E141','E152','E110','E131','E143','E154','E142','E153','E140','E127','E118']\n\nroi_frontal = ['E224','E223','E2','E4','E5','E6','E13','E14','E15','E20','E21','E27','E28','E30','E36','E40','E41']\n\nlen(roi_frontal),len(roi)", "One sample ttest FDR corrected (per electrode)", "from scipy.stats import ttest_1samp\nfrom mne.stats import bonferroni_correction,fdr_correction\n\ndef ttest_amplitude(X,times_ind,ch_names,times):\n\n # Selecting time points and averaging over time \n amps = X[:,times_ind,:].mean(axis=1)\n \n T, pval = ttest_1samp(amps, 0)\n alpha = 0.05\n\n n_samples, n_tests= amps.shape\n threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)\n\n reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)\n threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)\n\n reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')\n \n mask_fdr = pval_fdr < 0.05\n mask_bonf = pval_bonferroni < 0.05\n\n print('FDR from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))\n for i,curi in enumerate(mask_fdr):\n if curi:\n print(\"Channel %s, T = %0.2f, p = %0.3f \" % (ch_names[i], T[i],pval_fdr[i]))\n \n \n print('Bonferonni from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))\n for i,curi in enumerate(mask_bonf):\n if curi:\n print(\"Channel %s, T = %0.2f, p = %0.3f \" % (ch_names[i], T[i],pval_bonferroni[i]))\n \n \n \n return T,pval,pval_fdr,pval_bonferronia\n\ndef ttest_amplitude_roi(X,times_ind,ch_names_roi,times):\n\n print(X.shape)\n # Selecting time points and averaging over time \n amps = X[:,times_ind,:].mean(axis=1)\n 
\n # averaging over channels\n amps = amps.mean(axis=1)\n \n T, pval = ttest_1samp(amps, 0)\n alpha = 0.05\n\n n_samples, _, n_tests= X.shape\n \n print('Uncorrected from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))\n print(\"T = %0.2f, p = %0.3f \" % (T,pval))\n \n \n return T,pval,pval_fdr,pval_bonferroni", "Tests de 280 a 440, par fenetres de 20 ms avec chevauchement de 10 ms", "toi = np.arange(0.28,0.44,0.001)\ntoi_index = ev_bin_dev.time_as_index(toi)\n\nwsize = 20\nwstep = 10\n\ntoi", "Printing and preparing all time windows", "all_toi_indexes = []\n\nfor i in range(14):\n print(toi[10*i],toi[10*i + 20])\n cur_toi_ind = range(10*i+1,(10*i+21))\n all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))\n \nprint(toi[10*14],toi[10*14 + 19])\ncur_toi_ind = range(10*14+1,(10*14+19))\nall_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))", "Tests on each time window", "for cur_timewindow in all_toi_indexes:\n T,pval,pval_fdr,pval_bonferroni = ttest_amplitude(X_diff_ste_bin,cur_timewindow,epochs_bin_dev_ch.ch_names,times=epochs_bin_dev_ch.times)", "On a channel subset (ROI) - average over channels\nParietal roi", "#Selecting channels \n\nepochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)\nepochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)\n\nepochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)\nepochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)\n\nmne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std])\n\nepochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi)\nepochs_bin_std_ch = epochs_bin_std.pick_channels(roi)\nepochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi)\nepochs_ste_std_ch = epochs_ste_std.pick_channels(roi)\n\nX_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),\n epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]\n\nX_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0]\n\nfor cur_timewindow in all_toi_indexes:\n T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times)\n\ngrav_bin_dev = epochs_bin_dev_ch.average()\ngrav_bin_std = epochs_bin_std_ch.average()\ngrav_ste_dev = epochs_ste_dev_ch.average()\ngrav_ste_std = epochs_ste_std_ch.average()\n\nevoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std],\n weights='equal')\n\nevoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std],\n weights='equal')\n\n\nmne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])\nplt.show()\n\nmne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])\nplt.show()", "Frontal roi", "#Selecting channels \n\nepochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)\nepochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)\n\nepochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)\nepochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)\n\nmne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std])\n\nepochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi_frontal)\nepochs_bin_std_ch = epochs_bin_std.pick_channels(roi_frontal)\nepochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi_frontal)\nepochs_ste_std_ch = epochs_ste_std.pick_channels(roi_frontal)\n\nX_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 
2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),\n epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]\n\nX_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0]\n\nfor cur_timewindow in all_toi_indexes:\n T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times)\n\n\n\ngrav_bin_dev = epochs_bin_dev_ch.average()\ngrav_bin_std = epochs_bin_std_ch.average()\ngrav_ste_dev = epochs_ste_dev_ch.average()\ngrav_ste_std = epochs_ste_std_ch.average()\n\nevoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std],\n weights='equal')\n\nevoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std],\n weights='equal')\n\n\nmne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])\nplt.show()\n\nmne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])\nplt.show()\n\nmne.viz.plot_compare_evokeds?\n\nfrom scipy import stats\nfrom mne.stats import bonferroni_correction,fdr_correction\nT, pval = ttest_1samp(X_diff_ste_bin, 0)\nalpha = 0.05\n\nn_samples, n_tests,_ = X_diff_ste_bin.shape\nthreshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)\n\nreject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)\nthreshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)\n\nreject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')\n#threshold_fdr = np.min(np.abs(T)[reject_fdr])\n\nmasking_mat = pval<0.05\n\nTbis = np.zeros_like(T)\nTbis[masking_mat] = T[masking_mat]\n\n\nplt.matshow(Tbis.T,cmap=plt.cm.RdBu_r)\nplt.colorbar()\nplt.show()\n\nplt.matshow(-np.log10(pval).T)\nplt.colorbar()", "a faire :\n- figures au propre (avec bandes de sig) \n- stats : refaire des stats de ROI avec une TW unique pour P3a et une pour P3b (reprendre par rapport à tous les sujets) \n- retenter les clusters sur X_diff\n- faire graph des T value au propre ?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/deep-learning/theano-tutorial/scan_tutorial/scan_tutorial.ipynb
mit
[ "Introduction to Scan in Theano\nCredits: Forked from summerschool2015 by mila-udem\nIn short\n\nMechanism to perform loops in a Theano graph\nSupports nested loops and reusing results from previous iterations \nHighly generic\n\nImplementation\nA Theano function graph is composed of two types of nodes; Variable nodes which represent data and Apply node which apply Ops (which represent some computation) to Variables to produce new Variables.\nFrom this point of view, a node that applies a Scan op is just like any other. Internally, however, it is very different from most Ops.\nInside a Scan op is yet another Theano graph which represents the computation to be performed at every iteration of the loop. During compilation, that graph is compiled into a function and, during execution, the Scan op will call that function repeatedly on its inputs to produce its outputs.\nExample 1 : As simple as it gets\nScan's interface is complex and, thus, best introduced by examples. So, let's dive right in and start with a simple example; perform an element-wise multiplication between two vectors. \nThis particular example is simple enough that Scan is not the best way to do things but we'll gradually work our way to more complex examples where Scan gets more interesting.\nLet's first setup our use case by defining Theano variables for the inputs :", "import theano\nimport theano.tensor as T\nimport numpy as np\n\nvector1 = T.vector('vector1')\nvector2 = T.vector('vector2')", "Next, we call the scan() function. It has many parameters but, because our use case is simple, we only need two of them. We'll introduce other parameters in the next examples.\nThe parameter sequences allows us to specify variables that Scan should iterate over as it loops. The first iteration will take as input the first element of every sequence, the second iteration will take as input the second element of every sequence, etc. These individual element have will have one less dimension than the original sequences. For example, for a matrix sequence, the individual elements will be vectors.\nThe parameter fn receives a function or lambda expression that expresses the computation to do at every iteration. It operates on the symbolic inputs to produce symbolic outputs. It will only ever be called once, to assemble the Theano graph used by Scan at every the iterations.\nSince we wish to iterate over both vector1 and vector2 simultaneously, we provide them as sequences. This means that every iteration will operate on two inputs: an element from vector1 and the corresponding element from vector2. \nBecause what we want is the elementwise product between the vectors, we provide a lambda expression that, given an element a from vector1 and an element b from vector2 computes and return the product.", "output, updates = theano.scan(fn=lambda a, b : a * b,\n sequences=[vector1, vector2])", "Calling scan(), we see that it returns two outputs.\nThe first output contains the outputs of fn from every timestep concatenated into a tensor. In our case, the output of a single timestep is a scalar so output is a vector where output[i] is the output of the i-th iteration.\nThe second output details if and how the execution of the Scan updates any shared variable in the graph. It should be provided as an argument when compiling the Theano function.", "f = theano.function(inputs=[vector1, vector2],\n outputs=output,\n updates=updates)", "If updates is omitted, the state of any shared variables modified by Scan will not be updated properly. 
Random number sampling, for instance, relies on shared variables. If updates is not provided, the state of the random number generator won't be updated properly and the same numbers might be sampled repeatedly. Always provide updates when compiling your Theano function.\nNow that we've defined how to do elementwise multiplication with Scan, we can see that the result is as expected :", "vector1_value = np.arange(0, 5).astype(theano.config.floatX) # [0,1,2,3,4]\nvector2_value = np.arange(1, 6).astype(theano.config.floatX) # [1,2,3,4,5]\nprint(f(vector1_value, vector2_value))", "An interesting thing is that we never explicitly told Scan how many iteration it needed to run. It was automatically inferred; when given sequences, Scan will run as many iterations as the length of the shortest sequence :", "print(f(vector1_value, vector2_value[:4]))", "Example 2 : Non-sequences\nIn this example, we introduce another of Scan's features; non-sequences. To demonstrate how to use them, we use Scan to compute the activations of a linear MLP layer over a minibatch.\nIt is not yet a use case where Scan is truly useful but it introduces a requirement that sequences cannot fulfill; if we want to use Scan to iterate over the minibatch elements and compute the activations for each of them, then we need some variables (the parameters of the layer), to be available 'as is' at every iteration of the loop. We do not want Scan to iterate over them and give only part of them at every iteration.\nOnce again, we begin by setting up our Theano variables :", "X = T.matrix('X') # Minibatch of data\nW = T.matrix('W') # Weights of the layer\nb = T.vector('b') # Biases of the layer", "For the sake of variety, in this example we define the computation to be done at every iteration of the loop using a Python function, step(), instead of a lambda expression.\nTo have the full weight matrix W and the full bias vector b available at every iteration, we use the argument non_sequences. Contrary to sequences, non-sequences are not iterated upon by Scan. Every non-sequence is passed as input to every iteration.\nThis means that our step() function will need to operate on three symbolic inputs; one for our sequence X and one for each of our non-sequences W and b. \nThe inputs that correspond to the non-sequences are always last and in the same order at the non-sequences are provided to Scan. This means that the correspondence between the inputs of the step() function and the arguments to scan() is the following : \n\nv : individual element of the sequence X \nW and b : non-sequences W and b, respectively", "def step(v, W, b):\n return T.dot(v, W) + b\n\noutput, updates = theano.scan(fn=step,\n sequences=[X],\n non_sequences=[W, b])", "We can now compile our Theano function and see that it gives the expected results.", "f = theano.function(inputs=[X, W, b],\n outputs=output,\n updates=updates)\n\nX_value = np.arange(-3, 3).reshape(3, 2).astype(theano.config.floatX)\nW_value = np.eye(2).astype(theano.config.floatX)\nb_value = np.arange(2).astype(theano.config.floatX)\nprint(f(X_value, W_value, b_value))", "Example 3 : Reusing outputs from the previous iterations\nIn this example, we will use Scan to compute a cumulative sum over the first dimension of a matrix $M$. 
This means that the output will be a matrix $S$ in which the first row will be equal to the first row of $M$, the second row will be equal to the sum of the two first rows of $M$, and so on.\nAnother way to express this, which is the way we will implement here, is that $S[t] = S[t-1] + M[t]$. Implementing this with Scan would involve iterating over the rows of the matrix $M$ and, at every iteration, reuse the cumulative row that was output at the previous iteration and return the sum of it and the current row of $M$.\nIf we assume for a moment that we can get Scan to provide the output value from the previous iteration as an input for every iteration, implementing a step function is simple :", "def step(m_row, cumulative_sum):\n return m_row + cumulative_sum", "The trick part is informing Scan that our step function expects as input the output of a previous iteration. To achieve this, we need to use a new parameter of the scan() function: outputs_info. This parameter is used to tell Scan how we intend to use each of the outputs that are computed at each iteration.\nThis parameter can be omitted (like we did so far) when the step function doesn't depend on any output of a previous iteration. However, now that we wish to have recurrent outputs, we need to start using it.\noutputs_info takes a sequence with one element for every output of the step() function :\n* For a non-recurrent output (like in every example before this one), the element should be None.\n* For a simple recurrent output (iteration $t$ depends on the value at iteration $t-1$), the element must be a tensor. Scan will interpret it as being an initial state for a recurrent output and give it as input to the first iteration, pretending it is the output value from a previous iteration. For subsequent iterations, Scan will automatically handle giving the previous output value as an input.\nThe step() function needs to expect one additional input for each simple recurrent output. These inputs correspond to outputs from previous iteration and are always after the inputs that correspond to sequences but before those that correspond to non-sequences. The are received by the step() function in the order in which the recurrent outputs are declared in the outputs_info sequence.", "M = T.matrix('X')\ns = T.vector('s') # Initial value for the cumulative sum\n\noutput, updates = theano.scan(fn=step,\n sequences=[M],\n outputs_info=[s])", "We can now compile and test the Theano function :", "f = theano.function(inputs=[M, s],\n outputs=output,\n updates=updates)\n\nM_value = np.arange(9).reshape(3, 3).astype(theano.config.floatX)\ns_value = np.zeros((3, ), dtype=theano.config.floatX)\nprint(f(M_value, s_value))", "An important thing to notice here, is that the output computed by the Scan does not include the initial state that we provided. It only outputs the states that it has computed itself.\nIf we want to have both the initial state and the computed states in the same Theano variable, we have to join them ourselves.\nExample 4 : Reusing outputs from multiple past iterations\nThe Fibonacci sequence is a sequence of numbers F where the two first numbers both 1 and every subsequence number is defined as such : $F_n = F_{n-1} + F_{n-2}$. Thus, the Fibonacci sequence goes : 1, 1, 2, 3, 5, 8, 13, ...\nIn this example, we will cover how to compute part of the Fibonacci sequence using Scan. Most of the tools required to achieve this have been introduced in the previous examples. 
The only one missing is the ability to use, at iteration $i$, outputs from iterations older than $i-1$.\nAlso, since every example so far had only one output at every iteration of the loop, we will also compute, at each timestep, the ratio between the new term of the Fibonacci sequence and the previous term.\nWriting an appropriate step function given two inputs, representing the two previous terms of the Fibonacci sequence, is easy:", "def step(f_minus2, f_minus1):\n new_f = f_minus2 + f_minus1\n ratio = new_f / f_minus1\n return new_f, ratio", "The next step is defining the value of outputs_info.\nRecall that, for non-recurrent outputs, the value is None and, for simple recurrent outputs, the value is a single initial state. For general recurrent outputs, where iteration $t$ may depend on multiple past values, the value is a dictionary. That dictionary has two values:\n* taps : list declaring which previous values of that output every iteration will need. [-3, -2, -1] would mean every iteration should take as input the last 3 values of that output. [-2] would mean every iteration should take as input the value of that output from two iterations ago.\n* initial : tensor of initial values. If every initial value has $n$ dimensions, initial will be a single tensor of $n+1$ dimensions with as many initial values as the oldest requested tap. In the case of the Fibonacci sequence, the individual initial values are scalars so the initial will be a vector. \nIn our example, we have two outputs. The first output is the next computed term of the Fibonacci sequence so every iteration should take as input the two last values of that output. The second output is the ratio between successive terms and we don't reuse its value so this output is non-recurrent. We define the value of outputs_info as such :", "f_init = T.fvector()\noutputs_info = [dict(initial=f_init, taps=[-2, -1]),\n None]", "Now that we've defined the step function and the properties of our outputs, we can call the scan() function. Because the step() function has multiple outputs, the first output of scan() function will be a list of tensors: the first tensor containing all the states of the first output and the second tensor containing all the states of the second input.\nIn every previous example, we used sequences and Scan automatically inferred the number of iterations it needed to run from the length of these\nsequences. Now that we have no sequence, we need to explicitly tell Scan how many iterations to run using the n_step parameter. The value can be real or symbolic.", "output, updates = theano.scan(fn=step,\n outputs_info=outputs_info,\n n_steps=10)\n\nnext_fibonacci_terms = output[0]\nratios_between_terms = output[1]", "Let's compile our Theano function which will take a vector of consecutive values from the Fibonacci sequence and compute the next 10 values :", "f = theano.function(inputs=[f_init],\n outputs=[next_fibonacci_terms, ratios_between_terms],\n updates=updates)\n\nout = f([1, 1])\nprint(out[0])\nprint(out[1])", "Precisions about the order of the arguments to the step function\nWhen we start using many sequences, recurrent outputs and non-sequences, it's easy to get confused regarding the order in which the step function receives the corresponding inputs. 
Below is the full order:\n\nElement from the first sequence\n...\nElement from the last sequence\nFirst requested tap from first recurrent output\n...\nLast requested tap from first recurrent output\n...\nFirst requested tap from last recurrent output\n...\nLast requested tap from last recurrent output\nFirst non-sequence\n...\nLast non-sequence\n\nWhen to use Scan and when not to\nScan is not appropriate for every problem. Here's some information to help you figure out if Scan is the best solution for a given use case.\nExecution speed\nUsing Scan in a Theano function typically makes it slighly slower compared to the equivalent Theano graph in which the loop is unrolled. Both of these approaches tend to be much slower than a vectorized implementation in which large chunks of the computation can be done in parallel.\nCompilation speed\nScan also adds an overhead to the compilation, potentially making it slower, but using it can also dramatically reduce the size of your graph, making compilation much faster. In the end, the effect of Scan on compilation speed will heavily depend on the size of the graph with and without Scan.\nThe compilation speed of a Theano function using Scan will usually be comparable to one in which the loop is unrolled if the number of iterations is small. It the number of iterations is large, however, the compilation will usually be much faster with Scan.\nIn summary\nIf you have one of the following cases, Scan can help :\n* A vectorized implementation is not possible (due to the nature of the computation and/or memory usage)\n* You want to do a large or variable number of iterations\nIf you have one of the following cases, you should consider other options :\n* A vectorized implementation could perform the same computation => Use the vectorized approach. It will often be faster during both compilation and execution.\n* You want to do a small, fixed, number of iterations (ex: 2 or 3) => It's probably better to simply unroll the computation\nExercises\nExercise 1 - Computing a polynomial\nIn this exercise, the initial version already works. It computes the value of a polynomial ($n_0 + n_1 x + n_2 x^2 + ... 
$) of at most 10000 degrees given the coefficients of the various terms and the value of x.\nYou must modify it such that the reduction (the sum() call) is done by Scan.", "coefficients = theano.tensor.vector(\"coefficients\")\nx = T.scalar(\"x\")\nmax_coefficients_supported = 10000\n\ndef step(coeff, power, free_var):\n return coeff * free_var ** power\n\n# Generate the components of the polynomial\nfull_range=theano.tensor.arange(max_coefficients_supported)\ncomponents, updates = theano.scan(fn=step,\n outputs_info=None,\n sequences=[coefficients, full_range],\n non_sequences=x)\n\npolynomial = components.sum()\ncalculate_polynomial = theano.function(inputs=[coefficients, x],\n outputs=polynomial,\n updates=updates)\n\ntest_coeff = np.asarray([1, 0, 2], dtype=theano.config.floatX)\nprint(calculate_polynomial(test_coeff, 3))\n# 19.0", "Solution : run the cell below to display the solution to this exercise.", "%load scan_ex1_solution.py", "Exercise 2 - Sampling without replacement\nIn this exercise, the goal is to implement a Theano function that :\n* takes as input a vector of probabilities and a scalar\n* performs sampling without replacements from those probabilities as many times as the value of the scalar\n* returns a vector containing the indices of the sampled elements.\nPartial code is provided to help with the sampling of random numbers since this is not something that was covered in this tutorial.", "probabilities = T.vector()\nnb_samples = T.iscalar()\n\nrng = T.shared_randomstreams.RandomStreams(1234)\n\ndef sample_from_pvect(pvect):\n \"\"\" Provided utility function: given a symbolic vector of\n probabilities (which MUST sum to 1), sample one element\n and return its index.\n \"\"\"\n onehot_sample = rng.multinomial(n=1, pvals=pvect)\n sample = onehot_sample.argmax()\n return sample\n\ndef set_p_to_zero(pvect, i):\n \"\"\" Provided utility function: given a symbolic vector of\n probabilities and an index 'i', set the probability of the\n i-th element to 0 and renormalize the probabilities so they\n sum to 1.\n \"\"\"\n new_pvect = T.set_subtensor(pvect[i], 0.)\n new_pvect = new_pvect / new_pvect.sum()\n return new_pvect\n \n\n# TODO use Scan to sample from the vector of probabilities and\n# symbolically obtain 'samples' the vector of sampled indices.\nsamples = None\n\n# Compiling the function\nf = theano.function(inputs=[probabilities, nb_samples],\n outputs=[samples])\n\n# Testing the function\ntest_probs = np.asarray([0.6, 0.3, 0.1], dtype=theano.config.floatX)\nfor i in range(10):\n print(f(test_probs, 2))", "Solution : run the cell below to display the solution to this exercise.", "%load scan_ex2_solution.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NathanYee/ThinkBayes2
code/chap02mine.ipynb
gpl-2.0
[ "Think Bayes: Chapter 2\nThis notebook presents example code and exercise solutions for Think Bayes.\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "% matplotlib inline\n\nfrom thinkbayes2 import Hist, Pmf, Suite", "The Pmf class\nI'll start by making a Pmf that represents the outcome of a six-sided die. Initially there are 6 values with equal probability.", "pmf = Pmf()\nfor x in [1,2,3,4,5,6]:\n pmf[x] = 1\n \npmf.Print()", "To be true probabilities, they have to add up to 1. So we can normalize the Pmf:", "pmf.Normalize()", "The return value from Normalize is the sum of the probabilities before normalizing.", "pmf.Print()", "A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes:", "pmf = Pmf([1,2,3,4,5,6])\npmf.Print()", "To extract a value from a Pmf, you can use Prob", "pmf.Prob(1)", "Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.", "pmf[1]", "The cookie problem\nHere's a Pmf that represents the prior distribution.", "pmf = Pmf()\npmf['Bowl 1'] = 0.5\npmf['Bowl 2'] = 0.5\npmf.Print()", "And we can update it using Mult", "pmf.Mult('Bowl 1', 0.75)\npmf.Mult('Bowl 2', 0.5)\npmf.Print()", "Or here's the shorter way to construct the prior.", "pmf = Pmf(['Bowl 1', 'Bowl 2'])\npmf.Print()", "And we can use *= for the update.", "pmf['Bowl 1'] *= 0.75\npmf['Bowl 2'] *= 0.5\npmf.Print()", "Either way, we have to normalize the posterior distribution.", "pmf.Normalize()\npmf.Print()", "The Bayesian framework\nHere's the same computation encapsulated in a class.", "class Cookie(Pmf):\n \"\"\"A map from string bowl ID to probablity.\"\"\"\n\n def __init__(self, hypos):\n \"\"\"Initialize self.\n\n hypos: sequence of string bowl IDs\n \"\"\"\n Pmf.__init__(self)\n for hypo in hypos:\n self.Set(hypo, 1)\n self.Normalize()\n\n def Update(self, data):\n \"\"\"Updates the PMF with new data.\n\n data: string cookie type\n \"\"\"\n for hypo in self.Values():\n like = self.Likelihood(data, hypo)\n self.Mult(hypo, like)\n self.Normalize()\n\n mixes = {\n 'Bowl 1':dict(vanilla=0.75, chocolate=0.25),\n 'Bowl 2':dict(vanilla=0.5, chocolate=0.5),\n }\n\n def Likelihood(self, data, hypo):\n \"\"\"The likelihood of the data under the hypothesis.\n\n data: string cookie type\n hypo: string bowl ID\n \"\"\"\n mix = self.mixes[hypo]\n like = mix[data]\n return like", "We can confirm that we get the same result.", "pmf = Cookie(['Bowl 1', 'Bowl 2'])\npmf.Update('vanilla')\npmf.Print()", "But this implementation is more general; it can handle any sequence of data.", "dataset = ['vanilla', 'chocolate', 'vanilla']\nfor data in dataset:\n pmf.Update(data)\n \npmf.Print()", "The Monty Hall problem\nThe Monty Hall problem might be the most contentious question in\nthe history of probability. The scenario is simple, but the correct\nanswer is so counterintuitive that many people just can't accept\nit, and many smart people have embarrassed themselves not just by\ngetting it wrong but by arguing the wrong side, aggressively,\nin public.\nMonty Hall was the original host of the game show Let's Make a\nDeal. The Monty Hall problem is based on one of the regular\ngames on the show. If you are on the show, here's what happens:\n\n\nMonty shows you three closed doors and tells you that there is a\n prize behind each door: one prize is a car, the other two are less\n valuable prizes like peanut butter and fake finger nails. 
The\n prizes are arranged at random.\n\n\nThe object of the game is to guess which door has the car. If\n you guess right, you get to keep the car.\n\n\nYou pick a door, which we will call Door A. We'll call the\n other doors B and C.\n\n\nBefore opening the door you chose, Monty increases the\n suspense by opening either Door B or C, whichever does not\n have the car. (If the car is actually behind Door A, Monty can\n safely open B or C, so he chooses one at random.)\n\n\nThen Monty offers you the option to stick with your original\n choice or switch to the one remaining unopened door.\n\n\nThe question is, should you \"stick\" or \"switch\" or does it\nmake no difference?\nMost people have the strong intuition that it makes no difference.\nThere are two doors left, they reason, so the chance that the car\nis behind Door A is 50%.\nBut that is wrong. In fact, the chance of winning if you stick\nwith Door A is only 1/3; if you switch, your chances are 2/3.\nHere's a class that solves the Monty Hall problem.", "class Monty(Pmf):\n \"\"\"Map from string location of car to probability\"\"\"\n\n def __init__(self, hypos):\n \"\"\"Initialize the distribution.\n\n hypos: sequence of hypotheses\n \"\"\"\n Pmf.__init__(self)\n for hypo in hypos:\n self.Set(hypo, 1)\n self.Normalize()\n\n def Update(self, data):\n \"\"\"Updates each hypothesis based on the data.\n\n data: any representation of the data\n \"\"\"\n for hypo in self.Values():\n like = self.Likelihood(data, hypo)\n self.Mult(hypo, like)\n self.Normalize()\n\n def Likelihood(self, data, hypo):\n \"\"\"Compute the likelihood of the data under the hypothesis.\n\n hypo: string name of the door where the prize is\n data: string name of the door Monty opened\n \"\"\"\n if hypo == data:\n return 0\n elif hypo == 'A':\n return 0.5\n else:\n return 1", "And here's how we use it.", "pmf = Monty('ABC')\npmf.Update('B')\npmf.Print()", "The Suite class\nMost Bayesian updates look pretty much the same, especially the Update method. So we can encapsulate the framework in a class, Suite, and create new classes that extend it.\nChild classes of Suite inherit Update and provide Likelihood. So here's the short version of Monty", "class Monty(Suite):\n\n def Likelihood(self, data, hypo):\n if hypo == data:\n return 0\n elif hypo == 'A':\n return 0.5\n else:\n return 1", "And it works.", "pmf = Monty('ABC')\npmf.Update('B')\npmf.Print()", "The M&M problem\nM&Ms are small candy-coated chocolates that come in a variety of\ncolors. Mars, Inc., which makes M&Ms, changes the mixture of\ncolors from time to time.\nIn 1995, they introduced blue M&Ms. Before then, the color mix in\na bag of plain M&Ms was 30% Brown, 20% Yellow, 20% Red, 10%\nGreen, 10% Orange, 10% Tan. Afterward it was 24% Blue , 20%\nGreen, 16% Orange, 14% Yellow, 13% Red, 13% Brown.\nSuppose a friend of mine has two bags of M&Ms, and he tells me\nthat one is from 1994 and one from 1996. He won't tell me which is\nwhich, but he gives me one M&M from each bag. One is yellow and\none is green. 
What is the probability that the yellow one came\nfrom the 1994 bag?\nHere's a solution:", "class M_and_M(Suite):\n \"\"\"Map from hypothesis (A or B) to probability.\"\"\"\n\n mix94 = dict(brown=30,\n yellow=20,\n red=20,\n green=10,\n orange=10,\n tan=10,\n blue=0)\n\n mix96 = dict(blue=24,\n green=20,\n orange=16,\n yellow=14,\n red=13,\n brown=13,\n tan=0)\n\n hypoA = dict(bag1=mix94, bag2=mix96)\n hypoB = dict(bag1=mix96, bag2=mix94)\n\n hypotheses = dict(A=hypoA, B=hypoB)\n\n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n hypo: string hypothesis (A or B)\n data: tuple of string bag, string color\n \"\"\"\n bag, color = data\n mix = self.hypotheses[hypo][bag]\n like = mix[color]\n return like", "And here's an update:", "suite = M_and_M('AB')\nsuite.Update(('bag1', 'yellow'))\nsuite.Update(('bag2', 'green'))\nsuite.Print()", "Exercise: Suppose you draw another M&M from bag1 and it's blue. What can you conclude? Run the update to confirm your intuition.", "suite.Update(('bag1', 'blue'))\nsuite.Print()", "Exercise: Now suppose you draw an M&M from bag2 and it's blue. What does that mean? Run the update to see what happens.", "suite.Update(('bag2', 'blue'))\nsuite.Print()", "Exercises\nExercise: This one is from one of my favorite books, David MacKay's \"Information Theory, Inference, and Learning Algorithms\":\n\nElvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?\"\n\nTo answer this one, you need some background information: According to the Wikipedia article on twins: ``Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total---and 8% of all twins.''", "twins = dict()\n\n# first calculate the total percentage of male-male twins. We can do this by adding the percentage of male-male\n# monozygotic and the percentage of male-male dizygotic\ntwins['male-male'] = (.08*.50 + .92*.25)\ntwins['male-male|monozygotic'] = (.50)\ntwins['monozygotic'] = (.08)\n\nprint(twins['male-male'])\nprint(twins['male-male|monozygotic'])\nprint(twins['monozygotic'])\n\n# now using bayes theorem\ntemp = twins['male-male|monozygotic'] * twins['monozygotic'] / twins['male-male']\nprint(\"P(monozygotic|male-male): {0:.3f}\".format(temp))", "Exercise: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability p and C with probability 1-p. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of p? What if Monty opens C?\nHint: you might want to use SymPy to do the algebra for you.", "from sympy import symbols\np = symbols('p')\n\npmf = Pmf('ABC')\npmf['A'] *= p\npmf['B'] *= 0\npmf['C'] *= 1\n\npmf.Normalize()\npmf.Print()\n\np\n\npmf['A'].simplify()\n\npmf['A'].subs(p, 0.5)", "Exercise: According to the CDC, ``Compared to nonsmokers, men who smoke are about 23 times more likely to develop lung cancer and women who smoke are about 13 times more likely.'' Also, among adults in the U.S. 
in 2014:\n\nNearly 19 of every 100 adult men (18.8%) smoked\nNearly 15 of every 100 adult women (14.8%) smoked\n\nIf you learn that a woman has been diagnosed with lung cancer, and you know nothing else about her, what is the probability that she is a smoker?", "# Solution goes here", "Exercise: In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.\nBut in the more likely scenario where we eat the cookies we draw, the likelihood of each draw depends on the previous draws.\nModify the solution in this chapter to handle selection without replacement. Hint: add instance variables to Cookie to represent the hypothetical state of the bowls, and modify Likelihood accordingly. You might want to define a Bowl object.", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
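Every update in the Think Bayes cells above is the same two-step recipe: multiply each prior probability by the likelihood of the data under that hypothesis, then renormalize. As a rough, dependency-free sketch of what `Pmf.Mult` followed by `Normalize` is doing (plain dicts standing in for the `thinkbayes2.Pmf` class), the cookie update can be written as:

```python
# Cookie problem with plain dicts: the posterior is proportional to
# prior * likelihood, then renormalized so the values sum to 1.
prior = {'Bowl 1': 0.5, 'Bowl 2': 0.5}
likelihood_vanilla = {'Bowl 1': 0.75, 'Bowl 2': 0.5}   # P(vanilla | bowl)

unnormalized = {h: prior[h] * likelihood_vanilla[h] for h in prior}
total = sum(unnormalized.values())            # what Normalize() returns: 0.625
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)   # {'Bowl 1': 0.6, 'Bowl 2': 0.4}, matching pmf.Print() above
```

The same multiply-then-renormalize step is what `Suite.Update` automates for the Monty Hall and M&M classes.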
arcyfelix/Courses
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
apache-2.0
[ "Stacked Autoencoder", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport tensorflow as tf\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"./data/MNIST_data/\", \n one_hot = True)\n\ntf.reset_default_graph() ", "Parameters", "num_inputs = 784 # 28*28\nneurons_hid1 = 392\nneurons_hid2 = 196\nneurons_hid3 = neurons_hid1 # Decoder Begins\nnum_outputs = num_inputs\n\nlearning_rate = 0.01", "Activation function", "actf = tf.nn.relu", "Placeholder", "X = tf.placeholder(tf.float32, shape = [None, num_inputs])", "Weights\nInitializer capable of adapting its scale to the shape of weights tensors.\nWith distribution=\"normal\", samples are drawn from a truncated normal\ndistribution centered on zero, with stddev = sqrt(scale / n)\nwhere n is:\n - number of input units in the weight tensor, if mode = \"fan_in\"\n - number of output units, if mode = \"fan_out\"\n - average of the numbers of input and output units, if mode = \"fan_avg\"\nWith distribution=\"uniform\", samples are drawn from a uniform distribution\nwithin [-limit, limit], with limit = sqrt(3 * scale / n).", "initializer = tf.variance_scaling_initializer()\n\nw1 = tf.Variable(initializer([num_inputs, neurons_hid1]), \n dtype = tf.float32)\nw2 = tf.Variable(initializer([neurons_hid1, neurons_hid2]), \n dtype = tf.float32)\nw3 = tf.Variable(initializer([neurons_hid2, neurons_hid3]), \n dtype = tf.float32)\nw4 = tf.Variable(initializer([neurons_hid3, num_outputs]), \n dtype = tf.float32)", "Biases", "b1 = tf.Variable(tf.zeros(neurons_hid1))\nb2 = tf.Variable(tf.zeros(neurons_hid2))\nb3 = tf.Variable(tf.zeros(neurons_hid3))\nb4 = tf.Variable(tf.zeros(num_outputs))", "Activation Function and Layers", "act_func = tf.nn.relu\n\nhid_layer1 = act_func(tf.matmul(X, w1) + b1)\nhid_layer2 = act_func(tf.matmul(hid_layer1, w2) + b2)\nhid_layer3 = act_func(tf.matmul(hid_layer2, w3) + b3)\noutput_layer = tf.matmul(hid_layer3, w4) + b4", "Loss Function", "loss = tf.reduce_mean(tf.square(output_layer - X))", "Optimizer", "#tf.train.RMSPropOptimizer\noptimizer = tf.train.AdamOptimizer(learning_rate)\n\ntrain = optimizer.minimize(loss)", "Intialize Variables", "init = tf.global_variables_initializer()\n\nsaver = tf.train.Saver() \n\nnum_epochs = 5\nbatch_size = 150\n\nwith tf.Session() as sess:\n sess.run(init)\n \n # Epoch == Entire Training Set\n for epoch in range(num_epochs):\n num_batches = mnist.train.num_examples // batch_size\n \n # 150 batch size\n for iteration in range(num_batches):\n X_batch, y_batch = mnist.train.next_batch(batch_size)\n sess.run(train, \n feed_dict = {X: X_batch})\n \n training_loss = loss.eval(feed_dict={X: X_batch}) \n \n print(\"Epoch {} Complete. Training Loss: {}\".format(epoch, training_loss))\n \n saver.save(sess, \"./checkpoint/stacked_autoencoder.ckpt\") ", "Test Autoencoder output on Test Data", "num_test_images = 10\n\nwith tf.Session() as sess:\n saver.restore(sess,\"./checkpoint/stacked_autoencoder.ckpt\")\n results = output_layer.eval(feed_dict = {X : mnist.test.images[:num_test_images]})\n\n# Compare original images with their reconstructions\nf, a = plt.subplots(2, 10, \n figsize = (20, 4))\nfor i in range(num_test_images):\n a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))\n a[1][i].imshow(np.reshape(results[i], (28, 28)))", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
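The stacked-autoencoder cells above use the TensorFlow 1.x graph API (`tf.placeholder`, `tf.variance_scaling_initializer`, `tf.Session`), so they only run on an older TensorFlow install. As a framework-free sanity check of the same 784-392-196-392-784 layout, the NumPy sketch below builds the weight shapes with a variance-scaling-style initializer (stddev = sqrt(scale / fan_in), assuming the default scale of 1.0) and runs one ReLU forward pass; it is an illustration of the architecture, not a drop-in replacement for the training code.

```python
import numpy as np

layer_sizes = [784, 392, 196, 392, 784]        # encoder then mirrored decoder
rng = np.random.default_rng(0)

# Variance-scaling-style init: stddev = sqrt(1 / fan_in).
weights = [rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = x @ w + b
        if i < len(weights) - 1:               # hidden layers are ReLU, output is linear
            x = np.maximum(x, 0.0)
    return x

batch = rng.random((150, 784))                 # stand-in for an MNIST mini-batch
recon = forward(batch)
print(recon.shape)                             # (150, 784)
print(float(np.mean((recon - batch) ** 2)))    # untrained reconstruction MSE
```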
jdhp-docs/python_notebooks
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
mit
[ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfrom ailib.ml.datasets.regression_toy_problems import gen_1d_polynomial_samples", "Basic 1D non-linear regression with Keras\nTODO: see https://stackoverflow.com/questions/44998910/keras-model-to-fit-polynomial\nInstall Keras\nhttps://keras.io/#installation\nInstall dependencies\nInstall TensorFlow backend: https://www.tensorflow.org/install/\npip install tensorflow\nInstall h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels\npip install h5py\nInstall pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation\npip install pydot\nInstall Keras\npip install keras\nImport packages and check versions", "import tensorflow as tf\ntf.__version__\n\nimport keras\nkeras.__version__\n\nimport h5py\nh5py.__version__\n\nimport pydot\npydot.__version__", "Make the dataset", "df_train = gen_1d_polynomial_samples(n_samples=100, noise_std=0.05)\n\nx_train = df_train.x.values\ny_train = df_train.y.values\n\nplt.plot(x_train, y_train, \".k\");\n\ndf_test = gen_1d_polynomial_samples(n_samples=100, noise_std=None)\n\nx_test = df_test.x.values\ny_test = df_test.y.values\n\nplt.plot(x_test, y_test, \".k\");", "Make the regressor", "model = keras.models.Sequential()\n\n#model.add(keras.layers.Dense(units=1000, activation='relu', input_dim=1))\n#model.add(keras.layers.Dense(units=1))\n#model.add(keras.layers.Dense(units=1000, activation='relu'))\n#model.add(keras.layers.Dense(units=1))\n\nmodel.add(keras.layers.Dense(units=5, activation='relu', input_dim=1))\nmodel.add(keras.layers.Dense(units=1))\nmodel.add(keras.layers.Dense(units=5, activation='relu'))\nmodel.add(keras.layers.Dense(units=1))\nmodel.add(keras.layers.Dense(units=5, activation='relu'))\nmodel.add(keras.layers.Dense(units=1))\n\nmodel.compile(loss='mse',\n              optimizer='adam')\n\nmodel.summary()\n\nhist = model.fit(x_train, y_train, batch_size=100, epochs=3000, verbose=None)\n\nplt.plot(hist.history['loss']);\n\nmodel.evaluate(x_test, y_test)\n\ny_predicted = model.predict(x_test)\n\nplt.plot(x_test, y_test, \".r\")\nplt.plot(x_test, y_predicted, \".k\");" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
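The regression notebook above depends on `ailib`, a project-specific package; from the call sites, `gen_1d_polynomial_samples(n_samples, noise_std)` appears to return a DataFrame with `x` and `y` columns. If that package is unavailable, a stand-in generator along these lines (the cubic and its coefficients are invented for illustration) lets the Keras cells run unchanged:

```python
import numpy as np
import pandas as pd

def gen_1d_polynomial_samples_stub(n_samples=100, noise_std=0.05, seed=0):
    """Toy replacement: noisy samples of an arbitrary cubic on [0, 1]."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.0, 1.0, n_samples))
    y = 4.0 * (x - 0.2) * (x - 0.5) * (x - 0.9)
    if noise_std is not None:                  # noise_std=None gives the clean curve
        y = y + rng.normal(0.0, noise_std, n_samples)
    return pd.DataFrame({'x': x, 'y': y})

print(gen_1d_polynomial_samples_stub(n_samples=5))
```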
tolaoniyangi/dmc
notebooks/week-2/04 - Lab 2 Assignment.ipynb
apache-2.0
[ "Lab 2 assignment\nThis assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.\nWe will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.", "import random", "First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.", "gameStake = 50 \ncards = range(10)", "Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.", "class Player:\n \n # create here two local variables to store a unique ID for each player and the player's current 'pot' of money\n # [FILL IN YOUR VARIABLES HERE]\n \n # in the __init__() function, use the two input variables to initialize the ID and starting pot of each player\n \n def __init__(self, inputID, startingPot):\n # [CREATE YOUR INITIALIZATIONS HERE]\n \n # create a function for playing the game. This function starts by taking an input for the dealer's card\n # and picking a random number from the 'cards' list for the player's card\n\n def play(self, dealerCard):\n # we use the random.choice() function to select a random item from a list\n playerCard = random.choice(cards)\n \n # here we should have a conditional that tests the player's card value against the dealer card\n # and returns a statement saying whether the player won or lost the hand\n # before returning the statement, make sure to either add or subtract the stake from the player's pot so that\n # the 'pot' variable tracks the player's money\n \n if playerCard < dealerCard:\n # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]\n else:\n # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]\n \n # create an accessor function to return the current value of the player's pot\n def returnPot(self):\n # [FILL IN THE RETURN STATEMENT]\n \n # create an accessor function to return the player's ID\n def returnID(self):\n # [FILL IN THE RETURN STATEMENT]", "Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.", "def playHand(players):\n \n for player in players:\n dealerCard = random.choice(cards)\n #[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]", "Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.", "def checkBalances(players):\n \n for player in players:\n #[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]", "Now we are ready to start the game. 
First we create an empy list to store the collection of players in the game.", "players = [] ", "Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.", "for i in range(5):\n players.append(Player(i, 500))", "Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.", "for i in range(10):\n print ''\n print 'start game ' + str(i)\n playHand(players)", "Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.", "print ''\nprint 'game results:'\ncheckBalances(players)", "Below is a version of the expected printout if you've done everything correctly (note that since the cards are chosen randomly the actual results will differ, but the structure should be the same). Once you finish the assignment please submit a pull request to the main dmc-2016 repo before the deadline.\n```\nstart game 0\nplayer 0 Lose, 4 vs 7\nplayer 1 Win, 2 vs 0\nplayer 2 Lose, 0 vs 4\nplayer 3 Win, 7 vs 2\nplayer 4 Win, 5 vs 0\nstart game 1\nplayer 0 Win, 1 vs 0\nplayer 1 Lose, 1 vs 5\nplayer 2 Lose, 6 vs 9\nplayer 3 Lose, 1 vs 8\nplayer 4 Lose, 0 vs 9\nstart game 2\nplayer 0 Win, 3 vs 3\nplayer 1 Lose, 0 vs 2\nplayer 2 Win, 9 vs 6\nplayer 3 Win, 8 vs 7\nplayer 4 Win, 8 vs 6\nstart game 3\nplayer 0 Win, 9 vs 7\nplayer 1 Lose, 7 vs 8\nplayer 2 Lose, 2 vs 3\nplayer 3 Lose, 0 vs 8\nplayer 4 Lose, 0 vs 6\nstart game 4\nplayer 0 Win, 7 vs 4\nplayer 1 Win, 3 vs 0\nplayer 2 Win, 8 vs 5\nplayer 3 Win, 2 vs 1\nplayer 4 Lose, 4 vs 7\nstart game 5\nplayer 0 Lose, 2 vs 8\nplayer 1 Lose, 4 vs 6\nplayer 2 Win, 2 vs 0\nplayer 3 Lose, 4 vs 5\nplayer 4 Lose, 3 vs 8\nstart game 6\nplayer 0 Lose, 3 vs 6\nplayer 1 Win, 8 vs 0\nplayer 2 Win, 5 vs 5\nplayer 3 Lose, 2 vs 6\nplayer 4 Win, 8 vs 7\nstart game 7\nplayer 0 Lose, 0 vs 9\nplayer 1 Lose, 6 vs 8\nplayer 2 Lose, 1 vs 9\nplayer 3 Lose, 4 vs 8\nplayer 4 Win, 9 vs 8\nstart game 8\nplayer 0 Lose, 1 vs 8\nplayer 1 Lose, 3 vs 9\nplayer 2 Win, 5 vs 4\nplayer 3 Win, 6 vs 2\nplayer 4 Win, 3 vs 0\nstart game 9\nplayer 0 Lose, 5 vs 6\nplayer 1 Win, 6 vs 1\nplayer 2 Lose, 8 vs 9\nplayer 3 Lose, 3 vs 9\nplayer 4 Win, 7 vs 5\ngame results:\nplayer 0 has $400 left.\nplayer 1 has $400 left.\nplayer 2 has $500 left.\nplayer 3 has $400 left.\nplayer 4 has $600 left.\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
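For readers checking their work on the lab above, here is one self-contained way the `Player` pattern can fit together: instance variables set in `__init__`, a `play` method that draws with `random.choice` and adjusts the pot by the stake, and two accessors. It is illustrative only, not necessarily the intended solution, and it uses Python 3 `print()` even though the notebook's own cells are written in Python 2 style.

```python
import random

gameStake = 50
cards = range(10)

class Player(object):
    def __init__(self, inputID, startingPot):
        self.ID = inputID          # unique identifier for this player
        self.pot = startingPot     # current amount of money

    def play(self, dealerCard):
        playerCard = random.choice(cards)
        if playerCard < dealerCard:
            self.pot -= gameStake
            return 'player {} Lose, {} vs {}'.format(self.ID, playerCard, dealerCard)
        else:
            self.pot += gameStake
            return 'player {} Win, {} vs {}'.format(self.ID, playerCard, dealerCard)

    def returnPot(self):
        return self.pot

    def returnID(self):
        return self.ID

p = Player(0, 500)
print(p.play(random.choice(cards)))
print(p.returnID(), p.returnPot())
```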
ceos-seo/data_cube_notebooks
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
apache-2.0
[ "<a id=\"alos_land_change_top\"></a>\nALOS Land Change\n<hr>\n\nNotebook Summary\nThis notebook tries to detect land change with ALOS-2.\n<hr>\n\nIndex\n\nImport Dependencies and Connect to the Data Cube\nChoose Platform and Product\nGet the Extents of the Cube\nDefine the Analysis Parameters\nLoad and Clean Data from the Data Cube\nView RGBs for the Baseline and Analysis Periods\nPlot HH or HV Band for the Baseline and Analysis Periods\nPlot a Custom RGB That Uses Bands from the Baseline and Analysis Periods\nPlot a Change Product to Compare Two Time Periods (Epochs)\n\n<span id=\"alos_land_change_import_connect\">Import Dependencies and Connect to the Data Cube &#9652;</span>", "import sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\nimport datacube\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nimport matplotlib.pyplot as plt\n\nfrom datacube.utils.aws import configure_s3_access\nconfigure_s3_access(requester_pays=True)\n\ndc = datacube.Datacube()", "<span id=\"alos_land_change_plat_prod\">Choose Platform and Product &#9652;</span>", "# Select one of the ALOS data cubes from around the world\n# Colombia, Vietnam, Samoa Islands\n\n## ALOS Data Summary\n# There are 7 time slices (epochs) for the ALOS mosaic data. \n# The dates of the mosaics are centered on June 15 of each year (time stamp)\n# Bands: RGB (HH-HV-HH/HV), HH, HV, date, incidence angle, mask)\n# Years: 2007, 2008, 2009, 2010, 2015, 2016, 2017\n\nplatform = \"ALOS/ALOS-2\"\nproduct = \"alos_palsar_mosaic\"", "<a id=\"alos_land_change_extents\"></a> Get the Extents of the Cube &#9652;", "from utils.data_cube_utilities.dc_time import dt_to_str\n\nmetadata = dc.load(platform=platform, product=product, measurements=[])\n\nfull_lat = metadata.latitude.values[[-1,0]]\nfull_lon = metadata.longitude.values[[0,-1]]\nmin_max_dates = list(map(dt_to_str, map(pd.to_datetime, metadata.time.values[[0,-1]])))\n\n# Print the extents of the combined data.\nprint(\"Latitude Extents:\", full_lat)\nprint(\"Longitude Extents:\", full_lon)\nprint(\"Time Extents:\", min_max_dates)", "<a id=\"alos_land_change_parameters\"></a> Define the Analysis Parameters &#9652;", "from datetime import datetime\n\n## Somoa ##\n\n# Apia City\n# lat = (-13.7897, -13.8864)\n# lon = (-171.8531, -171.7171)\n# time_extents = (\"2014-01-01\", \"2014-12-31\")\n\n# East Area\n# lat = (-13.94, -13.84)\n# lon = (-171.96, -171.8)\n# time_extents = (\"2014-01-01\", \"2014-12-31\")\n\n# Central Area\n# lat = (-14.057, -13.884)\n# lon = (-171.774, -171.573)\n# time_extents = (\"2014-01-01\", \"2014-12-31\")\n\n# Small focused area in Central Region\n# lat = (-13.9443, -13.884)\n# lon = (-171.6431, -171.573)\n# time_extents = (\"2014-01-01\", \"2014-12-31\")\n\n## Kenya ##\n\n# Mombasa\nlat = (-4.1095, -3.9951)\nlon = (39.5178, 39.7341)\ntime_extents = (\"2007-01-01\", \"2017-12-31\")", "Visualize the selected area", "from utils.data_cube_utilities.dc_display_map import display_map\n\ndisplay_map(lat, lon) ", "<a id=\"alos_land_change_load\"></a> Load and Clean Data from the Data Cube &#9652;", "dataset = dc.load(product = product, platform = platform, \n latitude = lat, longitude = lon, \n time=time_extents)", "View an acquisition in dataset", "# Select a baseline and analysis time slice for comparison\n# Make the adjustments to the years according to the following scheme\n# Time Slice: 0=2007, 1=2008, 2=2009, 3=2010, 4=2015, 5=2016, 6=2017)\n\nbaseline_slice = dataset.isel(time = 0)\nanalysis_slice = dataset.isel(time = -1)", "<a 
id=\"alos_land_change_rgbs\"></a> View RGBs for the Baseline and Analysis Periods &#9652;", "%matplotlib inline\nfrom utils.data_cube_utilities.dc_rgb import rgb\n\n# Baseline RGB\n\nrgb_dataset2 = xr.Dataset()\nmin_ = np.min([\n np.percentile(baseline_slice.hh,5),\n np.percentile(baseline_slice.hv,5),\n])\nmax_ = np.max([\n np.percentile(baseline_slice.hh,95),\n np.percentile(baseline_slice.hv,95),\n])\nrgb_dataset2['base.hh'] = baseline_slice.hh.clip(min_,max_)/40\nrgb_dataset2['base.hv'] = baseline_slice.hv.clip(min_,max_)/20\nrgb_dataset2['base.ratio'] = (baseline_slice.hh.clip(min_,max_)/baseline_slice.hv.clip(min_,max_))*75\nrgb(rgb_dataset2, bands=['base.hh','base.hv','base.ratio'], width=8)\n\n# Analysis RGB\n\nrgb_dataset2 = xr.Dataset()\nmin_ = np.min([\n np.percentile(analysis_slice.hh,5),\n np.percentile(analysis_slice.hv,5),\n])\nmax_ = np.max([\n np.percentile(analysis_slice.hh,95),\n np.percentile(analysis_slice.hv,95),\n])\nrgb_dataset2['base.hh'] = analysis_slice.hh.clip(min_,max_)/40\nrgb_dataset2['base.hv'] = analysis_slice.hv.clip(min_,max_)/20\nrgb_dataset2['base.ratio'] = (analysis_slice.hh.clip(min_,max_)/analysis_slice.hv.clip(min_,max_))*75\nrgb(rgb_dataset2, bands=['base.hh','base.hv','base.ratio'], width=8)", "<a id=\"alos_land_change_hh_hv\"></a> Plot HH or HV Band for the Baseline and Analysis Periods &#9652;\nNOTE: The HV band is best for deforestation detection\nTypical radar analyses convert the backscatter values at the pixel level to dB scale.<br>\nThe ALOS coversion (from JAXA) is: Backscatter dB = 20 * log10( backscatter intensity) - 83.0", "# Plot the BASELINE and ANALYSIS slice side-by-side\n# Change the band (HH or HV) in the code below\n\nplt.figure(figsize = (15,6))\n\nplt.subplot(1,2,1)\n(20*np.log10(baseline_slice.hv)-83).plot(vmax=0, vmin=-30, cmap = \"Greys_r\")\nplt.subplot(1,2,2)\n(20*np.log10(analysis_slice.hv)-83).plot(vmax=0, vmin=-30, cmap = \"Greys_r\")", "<a id=\"alos_land_change_custom_rgb\"></a> Plot a Custom RGB That Uses Bands from the Baseline and Analysis Periods &#9652;\nThe RGB image below assigns RED to the baseline year HV band and GREEN+BLUE to the analysis year HV band<br>\nVegetation loss appears in RED and regrowth in CYAN. 
Areas of no change appear in different shades of GRAY.<br>\nUsers can change the RGB color assignments and bands (HH, HV) in the code below", "# Clipping the bands uniformly to brighten the image\nrgb_dataset2 = xr.Dataset()\nmin_ = np.min([\n np.percentile(baseline_slice.hv,5),\n np.percentile(analysis_slice.hv,5),\n])\nmax_ = np.max([\n np.percentile(baseline_slice.hv,95),\n np.percentile(analysis_slice.hv,95),\n])\nrgb_dataset2['baseline_slice.hv'] = baseline_slice.hv.clip(min_,max_)\nrgb_dataset2['analysis_slice.hv'] = analysis_slice.hv.clip(min_,max_)\n\n# Plot the RGB with clipped HV band values\nrgb(rgb_dataset2, bands=['baseline_slice.hv','analysis_slice.hv','analysis_slice.hv'], width=8)", "Select one of the plots below and adjust the threshold limits (top and bottom)", "plt.figure(figsize = (15,6))\nplt.subplot(1,2,1)\nbaseline_slice.hv.plot (vmax=0, vmin=4000, cmap=\"Greys\")\nplt.subplot(1,2,2)\nanalysis_slice.hv.plot (vmax=0, vmin=4000, cmap=\"Greys\")", "<a id=\"alos_land_change_change_product\"></a> Plot a Change Product to Compare Two Time Periods (Epochs) &#9652;", "from matplotlib.ticker import FuncFormatter\n\ndef intersection_threshold_plot(first, second, th, mask = None, color_none=np.array([0,0,0]), \n color_first=np.array([0,255,0]), color_second=np.array([255,0,0]), \n color_both=np.array([255,255,255]), color_mask=np.array([127,127,127]), \n width = 10, *args, **kwargs):\n \"\"\"\n Given two dataarrays, create a threshold plot showing where zero, one, or both are within a threshold.\n \n Parameters\n ----------\n first, second: xarray.DataArray\n The DataArrays to compare.\n th: tuple\n A 2-tuple of the minimum (inclusive) and maximum (exclusive) threshold values, respectively.\n mask: numpy.ndarray\n A NumPy array of the same shape as the dataarrays. The pixels for which it is `True` are colored `color_mask`.\n color_none: list-like\n A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where neither\n first nor second have values within the threshold. Default color is black.\n color_first: list-like\n A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where only the first \n has values within the threshold. Default color is green.\n color_second: list-like\n A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where only the second\n has values within the threshold. Default color is red.\n color_both: list-like\n A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where both the\n first and second have values within the threshold. Default color is white.\n color_mask: list-like\n A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where `mask == True`.\n Overrides any other color a region may have. 
Default color is gray.\n width: int\n The width of the created ``matplotlib.figure.Figure``.\n *args: list\n Arguments passed to ``matplotlib.pyplot.imshow()``.\n **kwargs: dict\n Keyword arguments passed to ``matplotlib.pyplot.imshow()``.\n \"\"\"\n mask = np.zeros(first.shape).astype(bool) if mask is None else mask\n \n first_in = np.logical_and(th[0] <= first, first < th[1])\n second_in = np.logical_and(th[0] <= second, second < th[1])\n both_in = np.logical_and(first_in, second_in)\n none_in = np.invert(both_in)\n \n # The colors for each pixel.\n color_array = np.zeros((*first.shape, 3)).astype(np.int16)\n \n color_array[none_in] = color_none\n color_array[first_in] = color_first\n color_array[second_in] = color_second\n color_array[both_in] = color_both\n color_array[mask] = color_mask\n\n def figure_ratio(ds, fixed_width = 10):\n width = fixed_width\n height = len(ds.latitude) * (fixed_width / len(ds.longitude))\n return (width, height)\n\n fig, ax = plt.subplots(figsize = figure_ratio(first,fixed_width = width))\n \n lat_formatter = FuncFormatter(lambda y_val, tick_pos: \"{0:.3f}\".format(first.latitude.values[tick_pos] ))\n lon_formatter = FuncFormatter(lambda x_val, tick_pos: \"{0:.3f}\".format(first.longitude.values[tick_pos]))\n\n ax.xaxis.set_major_formatter(lon_formatter)\n ax.yaxis.set_major_formatter(lat_formatter)\n \n plt.title(\"Threshold: {} < x < {}\".format(th[0], th[1]))\n plt.xlabel('Longitude')\n plt.ylabel('Latitude')\n \n plt.imshow(color_array, *args, **kwargs)\n plt.show()\n\nchange_product_band = 'hv'\nbaseline_epoch = \"2007-07-02\"\nanalysis_epoch = \"2017-07-02\"\nthreshold_range = (0, 2000) # The minimum and maximum threshold values, respectively.\n\nbaseline_ds = dataset.sel(time=baseline_epoch)[change_product_band].isel(time=0)\nanalysis_ds = dataset.sel(time=analysis_epoch)[change_product_band].isel(time=0)\n\nanomaly = analysis_ds - baseline_ds\n\nintersection_threshold_plot(baseline_ds, analysis_ds, threshold_range)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
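Two pieces of the ALOS workflow above are easy to exercise without a Data Cube connection: the quoted JAXA conversion (backscatter dB = 20 * log10(DN) - 83.0) and the epoch-versus-epoch threshold test that `intersection_threshold_plot` colors. The sketch below applies both to small synthetic HV arrays; the array values and the (0, 2000) threshold range are placeholders rather than real mosaic data.

```python
import numpy as np

def alos_to_db(dn):
    """JAXA ALOS mosaic conversion quoted above: dB = 20 * log10(DN) - 83.0."""
    return 20.0 * np.log10(dn) - 83.0

rng = np.random.default_rng(42)
baseline = rng.uniform(500, 6000, size=(4, 4))    # fake HV digital numbers
analysis = rng.uniform(500, 6000, size=(4, 4))

th_min, th_max = 0, 2000                          # minimum inclusive, maximum exclusive
base_in = (th_min <= baseline) & (baseline < th_max)
anal_in = (th_min <= analysis) & (analysis < th_max)

print(alos_to_db(baseline).round(1))              # dB image for the baseline epoch
print('baseline epoch only:', int(np.sum(base_in & ~anal_in)))
print('analysis epoch only:', int(np.sum(anal_in & ~base_in)))
print('both epochs:        ', int(np.sum(base_in & anal_in)))
```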
adico-somoto/deep-learning
sentiment-rnn/Sentiment_RNN_Solution.ipynb
mit
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combined all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. 
Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "from collections import Counter\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()])", "Encoding the labels\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "labels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])\n\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\nlen(non_zero_idx)\n\nreviews_ints[-1]", "Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general.", "reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nlabels = np.array([labels[ii] for ii in non_zero_idx])", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = np.zeros((len(reviews_ints), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]\n\nfeatures[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. 
The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab_to_int)\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')\n labels_ = tf.placeholder(tf.int32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. 
So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. 
tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed,\n initial_state=initial_state)", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
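The densest preprocessing step above is the left-padding/truncation into the fixed-length `features` array. Running the same two lines on a tiny made-up batch (with a short `seq_len` so the result is readable) shows why `features[i, -len(row):] = np.array(row)[:seq_len]` covers both cases: a negative slice start clamps to the row width for long reviews, which `[:seq_len]` truncates, while short reviews land flush against the right edge with zeros on the left.

```python
import numpy as np

reviews_ints_demo = [[117, 18, 128],            # short review: left-padded with zeros
                     [5, 9],
                     list(range(1, 251))]       # long review: truncated to seq_len

seq_len = 10    # 200 in the notebook; kept small here so the output is readable
features_demo = np.zeros((len(reviews_ints_demo), seq_len), dtype=int)
for i, row in enumerate(reviews_ints_demo):
    features_demo[i, -len(row):] = np.array(row)[:seq_len]

print(features_demo)
```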
AllenDowney/ModSimPy
notebooks/chap18.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 18\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "Code from the previous chapter\nRead the data.", "data = pd.read_csv('data/glucose_insulin.csv', index_col='time');", "Interpolate the insulin data.", "I = interpolate(data.insulin)", "The glucose minimal model\nI'll cheat by starting with parameters that fit the data roughly; then we'll see how to improve them.", "params = Params(G0 = 290,\n k1 = 0.03,\n k2 = 0.02,\n k3 = 1e-05)", "Here's a version of make_system that takes the parameters and data:", "def make_system(params, data):\n \"\"\"Makes a System object with the given parameters.\n \n params: sequence of G0, k1, k2, k3\n data: DataFrame with `glucose` and `insulin`\n \n returns: System object\n \"\"\"\n G0, k1, k2, k3 = params\n \n Gb = data.glucose[0]\n Ib = data.insulin[0]\n I = interpolate(data.insulin)\n \n t_0 = get_first_label(data)\n t_end = get_last_label(data)\n\n init = State(G=G0, X=0)\n \n return System(params,\n init=init, Gb=Gb, Ib=Ib, I=I,\n t_0=t_0, t_end=t_end, dt=2)\n\nsystem = make_system(params, data)", "And here's the update function.", "def update_func(state, t, system):\n \"\"\"Updates the glucose minimal model.\n \n state: State object\n t: time in min\n system: System object\n \n returns: State object\n \"\"\"\n G, X = state\n k1, k2, k3 = system.k1, system.k2, system.k3 \n I, Ib, Gb = system.I, system.Ib, system.Gb\n dt = system.dt\n \n dGdt = -k1 * (G - Gb) - X*G\n dXdt = k3 * (I(t) - Ib) - k2 * X\n \n G += dGdt * dt\n X += dXdt * dt\n\n return State(G=G, X=X)", "Before running the simulation, it is always a good idea to test the update function using the initial conditions. 
In this case we can veryify that the results are at least qualitatively correct.", "update_func(system.init, system.t_0, system)", "Now run_simulation is pretty much the same as it always is.", "def run_simulation(system, update_func):\n \"\"\"Runs a simulation of the system.\n \n system: System object\n update_func: function that updates state\n \n returns: TimeFrame\n \"\"\"\n init = system.init\n t_0, t_end, dt = system.t_0, system.t_end, system.dt\n \n frame = TimeFrame(columns=init.index)\n frame.row[t_0] = init\n ts = linrange(t_0, t_end, dt)\n \n for t in ts:\n frame.row[t+dt] = update_func(frame.row[t], t, system)\n \n return frame", "And here's how we run it.", "results = run_simulation(system, update_func);", "The results are in a TimeFrame object with one column per state variable.", "results", "The following plot shows the results of the simulation along with the actual glucose data.", "subplot(2, 1, 1)\n\nplot(results.G, 'b-', label='simulation')\nplot(data.glucose, 'bo', label='glucose data')\ndecorate(ylabel='Concentration (mg/dL)')\n\nsubplot(2, 1, 2)\n\nplot(results.X, 'C1', label='remote insulin')\n\ndecorate(xlabel='Time (min)', \n ylabel='Concentration (arbitrary units)')\n\nsavefig('figs/chap18-fig01.pdf')", "Numerical solution\nNow let's solve the differential equation numerically using run_ode_solver, which is an implementation of Ralston's method.\nInstead of an update function, we provide a slope function that evaluates the right-hand side of the differential equations.\nWe don't have to do the update part; the solver does it for us.", "def slope_func(state, t, system):\n \"\"\"Computes derivatives of the glucose minimal model.\n \n state: State object\n t: time in min\n system: System object\n \n returns: derivatives of G and X\n \"\"\"\n G, X = state\n k1, k2, k3 = system.k1, system.k2, system.k3 \n I, Ib, Gb = system.I, system.Ib, system.Gb\n \n dGdt = -k1 * (G - Gb) - X*G\n dXdt = k3 * (I(t) - Ib) - k2 * X\n \n return dGdt, dXdt", "We can test the slope function with the initial conditions.", "slope_func(system.init, 0, system)", "Here's how we run the ODE solver.", "results2, details = run_ode_solver(system, slope_func)", "details is a ModSimSeries object with information about how the solver worked.", "details", "results is a TimeFrame with one row for each time step and one column for each state variable:", "results2", "Plotting the results from run_simulation and run_ode_solver, we can see that they are not very different.", "plot(results.G, 'C0', label='run_simulation')\nplot(results2.G, 'C2--', label='run_ode_solver')\n\ndecorate(xlabel='Time (min)', ylabel='Concentration (mg/dL)')\n\nsavefig('figs/chap18-fig02.pdf')", "The differences in G are less than 2%.", "diff = results.G - results2.G\npercent_diff = diff / results2.G * 100\npercent_diff\n\nmax(abs(percent_diff))", "Exercises\nExercise: Our solution to the differential equations is only approximate because we used a finite step size, dt=2 minutes.\nIf we make the step size smaller, we expect the solution to be more accurate. Run the simulation with dt=1 and compare the results. 
What is the largest relative error between the two solutions?", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Under the hood\nHere's the source code for run_ode_solver if you'd like to know how it works.\nNotice that run_ode_solver is another name for run_ralston, which implements Ralston's method.", "source_code(run_ode_solver)", "Related reading: You might be interested in this article about people making a DIY artificial pancreas." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
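The `update_func` above is a plain Euler step on the two minimal-model equations, dG/dt = -k1*(G - Gb) - X*G and dX/dt = k3*(I(t) - Ib) - k2*X. A self-contained version of that loop is sketched below; the basal values `Gb`, `Ib` and the insulin curve `insulin(t)` are invented stand-ins (the notebook takes them from glucose_insulin.csv and `interpolate`), so the printed numbers are only illustrative.

```python
import numpy as np

G0, k1, k2, k3 = 290, 0.03, 0.02, 1e-05      # parameter values used in the notebook
Gb, Ib = 92, 11                               # assumed basal glucose / insulin levels

def insulin(t):
    """Crude stand-in for interpolate(data.insulin): decays back to basal."""
    return Ib + 100 * np.exp(-t / 20.0)

dt, t_end = 2, 180
G, X = G0, 0.0
for t in np.arange(0, t_end, dt):
    dGdt = -k1 * (G - Gb) - X * G
    dXdt = k3 * (insulin(t) - Ib) - k2 * X
    G += dGdt * dt
    X += dXdt * dt

print(round(G, 1), round(X, 6))
```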
Neuroglycerin/neukrill-net-work
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
mit
[ "We're wasting a bunch of time waiting for our iterators to produce minibatches when we're running epochs. Seems like we should probably precompute them while the minibatch is being run on the GPU. To do this involves using the multiprocessing module. Since I've never used it before, here are my dev notes for writing this into the dataset iterators.", "import multiprocessing\nimport numpy as np\n\np = multiprocessing.Pool(4)\n\nx = range(3)\n\nf = lambda x: x*2\n\ndef f(x):\n return x**2\n\nprint(x)", "For some reason can't run these in the notebook. So have to run them with subprocess like so:", "%%python\nfrom multiprocessing import Pool\n\ndef f(x):\n return x*x\n\nif __name__ == '__main__':\n p = Pool(5)\n print(p.map(f, [1, 2, 3]))\n\n%%python\nfrom multiprocessing import Pool\nimport numpy as np\n\ndef f(x):\n return x*x\n\nif __name__ == '__main__':\n p = Pool(5)\n print(p.map(f, np.array([1, 2, 3])))", "Now doing this asynchronously:", "%%python\nfrom multiprocessing import Pool\nimport numpy as np\n\ndef f(x):\n return x**2\n\nif __name__ == '__main__':\n p = Pool(5)\n r = p.map_async(f, np.array([0,1,2]))\n print(dir(r))\n print(r.get(timeout=1))", "Now trying to create an iterable that will precompute it's output using multiprocessing.", "%%python\nfrom multiprocessing import Pool\nimport numpy as np\n\ndef f(x):\n return x**2\n\nclass It(object):\n def __init__(self,a):\n # store an array (2D)\n self.a = a\n # initialise pool\n self.p = Pool(4)\n # initialise index\n self.i = 0\n # initialise pre-computed first batch\n self.batch = self.p.map_async(f,self.a[self.i,:])\n \n def get(self):\n return self.batch.get(timeout=1)\n \n def f(self,x):\n return x**2\n\nif __name__ == '__main__':\n it = It(np.random.randn(4,4))\n print(it.get())\n\n%%python\nfrom multiprocessing import Pool\nimport numpy as np\n\ndef f(x):\n return x**2\n\nclass It(object):\n def __init__(self,a):\n # store an array (2D)\n self.a = a\n # initialise pool\n self.p = Pool(4)\n # initialise index\n self.i = 0\n # initialise pre-computed first batch\n self.batch = self.p.map_async(f,self.a[self.i,:])\n \n def __iter__(self):\n return self\n \n def next(self):\n # check if we've got something pre-computed to return\n if self.batch:\n # get the output\n output = self.batch.get(timeout=1)\n #output = self.batch\n # prepare next batch\n self.i += 1\n if self.i < self.a.shape[0]:\n self.p = Pool(4)\n self.batch = self.p.map_async(f,self.a[self.i,:])\n #self.batch = map(self.f,self.a[self.i,:])\n else:\n self.batch = False\n return output\n else:\n raise StopIteration\n\nif __name__ == '__main__':\n it = It(np.random.randn(4,4))\n for a in it:\n print a", "Then we have to try and do a similar thing, but using the randomaugment function. In the following two cells one uses multiprocessiung and one that doesn't. 
Testing them by pretending to ask for a minibatch and then sleep, applying the RandomAugment function each time.", "%%time\n%%python\nfrom multiprocessing import Pool\nimport numpy as np\nimport neukrill_net.augment\nimport time\n\nclass It(object):\n def __init__(self,a,f):\n # store an array (2D)\n self.a = a\n # store the function\n self.f = f\n # initialise pool\n self.p = Pool(4)\n # initialise indices\n self.inds = range(self.a.shape[0])\n # pop a batch from top\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n # initialise pre-computed first batch\n self.batch = map(self.f,self.a[self.batch_inds,:])\n \n def __iter__(self):\n return self\n \n def next(self):\n # check if we've got something pre-computed to return\n if self.inds != []:\n # get the output\n output = self.batch\n # prepare next batch\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n self.p = Pool(4)\n self.batch = map(self.f,self.a[self.batch_inds,:])\n return output\n else:\n raise StopIteration\n\nif __name__ == '__main__':\n f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])\n it = It(np.random.randn(10000,48,48),f)\n for a in it:\n time.sleep(0.01)\n pass\n\n%%time\n%%python\nfrom multiprocessing import Pool\nimport numpy as np\nimport neukrill_net.augment\nimport time\n\nclass It(object):\n def __init__(self,a,f):\n # store an array (2D)\n self.a = a\n # store the function\n self.f = f\n # initialise pool\n self.p = Pool(8)\n # initialise indices\n self.inds = range(self.a.shape[0])\n # pop a batch from top\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n # initialise pre-computed first batch\n self.batch = self.p.map_async(f,self.a[self.batch_inds,:])\n \n def __iter__(self):\n return self\n \n def next(self):\n # check if we've got something pre-computed to return\n if self.inds != []:\n # get the output\n output = self.batch.get(timeout=1)\n # prepare next batch\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n #self.p = Pool(4)\n self.batch = self.p.map_async(f,self.a[self.batch_inds,:])\n return output\n else:\n raise StopIteration\n\nif __name__ == '__main__':\n f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])\n it = It(np.random.randn(10000,48,48),f)\n for a in it:\n time.sleep(0.01)\n pass\n\n%%time\n%%python\nfrom multiprocessing import Pool\nimport numpy as np\nimport neukrill_net.augment\nimport time\n\nclass It(object):\n def __init__(self,a,f):\n # store an array (2D)\n self.a = a\n # store the function\n self.f = f\n # initialise pool\n self.p = Pool(8)\n # initialise indices\n self.inds = range(self.a.shape[0])\n # pop a batch from top\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n # initialise pre-computed first batch\n self.batch = self.p.map_async(f,self.a[self.batch_inds,:])\n \n def __iter__(self):\n return self\n \n def next(self):\n # check if we've got something pre-computed to return\n if self.inds != []:\n # get the output\n output = self.batch.get(timeout=1)\n # prepare next batch\n self.batch_inds = [self.inds.pop(0) for _ in range(100)]\n #self.p = Pool(4)\n self.batch = self.p.map_async(f,self.a[self.batch_inds,:])\n return output\n else:\n raise StopIteration\n\nif __name__ == '__main__':\n f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])\n it = It(np.random.randn(10000,48,48),f)\n for a in it:\n print np.array(a).shape\n print np.array(a).reshape(100,48,48,1).shape\n break", "It looks like, depending on the sleep time this should be about 5 times as fast." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dpshelio/2015-EuroScipy-pandas-tutorial
solved - 03b - Some more advanced indexing.ipynb
bsd-2-clause
[ "Advanced indexing", "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\ntry:\n import seaborn\nexcept ImportError:\n pass\n\npd.options.display.max_rows = 10", "This dataset is borrowed from the PyCon tutorial of Brandon Rhodes (so all credit to him!). You can download these data from here: titles.csv and cast.csv and put them in the /data folder.", "cast = pd.read_csv('data/cast.csv')\ncast.head()\n\ntitles = pd.read_csv('data/titles.csv')\ntitles.head()", "Setting columns as the index\nWhy is it useful to have an index?\n\nGiving meaningful labels to your data -> easier to remember which data are where\nUnleash some powerful methods, eg with a DatetimeIndex for time series\nEasier and faster selection of data\n\nIt is this last one we are going to explore here!\nSetting the title column as the index:", "c = cast.set_index('title')\n\nc.head()", "Instead of doing:", "%%time\ncast[cast['title'] == 'Hamlet']", "we can now do:", "%%time\nc.loc['Hamlet']", "But you can also have multiple columns as the index, leading to a multi-index or hierarchical index:", "c = cast.set_index(['title', 'year'])\n\nc.head()\n\n%%time\nc.loc[('Hamlet', 2000),:]\n\nc2 = c.sort_index()\n\n%%time\nc2.loc[('Hamlet', 2000),:]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
flohorovicic/pynoddy
docs/notebooks/Paper-Fig3-4-Read-Geophysics.ipynb
gpl-2.0
[ "Read and Visualise Geophysical Potential-Fields\nGeophysical potential fields (gravity and magnetics) can be calculated directly from the generated kinematic model. A wide range of options also exists to consider effects of geological events on the relevant rock properties. We will here use pynoddy to simply and quickly test the effect of changing geological structures on the calculated geophysical response.", "%matplotlib inline\n\nimport sys, os\nimport matplotlib.pyplot as plt\n# adjust some settings for matplotlib\nfrom matplotlib import rcParams\n# print rcParams\nrcParams['font.size'] = 15\n# determine path of repository to set paths corretly below\nrepo_path = os.path.realpath('../..')\nimport pynoddy\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom IPython.core.display import HTML\ncss_file = 'pynoddy.css'\nHTML(open(css_file, \"r\").read())", "Read history file from Virtual Explorer\nMany Noddy models are available on the site of the Virtual Explorer in the Structural Geophysics Atlas. We will download and use one of these models here as the base model.\nWe start with the history file of a \"Fold and Thrust Belt\" setting stored on:\nhttp://virtualexplorer.com.au/special/noddyatlas/ch3/ch3_5/his/nfold_thrust.his\nThe file can directly be downloaded and opened with pynoddy:", "import pynoddy.history\nreload(pynoddy.history)\n\n# read model directly from Atlas of Virtual Geophysics\n#his = pynoddy.history.NoddyHistory(url = http://virtualexplorer.com.au/special/noddyatlas/ch3/ch3_5/his/nfold_thrust.his\")\n# his = pynoddy.history.NoddyHistory(url = \"http://virtualexplorer.com.au/special/noddyatlas/ch3/ch3_6/his/ndome_basin.his\")\n\nhis = pynoddy.history.NoddyHistory(url = \"http://tectonique.net/asg/ch3/ch3_5/his/fold_thrust.his\")\n\nhis.determine_model_stratigraphy()\n\nhis.change_cube_size(50)\n\n# Save to (local) file to compute and visualise model\nhistory_name = \"fold_thrust.his\"\nhis.write_history(history_name)\n# his = pynoddy.history.NoddyHistory(history_name)\n\noutput = \"fold_thrust_out\"\npynoddy.compute_model(history_name, output)\n\nimport pynoddy.output\nreload(pynoddy.output)\n# load and visualise model\nh_out = pynoddy.output.NoddyOutput(output)\n\n# his.determine_model_stratigraphy()\nh_out.plot_section('x', \n layer_labels = his.model_stratigraphy, \n colorbar_orientation = 'horizontal', \n colorbar=False,\n title = '',\n# savefig=True, fig_filename = 'fold_thrust_NS_section.eps',\n cmap = 'YlOrRd')\n\n\nh_out.plot_section('y', layer_labels = his.model_stratigraphy, \n colorbar_orientation = 'horizontal', title = '', cmap = 'YlOrRd', \n# savefig=True, fig_filename = 'fold_thrust_EW_section.eps',\n ve=1.5)\n \n\nh_out.export_to_vtk(vtk_filename = \"fold_thrust\")", "Visualise calculated geophysical fields\nThe first step is to recompute the model with the generation of the geophysical responses", "pynoddy.compute_model(history_name, output, sim_type = 'GEOPHYSICS')", "We now get two files for the caluclated fields: '.grv' for gravity, and '.mag' for the magnetic field. 
We can extract the information of these files for visualisation and further processing in python:", "reload(pynoddy.output)\ngeophys = pynoddy.output.NoddyGeophysics(output)\n\nfig = plt.figure(figsize = (8,8))\nax = fig.add_subplot(111)\n# imshow(geophys.grv_data, cmap = 'jet')\n# define contour levels\nlevels = np.arange(322,344,1)\ncf = ax.contourf(geophys.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)\ncbar = plt.colorbar(cf, orientation = 'horizontal')\n# print levels", "Change history and compare gravity\nAs a next step, we will now change aspects of the geological history (paramtereised in as parameters of the kinematic events) and calculate the effect on the gravity. Then, we will compare the changed gravity field to the original field.\nLet's have a look at the properties of the defined faults in the original model:", "for i in range(4):\n print(\"\\nEvent %d\" % (i+2))\n print \"Event type:\\t\" + his.events[i+2].event_type\n print \"Fault slip:\\t%.1f\" % his.events[i+2].properties['Slip']\n print \"Fault dip:\\t%.1f\" % his.events[i+2].properties['Dip']\n print \"Dip direction:\\t%.1f\" % his.events[i+2].properties['Dip Direction']\n\nreload(pynoddy.history)\nreload(pynoddy.events)\nhis2 = pynoddy.history.NoddyHistory(\"fold_thrust.his\")\n\nprint his2.events[6].properties", "As a simple test, we are changing the fault slip for all the faults and simply add 1000 m to all defined slips. In order to not mess up the original model, we are creating a copy of the history object first:", "import copy\nhis = pynoddy.history.NoddyHistory(history_name)\nhis.all_events_end += 1\nhis_changed = copy.deepcopy(his)\n\n# change parameters of kinematic events\nslip_change = 2000.\nwavelength_change = 2000.\n# his_changed.events[3].properties['Slip'] += slip_change\n# his_changed.events[5].properties['Slip'] += slip_change\n# change fold wavelength\nhis_changed.events[6].properties['Wavelength'] += wavelength_change\nhis_changed.events[6].properties['X'] += wavelength_change/2.", "We now write the adjusted history back to a new history file and then calculate the updated gravity field:", "his_changed.write_history('fold_thrust_changed.his')\n\n# %%timeit\n# recompute block model\npynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out')\n\n# %%timeit\n# recompute geophysical response\npynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out', \n sim_type = 'GEOPHYSICS')\n\n# load changed block model\ngeo_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out')\n# load output and visualise geophysical field\ngeophys_changed = pynoddy.output.NoddyGeophysics('fold_thrust_changed_out')\n\nfig = plt.figure(figsize = (8,8))\nax = fig.add_subplot(111)\n# imshow(geophys_changed.grv_data, cmap = 'jet')\ncf = ax.contourf(geophys_changed.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)\ncbar = plt.colorbar(cf, orientation = 'horizontal')\n\nfig = plt.figure(figsize = (8,8))\nax = fig.add_subplot(111)\n# imshow(geophys.grv_data - geophys_changed.grv_data, cmap = 'jet')\nmaxval = np.ceil(np.max(np.abs(geophys.grv_data - geophys_changed.grv_data)))\n# comp_levels = np.arange(-maxval,1.01 * maxval, 0.05 * maxval)\ncf = ax.contourf(geophys.grv_data - geophys_changed.grv_data, 20, cmap = 'spectral') #, comp_levels, cmap = 'RdBu_r')\ncbar = plt.colorbar(cf, orientation = 'horizontal')\n\n# compare sections through model\ngeo_changed.plot_section('y', colorbar = False)\nh_out.plot_section('y', colorbar = False)\n\nfor i in range(4):\n print(\"Event %d\" 
% (i+2))\n print his.events[i+2].properties['Slip']\n print his.events[i+2].properties['Dip']\n print his.events[i+2].properties['Dip Direction']\n\n \n\n# recompute the geology blocks for comparison:\npynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out')\n\ngeology_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out')\n\ngeology_changed.plot_section('x', \n# layer_labels = his.model_stratigraphy, \n colorbar_orientation = 'horizontal', \n colorbar=False,\n title = '',\n# savefig=True, fig_filename = 'fold_thrust_NS_section.eps',\n cmap = 'YlOrRd')\n\n\n\ngeology_changed.plot_section('y', \n # layer_labels = his.model_stratigraphy, \n colorbar_orientation = 'horizontal', title = '', cmap = 'YlOrRd', \n# savefig=True, fig_filename = 'fold_thrust_EW_section.eps',\n ve=1.5)\n \n\n# Calculate block difference and export as VTK for 3-D visualisation:\nimport copy\ndiff_model = copy.deepcopy(geology_changed)\ndiff_model.block -= h_out.block\n\ndiff_model.export_to_vtk(vtk_filename = \"diff_model_fold_thrust_belt\")", "Figure with all results\nWe now create a figure with the gravity field of the original and the changed model, as well as a difference plot to highlight areas with significant changes. This example also shows how additional equations can easily be combined with pynoddy classes.", "fig = plt.figure(figsize=(20,8))\nax1 = fig.add_subplot(131)\n# original plot\nlevels = np.arange(322,344,1)\ncf1 = ax1.contourf(geophys.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)\n# cbar1 = ax1.colorbar(cf1, orientation = 'horizontal')\nfig.colorbar(cf1, orientation='horizontal')\nax1.set_title('Gravity of original model')\n\nax2 = fig.add_subplot(132)\n\n\n\n\ncf2 = ax2.contourf(geophys_changed.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342)\nax2.set_title('Gravity of changed model')\nfig.colorbar(cf2, orientation='horizontal')\n\nax3 = fig.add_subplot(133)\n\n\ncomp_levels = np.arange(-10.,10.1,0.25)\ncf3 = ax3.contourf(geophys.grv_data - geophys_changed.grv_data, comp_levels, cmap = 'RdBu_r')\nax3.set_title('Gravity difference')\n\nfig.colorbar(cf3, orientation='horizontal')\n\nplt.savefig(\"grav_ori_changed_compared.eps\")\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pvillela/ServerSim
.ipynb_checkpoints/OverviewAndTutorial-checkpoint.ipynb
mit
[ "ServerSim Overview and Tutorial\nIntroduction\nThis is an overview and tutorial about ServerSim, a framework for the creation of discrete event simulation models to analyze the performance, throughput, and scalability of services deployed on computer servers.\nFollowing the overview of ServerSim, we will provide an example and tutorial of its use for the comparison of two major service deployment patterns.\nThis document is a Jupyter notebook. See http://jupyter.org/ for more information on Jupyter notebooks.\nServerSim Core Concepts\nServerSim is a small framework based on SimPy, a well-known discrete-event simulation framework written in the Python language. The reader should hav at least a cursory familiarity with Python and SimPy (https://simpy.readthedocs.io/en/latest/contents.html) in order to make the most of this document.\nPython is well-suited to this kind of application due to its rapid development dynamic language characteristics and the availability of powerful libraries relevant for this kind of work. In addition to SimPy, we will use portions of SciPy, a powerful set of libraries for efficient data analysis and visualization that includes Matplotlib, which will be used for plotting graphs in our tutorial.\nServerSim consists of a several classes and utilities. The main classes are described below.\nclass Server\nRepresents a server -- physical, VM, or container, with a predetermined computation capacity. A server can execute arbitrary service request types. The computation capacity of a server is represented in terms of a number of hardware threads and a total speed number (computation units processed per unit of time). The total speed is equally apportioned among the hardware threads, to give the speed per hardware thread. A server also has a number of associated software threads (which must be no smaller than the number of hardware threads). Software threads are relevant for blocking computations only.\nThe simulations in this document assume non-blocking services, so the software threads will not be of consequence in the tutorial example.\nAttributes:\n- env: SimPy Environment, used to start and end simulations, and used internally by SimPy to control simulation events.\n- max_concurrency: The maximum of _hardware threads for the server.\n- num_threads: The maximum number of software threads for the server.\n- speed: Aggregate server speed across all _hardware threads.\n- name: The server's name.\n- hw_svc_req_log: If not None, a list where hardware\n service requests will be logged. Each log entry is a\n triple (\"hw\", name, svc_req), where name is this server's\n name and svc_req is the current service request asking for\n hardware resources.\n- sw_svc_req_log: If not None, a list where software thread\n service requests will be logged. Each log entry is a\n triple (\"sw\", name, svc_req), where name is this server's\n name and svc_req is the current service request asking for a\n software thread.\nclass SvcRequest\nA request for execution of computation units on one or more servers.\nA service request submission is implemented as a SimPy Process.\nA service request can be a composite of sub-requests.\nA service request that is part of a composite has an attribute that\nis a reference to its parent service request. In addition,\nsuch a composition is implemented through the gen attribute of the\nservice request, which is a generator. 
That generator can yield service\nrequest submissions for other service requests.\nBy default, a service request is non-blocking, i.e., a thread is\nheld on the target server only while the service request itself\nis executing; the thread is relinquished when the request\nfinishes executing and it passes control to its sub-requests.\nHowever, blocking service requests can be modeled as well (see the\nBlkg class).\nAttributes:\n- env: The SimPy Environment.\n- parent: The immediately containing service request, in case this\n is part of a composite service request. None otherwise.\n- svc_name: Name of the service this request is associated with.\n- gen: Generator which defines the behavior of this request. The\n generator produces an iterator which yields simpy.Event\n instances. The submit() method wraps the iterator in a\n simpy.Process object to schedule the request for execution\n by SimPy.\n- server: The target server. May be None for composite service\n requests, i.e., those not produced by CoreSvcRequester.\n- in_val: Optional input value of the request.\n- in_blocking_call: Indicates whether this request is\n in the scope of a blocking call. When this parameter\n is true, the service request will hold a software\n thread on the target server while the service\n request itself and any of its sub-requests (calls\n to other servers) are executing. Otherwise, the\n call is non-blocking, so a thread is held on the\n target server only while the service request itself\n is executing; the thread is relinquished when\n this request finishes executing and it passes control\n to its sub-requests.\n- out_val: Output value produced from in_val by the service\n execution. None by default.\n- id: The unique numerical ID of this request.\n- time_log: List of tag-time pairs\n representing significant occurrences for this request.\n- time_dict: Dictionary with contents of time_log,\n for easier access to information.\nclass SvcRequester\nBase class of service requesters.\nA service requester represents a service. In this framework,\na service requester is a factory for service requests. \"Deploying\"\na service on a server is modeled by having service requests \nproduced by the service requester sent to the target server.\nA service requester can be a composite of sub-requesters, thus\nrepresenting a composite service.\nAttributes:\n- env: The SimPy Environment.\n- svc_name: Name of the service.\n- log: Optional list to collect all service request objects\n produced by this service requester.\nclass UserGroup\nRepresents a set of identical users or clients that submit\nservice requests.\nEach user repeatedly submits service requests produced \nby service requesters randomly selected from the set \nof service requesters specified for the group.\nAttributes:\n- env: The SimPy Environment.\n- num_users: Number of users in group. This can be either a\n positive integer or a sequence of (float, int), where\n the floats are monotonically increasing. In this case,\n the sequence represents a step function of time, where each pair\n represents a step whose range of x values extends from the\n first component of the pair (inclusive) to the first\n component of the next pair (exclusive), and whose y value\n is the second component of the pair. 
The first pair in\n the sequence must have 0 as its first component.\n If the num_users argument is an int, it is transformed\n into the list [(0, num_users)].\n- name: This user group's name.\n- weighted_svcs: List of pairs of\n SvcRequester instances and positive numbers\n representing the different service request types issued by\n the users in the group and their weights. The weights are\n the relative frequencies with which the service requesters \n will be executed (the weights do not need to add up to 1, \n as they are normalized by this class).\n- min_think_time: The minimum think time between service\n requests from a user. Think time will be uniformly \n distributed between min_think_time and max_think_time.\n- max_think_time: The maximum think time between service\n requests from a user. Think time will be uniformly \n distributed between min_think_time and max_think_time.\n- quantiles: List of quantiles to be tallied. It\n defaults to [0.5, 0.95, 0.99] if not provided.\n- svc_req_log: If not None, a sequence where service requests will\n be logged. Each log entry is a pair (name, svc_req), where\n name is this group's name and svc_req is the current\n service request generated by this group.\n- svcs: The first components of weighted_svcs.\nclass CoreSvcRequester(SvcRequester)\nThis is the core service requester implementation that\ninteracts with servers to utilize server resources.\nAll service requesters are either instances of this class or\ncomposites of such instances created using the various \nservice requester combinators in this module\nAttributes:\n- env: See base class.\n- svc_name: See base class.\n- fcompunits: A (possibly randodm) function that\n generates the number of compute units required to execute a\n service request instance produced by this object.\n- fserver: Function that produces a server (possibly round-robin,\n random, or based on server load information) when given a\n service request name. Models a load-balancer.\n- log: See base class.\n- f: An optional function that is applied to a service request's\n in_val to produce its out_val. If f is None, the constant\n function that always returns None is used.\nOther service requester classes\nFollowing are other service requester classes (subclasses of SvcRequester) in addition to CoreSvcRequester, that can be used to define more complex services, including blocking services, asynchronous fire-and-forget services, sequentially dependednt services, parallel service calls, and service continuations. 
These additional classes are not used in the simulations in this document.\nclass Async(SvcRequester)\nWraps a service requester to produce asynchronous fire-and-forget\nservice requests.\nAn asynchronous service request completes and returns immediately\nto the parent request, while the underlying (child) service request is\nscheduled for execution on its target server.\nAttributes:\n- env: See base class.\n- svc_requester: The underlying service requester that is wrapped\n by this one.\n- log: See base class.\nclass Blkg(SvcRequester)\nWraps a service requester to produce blocking service requests.\nA blocking service request will hold a software thread on the\ntarget server until the service request itself and all of its\nnon-asynchronous sub-requests complete.\nAttributes:\n- env: See base class.\n- svc_requester: The underlying service requester that is wrapped\n by this one.\n- log: See base class.\nclass Seq(SvcRequester)\nCombines a non-empty list of service requesters to yield a\nsequential composite service requester.\nThis service requester produces composite service requests.\nA composite service request produced by this service\nrequester consists of a service request from each of the\nprovided service requesters. Each of the service requests is\nsubmitted in sequence, i.e., each service request is\nsubmitted when the previous one completes.\nAttributes:\n- env: See base class.\n- svc_name: See base class.\n- svc_requesters: A composite service request produced by this\n service requester consists of a service request from each of\n the provided service requesters\n- cont: If true, the sequence executes as continuations\n of the first request, all on the same server. Otherwise,\n each request can execute on a different server.\n- log: See base class.\nclass Par(SvcRequester)\nCombines a non-empty list of service requesters to yield a\nparallel composite service requester.\nThis service requester produces composite service requests.\nA composite service request produced by this service\nrequester consists of a service request from each of the\nprovided service requesters. All of the service requests are\nsubmitted concurrently.\nWhen the attribute cont is True, this represents multi-threaded\nexecution of requests on the same server. Otherwise, each\nservice request can execute on a different server.\nAttributes:\n- env: See base class.\n- svc_name: See base class.\n- svc_requesters: See class docstring.\n- f: Optional function that takes the outputs of all the component\n service requests and produces the overall output\n for the composite. If None then the constant function\n that always produces None is used.\n- cont: If true, all the requests execute on the same server.\n Otherwise, each request can execute on a different server.\n When cont is True, the server is the container service\n request's server if not None, otherwise the server is\n picked from the first service request in the list of\n generated service requests.\n- log: See base class.\nTutorial Example: Comparison of Two Service Deployment Patterns\nBelow we compare two major service deployment patterns by using discrete-event simulations. Ideally the reader will have had some prior exposure to the Python language in order to follow along all the details. 
However, the concepts and conclusions should be understandable to readers with software architecture or engineering background even if not familiar with Python.\nWe assume an application made up of multiple multi-threaded services and consider two deployment patterns:\n\nCookie-cutter deployment, where all services making up an application are deployed together on each VM or container. This is typical for \"monolithic\" applications but can also be used for micro-services. See Fowler and Hammant.\nIndividualized deployment, where each of the services is deployed on its own VM or (more likely) it own container.\n\nIn the simulations below, the application is made up of just two services, to simplify the model and the analysis, but without loss of generality in terms of the main conclusions.\nEnvironment set-up\nThe code used in these simulations should be compatible with both Python 2.7 and Python 3.x.\nPython and the following Python packages need to be installed in your computer:\n- jupyter-notebook\n- simpy\n- matplotlib\n- LiveStats\nThe model in this document should be run from the parent directory of the serversim package directory, which contains the source files for the ServerSim framework.\nThe core simulation function\nFollowing is the the core function used in the simulations This function will be called with different arguments to simulate different scenarios.\nThis function sets up a simulation with the following givens:\n\nSimulation duration of 200 time units (e.g., seconds).\nA set of servers. Each server has 10 hardware threads, 20 software threads, and a speed of 20 compute units per unit of time. The number of servers is fixed by the server_range1 and server_range2 parameters described below.\nTwo services:\nsvc_1, which consumes a random number of compute units per request, with a range from 0.2 to 3.8, averaging 2.0 compute units per request\nsvc_2, which consumes a random number of compute units per request, with a range from 0.1 to 1.9, averaging 1.0 compute units per request\nA user group, with a number of users determined by the num_users parameter described below. The user group generates service requests from the two services, with probabilities proportional to the parameters weight_1 and weight_2 described below. The think time for users in the user group ranges from 2.0 to 10.0 time units.\n\nParameters:\n\nnum_users: the number of users being simulated. This parameter can be either a positive integer or a list of pairs. In the second case, the list of pairs represents a number of users that varies over time as a step function. The first elements of the pairs in the list must be strictly monotonically increasing and each pair in the list represents a step in the step function. Each step starts (inclusive) at the time represented by the first component of the corresponding pair and ends (exclusive) at the time represented by the first component of the next pair in the list.\nweight1: the relative frequency of service requests for the first service.\nweight2: the relative frequency of service requests for the second service.\nserver_range1: a Python range representing the numeric server IDs of the servers on which the first service can be deployed.\nserver_range2: a Python range representing the numeric server IDs of the servers on which the second service can be deployed. This and the above range can be overlapping. 
In case they are overlapping, the servers in the intersection of the ranges will host both the first and the second service.\n\nImports\nWe import the required libraries, as well as the __future__ import for compatibility between Python 2.7 and Python 3.x.", "# %load simulate_deployment_scenario.py\nfrom __future__ import print_function\n\nfrom typing import List, Tuple, Sequence\n\nfrom collections import namedtuple\nimport random\n\nimport simpy\n\nfrom serversim import *\n\n\ndef simulate_deployment_scenario(num_users, weight1, weight2, server_range1,\n server_range2):\n # type: (int, float, float, Sequence[int], Sequence[int]) -> Result\n\n Result = namedtuple(\"Result\", [\"num_users\", \"weight1\", \"weight2\", \"server_range1\",\n \"server_range2\", \"servers\", \"grp\"])\n\n def cug(mid, delta):\n \"\"\"Computation units generator\"\"\"\n def f():\n return random.uniform(mid - delta, mid + delta)\n return f\n\n def ld_bal(svc_name):\n \"\"\"Application server load-balancer.\"\"\"\n if svc_name == \"svc_1\":\n svr = random.choice(servers1)\n elif svc_name == \"svc_2\":\n svr = random.choice(servers2)\n else:\n assert False, \"Invalid service type.\"\n return svr\n\n simtime = 200\n hw_threads = 10\n sw_threads = 20\n speed = 20\n svc_1_comp_units = 2.0\n svc_2_comp_units = 1.0\n quantiles = (0.5, 0.95, 0.99)\n\n env = simpy.Environment()\n\n n_servers = max(server_range1[-1] + 1, server_range2[-1] + 1)\n servers = [Server(env, hw_threads, sw_threads, speed, \"AppServer_%s\" % i)\n for i in range(n_servers)]\n servers1 = [servers[i] for i in server_range1]\n servers2 = [servers[i] for i in server_range2]\n\n svc_1 = CoreSvcRequester(env, \"svc_1\", cug(svc_1_comp_units,\n svc_1_comp_units*.9), ld_bal)\n svc_2 = CoreSvcRequester(env, \"svc_2\", cug(svc_2_comp_units,\n svc_2_comp_units*.9), ld_bal)\n\n weighted_txns = [(svc_1, weight1),\n (svc_2, weight2)\n ]\n\n min_think_time = 2.0 # .5 # 4\n max_think_time = 10.0 # 1.5 # 20\n svc_req_log = [] # type: List[Tuple[str, SvcRequest]]\n\n grp = UserGroup(env, num_users, \"UserTypeX\", weighted_txns, min_think_time,\n max_think_time, quantiles, svc_req_log)\n grp.activate_users()\n\n env.run(until=simtime)\n\n return Result(num_users=num_users, weight1=weight1, weight2=weight2,\n server_range1=server_range1, server_range2=server_range2,\n servers=servers, grp=grp)\n", "Printing the simulation results\nThe following function prints the outputs from the above core simulation function.", "# %load print_results.py\nfrom __future__ import print_function\n\nfrom typing import Sequence, Any, IO\n\nfrom serversim import *\n\n\ndef print_results(num_users=None, weight1=None, weight2=None, server_range1=None,\n server_range2=None, servers=None, grp=None, fi=None):\n # type: (int, float, float, Sequence[int], Sequence[int], Sequence[Server], UserGroup, IO[str]) -> None\n \n if fi is None:\n import sys\n fi = sys.stdout\n\n print(\"\\n\\n***** Start Simulation --\", num_users, \",\", weight1, \",\", weight2, \", [\", server_range1[0], \",\", server_range1[-1] + 1,\n \") , [\", server_range2[0], \",\", server_range2[-1] + 1, \") *****\", file=fi)\n print(\"Simulation: num_users =\", num_users, file=fi)\n\n print(\"<< ServerExample >>\\n\", file=fi)\n\n indent = \" \" * 4\n\n print(\"\\n\" + \"Servers:\", file=fi)\n for svr in servers:\n print(indent*1 + \"Server:\", svr.name, file=fi)\n print(indent * 2 + \"max_concurrency =\", svr.max_concurrency, file=fi)\n print(indent * 2 + \"num_threads =\", svr.num_threads, file=fi)\n print(indent*2 + \"speed 
=\", svr.speed, file=fi)\n print(indent * 2 + \"avg_process_time =\", svr.avg_process_time, file=fi)\n print(indent * 2 + \"avg_hw_queue_time =\", svr.avg_hw_queue_time, file=fi)\n print(indent * 2 + \"avg_thread_queue_time =\", svr.avg_thread_queue_time, file=fi)\n print(indent * 2 + \"avg_service_time =\", svr.avg_service_time, file=fi)\n print(indent * 2 + \"avg_hw_queue_length =\", svr.avg_hw_queue_length, file=fi)\n print(indent * 2 + \"avg_thread_queue_length =\", svr.avg_thread_queue_length, file=fi)\n print(indent * 2 + \"hw_queue_length =\", svr.hw_queue_length, file=fi)\n print(indent * 2 + \"hw_in_process_count =\", svr.hw_in_process_count, file=fi)\n print(indent * 2 + \"thread_queue_length =\", svr.thread_queue_length, file=fi)\n print(indent * 2 + \"thread_in_use_count =\", svr.thread_in_use_count, file=fi)\n print(indent*2 + \"utilization =\", svr.utilization, file=fi)\n print(indent*2 + \"throughput =\", svr.throughput, file=fi)\n\n print(indent*1 + \"Group:\", grp.name, file=fi)\n print(indent*2 + \"num_users =\", grp.num_users, file=fi)\n print(indent*2 + \"min_think_time =\", grp.min_think_time, file=fi)\n print(indent*2 + \"max_think_time =\", grp.max_think_time, file=fi)\n print(indent * 2 + \"responded_request_count =\", grp.responded_request_count(None), file=fi)\n print(indent * 2 + \"unresponded_request_count =\", grp.unresponded_request_count(None), file=fi)\n print(indent * 2 + \"avg_response_time =\", grp.avg_response_time(), file=fi)\n print(indent * 2 + \"std_dev_response_time =\", grp.std_dev_response_time(None), file=fi)\n print(indent*2 + \"throughput =\", grp.throughput(None), file=fi)\n\n for svc in grp.svcs:\n print(indent*2 + svc.svc_name + \":\", file=fi)\n print(indent * 3 + \"responded_request_count =\", grp.responded_request_count(svc), file=fi)\n print(indent * 3 + \"unresponded_request_count =\", grp.unresponded_request_count(svc), file=fi)\n print(indent * 3 + \"avg_response_time =\", grp.avg_response_time(svc), file=fi)\n print(indent * 3 + \"std_dev_response_time =\", grp.std_dev_response_time(svc), file=fi)\n print(indent*3 + \"throughput =\", grp.throughput(svc), file=fi)\n", "Mini-batching, plotting, and comparison of results\nThe following three functions handle mini-batching, plotting, and comparison of results.\n\nminibatch_resp_times -- This function takes the user group from the results of the deployment_example function, scans the service request log of the user group, and produces mini-batch statistics for every time_resolution time units. For example, with a simulation of 200 time units and a time_resolution of 5 time units, we end up with 40 mini-batches. 
The statistics produced are the x values corresponding to each mini-batch, and the counts, means, medians, 95th percentile, and 99th percentile in each mini-batch.\nplot_counts_means_q95 -- Plots superimposed counts, means, and 95th percentiles for two mini-batch sets coming from two simulations.\ncompare_scenarios -- Combines the above two functions to produce comparison plots from two simulations.", "# %load report_resp_times.py\nfrom typing import TYPE_CHECKING, Sequence, Tuple\nimport functools as ft\nfrom collections import OrderedDict\n\nimport matplotlib.pyplot as plt\nfrom livestats import livestats\n\nif TYPE_CHECKING:\n from serversim import UserGroup\n\n\ndef minibatch_resp_times(time_resolution, grp):\n # type: (float, UserGroup) -> Tuple[Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float]]\n quantiles = [0.5, 0.95, 0.99]\n\n xys = [(int(svc_req.time_dict[\"submitted\"]/time_resolution),\n svc_req.time_dict[\"completed\"] - svc_req.time_dict[\"submitted\"])\n for (_, svc_req) in grp.svc_req_log\n if svc_req.is_completed]\n\n def ffold(map_, p):\n x, y = p\n if x not in map_:\n map_[x] = livestats.LiveStats(quantiles)\n map_[x].add(y)\n return map_\n\n xlvs = ft.reduce(ffold, xys, dict())\n\n xs = xlvs.keys()\n xs.sort()\n\n counts = [xlvs[x].count for x in xs]\n means = [xlvs[x].average for x in xs]\n q_50 = [xlvs[x].quantiles()[0] for x in xs]\n q_95 = [xlvs[x].quantiles()[1] for x in xs]\n q_99 = [xlvs[x].quantiles()[2] for x in xs]\n\n return xs, counts, means, q_50, q_95, q_99\n\n\ndef plot_counts_means_q95(quantiles1, quantiles2):\n\n x = quantiles1[0] # should be same as quantiles2[0]\n\n counts1 = quantiles1[1]\n counts2 = quantiles2[1]\n\n means1 = quantiles1[2]\n means2 = quantiles2[2]\n\n q1_95 = quantiles1[4]\n q2_95 = quantiles2[4]\n\n # Plot counts\n plt.plot(x, counts1, color='b', label=\"Counts 1\")\n plt.plot(x, counts2, color='r', label=\"Counts 2\")\n plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n plt.xlabel(\"Time buckets\")\n plt.ylabel(\"Throughput\")\n plt.show()\n\n # Plot averages and 95th percentiles\n\n plt.plot(x, means1, color='b', label=\"Means 1\")\n plt.plot(x, q1_95, color='c', label=\"95th Percentile 1\")\n\n plt.plot(x, means2, color='r', label=\"Means 2\")\n plt.plot(x, q2_95, color='m', label=\"95th Percentile 2\")\n\n # Hack to avoid duplicated labels (https://stackoverflow.com/questions/13588920/stop-matplotlib-repeating-labels-in-legend)\n handles, labels = plt.gca().get_legend_handles_labels()\n by_label = OrderedDict(zip(labels, handles))\n plt.legend(by_label.values(), by_label.keys(), bbox_to_anchor=(1.05, 1),\n loc=2, borderaxespad=0.)\n\n plt.xlabel(\"Time buckets\")\n plt.ylabel(\"Response times\")\n\n plt.show()\n\n \ndef compare_scenarios(sc1, sc2):\n grp1 = sc1.grp\n grp2 = sc2.grp\n\n quantiles1 = minibatch_resp_times(5, grp1)\n quantiles2 = minibatch_resp_times(5, grp2)\n\n plot_counts_means_q95(quantiles1, quantiles2)\n", "Random number generator seed\nWe set the random number generator seed to a known value to produce repeatable simulations. Comment-out this line to have a different system-generated seed every time the simulations are executed.", "random.seed(123456)", "Simulations\nSeveral simulation scenarios are executed below. 
See the descriptions of the parameters and hard-coded given values of the core simulation function above.\nWith 10 servers and weight_1 = 2 and weight_2 = 1, this configuration supports 720 users with average response times close to the minimum possible. How did we arrive at that number? For svc_1, the heavier of the two services, the minimum possible average response time is 1 time unit (= 20 server compute units / 10 hardware threads / 2 average service compute units). One server can handle 10 concurrent svc_1 users without think time, or 60 concurrent svc_1 users with average think time of 6 time units. Thus, 10 servers can handle 600 concurrent svc_1 users. Doing the math for both services and taking their respective probabilities into account, the number of users is 720. For full details, see the spreadsheet CapacityPlanning.xlsx. Of course, due to randomness, there will be queuing and the average response times will be greater than the minimum possible. With these numbers, the servers will be running hot as there is no planned slack capacity.\nSimulation 0\nThis is a simulation of one scenario (not a comparison) and printing out of its results. It illustrates the use of the print_results function. The scenario here is the same as the first scenario for Simulation 1 below.", "sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1, \n server_range1=range(0, 10), server_range2=range(0, 10))\nprint_results(**sc1.__dict__)", "Simulation 1\nIn the first scenario, there are 10 servers which are shared by both services. In the second scenario, there are 10 servers, of which 8 are allocated to the first service and 2 are allocated to the second service. This allocation is proportional to their respective loads.", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1, \n server_range1=range(0, 10), server_range2=range(0, 10))\nrandom.setstate(rand_state)\nsc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1, \n server_range1=range(0, 8), server_range2=range(8, 10))\ncompare_scenarios(sc1, sc2)", "Repeating above comparison to illustrate variability of results.", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1, \n server_range1=range(0, 10), server_range2=range(0, 10))\nrandom.setstate(rand_state)\nsc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1, \n server_range1=range(0, 8), server_range2=range(8, 10))\ncompare_scenarios(sc1, sc2)", "Conclusions: The results of the two deployment strategies are similar in terms of throughput, mean response times, and 95th percentile response times. 
This is as would be expected, since the capacities allocated under the individualized deployment strategy are proportional to the respective service loads.\nSimulation 2\nNow, we change the weights of the different services, significantly increasing the weight of svc_1 from 2 to 5.", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1, \n                                   server_range1=range(0, 10), server_range2=range(0, 10))\nrandom.setstate(rand_state)\nsc2 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1, \n                                   server_range1=range(0, 8), server_range2=range(8, 10))\ncompare_scenarios(sc1, sc2)", "Conclusions: The cookie-cutter deployment strategy was able to absorb the change in load mix, while the individualized strategy was not, with visibly lower throughput and higher mean and 95th percentile response times.\nSimulation 3\nFor this simulation, we also change the weights of the two services, but now in the opposite direction -- we change the weight of svc_1 from 2 to 1.", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1, \n                                   server_range1=range(0, 10), server_range2=range(0, 10))\nrandom.setstate(rand_state)\nsc2 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1, \n                                   server_range1=range(0, 8), server_range2=range(8, 10))\ncompare_scenarios(sc1, sc2)", "Conclusions: Again the cookie-cutter deployment strategy was able to absorb the change in load mix, while the individualized strategy was not, with visibly lower throughput and higher mean and 95th percentile response times. Notice that due to the changed load mix, the total load was lower than before and, with the same number of servers, the cookie-cutter configuration had excess capacity while the individualized configuration had excess capacity for svc_1 and insufficient capacity for svc_2.\nSimulation 4\nWe now continue with the weights used in Simulation 3, but adjust server capacity to account for the lower aggregate load and different load mix. \nBelow we have three scenarios:\n- Scenario 1 (cookie-cutter) removes one server\n- Scenario 2a (individualized) removes one server from the pool allocated to svc_1\n- Scenario 2b (individualized) removes one server and reassigns one server from the svc_1 pool to the svc_2 pool.\nRun the three scenarios:", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1, \n                                   server_range1=range(0, 9), server_range2=range(0, 9))\nrandom.setstate(rand_state)\nsc2a = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1, \n                                    server_range1=range(0, 7), server_range2=range(7, 9))\nrandom.setstate(rand_state)\nsc2b = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1, \n                                    server_range1=range(0, 6), server_range2=range(6, 9))", "Compare the results of scenarios 1 and 2a:", "compare_scenarios(sc1, sc2a)", "Compare the results of scenarios 1 and 2b:", "compare_scenarios(sc1, sc2b)", "Conclusions: Scenario 1 performs significantly better than Scenario 2a and comparably to Scenario 2b. This simulation shows again that the cookie-cutter strategy is comparable in performance and throughput to a tuned individualized configuration, and beats hands-down an individualized configuration that is not perfectly tuned for the load mix.\nVary the number of users over time\nThe simulations below will vary the load over time by varying the number of users over time. 
The list below defines a step function that represents the number of users varying over time. In this case, the number of users changes every 50 time units.", "users_curve = [(0, 900), (50, 540), (100, 900), (150, 540)]", "Simulation 5\nThis simulation is similar to Simulation 1, the difference being the users curve instead of a constant 720 users.", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1, \n                                   server_range1=range(0, 10), server_range2=range(0, 10))\nrandom.setstate(rand_state)\nsc2 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1, \n                                   server_range1=range(0, 8), server_range2=range(8, 10))\ncompare_scenarios(sc1, sc2)", "Conclusions: The cookie-cutter and individualized strategies produced similar results.\nSimulation 6\nWe now run a simulation similar to Simulation 4, with the difference that the number of users varies over time. This combines load variability over time as well as a change in load mix. As in Simulation 4, we adjust server capacity to account for the lower aggregate load and different load mix. \nBelow we have three scenarios:\n- Scenario 1 (cookie-cutter) removes one server\n- Scenario 2a (individualized) removes one server from the pool allocated to svc_1\n- Scenario 2b (individualized) removes one server and reassigns one server from the svc_1 pool to the svc_2 pool.\nRun the three scenarios:", "rand_state = random.getstate()\nsc1 = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1, \n                                   server_range1=range(0, 9), server_range2=range(0, 9))\nrandom.setstate(rand_state)\nsc2a = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1, \n                                    server_range1=range(0, 7), server_range2=range(7, 9))\nrandom.setstate(rand_state)\nsc2b = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1, \n                                    server_range1=range(0, 6), server_range2=range(6, 9))", "Compare the results of scenarios 1 and 2a:", "compare_scenarios(sc1, sc2a)", "Compare the results of scenarios 1 and 2b:", "compare_scenarios(sc1, sc2b)", "Conclusions: Scenario 1 performs significantly better than Scenario 2a and comparably to Scenario 2b. This simulation shows again that the cookie-cutter strategy is comparable in performance and throughput to a tuned individualized configuration, and beats an individualized configuration that is not perfectly tuned for the load mix.\nSimulation 7\nThis final simulation is similar to Simulation 1, with the difference that the number of users is 864 instead of 720. In this scenario, the total number of servers required for best capacity utilization can be calculated to be 12 (see CapacityPlanning.xlsx). Under the individualized deployment strategy, the ideal number of servers allocated to svc_1 and svc_2 would be 9.6 and 2.4, respectively. 
Since the number of servers needs to be an integer, we will run simulations with server allocations to svc_1 and svc_2, respectively, of 10 and 2, 9 and 3, and 10 and 3.\nThus, we have five scenarios:\n- Scenario 1a (cookie-cutter) with 12 servers\n- Scenario 2a1 (individualized) with 9 servers for svc_1 and 3 servers for svc_2\n- Scenario 2a2 (individualized) with 10 servers for svc_1 and 2 servers for svc_2\n- Scenario 1b (cookie-cutter) with 13 servers\n- Scenario 2b (individualized) with 10 servers for svc_1 and 3 servers for svc_2\nRun the scenarios:", "rand_state = random.getstate()\nsc1a = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1, \n server_range1=range(0, 12), server_range2=range(0, 12))\nrandom.setstate(rand_state)\nsc2a1 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1, \n server_range1=range(0, 9), server_range2=range(9, 12))\nrandom.setstate(rand_state)\nsc2a2 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1, \n server_range1=range(0, 10), server_range2=range(10, 12))\nrandom.setstate(rand_state)\nsc1b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1, \n server_range1=range(0, 13), server_range2=range(0, 13))\nrandom.setstate(rand_state)\nsc2b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1, \n server_range1=range(0, 10), server_range2=range(10, 13))", "Compare the results of scenarios 1a and 2a1:", "compare_scenarios(sc1a, sc2a1)", "Compare the results of scenarios 1a and 2a2:", "compare_scenarios(sc1a, sc2a2)", "Compare the results of scenarios 1b and 2b:", "compare_scenarios(sc1b, sc2b)", "Conclusions: Scenario 1a has comparable throughput but somewhat better response times than Scenario 2a1. Scenario 1a has somewhat better throughput and response times than Scenario 2a2. Scenario 1b has comparable throughput and a bit less extreme response times than Scenario 2b. In all three comparisons, the cookie-cutter strategy performs better than or comparably to the individualized strategy.\nOverall Conclusions\nThe various simulations show consistently that the cookie-cutter strategy is comparable in performance and throughput (and therefore hardware utilization) to a tuned individualized configuration, and beats an individualized configuration that is not well-tuned for the load mix. Cookie-cutter thus proves to be a more robust and stable deployment strategy in many realistic situations, in the face of likely load mix fluctuations, mismatches between forecast average load mixes and actual average load mixes, and mismatches between forecast load mixes and allocated server capacities. However, although not highlighted on the simulation graphs presented, it is a fact (that can be observed in the simulation logs) that response times for svc_2 are better under a well-tuned individualized configuration because then svc_2 requests don't have to share a queue with longer-running svc_1 requests. When that's an important consideration, an individualized deployment strategy could be a more appropriate choice." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jpn--/larch
book/example/102-swissmetro-weighted.ipynb
gpl-3.0
[ "102: Swissmetro Weighted MNL Mode Choice", "# TEST\nimport larch\nimport os\nimport pandas as pd\npd.set_option(\"display.max_columns\", 999)\npd.set_option('expand_frame_repr', False)\npd.set_option('display.precision', 3)\nlarch._doctest_mode_ = True\nimport larch.numba as lx\n\nimport pandas as pd\nimport larch.numba as lx", "This example is a mode choice model built using the Swissmetro example dataset.\nFirst we create the Dataset and Model objects:", "raw_data = pd.read_csv(lx.example_file('swissmetro.csv.gz')).rename_axis(index='CASEID')\ndata = lx.Dataset.construct.from_idco(raw_data, alts={1:'Train', 2:'SM', 3:'Car'})\ndata", "The swissmetro example models exclude some observations. We can use the \nDataset.query_cases method to identify the observations we would like to keep.", "m = lx.Model(data.dc.query_cases(\"PURPOSE in (1,3) and CHOICE != 0\"))", "We can attach a title to the model. The title does not affect the calculations\nas all; it is merely used in various output report styles.", "m.title = \"swissmetro example 02 (weighted logit)\"", "We need to identify the availability and choice variables.", "m.availability_co_vars = {\n 1: \"TRAIN_AV * (SP!=0)\",\n 2: \"SM_AV\",\n 3: \"CAR_AV * (SP!=0)\",\n}\nm.choice_co_code = 'CHOICE'", "This model adds a weighting factor.", "m.weight_co_var = \"1.0*(GROUP==2)+1.2*(GROUP==3)\"", "The swissmetro dataset, as with all Biogeme data, is only in co format.", "from larch.roles import P,X\nm.utility_co[1] = P(\"ASC_TRAIN\")\nm.utility_co[2] = 0\nm.utility_co[3] = P(\"ASC_CAR\")\nm.utility_co[1] += X(\"TRAIN_TT\") * P(\"B_TIME\")\nm.utility_co[2] += X(\"SM_TT\") * P(\"B_TIME\")\nm.utility_co[3] += X(\"CAR_TT\") * P(\"B_TIME\")\nm.utility_co[1] += X(\"TRAIN_CO*(GA==0)\") * P(\"B_COST\")\nm.utility_co[2] += X(\"SM_CO*(GA==0)\") * P(\"B_COST\")\nm.utility_co[3] += X(\"CAR_CO\") * P(\"B_COST\")", "Larch will find all the parameters in the model, but we'd like to output them in\na rational order. We can use the ordering method to do this:", "m.ordering = [\n (\"ASCs\", 'ASC.*',),\n (\"LOS\", 'B_.*',),\n]\n\n# TEST\nfrom pytest import approx\nassert m.loglike() == approx(-7892.111473285806)", "We can estimate the models and check the results match up with those given by Biogeme:", "m.set_cap(15)\nm.maximize_loglike(method='SLSQP')\n\n# TEST\nr = _\nfrom pytest import approx\nassert r.loglike == approx(-5931.557677709527)\n\nm.calculate_parameter_covariance()\nm.parameter_summary()\n\n# TEST\nassert m.parameter_summary().data.to_markdown() == '''\n| | Value | Std Err | t Stat | Signif | Null Value |\n|:----------------------|--------:|----------:|---------:|:---------|-------------:|\n| ('ASCs', 'ASC_CAR') | -0.114 | 0.0407 | -2.81 | ** | 0 |\n| ('ASCs', 'ASC_TRAIN') | -0.757 | 0.0528 | -14.32 | *** | 0 |\n| ('LOS', 'B_COST') | -0.0112 | 0.00049 | -22.83 | *** | 0 |\n| ('LOS', 'B_TIME') | -0.0132 | 0.000537 | -24.62 | *** | 0 |\n'''[1:-1]", "Looks good!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fdmazzone/Ecuaciones_Diferenciales
Teoria_Basica/scripts/GruposLie.ipynb
gpl-2.0
[ "<h2> Ejercicios varios relacionados con grupos de Lie </h2>", "from sympy import *\ninit_printing() #muestra símbolos más agradab\nR=lambda n,d: Rational(n,d)", "Ejercicio (1ª parcial 2018): Resolver $\\frac{dy}{dx}=\\frac{x y^{4}}{3} - \\frac{2 y}{3 x} + \\frac{1}{3 x^{3} y^{2}}$. \nIntentaremos con la heuística $$\\xi=ax+cy+e$$ y $$\\eta=bx+dy+f$$ para encontrar las simetrías", "x,y,a,b,c,d,e,f=symbols('x,y,a,b,c,d,e,f',real=True)\n#cargamos la función\nF=x*y**4/3-R(2,3)*y/x+R(1,3)/x**3/y**2\nF", "Hacemos $\\xi=ax+cy+e$ y $\\eta=bx+dy+f$", "xi=a*x+c*y+e\neta=b*x+d*y+f\nxi, eta", "Condición de simetría linealizada", "Q=eta-xi*F\nCondSim=Q.diff(x)+F*Q.diff(y)-F.diff(y)*Q\nCondSim\n\nCondSim=CondSim.factor()\nCondSim\n\nCondSim1,nosirvo=fraction(CondSim)\nCondSim1\n\ne1=CondSim1.coeff(x**7).coeff(y**7)\ne1", "debe ser $f=0$", "CondSim2=CondSim1.subs(f,0)\nCondSim2\n\ne2=CondSim2.coeff(x**7).coeff(y**8)\ne2", "Vemos que $d=-2/3a$.", "CondSim3=CondSim2.subs(d,-2*a/3)\nCondSim3", "debe ser $c=0$.", "CondSim4=CondSim3.subs(c,0)\nCondSim4\n\ne3=CondSim4.coeff(x**8).coeff(y**7)\ne3", "Vemos que $b=0$.", "CondSim5=CondSim4.subs(b,0)\nCondSim5", "Se cumple si $e=0$.", "xi=xi.subs({c:0,f:0,e:0,a:1,b:0,d:-R(2,3)})\neta=eta.subs({c:0,f:0,e:0,a:1,b:0,d:-R(2,3)})\nxi,eta", "Puntos invariantes: $(0,0)$. Allí no tendremos coordenadas canónicas. \nPara hallar la coordenada invariante resolvemos\n$$y'=\\frac{\\eta}{\\xi}.$$", "f=Function('f')(x)\nxi2=xi.subs(y,f)\neta2=eta.subs(y,f)\ndsolve(Eq(f.diff(x),eta2/xi2),f)\n", "Esto nos indica que $r=x^{\\frac{2}{3}}y$ es una solución. Como $H(r)$ también sirve cualquiera sea la $H$, con $H'\\neq 0$. Eligiendo $F(r)=r^3$ podemos suponer $r= x^2y^3$.", "r=x**2*y**3\nr", "Para hallar $s$ resolvemos\n$$s=\\int\\frac{1}{\\xi}dx.$$", "s=integrate(xi2**(-1),x)\ns", "Sympy no integra bien el logarítmo", "s=log(abs(x))\nr, s", "Reemplacemos en la fórmula de cambios de variables\n$$\\frac{ds}{dr}=\\left.\\frac{s_x+s_y F}{r_x+r_y F}\\right|_{x=e^s,y=r^{1/3}e^{-2/3s}}.$$", "\nEcua=( (s.diff(x)+s.diff(y)*F)/(r.diff(x)+r.diff(y)*F)).simplify()\nr,s=symbols('r,s',real=True)\nEcua=Ecua.subs({x:exp(s),y:r**R(1,3)*exp(-R(2,3)*s)}) \nEcua", "Resolvamos $\\frac{dr}{ds}=\\frac{1}{1+r^2}$. La solucón gral es $\\arctan(r)=s+C$. Expresemos la ecuación en coordenadas cartesianas\n$$\\arctan(x^2y^3)=\\log(|x|)+C.$$", "C=symbols('C',real=True)\nsol=Eq(atan(x**2*y**3),log(abs(x))+C)\nsolExpl=solve(sol,y)\nsolExpl\n\nyg=solExpl[0]\nyg\n\nQ=simplify(eta-xi*F)\nQ", "No hay soluciones invariantes", "p=plot_implicit(sol.subs(C,0),(x,-5,5),(y,-10,10),show=False)\nfor k in range(-10,10):\n p.append(plot_implicit(sol.subs(C,k),(x,-5,5),(y,-10,10),show=False)[0])\np.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nguy/AWOT
examples/T28_jpole_flight.ipynb
gpl-2.0
[ "<h2>Load and plot a T-28 NetCDF file</h2>\n<br>\n<i>Files can be found here\n<br>\nThey follow RAF nimbus conventions</i>", "# Load the needed packages\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport awot\nfrom awot.graph.common import create_basemap\nfrom awot.graph import FlightLevel\n\n%matplotlib inline", "Supply user information", "# Set the path for data file\nflname=\"/Users/guy/data/t28/jpole/T28_JPOLE2003_800.nc\"", "<li>Set up some characteristics for plotting. \n<li>Use Cylindrical Equidistant Area map projection.\n<li>Set the spacing of the barbs and X-axis time step for labels.\n<li>Set the start and end times for subsetting.", "proj = 'cea'\nWbarb_Spacing = 300 # Spacing of wind barbs along flight path (sec)\n\n# Choose the X-axis time step (in seconds) where major labels will be\nXlabStride = 3600\n\n# Should landmarks be plotted? [If yes, then modify the section below]\nLmarks=False\n\n# Optional variables that can be included with AWOT\n# Start and end times for track in Datetime instance format\n#start_time = \"2003-08-06 00:00:00\"\n#end_time = \"2003-08-06 23:50:00\"\n\ncorners = [-96., 34., -98., 36.,]", "Read in flight data<br>\nNOTE: At the time of writing, it is required that the time_var argument be provided to make the read function work properly. This may change in the future, but time variables are not standard even among RAF Nimbus guidelines.", "fl = awot.io.read_netcdf(fname=flname, platform='t28', time_var=\"Time\")\n\nfl.keys()", "Create the track figure for this flight; there appear to be some bunk data values in lat/lon", "print(fl['latitude']['data'].min(), fl['latitude']['data'].max())\nfl['latitude']['data'][:] = np.ma.masked_equal(fl['latitude']['data'][:], 0.)\nfl['longitude']['data'][:] = np.ma.masked_equal(fl['longitude']['data'][:], 0.)\nprint(fl['latitude']['data'].min(), fl['latitude']['data'].max())\nprint(fl['longitude']['data'].min(), fl['longitude']['data'].max())\nprint(fl['altitude']['data'].max())\n\nfig, ax = plt.subplots(1, 1, figsize=(9, 9))\n\n# Create the basemap\nbm = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,ax=ax)\nbm.drawcounties()\n# Instantiate the Flight plotting routines\nflp = FlightLevel(fl, basemap=bm)\nflp.plot_trackmap(\n# start_time=start_time, end_time=end_time,\n color_by_altitude=True, track_cmap='spectral',\n min_altitude=50., max_altitude= 5000.,\n addlegend=True, addtitle=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.19/_downloads/81308ca6ca6807326a79661c989cfcba/plot_make_report.ipynb
bsd-3-clause
[ "%matplotlib inline", "Make an MNE-Report with a Slider\nIn this example, MEG evoked data are plotted in an html slider.", "# Authors: Teon Brooks <[email protected]>\n# Eric Larson <[email protected]>\n#\n# License: BSD (3-clause)\n\nfrom mne.report import Report\nfrom mne.datasets import sample\nfrom mne import read_evokeds\nfrom matplotlib import pyplot as plt\n\n\ndata_path = sample.data_path()\nmeg_path = data_path + '/MEG/sample'\nsubjects_dir = data_path + '/subjects'\nevoked_fname = meg_path + '/sample_audvis-ave.fif'", "Do standard folder parsing (this can take a couple of minutes):", "report = Report(image_format='png', subjects_dir=subjects_dir,\n info_fname=evoked_fname, subject='sample',\n raw_psd=False) # use False for speed here\nreport.parse_folder(meg_path, on_error='ignore', mri_decim=10)", "Add a custom section with an evoked slider:", "# Load the evoked data\nevoked = read_evokeds(evoked_fname, condition='Left Auditory',\n baseline=(None, 0), verbose=False)\nevoked.crop(0, .2)\ntimes = evoked.times[::4]\n# Create a list of figs for the slider\nfigs = list()\nfor t in times:\n figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100,\n show=False))\n plt.close(figs[-1])\nreport.add_slider_to_section(figs, times, 'Evoked Response',\n image_format='png') # can also use 'svg'\n\n# to save report\nreport.save('my_report.html', overwrite=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Chipe1/aima-python
learning.ipynb
mit
[ "LEARNING\nThis notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from learning.py. Let's start by importing everything from the module:", "from learning import *\nfrom probabilistic_learning import *\nfrom notebook import *", "CONTENTS\n\nMachine Learning Overview\nDatasets\nIris Visualization\nDistance Functions\nPlurality Learner\nk-Nearest Neighbours\nDecision Tree Learner\nRandom Forest Learner\nNaive Bayes Learner\nPerceptron\nLearner Evaluation\n\nMACHINE LEARNING OVERVIEW\nIn this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.\nAn agent is learning if it improves its performance on future tasks after making observations about the world.\nThere are three types of feedback that determine the three main types of learning:\n\nSupervised Learning:\n\nIn Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.\nExample: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string \"cat\" or \"dog\" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-\"cat\"}, {dog image-\"dog\"} to the agent. The agent then learns a function that maps from an input image to one of those strings.\n\nUnsupervised Learning:\n\nIn Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is clustering: detecting potential useful clusters of input examples.\nExample: A taxi agent would develop a concept of good traffic days and bad traffic days without ever being given labeled examples.\n\nReinforcement Learning:\n\nIn Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.\nExample: Let's talk about an agent to play the popular Atari game—Pong. We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out its actions prior to reinforcement were most responsible for it.\nDATASETS\nFor the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasests are the following:\n\n\nFisher's Iris: Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica.\n\n\nZoo: The dataset holds different animals and their classification as \"mammal\", \"fish\", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean).\n\n\nTo make using the datasets easier, we have written a class, DataSet, in learning.py. The tutorials found here make use of this class.\nLet's have a look at how it works before we get started with the algorithms.\nIntro\nA lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use on aima-data. Two examples are the datasets mentioned above (iris.csv and zoo.csv). 
You can find plenty datasets online, and a good repository of such datasets is UCI Machine Learning Repository.\nIn such files, each line corresponds to one item/measurement. Each individual value in a line represents a feature and usually there is a value denoting the class of the item.\nYou can find the code for the dataset here:", "%psource DataSet", "Class Attributes\n\n\nexamples: Holds the items of the dataset. Each item is a list of values.\n\n\nattrs: The indexes of the features (by default in the range of [0,f), where f is the number of features). For example, item[i] returns the feature at index i of item.\n\n\nattrnames: An optional list with attribute names. For example, item[s], where s is a feature name, returns the feature of name s in item.\n\n\ntarget: The attribute a learning algorithm will try to predict. By default the last attribute.\n\n\ninputs: This is the list of attributes without the target.\n\n\nvalues: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially None, it gets computed (by the function setproblem) from the examples.\n\n\ndistance: The distance function used in the learner to calculate the distance between two items. By default mean_boolean_error.\n\n\nname: Name of the dataset.\n\n\nsource: The source of the dataset (url or other). Not used in the code.\n\n\nexclude: A list of indexes to exclude from inputs. The list can include either attribute indexes (attrs) or names (attrnames).\n\n\nClass Helper Functions\nThese functions help modify a DataSet object to your needs.\n\n\nsanitize: Takes as input an example and returns it with non-input (target) attributes replaced by None. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned.\n\n\nclasses_to_numbers: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string.\n\n\nremove_examples: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers).\n\n\nImporting a Dataset\nImporting from aima-data\nDatasets uploaded on aima-data can be imported with the following line:", "iris = DataSet(name=\"iris\")", "To check that we imported the correct dataset, we can do the following:", "print(iris.examples[0])\nprint(iris.inputs)", "Which correctly prints the first line in the csv file and the list of attribute indexes.\nWhen importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter exclude to the attribute index or name.", "iris2 = DataSet(name=\"iris\",exclude=[1])\nprint(iris2.inputs)", "Attributes\nHere we showcase the attributes.\nFirst we will print the first three items/examples in the dataset.", "print(iris.examples[:3])", "Then we will print attrs, attrnames, target, input. Notice how attrs holds values in [0,4], but since the fourth attribute is the target, inputs holds values in [0,3].", "print(\"attrs:\", iris.attrs)\nprint(\"attrnames (by default same as attrs):\", iris.attrnames)\nprint(\"target:\", iris.target)\nprint(\"inputs:\", iris.inputs)", "Now we will print all the possible values for the first feature/attribute.", "print(iris.values[0])", "Finally we will print the dataset's name and source. 
Keep in mind that we have not set a source for the dataset, so in this case it is empty.", "print(\"name:\", iris.name)\nprint(\"source:\", iris.source)", "A useful combination of the above is dataset.values[dataset.target] which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it:", "print(iris.values[iris.target])", "Helper Functions\nWe will now take a look at the auxiliary functions found in the class.\nFirst we will take a look at the sanitize function, which sets the non-input values of the given example to None.\nIn this case we want to hide the class of the first example, so we will sanitize it.\nNote that the function doesn't actually change the given example; it returns a sanitized copy of it.", "print(\"Sanitized:\",iris.sanitize(iris.examples[0]))\nprint(\"Original:\",iris.examples[0])", "Currently the iris dataset has three classes, setosa, virginica and versicolor. We want though to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is \"virginica\". To accomplish that we will utilize the helper function remove_examples.", "iris2 = DataSet(name=\"iris\")\n\niris2.remove_examples(\"virginica\")\nprint(iris2.values[iris2.target])", "We also have classes_to_numbers. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers.", "print(\"Class of first example:\",iris2.examples[0][iris2.target])\niris2.classes_to_numbers()\nprint(\"Class of first example:\",iris2.examples[0][iris2.target])", "As you can see \"setosa\" was mapped to 0.\nFinally, we take a look at find_means_and_deviations. It finds the means and standard deviations of the features for each class.", "means, deviations = iris.find_means_and_deviations()\n\nprint(\"Setosa feature means:\", means[\"setosa\"])\nprint(\"Versicolor mean for first feature:\", means[\"versicolor\"][0])\n\nprint(\"Setosa feature deviations:\", deviations[\"setosa\"])\nprint(\"Virginica deviation for second feature:\",deviations[\"virginica\"][1])", "IRIS VISUALIZATION\nSince we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work.\nWe plot the dataset in a 3D space using matplotlib and the function show_iris from notebook.py. The function takes as input three parameters, i, j and k, which are indicises to the iris features, \"Sepal Length\", \"Sepal Width\", \"Petal Length\" and \"Petal Width\" (0 to 3). By default we show the first three features.", "iris = DataSet(name=\"iris\")\n\nshow_iris()\nshow_iris(0, 1, 3)\nshow_iris(1, 2, 3)", "You can play around with the values to get a good look at the dataset.\nDISTANCE FUNCTIONS\nIn a lot of algorithms (like the k-Nearest Neighbors algorithm), there is a need to compare items, finding how similar or close they are. For that we have many different functions at our disposal. Below are the functions implemented in the module:\nManhattan Distance (manhattan_distance)\nOne of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates x and y. In that grid we have two items, at the squares positioned at (1,2) and (3,4). The difference between their two coordinates is 3-1=2 and 4-2=2. If we sum these up we get 4. 
That means to get from (1,2) to (3,4) we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.", "def manhattan_distance(X, Y):\n return sum([abs(x - y) for x, y in zip(X, Y)])\n\n\ndistance = manhattan_distance([1,2], [3,4])\nprint(\"Manhattan Distance between (1,2) and (3,4) is\", distance)", "Euclidean Distance (euclidean_distance)\nProbably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items.", "def euclidean_distance(X, Y):\n return math.sqrt(sum([(x - y)**2 for x, y in zip(X,Y)]))\n\n\ndistance = euclidean_distance([1,2], [3,4])\nprint(\"Euclidean Distance between (1,2) and (3,4) is\", distance)", "Hamming Distance (hamming_distance)\nThis function counts the number of differences between single elements in two items. For example, if we have two binary strings \"111\" and \"011\" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too.", "def hamming_distance(X, Y):\n return sum(x != y for x, y in zip(X, Y))\n\n\ndistance = hamming_distance(['a','b','c'], ['a','b','b'])\nprint(\"Hamming Distance between 'abc' and 'abb' is\", distance)", "Mean Boolean Error (mean_boolean_error)\nTo calculate this distance, we find the ratio of different elements over all elements of two items. For example, if the two items are (1,2,3) and (1,4,5), the ration of different/all elements is 2/3, since they differ in two out of three elements.", "def mean_boolean_error(X, Y):\n return mean(int(x != y) for x, y in zip(X, Y))\n\n\ndistance = mean_boolean_error([1,2,3], [1,4,5])\nprint(\"Mean Boolean Error Distance between (1,2,3) and (1,4,5) is\", distance)", "Mean Error (mean_error)\nThis function finds the mean difference of single elements between two items. For example, if the two items are (1,0,5) and (3,10,5), their error distance is (3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12. The mean error distance therefore is 12/3=4.", "def mean_error(X, Y):\n return mean([abs(x - y) for x, y in zip(X, Y)])\n\n\ndistance = mean_error([1,0,5], [3,10,5])\nprint(\"Mean Error Distance between (1,0,5) and (3,10,5) is\", distance)", "Mean Square Error (ms_error)\nThis is very similar to the Mean Error, but instead of calculating the difference between elements, we are calculating the square of the differences.", "def ms_error(X, Y):\n return mean([(x - y)**2 for x, y in zip(X, Y)])\n\n\ndistance = ms_error([1,0,5], [3,10,5])\nprint(\"Mean Square Distance between (1,0,5) and (3,10,5) is\", distance)", "Root of Mean Square Error (rms_error)\nThis is the square root of Mean Square Error.", "def rms_error(X, Y):\n return math.sqrt(ms_error(X, Y))\n\n\ndistance = rms_error([1,0,5], [3,10,5])\nprint(\"Root of Mean Error Distance between (1,0,5) and (3,10,5) is\", distance)", "PLURALITY LEARNER CLASSIFIER\nOverview\nThe Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification.\n\nLet's see how the classifier works with the plot above. There are three classes named Class A (orange-colored dots) and Class B (blue-colored dots) and Class C (green-colored dots). 
Every point in this plot has two features (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem.\nThe Plurality Learner will find the class most represented in the plot. Class A has four items, Class B has three and Class C has seven. The most popular class is Class C. Therefore, the item will get classified in Class C, despite the fact that it is closer to the other two classes.\nImplementation\nBelow follows the implementation of the PluralityLearner algorithm:", "psource(PluralityLearner)", "It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in.\nThe function first finds the most popular class in the dataset and then each time we call its \"predict\" function, it returns it. Note that the input (\"example\") does not matter. The function always returns the same class.\nExample\nFor this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset.", "zoo = DataSet(name=\"zoo\")\n\npL = PluralityLearner(zoo)\nprint(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]))", "The output for the above code is \"mammal\", since that is the most popular and common class in the dataset.\nK-NEAREST NEIGHBOURS CLASSIFIER\nOverview\nThe k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on Scholarpedia.\n\nLet's see how kNN works with a simple plot shown in the above picture.\nWe have co-ordinates (we call them features in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of k is arbitrary. k is one of the hyper parameters for kNN algorithm. We choose this number based on our dataset and choosing a particular number is known as hyper parameter tuning/optimising. We learn more about this in coming topics.\nLet's put k = 3. It means you need to find 3-Nearest Neighbors of this red star and classify this new point into the majority class. Observe that smaller circle which contains three points other than test point (red star). As there are two violet points, which form the majority, we predict the class of red star as violet- Class B.\nSimilarly if we put k = 5, you can observe that there are three yellow points, which form the majority. So, we classify our test point as yellow- Class A.\nIn practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one. \nImplementation\nBelow follows the implementation of the kNN algorithm:", "psource(NearestNeighborLearner)", "It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item.\nTo accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from example (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class.\nExample\nWe measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. 
To do that, we write the following:", "iris = DataSet(name=\"iris\")\n\nkNN = NearestNeighborLearner(iris,k=3)\nprint(kNN([5.1,3.0,1.1,0.1]))", "The output of the above code is \"setosa\", which means the flower with the above measurements is of the \"setosa\" species.\nDECISION TREE LEARNER\nOverview\nDecision Trees\nA decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, based on which corresponding branch leading to a child-node is selected. At the leaf node the input is classified based on the class label of this leaf node. The paths from root to leaves represent classification rules based on which leaf nodes are assigned class labels.\n\nDecision Tree Learning\nDecision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the \"best split\". These generally measure the homogeneity of the target variable within the subsets.\nGini Impurity\nGini impurity of a set is the probability of a randomly chosen element to be incorrectly labeled if it was randomly labeled according to the distribution of labels in the set.\n$$I_G(p) = \\sum{p_i(1 - p_i)} = 1 - \\sum{p_i^2}$$\nWe select a split which minimizes the Gini impurity in child nodes.\nInformation Gain\nInformation gain is based on the concept of entropy from information theory. Entropy is defined as:\n$$H(p) = -\\sum{p_i \\log_2{p_i}}$$\nInformation Gain is difference between entropy of the parent and weighted sum of entropy of children. The feature used for splitting is the one which provides the most information gain.\nPseudocode\nYou can view the pseudocode by running the cell below:", "pseudocode(\"Decision Tree Learning\")", "Implementation\nThe nodes of the tree constructed by our learning algorithm are stored using either DecisionFork or DecisionLeaf based on whether they are a parent node or a leaf node respectively.", "psource(DecisionFork)", "DecisionFork holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with input tuple as an argument returns the next node in the classification path based on the result of the attribute test.", "psource(DecisionLeaf)", "The leaf node stores the class label in result. All input tuples' classification paths end on a DecisionLeaf whose result attribute decide their class.", "psource(DecisionTreeLearner)", "The implementation of DecisionTreeLearner provided in learning.py uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. 
Based on the input, it makes one of the following four choices:\n<ol>\n<li>If the input at the current step has no training data, we return the mode of classes of input data received in the parent step (previous level of recursion).</li>\n<li>If all values in training data belong to the same class, it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.</li>\n<li>If the data has no attributes that can be tested, we return the class with the highest plurality value in the training data.</li>\n<li>We choose the attribute which gives the highest amount of entropy gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.</li>\n</ol>\n\nExample\nWe will now use the Decision Tree Learner to classify a sample with values: 5.1, 3.0, 1.1, 0.1.", "iris = DataSet(name=\"iris\")\n\nDTL = DecisionTreeLearner(iris)\nprint(DTL([5.1, 3.0, 1.1, 0.1]))", "As expected, the Decision Tree learner classifies the sample as \"setosa\", as seen in the previous section.\nRANDOM FOREST LEARNER\nOverview\n \nImage via src\nRandom Forest\nAs the name of the algorithm and the image above suggest, this algorithm creates a forest with a number of trees. The more trees in the forest, the more robust it is: in the random forest algorithm, a higher number of trees gives a more accurate result. The main difference between Random Forest and Decision Trees is that the root node and the feature-node splits are chosen at random. \nLet's see how the Random Forest algorithm works: \nThe Random Forest algorithm works in two steps: first the creation of the random forest and then the prediction. Let's first see the creation: \nThe first step in the creation is to randomly select 'm' features out of the total 'n' features. From these 'm' features, calculate the node d using the best split point and then split the node into further nodes using the best split. Repeat these steps until 'i' nodes have been reached. Repeat the entire process to build the forest. \nNow, let's see how the prediction works.\nTake the test features and predict the outcome for each randomly created decision tree. Calculate the votes for each prediction, and the prediction which gets the highest number of votes is the final prediction.\nImplementation\nBelow is the implementation of the Random Forest algorithm.", "psource(RandomForest)", "This algorithm creates an ensemble of decision trees using bagging and feature bagging. It takes 'm' examples randomly from the total number of examples and then performs feature bagging with probability p to retain an attribute. All the predictors are built with the DecisionTreeLearner and then a final prediction is made.\nExample\nWe will now use the Random Forest to classify a sample with values: 5.1, 3.0, 1.1, 0.1.", "iris = DataSet(name=\"iris\")\n\nDTL = RandomForest(iris)\nprint(DTL([5.1, 3.0, 1.1, 0.1]))", "As expected, the Random Forest classifies the sample as \"setosa\".\nNAIVE BAYES LEARNER\nOverview\nTheory of Probabilities\nThe Naive Bayes algorithm is a probabilistic classifier, making use of Bayes' Theorem. 
The theorem states that the conditional probability of A given B equals the conditional probability of B given A multiplied by the probability of A, divided by the probability of B.\n$$P(A|B) = \\dfrac{P(B|A)*P(A)}{P(B)}$$\nFrom the theory of Probabilities we have the Multiplication Rule, if the events X are independent the following is true:\n$$P(X_{1} \\cap X_{2} \\cap ... \\cap X_{n}) = P(X_{1})P(X_{2})...*P(X_{n})$$\nFor conditional probabilities this becomes:\n$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)P(X_{2}|Y)...*P(X_{n}|Y)$$\nClassifying an Item\nHow can we use the above to classify an item though?\nWe have a dataset with a set of classes (C) and we want to classify an item with a set of features (F). Essentially what we want to do is predict the class of an item given the features.\nFor a specific class, Class, we will find the conditional probability given the item features:\n$$P(Class|F) = \\dfrac{P(F|Class)*P(Class)}{P(F)}$$\nWe will do this for every class and we will pick the maximum. This will be the class the item is classified in.\nThe features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:\n$$P(Class|F) = \\dfrac{P(Class)P(F_{1}|Class)P(F_{2}|Class)...P(F_{n}|Class)}{P(F_{1})P(F_{2})...*P(F_{n})}$$\nThe calculation of the conditional probability then depends on the calculation of the following:\na) The probability of Class in the dataset.\nb) The conditional probability of each feature occurring in an item classified in Class.\nc) The probabilities of each individual feature.\nFor a), we will count how many times Class occurs in the dataset (aka how many items are classified in a particular class).\nFor b), if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy. In fact, it can be proven that any distribution tends to the Gaussian the larger the population gets (see Central Limit Theorem).\nNOTE: If the values are continuous but use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0 and 5.1', with the discrete approach they will be two completely different values, despite being so close. Second, if we are trying to classify an item with a feature value of '5.15', if the value does not appear for the feature, its probability will be 0. This might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.\nThe last one, c), is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? 
Most of the time it is not, since there can be miscellaneous differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).\nSo as we cannot calculate the feature value probabilities, what are we going to do?\nLet's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, A and B, we want to know which one is greater:\n$$\\dfrac{P(F|A)P(A)}{P(F)} vs. \\dfrac{P(F|B)P(B)}{P(F)}$$\nWait, P(F) is the same for both the classes! In fact, it is the same for every combination of classes. That is because P(F) does not depend on a class, thus being independent of the classes.\nSo, for c), we actually don't need to calculate it at all.\nWrapping It Up\nClassifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is something very desirable and computationally delicious.\nRemember though that all the above are true because we made the assumption that the features are independent. In most real-world cases that is not true though. Is that an issue here? Fret not, for the the algorithm is very efficient even with that assumption. That is why the algorithm is called Naive Bayes Classifier. We (naively) assume that the features are independent to make computations easier.\nImplementation\nThe implementation of the Naive Bayes Classifier is split in two; Learning and Simple. The learning classifier takes as input a dataset and learns the needed distributions from that. It is itself split into two, for discrete and continuous features. The simple classifier takes as input not a dataset, but already calculated distributions (a dictionary of CountingProbDist objects).\nDiscrete\nThe implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a CountinProbDist object.\nWith the below code you can see the probabilities of the class \"Setosa\" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, \"Tall\", \"3\", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution.", "dataset = iris\n\ntarget_vals = dataset.values[dataset.target]\ntarget_dist = CountingProbDist(target_vals)\nattr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])\n for gv in target_vals\n for attr in dataset.inputs}\nfor example in dataset.examples:\n targetval = example[dataset.target]\n target_dist.add(targetval)\n for attr in dataset.inputs:\n attr_dists[targetval, attr].add(example[attr])\n\n\nprint(target_dist['setosa'])\nprint(attr_dists['setosa', 0][5.0])", "First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of CountingProbDist objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilites.\nHaving calculated the different probabilities, we will move on to the predicting function. 
It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing, with the probability of each feature value appearing in the class. It will return the max result.", "def predict(example):\n def class_probability(targetval):\n return (target_dist[targetval] *\n product(attr_dists[targetval, attr][example[attr]]\n for attr in dataset.inputs))\n return argmax(target_vals, key=class_probability)\n\n\nprint(predict([5, 3, 1, 0.1]))", "You can view the complete code by executing the next line:", "psource(NaiveBayesDiscrete)", "Continuous\nIn the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the find_means_and_deviations Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach.", "means, deviations = dataset.find_means_and_deviations()\n\ntarget_vals = dataset.values[dataset.target]\ntarget_dist = CountingProbDist(target_vals)\n\n\nprint(means[\"setosa\"])\nprint(deviations[\"versicolor\"])", "You can see the means of the features for the \"Setosa\" class and the deviations for \"Versicolor\".\nThe prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class.\nSince we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value.", "def predict(example):\n def class_probability(targetval):\n prob = target_dist[targetval]\n for attr in dataset.inputs:\n prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])\n return prob\n\n return argmax(target_vals, key=class_probability)\n\n\nprint(predict([5, 3, 1, 0.1]))", "The complete code of the continuous algorithm:", "psource(NaiveBayesContinuous)", "Simple\nThe simple classifier (chosen with the argument simple) does not learn from a dataset, instead it takes as input a dictionary of already calculated CountingProbDist objects and returns a predictor function. The dictionary is in the following form: (Class Name, Class Probability): CountingProbDist Object.\nEach class has its own probability distribution. The classifier given a list of features calculates the probability of the input for each class and returns the max. 
The only pre-processing work is to create dictionaries for the distribution of classes (named targets) and attributes/features.\nThe complete code for the simple classifier:", "psource(NaiveBayesSimple)", "This classifier is useful when you already have calculated the distributions and you need to predict future items.\nExamples\nWe will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:", "nBD = NaiveBayesLearner(iris, continuous=False)\nprint(\"Discrete Classifier\")\nprint(nBD([5, 3, 1, 0.1]))\nprint(nBD([6, 5, 3, 1.5]))\nprint(nBD([7, 3, 6.5, 2]))\n\n\nnBC = NaiveBayesLearner(iris, continuous=True)\nprint(\"\\nContinuous Classifier\")\nprint(nBC([5, 3, 1, 0.1]))\nprint(nBC([6, 5, 3, 1.5]))\nprint(nBC([7, 3, 6.5, 2]))", "Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.\nLet's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came.\nSince we know the probability distribution of the letters for each bag, we can use the naive bayes classifier to make our prediction.", "bag1 = 'a'*50 + 'b'*30 + 'c'*15\ndist1 = CountingProbDist(bag1)\nbag2 = 'a'*30 + 'b'*45 + 'c'*20\ndist2 = CountingProbDist(bag2)\nbag3 = 'a'*20 + 'b'*20 + 'c'*35\ndist3 = CountingProbDist(bag3)", "Now that we have the CountingProbDist objects for each bag/class, we will create the dictionary. We assume that it is equally probable that we will pick from any bag.", "dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}\nnBS = NaiveBayesLearner(dist, simple=True)", "Now we can start making predictions:", "print(nBS('aab')) # We can handle strings\nprint(nBS(['b', 'b'])) # And lists!\nprint(nBS('ccbcc'))", "The results make intuitive sense. The first bag has a high amount of 'a's, the second has a high amount of 'b's and the third has a high amount of 'c's. The classifier seems to confirm this intuition.\nNote that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the simple option on the NaiveBayesLearner overrides the continuous argument. NaiveBayesLearner(d, simple=True, continuous=False) just creates a simple classifier.\nPERCEPTRON CLASSIFIER\nOverview\nThe Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network.\nIts input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index.\nNote that in classification problems each node represents a class. The final classification is the class/node with the max output value.\nBelow you can see a single node/neuron in the outer layer. 
With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g.\n\nImplementation\nFirst, we train (calculate) the weights given a dataset, using the BackPropagationLearner function of learning.py. We then return a function, predict, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.", "psource(PerceptronLearner)", "Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in BackPropagationLearner, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated.\nThat function predict passes the input/example through the network, calculating the dot product of the input and the weights for each node and returns the class with the max dot product.\nExample\nWe will train the Perceptron on the iris dataset. Because the BackPropagationLearner works with integer indexes and not strings, we need to convert class names to integers. Then, we will try and classify the item/flower with measurements of 5, 3, 1, 0.1.", "iris = DataSet(name=\"iris\")\niris.classes_to_numbers()\n\nperceptron = PerceptronLearner(iris)\nprint(perceptron([5, 3, 1, 0.1]))", "The correct output is 0, which means the item belongs in the first class, \"setosa\". Note that the Perceptron algorithm is not perfect and may produce false classifications.\nLINEAR LEARNER\nOverview\nThe Linear Learner is a model that assumes a linear relationship between the input variables x and the single output variable y. More specifically, it assumes that y can be calculated from a linear combination of the input variables x. The Linear Learner is quite a simple model, as its representation is just a linear equation. \nThe linear equation assigns one scalar factor to each input value or column, called a coefficient or weight. One additional coefficient is also added, giving an additional degree of freedom; it is often called the intercept or the bias coefficient. \nFor example: y = ax1 + bx2 + c. \nImplementation\nBelow is the implementation of the Linear Learner.", "psource(LinearLearner)", "This algorithm first assigns some random weights to the input variables and then, based on the calculated error, updates the weight for each variable. Finally the prediction is made with the updated weights. \nExample\nWe will now use the Linear Learner to classify a sample with values: 5.1, 3.0, 1.1, 0.1.", "iris = DataSet(name=\"iris\")\niris.classes_to_numbers()\n\nlinear_learner = LinearLearner(iris)\nprint(linear_learner([5, 3, 1, 0.1]))", "ENSEMBLE LEARNER\nOverview\nEnsemble Learning improves the performance of our model by combining several learners. It improves the stability and predictive power of the model. Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance, bias, or improve predictions. \n\nSome commonly used Ensemble Learning techniques are : \n\n\nBagging : Bagging trains similar learners on small sample populations and then takes the mean of all the predictions. It helps us to reduce variance error.\n\n\nBoosting : Boosting is an iterative technique which adjusts the weight of an observation based on the last classification. 
If an observation was classified incorrectly, it tries to increase the weight of this observation and vice versa. It helps us to reduce bias error.\n\n\nStacking : This is a very interesting way of combining models. Here we use a learner to combine output from different learners. It can either decrease bias or variance error depending on the learners we use.\n\n\nImplementation\nBelow mentioned is the implementation of Ensemble Learner.", "psource(EnsembleLearner)", "This algorithm takes input as a list of learning algorithms, have them vote and then finally returns the predicted result.\nLEARNER EVALUATION\nIn this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one.", "iris = DataSet(name=\"iris\")", "Naive Bayes\nFirst up we have the Naive Bayes algorithm. First we will test how well the Discrete Naive Bayes works, and then how the Continuous fares.", "nBD = NaiveBayesLearner(iris, continuous=False)\nprint(\"Error ratio for Discrete:\", err_ratio(nBD, iris))\n\nnBC = NaiveBayesLearner(iris, continuous=True)\nprint(\"Error ratio for Continuous:\", err_ratio(nBC, iris))", "The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm.\nk-Nearest Neighbors\nNow we will take a look at kNN, for different values of k. Note that k should have odd values, to break any ties between two classes.", "kNN_1 = NearestNeighborLearner(iris, k=1)\nkNN_3 = NearestNeighborLearner(iris, k=3)\nkNN_5 = NearestNeighborLearner(iris, k=5)\nkNN_7 = NearestNeighborLearner(iris, k=7)\n\nprint(\"Error ratio for k=1:\", err_ratio(kNN_1, iris))\nprint(\"Error ratio for k=3:\", err_ratio(kNN_3, iris))\nprint(\"Error ratio for k=5:\", err_ratio(kNN_5, iris))\nprint(\"Error ratio for k=7:\", err_ratio(kNN_7, iris))", "Notice how the error became larger and larger as k increased. This is generally the case with datasets where classes are spaced out, as is the case with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for k suffices.\nAlso note that since the training set is also the testing set, for k equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself.\nPerceptron\nFor the Perceptron, we first need to convert class names to integers. Let's see how it performs in the dataset.", "iris2 = DataSet(name=\"iris\")\niris2.classes_to_numbers()\n\nperceptron = PerceptronLearner(iris2)\nprint(\"Error ratio for Perceptron:\", err_ratio(perceptron, iris2))", "The Perceptron didn't fare very well mainly because the dataset is not linearly separated. On simpler datasets the algorithm performs much better, but unfortunately such datasets are rare in real life scenarios.\nAdaBoost\nOverview\nAdaBoost is an algorithm which uses ensemble learning. In ensemble learning the hypotheses in the collection, or ensemble, vote for what the output should be and the output with the majority votes is selected as the final answer.\nAdaBoost algorithm, as mentioned in the book, works with a weighted training set and weak learners (classifiers that have about 50%+epsilon accuracy i.e slightly better than random guessing). It manipulates the weights attached to the the examples that are showed to it. 
Importance is given to the examples with higher weights.\nAll the examples start with equal weights and a hypothesis is generated using these examples. The weights of incorrectly classified examples are increased so that they can be classified correctly by the next hypothesis, while the weights of correctly classified examples are reduced. This process is repeated K times (here K is an input to the algorithm) and hence, K hypotheses are generated.\nThese K hypotheses are also assigned weights according to their performance on the weighted training set. The final ensemble hypothesis is the weighted-majority combination of these K hypotheses.\nThe speciality of AdaBoost is that by using weak learners and a sufficiently large K, a highly accurate classifier can be learned irrespective of the complexity of the function being learned or the dullness of the hypothesis space.\nImplementation\nAs seen in the previous section, the PerceptronLearner does not perform that well on the iris dataset. We'll use perceptron as the learner for the AdaBoost algorithm and try to increase the accuracy. \nLet's first see what AdaBoost is exactly:", "psource(AdaBoost)", "AdaBoost takes as inputs L and K, where L is the learner and K is the number of hypotheses to be generated. The learner L takes in as inputs a dataset and the weights associated with the examples in the dataset. But the PerceptronLearner does not handle weights and only takes a dataset as its input.\nTo remedy that, we will give as input to the PerceptronLearner a modified dataset in which the examples will be repeated according to the weights associated to them. Intuitively, what this will do is force the learner to repeatedly learn the same example again and again until it can classify it correctly. \nTo convert PerceptronLearner so that it can take weights as input too, we will have to pass it through the WeightedLearner function.", "psource(WeightedLearner)", "The WeightedLearner function will then call the PerceptronLearner, during each iteration, with the modified dataset which contains the examples according to the weights associated with them.\nExample\nWe will pass the PerceptronLearner through the WeightedLearner function. Then we will create an AdaboostLearner classifier with the number of hypotheses, K, equal to 5.", "WeightedPerceptron = WeightedLearner(PerceptronLearner)\nAdaboostLearner = AdaBoost(WeightedPerceptron, 5)\n\niris2 = DataSet(name=\"iris\")\niris2.classes_to_numbers()\n\nadaboost = AdaboostLearner(iris2)\n\nadaboost([5, 3, 1, 0.1])", "That is the correct answer. Let's check the error rate of adaboost with perceptron.", "print(\"Error ratio for adaboost: \", err_ratio(adaboost, iris2))", "It reduced the error rate considerably. Unlike the PerceptronLearner, AdaBoost was able to learn the complexity in the iris dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/pyquickhelper
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
mit
[ "2A.eco - Python et la logique SQL\nSQL permet de créer des tables, de rechercher, d'ajouter, de modifier ou de supprimer des données dans les bases de données. Un peu ce que vous ferez bientôt tous les jours. C'est un langage de management de données, pas de nettoyage, d’analyse ou de statistiques avancées.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Les instructions SQL s'écrivent d'une manière qui ressemble à celle de phrases ordinaires en anglais. Cette ressemblance voulue vise à faciliter l'apprentissage et la lecture. Il est néanmoins important de respecter un ordre pour les différentes instructions.\nDans ce TD, nous allons écrire des commandes en SQL via Python.\nPour plus de précisions sur SQL et les commandes qui existent, rendez-vous là SQL, PRINCIPES DE BASE.\nSe connecter à une base de données\nA la différence des tables qu'on utilise habituellement, la base de données n'est pas visible directement en ouvrant Excel ou un éditeur de texte. Pour avoir une vue de ce que contient la base de données, il est nécessaire d'avoir un autre type de logiciel.\nPour le TD, nous vous recommandans d'installer SQLLiteSpy (disponible à cette adresse SqliteSpy ou sqlite_bro si vous voulez voir à quoi ressemble les données avant de les utiliser avec Python.", "import sqlite3\n# on va se connecter à une base de données SQL vide\n# SQLite stocke la BDD dans un simple fichier\nfilepath = \"./DataBase.db\"\nopen(filepath, 'w').close() #crée un fichier vide\nCreateDataBase = sqlite3.connect(filepath)\n\nQueryCurs = CreateDataBase.cursor()", "La méthode cursor est un peu particulière : \nIl s'agit d'une sorte de tampon mémoire intermédiaire, destiné à mémoriser temporairement les données en cours de traitement, ainsi que les opérations que vous effectuez sur elles, avant leur transfert définitif dans la base de données. Tant que la méthode commit n'aura pas été appelée, aucun ordre ne sera appliqué à la base de données.\n\nA présent que nous sommes connectés à la base de données, on va créer une table qui contient plusieurs variables de format différents\n- ID sera la clé primaire de la base\n- Nom, Rue, Ville, Pays seront du text\n- Prix sera un réel", "# On définit une fonction de création de table\ndef CreateTable(nom_bdd):\n QueryCurs.execute('''CREATE TABLE IF NOT EXISTS ''' + nom_bdd + '''\n (id INTEGER PRIMARY KEY, Name TEXT,City TEXT, Country TEXT, Price REAL)''')\n\n# On définit une fonction qui permet d'ajouter des observations dans la table \ndef AddEntry(nom_bdd, Nom,Ville,Pays,Prix):\n QueryCurs.execute('''INSERT INTO ''' + nom_bdd + ''' \n (Name,City,Country,Price) VALUES (?,?,?,?)''',(Nom,Ville,Pays,Prix))\n \ndef AddEntries(nom_bdd, data):\n \"\"\" data : list with (Name,City,Country,Price) tuples to insert\n \"\"\"\n QueryCurs.executemany('''INSERT INTO ''' + nom_bdd + ''' \n (Name,City,Country,Price) VALUES (?,?,?,?)''',data)\n \n \n### On va créer la table clients\n\nCreateTable('Clients')\n\nAddEntry('Clients','Toto','Munich','Germany',5.2)\nAddEntries('Clients',\n [('Bill','Berlin','Germany',2.3),\n ('Tom','Paris','France',7.8),\n ('Marvin','Miami','USA',15.2),\n ('Anna','Paris','USA',7.8)])\n\n# on va \"commit\" c'est à dire qu'on va valider la transaction. 
\n# > on va envoyer ses modifications locales vers le référentiel central - la base de données SQL\n\nCreateDataBase.commit()", "Voir la table\nPour voir ce qu'il y a dans la table, on utilise un premier Select où on demande à voir toute la table", "QueryCurs.execute('SELECT * FROM Clients')\nValues = QueryCurs.fetchall()\nprint(Values)", "Passer en pandas\nRien de plus simple : plusieurs manières de faire", "import pandas as pd\n# méthode SQL Query\ndf1 = pd.read_sql_query('SELECT * FROM Clients', CreateDataBase)\nprint(\"En utilisant la méthode read_sql_query \\n\", df1.head(), \"\\n\")\n\n\n#méthode DataFrame en utilisant la liste issue de .fetchall()\ndf2 = pd.DataFrame(Values, columns=['ID','Name','City','Country','Price'])\nprint(\"En passant par une DataFrame \\n\", df2.head())", "Comparaison SQL et pandas\nSELECT\nEn SQL, la sélection se fait en utilisant des virgules ou * si on veut sélectionner toutes les colonnes", "# en SQL\nQueryCurs.execute('SELECT ID,City FROM Clients LIMIT 2')\nValues = QueryCurs.fetchall()\nprint(Values)", "En pandas, la sélection de colonnes se fait en donnant une liste", "#sur la table\ndf2[['ID','City']].head(2)", "WHERE\nEn SQL, on utilise WHERE pour filtrer les tables selon certaines conditions", "QueryCurs.execute('SELECT * FROM Clients WHERE City==\"Paris\"')\nprint(QueryCurs.fetchall())", "Avec Pandas, on peut utiliser plusieurs manières de faire : \n - avec un booléen\n - en utilisant la méthode 'query'", "df2[df2['City'] == \"Paris\"]\n\ndf2.query('City == \"Paris\"')", "Pour mettre plusieurs conditions, on utilise : \n- & en Python, AND en SQL\n- | en python, OR en SQL", "QueryCurs.execute('SELECT * FROM Clients WHERE City==\"Paris\" AND Country == \"USA\"')\nprint(QueryCurs.fetchall())\n\ndf2.query('City == \"Paris\" & Country == \"USA\"')\n\ndf2[(df2['City'] == \"Paris\") & (df2['Country'] == \"USA\")]", "GROUP BY\nEn pandas, l'opération GROUP BY de SQL s'effectue avec une méthode similaire : groupby \ngroupby sert à regrouper des observations en groupes selon les modalités de certaines variables en appliquant une fonction d'aggrégation sur d'autres variables.", "QueryCurs.execute('SELECT Country, count(*) FROM Clients GROUP BY Country')\nprint(QueryCurs.fetchall())", "Attention, en pandas, la fonction count ne fait pas la même chose qu'en SQL. 
count applies to every column and counts all non-null observations.", "df2.groupby('Country').count()", "To get the same result as in SQL, use the size method.", "df2.groupby('Country').size()", "More sophisticated functions can also be applied within a groupby", "QueryCurs.execute('SELECT Country, AVG(Price), count(*) FROM Clients GROUP BY Country')\nprint(QueryCurs.fetchall())", "With pandas, the usual numpy functions can be called", "import numpy as np\ndf2.groupby('Country').agg({'Price': np.mean, 'Country': np.size})", "Or lambda functions can be used.", "# for instance, compute the average price and multiply it by 2\ndf2.groupby('Country')['Price'].apply(lambda x: 2*x.mean())\n\nQueryCurs.execute('SELECT Country, 2*AVG(Price) FROM Clients GROUP BY Country').fetchall()\n\nQueryCurs.execute('SELECT * FROM Clients WHERE Country == \"Germany\"')\nprint(QueryCurs.fetchall())\nQueryCurs.execute('SELECT * FROM Clients WHERE City==\"Berlin\" AND Country == \"Germany\"')\nprint(QueryCurs.fetchall())\nQueryCurs.execute('SELECT * FROM Clients WHERE Price BETWEEN 7 AND 20')\nprint(QueryCurs.fetchall())", "Saving an SQL table in another format\nWe use the csv package and the 'w' option, for 'write'. \nWe create the \"writer\" object, which comes from the csv package.\nThis object has two methods: \n- writerow for the column names: a list\n- writerows for the rows: a collection of lists", "data = QueryCurs.execute('SELECT * FROM Clients')\n\nimport csv\n\nwith open('./output.csv', 'w') as file:\n writer = csv.writer(file)\n writer.writerow(['id','Name','City','Country','Price'])\n writer.writerows(data)", "We can also go through a pandas DataFrame and use .to_csv()", "QueryCurs.execute('''DROP TABLE Clients''')\nQueryCurs.close()", "Exercise\nIn this exercise, we will work with the tables of the World database. \nFirst of all, download the file and connect to the database using sqlite3 and connect.\nGet familiar with the database:\n- what are its tables? \n- what variables do these tables contain? \n- use the PRAGMA command to get information about the tables\nQuestion 1\n\nWhich 10 countries have the most languages?\nWhich language is present in the largest number of countries?\n\nQuestion 2\n\nWhat are the different forms of government across the countries of the world?\nWhich 3 forms of government have the largest total population?\n\nQuestion 3\n\n\nHow many countries have Elizabeth II at the head of their government?\n\n\nWhat proportion of Her Majesty's subjects do not speak English?\n\n78% or 83%?\n\n\n\nQuestion 4 - moving on to pandas\nBuild a DataFrame containing the following information for each country:\n- the name\n- the country code\n- the number of languages spoken\n- the number of official languages\n- the population\n- the GNP\n- the life expectancy\nHint: use the pd.read_sql_query command\nWhat does the correlation matrix of these variables tell you?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
weikang9009/pysal
notebooks/model/spvcm/using_the_sampler.ipynb
bsd-3-clause
[ "Using the sampler\nspvcm is a generic gibbs sampling framework for spatially-correlated variance components models. The current supported models are:\n\nspvcm.both contains specifications with correlated errors in both levels, with the first statement se/sma describing the lower level and the second statement se/sma describing the upper level. In addition, MVCM, the multilevel variance components model with no spatial correlation, is in the both namespace. \nspvcm.lower contains two specifications, se/sma, that can be used for a variance components model with correlated lower-level errors.\nspvcm.upper contains two specifications, se/sma that can be used for a variance components model with correlated upper-level errors. \n\nSpecification\nThese derive from a variance components specification: \n$$ Y \\sim \\mathcal{N}(X\\beta, \\Psi_1(\\lambda, \\sigma^2) + \\Delta\\Psi_2(\\rho, \\tau^2)\\Delta') $$\nWhere:\n1. $\\beta$, called Betas in code, is the marginal effect parameter. In this implementation, any region-level covariates $Z$ get appended to the end of $X$. So, if $X$ is $n \\times p$ ($n$ observations of $p$ covariates) and $Z$ is $J \\times p'$ ($p'$ covariates observed for $J$ regions), then the model's $X$ matrix is $n \\times (p + p')$ and $\\beta$ is $p + p' \\times 1$. \n2. $\\Psi_1$ is the covariance function for the response-level model. In the software, a separable covariance is assumed, so that $\\Psi_1(\\rho, \\sigma^2) = \\Psi_1(\\rho) * I \\sigma^2)$, where $I$ is the $n \\times n$ covariance matrix. Thus, $\\rho$ is the spatial autoregressive parameter and $\\sigma^2$ is the variance parameter. In the software, $\\Psi_1$ takes any of the following forms:\n - Spatial Error (SE): $\\Psi_1(\\rho) = [(I - \\rho \\mathbf{W})'(I - \\rho \\mathbf{W})]^{-1} \\sigma^2$\n - Spatial Moving Average (SMA): $\\Psi_1(\\rho) = (I + \\rho \\mathbf{W})(I + \\lambda \\mathbf{W})'$\n - Identity: $\\Psi_1(\\rho) = I$\n2. $\\Psi_2$ is the region-level covariance function, with region-level autoregressive parameter $\\lambda$ and region-level variance $\\tau^2$. It has the same potential forms as $\\Psi_1$. \n3. $\\alpha$, called Alphas in code, is the region-level random effect. In a variance components model, this is interpreted as a random effect for the upper-level. For a Varying-intercept format, this random component should be added to a region-level fixed effect to provide the varying intercept. This may also make it more difficult to identify the spatial parameter. \nSoftare implementation\nAll of the possible combinations of Spatial Moving Average and Spatial Error processes are contained in the following classes. I will walk through estimating one below, and talk about the various features of the package. \nFirst, the API of the package is defined by the spvcm.api submodule. 
To load it, use from pysal.model import spvcm.api as spvcm:", "from pysal.model import spvcm as spvcm #package API\nspvcm.both_levels.Generic # abstract customizable class, ignores rho/lambda, equivalent to MVCM\nspvcm.both_levels.MVCM # no spatial effect\nspvcm.both_levels.SESE # both spatial error (SE)\nspvcm.both_levels.SESMA # response-level SE, region-level spatial moving average\nspvcm.both_levels.SMASE # response-level SMA, region-level SE\nspvcm.both_levels.SMASMA # both levels SMA\nspvcm.upper_level.Upper_SE # response-level uncorrelated, region-level SE\nspvcm.upper_level.Upper_SMA # response-level uncorrelated, region-level SMA\nspvcm.lower_level.Lower_SE # response-level SE, region-level uncorrelated\nspvcm.lower_level.Lower_SMA # response-level SMA, region-level uncorrelated ", "Depending on the structure of the model, you need at least:\n- X, data at the response (lower) level\n- Y, system response in the lower level\n- membership or Delta, the membership vector relating each observation to its group or the \"dummy variable\" matrix encoding the same information. \nThen, if spatial correlation is desired, M is the \"upper-level\" weights matrix and W the lower-level weights matrix. \nAny upper-level data should be passed in $Z$, and have $J$ rows. To fit a varying-intercept model, include an identity matrix in $Z$. You can include state-level and response-level intercept terms simultaneously. \nFinally, there are many configuration and tuning options that can be passed in at the start, or assigned after the model is initialized. \nFirst, though, let's set up some data for a model on southern counties predicting HR90, the Homicide Rate in the US South in 1990, using the the percent of the labor force that is unemployed (UE90), a principal component expressing the population structure (PS90), and a principal component expressing resource deprivation. \nWe will also use the state-level average percentage of families below the poverty line and the average Gini coefficient at the state level for a $Z$ variable.", "#seaborn is required for the traceplots\nimport pysal as ps\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport geopandas as gpd\n%matplotlib inline", "Reading in the data, we'll extract these values we need from the dataframe.", "data = ps.pdio.read_files(ps.examples.get_path('south.shp'))\ngdf = gpd.read_file(ps.examples.get_path('south.shp'))\ndata = data[data.STATE_NAME != 'District of Columbia']\nX = data[['UE90', 'PS90', 'RD90']].values\nN = X.shape[0]\nZ = data.groupby('STATE_NAME')[['FP89', 'GI89']].mean().values\nJ = Z.shape[0]\n\nY = data.HR90.values.reshape(-1,1)", "Then, we'll construct some queen contiguity weights from the files to show how to run a model.", "W2 = ps.queen_from_shapefile(ps.examples.get_path('us48.shp'), \n idVariable='STATE_NAME')\nW2 = ps.w_subset(W2, ids=data.STATE_NAME.unique().tolist()) #only keep what's in the data\nW1 = ps.queen_from_shapefile(ps.examples.get_path('south.shp'),\n idVariable='FIPS')\nW1 = ps.w_subset(W1, ids=data.FIPS.tolist()) #again, only keep what's in the data\n\nW1.transform = 'r'\nW2.transform = 'r'", "With the data, upper-level weights, and lower-level weights, we can construct a membership vector or a dummy data matrix. 
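(The two encode the same information: the dummy matrix $\\Delta$ is $n \\times J$ with $\\Delta_{ij} = 1$ exactly when observation $i$ belongs to group $j$, so each row contains a single 1.)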
For now, I'll create the membership vector.", "membership = data.STATE_NAME.apply(lambda x: W2.id_order.index(x)).values", "But, we could also build the dummy variable matrix using pandas, if we have a suitable categorical variable:", "Delta_frame = pd.get_dummies(data.STATE_NAME)\nDelta = Delta_frame.values", "Every call to the sampler is of the following form:\nsampler(Y, X, W, M, Z, membership, Delta, n_samples, **configuration)\nWhere W, M are passed if appropriate, Z is passed if used, and only one of membership or Delta is required. In the end, Z is appended to X, so the effects pertaining to the upper level will be at the tail end of the $\\beta$ effects vector. If both Delta and membership are supplied, they're verified against each other to ensure that they agree before they are used in the model. \nFor all models, the membership vector or an equivalent dummy variable matrix is required. For models with correlation in the upper level, only the upper-level weights matrix $\\mathbf{M}$ is needed. For lower level models, the lower-level weights matrix $\\mathbf{W}$ is required. For models with correlation in both levels, both $\\mathbf{W}$ and $\\mathbf{M}$ are required. \nEvery sampler uses, either in whole or in part, spvcm.both.generic, which implements the full generic sampler discussed in the working paper. For efficiency, the upper-level samplers modify this runtime to avoid processing the full lower-level covariance matrix. \nLike many of the R packages dedicated to bayesian models, configuration occurs by passing the correct dictionary to the model call. In addition, you can \"setup\" the model, configure it, and then run samples in separate steps. \nThe most common way to call the sampler is something like:", "vcsma = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n n_samples=5000,\n configs=dict(tuning=1000, \n adapt_step=1.01))", "This model, spvcm.upper_level.Upper_SMA, is a variance components/varying intercept model with a state-level SMA-correlated error. \nThus, there are only five parameters in this model, since $\\rho$, the lower-level autoregressive parameter, is constrained to zero:", "vcsma.trace.varnames", "The results and state of the sampler are stored within the vcsma object. I'll step through the most important parts of this object. \ntrace\nThe quickest way to get information out of the model is via the trace object. This is where the results of the tracked parameters are stored each iteration. Any variable in the sampler state can be added to the tracked params. Trace objects are essentially dictionaries with the keys being the name of the tracked parameter and the values being a list of each iteration's sampler output.", "vcsma.trace.varnames", "In this case, Lambda is the upper-level moving average parameter, Alphas is the vector of correlated group-level random effects, Tau2 is the upper-level variance, Betas are the marginal effects, and Sigma2 is the lower-level error variance.\nI've written two helper functions for working with traces. 
First is to just dump all the output into a pandas dataframe, which makes it super easy to do work on the samples, or write them out to csv and assess convergence in R's coda package.", "trace_dataframe = vcsma.trace.to_df()", "The dataframe will have columns containing the elements of the parameters and each row is a single iteration of the sampler:", "trace_dataframe.head()", "You can write this out to a csv or analyze it in memory like a typical pandas dataframe:", "trace_dataframe.mean()", "The second is a method to plot the traces:", "fig, ax = vcsma.trace.plot()\nplt.show()", "The trace object can be sliced by (chain, parameter, index) tuples, or any subset thereof.", "vcsma.trace['Lambda',-4:] #last 4 draws of lambda\n\nvcsma.trace[['Tau2', 'Sigma2'], 0:2] #the first 2 variance parameters", "We only ran a single chain, so the first index is assumed to be zero. You can run more than one chain in parallel, using the builtin python multiprocessing library:", "vcsma_p = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n #run 3 chains\n n_samples=5000, n_jobs=3, \n configs=dict(tuning=500, \n adapt_step=1.01))\n\nvcsma_p.trace[0, 'Betas', -1] #the last draw of Beta on the first chain. \n\nvcsma_p.trace[1, 'Betas', -1] #the last draw of Beta on the second chain", "And the chain plotting also works for multi-chain traces. In addition, there are quite a few traceplot options, and all the plots are returned by the methods as matplotlib objects, so they can also be saved using plt.savefig().", "vcsma_p.trace.plot(burn=1000, thin=10)\nplt.suptitle('SMA of Homicide Rate in Southern US Counties', y=0, fontsize=20)\n#plt.savefig('trace.png') #saves to a file called \"trace.png\"\nplt.show()\n\nvcsma_p.trace.plot(burn=-100, varnames='Lambda') #A negative burn-in works like negative indexing in Python & R \nplt.suptitle('First 100 iterations of $\\lambda$', fontsize=20, y=.02)\nplt.show() #so this plots Lambda in the first 100 iterations. ", "To get stuff like posterior quantiles, you can use the attendant pandas dataframe functionality, like describe.", "df = vcsma.trace.to_df()\n\ndf.describe()", "There is also a trace.summarize function that will compute various things contained in spvcm.diagnostics on the chain. It takes a while for large chains, because the statsmodels.tsa.AR estimator is much slower than the ar estimator in R. If you have rpy2 installed and CODA installed in your R environment, I attempt to use R directly.", "vcsma.trace.summarize()", "So, 5000 iterations, but many parameters have an effective sample size that's much less than this. There's debate about whether it's necessary to thin these samples in accordance with the effective size, and I think you should thin your sample to the effective size and see if it affects your HPD/Standard Errors. \nThe existing python packages for MCMC diagnostics were incorrect. So, I've implemented many of the diagnostics from CODA, and have verified that the diagnostics comport with CODA diagnostics. One can also use numpy & statsmodels functions. I'll show some types of analysis.", "from statsmodels.api import tsa\n#if you don't have it, try removing the comment and:\n#! pip install statsmodels", "For example, a plot of the partial autocorrelation in $\\lambda$, the upper-level spatial moving average parameter, over the last half of the chain is:", "plt.plot(tsa.pacf(vcsma.trace['Lambda', -2500:]))", "So, the chain is close-to-first order:", "tsa.pacf(df.Lambda)[0:3]", "We could do this for many parameters, too. 
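Since the chain is roughly first-order autoregressive, a quick back-of-the-envelope effective sample size is $n(1-\\phi)/(1+\\phi)$, with $\\phi$ the lag-1 autocorrelation; using the statsmodels functions already imported, something like phi = tsa.acf(df.Lambda.values)[1]; len(df) * (1 - phi) / (1 + phi) gives a rough figure (an approximation, not one of the package's built-in diagnostics).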
An Autocorrelation/Partial Autocorrelation plot can be made of the marginal effects by:", "betas = [c for c in df.columns if c.startswith('Beta')]\nf,ax = plt.subplots(len(betas), 2, figsize=(10,8))\nfor i, col in enumerate(betas):\n ax[i,0].plot(tsa.acf(df[col].values))\n ax[i,1].plot(tsa.pacf(df[col].values)) #the pacf plots take a while\n ax[i,0].set_title(col +' (ACF)')\n ax[i,1].set_title('(PACF)')\nf.tight_layout()\nplt.show()", "As far as the builtin diagnostics for convergence and simulation quality go, the diagnostics module exposes a few things:\nGeweke statistics for differences in means between chain components:", "gstats = spvcm.diagnostics.geweke(vcsma, varnames='Tau2') #takes a while\nprint(gstats)", "Typically, this means the chain is converged at the given \"bin\" count if the line stays within $\\pm2$. The geweke statistic is a test of differences in means between the given chunk of the chain and the remaining chain. If it's outside of +/- 2 in the early part of the chain, you should discard observations early in the chain. If you get extreme values of these statistics throughout, you need to keep running the chain.", "plt.plot(gstats[0]['Tau2'][:-1])", "We can also compute Monte Carlo Standard Errors like in the mcse R package, which represent the intrinsic error contained in the estimate:", "spvcm.diagnostics.mcse(vcsma, varnames=['Tau2', 'Sigma2'])", "Another handy statistic is the Potential Scale Reduction Factor, which measures how likely it is that a set of chains run in parallel have converged to the same stationary distribution. It compares the variance between chains to the variance within chains. \nIf these are significantly larger than one (say, 1.5), the chain probably has not converged. Being marginally below $1$ is fine, too.", "spvcm.diagnostics.psrf(vcsma_p, varnames=['Tau2', 'Sigma2'])", "Highest posterior density intervals provide a kind of interval estimate for parameters in Bayesian models:", "spvcm.diagnostics.hpd_interval(vcsma, varnames=['Betas', 'Lambda', 'Sigma2'])", "Sometimes, you want to apply arbitrary functions to each parameter trace. To do this, I've written a map function that works like the python builtin map. For example, if you wanted to get arbitrary percentiles from the chain:", "vcsma.trace.map(np.percentile, \n varnames=['Lambda', 'Tau2', 'Sigma2'],\n #arguments to pass to the function go last\n q=[25, 50, 75]) ", "In addition, you can pop the trace results pretty simply to a .csv file and analyze it elsewhere, for example if you want to use the coda Bayesian Diagnostics package in R. \nTo write out a model to a csv, you can use:", "vcsma.trace.to_csv('./model_run.csv')", "And, you can even load traces from csvs:", "tr = spvcm.abstracts.Trace.from_csv('./model_run.csv')\nprint(tr.varnames)\ntr.plot(varnames=['Tau2'])", "Working with models: draw and sample\nThese two functions are used to call the underlying Gibbs sampler. They take no arguments, and operate on the sampler in place. draw provides a single new sample:", "vcsma.draw()", "And sample steps forward an arbitrary number of times:", "vcsma.sample(10)", "At this point, we did 5000 initial samples and 11 extra samples. Thus:", "vcsma.cycles", "Parallel models can suspend/resume sampling too:", "vcsma_p.sample(10)\n\nvcsma_p.cycles", "Under the hood, it's the draw method that actually ends up calling one run of model._iteration, which is where the actual statistical code lives. Then, it updates all model.traced_params by adding their current value in model.state to model.trace. 
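Conceptually, a single draw does something like: run self._iteration() to update self.state, then, for each name in self.traced_params, append self.state[name] to the current chain of self.trace -- a sketch of the control flow rather than the literal implementation.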
In addition, model._finalize is called the first time sampling is run, which computes some of the constants & derived quantities that save computing time.\nWorking with models: state\nThis is the collection of current values in the sampler. To be efficient, Gibbs sampling must keep around some of the computations used in the simulation, since sometimes the same terms show up in different conditional posteriors. So, the current values of the sampler are stored in state.\nAll of the following are tracked in the state:", "print(vcsma.state.keys())", "If you want to track how something (maybe a hyperparameter) changes over sampling, you can pass extra_traced_params to the model declaration:", "example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n n_samples=250, \n extra_traced_params = ['DeltaAlphas'],\n configs=dict(tuning=500, adapt_step=1.01))\nexample.trace.varnames", "configs\nthis is where configuration options for the various MCMC steps are stored. For multilevel variance components models, these are called $\\rho$ for the lower-level error parameter and $\\lambda$ for the upper-level parameter. Two exact sampling methods are implemented, Metropolis sampling & Slice sampling. \nEach MCMC step has its own config:", "vcsma.configs", "Since vcsma is an upper-level-only model, the Rho config is skipped. But, we can look at the Lambda config. The number of accepted lambda draws is contained in :", "vcsma.configs.Lambda.accepted", "so, the acceptance rate is", "vcsma.configs.Lambda.accepted / float(vcsma.cycles)", "Also, if you want to get verbose output from the metropolis sampler, there is a \"debug\" flag:", "example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n n_samples=500, \n configs=dict(tuning=250, \n adapt_step=1.01, \n debug=True))", "Which stores the information about each iteration in a list, accessible from model.configs.&lt;parameter&gt;._cache:", "example.configs.Lambda._cache[-1] #let's only look at the last one", "Configuration of the MCMC steps is done using the config options dictionary, like done in spBayes in R. The actual configuration classes exist in spvcm.steps:", "from pysal.model.spvcm.steps import Metropolis, Slice", "Most of the common options are:\nMetropolis\n\njump: the starting standard deviation of the proposal distribution\ntuning: the number of iterations to tune the scale of the proposal\nar_low: the lower bound of the target acceptance rate range\nar_hi: the upper bound of the target acceptance rate range\nadapt_step: a number (bigger than 1) that will be used to modify the jump in order to keep the acceptance rate betwen ar_lo and ar_hi. Values much larger than 1 result in much more dramatic tuning. \n\nSlice\n\nwidth: starting width of the level set\nadapt: number of previous slices use in the weighted average for the next slice. 
If 0, the width is not dynamically tuned.", "example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n n_samples=500, \n configs=dict(tuning=250, \n adapt_step=1.01, \n debug=True, ar_low=.1, ar_hi=.4))\n\nexample.configs.Lambda.ar_hi, example.configs.Lambda.ar_low\n\nexample_slicer = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, \n membership=membership, \n n_samples=500, \n configs=dict(Lambda_method='slice'))\n\nexample_slicer.trace.plot(varnames='Lambda')\nplt.show()\n\nexample_slicer.configs.Lambda.adapt, example_slicer.configs.Lambda.width", "Working with models: customization\nIf you're doing heavy customization, it makes the most sense to first initialize the class without sampling. We did this before when showing how the \"extra_traced_params\" option worked. \nTo show, let's initialize a double-level SAR-Error variance components model, but not actually draw anything.\nTo do this, you pass the option n_samples=0.", "vcsese = spvcm.both_levels.SESE(Y, X, W=W1, M=W2, Z=Z, \n membership=membership, \n n_samples=0)", "This sets up a two-level spatial error model with the default uninformative configuration. This means the prior precisions are all I * .001*, prior means are all 0, spatial parameters are set to -1/(n-1), and prior scale factors are set arbitrarily. \nConfigs\nOptions are set by assgning to the relevant property in model.configs. \nThe model configuration object is another dictionary with a few special methods. \nConfiguration options are stored for each parameter separately:", "vcsese.configs", "So, for example, if we wanted to turn off adaptation in the upper-level parameter, and fix the Metrpolis jump variance to .25:", "vcsese.configs.Lambda.max_tuning = 0\nvcsese.configs.Lambda.jump = .25", "Priors\nAnother thing that might be interesting (though not \"bayesian\") would be to fix the prior mean of $\\beta$ to the OLS estimates. One way this could be done would be to pull the Delta matrix out from the state, and estimate:\n$$ Y = X\\beta + \\Delta Z + \\epsilon $$\nusing PySAL:", "Delta = vcsese.state.Delta\nDeltaZ = Delta.dot(Z)\nvcsese.state.Betas_mean0 = ps.spreg.OLS(Y, np.hstack((X, DeltaZ))).betas", "Starting Values\nIf you wanted to start the sampler at a given starting value, you can do so by assigning that value to the Lambda value in state.", "vcsese.state.Lambda = -.25", "Sometimes, it's suggested that you start the beta vector randomly, rather than at zero. For the parallel sampling, the model starting values are adjusted to induce overdispersion in the start values. \nYou could do this manually, too:", "vcsese.state.Betas += np.random.uniform(-10, 10, size=(vcsese.state.p,1))", "Spatial Priors\nChanging the spatial parameter priors is also done by changing their prior in state. This prior must be a function that takes a value of the parameter and return the log of the prior probability for that value. 
\nFor example, we could assign P(\\lambda) = Beta(2,1) and zero if outside $(0,1)$, and asign $\\rho$ a truncated $\\mathcal{N}(0,.5)$ prior by first defining their functional form:", "from scipy import stats\n\ndef Lambda_prior(val):\n if (val < 0) or (val > 1):\n return -np.inf\n return np.log(stats.beta.pdf(val, 2,1))\n\ndef Rho_prior(val):\n if (val > .5) or (val < -.5):\n return -np.inf\n return np.log(stats.truncnorm.pdf(val, -.5, .5, loc=0, scale=.5))", "And then assigning to their symbols, LogLambda0 and LogRho0 in the state:", "vcsese.state.LogLambda0 = Lambda_prior\nvcsese.state.LogRho0 = Rho_prior", "Performance\nThe efficiency of the sampler is contingent on the lower-level size. If we were to estimate the draw in a dual-level SAR-Error Variance Components iteration:", "%timeit vcsese.draw()", "To make it easy to work with the model, you can interrupt and resume sampling using keyboard interrupts (ctrl-c or the stop button in the notebook).", "%time vcsese.sample(100)\n\nvcsese.sample(10)", "Under the Hood\nPackage Structure\nMost of the tools in the package are stored in relevant python files in the top level or a dedicated subfolder. Explaining a few:\n\nabstracts.py - the abstract class machinery to iterate over a sampling loop. This is where the classes are defined, like Trace, Sampler_Mixin, or Hashmap. \nplotting.py - tools for plotting output\nsteps.py - the step method definitions\nverify.py - like user checks in pysal.spreg, this contains a few sanity checks. \nutils.py- contains statistical or numerical utilities to make the computation easier, like cholesky multivariate normal sampling, more sparse utility functions, etc. \ndiagnostics.py - all the diagnostics\npriors.py - definitions of alternative prior forms. Right now, this is pretty simple. \nsqlite.py - functions to use a sqlite database instead of an in-memory chain are defined here. \n\nThe implementation of a Model\nThe package is implemented so that every \"model type\" first sends off to the spvcm.both.Base_Generic, which sets up the state, trace, and priors. \nModels are added by writing a model.py file and possibly a sample.py file. The model.py file defines a Base/User class pair (like spreg) that sets up the state and trace. It must define hyperparameters, and can precompute objects used in the sampling loop. The base class should inherit from Sampler_Mixin, which defines all of the machinery of sampling. \nThe loop through the conditional posteriors should be defined in model.py, in the model._iteration function. This should update the model state in place.\nThe model may also define a _finalize function which is run once before sampling. 
\nSo, if I write a new model, like a varying-intercept model with endogenously-lagged intercepts, I would write a model.py containing something like:\n```python\nclass Base_VISAR(spvcm.generic.Base_Generic):\n def __init__(self, Y, X, M, membership=None, Delta=None,\n extra_traced_params=None, #record extra things in state\n n_samples=1000, n_jobs=1, #sampling config\n priors = None, # dict with prior values for params\n configs=None, # dict with configs for MCMC steps\n starting_values=None, # dict with starting values\n truncation=None, # options to truncate MCMC step priors\n center=False, # Whether to center the X,Z matrices\n scale=False # Whether re-scale the X,Z matrices\n ):\n super(Base_VISAR, self).__init__(Y, X, M, W=None,\n membership=membership,\n Delta=Delta,\n n_samples=0, n_jobs=n_jobs,\n priors=priors, configs=configs,\n starting_values=starting_values,\n truncation=truncation,\n center=center,\n scale=scale\n )\n self.sample(n_samples, n_jobs=n_jobs)\n def _finalize(self):\n # the degrees of freedom of the variance parameter are constant\n self.state.Sigma2_an = self.state.N/2 + self.state.Sigma2_a0\n ...\n\n def _iteration(self):\n\n # computing the values needed to sample from the conditional posteriors\n mean = spdot(X.T, spdot(self.PsiRhoi, X)) / Sigma2 + self.state.bmean0\n ...\n...\n\n```\nI've organized the directories in this project into `both_levels`, `upper_level`, `lower_level`, and `hierarchical`, which contains some of the spatially-varying coefficient models & other models I'm working on that are unrelated to the multilevel variance components stuff. \nSince most of the _iteration loop is the same between models, most of the models share the same sampling code, but customize the structure of the covariance in each level. These covariance variables are stored in state.Psi_1, for the lower-level covariance, and state.Psi_2 for the upper-level covariance. Likewise, the precision functions are state.Psi_1i and state.Psi_2i. \nFor example:", "vcsese.state.Psi_1 #lower-level covariance\n\nvcsese.state.Psi_2 #upper-level covariance\n\nvcsma.state.Psi_2 #upper-level covariance\n\nvcsma.state.Psi_2i\n\nvcsma.state.Psi_1", "The functions that generate the covariance matrices are stored in spvcm.utils. They can be arbitrarily overwritten for alternative covariance specifications. \nThus, if we want to sample a model with a new covariance specification, then we need to define functions for the variance and precision." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DJCordhose/ai
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
mit
[ "Retrain a VGG16 Architecture\n\nhttps://keras.io/applications/#vgg16\nhttps://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html\nhttps://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb", "import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\n\nimport matplotlib.pylab as plt\nimport numpy as np\n\nfrom distutils.version import StrictVersion\n\nimport sklearn\nprint(sklearn.__version__)\n\nassert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')\n\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)\n\nassert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')\n\nimport keras\nprint(keras.__version__)\n\nassert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')\n\nimport pandas as pd\nprint(pd.__version__)\n\nassert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')", "Preparation", "# the larger the longer it takes, be sure to also adapt input layer size auf vgg network to this value\n\nINPUT_SHAPE = (64, 64)\n# INPUT_SHAPE = (128, 128)\n# INPUT_SHAPE = (256, 256)\n\nEPOCHS = 50\n\n# Depends on harware GPU architecture, set as high as possible (this works well on K80)\nBATCH_SIZE = 100\n\n!rm -rf ./tf_log\n# https://keras.io/callbacks/#tensorboard\ntb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')\n# To start tensorboard\n# tensorboard --logdir=./tf_log\n# open http://localhost:6006\n\n!ls -lh\n\nimport os\nimport skimage.data\nimport skimage.transform\nfrom keras.utils.np_utils import to_categorical\nimport numpy as np\n\ndef load_data(data_dir, type=\".ppm\"):\n num_categories = 6\n\n # Get all subdirectories of data_dir. Each represents a label.\n directories = [d for d in os.listdir(data_dir) \n if os.path.isdir(os.path.join(data_dir, d))]\n # Loop through the label directories and collect the data in\n # two lists, labels and images.\n labels = []\n images = []\n for d in directories:\n label_dir = os.path.join(data_dir, d)\n file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)]\n # For each label, load it's images and add them to the images list.\n # And add the label number (i.e. 
directory name) to the labels list.\n for f in file_names:\n images.append(skimage.data.imread(f))\n labels.append(int(d))\n images64 = [skimage.transform.resize(image, INPUT_SHAPE) for image in images]\n y = np.array(labels)\n y = to_categorical(y, num_categories)\n X = np.array(images64)\n return X, y\n\n# Load datasets.\nROOT_PATH = \"./\"\noriginal_dir = os.path.join(ROOT_PATH, \"speed-limit-signs\")\noriginal_images, original_labels = load_data(original_dir, type=\".ppm\")\n\nX, y = original_images, original_labels", "Uncomment next three cells if you want to train on augmented image set\nOtherwise Overfitting can not be avoided because image set is simply too small", "# !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip\n# from zipfile import ZipFile\n# zip = ZipFile('augmented-signs.zip')\n# zip.extractall('.')\n\ndata_dir = os.path.join(ROOT_PATH, \"augmented-signs\")\naugmented_images, augmented_labels = load_data(data_dir, type=\".png\")\n\n# merge both data sets\n\nall_images = np.vstack((X, augmented_images))\nall_labels = np.vstack((y, augmented_labels))\n\n# shuffle\n# https://stackoverflow.com/a/4602224\n\np = numpy.random.permutation(len(all_labels))\nshuffled_images = all_images[p]\nshuffled_labels = all_labels[p]\nX, y = shuffled_images, shuffled_labels", "Split test and train data 80% to 20%", "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)\nX_train.shape, y_train.shape", "First Step: Load VGG pretrained on imagenet and remove classifier\nHope: Feature Extraction will also work well for Speed Limit Signs\n\nImagenet\n\nCollection of labelled images from many categories\nhttp://image-net.org/\n\nhttp://image-net.org/about-stats\n<table class=\"table-stats\" style=\"width: 500px\">\n<tbody><tr>\n<td width=\"25%\"><b>High level category</b></td>\n<td width=\"20%\"><b># synset (subcategories)</b></td>\n<td width=\"30%\"><b>Avg # images per synset</b></td>\n<td width=\"25%\"><b>Total # images</b></td>\n</tr>\n\n<tr><td>amphibian</td><td>94</td><td>591</td><td>56K</td></tr>\n\n<tr><td>animal</td><td>3822</td><td>732</td><td>2799K</td></tr>\n\n<tr><td>appliance</td><td>51</td><td>1164</td><td>59K</td></tr>\n\n<tr><td>bird</td><td>856</td><td>949</td><td>812K</td></tr>\n\n<tr><td>covering</td><td>946</td><td>819</td><td>774K</td></tr>\n\n<tr><td>device</td><td>2385</td><td>675</td><td>1610K</td></tr>\n\n<tr><td>fabric</td><td>262</td><td>690</td><td>181K</td></tr>\n\n<tr><td>fish</td><td>566</td><td>494</td><td>280K</td></tr>\n\n<tr><td>flower</td><td>462</td><td>735</td><td>339K</td></tr>\n\n<tr><td>food</td><td>1495</td><td>670</td><td>1001K</td></tr>\n\n<tr><td>fruit</td><td>309</td><td>607</td><td>188K</td></tr>\n\n<tr><td>fungus</td><td>303</td><td>453</td><td>137K</td></tr>\n\n<tr><td>furniture</td><td>187</td><td>1043</td><td>195K</td></tr>\n\n<tr><td>geological formation</td><td>151</td><td>838</td><td>127K</td></tr>\n\n<tr><td>invertebrate</td><td>728</td><td>573</td><td>417K</td></tr>\n\n<tr><td>mammal</td><td>1138</td><td>821</td><td>934K</td></tr>\n\n<tr><td>musical 
instrument</td><td>157</td><td>891</td><td>140K</td></tr>\n\n\n<tr><td>plant</td><td>1666</td><td>600</td><td>999K</td></tr>\n\n<tr><td>reptile</td><td>268</td><td>707</td><td>190K</td></tr>\n\n<tr><td>sport</td><td>166</td><td>1207</td><td>200K</td></tr>\n\n<tr><td>structure</td><td>1239</td><td>763</td><td>946K</td></tr>\n\n<tr><td>tool</td><td>316</td><td>551</td><td>174K</td></tr>\n\n<tr><td>tree</td><td>993</td><td>568</td><td>564K</td></tr>\n\n<tr><td>utensil</td><td>86</td><td>912</td><td>78K</td></tr>\n\n<tr><td>vegetable</td><td>176</td><td>764</td><td>135K</td></tr>\n\n<tr><td>vehicle</td><td>481</td><td>778</td><td>374K</td></tr>\n\n<tr><td>person</td><td>2035</td><td>468</td><td>952K</td></tr>\n\n</tbody></table>\n\nMight be more suitable for cats and dogs, but is the best we have right now", "from keras import applications\n# applications.VGG16?\nvgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(64, 64, 3))\n# vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))\n# vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))", "All Convolutional Blocks are kept fully trained, we just removed the classifier part", "vgg_model.summary()", "Next step is to push all our signs through the net just once and record the output of bottleneck features\nDon't get confused: this is no training, yet, this just is recording the prediction in order not to repeat this expensive step over and over again when we train the classifier later", "# will take a while, but not really long depending on size and number of input images\n\n%time bottleneck_features_train = vgg_model.predict(X_train)\n\nbottleneck_features_train.shape", "What does this mean?\n\n303 predictions for 303 images or 3335 predictions for 3335 images when using augmented data set\n512 bottleneck feature per prediction\neach bottleneck feature has a size of 2x2, just a blob more or less\nbottleneck feature has larger size when we increase size of input images (might be a good idea)\n4x4 when using 128x128 as input\n8x8 when using 256x256 as input", "first_bottleneck_feature = bottleneck_features_train[0,:,:, 0]\n\nfirst_bottleneck_feature", "Now we create a new classifier and train it with this output and the labels from ground truth\nClassifier is copied from our first VGG style network", "input_shape = bottleneck_features_train.shape[1:]\n\nfrom keras.models import Model\nfrom keras.layers import Dense, Dropout, Flatten, Input\n\n# try and vary between .4 and .75\ndrop_out = 0.50\n\ninputs = Input(shape=input_shape)\n\nx = Flatten()(inputs)\n\n# this is an additional dropout to compensate for the missing one after bottleneck features\nx = Dropout(drop_out)(x)\n\nx = Dense(256, activation='relu')(x)\nx = Dropout(drop_out)(x)\n\n# softmax activation, 6 categories\npredictions = Dense(6, activation='softmax')(x)\n\nclassifier_model = Model(input=inputs, output=predictions)\nclassifier_model.summary()\n\nclassifier_model.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n!rm -rf tf_log\n# https://keras.io/callbacks/#tensorboard\ntb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')\n# To start tensorboard\n# tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log\n# open http://localhost:6006", "This is a very simple architecture and should train pretty fast\n\nit overfits by quite a bit", "%time history = classifier_model.fit(bottleneck_features_train, y_train, 
epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])\n# more epochs might be needed for original data\n# %time history = classifier_model.fit(bottleneck_features_train, y_train, epochs=2000, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])", "Issue 1: We have two separate models now\n\nHow do we evaluate?\nHow to save model for later prediction use / deployment?", "from keras import models\n\ncombined_model = models.Sequential()\ncombined_model.add(vgg_model)\ncombined_model.add(classifier_model)\n\ncombined_model.summary()\n\ncombined_model.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\ntrain_loss, train_accuracy = combined_model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)\ntrain_loss, train_accuracy\n\ntest_loss, test_accuracy = combined_model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)\ntest_loss, test_accuracy\n\n# complete original non augmented speed limit signs\noriginal_loss, original_accuracy = combined_model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)\noriginal_loss, original_accuracy\n\n# combined_model.save('vgg16-retrained.hdf5')\ncombined_model.save('vgg16-augmented-retrained.hdf5')\n\n# !ls -lh vgg16-retrained.hdf5\n!ls -lh vgg16-augmented-retrained.hdf5", "Issue 2: Whatever we do, we overfit, much more than 85% on test not possible\n\nfor non augmented data it might even be as low as 70%\nfirst thing we could try: maybe bottlebeck feature being 2x2 is too small, we could compensate by scaling images up to 128x128 or even 256x256\nthis can indeed bring up test score to 90%\nhowever, this will make the model incompatible with the 64x64 input of the other models and make deployment harder, so we keep 64x64\nmaybe feature extracting from Imagenet is too different from what we have with speed limit signs? 
\nor is the classifier too simple for the complex features?\n\nLet us try some fine tuning\nFirst we freeze all but the last convolutional block", "len(vgg_model.layers)\n\nvgg_model.layers\n\nfirst_conv_layer = vgg_model.layers[1]\n\nfirst_conv_layer.trainable\n\n# set the first 15 layers (up to the last conv block)\n# to non-trainable (weights will not be updated)\n# so, the general features are kept and we (hopefully) do not have overfitting\nnon_trainable_layers = vgg_model.layers[:15]\n\nnon_trainable_layers\n\nfor layer in non_trainable_layers:\n layer.trainable = False\n\nfirst_conv_layer.trainable", "We then tweak the complete model by very slowly re-training the classifier and the final convolutional block\n\nslow learning prevents us from ruining previous good results\nleave everything else in place\nearlier layers hopefully already encode common feature channels\nless risk of overfitting\nearlier layers are more general\nmodel has too much capacity for training and is likely to learn each and every detail\na little bit faster\n\n\n\nThis may still take quite a while", "from keras import optimizers\n\n# compile the model with an SGD/momentum optimizer\n# and a very slow learning rate\n# make updates very small and non adaptive so we do not ruin previous learnings \ncombined_model.compile(loss='categorical_crossentropy',\n optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),\n metrics=['accuracy'])\n\n!rm -r tf_log\n\n%time combined_model.fit(X_train, y_train, epochs=150, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])\n# non augmented data is cheap to retrain, so we can try a few more epochs\n# %time combined_model.fit(X_train, y_train, epochs=1000, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])", "90% for validation is quite a bit of improvement, and might even increase if we train for a bit longer\nMetrics for Augmented Data\nAccuracy\n\nValidation Accuracy", "train_loss, train_accuracy = combined_model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)\ntrain_loss, train_accuracy\n\ntest_loss, test_accuracy = combined_model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)\ntest_loss, test_accuracy\n\n# complete original non augmented speed limit signs\noriginal_loss, original_accuracy = combined_model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)\noriginal_loss, original_accuracy\n\ncombined_model.save('vgg16-augmented-retrained-fine-tuned.hdf5')\n# combined_model.save('vgg16-retrained-fine-tuned.hdf5')\n\n# !ls -lh vgg16-retrained-fine-tuned.hdf5\n!ls -lh vgg16-augmented-retrained-fine-tuned.hdf5", "Hands-On: Experiment with all parameters" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
diegocavalca/Studies
programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb
cc0-1.0
[ "Neural Network", "from __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom datetime import date\ndate.today()\n\nauthor = \"kyubyong. https://github.com/Kyubyong/tensorflow-exercises\"\n\ntf.__version__\n\nnp.__version__", "Activation Functions\nQ1. Apply relu, elu, and softplus to x.", "_x = np.linspace(-10., 10., 1000)\nx = tf.convert_to_tensor(_x)\n\nrelu = tf.nn.relu(x)\nelu = tf.nn.elu(x)\nsoftplus = tf.nn.softplus(x)\n\nwith tf.Session() as sess:\n _relu, _elu, _softplus = sess.run([relu, elu, softplus])\n plt.plot(_x, _relu, label='relu')\n plt.plot(_x, _elu, label='elu')\n plt.plot(_x, _softplus, label='softplus')\n plt.legend(bbox_to_anchor=(0.5, 1.0))\n plt.show()", "Q2. Apply sigmoid and tanh to x.", "_x = np.linspace(-10., 10., 1000)\nx = tf.convert_to_tensor(_x)\n\nsigmoid = tf.nn.sigmoid(x)\ntanh = tf.nn.tanh(x)\n\nwith tf.Session() as sess:\n _sigmoid, _tanh = sess.run([sigmoid, tanh])\n plt.plot(_x, _sigmoid, label='sigmoid')\n plt.plot(_x, _tanh, label='tanh')\n plt.legend(bbox_to_anchor=(0.5, 1.0))\n plt.grid()\n plt.show()", "Q3. Apply softmax to x.", "_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)\nx = tf.convert_to_tensor(_x)\nout = tf.nn.softmax(x, dim=-1)\nwith tf.Session() as sess:\n _out = sess.run(out)\n print(_out) \n assert np.allclose(np.sum(_out, axis=-1), 1)", "Q4. Apply dropout with keep_prob=.5 to x.", "_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)\nprint(\"_x =\\n\" , _x)\nx = tf.convert_to_tensor(_x)\nout = tf.nn.dropout(x, keep_prob=0.5)\nwith tf.Session() as sess:\n _out = sess.run(out)\n print(\"_out =\\n\", _out) ", "Fully Connected\nQ5. Apply a fully connected layer to x with 2 outputs and then an sigmoid function.", "x = tf.random_normal([8, 10])\nout = tf.contrib.layers.fully_connected(inputs=x, num_outputs=2, \n activation_fn=tf.nn.sigmoid,\n weights_initializer=tf.contrib.layers.xavier_initializer())\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(out))\n", "Convolution\nQ6. Apply 2 kernels of width-height (2, 2), stride 1, and same padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(2, 3, 3, 3), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(2, 2, 3, 2), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nout = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding=\"SAME\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q7. Apply 3 kernels of width-height (2, 2), stride 1, dilation_rate 2 and valid padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 10, 10, 3), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(2, 2, 3, 2), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nout = tf.nn.atrous_conv2d(x, filter, padding=\"VALID\", rate=2)\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)\n# Do we really have to distinguish between these two functions? \n# Unless you want to use stride of 2 or more,\n# You can just use tf.nn.atrous_conv2d. For normal convolution, set rate 1.", "Q8. 
Apply 4 kernels of width-height (3, 3), stride 2, and same padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(3, 3, 5, 4), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nout = tf.nn.conv2d(x, filter, strides=[1, 2, 2, 1], padding=\"SAME\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q9. Apply 4 times of kernels of width-height (3, 3), stride 2, and same padding to x, depth-wise.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(3, 3, 5, 4), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nout = tf.nn.depthwise_conv2d(x, filter, strides=[1, 2, 2, 1], padding=\"SAME\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q10. Apply 5 kernels of height 3, stride 2, and valid padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 10, 5), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(3, 5, 5), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nout = tf.nn.conv1d(x, filter, stride=2, padding=\"VALID\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q11. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and same padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(3, 3, 5, 4), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nshp = x.get_shape().as_list()\noutput_shape = [shp[0], shp[1]*2, shp[2]*2, 5]\nout = tf.nn.conv2d_transpose(x, filter, strides=[1, 2, 2, 1], output_shape=output_shape, padding=\"SAME\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q12. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and valid padding to x.", "tf.reset_default_graph()\n\nx = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)\nfilter = tf.get_variable(\"filter\", shape=(3, 3, 5, 4), dtype=tf.float32, \n initializer=tf.random_uniform_initializer())\nshp = x.get_shape().as_list()\noutput_shape = [shp[0], (shp[1]-1)*2+3, (shp[2]-1)*2+3, 5]\nout = tf.nn.conv2d_transpose(x, filter, strides=[1, 2, 2, 1], output_shape=output_shape, padding=\"VALID\")\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init)\n _out = sess.run(out)\n print(_out.shape)", "Q13. 
Apply max pooling and average pooling of window size 2, stride 1, and valid padding to x.", "_x = np.zeros((1, 3, 3, 3), dtype=np.float32)\n_x[0, :, :, 0] = np.arange(1, 10, dtype=np.float32).reshape(3, 3)\n_x[0, :, :, 1] = np.arange(10, 19, dtype=np.float32).reshape(3, 3)\n_x[0, :, :, 2] = np.arange(19, 28, dtype=np.float32).reshape(3, 3)\nprint(\"1st channel of x =\\n\", _x[:, :, :, 0])\nprint(\"\\n2nd channel of x =\\n\", _x[:, :, :, 1])\nprint(\"\\n3rd channel of x =\\n\", _x[:, :, :, 2])\nx = tf.constant(_x)\n\nmaxpool = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 1, 1, 1], padding=\"VALID\")\navgpool = tf.nn.avg_pool(x, [1, 2, 2, 1], [1, 1, 1, 1], padding=\"VALID\")\nwith tf.Session() as sess:\n _maxpool, _avgpool = sess.run([maxpool, avgpool])\n print(\"\\n1st channel of max pooling =\\n\", _maxpool[:, :, :, 0])\n print(\"\\n2nd channel of max pooling =\\n\", _maxpool[:, :, :, 1])\n print(\"\\n3rd channel of max pooling =\\n\", _maxpool[:, :, :, 2])\n print(\"\\n1st channel of avg pooling =\\n\", _avgpool[:, :, :, 0])\n print(\"\\n2nd channel of avg pooling =\\n\", _avgpool[:, :, :, 1])\n print(\"\\n3rd channel of avg pooling =\\n\", _avgpool[:, :, :, 2])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
antoinecarme/sklearn_explain
doc/sklearn_reason_codes_RandomForest.ipynb
bsd-3-clause
[ "Model Explanation for Classification Models\nThis document describes the usage of a classification model to provide an explanation for a given prediction.\nModel explanation provides the ability to interpret the effect of the predictors on the composition of an individual score. These predictors can then be ranked according to their contribution in the final score (leading to a positive or negative decision).\nModel explanation has always been used in credit risk applications in presence of regulatory settings . The credit company is expected to give the customer the main (top n) reasons why the credit application was rejected (also known as reason codes).\nModel explanation was also recently introduced by the European Union’s new General Data Protection Regulation (GDPR, https://arxiv.org/pdf/1606.08813.pdf) to add the possibility to control the increasing use of machine learning algorithms in routine decision-making processes. \n\nThe law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. \n\nThe process we will use here is similar to LIME. The main difference is that LIME uses a data sampling around score value locally, while here we perform as full cross-statistics computation between the predictors and the score and use a local piece-wise linear approximation.\nSample scikit-learn Classification Model\nHere, we will use a sciki-learn classification model on a standard dataset (breast cancer detection model).\nThe dataset used contains 30 predictor variables (numerical features) and one binary target (dependant variable). For practical reasons, we will restrict our study to the first 4 predictors in this document.", "from sklearn import datasets\nimport pandas as pd\n\n%matplotlib inline\n\nds = datasets.load_breast_cancer();\nNC = 4\nlFeatures = ds.feature_names[0:NC]\n\ndf_orig = pd.DataFrame(ds.data[:,0:NC] , columns=lFeatures)\ndf_orig['TGT'] = ds.target\ndf_orig.sample(6, random_state=1960)", "For the classification task, we will build a ridge regression model, and train it on a part of the full dataset", "from sklearn.ensemble import RandomForestClassifier\n\nclf = RandomForestClassifier(n_estimators=120, random_state = 1960)\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(df_orig[lFeatures].values, \n df_orig['TGT'].values, \n test_size=0.2, \n random_state=1960)\n\ndf_train = pd.DataFrame(X_train , columns=lFeatures)\ndf_train['TGT'] = y_train\ndf_test = pd.DataFrame(X_test , columns=lFeatures)\ndf_test['TGT'] = y_test\n\nclf.fit(X_train , y_train)\n\n\n# clf.predict_proba(df[lFeatures])[:,1]", "Model Explanation\nThe goal here is to be able, for a given individual, the impact of each predictor on the final score.\nFor our model, we will do this by analyzing cross statistics between (binned) predictors and the (binned) final score. \nFor each score bin, we fit a linear model locally and use it to explain the score. This is generalization of the linear case, based on the fact that any model can be approximated well enough locally be a linear function (inside each score_bin). 
The more score bins we use, the more data we have, the better the approximation is.\nFor a random forest , the score can be seen as the probability of the positive class.", "from sklearn.linear_model import *\ndef create_score_stats(df, feature_bins = 4 , score_bins=30):\n df_binned = df.copy()\n df_binned['Score'] = clf.predict_proba(df[lFeatures].values)[:,0]\n df_binned['Score_bin'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=False, duplicates='drop')\n df_binned['Score_bin_labels'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=None, duplicates='drop')\n\n for col in lFeatures:\n df_binned[col + '_bin'] = pd.qcut(df[col] , feature_bins, labels=False, duplicates='drop')\n \n binned_features = [col + '_bin' for col in lFeatures]\n lInterpolated_Score= pd.Series(index=df_binned.index)\n bin_classifiers = {}\n coefficients = {}\n intercepts = {}\n for b in range(score_bins):\n bin_clf = Ridge(random_state = 1960)\n bin_indices = (df_binned['Score_bin'] == b)\n # print(\"PER_BIN_INDICES\" , b , bin_indexes)\n bin_data = df_binned[bin_indices]\n bin_X = bin_data[binned_features]\n bin_y = bin_data['Score']\n if(bin_y.shape[0] > 0):\n bin_clf.fit(bin_X , bin_y)\n bin_classifiers[b] = bin_clf\n bin_coefficients = dict(zip(lFeatures, [bin_clf.coef_.ravel()[i] for i in range(len(lFeatures))]))\n # print(\"PER_BIN_COEFFICIENTS\" , b , bin_coefficients)\n coefficients[b] = bin_coefficients\n intercepts[b] = bin_clf.intercept_\n predicted = bin_clf.predict(bin_X)\n lInterpolated_Score[bin_indices] = predicted\n\n df_binned['Score_interp'] = lInterpolated_Score \n return (df_binned , bin_classifiers , coefficients, intercepts)\n", "For simplicity, to describe our method, we use 5 score bins and 5 predictor bins. \nWe fit our local models on the training dataset, each model is fit on the values inside its score bin.", "\n(df_cross_stats , per_bin_classifiers , per_bin_coefficients, per_bin_intercepts) = create_score_stats(df_train , feature_bins=5 , score_bins=10)\n\n\ndef debrief_score_bin_classifiers(bin_classifiers):\n binned_features = [col + '_bin' for col in lFeatures]\n score_classifiers_df = pd.DataFrame(index=(['intercept'] + list(binned_features)))\n for (b, bin_clf) in per_bin_classifiers.items():\n bin\n score_classifiers_df['score_bin_' + str(b) + \"_model\"] = [bin_clf.intercept_] + list(bin_clf.coef_.ravel())\n return score_classifiers_df\n \ndf = debrief_score_bin_classifiers(per_bin_classifiers)\ndf.head(10)", "From the table above, we see that lower score values (score_bin_0) are all around zero probability and are not impacted by the predictor values, higher score values (score_bin_5) are all around 1 and are also not impacted. This is what one expects from a good classification model.\nin the score bin 3, the score values increase significantly with mean area_bin and decrease with mean radius_bin values.\nPredictor Effects\nPredictor effects describe the impact of specific predictor values on the final score. 
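In the computation below, the effect of predictor $j$ for an individual in score bin $b$ is its local contribution $\\text{coef}_{b,j} \\cdot \\text{bin}_j(x) + \\text{intercept}_b / p$ (with $p$ the number of predictors) minus the average of that contribution over all individuals in the same score bin.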
For example, some values of a predictor can increase or decrease the score locally by 0.10 or more points and change a negative decision into a positive one.\nThe predictor effect reflects how a specific predictor value shifts the score above or below the mean local contribution of this variable.", "for col in lFeatures:\n    lcoef = df_cross_stats['Score_bin'].apply(lambda x : per_bin_coefficients.get(x).get(col))\n    lintercept = df_cross_stats['Score_bin'].apply(lambda x : per_bin_intercepts.get(x))\n    lContrib = lcoef * df_cross_stats[col + '_bin'] + lintercept/len(lFeatures)\n    df1 = pd.DataFrame();\n    df1['contrib'] = lContrib\n    df1['Score_bin'] = df_cross_stats['Score_bin']\n    lContribMeanDict = df1.groupby(['Score_bin'])['contrib'].mean().to_dict()\n    lContribMean = df1['Score_bin'].apply(lambda x : lContribMeanDict.get(x))\n    # print(\"CONTRIB_MEAN\" , col, lContribMean)\n    df_cross_stats[col + '_Effect'] = lContrib - lContribMean\n\ndf_cross_stats.sample(6, random_state=1960)", "The previous sample shows that the first individual lost 0.000000 score points due to the feature $X_1$, gained 0.003994 with the feature $X_2$, etc.\nReason Codes\nThe reason codes are a user-oriented representation of the decision-making process. These are the predictors ranked by their effects.", "import numpy as np\nreason_codes = np.argsort(df_cross_stats[[col + '_Effect' for col in lFeatures]].values, axis=1)\ndf_rc = pd.DataFrame(reason_codes, columns=['reason_idx_' + str(NC-c) for c in range(NC)])\ndf_rc = df_rc[list(reversed(df_rc.columns))]\ndf_rc = pd.concat([df_cross_stats , df_rc] , axis=1)\nfor c in range(NC):\n    reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x])\n    df_rc['reason_' + str(c+1)] = reason\n    # detailed_reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x] + \"_bin\")\n    # df_rc['detailed_reason_' + str(c+1)] = df_rc[['reason_' + str(c+1) , ]]\n    \ndf_rc.sample(6, random_state=1960)\n\ndf_rc[['reason_' + str(NC-c) for c in range(NC)]].describe()", "Going Further\nThis was an introductory document using a simple piece-wise linear approximation of the classifier score. Deeper analysis can be made to extend this study:\n\nNon-linear models (generalizing contributions, non-parametric setting?)\nOther classifiers (SVMs, Decision Trees, Naive Bayes, MLP, Ensembles, etc.)\nOther predictors (categorical features, ordered, ...)\nMore risk scoring (https://kdd11pmml.files.wordpress.com/2011/09/p2_flint_guazzelli_kdd_20112.pdf)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AtmaMani/pyChakras
udemy_ml_bootcamp/Machine Learning Sections/Logistic-Regression/Logistic Regression Project - Solutions.ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nLogistic Regression Project - Solutions\nIn this project we will be working with a fake advertising data set, indicating whether or not a particular internet user clicked on an Advertisement on a company website. We will try to create a model that will predict whether or not they will click on an ad based off the features of that user.\nThis data set contains the following features:\n\n'Daily Time Spent on Site': consumer time on site in minutes\n'Age': cutomer age in years\n'Area Income': Avg. Income of geographical area of consumer\n'Daily Internet Usage': Avg. minutes a day consumer is on the internet\n'Ad Topic Line': Headline of the advertisement\n'City': City of consumer\n'Male': Whether or not consumer was male\n'Country': Country of consumer\n'Timestamp': Time at which consumer clicked on Ad or closed window\n'Clicked on Ad': 0 or 1 indicated clicking on Ad\n\nImport Libraries\nImport a few libraries you think you'll need (Or just import them as you go along!)", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "Get the Data\nRead in the advertising.csv file and set it to a data frame called ad_data.", "ad_data = pd.read_csv('advertising.csv')", "Check the head of ad_data", "ad_data.head()", "Use info and describe() on ad_data", "ad_data.info()\n\nad_data.describe()", "Exploratory Data Analysis\nLet's use seaborn to explore the data!\nTry recreating the plots shown below!\n Create a histogram of the Age", "sns.set_style('whitegrid')\nad_data['Age'].hist(bins=30)\nplt.xlabel('Age')", "Create a jointplot showing Area Income versus Age.", "sns.jointplot(x='Age',y='Area Income',data=ad_data)", "Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.", "sns.jointplot(x='Age',y='Daily Time Spent on Site',data=ad_data,color='red',kind='kde');", "Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'", "sns.jointplot(x='Daily Time Spent on Site',y='Daily Internet Usage',data=ad_data,color='green')", "Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.", "sns.pairplot(ad_data,hue='Clicked on Ad',palette='bwr')", "Logistic Regression\nNow it's time to do a train test split, and train our model!\nYou'll have the freedom here to choose columns that you want to train on!\n Split the data into training set and testing set using train_test_split", "from sklearn.model_selection import train_test_split\n\nX = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']]\ny = ad_data['Clicked on Ad']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)", "Train and fit a logistic regression model on the training set.", "from sklearn.linear_model import LogisticRegression\n\nlogmodel = LogisticRegression()\nlogmodel.fit(X_train,y_train)", "Predictions and Evaluations\n Now predict values for the testing data.", "predictions = logmodel.predict(X_test)", "Create a classification report for the model.", "from sklearn.metrics import classification_report\n\nprint(classification_report(y_test,predictions))", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
saketkc/hatex
2015_Fall/MATH-578B/Homework1/Homework1.ipynb
mit
[ "Problem 1\n\nThe transition matrix is given by:\n$$\n\\begin{bmatrix}\n1-\\alpha & \\alpha\\\n\\beta & 1-\\beta\n\\end{bmatrix}\n$$\nPart (a)\n$\\eta = min(n>0, X_n=1)$ given $X_0=0$\nTo Prove $\\eta \\sim Geom(\\alpha)$\n$\\eta = P(X_0=0,X_1=0, \\dots X_{n-1}=1, X_n=1)$\nUsing the Markov property this can be written as:\n$$\n\\eta = P(X_0=0)P(X_1=0|X_0=0)P(X_2=0|X_1=0)P(X_3=0|X_2=0) \\dots P(X_{n-1}=0|X_{n-2}=0)P(X_{n}=1|X_{n-1}=0)\n$$\nAnd being time-homogenous, this simplifies to:\n$$\n\\eta = P(X_0=0)\\big(P(X_1=0|X_0)\\big)^{n-1}\\times P(X_1=1|X_0=0)\n$$\n$\\implies$ \n$$\n\\eta = P(X_0=0)\\big(1-\\alpha)^{n-1}\\alpha = \\big(1-\\alpha)^{n-1}\\alpha\n$$\nAnd hence $\\eta \\sim Geom(\\alpha)$\nPart (b)\nSpectral decomposition of $P$ and value for $P(X_n=1|X_0=0)$\nSpectral decomposition of $P$:\n$$\ndet\\begin{bmatrix}\n\\alpha-\\lambda & 1-\\alpha\\\n1-\\beta & \\beta-\\lambda\n\\end{bmatrix} = 0\n$$\n$$\n\\lambda^2 +(\\alpha + \\beta-2) \\lambda + (1-\\alpha -\\beta) = 0\n$$\nThus, $\\lambda_1 = 1$ and $\\lambda_2 = 1-\\alpha-\\beta$\nEigenvectors are given by:\n$v_1^T = \\big( x_1\\ x_1 \\big)\\ \\forall\\ x_1 \\in R$\nand for $\\lambda_2$ , $v_2 = \\big( x_1\\ \\frac{-\\beta x_1}{\\alpha} \\big)$\nNow using Markov property: $P(X_n=1|X_0=0) = (P^n)_{01}$\nNow, \n$P^n = VD^nV^{-1}$\nwhere:\n$$\nV = \\begin{bmatrix}\n1 & 1\\\n1 & \\frac{-\\beta}{\\alpha}\n\\end{bmatrix}\n$$\nand \n$$\nD = \\begin{bmatrix}\n1 & 0 \\\n0 & (1-\\alpha-\\beta)\n\\end{bmatrix}\n$$\n$$\nV^{-1} = \\frac{-1}{\\frac{\\beta}{\\alpha}+1}\\begin{bmatrix}\n-\\frac{\\beta}{\\alpha} & -1 \\\n-1 & 1\n\\end{bmatrix}\n$$\nThus,\n$$\nP^n = \\begin{bmatrix}\n1 & 1\\\n1 & \\frac{-\\beta}{\\alpha}\n\\end{bmatrix} \\times \\begin{bmatrix}\n1 & 0 \\\n0 & (1-\\alpha-\\beta)^n\n\\end{bmatrix} \\times \\frac{-1}{\\frac{\\beta}{\\alpha}+1}\\begin{bmatrix}\n-\\frac{\\beta}{\\alpha} & -1 \\\n-1 & 1\n\\end{bmatrix}\n$$\n$$\nP^n = \\frac{1}{\\alpha+\\beta} \\begin{bmatrix}\n\\beta + \\alpha(1-\\alpha-\\beta)^n & \\alpha-\\alpha(1-\\alpha-\\beta)^n\\\n\\beta - \\beta(1-\\alpha-\\beta)^n & \\alpha + \\beta(1-\\alpha-\\beta)^n\n\\end{bmatrix}\n$$\nPart (c)\nWhen $\\alpha+\\beta=1$, the eigen values are $\\lambda_1=1$ and $\\lambda_2=0$ and hence\n$$\nP^n = \\begin{bmatrix}\n\\beta & \\alpha \\\n\\beta & \\alpha\n\\end{bmatrix}\n$$\nCheck:\nAlso consider the following identifiy: $P^{n+1}=PP^n$\nthen:\n$$\n\\begin{bmatrix}\np_{00}^{n+1} & p_{01}^{n+1}\\\np_{10}^{n+1} & p_{11}^{n+1}\\\n\\end{bmatrix} = \\begin{bmatrix} \np_{00}^n & p_{01}^n\\\np_{10}^n & p_{11}^n\n\\end{bmatrix} \\times \\begin{bmatrix}\n1-\\alpha & \\alpha\\\n\\beta & 1-\\beta\n\\end{bmatrix}\n$$\n$\\implies$\n$$\n\\begin{align}\np_{11}^{n+1} &= p_{10}^n(\\alpha) + p_{11}^n(1-\\beta)\\\n &= (1-p_{11}^n)(\\alpha) +(p_{11}^n)(1-\\beta)\\\n &= \\alpha + (1-\\alpha-\\beta)p_{11}^n\n\\end{align}\n$$\nConsider the recurrence:\n$$\nx_{n+1} = \\alpha+(1-\\alpha-\\beta)x_n\n$$\nConstant solution $x_n=x_{n+1}=x$ is given by: $x=\\frac{\\alpha}{\\alpha+\\beta}$\nNow let $y_n = x_n-x=x_n-\\frac{\\alpha}{\\alpha+\\beta}$ then,\n$y_{n+1} = (1-\\alpha-\\beta)y_n$ and hence $y_n=(1-\\alpha-\\beta)^n y_0$\nThus,\n$$p_{11}^{n} = (1-\\alpha-\\beta)^np_{11}^0 +\\frac{\\alpha}{\\alpha+\\beta}$$\nGiven $P_{00}=\\frac{\\beta}{\\alpha+\\beta}$ and $\\alpha+\\beta=1$ and hence:\n$p_{11}^n = \\frac{\\alpha}{\\alpha+\\beta} = \\alpha$\nand hence, $p_{10}^n = \\beta$\nSimilary,\n$p_{00}^n = \\beta$ and $p_{01}^n = \\alpha$\nProblem 2\n$P(X_1=0) \\frac{\\beta}{\\alpha+\\beta}$ and hence $P(X_1=1) 
= \\frac{\\alpha}{\\alpha+\\beta}$\n$X=X_1X_2\\dots X_n$ and $Y=Y_1Y_2\\dots Y_n$ representes the reverse string $Y_k=X_{n+k-1}$\nPart (a)\nGiven string of digits: $a_1,a_2,a_3 \\dots a_n $ to find: $P(Y_1=a_1,Y_2=a_2,Y_3=a_3\\dots Y_n=a_n)$\n$$\n\\begin{align}\nP(Y_1=a_1,Y_2=a_2,Y_3=a_3\\dots Y_n=a_n) &= P(X_1=a_n,X_2=a_{n-1}, \\dots X_n=a_1) \\\n&= P(X_1=a_n)P(X_2=a_{n-1}|X_1=a_n)P(X_3=a_{n-2}|X_2=a_{n-1})\\dots P(X_n=a_1|X_{n-1}=a_2) \\\n&= P(X_1=a_n)(P_{a_n a_{n-1}})(P_{a_{n-1} a_{n-2}}) \\dots (P_{a_2 a_1})\n\\end{align}\n$$\nThe problem asked about not using spectral decomposition, but I was not sure how spectral decomposition would have come in handy if the states $a_i$ are not specified explicitly.\nPart (b)\n$$\nz=\\begin{cases}\nX & if \\theta = H\\\nY & otherwise\n\\end{cases}\n$$\nGiven function f such that, $f :{0,1}^n \\longrightarrow {H,T}$\nTo show: $P(f(Z)=\\theta)=0.5$\n$P(\\theta=H) = P(\\theta=T) = 0.5$\nGiven Z, guess $\\theta$: \n$P(\\theta=H|Z=X) = \\frac{P(\\theta=H, Z=X)}{P(Z=X)}$\nZ, has only two possible values: $H$ and $T$ and hence assuming the guess function is unbiased:\n$P(f(Z)=H) = P(f(Z)=T)=0.5$\nProblem 3\n$$\n\\tau = min{ n \\geq 0: X_n=\\dagger}\n$$\n$$\nE[\\tau] = E_a[E_a[\\tau|X_n=a]\\ where\\ a \\in {\\phi, \\alpha, \\beta, \\alpha+\\beta, pol, \\dagger}\n$$\nLet $S={\\phi, \\alpha, \\beta, \\alpha+\\beta, pol, \\dagger}$\nConsider for $a\\neq \\dagger$:\n$$\nh(a) = E[\\tau|X_0=a] = \\sum_{s \\in S}P_{as} \\times (1) + P_{as}\\times E[\\tau|X_0=s) \n$$\n$\\implies$ \n$$\nh(a) = ((I-P_{-})^{-1})_a\n$$\nwhere $P_{-}$ represents the matrix with the row and column representng $X_i=\\dagger$ removed.", "%matplotlib inline\nfrom __future__ import division\nimport numpy as np\nfrom numpy import linalg as LA\nk_a=0.2\nk_b=0.2\nk_p = 0.5\nP = np.matrix([[1-k_a-k_b, k_a ,k_b, 0, 0, 0],\n [k_a, 1-k_a-k_b, 0, k_b, 0, 0],\n [k_b, 0, 1-k_a-k_b, k_a, 0, 0],\n [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],\n [0, 0, 0, 0, 0, 1],\n [0, 0, 0, 1, 0, 0]])\nq = [[k_a-k_b,k_a,k_b,0,0],\n [k_a,k_a+k_b,0,k_b,0],\n [k_b,0,k_a+k_b,0,0],\n [0,k_b,k_a,k_a+k_b+k_p,k_p],\n [0,0,0,0,0]]\nqq = np.array(q)\n\nprint(P)\nstates = ['phi', 'alpha', 'beta', 'ab', 'pol', 'd']\n\n\nimport networkx as nx\n\nG=nx.from_numpy_matrix(P,create_using=nx.MultiDiGraph())\nG.edges(data=True)\n#nx.draw_graphviz(G)# labels=states)\nnx.write_dot(G,'G.dot')\n\n!neato -T png G.dot > multi.png", "The markov chain seems to be irreducible\nOne way to obtain the stationary state is to look at the eigen vectors correspendoing to the eigen value of 1. However, the eigen vectors come out to be imaginary. This seemed to be an issue wwith the solver so I relied on solving the system of equation: $\\pi = P\\pi$", "w, v = LA.eig(P)\nfor i in range(0,6):\n print 'Eigen value: {}\\n Eigen vector: {}\\n'.format(w[i],v[:,i])\n\n## Solve for (I-Q)^{-1}\niq = np.linalg.inv(np.eye(5)-qq)\niq_phi = iq[0,0]\niq_alpha = iq[1,1]\niq_beta = iq[2,2]\niq_alphabeta = iq[3,3]\niq_pol = iq[4,4]\n", "EDIT: I made correction to solve for corrected $\\pi$, by acounting for $P^T$ and not $P$", "A = np.eye(6)-P.T\nA[-1,:] = [1,1,1,1,1,1]\n\nB = [0,0,0,0,0,1]\nX=np.linalg.solve(A,B)\nprint(X)\n", "Stationary state is given by $\\pi = (0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.1667)$ The mean number of visits per unit time to $\\dagger$ are $\\frac{1}{\\pi_6} = 6$ However strangely this does not satisfy $\\pi=P\\pi$. 
I was not able to figure out where I went wrong.\nEDIT: I made correction to solve for corrected $\\pi$, by acounting for $P^T$ and not $P$, so this no longer holds", "#EDIT: I made correction to solve for corrected $\\pi$, by acounting for $P^T$ and not $P$\nprint('\\pi*P={}\\n'.format(X*P))\nprint('But \\pi={}'.format(X)) ", "Simulating the chain:\nGeneral strategy: Generate a random number $\\longrightarrow$ Select a state $\\longrightarrow$ Jump to state $\\longrightarrow$ Repeat", "## phi\nnp.random.seed(1)\n\nPP = {}\nPP['phi']= [1-k_a-k_b, k_a ,k_b, 0, 0, 0]\nPP['alpha'] = [k_a, 1-k_a-k_b, 0, k_b, 0, 0]\nPP['beta'] = [k_b, 0, 1-k_a-k_b, k_a, 0, 0]\nPP['ab']= [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0]\nPP['pol']= [0, 0, 0, 0, 0, 1]\nPP['d']= [0, 0, 0, 1, 0, 0]\n\n##For $h(\\phi)$\nx0='phi'\nx='phi'\ndef h(x):\n s=0\n new_state=x\n for i in range(1,1000):\n old_state=new_state\n probs = PP[old_state]\n z=np.random.choice(6, 1, p=probs)\n new_state = states[z[0]]\n #print('{} --> {}'.format(old_state, new_state))\n s+=z[0]\n return s/1000\n", "Part (a,b,c)", "print(r'$h(\\phi)$: From simulation: {}; From calculation: {}'.format(h('phi'),iq_phi))\n\n\nprint(r'$h(\\alpha)$: From simulation: {}; From calculation: {}'.format(h('alpha'),iq_alpha))\n\n\nprint(r'$h(\\beta)$: From simulation: {}; From calculation: {}'.format(h('beta'),iq_beta))\n\n\nprint(r'$h(\\alpha+\\beta)$: From simulation: {}; From calculation: {}'.format(h('ab'),iq_alphabeta))\n\n\nprint(r'$h(\\pol)$: From simulation: {}; From calculation: {}'.format(h('pol'),iq_pol))\n\n\nold_state = [0.1,0.2,0.3,0.4,0,0]\n\ndef perturb(old_state):\n new_state = old_state*P\n return new_state\nnew_state = [0,0,0,0,0,1]\n\nwhile not np.allclose(old_state, new_state):\n old_state, new_state = new_state, perturb(old_state)\n \nprint old_state\n\n\n# EDIT: I made correction to solve for corrected $\\pi$, by acounting for $P^T$ and not $P$\nprint('From calculation(which is NO LONGER wrong!), stationary distribution:{}'.format(X))\n\nprint('From simulation, stationary distribution: {}'.format(old_state))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jinzishuai/learn2deeplearn
deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb
gpl-3.0
[ "Optimization Methods\nUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. \nGradient descent goes \"downhill\" on a cost function $J$. Think of it as trying to do this: \n<img src=\"images/cost.jpg\" style=\"width:650px;height:300px;\">\n<caption><center> <u> Figure 1 </u>: Minimizing the cost is like finding the lowest point in a hilly landscape<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>\nNotations: As usual, $\\frac{\\partial J}{\\partial a } = $ da for any variable a.\nTo get started, run the following code to import the libraries you will need.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nimport math\nimport sklearn\nimport sklearn.datasets\n\nfrom opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation\nfrom opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'", "1 - Gradient Descent\nA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. \nWarm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{1}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{2}$$\nwhere L is the number of layers and $\\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_gd\n\ndef update_parameters_with_gd(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using one step of gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters to be updated:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients to update each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n learning_rate -- the learning rate, scalar.\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n\n # Update rule for each parameter\n for l in range(L):\n ### START CODE HERE ### (approx. 
2 lines)\n parameters[\"W\" + str(l+1)] = parameters['W' + str(l+1)] - learning_rate* grads['dW' + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters['b' + str(l+1)] - learning_rate* grads['db' + str(l+1)]\n ### END CODE HERE ###\n \n return parameters\n\nparameters, grads, learning_rate = update_parameters_with_gd_test_case()\n\nparameters = update_parameters_with_gd(parameters, grads, learning_rate)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output:\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63535156 -0.62320365 -0.53718766]\n [-1.07799357 0.85639907 -2.29470142]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.74604067]\n [-0.75184921]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.32171798 -0.25467393 1.46902454]\n [-2.05617317 -0.31554548 -0.3756023 ]\n [ 1.1404819 -1.09976462 -0.1612551 ]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.88020257]\n [ 0.02561572]\n [ 0.57539477]] </td> \n </tr> \n</table>\n\nA variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. \n\n(Batch) Gradient Descent:\n\n``` python\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n # Forward propagation\n a, caches = forward_propagation(X, parameters)\n # Compute cost.\n cost = compute_cost(a, Y)\n # Backward propagation.\n grads = backward_propagation(a, caches, parameters)\n # Update parameters.\n parameters = update_parameters(parameters, grads)\n```\n\nStochastic Gradient Descent:\n\npython\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n for j in range(0, m):\n # Forward propagation\n a, caches = forward_propagation(X[:,j], parameters)\n # Compute cost\n cost = compute_cost(a, Y[:,j])\n # Backward propagation\n grads = backward_propagation(a, caches, parameters)\n # Update parameters.\n parameters = update_parameters(parameters, grads)\nIn Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will \"oscillate\" toward the minimum rather than converge smoothly. Here is an illustration of this: \n<img src=\"images/kiank_sgd.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> \"+\" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>\nNote also that implementing SGD requires 3 for-loops in total:\n1. Over the number of iterations\n2. Over the $m$ training examples\n3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)\nIn practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. 
Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.\n<img src=\"images/kiank_minibatch.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> \"+\" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>\n<font color='blue'>\nWhat you should remember:\n- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.\n- You have to tune a learning rate hyperparameter $\\alpha$.\n- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).\n2 - Mini-Batch Gradient descent\nLet's learn how to build mini-batches from the training set (X, Y).\nThere are two steps:\n- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. \n<img src=\"images/kiank_shuffle.png\" style=\"width:550px;height:300px;\">\n\nPartition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this: \n\n<img src=\"images/kiank_partition.png\" style=\"width:550px;height:300px;\">\nExercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:\npython\nfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]\nsecond_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]\n...\nNote that the last mini-batch might end up smaller than mini_batch_size=64. Let $\\lfloor s \\rfloor$ represents $s$ rounded down to the nearest integer (this is math.floor(s) in Python). 
If the total number of examples is not a multiple of mini_batch_size=64 then there will be $\\lfloor \\frac{m}{mini_batch_size}\\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini__batch__size \\times \\lfloor \\frac{m}{mini_batch_size}\\rfloor$).", "# GRADED FUNCTION: random_mini_batches\n\ndef random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):\n \"\"\"\n Creates a list of random minibatches from (X, Y)\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n mini_batch_size -- size of the mini-batches, integer\n \n Returns:\n mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)\n \"\"\"\n \n np.random.seed(seed) # To make your \"random\" minibatches the same as ours\n m = X.shape[1] # number of training examples\n mini_batches = []\n \n # Step 1: Shuffle (X, Y)\n permutation = list(np.random.permutation(m))\n shuffled_X = X[:, permutation]\n shuffled_Y = Y[:, permutation].reshape((1,m))\n\n # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.\n num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning\n for k in range(0, num_complete_minibatches):\n ### START CODE HERE ### (approx. 2 lines)\n mini_batch_X = shuffled_X[:, k*mini_batch_size : (k+1) * mini_batch_size]\n mini_batch_Y = shuffled_Y[:, k*mini_batch_size : (k+1) * mini_batch_size]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n # Handling the end case (last mini-batch < mini_batch_size)\n if m % mini_batch_size != 0:\n ### START CODE HERE ### (approx. 
2 lines)\n mini_batch_X = shuffled_X[:, num_complete_minibatches*mini_batch_size : m]\n mini_batch_Y = shuffled_Y[:, num_complete_minibatches*mini_batch_size : m]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n return mini_batches\n\nX_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()\nmini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)\n\nprint (\"shape of the 1st mini_batch_X: \" + str(mini_batches[0][0].shape))\nprint (\"shape of the 2nd mini_batch_X: \" + str(mini_batches[1][0].shape))\nprint (\"shape of the 3rd mini_batch_X: \" + str(mini_batches[2][0].shape))\nprint (\"shape of the 1st mini_batch_Y: \" + str(mini_batches[0][1].shape))\nprint (\"shape of the 2nd mini_batch_Y: \" + str(mini_batches[1][1].shape)) \nprint (\"shape of the 3rd mini_batch_Y: \" + str(mini_batches[2][1].shape))\nprint (\"mini batch sanity check: \" + str(mini_batches[0][0][0][0:3]))", "Expected Output:\n<table style=\"width:50%\"> \n <tr>\n <td > **shape of the 1st mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n\n <tr>\n <td > **shape of the 2nd mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n\n <tr>\n <td > **shape of the 3rd mini_batch_X** </td> \n <td > (12288, 20) </td> \n </tr>\n <tr>\n <td > **shape of the 1st mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 2nd mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 3rd mini_batch_Y** </td> \n <td > (1, 20) </td> \n </tr> \n <tr>\n <td > **mini batch sanity check** </td> \n <td > [ 0.90085595 -0.7612069 0.2344157 ] </td> \n </tr>\n\n</table>\n\n<font color='blue'>\nWhat you should remember:\n- Shuffling and Partitioning are the two steps required to build mini-batches\n- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.\n3 - Momentum\nBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will \"oscillate\" toward convergence. Using momentum can reduce these oscillations. \nMomentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the \"velocity\" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. \n<img src=\"images/opt_momentum.png\" style=\"width:400px;height:250px;\">\n<caption><center> <u><font color='purple'>Figure 3</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>\nExercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is:\nfor $l =1,...,L$:\npython\nv[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... 
#(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\nNote that the iterator l starts at 0 in the for loop while the first parameters are v[\"dW1\"] and v[\"db1\"] (that's a \"one\" on the superscript). This is why we are shifting l to l+1 in the for loop.", "# GRADED FUNCTION: initialize_velocity\n\ndef initialize_velocity(parameters):\n \"\"\"\n Initializes the velocity as a python dictionary with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n \n Returns:\n v -- python dictionary containing the current velocity.\n v['dW' + str(l)] = velocity of dWl\n v['db' + str(l)] = velocity of dbl\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n \n # Initialize velocity\n for l in range(L):\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l+1)] = np.zeros((parameters[\"W\" + str(l+1)]).shape)\n v[\"db\" + str(l+1)] = np.zeros((parameters[\"b\" + str(l+1)]).shape)\n ### END CODE HERE ###\n \n return v\n\nparameters = initialize_velocity_test_case()\n\nv = initialize_velocity(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "Expected Output:\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n</table>\n\nExercise: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: \n$$ \\begin{cases}\nv_{dW^{[l]}} = \\beta v_{dW^{[l]}} + (1 - \\beta) dW^{[l]} \\\nW^{[l]} = W^{[l]} - \\alpha v_{dW^{[l]}}\n\\end{cases}\\tag{3}$$\n$$\\begin{cases}\nv_{db^{[l]}} = \\beta v_{db^{[l]}} + (1 - \\beta) db^{[l]} \\\nb^{[l]} = b^{[l]} - \\alpha v_{db^{[l]}} \n\\end{cases}\\tag{4}$$\nwhere L is the number of layers, $\\beta$ is the momentum and $\\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a \"one\" on the superscript). 
So you will need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_momentum\n\ndef update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):\n \"\"\"\n Update parameters using Momentum\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- python dictionary containing the current velocity:\n v['dW' + str(l)] = ...\n v['db' + str(l)] = ...\n beta -- the momentum hyperparameter, scalar\n learning_rate -- the learning rate, scalar\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- python dictionary containing your updated velocities\n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n \n # Momentum update for each parameter\n for l in range(L):\n \n ### START CODE HERE ### (approx. 4 lines)\n # compute velocities\n v[\"dW\" + str(l+1)] = beta*v[\"dW\" + str(l+1)]+(1-beta)*grads['dW' + str(l+1)]\n v[\"db\" + str(l+1)] = beta*v[\"db\" + str(l+1)]+(1-beta)*grads['db' + str(l+1)]\n # update parameters\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)]-learning_rate*v[\"dW\" + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)]-learning_rate*v[\"db\" + str(l+1)]\n ### END CODE HERE ###\n \n return parameters, v\n\nparameters, grads, v = update_parameters_with_momentum_test_case()\n\nparameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "Expected Output:\n<table style=\"width:90%\"> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.62544598 -0.61290114 -0.52907334]\n [-1.07347112 0.86450677 -2.30085497]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.74493465]\n [-0.76027113]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.31930698 -0.24990073 1.4627996 ]\n [-2.05974396 -0.32173003 -0.38320915]\n [ 1.13444069 -1.0998786 -0.1713109 ]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.87809283]\n [ 0.04055394]\n [ 0.58207317]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n [-0.09357694]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]]</td> \n </tr> \n</table>\n\nNote that:\n- The velocity is initialized with zeros. So the algorithm will take a few iterations to \"build up\" velocity and start to take bigger steps.\n- If $\\beta = 0$, then this just becomes standard gradient descent without momentum. \nHow do you choose $\\beta$?\n\nThe larger the momentum $\\beta$ is, the smoother the update because the more we take the past gradients into account. 
But if $\\beta$ is too big, it could also smooth out the updates too much. \nCommon values for $\\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\\beta = 0.9$ is often a reasonable default. \nTuning the optimal $\\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. \n\n<font color='blue'>\nWhat you should remember:\n- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.\n- You have to tune a momentum hyperparameter $\\beta$ and a learning rate $\\alpha$.\n4 - Adam\nAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. \nHow does Adam work?\n1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). \n2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). \n3. It updates parameters in a direction based on combining information from \"1\" and \"2\".\nThe update rule is, for $l = 1, ..., L$: \n$$\\begin{cases}\nv_{dW^{[l]}} = \\beta_1 v_{dW^{[l]}} + (1 - \\beta_1) \\frac{\\partial \\mathcal{J} }{ \\partial W^{[l]} } \\\nv^{corrected}{dW^{[l]}} = \\frac{v{dW^{[l]}}}{1 - (\\beta_1)^t} \\\ns_{dW^{[l]}} = \\beta_2 s_{dW^{[l]}} + (1 - \\beta_2) (\\frac{\\partial \\mathcal{J} }{\\partial W^{[l]} })^2 \\\ns^{corrected}{dW^{[l]}} = \\frac{s{dW^{[l]}}}{1 - (\\beta_1)^t} \\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}{dW^{[l]}}}{\\sqrt{s^{corrected}{dW^{[l]}}} + \\varepsilon}\n\\end{cases}$$\nwhere:\n- t counts the number of steps taken of Adam \n- L is the number of layers\n- $\\beta_1$ and $\\beta_2$ are hyperparameters that control the two exponentially weighted averages. \n- $\\alpha$ is the learning rate\n- $\\varepsilon$ is a very small number to avoid dividing by zero\nAs usual, we will store all parameters in the parameters dictionary \nExercise: Initialize the Adam variables $v, s$ which keep track of the past information.\nInstruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:\nfor $l = 1, ..., L$:\n```python\nv[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\ns[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\ns[\"db\" + str(l+1)] = ... 
#(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\n```", "# GRADED FUNCTION: initialize_adam\n\ndef initialize_adam(parameters) :\n \"\"\"\n Initializes v and s as two python dictionaries with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n \n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters[\"W\" + str(l)] = Wl\n parameters[\"b\" + str(l)] = bl\n \n Returns: \n v -- python dictionary that will contain the exponentially weighted average of the gradient.\n v[\"dW\" + str(l)] = ...\n v[\"db\" + str(l)] = ...\n s -- python dictionary that will contain the exponentially weighted average of the squared gradient.\n s[\"dW\" + str(l)] = ...\n s[\"db\" + str(l)] = ...\n\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n s = {}\n \n # Initialize v, s. Input: \"parameters\". Outputs: \"v, s\".\n for l in range(L):\n ### START CODE HERE ### (approx. 4 lines)\n v[\"dW\" + str(l+1)] = np.zeros((parameters[\"W\" + str(l+1)]).shape)\n v[\"db\" + str(l+1)] = np.zeros((parameters[\"b\" + str(l+1)]).shape)\n s[\"dW\" + str(l+1)] = np.zeros((parameters[\"W\" + str(l+1)]).shape)\n s[\"db\" + str(l+1)] = np.zeros((parameters[\"b\" + str(l+1)]).shape)\n ### END CODE HERE ###\n \n return v, s\n\nparameters = initialize_adam_test_case()\n\nv, s = initialize_adam(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))\n", "Expected Output:\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n</table>\n\nExercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: \n$$\\begin{cases}\nv_{W^{[l]}} = \\beta_1 v_{W^{[l]}} + (1 - \\beta_1) \\frac{\\partial J }{ \\partial W^{[l]} } \\\nv^{corrected}{W^{[l]}} = \\frac{v{W^{[l]}}}{1 - (\\beta_1)^t} \\\ns_{W^{[l]}} = \\beta_2 s_{W^{[l]}} + (1 - \\beta_2) (\\frac{\\partial J }{\\partial W^{[l]} })^2 \\\ns^{corrected}{W^{[l]}} = \\frac{s{W^{[l]}}}{1 - (\\beta_2)^t} \\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}{W^{[l]}}}{\\sqrt{s^{corrected}{W^{[l]}}}+\\varepsilon}\n\\end{cases}$$\nNote that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. 
You need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_adam\n\ndef update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,\n beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):\n \"\"\"\n Update parameters using Adam\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n learning_rate -- the learning rate, scalar.\n beta1 -- Exponential decay hyperparameter for the first moment estimates \n beta2 -- Exponential decay hyperparameter for the second moment estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v_corrected = {} # Initializing first moment estimate, python dictionary\n s_corrected = {} # Initializing second moment estimate, python dictionary\n \n # Perform Adam update on all parameters\n for l in range(L):\n # Moving average of the gradients. Inputs: \"v, grads, beta1\". Output: \"v\".\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l+1)] = beta1*v[\"dW\" + str(l+1)]+(1-beta1)*grads['dW' + str(l+1)]\n v[\"db\" + str(l+1)] = beta1*v[\"db\" + str(l+1)]+(1-beta1)*grads['db' + str(l+1)]\n ### END CODE HERE ###\n\n # Compute bias-corrected first moment estimate. Inputs: \"v, beta1, t\". Output: \"v_corrected\".\n ### START CODE HERE ### (approx. 2 lines)\n v_corrected[\"dW\" + str(l+1)] = v[\"dW\" + str(l+1)]/(1-np.power(beta1, t))\n v_corrected[\"db\" + str(l+1)] = v[\"db\" + str(l+1)]/(1-np.power(beta1, t))\n ### END CODE HERE ###\n\n # Moving average of the squared gradients. Inputs: \"s, grads, beta2\". Output: \"s\".\n ### START CODE HERE ### (approx. 2 lines)\n s[\"dW\" + str(l+1)] = beta2*s[\"dW\" + str(l+1)]+(1-beta2)*np.power(grads['dW' + str(l+1)],2)\n s[\"db\" + str(l+1)] = beta2*s[\"db\" + str(l+1)]+(1-beta2)*np.power(grads['db' + str(l+1)],2)\n ### END CODE HERE ###\n\n # Compute bias-corrected second raw moment estimate. Inputs: \"s, beta2, t\". Output: \"s_corrected\".\n ### START CODE HERE ### (approx. 2 lines)\n s_corrected[\"dW\" + str(l+1)] = s[\"dW\" + str(l+1)]/(1-np.power(beta2, t))\n s_corrected[\"db\" + str(l+1)] = s[\"db\" + str(l+1)]/(1-np.power(beta2, t))\n ### END CODE HERE ###\n\n # Update parameters. Inputs: \"parameters, learning_rate, v_corrected, s_corrected, epsilon\". Output: \"parameters\".\n ### START CODE HERE ### (approx. 
2 lines)\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - \\\n learning_rate*v_corrected[\"dW\" + str(l+1)]/(np.sqrt(s_corrected[\"dW\" + str(l+1)])+epsilon)\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - \\\n learning_rate*v_corrected[\"db\" + str(l+1)]/(np.sqrt(s_corrected[\"db\" + str(l+1)])+epsilon)\n ### END CODE HERE ###\n\n return parameters, v, s\n\nparameters, grads, v, s = update_parameters_with_adam_test_case()\nparameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))", "Expected Output:\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63178673 -0.61919778 -0.53561312]\n [-1.08040999 0.85796626 -2.29409733]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.75225313]\n [-0.75376553]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.32648046 -0.25681174 1.46954931]\n [-2.05269934 -0.31497584 -0.37661299]\n [ 1.14121081 -1.09245036 -0.16498684]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.88529978]\n [ 0.03477238]\n [ 0.57537385]] </td> \n </tr> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n [-0.09357694]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0.00121136 0.00131039 0.00081287]\n [ 0.0002525 0.00081154 0.00046748]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 1.51020075e-05]\n [ 8.75664434e-04]] </td> \n </tr> \n\n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]\n [ 1.57413361e-04 4.72206320e-04 7.14372576e-04]\n [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 5.49507194e-05]\n [ 2.75494327e-03]\n [ 5.50629536e-04]] </td> \n </tr>\n</table>\n\nYou now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.\n5 - Model with different optimization algorithms\nLets use the following \"moons\" dataset to test the different optimization methods. (The dataset is named \"moons\" because the data from each of the two classes looks a bit like a crescent-shaped moon.)", "train_X, train_Y = load_dataset()", "We have already implemented a 3-layer neural network. 
You will train it with: \n- Mini-batch Gradient Descent: it will call your function:\n - update_parameters_with_gd()\n- Mini-batch Momentum: it will call your functions:\n - initialize_velocity() and update_parameters_with_momentum()\n- Mini-batch Adam: it will call your functions:\n - initialize_adam() and update_parameters_with_adam()", "def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,\n beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):\n \"\"\"\n 3-layer neural network model which can be run in different optimizer modes.\n \n Arguments:\n X -- input data, of shape (2, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n layers_dims -- python list, containing the size of each layer\n learning_rate -- the learning rate, scalar.\n mini_batch_size -- the size of a mini batch\n beta -- Momentum hyperparameter\n beta1 -- Exponential decay hyperparameter for the past gradients estimates \n beta2 -- Exponential decay hyperparameter for the past squared gradients estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n num_epochs -- number of epochs\n print_cost -- True to print the cost every 1000 epochs\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(layers_dims) # number of layers in the neural networks\n costs = [] # to keep track of the cost\n t = 0 # initializing the counter required for Adam update\n seed = 10 # For grading purposes, so that your \"random\" minibatches are the same as ours\n \n # Initialize parameters\n parameters = initialize_parameters(layers_dims)\n\n # Initialize the optimizer\n if optimizer == \"gd\":\n pass # no initialization required for gradient descent\n elif optimizer == \"momentum\":\n v = initialize_velocity(parameters)\n elif optimizer == \"adam\":\n v, s = initialize_adam(parameters)\n \n # Optimization loop\n for i in range(num_epochs):\n \n # Define the random minibatches. 
We increment the seed to reshuffle differently the dataset after each epoch\n seed = seed + 1\n minibatches = random_mini_batches(X, Y, mini_batch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n\n # Forward propagation\n a3, caches = forward_propagation(minibatch_X, parameters)\n\n # Compute cost\n cost = compute_cost(a3, minibatch_Y)\n\n # Backward propagation\n grads = backward_propagation(minibatch_X, minibatch_Y, caches)\n\n # Update parameters\n if optimizer == \"gd\":\n parameters = update_parameters_with_gd(parameters, grads, learning_rate)\n elif optimizer == \"momentum\":\n parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)\n elif optimizer == \"adam\":\n t = t + 1 # Adam counter\n parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,\n t, learning_rate, beta1, beta2, epsilon)\n \n # Print the cost every 1000 epoch\n if print_cost and i % 1000 == 0:\n print (\"Cost after epoch %i: %f\" %(i, cost))\n if print_cost and i % 100 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('epochs (per 100)')\n plt.title(\"Learning rate = \" + str(learning_rate))\n plt.show()\n\n return parameters", "You will now run this 3 layer neural network with each of the 3 optimization methods.\n5.1 - Mini-batch Gradient descent\nRun the following code to see how the model does with mini-batch gradient descent.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer = \"gd\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Gradient Descent optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.2 - Mini-batch gradient descent with momentum\nRun the following code to see how the model does with momentum. 
Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = \"momentum\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Momentum optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.3 - Mini-batch with Adam mode\nRun the following code to see how the model does with Adam.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer = \"adam\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Adam optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.4 - Summary\n<table> \n  <tr>\n    <td>\n    **optimization method**\n    </td>\n    <td>\n    **accuracy**\n    </td>\n    <td>\n    **cost shape**\n    </td>\n  </tr>\n  <tr>\n    <td>\n    Gradient descent\n    </td>\n    <td>\n    79.7%\n    </td>\n    <td>\n    oscillations\n    </td>\n  </tr>\n  <tr>\n    <td>\n    Momentum\n    </td>\n    <td>\n    79.7%\n    </td>\n    <td>\n    oscillations\n    </td>\n  </tr>\n  <tr>\n    <td>\n    Adam\n    </td>\n    <td>\n    94%\n    </td>\n    <td>\n    smoother\n    </td>\n  </tr>\n</table>\n\nMomentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.\n\nAdam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.\n\nSome advantages of Adam include:\n- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum) \n- Usually works well even with little tuning of hyperparameters (except $\\alpha$)\nReferences:\n\nAdam paper: https://arxiv.org/pdf/1412.6980.pdf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.24/_downloads/27d6cff3f645408158cdf4f3f05a21b6/30_eeg_erp.ipynb
bsd-3-clause
[ "%matplotlib inline", "EEG processing and Event Related Potentials (ERPs)\nThis tutorial shows how to perform standard ERP analyses in MNE-Python. Most of\nthe material here is covered in other tutorials too, but for convenience the\nfunctions and methods most useful for ERP analyses are collected here, with\nlinks to other tutorials where more detailed information is given.\nAs usual we'll start by importing the modules we need and loading some example\ndata. Instead of parsing the events from the raw data's :term:stim channel\n(like we do in this tutorial &lt;tut-events-vs-annotations&gt;), we'll load\nthe events from an external events file. Finally, to speed up computations so\nour documentation server can handle them, we'll crop the raw data from ~4.5\nminutes down to 90 seconds.", "import os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, preload=False)\n\nsample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw-eve.fif')\nevents = mne.read_events(sample_data_events_file)\n\nraw.crop(tmax=90) # in seconds; happens in-place\n# discard events >90 seconds (not strictly necessary: avoids some warnings)\nevents = events[events[:, 0] <= raw.last_samp]", "The file that we loaded has already been partially processed: 3D sensor\nlocations have been saved as part of the .fif file, the data have been\nlow-pass filtered at 40 Hz, and a common average reference is set for the\nEEG channels, stored as a projector (see section-avg-ref-proj in the\ntut-set-eeg-ref tutorial for more info about when you may want to do\nthis). We'll discuss how to do each of these below.\nSince this is a combined EEG+MEG dataset, let's start by restricting the data\nto just the EEG and EOG channels. This will cause the other projectors saved\nin the file (which apply only to magnetometer channels) to be removed. By\nlooking at the measurement info we can see that we now have 59 EEG channels\nand 1 EOG channel.", "raw.pick(['eeg', 'eog']).load_data()\nraw.info", "Channel names and types\nIn practice it's quite common to have some channels labelled as EEG that are\nactually EOG channels. ~mne.io.Raw objects have a\n~mne.io.Raw.set_channel_types method that you can use to change a channel\nthat is labeled as eeg into an eog type. You can also rename channels\nusing the ~mne.io.Raw.rename_channels method. Detailed examples of both of\nthese methods can be found in the tutorial tut-raw-class. In this data\nthe channel types are all correct already, so for now we'll just rename the\nchannels to remove a space and a leading zero in the channel names, and\nconvert to lowercase:", "channel_renaming_dict = {name: name.replace(' 0', '').lower()\n for name in raw.ch_names}\n_ = raw.rename_channels(channel_renaming_dict) # happens in-place", "Channel locations\nThe tutorial tut-sensor-locations describes MNE-Python's handling of\nsensor positions in great detail. To briefly summarize: MNE-Python\ndistinguishes :term:montages &lt;montage&gt; (which contain sensor positions in\n3D: x, y, z, in meters) from :term:layouts &lt;layout&gt; (which\ndefine 2D arrangements of sensors for plotting approximate overhead diagrams\nof sensor positions). 
Additionally, montages may specify idealized sensor\npositions (based on, e.g., an idealized spherical headshape model) or they\nmay contain realistic sensor positions obtained by digitizing the 3D\nlocations of the sensors when placed on the actual subject's head.\nThis dataset has realistic digitized 3D sensor locations saved as part of the\n.fif file, so we can view the sensor locations in 2D or 3D using the\n~mne.io.Raw.plot_sensors method:", "raw.plot_sensors(show_names=True)\nfig = raw.plot_sensors('3d')", "If you're working with a standard montage like the 10-20 &lt;ten_twenty_&gt;_\nsystem, you can add sensor locations to the data like this:\nraw.set_montage('standard_1020'). See tut-sensor-locations for\ninfo on what other standard montages are built-in to MNE-Python.\nIf you have digitized realistic sensor locations, there are dedicated\nfunctions for loading those digitization files into MNE-Python; see\nreading-dig-montages for discussion and dig-formats for a list\nof supported formats. Once loaded, the digitized sensor locations can be\nadded to the data by passing the loaded montage object to\nraw.set_montage().\nSetting the EEG reference\nAs mentioned above, this data already has an EEG common average reference\nadded as a :term:projector. We can view the effect of this on the raw data\nby plotting with and without the projector applied:", "for proj in (False, True):\n fig = raw.plot(n_channels=5, proj=proj, scalings=dict(eeg=50e-6))\n fig.subplots_adjust(top=0.9) # make room for title\n ref = 'Average' if proj else 'No'\n fig.suptitle(f'{ref} reference', size='xx-large', weight='bold')", "The referencing scheme can be changed with the function\nmne.set_eeg_reference (which by default operates on a copy of the data)\nor the raw.set_eeg_reference() &lt;mne.io.Raw.set_eeg_reference&gt; method (which\nalways modifies the data in-place). The tutorial tut-set-eeg-ref shows\nseveral examples of this.\nFiltering\nMNE-Python has extensive support for different ways of filtering data. For a\ngeneral discussion of filter characteristics and MNE-Python defaults, see\ndisc-filtering. For practical examples of how to apply filters to your\ndata, see tut-filter-resample. Here, we'll apply a simple high-pass\nfilter for illustration:", "raw.filter(l_freq=0.1, h_freq=None)", "Evoked responses: epoching and averaging\nThe general process for extracting evoked responses from continuous data is\nto use the ~mne.Epochs constructor, and then average the resulting epochs\nto create an ~mne.Evoked object. In MNE-Python, events are represented as\na :class:NumPy array &lt;numpy.ndarray&gt; of sample numbers and integer event\ncodes. The event codes are stored in the last column of the events array:", "np.unique(events[:, -1])", "The tut-event-arrays tutorial discusses event arrays in more detail.\nInteger event codes are mapped to more descriptive text using a Python\n:class:dictionary &lt;dict&gt; usually called event_id. This mapping is\ndetermined by your experiment code (i.e., it reflects which event codes you\nchose to use to represent different experimental events or conditions). For\nthe sample-dataset data has the following mapping:", "event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'face': 5, 'buttonpress': 32}", "Now we can extract epochs from the continuous data. 
An interactive plot\nallows you to click on epochs to mark them as \"bad\" and drop them from the\nanalysis (it is not interactive on the documentation website, but will be\nwhen you run epochs.plot() &lt;mne.Epochs.plot&gt; in a Python console).", "epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.3, tmax=0.7,\n preload=True)\nfig = epochs.plot(events=events)", "It is also possible to automatically drop epochs, when first creating them or\nlater on, by providing maximum peak-to-peak signal value thresholds (pass to\nthe ~mne.Epochs constructor as the reject parameter; see\ntut-reject-epochs-section for details). You can also do this after\nthe epochs are already created, using the ~mne.Epochs.drop_bad method:", "reject_criteria = dict(eeg=100e-6, # 100 µV\n eog=200e-6) # 200 µV\n_ = epochs.drop_bad(reject=reject_criteria)", "Next we generate a barplot of which channels contributed most to epochs\ngetting rejected. If one channel is responsible for lots of epoch rejections,\nit may be worthwhile to mark that channel as \"bad\" in the ~mne.io.Raw\nobject and then re-run epoching (fewer channels w/ more good epochs may be\npreferable to keeping all channels but losing many epochs). See\ntut-bad-channels for more info.", "epochs.plot_drop_log()", "Another way in which epochs can be automatically dropped is if the event\naround which the epoch is formed is too close to the start or end of the\n~mne.io.Raw object (e.g., if the epoch's tmax would be past the end of\nthe file; this is the cause of the \"TOO_SHORT\" entry in the\n~mne.Epochs.plot_drop_log plot above). Epochs may also be automatically\ndropped if the ~mne.io.Raw object contains :term:annotations that begin\nwith either bad or edge (\"edge\" annotations are automatically\ninserted when concatenating two separate ~mne.io.Raw objects together). See\ntut-reject-data-spans for more information about annotation-based\nepoch rejection.\nNow that we've dropped the bad epochs, let's look at our evoked responses for\nsome conditions we care about. Here the ~mne.Epochs.average method will\ncreate an ~mne.Evoked object, which we can then plot. Notice that we\nselect which condition we want to average using the square-bracket indexing\n(like a :class:dictionary &lt;dict&gt;); that returns a smaller epochs object\ncontaining just the epochs from that condition, to which we then apply the\n~mne.Epochs.average method:", "l_aud = epochs['auditory/left'].average()\nl_vis = epochs['visual/left'].average()", "These ~mne.Evoked objects have their own interactive plotting method\n(though again, it won't be interactive on the documentation website):\nclick-dragging a span of time will generate a scalp field topography for that\ntime span. Here we also demonstrate built-in color-coding the channel traces\nby location:", "fig1 = l_aud.plot()\nfig2 = l_vis.plot(spatial_colors=True)", "Scalp topographies can also be obtained non-interactively with the\n~mne.Evoked.plot_topomap method. Here we display topomaps of the average\nfield in 50 ms time windows centered at -200 ms, 100 ms, and 400 ms.", "l_aud.plot_topomap(times=[-0.2, 0.1, 0.4], average=0.05)", "Considerable customization of these plots is possible, see the docstring of\n~mne.Evoked.plot_topomap for details.\nThere is also a built-in method for combining \"butterfly\" plots of the\nsignals with scalp topographies, called ~mne.Evoked.plot_joint. 
Like\n~mne.Evoked.plot_topomap you can specify times for the scalp topographies\nor you can let the method choose times automatically, as is done here:", "l_aud.plot_joint()", "Global field power (GFP)\nGlobal field power :footcite:Lehmann1980,Lehmann1984,Murray2008 is,\ngenerally speaking, a measure of agreement of the signals picked up by all\nsensors across the entire scalp: if all sensors have the same value at a\ngiven time point, the GFP will be zero at that time point; if the signals\ndiffer, the GFP will be non-zero at that time point. GFP\npeaks may reflect \"interesting\" brain activity, warranting further\ninvestigation. Mathematically, the GFP is the population standard\ndeviation across all sensors, calculated separately for every time point.\nYou can plot the GFP using evoked.plot(gfp=True) &lt;mne.Evoked.plot&gt;. The GFP\ntrace will be black if spatial_colors=True and green otherwise. The EEG\nreference does not affect the GFP:", "for evk in (l_aud, l_vis):\n evk.plot(gfp=True, spatial_colors=True, ylim=dict(eeg=[-12, 12]))", "To plot the GFP by itself you can pass gfp='only' (this makes it easier\nto read off the GFP data values, because the scale is aligned):", "l_aud.plot(gfp='only')", "As stated above, the GFP is the population standard deviation of the signal\nacross channels. To compute it manually, we can leverage the fact that\nevoked.data &lt;mne.Evoked.data&gt; is a :class:NumPy array &lt;numpy.ndarray&gt;,\nand verify by plotting it using matplotlib commands:", "gfp = l_aud.data.std(axis=0, ddof=0)\n\n# Reproducing the MNE-Python plot style seen above\nfig, ax = plt.subplots()\nax.plot(l_aud.times, gfp * 1e6, color='lime')\nax.fill_between(l_aud.times, gfp * 1e6, color='lime', alpha=0.2)\nax.set(xlabel='Time (s)', ylabel='GFP (µV)', title='EEG')", "Analyzing regions of interest (ROIs): averaging across channels\nSince our sample data is responses to left and right auditory and visual\nstimuli, we may want to compare left versus right ROIs. To average across\nchannels in a region of interest, we first find the channel indices we want.\nLooking back at the 2D sensor plot above, we might choose the following for\nleft and right ROIs:", "left = ['eeg17', 'eeg18', 'eeg25', 'eeg26']\nright = ['eeg23', 'eeg24', 'eeg34', 'eeg35']\n\nleft_ix = mne.pick_channels(l_aud.info['ch_names'], include=left)\nright_ix = mne.pick_channels(l_aud.info['ch_names'], include=right)", "Now we can create a new Evoked with 2 virtual channels (one for each ROI):", "roi_dict = dict(left_ROI=left_ix, right_ROI=right_ix)\nroi_evoked = mne.channels.combine_channels(l_aud, roi_dict, method='mean')\nprint(roi_evoked.info['ch_names'])\nroi_evoked.plot()", "Comparing conditions\nIf we wanted to compare our auditory and visual stimuli, a useful function is\nmne.viz.plot_compare_evokeds. By default this will combine all channels in\neach evoked object using global field power (or RMS for MEG channels); here\ninstead we specify to combine by averaging, and restrict it to a subset of\nchannels by passing picks:", "evokeds = dict(auditory=l_aud, visual=l_vis)\npicks = [f'eeg{n}' for n in range(10, 15)]\nmne.viz.plot_compare_evokeds(evokeds, picks=picks, combine='mean')", "We can also easily get confidence intervals by treating each epoch as a\nseparate observation using the ~mne.Epochs.iter_evoked method. 
A confidence\ninterval across subjects could also be obtained, by passing a list of\n~mne.Evoked objects (one per subject) to the\n~mne.viz.plot_compare_evokeds function.", "evokeds = dict(auditory=list(epochs['auditory/left'].iter_evoked()),\n visual=list(epochs['visual/left'].iter_evoked()))\nmne.viz.plot_compare_evokeds(evokeds, combine='mean', picks=picks)", "We can also compare conditions by subtracting one ~mne.Evoked object from\nanother using the mne.combine_evoked function (this function also allows\npooling of epochs without subtraction).", "aud_minus_vis = mne.combine_evoked([l_aud, l_vis], weights=[1, -1])\naud_minus_vis.plot_joint()", "<div class=\"alert alert-danger\"><h4>Warning</h4><p>The code above yields an **equal-weighted difference**. If you have\n imbalanced trial numbers, you might want to equalize the number of events\n per condition first by using `epochs.equalize_event_counts()\n <mne.Epochs.equalize_event_counts>` before averaging.</p></div>\n\nGrand averages\nTo compute grand averages across conditions (or subjects), you can pass a\nlist of ~mne.Evoked objects to mne.grand_average. The result is another\n~mne.Evoked object.", "grand_average = mne.grand_average([l_aud, l_vis])\nprint(grand_average)", "For combining conditions it is also possible to make use of :term:HED\ntags in the condition names when selecting which epochs to average. For\nexample, we have the condition names:", "list(event_dict)", "We can select the auditory conditions (left and right together) by passing:", "epochs['auditory'].average()", "see tut-section-subselect-epochs for details.\nThe tutorials tut-epochs-class and tut-evoked-class have many\nmore details about working with the ~mne.Epochs and ~mne.Evoked classes.\nAmplitude and latency measures\nIt is common in ERP research to extract measures of amplitude or latency to\ncompare across different conditions. There are many measures that can be\nextracted from ERPs, and many of these are detailed (including the respective\nstrengths and weaknesses) in chapter 9 of Luck :footcite:Luck2014 (also see\nthe Measurement Tool &lt;https://bit.ly/37uydRw&gt;_ in the ERPLAB Toolbox\n:footcite:Lopez-CalderonLuck2014).\nThis part of the tutorial will demonstrate how to extract three common\nmeasures:\n\nPeak latency\nPeak amplitude\nMean amplitude\n\nPeak latency and amplitude\nThe most common measures of amplitude and latency are peak measures.\nPeak measures are basically the maximum amplitude of the signal in a\nspecified time window and the time point (or latency) at which the peak\namplitude occurred.\nPeak measures can be obtained using the :meth:~mne.Evoked.get_peak method.\nThere are two important things to point out about\n:meth:~mne.Evoked.get_peak method. First, it finds the strongest peak\nlooking across all channels of the selected type that are available in\nthe :class:~mne.Evoked object. As a consequence, if you want to restrict\nthe search for the peak to a group of channels or a single channel, you\nshould first use the :meth:~mne.Evoked.pick or\n:meth:~mne.Evoked.pick_channels methods. Second, the\n:meth:~mne.Evoked.get_peak method can find different types of peaks using\nthe mode argument. 
There are three options:\n\nmode='pos': finds the peak with a positive voltage (ignores\n negative voltages)\nmode='neg': finds the peak with a negative voltage (ignores\n positive voltages)\nmode='abs': finds the peak with the largest absolute voltage\n regardless of sign (positive or negative)\n\nThe following example demonstrates how to find the first positive peak in the\nERP (i.e., the P100) for the left visual condition (i.e., the\nl_vis :class:~mne.Evoked object). The time window used to search for\nthe peak ranges from .08 to .12 s. This time window was selected because it\nis when P100 typically occurs. Note that all 'eeg' channels are submitted\nto the :meth:~mne.Evoked.get_peak method.", "# Define a function to print out the channel (ch) containing the\n# peak latency (lat; in msec) and amplitude (amp, in µV), with the\n# time range (tmin and tmax) that were searched.\n# This function will be used throughout the remainder of the tutorial\ndef print_peak_measures(ch, tmin, tmax, lat, amp):\n print(f'Channel: {ch}')\n print(f'Time Window: {tmin * 1e3:.3f} - {tmax * 1e3:.3f} ms')\n print(f'Peak Latency: {lat * 1e3:.3f} ms')\n print(f'Peak Amplitude: {amp * 1e6:.3f} µV')\n\n\n# Get peak amplitude and latency from a good time window that contains the peak\ngood_tmin, good_tmax = .08, .12\nch, lat, amp = l_vis.get_peak(ch_type='eeg', tmin=good_tmin, tmax=good_tmax,\n mode='pos', return_amplitude=True)\n\n# Print output from the good time window that contains the peak\nprint('** PEAK MEASURES FROM A GOOD TIME WINDOW **')\nprint_peak_measures(ch, good_tmin, good_tmax, lat, amp)", "The output shows that channel eeg55 had the maximum positive peak in\nthe chosen time window from all of the 'eeg' channels searched.\nIn practice, one might want to pull out the peak for\nan a priori region of interest or a single channel depending on the study.\nThis can be done by combining the :meth:~mne.Evoked.pick\nor :meth:~mne.Evoked.pick_channels methods with the\n:meth:~mne.Evoked.get_peak method.\nHere, let's assume we believe the effects of interest will occur\nat eeg59.", "# First, return a copy of l_vis to select the channel from\nl_vis_roi = l_vis.copy().pick('eeg59')\n\n# Get the peak and latency measure from the selected channel\nch_roi, lat_roi, amp_roi = l_vis_roi.get_peak(\n tmin=good_tmin, tmax=good_tmax, mode='pos', return_amplitude=True)\n\n# Print output\nprint('** PEAK MEASURES FOR ONE CHANNEL FROM A GOOD TIME WINDOW **')\nprint_peak_measures(ch_roi, good_tmin, good_tmax, lat_roi, amp_roi)", "While the peak latencies are the same in channels eeg55 and eeg59,\nthe peak amplitudes differ. This approach can also be applied to virtual\nchannels created with the :func:~mne.channels.combine_channels function and\ndifference waves created with the :func:mne.combine_evoked function (see\naud_minus_vis in section Comparing conditions_ above).\nPeak measures are very susceptible to high frequency noise in the\nsignal (for discussion, see :footcite:Luck2014). Specifically, high\nfrequency noise positively biases peak amplitude measures. This bias can\nconfound comparisons across conditions where ERPs differ in the level of high\nfrequency noise, such as when the conditions differ in the number of trials\ncontributing to the ERP. One way to avoid this is to apply a non-causal\nlow-pass filter to the ERP. Low-pass filters reduce the contribution of high\nfrequency noise by smoothing out fast (i.e., high frequency) fluctuations in\nthe signal (see disc-filtering). 
While this can reduce the positive\nbias in peak amplitude measures caused by high frequency noise, low-pass\nfiltering the ERP can introduce challenges in interpreting peak latency\nmeasures for effects of interest :footcite:Rousselet2012,VanRullen2011.\nIf using peak measures, it is critical to visually inspect the data to\nmake sure the selected time window actually contains a peak\n(:meth:~mne.Evoked.get_peak will always identify a peak).\nVisual inspection allows you to easily verify whether the automatically found\npeak is correct. The :meth:~mne.Evoked.get_peak method detects the maximum or\nminimum voltage in the specified time range and returns the latency and\namplitude of this peak. There is no guarantee that this method will return\nan actual peak. Instead, it may return a value on the rising or falling edge\nof the peak we are trying to find.\nThe following example demonstrates why visual inspection is crucial. Below,\nwe use a known bad time window (.095 to .135 s) to search for a peak in\nchannel eeg59.", "# Get BAD peak measures\nbad_tmin, bad_tmax = .095, .135\nch_roi, bad_lat_roi, bad_amp_roi = l_vis_roi.get_peak(\n mode='pos', tmin=bad_tmin, tmax=bad_tmax, return_amplitude=True)\n\n# Print output\nprint('** PEAK MEASURES FOR ONE CHANNEL FROM A BAD TIME WINDOW **')\nprint_peak_measures(ch_roi, bad_tmin, bad_tmax, bad_lat_roi, bad_amp_roi)", "If all we had were the above values, it would be unclear if they are truly\nidentifying a peak or just the falling or rising edge of one. However, it\nbecomes clear that the .095 to .135 s time window misses the peak on\neeg59. This is shown in the bottom panel where we see the bad time window\n(highlighted in orange) misses the peak (the pink star). In contrast, the\ntime window defined initially (.08 to .12 s; highlighted in blue) returns\nan actual peak instead of just a maximal or minimal value in the searched\ntime window. Visual inspection will always help you to convince yourself the\ndata returned are actual peaks.", "fig, axs = plt.subplots(nrows=2, ncols=1)\nwords = (('Bad', 'missing'), ('Good', 'finding'))\ntimes = (np.array([bad_tmin, bad_tmax]), np.array([good_tmin, good_tmax]))\ncolors = ('C1', 'C0')\n\nfor ix, ax in enumerate(axs):\n title = '{} time window {} peak'.format(*words[ix])\n l_vis_roi.plot(axes=ax, time_unit='ms', show=False, titles=title)\n ax.plot(lat_roi * 1e3, amp_roi * 1e6, marker='*', color='C6')\n ax.axvspan(*(times[ix] * 1e3), facecolor=colors[ix], alpha=0.3)\n ax.set_xlim(-50, 150) # Show zoomed in around peak", "Mean Amplitude\nAnother common practice in ERP studies is to define a component (or effect)\nas the mean amplitude within a specified time window. One advantage of this\napproach is that it is less sensitive to high frequency noise (compared to\npeak amplitude measures) because averaging over a time window acts like a\nlow-pass filter (see discussion in the above section\nPeak latency and amplitude_).\nWhen using mean amplitude measures, selecting the time window based on\nthe effect of interest (e.g., the difference between two conditions) can\ninflate the likelihood of finding false positives in your results because\nthis approach is circular :footcite:LuckGaspelin2017. There are other, and\nbetter, ways to identify a time window to use for extracting mean amplitude\nmeasures. First, you can use an a priori time window based on prior research.\nA second way is to define a time window from an independent condition or set\nof trials not used in the analysis (e.g., a \"localizer\"). 
A third approach is\nto define a time window using the across-condition grand average. This latter\napproach is not circular because the across-condition mean and condition\ndifference are independent of one another. The issues discussed above also\napply to selecting channels used for analysis.\nThe following example demonstrates how to pull out the mean amplitude\nfrom the left visual condition (i.e., the l_vis :class:~mne.Evoked\nobject) using selected channels and time windows. Stimulating the\nleft visual field increases neural activity in the visual cortex of the\ncontralateral (i.e., right) hemisphere. We can test this by examining the\namplitude of the ERP for left visual field stimulation over right\n(contralateral) and left (ipsilateral) channels. The channels used for this\nanalysis are eeg54 and eeg57 (left hemisphere), and eeg59 and\neeg55 (right hemisphere). The time window used is .08 (good_tmin)\nto .12 s (good_tmax) as it corresponds to when P100 typically occurs. The\nP100 is sensitive to left and right visual field stimulation. The mean\namplitude is extracted from the above four channels and stored in a\n:class:pandas.DataFrame.", "# Select all of the channels and crop to the time window\nchannels = ['eeg54', 'eeg57', 'eeg55', 'eeg59']\nhemisphere = ['left', 'left', 'right', 'right']\nl_vis_mean_roi = l_vis.copy().pick(channels).crop(\n tmin=good_tmin, tmax=good_tmax)\n\n# Extract mean amplitude in µV over time\nmean_amp_roi = l_vis_mean_roi.data.mean(axis=1) * 1e6\n\n# Store the data in a data frame\nmean_amp_roi_df = pd.DataFrame({\n 'ch_name': l_vis_mean_roi.ch_names,\n 'hemisphere': ['left', 'left', 'right', 'right'],\n 'mean_amp': mean_amp_roi\n})\n\n# Print the data frame\nprint(mean_amp_roi_df.groupby('hemisphere').mean())", "As demonstrated in the above example, the mean amplitude was higher and\npositive in right compared to left hemisphere channels. It should be\nreiterated that both the spatial and temporal windows you use should be\ndetermined in an independent manner (e.g., defined a priori from prior\nresearch, a \"localizer\" or another independent condition) and not based\non the data you will use to test your hypotheses.\nThe above example can be modified to extract the mean amplitude\nfrom all channels and store the resulting output in a\n:class:pandas.DataFrame. This can be useful for statistical analyses\nconducted in other programming languages.", "# Extract mean amplitude for all channels in l_vis (including `eog`)\nl_vis_cropped = l_vis.copy().crop(tmin=good_tmin, tmax=good_tmax)\nmean_amp_all = l_vis_cropped.data.mean(axis=1) * 1e6\nmean_amp_all_df = pd.DataFrame({\n 'ch_name': l_vis_cropped.info['ch_names'],\n 'mean_amp': mean_amp_all\n})\nmean_amp_all_df['tmin'] = good_tmin\nmean_amp_all_df['tmax'] = good_tmax\nmean_amp_all_df['condition'] = 'Left/Visual'\nprint(mean_amp_all_df.head())\nprint(mean_amp_all_df.tail())", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/starthinker
colabs/anonymize_query.ipynb
apache-2.0
[ "BigQuery Anonymize Query\nRuns a query and anynonamizes all rows. Used to create sample table for dashboards.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter BigQuery Anonymize Query Recipe Parameters\n\nEnsure you have user access to both datasets.\nProvide the source project, dataset and query.\nProvide the destination project, dataset, and table.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_read':'service', # Credentials used.\n 'from_project':'', # Original project to read from.\n 'from_dataset':'', # Original dataset to read from.\n 'from_query':'', # Query to read data.\n 'to_project':None, # Anonymous data will be writen to.\n 'to_dataset':'', # Anonymous data will be writen to.\n 'to_table':'', # Anonymous data will be writen to.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute BigQuery Anonymize Query\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'anonymize':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'service','description':'Credentials used.'}},\n 'bigquery':{\n 'from':{\n 'project':{'field':{'name':'from_project','kind':'string','order':1,'description':'Original project to read from.'}},\n 'dataset':{'field':{'name':'from_dataset','kind':'string','order':2,'description':'Original dataset to read from.'}},\n 'query':{'field':{'name':'from_query','kind':'string','order':3,'description':'Query to read data.'}}\n },\n 'to':{\n 'project':{'field':{'name':'to_project','kind':'string','order':4,'default':None,'description':'Anonymous data will be writen to.'}},\n 'dataset':{'field':{'name':'to_dataset','kind':'string','order':5,'description':'Anonymous data will be writen to.'}},\n 'table':{'field':{'name':'to_table','kind':'string','order':6,'description':'Anonymous data will be writen to.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
petrushy/CesiumWidget
Examples/CesiumWidget Example KML.ipynb
apache-2.0
[ "Cesium Widget Example KML\nIf the installation of Cesiumjs is ok, it should be reachable here:\nhttp://localhost:8888/nbextensions/CesiumWidget/cesium/index.html", "from CesiumWidget import CesiumWidget\nfrom IPython import display\nimport numpy as np", "Create widget object", "cesium = CesiumWidget()", "Display the widget:", "cesium", "Cesium is packed with example data. Let's look at some GDP per captia data from 2008.", "cesium.kml_url = '/nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz'", "Example zoomto", "for lon in np.arange(0, 360, 0.5):\n cesium.zoom_to(lon, 0, 36000000, 0 ,-90, 0)\n\ncesium._zoomto", "Example flyto", "cesium.fly_to(14, 90, 20000001)\n\ncesium._flyto" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
valentina-s/GLM_PythonModules
notebooks/MLE_singleNeuron.ipynb
bsd-2-clause
[ "This notebook presents how to perform maximum-likelihood parameter estimation for a single neuron.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport random\n%matplotlib inline\n\nimport sys\nimport os\nsys.path.append(os.path.join(os.getcwd(),\"..\"))\nsys.path.append(os.path.join(os.getcwd(),\"..\",\"code\"))\ndata_path = os.path.join(os.getcwd(),\"..\",'data')\nsys.path.append(data_path)\n\nimport filters\nimport likelihood_functions as lk\nimport PoissonProcessClasses as PP\nimport auxiliary_functions as auxfun\n\n# Reloading modules which are in development\nimport imp\nimp.reload(filters)\nimp.reload(auxfun)\nimp.reload(lk)\nimp.reload(PP)", "Reading input-output data:", "# reading stimulus\nStim = np.array(pd.read_csv(os.path.join(data_path,'Stim.csv'),header = None))\n# reading location of spikes\ntsp = np.hstack(np.array(pd.read_csv(os.path.join(data_path,'tsp.csv'),header = None)))", "Extracting a spike train from spike positions:", "dt = 0.01\ntsp_int = np.ceil((tsp - dt*0.001)/dt)\ntsp_int = np.reshape(tsp_int,(tsp_int.shape[0],1))\ntsp_int = tsp_int.astype(int)\ny = np.array([item in tsp_int for item in np.arange(Stim.shape[0]/dt)+1]).astype(int)", "Displaying a subset of the spike train:", "fig, ax = plt.subplots(figsize=(16, 2))\nfig = ax.matshow(np.reshape(y[:1000],(1,len(y[:1000]))),cmap = 'Greys',aspect = 15)", "Creating filters:", "# create a stimulus filter\nkpeaks = np.array([0,round(20/3)])\npars_k = {'neye':5,'n':5,'kpeaks':kpeaks,'b':3}\nK,K_orth,kt_domain = filters.createStimulusBasis(pars_k, nkt = 20) \n\n# create a post-spike filter\nhpeaks = np.array([0.1,2])\npars_h = {'n':5,'hpeaks':hpeaks,'b':.4,'absref':0.}\nH,H_orth,ht_domain = filters.createPostSpikeBasis(pars_h,dt)\n\n# Interpolate Post Spike Filter\nMSP = auxfun.makeInterpMatrix(len(ht_domain),1)\nMSP[0,0] = 0\nH_orth = np.dot(MSP,H_orth)\n\nMSP.shape", "Conditional Intensity (spike rate):\n$$\\lambda_{\\beta} = \\exp(K(\\beta_k)Stim + H(\\beta_h)y + dc)$$\n($\\beta_k$ and $\\beta_h$ are the unknown coefficients of the filters and $dc$ is the direct current).\nSince the convolution is a linear operation the intensity can be written in the following form:\n$$\\lambda_{\\beta} = \\exp(M_k \\beta_k + M_h\\beta_h + \\textbf{1}dc),$$\nwhere $M_k$ and $M_h$ are matrices depending on the stimulus and the response correspondingly and $\\textbf{1}$ is a vector of ones.\nCreating a matrix of covariates:", "M_k = lk.construct_M_k(Stim,K,dt)\n\nM_h = lk.construct_M_h(tsp,H_orth,dt,Stim)", "Combining $M_k$, $M_h$ and $\\textbf{1}$ into one covariate matrix:", "M = np.hstack((M_k,M_h,np.ones((M_h.shape[0],1))))", "The conditional intensity becomes:\n$$ \\lambda_{\\beta} = \\exp(M\\beta) $$\n($\\beta$ contains all the unknown parameters).\nCreate a Poisson process model with this intensity:", "model = PP.PPModel(M.T,dt = dt/100)", "Setting initial parameters for optimization:", "coeff_k0 = np.array([-0.02304,\n 0.12903,\n 0.35945,\n 0.39631,\n 0.27189,\n 0.22003,\n -0.17457,\n 0.00482,\n -0.09811,\n 0.04823])\ncoeff_h0 = np.zeros((5,))\ndc0 = 0\n\npars0 = np.hstack((coeff_k0,coeff_h0,dc0))\n\n# pars0 = np.hstack((np.zeros((10,)),np.ones((5,)),0))", "Fitting the likelihood (here using Limited Memory BFGS method with 500 iterations):", "res = model.fit(y,start_coef = pars0,maxiter = 500,method = 'L-BFGS-B')", "Optimization results:", "print(res)", "Creating the predicted filters:", "k_coeff_predicted = res.x[:10]\nh_coeff_predicted = res.x[10:15]\n\nkfilter_predicted = 
np.dot(K,k_coeff_predicted)\nhfilter_predicted = np.dot(H_orth,h_coeff_predicted)\n\nk_coeff = np.array([ 0.061453,0.284916,0.860335,1.256983,0.910615,0.488660,-0.887091,0.097441,0.026607,-0.090147])\nh_coeff = np.array([-15.18,38.24,-67.58,-14.06,-3.36])\n\nkfilter_true = np.dot(K,k_coeff)\nhfilter_true = np.dot(H_orth,h_coeff)\n\nplt.plot(-kt_domain[::-1],kfilter_predicted,color = \"r\",label = 'predicted')\nplt.hold(True)\nplt.plot(-kt_domain[::-1],kfilter_true,color= \"blue\",label = 'true')\nplt.hold(True)\nplt.plot(-kt_domain[::-1],np.dot(K,coeff_k0),color = \"g\",label = 'initial')\nplt.legend(loc = 'upper left')\n\nplt.plot(ht_domain,hfilter_predicted,color = \"r\",label = 'predicted')\nplt.hold(True)\nplt.plot(ht_domain,hfilter_true,color = \"b\",label = 'true')\nplt.hold(True)\nplt.plot(ht_domain,np.dot(H_orth,coeff_h0),color = \"g\",label = 'initial')\nplt.legend(loc = 'lower right')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marco-olimpio/ufrn
EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb
gpl-3.0
[ "<h1>Finding crime patterns in Montgomery County</h1>\n\n<h3>Group components</h3>\n\n<ul>\n <li>Cephas Barreto- cephasax [at] gmail [dot] com></li>\n <li>Marco Olimpio - marco.olimpio [at] gmail [dot] com</li>\n <li>Rebecca Betwel - bekbetwel [at] gmail [dot] com</li>\n</ul>\n\n<h3>About the dataset</h3>\n<p>The data presented is derived from reported crimes classified according to Maryland criminal code and documented by approved police incident reports. The data about crimes do not put info about the victins and its masks the actual address not putting the exact place where the complaint occured.</p>\n\n<br/>\nSource: <a href=\"https://data.world/jboutros/montgomery-county-crime\" target=\"blank\"> https://data.world/jboutros/montgomery-county-crime </a>\n<p>\n <strong>Maryland County Area</strong>\n </p>\n<br/>\n<img src=\"https://www.montgomerycountymd.gov/POL/Resources/Images/districts/Countywidemap.jpg\">\n<h2><strong>Checking about data available</strong></h2>\n\n<ul>\n <li><strong>Incident ID</strong>: Looks like a simple table identification number</li>\n <li><strong>CR Number</strong>: CR stands for Complaint Register and its a identification for a Compleint Process for a full disciplinary investigation</li>\n <li><strong>Dispatch Date/Time</strong>: Looks the date and time when the complaint was made</li>\n <li><strong>Class</strong></li>\n <ul>\n <li><strong>Class number</strong>:Identification number of the complaint</li>\n <li><strong>Class description</strong>:Description of the class</li>\n </ul>\n <li><strong>Complaint</strong></li>\n <ul>\n <li><strong>Public place</strong></li>\n <ul>\n <li><strong>Police District Name</strong>: Auto describes it</li>\n <li><strong>Police District Number</strong>: Auto describes it</li>\n <li><strong>Block address</strong>: Auto describes it</li>\n <li><strong>City</strong>: Auto describes it</li>\n <li><strong>State</strong>: Auto describes it</li>\n <li><strong>Zip Code</strong>: Auto descrives it</li>\n <li><strong>Agency</strong>: Agency responsable for this address</li>\n <li><strong>Place</strong>: Kind of place where the crime occured</li>\n <li><strong>Sector</strong></li>\n <ul>\n <li>Rockville District: Sectors A, B, C</li>\n <li>Bethesda District: Sectors D, E</li>\n <li>Silver Spring District: Sector G</li>\n <li>Wheaton-Glenmont District: Sector J, K</li>\n <li>Germantown District: Sectors M, N, P</li>\n </ul>\n <li><strong>Address Number</strong>: Auto describes it</li>\n <li><strong>Beat</strong>: Beat is the territory and time that a police officer patrols</li>\n <li><strong>PRA</strong>: Police Reporting Area</li>\n <li><strong>Latitude</strong>: Auto describes it</li>\n <li><strong>Longitude</strong>: Auto describes it</li>\n <li><strong>Location</strong>: Tuple of Latitude and Longitude</li>\n </ul>\n <li>Complaint estimative</li>\n <ul>\n <li><strong>Start Data/Time</strong>: Start of the complaint</li>\n <li><strong>End Date/Time</strong>: End of the complaint</li>\n </ul>\n </ul>\n</ul>\n\n<h2>Dataset questions</h2>\n<ul>\n <li><strong>About Type of complaint</strong></li>\n <ul>\n <li>Which complaint is most common?</li>\n <li>What are the categories of complaints?</li>\n <li>Could we categorize the types of crimes in violent or not?</li>\n </ul>\n <li><strong>About Period of time/day of the week</strong></li>\n <ul>\n <li>Wich period of the day that most complaints occur</li>\n <li>Wich day of the week that most complaints occur</li>\n <li>Wich month of the years that most complaints occur </li>\n <li>These 
complainsts are realted with holidays?</li>\n <li>What period of time (time of day/day of the week/month of the year) has correlation with the type of complaint</li>\n </ul>\n <li><strong>About Location</strong></li>\n <ul>\n <li>Where is most of the complaints?</li>\n <li>What sort of places have most complaints</li>\n <li>What sort of place has correlation with the type of complaint</li>\n </ul>\n <li><strong>Correlation between locale and type of complaint</strong></li>\n <ul>\n <li>Is there a correlations between the day of the week and kind of complaint?</li>\n </ul>\n</ul>\n\n<h4>References</h4>\n<ul>\n <li>https://www.montgomerycountymd.gov/pol/districts/whatsmydistrict.html</li>\n <li>http://www.ericcarlson.net/scanner/police.html</li>\n\n</ul>\n\nImporting libraries Pandas and Bokeh and configuring Bokeh to show chart inline (calling output_notebook() function)", "import pandas as pd\nimport numpy as np\nfrom bokeh.io import push_notebook, show, output_notebook\nfrom bokeh.layouts import row\nfrom bokeh.plotting import figure\nfrom bokeh.models import (\n GMapPlot, GMapOptions, ColumnDataSource, Circle, DataRange1d, PanTool, WheelZoomTool, BoxSelectTool\n)\noutput_notebook()", "Configuring maps and loading data about where the complaints have occured. Observe, to sucesfully configure the Google Maps you have to create an API Key (You can generate one from this site: https://developers.google.com/maps/documentation/javascript/get-api-key) and change in the line 'plot.api_key = \"\"'", "map_options = GMapOptions(lat=39.151042, lng=-77.193023, map_type=\"roadmap\", zoom=11)\n\nplot = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options)\nplot.title.text = \"Montgomery County\"\n\n# For GMaps to function, Google requires you obtain and enable an API key:\n#\n# https://developers.google.com/maps/documentation/javascript/get-api-key\n#\n# Replace the value below with your personal API key:\nplot.api_key = \"AIzaSyBFHmpkUOfk2FtDZXHVBSUUHp6LVPmI-fs\"", "Load data in using read_csv function, configure which tools will be available in the plot.", "#Loading dataset from Montgomery County complaint dataset\nmonty_data = pd.read_csv(\"MontgomeryCountyCrime2013.csv\")\nlatitude_data = monty_data[\"Latitude\"]\nlongitude_data = monty_data[\"Longitude\"]\nmonty_data.head()\n\n", "Categorizing complaint classes", "#Creating a master class to categorize crimes\nclassaux = monty_data[\"Class\"]/100\nclassaux = classaux.astype(int)\nclassaux = classaux*100\n#Inserting this new data in the dataset\nmonty_data[\"MasterClass\"] = classaux\n\n#print(montydata.groupby(\"Class\")[\"Class Description\"].mean())\n#Sort by Class of complaint to analise master classes of Class complaints\n#montydata.sort_values(by=\"Class\")\n#montydata.sort_values(by=\"Class Description\")\nmonty_data[\"Class\",\"Class Description\"]\n#print(montydata.groupby[\"Class Description\"])\n\nsource = ColumnDataSource(\n data=dict(\n lat=latitude_data[13:130],\n lon=longitude_data[13:130],\n )\n)\n\nprint(source.data.values)\ncircle = Circle(x=\"lon\", y=\"lat\", size=15, fill_color=\"blue\", fill_alpha=0.8, line_color=None)\nplot.add_glyph(source, circle)\n\nplot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool())", "Ploting the geographic data in Google Maps. 
Note that the 'show' function receives another parameter 'notebook_handle=True' responsible for tell Bhoke to do a inline plot", "show(plot,notebook_handle=True)", "<h3>Which sort of complaints are most common, TOP 10?</h3>", "#Using the agg function allows you to calculate the frequency for each group using the standard library function len.\n#Sorting the result by the aggregated column code_count values, in descending order, then head selecting the top n records, then reseting the frame; will produce the top n frequent records\ntop = montydata.groupby(['Class','Class Description'])['Class'].agg({\"frequency\": len}).sort_values(\"frequency\", ascending=False).head(40).reset_index()\ntop['frequency'] = (top['frequency']/number_of_registries[0])*100\ntop\n\nfrom decimal import *\n#Configure precision\ngetcontext().prec = 2\n\nparcial_perc = top['frequency'].sum()\nparcial_perc = round(parcial_perc,2)\n\nprint(\"The crimes above are responsible for up to \" + str(parcial_perc) + \"% of the total crimes\")", "<h3><strong>What are the Classes of Classes (Master Classes) of complaints?</strong></h3>", "#Considering the top crimes\n\n#copy\ntop_classes_top = top\n\n#Creation of a Master Class\ntop_classes_top['Master Class'] = 0\naux = top_classes_top['Master Class'].astype(float,copy=True)\ntop_classes_top['Master Class'] = aux\ntop_classes_top['Master Class'] = top_classes_top['Class']/100\ntop_classes_top['Master Class'] = top_classes_top['Master Class'].round()\ntop_classes_top['Master Class'] = top_classes_top['Master Class']*100\naux = top_classes_top['Master Class'].astype(int,copy=True)\ntop_classes_top['Master Class'] = aux\n#teste.describe\n#top_classes_top\n#top_classes_top['Master Class'].describe()\n#top_classes_top.dtypes\ntop_classes_top\n", "<h4>Describing 'Master Classes'</h4>", "#Inserting the description of the Master Classes\ntop_classes_top['Master Class Description'] ='' \n\ntop_classes_top[top_classes_top['Master Class'] == 600]\ntest_top = top_classes_top\n\n\ntest_top.loc[(test_top['Master Class'] == 600),'Master Class Description'] = 'LARCENY'\ntest_top.loc[(test_top['Master Class'] == 2900),'Master Class Description'] = 'MISC'\ntest_top.loc[(test_top['Master Class'] == 1400),'Master Class Description'] = 'VANDALISM'\ntest_top.loc[(test_top['Master Class'] == 1000),'Master Class Description'] = 'FORGERY/CNTRFT'\ntest_top.loc[(test_top['Master Class'] == 500),'Master Class Description'] = 'BURGLARY'\ntest_top.loc[(test_top['Master Class'] == 800),'Master Class Description'] = 'ASSAULT & BATTERY'\ntest_top.loc[(test_top['Master Class'] == 1800),'Master Class Description'] = 'CONTROLLED DANGEROUS SUBSTANCE POSSESSION'\ntest_top.loc[(test_top['Master Class'] == 700),'Master Class Description'] = 'THEFT'\ntest_top.loc[(test_top['Master Class'] == 2100),'Master Class Description'] = 'JUVENILE RUNAWAY'\ntest_top.loc[(test_top['Master Class'] == 2800),'Master Class Description'] = 'DRIVING UNDER THE INFLUENCE'\ntest_top.loc[(test_top['Master Class'] == 1900),'Master Class Description'] = 'CONTROLLED DANGEROUS SUBSTANCE IMPLMNT'\ntest_top.loc[(test_top['Master Class'] == 2200),'Master Class Description'] = 'LIQUOR - DRINK IN PUB OVER 21'\ntest_top.loc[(test_top['Master Class'] == 2400),'Master Class Description'] = 'DISORDERLY CONDUCT'\ntest_top.loc[(test_top['Master Class'] == 2700),'Master Class Description'] = 'TRESPASSING'\n\ntest_top", "<h3>Could we categorize the types of crimes in violent or not?</h3>\n\nAccording to wikipedia 
(https://en.wikipedia.org/wiki/Violent_crime), violent criminals typically include aircraft hijackers, bank robbers, muggers, burglars, terrorists, carjackers, rapists, kidnappers, torturers, active shooters, murderers, gangsters, drug cartels, and others.\nJust by analysing each master class, we can see that only three master classes are considered violent: 500 - BURGLARY, 800 - ASSAULT & BATTERY and 700 - THEFT.", "test_top['Violent crime'] = False\n\ntest_top.loc[(test_top['Master Class'] == 500),'Violent crime'] = True\ntest_top.loc[(test_top['Master Class'] == 800),'Violent crime'] = True\ntest_top.loc[(test_top['Master Class'] == 700),'Violent crime'] = True\n\ntest_top.sort_values(['Violent crime', 'frequency'], ascending=False, axis=0, kind='quicksort')", "According to the data, although the selected crimes account for almost 80% of the total, the violent crimes amount to only", "value_percentage = test_top[test_top['Violent crime'] == True]['frequency'].sum()\nvalue_percentage = round(value_percentage,2)\nprint(str(value_percentage) + '% of the total crimes')", "<h3>In which period of the day (morning, afternoon, night) do most complaints occur?</h3>", "#Considering the top crimes\nday_process = monty_data\n\n", "<h3>On which day of the week do most complaints occur?</h3>", "#Considering the top crimes", "<h3>In which month of the year do most complaints occur?</h3>", "#Considering the top crimes", "<h3>Are these complaints related to holidays?</h3>", "#Considering the top crimes", "<h3>What period of time (time of day/day of the week/month of the year) correlates with the type of complaint?</h3>", "#Considering the top crimes" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dipanjank/ml
simple_implementations/qp_admm.ipynb
gpl-3.0
[ "<h1 align=\"center\">ADMM for Quadratic Programming</h1>\n\nIn this notebook, we implement the Alternating Direction Method of Multipliers algorithm for solving a standard-form Quadratic Program. The ADMM approach solves convex optimization problems by breaking them into smaller pieces, each of which are then easier to handle.\nThe standard form QP is:\nMinimize $\\dfrac {1} {2} x^T P x + q^T x + r$ subject to $lb \\le x \\le ub$.", "import numpy as np\nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')", "Pure-Python ADMM Implementation\nThe code below is a direct Python port of the reference MATLAB implementation in Reference[1].", "from numpy.linalg import inv, norm\n\ndef objective(P, q, r, x):\n \"\"\"Return the value of the Standard form QP using the current value of x.\"\"\"\n return 0.5 * np.dot(x, np.dot(P, x)) + np.dot(q, x) + r\n\n\ndef qp_admm(P, q, r, lb, ub,\n max_iter=1000,\n rho=1.0, \n alpha=1.2, \n atol=1e-4, \n rtol=1e-2):\n\n n = P.shape[0]\n \n x = np.zeros(n)\n z = np.zeros(n)\n u = np.zeros(n)\n \n history = []\n\n R = inv(P + rho * np.eye(n))\n \n for k in range(1, max_iter+1):\n x = np.dot(R, (z - u) - q)\n \n # z-update with relaxation\n z_old = z\n x_hat = alpha * x +(1 - alpha) * z_old\n z = np.minimum(ub, np.maximum(lb, x_hat + u))\n\n # u-update\n u = u + (x_hat - z)\n\n # diagnostics, and termination checks\n objval = objective(P, q, r, x)\n\n r_norm = norm(x - z)\n s_norm = norm(-rho * (z - z_old))\n eps_pri = np.sqrt(n) * atol + rtol * np.maximum(norm(x), norm(-z))\n eps_dual = np.sqrt(n)* atol + rtol * norm(rho*u)\n \n history.append({\n 'objval' : objval, \n 'r_norm' : r_norm, \n 's_norm' : s_norm,\n 'eps_pri' : eps_pri,\n 'eps_dual': eps_dual,\n })\n \n if r_norm < eps_pri and s_norm < eps_dual:\n print('Optimization terminated after {} iterations'.format(k))\n break;\n \n history = pd.DataFrame(history)\n return x, history\n", "QP Solver using CVXPY\nFor comparison, we also implement QP solver using cvxpy.", "import cvxpy as cvx\n\ndef qp_cvxpy(P, q, r, lb, ub,\n max_iter=1000,\n atol=1e-4, \n rtol=1e-2):\n n = P.shape[0]\n \n # The variable we want to solve for\n x = cvx.Variable(n)\n constraints = [x >= cvx.Constant(lb), x <= cvx.Constant(ub)]\n \n # Construct the QP expression using CVX Primitives\n # Note that in the CVX-meta language '*' of vectors of matrices indicates dot product, \n # not elementwise multiplication \n expr = cvx.Constant(0.5) * cvx.quad_form(x, cvx.Constant(P)) + cvx.Constant(q) * x + cvx.Constant(r)\n qp = cvx.Problem(cvx.Minimize(expr), constraints=constraints)\n qp.solve(max_iters=max_iter, abstol=atol, reltol=rtol, verbose=True) \n \n # The result is a Matrix object. Make it an NDArray and drop of 2nd dimension i.e. make it a vector.\n x_opt = np.array(x.value).squeeze()\n return x_opt", "Generate Optimal Portfolio Holdings\nIn this section, we define a helper function to load the one of the five asset returns datasets from OR library (Reference [2]). The data are available by requesting filenames port[1-5]. 
Each file contains a progressively larger set of asset returns, standard deviations of returns and correlations of returns.", "import requests\nfrom statsmodels.stats.moment_helpers import corr2cov\nfrom functools import lru_cache\n\n@lru_cache(maxsize=5)\ndef get_cov(filename):\n url = r'http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/{}.txt'.format(filename)\n data = requests.get(url).text\n lines = [line.strip() for line in data.split('\\n')]\n\n # First line is the number of assets\n n_assets = int(lines[0])\n \n # Next n_assets lines contain the space separated mean and stddev. of returns for each asset\n means_and_sds = pd.DataFrame(\n data=np.nan, \n index=range(0, n_assets), \n columns=['ret_mean', 'ret_std'])\n\n # Next n_assetsC2 lines contain the 1-based row and column index and the corresponding correlation\n for i in range(0, n_assets):\n mean, sd = map(float, lines[1+i].split())\n means_and_sds.loc[i, ['ret_mean', 'ret_std']] = [mean, sd]\n\n n_corrs = (n_assets * (n_assets + 1)) // 2\n corrs = pd.DataFrame(index=range(n_assets), columns=range(n_assets), data=np.nan)\n\n for i in range(0, n_corrs):\n row, col, corr = lines[n_assets + 1 + i].split()\n row, col = int(row)-1, int(col)-1\n corr = float(corr)\n corrs.loc[row, col] = corr\n corrs.loc[col, row] = corr\n \n cov = corr2cov(corrs, means_and_sds.ret_std)\n return cov", "Set up the Portfolio Optimization problem as a QP", "from numpy.random import RandomState\nrng = RandomState(0)\nP = get_cov('port1')\nn = P.shape[0]\nalphas = rng.uniform(-0.4, 0.4, size=n) \nq = -alphas\nub = np.ones_like(q)\nlb = np.zeros_like(q)\nr = 0", "Using ADMM", "%%time\nx_opt_admm, history = qp_admm(P, q, r, lb, ub)\n\nfig, ax = plt.subplots(history.shape[1], 1, figsize=(10, 8))\nax = history.plot(subplots=True, ax=ax, rot=0)", "Using CVXPY", "%%time\nx_opt_cvxpy = qp_cvxpy(P, q, r, lb, ub)", "Optimal Holdings Comparison", "holdings = pd.DataFrame(np.column_stack([x_opt_admm, x_opt_cvxpy]), columns=['opt_admm', 'opt_cvxpy'])\nfig, ax = plt.subplots(1, 1, figsize=(12, 4))\nax = holdings.plot(kind='bar', ax=ax, rot=0)\nlabels = ax.set(xlabel='Assets', ylabel='Holdings')", "References\n\nhttps://web.stanford.edu/~boyd/papers/admm/quadprog/quadprog.html\nhttp://people.brunel.ac.uk/~mastjjb/jeb/orlib/portinfo.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
keras-team/keras-io
examples/generative/ipynb/neural_style_transfer.ipynb
apache-2.0
[ "Neural style transfer\nAuthor: fchollet<br>\nDate created: 2016/01/11<br>\nLast modified: 2020/05/02<br>\nDescription: Transfering the style of a reference image to target image using gradient descent.\nIntroduction\nStyle transfer consists in generating an image\nwith the same \"content\" as a base image, but with the\n\"style\" of a different picture (typically artistic).\nThis is achieved through the optimization of a loss function\nthat has 3 components: \"style loss\", \"content loss\",\nand \"total variation loss\":\n\nThe total variation loss imposes local spatial continuity between\nthe pixels of the combination image, giving it visual coherence.\nThe style loss is where the deep learning keeps in --that one is defined\nusing a deep convolutional neural network. Precisely, it consists in a sum of\nL2 distances between the Gram matrices of the representations of\nthe base image and the style reference image, extracted from\ndifferent layers of a convnet (trained on ImageNet). The general idea\nis to capture color/texture information at different spatial\nscales (fairly large scales --defined by the depth of the layer considered).\nThe content loss is a L2 distance between the features of the base\nimage (extracted from a deep layer) and the features of the combination image,\nkeeping the generated image close enough to the original one.\n\nReference: A Neural Algorithm of Artistic Style\nSetup", "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.applications import vgg19\n\nbase_image_path = keras.utils.get_file(\"paris.jpg\", \"https://i.imgur.com/F28w3Ac.jpg\")\nstyle_reference_image_path = keras.utils.get_file(\n \"starry_night.jpg\", \"https://i.imgur.com/9ooB60I.jpg\"\n)\nresult_prefix = \"paris_generated\"\n\n# Weights of the different loss components\ntotal_variation_weight = 1e-6\nstyle_weight = 1e-6\ncontent_weight = 2.5e-8\n\n# Dimensions of the generated picture.\nwidth, height = keras.preprocessing.image.load_img(base_image_path).size\nimg_nrows = 400\nimg_ncols = int(width * img_nrows / height)\n", "Let's take a look at our base (content) image and our style reference image", "from IPython.display import Image, display\n\ndisplay(Image(base_image_path))\ndisplay(Image(style_reference_image_path))\n", "Image preprocessing / deprocessing utilities", "\ndef preprocess_image(image_path):\n # Util function to open, resize and format pictures into appropriate tensors\n img = keras.preprocessing.image.load_img(\n image_path, target_size=(img_nrows, img_ncols)\n )\n img = keras.preprocessing.image.img_to_array(img)\n img = np.expand_dims(img, axis=0)\n img = vgg19.preprocess_input(img)\n return tf.convert_to_tensor(img)\n\n\ndef deprocess_image(x):\n # Util function to convert a tensor into a valid image\n x = x.reshape((img_nrows, img_ncols, 3))\n # Remove zero-center by mean pixel\n x[:, :, 0] += 103.939\n x[:, :, 1] += 116.779\n x[:, :, 2] += 123.68\n # 'BGR'->'RGB'\n x = x[:, :, ::-1]\n x = np.clip(x, 0, 255).astype(\"uint8\")\n return x\n\n", "Compute the style transfer loss\nFirst, we need to define 4 utility functions:\n\ngram_matrix (used to compute the style loss)\nThe style_loss function, which keeps the generated image close to the local textures\nof the style reference image\nThe content_loss function, which keeps the high-level representation of the\ngenerated image close to that of the base image\nThe total_variation_loss function, a regularization loss which keeps the generated\nimage locally-coherent", "# The 
gram matrix of an image tensor (feature-wise outer product)\n\n\ndef gram_matrix(x):\n x = tf.transpose(x, (2, 0, 1))\n features = tf.reshape(x, (tf.shape(x)[0], -1))\n gram = tf.matmul(features, tf.transpose(features))\n return gram\n\n\n# The \"style loss\" is designed to maintain\n# the style of the reference image in the generated image.\n# It is based on the gram matrices (which capture style) of\n# feature maps from the style reference image\n# and from the generated image\n\n\ndef style_loss(style, combination):\n S = gram_matrix(style)\n C = gram_matrix(combination)\n channels = 3\n size = img_nrows * img_ncols\n return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))\n\n\n# An auxiliary loss function\n# designed to maintain the \"content\" of the\n# base image in the generated image\n\n\ndef content_loss(base, combination):\n return tf.reduce_sum(tf.square(combination - base))\n\n\n# The 3rd loss function, total variation loss,\n# designed to keep the generated image locally coherent\n\n\ndef total_variation_loss(x):\n a = tf.square(\n x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :]\n )\n b = tf.square(\n x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :]\n )\n return tf.reduce_sum(tf.pow(a + b, 1.25))\n\n", "Next, let's create a feature extraction model that retrieves the intermediate activations\nof VGG19 (as a dict, by name).", "# Build a VGG19 model loaded with pre-trained ImageNet weights\nmodel = vgg19.VGG19(weights=\"imagenet\", include_top=False)\n\n# Get the symbolic outputs of each \"key\" layer (we gave them unique names).\noutputs_dict = dict([(layer.name, layer.output) for layer in model.layers])\n\n# Set up a model that returns the activation values for every layer in\n# VGG19 (as a dict).\nfeature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict)\n", "Finally, here's the code that computes the style transfer loss.", "# List of layers to use for the style loss.\nstyle_layer_names = [\n \"block1_conv1\",\n \"block2_conv1\",\n \"block3_conv1\",\n \"block4_conv1\",\n \"block5_conv1\",\n]\n# The layer to use for the content loss.\ncontent_layer_name = \"block5_conv2\"\n\n\ndef compute_loss(combination_image, base_image, style_reference_image):\n input_tensor = tf.concat(\n [base_image, style_reference_image, combination_image], axis=0\n )\n features = feature_extractor(input_tensor)\n\n # Initialize the loss\n loss = tf.zeros(shape=())\n\n # Add content loss\n layer_features = features[content_layer_name]\n base_image_features = layer_features[0, :, :, :]\n combination_features = layer_features[2, :, :, :]\n loss = loss + content_weight * content_loss(\n base_image_features, combination_features\n )\n # Add style loss\n for layer_name in style_layer_names:\n layer_features = features[layer_name]\n style_reference_features = layer_features[1, :, :, :]\n combination_features = layer_features[2, :, :, :]\n sl = style_loss(style_reference_features, combination_features)\n loss += (style_weight / len(style_layer_names)) * sl\n\n # Add total variation loss\n loss += total_variation_weight * total_variation_loss(combination_image)\n return loss\n\n", "Add a tf.function decorator to loss & gradient computation\nTo compile it, and thus make it fast.", "\[email protected]\ndef compute_loss_and_grads(combination_image, base_image, style_reference_image):\n with tf.GradientTape() as tape:\n loss = compute_loss(combination_image, base_image, style_reference_image)\n grads = tape.gradient(loss, 
combination_image)\n return loss, grads\n\n", "The training loop\nRepeatedly run vanilla gradient descent steps to minimize the loss, and save the\nresulting image every 100 iterations.\nWe decay the learning rate by 0.96 every 100 steps.", "optimizer = keras.optimizers.SGD(\n keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96\n )\n)\n\nbase_image = preprocess_image(base_image_path)\nstyle_reference_image = preprocess_image(style_reference_image_path)\ncombination_image = tf.Variable(preprocess_image(base_image_path))\n\niterations = 4000\nfor i in range(1, iterations + 1):\n loss, grads = compute_loss_and_grads(\n combination_image, base_image, style_reference_image\n )\n optimizer.apply_gradients([(grads, combination_image)])\n if i % 100 == 0:\n print(\"Iteration %d: loss=%.2f\" % (i, loss))\n img = deprocess_image(combination_image.numpy())\n fname = result_prefix + \"_at_iteration_%d.png\" % i\n keras.preprocessing.image.save_img(fname, img)\n", "After 4000 iterations, you get the following result:", "display(Image(result_prefix + \"_at_iteration_4000.png\"))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rishuatgithub/MLPy
nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb
apache-2.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nNLP Basics Assessment - Solutions\nFor this assessment we'll be using the short story An Occurrence at Owl Creek Bridge by Ambrose Bierce (1890). <br>The story is in the public domain; the text file was obtained from Project Gutenberg.", "# RUN THIS CELL to perform standard imports:\nimport spacy\nnlp = spacy.load('en_core_web_sm')", "1. Create a Doc object from the file owlcreek.txt<br>\n\nHINT: Use with open('../TextFiles/owlcreek.txt') as f:", "# Enter your code here:\n\nwith open('../TextFiles/owlcreek.txt') as f:\n doc = nlp(f.read())\n\n# Run this cell to verify it worked:\n\ndoc[:36]", "2. How many tokens are contained in the file?", "len(doc)", "3. How many sentences are contained in the file?<br>HINT: You'll want to build a list first!", "sents = [sent for sent in doc.sents]\nlen(sents)", "4. Print the second sentence in the document<br> HINT: Indexing starts at zero, and the title counts as the first sentence.", "print(sents[1].text)", "5. For each token in the sentence above, print its text, POS tag, dep tag and lemma<br>\nCHALLENGE: Have values line up in columns in the print output.", "# NORMAL SOLUTION:\nfor token in sents[1]:\n print(token.text, token.pos_, token.dep_, token.lemma_)\n\n# CHALLENGE SOLUTION:\n for token in sents[1]:\n print(f'{token.text:{15}} {token.pos_:{5}} {token.dep_:{10}} {token.lemma_:{15}}')", "6. Write a matcher called 'Swimming' that finds both occurrences of the phrase \"swimming vigorously\" in the text<br>\nHINT: You should include an 'IS_SPACE': True pattern between the two words!", "# Import the Matcher library:\n\nfrom spacy.matcher import Matcher\nmatcher = Matcher(nlp.vocab)\n\n# Create a pattern and add it to matcher:\n\npattern = [{'LOWER': 'swimming'}, {'IS_SPACE': True, 'OP':'*'}, {'LOWER': 'vigorously'}]\n\nmatcher.add('Swimming', None, pattern)\n\n# Create a list of matches called \"found_matches\" and print the list:\n\nfound_matches = matcher(doc)\nprint(found_matches)", "7. Print the text surrounding each found match", "print(doc[1265:1290])\n\nprint(doc[3600:3615])", "EXTRA CREDIT:<br>Print the sentence that contains each found match", "for sent in sents:\n if found_matches[0][1] < sent.end:\n print(sent)\n break\n\nfor sent in sents:\n if found_matches[1][1] < sent.end:\n print(sent)\n break", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nmih/ssbio
docs/notebooks/GEM-PRO - SBML Model.ipynb
mit
[ "GEM-PRO - SBML Model\nThis notebook gives an example of how to run the GEM-PRO pipeline with a SBML model, in this case iNJ661, the metabolic model of M. tuberculosis.\n<div class=\"alert alert-info\">\n\n**Input:** \nGEM (in SBML, JSON, or MAT formats)\n\n</div>\n\n<div class=\"alert alert-info\">\n\n**Output:**\nGEM-PRO model\n\n</div>\n\nImports", "import sys\nimport logging\n\n# Import the GEM-PRO class\nfrom ssbio.pipeline.gempro import GEMPRO\n\n# Printing multiple outputs per cell\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"", "Logging\nSet the logging level in logger.setLevel(logging.&lt;LEVEL_HERE&gt;) to specify how verbose you want the pipeline to be. Debug is most verbose.\n\nCRITICAL\nOnly really important messages shown\n\n\nERROR\nMajor errors\n\n\nWARNING\nWarnings that don't affect running of the pipeline\n\n\nINFO (default)\nInfo such as the number of structures mapped per gene\n\n\nDEBUG\nReally detailed information that will print out a lot of stuff\n\n\n\n<div class=\"alert alert-warning\">\n\n**Warning:** \n`DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!\n</div>", "# Create logger\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #\n\n# Other logger stuff for Jupyter notebooks\nhandler = logging.StreamHandler(sys.stderr)\nformatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt=\"%Y-%m-%d %H:%M\")\nhandler.setFormatter(formatter)\nlogger.handlers = [handler]", "Initialization of the project\nSet these three things:\n\nROOT_DIR\nThe directory where a folder named after your PROJECT will be created\n\n\nPROJECT\nYour project name\n\n\nLIST_OF_GENES\nYour list of gene IDs\n\n\n\nA directory will be created in ROOT_DIR with your PROJECT name. The folders are organized like so:\n```\n ROOT_DIR\n └── PROJECT\n ├── data # General storage for pipeline outputs\n ├── model # SBML and GEM-PRO models are stored here\n ├── genes # Per gene information\n │   ├── <gene_id1> # Specific gene directory\n │   │   └── protein\n │   │   ├── sequences # Protein sequence files, alignments, etc.\n │   │   └── structures # Protein structure files, calculations, etc.\n │   └── <gene_id2>\n │      └── protein\n │      ├── sequences\n │      └── structures\n ├── reactions # Per reaction information\n │ └── <reaction_id1> # Specific reaction directory\n │ └── complex\n │ └── structures # Protein complex files\n └── metabolites # Per metabolite information\n └── <metabolite_id1> # Specific metabolite directory\n └── chemical\n └── structures # Metabolite 2D and 3D structure files\n```\n<div class=\"alert alert-info\">**Note:** Methods for protein complexes and metabolites are still in development.</div>", "# SET FOLDERS AND DATA HERE\nimport tempfile\nROOT_DIR = tempfile.gettempdir()\n\nPROJECT = 'mtuberculosis_gp'\nGEM_FILE = '../../ssbio/test/test_files/models/iNJ661.json'\nGEM_FILE_TYPE = 'json'\nPDB_FILE_TYPE = 'mmtf'\n\n# Create the GEM-PRO project\nmy_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, gem_file_path=GEM_FILE, gem_file_type=GEM_FILE_TYPE, pdb_file_type=PDB_FILE_TYPE)", "Mapping gene ID --> sequence\nFirst, we need to map these IDs to their protein sequences. There are 2 ID mapping services provided to do this - through KEGG or UniProt. 
The end goal is to map a UniProt ID to each ID, since there is a comprehensive mapping (and some useful APIs) between UniProt and the PDB.\n<p><div class=\"alert alert-info\">**Note:** You only need to map gene IDs using one service. However you can run both if some genes don't map in one service and do map in another!</div></p>\n\nHowever, you don't need to map using these services if you already have the amino acid sequences for each protein. You can just manually load in the sequences as shown using the method manual_seq_mapping. Or, if you already have the UniProt IDs, you can load those in using the method manual_uniprot_mapping.\nMethods", "gene_to_seq_dict = {'Rv1295': 'MTVPPTATHQPWPGVIAAYRDRLPVGDDWTPVTLLEGGTPLIAATNLSKQTGCTIHLKVEGLNPTGSFKDRGMTMAVTDALAHGQRAVLCASTGNTSASAAAYAARAGITCAVLIPQGKIAMGKLAQAVMHGAKIIQIDGNFDDCLELARKMAADFPTISLVNSVNPVRIEGQKTAAFEIVDVLGTAPDVHALPVGNAGNITAYWKGYTEYHQLGLIDKLPRMLGTQAAGAAPLVLGEPVSHPETIATAIRIGSPASWTSAVEAQQQSKGRFLAASDEEILAAYHLVARVEGVFVEPASAASIAGLLKAIDDGWVARGSTVVCTVTGNGLKDPDTALKDMPSVSPVPVDPVAVVEKLGLA',\n 'Rv2233': 'VSSPRERRPASQAPRLSRRPPAHQTSRSSPDTTAPTGSGLSNRFVNDNGIVTDTTASGTNCPPPPRAAARRASSPGESPQLVIFDLDGTLTDSARGIVSSFRHALNHIGAPVPEGDLATHIVGPPMHETLRAMGLGESAEEAIVAYRADYSARGWAMNSLFDGIGPLLADLRTAGVRLAVATSKAEPTARRILRHFGIEQHFEVIAGASTDGSRGSKVDVLAHALAQLRPLPERLVMVGDRSHDVDGAAAHGIDTVVVGWGYGRADFIDKTSTTVVTHAATIDELREALGV'}\nmy_gempro.manual_seq_mapping(gene_to_seq_dict)\n\nmanual_uniprot_dict = {'Rv1755c': 'P9WIA9', 'Rv2321c': 'P71891', 'Rv0619': 'Q79FY3', 'Rv0618': 'Q79FY4', 'Rv2322c': 'P71890'}\nmy_gempro.manual_uniprot_mapping(manual_uniprot_dict)\nmy_gempro.df_uniprot_metadata.tail(4)\n\n# KEGG mapping of gene ids\nmy_gempro.kegg_mapping_and_metadata(kegg_organism_code='mtu')\nprint('Missing KEGG mapping: ', my_gempro.missing_kegg_mapping)\nmy_gempro.df_kegg_metadata.head()\n\n# UniProt mapping\nmy_gempro.uniprot_mapping_and_metadata(model_gene_source='TUBERCULIST_ID')\nprint('Missing UniProt mapping: ', my_gempro.missing_uniprot_mapping)\nmy_gempro.df_uniprot_metadata.head()", "If you have mapped with both KEGG and UniProt mappers, then you can set a representative sequence for the gene using this function. If you used just one, this will just set that ID as representative.\n\nIf any sequences or IDs were provided manually, these will be set as representative first.\nUniProt mappings override KEGG mappings except when KEGG mappings have PDBs associated with them and UniProt doesn't.", "# Set representative sequences\nmy_gempro.set_representative_sequence()\nprint('Missing a representative sequence: ', my_gempro.missing_representative_sequence)\nmy_gempro.df_representative_sequences.head()", "Mapping representative sequence --> structure\nThese are the ways to map sequence to structure:\n\nUse the UniProt ID and their automatic mappings to the PDB\nBLAST the sequence to the PDB\nMake homology models or \nMap to existing homology models\n\nYou can only utilize option #1 to map to PDBs if there is a mapped UniProt ID set in the representative sequence. If not, you'll have to BLAST your sequence to the PDB or make a homology model. 
You can also run both for maximum coverage.\nMethods", "# Mapping using the PDBe best_structures service\nmy_gempro.map_uniprot_to_pdb(seq_ident_cutoff=.3)\nmy_gempro.df_pdb_ranking.head()\n\n# Mapping using BLAST\nmy_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.9, evalue=0.00001)\nmy_gempro.df_pdb_blast.head(2)\n\ntb_homology_dir = '/home/nathan/projects_archive/homology_models/MTUBERCULOSIS/'\n\n##### EXAMPLE SPECIFIC CODE #####\n# Needed to map to older IDs used in this example\nimport pandas as pd\nimport os.path as op\nold_gene_to_homology = pd.read_csv(op.join(tb_homology_dir, 'data/161031-old_gene_to_uniprot_mapping.csv'))\ngene_to_uniprot = old_gene_to_homology.set_index('m_gene').to_dict()['u_uniprot_acc']\nmy_gempro.get_itasser_models(homology_raw_dir=op.join(tb_homology_dir, 'raw'), custom_itasser_name_mapping=gene_to_uniprot)\n### END EXAMPLE SPECIFIC CODE ###\n\n# Organizing I-TASSER homology models\nmy_gempro.get_itasser_models(homology_raw_dir=op.join(tb_homology_dir, 'raw'))\nmy_gempro.df_homology_models.head()\n\nhomology_model_dict = {}\nmy_gempro.get_manual_homology_models(homology_model_dict)", "Downloading and ranking structures\nMethods\n<div class=\"alert alert-warning\">\n\n**Warning:** \nDownloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.\n\n</div>", "# Download all mapped PDBs and gather the metadata\nmy_gempro.pdb_downloader_and_metadata()\nmy_gempro.df_pdb_metadata.head(2)\n\n# Set representative structures\nmy_gempro.set_representative_structure()\nmy_gempro.df_representative_structures.head()\n\n# Looking at the information saved within a gene\nmy_gempro.genes.get_by_id('Rv1295').protein.representative_structure\nmy_gempro.genes.get_by_id('Rv1295').protein.representative_structure.get_dict()", "Creating homology models\nFor those proteins with no representative structure, we can create homology models for them. ssbio contains some built in functions for easily running I-TASSER locally or on machines with SLURM (ie. on NERSC) or Torque job scheduling.\nYou can load in I-TASSER models once they complete using the get_itasser_models later.\n<p><div class=\"alert alert-info\">**Info:** Homology modeling can take a long time - about 24-72 hours per protein (highly dependent on the sequence length, as well as if there are available templates).</div></p>\n\nMethods", "# Prep I-TASSER model folders\nmy_gempro.prep_itasser_modeling('~/software/I-TASSER4.4', '~/software/ITLIB/', runtype='local', all_genes=False)", "Saving your GEM-PRO\nFinally, you can save your GEM-PRO as a JSON or pickle file, so you don't have to run the pipeline again. \nFor most functions, if you rerun them, they will check for existing results saved as files. The only function that would take a long time is setting the representative structure, as they are each rechecked and cleaned. This is where saving helps!\n<p><div class=\"alert alert-warning\">**Warning:** Saving in JSON format is still experimental. 
For a full GEM-PRO with sequences & structures, depending on the number of genes, saving can take >5 minutes.</div></p>", "import os.path as op\nmy_gempro.save_pickle(op.join(my_gempro.model_dir, '{}.pckl'.format(my_gempro.id)))\n\nimport os.path as op\nmy_gempro.save_json(op.join(my_gempro.model_dir, '{}.json'.format(my_gempro.id)), compression=False)", "Loading a saved GEM-PRO", "# Loading a pickle file\nimport pickle\nwith open('/tmp/mtuberculosis_gp_atlas/model/mtuberculosis_gp_atlas.pckl', 'rb') as f:\n my_saved_gempro = pickle.load(f)\n\n# Loading a JSON file\nimport ssbio.core.io\nmy_saved_gempro = ssbio.core.io.load_json('/tmp/mtuberculosis_gp_atlas/model/mtuberculosis_gp_atlas.json', decompression=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dirkseidensticker/CARD
Python/aDRACtoOxCal.ipynb
mit
[ "Conversion to OxCal-compliant output\nArchives des datations radiocarbone d'Afrique centrale\n\nDirk Seidensticker\nsee: https://c14.arch.ox.ac.uk/embed.php?File=oxcal.html", "%matplotlib inline\nfrom IPython.display import display\nimport pandas as pd", "Conversion of the Data into OxCal-usable Form", "df = pd.read_csv(\"https://raw.githubusercontent.com/dirkseidensticker/aDRAC/master/data/aDRAC.csv\", encoding='utf8')\ndisplay(df.head())", "Choosing only the first five entries as subsample:", "df_sub = df.head()", "OxCal-compliant output:", "print('''Plot()\n{''')\nfor index, row in df_sub.iterrows():\n print('R_Date(\"', row['SITE'],'/', row['FEATURE'], '-', row['LABNR'],'\",', row['C14AGE'],',', row['C14STD'],');')\nprint('};')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/probability
tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Variational Inference on Probabilistic Graphical Models with Joint Distributions\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Variational_Inference_and_Joint_Distributions\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nVariational Inference (VI) casts approximate Bayesian inference as an optimization problem and seeks a 'surrogate' posterior distribution that minimizes the KL divergence with the true posterior. Gradient-based VI is often faster than MCMC methods, composes naturally with optimization of model parameters, and provides a lower bound on model evidence that can be used directly for model comparison, convergence diagnosis, and composable inference.\nTensorFlow Probability offers tools for fast, flexible, and scalable VI that fit naturally into the TFP stack. These tools enable the construction of surrogate posteriors with covariance structures induced by linear transformations or normalizing flows.\nVI can be used to estimate Bayesian credible intervals for parameters of a regression model to estimate the effects of various treatments or observed features on an outcome of interest. Credible intervals bound the values of an unobserved parameter with a certain probability, according to the posterior distribution of the parameter conditioned on observed data and given an assumption on the parameter's prior distribution.\nIn this Colab, we demonstrate how to use VI to obtain credible intervals for parameters of a Bayesian linear regression model for radon levels measured in homes (using Gelman et al.'s (2007) Radon dataset; see similar examples in Stan). We demonstrate how TFP JointDistributions combine with bijectors to build and fit two types of expressive surrogate posteriors:\n\na standard Normal distribution transformed by a block matrix. 
The matrix may reflect independence among some components of the posterior and dependence among others, relaxing the assumption of a mean-field or full-covariance posterior.\na more complex, higher-capacity inverse autoregressive flow.\n\nThe surrogate posteriors are trained and compared with results from a mean-field surrogate posterior baseline, as well as ground-truth samples from Hamiltonian Monte Carlo.\nOverview of Bayesian Variational Inference\nSuppose we have the following generative process, where $\\theta$ represents random parameters, $\\omega$ represents deterministic parameters, and the $x_i$ are features and the $y_i$ are target values for $i=1,\\ldots,n$ observed data points:\n\\begin{align}\n&\\theta \\sim r(\\Theta) && \\text{(Prior)}\\\n&\\text{for } i = 1 \\ldots n: \\nonumber \\\n&\\quad y_i \\sim p(Y_i|x_i, \\theta, \\omega) && \\text{(Likelihood)}\n\\end{align}\nVI is then characterized by:\n$\\newcommand{\\E}{\\operatorname{\\mathbb{E}}}\n\\newcommand{\\K}{\\operatorname{\\mathbb{K}}}\n\\newcommand{\\defeq}{\\overset{\\tiny\\text{def}}{=}}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}$\n\\begin{align}\n-\\log p({y_i}i^n|{x_i}_i^n, \\omega)\n&\\defeq -\\log \\int \\textrm{d}\\theta\\, r(\\theta) \\prod_i^n p(y_i|x_i,\\theta, \\omega) && \\text{(Really hard integral)} \\\n&= -\\log \\int \\textrm{d}\\theta\\, q(\\theta) \\frac{1}{q(\\theta)} r(\\theta) \\prod_i^n p(y_i|x_i,\\theta, \\omega) && \\text{(Multiply by 1)}\\\n&\\le - \\int \\textrm{d}\\theta\\, q(\\theta) \\log \\frac{r(\\theta) \\prod_i^n p(y_i|x_i,\\theta, \\omega)}{q(\\theta)} && \\text{(Jensen's inequality)}\\\n&\\defeq \\E{q(\\Theta)}[ -\\log p(y_i|x_i,\\Theta, \\omega) ] + \\K[q(\\Theta), r(\\Theta)]\\\n&\\defeq \\text{expected negative log likelihood\"} +\\text{kl regularizer\"}\n\\end{align}\n(Technically we're assuming $q$ is absolutely continuous with respect to $r$. See also, Jensen's inequality.)\nSince the bound holds for all q, it is obviously tightest for:\n$$q^,w^ = \\argmin_{q \\in \\mathcal{Q},\\omega\\in\\mathbb{R}^d} \\left{ \\sum_i^n\\E_{q(\\Theta)}\\left[ -\\log p(y_i|x_i,\\Theta, \\omega) \\right] + \\K[q(\\Theta), r(\\Theta)] \\right}$$\nRegarding terminology, we call\n\n$q^*$ the \"surrogate posterior,\" and,\n$\\mathcal{Q}$ the \"surrogate family.\"\n\n$\\omega^*$ represents the maximum-likelihood values of the deterministic parameters on the VI loss. See this survey for more information on variational inference.\nExample: Bayesian hierarchical linear regression on Radon measurements\nRadon is a radioactive gas that enters homes through contact points with the\nground. It is a carcinogen that is the primary cause of lung cancer in\nnon-smokers. Radon levels vary greatly from household to household.\nThe EPA did a study of radon levels in 80,000 houses. Two important predictors\nare:\n- Floor on which the measurement was taken (radon higher in basements)\n- County uranium level (positive correlation with radon levels)\nPredicting radon levels in houses grouped by county is a classic problem in Bayesian hierarchical modeling, introduced by Gelman and Hill (2006). We will build a hierarchical linear model to predict radon measurements in houses, in which the hierarchy is the grouping of houses by county. We are interested in credible intervals for the effect of location (county) on the radon level of houses in Minnesota. In order to isolate this effect, the effects of floor and uranium level are also included in the model. 
Additionaly, we will incorporate a contextual effect corresponding to the mean floor on which the measurement was taken, by county, so that if there is variation among counties of the floor on which the measurements were taken, this is not attributed to the county effect.", "!pip3 install -q tf-nightly tfp-nightly\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\nimport tensorflow_probability as tfp\nimport warnings\n\ntfd = tfp.distributions\ntfb = tfp.bijectors\n\nplt.rcParams['figure.facecolor'] = '1.'\n\n# Load the Radon dataset from `tensorflow_datasets` and filter to data from\n# Minnesota.\ndataset = tfds.as_numpy(\n tfds.load('radon', split='train').filter(\n lambda x: x['features']['state'] == 'MN').batch(10**9))\n\n# Dependent variable: Radon measurements by house.\ndataset = next(iter(dataset))\nradon_measurement = dataset['activity'].astype(np.float32)\nradon_measurement[radon_measurement <= 0.] = 0.1\nlog_radon = np.log(radon_measurement)\n\n# Measured uranium concentrations in surrounding soil.\nuranium_measurement = dataset['features']['Uppm'].astype(np.float32)\nlog_uranium = np.log(uranium_measurement)\n\n# County indicator.\ncounty_strings = dataset['features']['county'].astype('U13')\nunique_counties, county = np.unique(county_strings, return_inverse=True)\ncounty = county.astype(np.int32)\nnum_counties = unique_counties.size\n\n# Floor on which the measurement was taken.\nfloor_of_house = dataset['features']['floor'].astype(np.int32)\n\n# Average floor by county (contextual effect).\ncounty_mean_floor = []\nfor i in range(num_counties):\n county_mean_floor.append(floor_of_house[county == i].mean())\ncounty_mean_floor = np.array(county_mean_floor, dtype=log_radon.dtype)\nfloor_by_county = county_mean_floor[county]", "The regression model is specified as follows:\n$\\newcommand{\\Normal}{\\operatorname{\\sf Normal}}$\n\\begin{align}\n&\\text{uranium_weight} \\sim \\Normal(0, 1) \\\n&\\text{county_floor_weight} \\sim \\Normal(0, 1) \\\n&\\text{for } j = 1\\ldots \\text{num_counties}:\\\n&\\quad \\text{county_effect}j \\sim \\Normal (0, \\sigma_c)\\\n&\\text{for } i = 1\\ldots n:\\\n&\\quad \\mu_i = ( \\\n&\\quad\\quad \\text{bias} \\\n&\\quad\\quad + \\text{county_effect}{\\text{county}i} \\\n&\\quad\\quad +\\text{log_uranium}_i \\times \\text{uranium_weight} \\\n&\\quad\\quad +\\text{floor_of_house}_i \\times \\text{floor_weight} \\ \n&\\quad\\quad +\\text{floor_by_county}{\\text{county}_i} \\times \\text{county_floor_weight} ) \\\n&\\quad \\text{log_radon}_i \\sim \\Normal(\\mu_i, \\sigma_y)\n\\end{align}\nin which $i$ indexes the observations and $\\text{county}_i$ is the county in which the $i$th observation was taken.\nWe use a county-level random effect to capture geographical variation. The parameters uranium_weight and county_floor_weight are modeled probabilistically, and floor_weight and the constant bias are deterministic. These modeling choices are largely arbitrary, and are made for the purpose of demonstrating VI on a probabilistic model of reasonable complexity. 
For a more thorough discussion of multilevel modeling with fixed and random effects in TFP, using the radon dataset, see Multilevel Modeling Primer and Fitting Generalized Linear Mixed-effects Models Using Variational Inference.", "# Create variables for fixed effects.\nfloor_weight = tf.Variable(0.)\nbias = tf.Variable(0.)\n\n# Variables for scale parameters.\nlog_radon_scale = tfp.util.TransformedVariable(1., tfb.Exp())\ncounty_effect_scale = tfp.util.TransformedVariable(1., tfb.Exp())\n\n# Define the probabilistic graphical model as a JointDistribution.\[email protected]\ndef model():\n uranium_weight = yield tfd.Normal(0., scale=1., name='uranium_weight')\n county_floor_weight = yield tfd.Normal(\n 0., scale=1., name='county_floor_weight')\n county_effect = yield tfd.Sample(\n tfd.Normal(0., scale=county_effect_scale),\n sample_shape=[num_counties], name='county_effect')\n yield tfd.Normal(\n loc=(log_uranium * uranium_weight + floor_of_house* floor_weight\n + floor_by_county * county_floor_weight\n + tf.gather(county_effect, county, axis=-1)\n + bias),\n scale=log_radon_scale[..., tf.newaxis],\n name='log_radon') \n\n# Pin the observed `log_radon` values to model the un-normalized posterior.\ntarget_model = model.experimental_pin(log_radon=log_radon)", "Expressive surrogate posteriors\nNext we estimate the posterior distributions of the random effects using VI with two different types of surrogate posteriors:\n- A constrained multivariate Normal distribution, with covariance structure induced by a blockwise matrix transformation.\n- A multivariate Standard Normal distribution transformed by an Inverse Autoregressive Flow, which is then split and restructured to match the support of the posterior.\nMultivariate Normal surrogate posterior\nTo build this surrogate posterior, a trainable linear operator is used to induce correlation among the components of the posterior.", "# Determine the `event_shape` of the posterior, and calculate the size of each\n# `event_shape` component. These determine the sizes of the components of the\n# underlying standard Normal distribution, and the dimensions of the blocks in\n# the blockwise matrix transformation.\nevent_shape = target_model.event_shape_tensor()\nflat_event_shape = tf.nest.flatten(event_shape)\nflat_event_size = tf.nest.map_structure(tf.reduce_prod, flat_event_shape)\n\n# The `event_space_bijector` maps unconstrained values (in R^n) to the support\n# of the prior -- we'll need this at the end to constrain Multivariate Normal\n# samples to the prior's support.\nevent_space_bijector = target_model.experimental_default_event_space_bijector()", "Construct a JointDistribution with vector-valued standard Normal components, with sizes determined by the corresponding prior components. The components should be vector-valued so they can be transformed by the linear operator.", "base_standard_dist = tfd.JointDistributionSequential(\n [tfd.Sample(tfd.Normal(0., 1.), s) for s in flat_event_size])", "Build a trainable blockwise lower-triangular linear operator. We'll apply it to the standard Normal distribution to implement a (trainable) blockwise matrix transformation and induce the correlation structure of the posterior.\nWithin the blockwise linear operator, a trainable full-matrix block represents full covariance between two components of the posterior, while a block of zeros (or None) expresses independence. 
Blocks on the diagonal are either lower-triangular or diagonal matrices, so that the entire block structure represents a lower-triangular matrix.\nApplying this bijector to the base distribution results in a multivariate Normal distribution with mean 0 and (Cholesky-factored) covariance equal to the lower-triangular block matrix.", "operators = (\n (tf.linalg.LinearOperatorDiag,), # Variance of uranium weight (scalar).\n (tf.linalg.LinearOperatorFullMatrix, # Covariance between uranium and floor-by-county weights.\n tf.linalg.LinearOperatorDiag), # Variance of floor-by-county weight (scalar).\n (None, # Independence between uranium weight and county effects.\n None, # Independence between floor-by-county and county effects.\n tf.linalg.LinearOperatorDiag) # Independence among the 85 county effects.\n )\n\nblock_tril_linop = (\n tfp.experimental.vi.util.build_trainable_linear_operator_block(\n operators, flat_event_size))\nscale_bijector = tfb.ScaleMatvecLinearOperatorBlock(block_tril_linop)", "After applying the linear operator to the standard Normal distribution, apply a multipart Shift bijector to allow the mean to take nonzero values.", "loc_bijector = tfb.JointMap(\n tf.nest.map_structure(\n lambda s: tfb.Shift(\n tf.Variable(tf.random.uniform(\n (s,), minval=-2., maxval=2., dtype=tf.float32))),\n flat_event_size))", "The resulting multivariate Normal distribution, obtained by transforming the standard Normal distribution with the scale and location bijectors, must be reshaped and restructured to match the prior, and finally constrained to the support of the prior.", "# Reshape each component to match the prior, using a nested structure of\n# `Reshape` bijectors wrapped in `JointMap` to form a multipart bijector.\nreshape_bijector = tfb.JointMap(\n tf.nest.map_structure(tfb.Reshape, flat_event_shape))\n\n# Restructure the flat list of components to match the prior's structure\nunflatten_bijector = tfb.Restructure(\n tf.nest.pack_sequence_as(\n event_shape, range(len(flat_event_shape))))", "Now, put it all together -- chain the trainable bijectors together and apply them to the base standard Normal distribution to construct the surrogate posterior.", "surrogate_posterior = tfd.TransformedDistribution(\n base_standard_dist,\n bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order\n [\n event_space_bijector, # constrain the surrogate to the support of the prior\n unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior\n reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components\n loc_bijector, # allow for nonzero mean\n scale_bijector # apply the block matrix transformation to the standard Normal distribution\n ]))", "Train the multivariate Normal surrogate posterior.", "optimizer = tf.optimizers.Adam(learning_rate=1e-2)\nmvn_loss = tfp.vi.fit_surrogate_posterior(\n target_model.unnormalized_log_prob,\n surrogate_posterior,\n optimizer=optimizer,\n num_steps=10**4,\n sample_size=16,\n jit_compile=True)\n\nmvn_samples = surrogate_posterior.sample(1000)\nmvn_final_elbo = tf.reduce_mean(\n target_model.unnormalized_log_prob(*mvn_samples)\n - surrogate_posterior.log_prob(mvn_samples))\n\nprint('Multivariate Normal surrogate posterior ELBO: {}'.format(mvn_final_elbo))\n\nplt.plot(mvn_loss)\nplt.xlabel('Training step')\n_ = plt.ylabel('Loss value')", "Since the trained surrogate posterior is a TFP distribution, we can take samples from it and process them to produce posterior 
credible intervals for the parameters.\nThe box-and-whiskers plots below show 50% and 95% credible intervals for the county effect of the two largest counties and the regression weights on soil uranium measurements and mean floor by county. The posterior credible intervals for county effects indicate that location in St. Louis county is associated with lower radon levels, after accounting for other variables, and that the effect of location in Hennepin county is near neutral.\nPosterior credible intervals on the regression weights show that higher levels of soil uranium are associated with higher radon levels, and counties where measurements were taken on higher floors (likely because the house didn't have a basement) tend to have higher levels of radon, which could relate to soil properties and their effect on the type of structures built.\nThe (deterministic) coefficient of floor is negative, indicating that lower floors have higher radon levels, as expected.", "st_louis_co = 69 # Index of St. Louis, the county with the most observations.\nhennepin_co = 25 # Index of Hennepin, with the second-most observations.\n\ndef pack_samples(samples):\n return {'County effect (St. Louis)': samples.county_effect[..., st_louis_co],\n 'County effect (Hennepin)': samples.county_effect[..., hennepin_co],\n 'Uranium weight': samples.uranium_weight,\n 'Floor-by-county weight': samples.county_floor_weight}\n\ndef plot_boxplot(posterior_samples):\n fig, axes = plt.subplots(1, 4, figsize=(16, 4))\n\n # Invert the results dict for easier plotting.\n k = list(posterior_samples.values())[0].keys()\n plot_results = {\n v: {p: posterior_samples[p][v] for p in posterior_samples} for v in k}\n for i, (var, var_results) in enumerate(plot_results.items()):\n sns.boxplot(data=list(var_results.values()), ax=axes[i],\n width=0.18*len(var_results), whis=(2.5, 97.5))\n # axes[i].boxplot(list(var_results.values()), whis=(2.5, 97.5))\n axes[i].title.set_text(var)\n fs = 10 if len(var_results) < 4 else 8\n axes[i].set_xticklabels(list(var_results.keys()), fontsize=fs)\n\nresults = {'Multivariate Normal': pack_samples(mvn_samples)}\n\nprint('Bias is: {:.2f}'.format(bias.numpy()))\nprint('Floor fixed effect is: {:.2f}'.format(floor_weight.numpy()))\nplot_boxplot(results)", "Inverse Autoregressive Flow surrogate posterior\nInverse Autoregressive Flows (IAFs) are normalizing flows that use neural networks to capture complex, nonlinear dependencies among components of the distribution. Next we build an IAF surrogate posterior to see whether this higher-capacity, more fiexible model outperforms the constrained multivariate Normal.", "# Build a standard Normal with a vector `event_shape`, with length equal to the\n# total number of degrees of freedom in the posterior.\nbase_distribution = tfd.Sample(\n tfd.Normal(0., 1.), sample_shape=[tf.reduce_sum(flat_event_size)])\n\n# Apply an IAF to the base distribution.\nnum_iafs = 2\niaf_bijectors = [\n tfb.Invert(tfb.MaskedAutoregressiveFlow(\n shift_and_log_scale_fn=tfb.AutoregressiveNetwork(\n params=2, hidden_units=[256, 256], activation='relu')))\n for _ in range(num_iafs)\n]\n\n# Split the base distribution's `event_shape` into components that are equal\n# in size to the prior's components.\nsplit = tfb.Split(flat_event_size)\n\n# Chain these bijectors and apply them to the standard Normal base distribution\n# to build the surrogate posterior. 
`event_space_bijector`,\n# `unflatten_bijector`, and `reshape_bijector` are the same as in the\n# multivariate Normal surrogate posterior.\niaf_surrogate_posterior = tfd.TransformedDistribution(\n base_distribution,\n bijector=tfb.Chain([\n event_space_bijector, # constrain the surrogate to the support of the prior\n unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the prior\n reshape_bijector, # reshape the vector-valued components to match the shapes of the prior components\n split] + # Split the samples into components of the same size as the prior components\n iaf_bijectors # Apply a flow model to the Tensor-valued standard Normal distribution\n ))", "Train the IAF surrogate posterior.", "optimizer=tf.optimizers.Adam(learning_rate=1e-2)\niaf_loss = tfp.vi.fit_surrogate_posterior(\n target_model.unnormalized_log_prob,\n iaf_surrogate_posterior,\n optimizer=optimizer,\n num_steps=10**4,\n sample_size=4,\n jit_compile=True)\n\niaf_samples = iaf_surrogate_posterior.sample(1000)\niaf_final_elbo = tf.reduce_mean(\n target_model.unnormalized_log_prob(*iaf_samples)\n - iaf_surrogate_posterior.log_prob(iaf_samples))\nprint('IAF surrogate posterior ELBO: {}'.format(iaf_final_elbo))\n\nplt.plot(iaf_loss)\nplt.xlabel('Training step')\n_ = plt.ylabel('Loss value')", "The credible intervals for the IAF surrogate posterior appear similar to those of the constrained multivariate Normal.", "results['IAF'] = pack_samples(iaf_samples)\nplot_boxplot(results)", "Baseline: Mean-field surrogate posterior\nVI surrogate posteriors are often assumed to be mean-field (independent) Normal distributions, with trainable means and variances, that are constrained to the support of the prior with a bijective transformation. We define a mean-field surrogate posterior in addition to the two more expressive surrogate posteriors, using the same general formula as the multivariate Normal surrogate posterior.", "# A block-diagonal linear operator, in which each block is a diagonal operator,\n# transforms the standard Normal base distribution to produce a mean-field\n# surrogate posterior.\noperators = (tf.linalg.LinearOperatorDiag,\n tf.linalg.LinearOperatorDiag,\n tf.linalg.LinearOperatorDiag)\nblock_diag_linop = (\n tfp.experimental.vi.util.build_trainable_linear_operator_block(\n operators, flat_event_size))\nmean_field_scale = tfb.ScaleMatvecLinearOperatorBlock(block_diag_linop)\n\nmean_field_loc = tfb.JointMap(\n tf.nest.map_structure(\n lambda s: tfb.Shift(\n tf.Variable(tf.random.uniform(\n (s,), minval=-2., maxval=2., dtype=tf.float32))),\n flat_event_size))\n\nmean_field_surrogate_posterior = tfd.TransformedDistribution(\n base_standard_dist,\n bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order\n [\n event_space_bijector, # constrain the surrogate to the support of the prior\n unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior\n reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components\n mean_field_loc, # allow for nonzero mean\n mean_field_scale # apply the block matrix transformation to the standard Normal distribution\n ]))\n\noptimizer=tf.optimizers.Adam(learning_rate=1e-2)\nmean_field_loss = tfp.vi.fit_surrogate_posterior(\n target_model.unnormalized_log_prob,\n mean_field_surrogate_posterior,\n optimizer=optimizer,\n num_steps=10**4,\n sample_size=16,\n jit_compile=True)\n\nmean_field_samples = 
mean_field_surrogate_posterior.sample(1000)\nmean_field_final_elbo = tf.reduce_mean(\n target_model.unnormalized_log_prob(*mean_field_samples)\n - mean_field_surrogate_posterior.log_prob(mean_field_samples))\nprint('Mean-field surrogate posterior ELBO: {}'.format(mean_field_final_elbo))\n\nplt.plot(mean_field_loss)\nplt.xlabel('Training step')\n_ = plt.ylabel('Loss value')", "In this case, the mean field surrogate posterior gives similar results to the more expressive surrogate posteriors, indicating that this simpler model may be adequate for the inference task.", "results['Mean Field'] = pack_samples(mean_field_samples)\nplot_boxplot(results)", "Ground truth: Hamiltonian Monte Carlo (HMC)\nWe use HMC to generate \"ground truth\" samples from the true posterior, for comparison with results of the surrogate posteriors.", "num_chains = 8\nnum_leapfrog_steps = 3\nstep_size = 0.4\nnum_steps=20000\n\nflat_event_shape = tf.nest.flatten(target_model.event_shape)\nenum_components = list(range(len(flat_event_shape)))\nbijector = tfb.Restructure(\n enum_components,\n tf.nest.pack_sequence_as(target_model.event_shape, enum_components))(\n target_model.experimental_default_event_space_bijector())\n\ncurrent_state = bijector(\n tf.nest.map_structure(\n lambda e: tf.zeros([num_chains] + list(e), dtype=tf.float32),\n target_model.event_shape))\n\nhmc = tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=target_model.unnormalized_log_prob,\n num_leapfrog_steps=num_leapfrog_steps,\n step_size=[tf.fill(s.shape, step_size) for s in current_state])\n\nhmc = tfp.mcmc.TransformedTransitionKernel(\n hmc, bijector)\nhmc = tfp.mcmc.DualAveragingStepSizeAdaptation(\n hmc,\n num_adaptation_steps=int(num_steps // 2 * 0.8),\n target_accept_prob=0.9)\n\nchain, is_accepted = tf.function(\n lambda current_state: tfp.mcmc.sample_chain(\n current_state=current_state,\n kernel=hmc,\n num_results=num_steps // 2,\n num_burnin_steps=num_steps // 2,\n trace_fn=lambda _, pkr:\n (pkr.inner_results.inner_results.is_accepted),\n ),\n autograph=False,\n jit_compile=True)(current_state)\n\naccept_rate = tf.reduce_mean(tf.cast(is_accepted, tf.float32))\ness = tf.nest.map_structure(\n lambda c: tfp.mcmc.effective_sample_size(\n c,\n cross_chain_dims=1,\n filter_beyond_positive_pairs=True),\n chain)\n\nr_hat = tf.nest.map_structure(tfp.mcmc.potential_scale_reduction, chain)\nhmc_samples = pack_samples(\n tf.nest.pack_sequence_as(target_model.event_shape, chain))\nprint('Acceptance rate is {}'.format(accept_rate))", "Plot sample traces to sanity-check HMC results.", "def plot_traces(var_name, samples):\n fig, axes = plt.subplots(1, 2, figsize=(14, 1.5), sharex='col', sharey='col')\n for chain in range(num_chains):\n s = samples.numpy()[:, chain]\n axes[0].plot(s, alpha=0.7)\n sns.kdeplot(s, ax=axes[1], shade=False)\n axes[0].title.set_text(\"'{}' trace\".format(var_name))\n axes[1].title.set_text(\"'{}' distribution\".format(var_name))\n axes[0].set_xlabel('Iteration')\n\nwarnings.filterwarnings('ignore')\nfor var, var_samples in hmc_samples.items():\n plot_traces(var, var_samples)", "All three surrogate posteriors produced credible intervals that are visually similar to the HMC samples, though sometimes under-dispersed due to the effect of the ELBO loss, as is common in VI.", "results['HMC'] = hmc_samples\nplot_boxplot(results)", "Additional results", "#@title Plotting functions\n\nplt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'medium'})\ndef plot_loss_and_elbo():\n fig, axes = plt.subplots(1, 2, figsize=(12, 
4))\n\n axes[0].scatter([0, 1, 2],\n [mvn_final_elbo.numpy(),\n iaf_final_elbo.numpy(),\n mean_field_final_elbo.numpy()])\n axes[0].set_xticks(ticks=[0, 1, 2])\n axes[0].set_xticklabels(labels=[\n 'Multivariate Normal', 'IAF', 'Mean Field'])\n axes[0].title.set_text('Evidence Lower Bound (ELBO)')\n\n axes[1].plot(mvn_loss, label='Multivariate Normal')\n axes[1].plot(iaf_loss, label='IAF')\n axes[1].plot(mean_field_loss, label='Mean Field')\n axes[1].set_ylim([1000, 4000])\n axes[1].set_xlabel('Training step')\n axes[1].set_ylabel('Loss (negative ELBO)')\n axes[1].title.set_text('Loss')\n plt.legend()\n plt.show()\n\nplt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'small'})\ndef plot_kdes(num_chains=8):\n fig, axes = plt.subplots(2, 2, figsize=(12, 8))\n k = list(results.values())[0].keys()\n plot_results = {\n v: {p: results[p][v] for p in results} for v in k}\n for i, (var, var_results) in enumerate(plot_results.items()):\n ax = axes[i % 2, i // 2]\n for posterior, posterior_results in var_results.items():\n if posterior == 'HMC':\n label = posterior\n for chain in range(num_chains):\n sns.kdeplot(\n posterior_results[:, chain],\n ax=ax, shade=False, color='k', linestyle=':', label=label)\n label=None\n else:\n sns.kdeplot(\n posterior_results, ax=ax, shade=False, label=posterior)\n ax.title.set_text('{}'.format(var))\n ax.legend()", "Evidence Lower Bound (ELBO)\nIAF, by far the largest and most flexible surrogate posterior, converges to the highest Evidence Lower Bound (ELBO).", "plot_loss_and_elbo()", "Posterior samples\nSamples from each surrogate posterior, compared with HMC ground truth samples (a different visualization of the samples shown in the box plots).", "plot_kdes()", "Conclusion\nIn this Colab, we built VI surrogate posteriors using joint distributions and multipart bijectors, and fit them to estimate credible intervals for weights in a regression model on the radon dataset. For this simple model, more expressive surrogate posteriors performed similarly to a mean-field surrogate posterior. The tools we demonstrated, however, can be used to build a wide range of flexible surrogate posteriors suitable for more complex models." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mattilyra/suckerpunch
examples/Introduction.ipynb
lgpl-3.0
[ "Document deduplication using Locality Sensitive Hashing (LSH) with minhash\nThis notebook gives an overview of locality sensitive hashing for the purpose of deduplicating large collections of documents. The associated code is available from https://github.com/mattilyra/lsh.\nFinding exact duplicates is trivially easy (just use md5) but finding near duplicates is much harder. Many document collections from tweets to news text to forum discussions contain documents that are almost exactly the same, with only a few sentences or characters that differ. A hash function such as md5 won't find these near duplicates as they are not the same on the byte level, but rather on the semantic level. Finding these semantic duplicates can be done, for instance, by using n-gram shingles and computing the Jaccard similarity of the shingle sets. This is computationally very expensive as the similarity between all pairs of documents needs to be computed. The <cite>Reuters Corpus Version 1.0 (RCV1)</cite> [1] is ~810k documents, that's $810000^2 = 656100000000$ comparisons, assuming the we have a really fast computer that can generate a pair of shingles and compute their Jaccard similarity in 1 microsecond, that still takes a bit over a week to go through all of them, luckily the similarities are symmetric so going through half of $656100000000$ is enough, that is still ~3 days. The <cite>Amazon Product Data</cite> [2] is what could be called webscale ~140 million product reviews. I don't even want to think about how long that would take.\nLocality senstive hashing (LSH) relies on two different methods, first a hash fingerprint of each document is created and then the locality sensitive hashing is applied to that fingerprint. If the fingerprint is generated using the minhash algorithm then the probability of a hash collision is equal to the Jaccard distance of the documents. There are other hash functions that correspond to the cosine similarity for instance, but I won't deal with those here.\nDocument similarity and minhash\nMinhash is a hash function that computes the lowest hash value for a set of objects to be hashes. In the case of comparing document similarities the set of objects is word or character ngrams taken over a sliding window from the document - also known as shingles. The set of shingles allows us to compute the document similarity (defined in this case as Jaccard similarity) between a pair of documents.\nFor instance:", "document = 'Lorem Ipsum dolor sit amet'\n# shingle and discard the last 5 as these are just the last n<5 characters from the document\nshingles = [document[i:i+5] for i in range(len(document))][:-5]\nshingles\n\nother_document = 'Lorem Ipsum dolor sit amet is how dummy text starts'\n# shingle and discard the last 5 as these are just the last n<5 characters from the document\nother_shingles = [other_document[i:i+5] for i in range(len(other_document))][:-5]\n\n# Jaccard distance is the size of set intersection divided by the size of set union\nlen(set(shingles) & set(other_shingles)) / len(set(shingles) | set(other_shingles))", "As we can see these two documents are not very similar, at least in terms of their 3-gram shingle Jaccard similarity. That aside the problem with these shingles is that they do not allow us to compute the similarities of large numbers of documents very easily, we have to do an all pairs comparison. 
To get around that we can use locality sensitive hashing, but before LSH we'll turn the documents into a more manageable and uniform representation: a fixed length fingerprint comprised of $k$ minhashes.\nEvery document has a different number of shingles depending on the length of the document, for a corpus of any size predicting the memory requirements for an all pairs comparison is not possible as each document will consume a variable amount of memory. For LSH we would like to have a fixed length representation of the documents without changing the semantics of document similarity. This is where minhashing comes in. It turns out that the probability of a hash collision for a minhash is exactly the Jaccard similarity of two sets. This can be seen by considering the two sets of shingles as a matrix. For two dummy documents the shingles could be represented as the table below (the zeros and ones indicate if a shingle is present in the document or not). Notice that the Jaccard similarity of the documents is 2/5.\n<table>\n<th colspan=4><center>Document Shingles</center></th>\n<tr> <td>row</td><td>shingle ID</td><td>Doc 1</td><td>Doc 2</td> </tr>\n<tr> <td>1</td><td>1</td><td>0</td><td>1</td> </tr>\n<tr> <td>2</td><td>2</td><td>1</td><td>1</td> </tr>\n<tr> <td>3</td><td>3</td><td>0</td><td>1</td> </tr>\n<tr> <td>4</td><td>4</td><td>1</td><td>0</td> </tr>\n<tr> <td>5</td><td>5</td><td>1</td><td>1</td> </tr>\n<tr> <td>6</td><td>6</td><td>0</td><td>0</td> </tr>\n</table>\n\nThe minhash corresponds to a random permutation of the rows and gives back the row number where the first non zero entry is found. For the above table the minhash for documents one and two would thus be 2 and 1 respectively - meaning that the documents are not similar. The above table however is just one ordering of the shingles of each document. A different random permutation of the rows will give a different minhash, in this case 2 and 2, making the documents similar.\n<table>\n<th colspan=4><center>Document Shingles</center></th>\n<tr> <td>row</td><td>shingle ID</td><td>Doc 1</td><td>Doc 2</td> </tr>\n<tr> <td>1</td><td>6</td><td>0</td><td>0</td> </tr>\n<tr> <td>2</td><td>2</td><td>1</td><td>1</td> </tr>\n<tr> <td>3</td><td>3</td><td>0</td><td>1</td> </tr>\n<tr> <td>4</td><td>1</td><td>0</td><td>1</td> </tr>\n<tr> <td>5</td><td>4</td><td>1</td><td>0</td> </tr>\n<tr> <td>6</td><td>5</td><td>1</td><td>1</td> </tr>\n</table>\n\nA random permutation of the rows can produce any of 6! == 720 (factorial) different orderings. However we only care about the orderings for which the two columns have the same lowest row number with a 1, that is shingle ID $\\in {2, 5}$. Since the rows with zeros on them don't count, there are 5 rows with a one on it in any column, and two rows with a 1 in both columns. All a random permutation can therefore do is put two out of the five rows in the lowest row number, in other words produce a hash collision with a probability 2/5.\nThe above explanation follows Chapter 3 of <cite>Mining Massive Datasets</cite> [3]. An in depth explanation for why and how minhash works is provided there along with other interesting hash functions.\nApplying minhash gives us a fixed length $k$ (you pick the length) representation of each document such that the probability of a hash collision is equal to the Jaccard similarity of any pair. This being a probabilitic measure you're not guaranteed to get a collision. 
For Lorem Ipsum documents above and $k=100$ we get similarities that are roughly the same as the Jaccard similarity.", "from lsh import minhash\n\nfor _ in range(5):\n hasher = minhash.MinHasher(seeds=100, char_ngram=5)\n fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8'))\n fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8'))\n print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds)", "Increasing the length of the fingerprint from $k=100$ to $k=1000$ reduces the variance between random initialisations of the minhasher.", "for _ in range(5):\n hasher = minhash.MinHasher(seeds=1000, char_ngram=5)\n fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8'))\n fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8'))\n print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds)", "Increasing the fingerprint length however comes at the cost of increased memory usage and more time spent computing the minhashes. For a collection of documents we are still left with comparing all pairs, when that collection grows larger this becomes a very real problem. Queue LSH.\nLocality sensitive hashing\nThe idea behind locality sensitive hashing is to take the document fingerprints and chop them up into pieces, each piece being some number of minhashes. Since a single minhash (single entry in the fingerprint) has a probability equal to the Jaccard similarity of producing a collision, each chopped up portion of the fingerprint should as well. This chopped up portion is the locality in locality sensitive hashing, the hashing is just a hash function (any hash function) which produces a bin ID from the fingerprint locality being hashed. Each bin holds the entire fingerprint (with optional meta information) of the document and that of other documents that hash to the same bin.\nLet's say our fingerprint has 100 minhashes in it and we chop the fingerprints into 10 pieces. Each piece of each fingerprint therefore contains 10 minhashes, we hash those again (not using minhash this time) to get a bin ID and store the whole fingerprint in every bin each of the pieces happens to land in.\nWhen we want to know which documents are similar to a query document, we look in all the bins the query document lands in, any document in any of the bins is a potential duplicate. Comparing the full fingerprint of all documents in the bin or computing the actual Jaccard similarity between the shingle sets yields the final similarity of documents. Crucially since not all documents will land in the same bins we've reduced the number of comparisons needed to find similar or near duplicate documents.\nThe number of pieces to chop each fingerprint into and the size of each piece are parameters that need to be set. These should be set such that $num_pieces \\times size_of_piece == num_minhashes$ - this makes sense since having computed all the $N$ minhashes we want to use all of them in the locality sensitive hashing part. There is however a further issue that needs to be considered when setting the parameters; the relation between the number and size of the pieces and the probability of LSH \"finding\" a pair of similar documents.\nLSH is a probabilistic model which means that it won't always do the \"right thing\". 
Using LSH one needs to consider the similarity of a pair of documents (in this case the Jaccard similarity) and the probability that LSH will find that pair to be similar (a true positive, i.e. a correctly discovered duplicate pair). The pair of documents LSH finds to be similar should be thought of as candidate duplicates. The higher the probability, or guarantee, that LSH will find a pair of documents to be similar the more false positives the model will also produce, that is candidate duplicates that are not in fact duplicates.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n\nix = pd.IndexSlice", "<a id='bands_rows'>How setting LSH parameters affects finding similar documents</a>", "df = pd.DataFrame(data=[(2, 50), (50, 2), (10, 10), (5, 20), (20, 5)], columns=['pieces', 'size'])\ndf['hashes'] = df['pieces'] * df['size']\nfor pr in np.linspace(0, 1, 200):\n df[pr] = 1 - (1 - pr**df['size']) ** df['pieces']\n\ndf = pd.pivot_table(df, index=['hashes', 'pieces', 'size'])\n\nax = df.T.plot(figsize=(10, 7), title='Probability of LSH finding a candidate pair');\nplt.ylabel('p(candidate | Jaccad)');\nplt.xlabel('Jaccard similarity');\nplt.legend(list(df.loc[ix[100]].index),\n bbox_to_anchor=(1., 1, 1., 0), loc='upper left', fontsize=12, \n ncol=1, borderaxespad=0., title='Each line shows the\\nfingerprint chopped\\ninto (pieces, size)\\n');", "The figure shows the probability that LSH with minhash will \"find\" a pair of similar documents (y-axis) given the Jaccard similarity (x-axis) of those documents for different settings for LSH. Each of the five lines correspond to different settings, the number of hashes is always 100 so we're just changing the number of pieces to chop each fingerprint into (and the size of those pieces, although that becomes determined by setting the number of hashes).\nCreating just two pieces with 50 rows each - that is two localities, each with a size of 50 minhashes - yields an LSH model (<font color=#4C72B0>blue line</font>) that tries really really hard not to find documents to be similar. This LSH model will find 80% of documents whose actual Jaccard similarity is over 95%. Documents whose Jaccard similarity is 80% will hardly ever be found to be similar.\nCreating 5 pieces with 20 rows (<font color=#55A868>green line</font>) each is slightly more relaxed. The above graph should give you a pretty good idea how to set the parameters for your use case so that you can be reasonably certain that LSH will generate acceptable candidate pairs.\n\nDeduplicating the Reuters RCV1 corpus [1]\nThe Reuters Corpus, Volume 1 (RCV1) corpus is a commonly used resource for various NLP tasks, especially document classification. It was made available in 2000 by Reuters Ltd and consists of ~810,000 english language news stories collected between August 20th 1996 and August 19th 1997 from the Reuters news wire.\nI've preprocessed the corpus so that it is all in a single file, one line per document. Each line has the format:\nITEMID&lt;TAB&gt;HEADLINE&lt;SPACE&gt;TEXT", "!wc -l ../data/rcv1/headline.text.txt\n\n!head -1 ../data/rcv1/headline.text.txt", "Some duplicate items are present in the corpus so let's see what happens when we apply LSH to it. 
First a helper function that takes a file pointer and some parameters for minhash and LSH and then finds duplicates.", "import itertools\n\nfrom lsh import cache, minhash # https://github.com/mattilyra/lsh\n\n# a pure python shingling function that will be used in comparing\n# LSH to true Jaccard similarities\ndef shingles(text, char_ngram=5):\n return set(text[head:head + char_ngram] for head in range(0, len(text) - char_ngram))\n\n\ndef jaccard(set_a, set_b):\n intersection = set_a & set_b\n union = set_a | set_b\n return len(intersection) / len(union)\n\n\ndef candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4):\n char_ngram = 5\n sims = []\n hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes)\n if seeds % bands != 0:\n raise ValueError('Seeds has to be a multiple of bands. {} % {} != 0'.format(seeds, bands))\n \n lshcache = cache.Cache(num_bands=bands, hasher=hasher)\n for line in document_feed:\n line = line.decode('utf8')\n docid, headline_text = line.split('\\t', 1)\n fingerprint = hasher.fingerprint(headline_text.encode('utf8'))\n \n # in addition to storing the fingerpring store the line\n # number and document ID to help analysis later on\n lshcache.add_fingerprint(fingerprint, doc_id=docid)\n\n candidate_pairs = set()\n for b in lshcache.bins:\n for bucket_id in b:\n if len(b[bucket_id]) > 1:\n pairs_ = set(itertools.combinations(b[bucket_id], r=2))\n candidate_pairs.update(pairs_)\n \n return candidate_pairs", "Then run through some data adding documents to the LSH cache", "hasher = minhash.MinHasher(seeds=100, char_ngram=5, hashbytes=4)\nlshcache = cache.Cache(bands=10, hasher=hasher)\n\n# read in the data file and add the first 100 documents to the LSH cache\nwith open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:\n feed = itertools.islice(fh, 100)\n for line in feed:\n docid, articletext = line.decode('utf8').split('\\t', 1)\n lshcache.add_fingerprint(hasher.fingerprint(line), docid)\n\n# for every bucket in the LSH cache get the candidate duplicates\ncandidate_pairs = set()\nfor b in lshcache.bins:\n for bucket_id in b:\n if len(b[bucket_id]) > 1: # if the bucket contains more than a single document\n pairs_ = set(itertools.combinations(b[bucket_id], r=2))\n candidate_pairs.update(pairs_)", "candidate_pairs now contains a bunch of document IDs that may be duplicates of each other", "candidate_pairs", "Now let's run LSH on a few different parameter settings and see what the results look like. To save some time I'm only using the first 1000 documents.", "num_candidates = []\nbands = [2, 5, 10, 20]\nfor num_bands in bands:\n with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:\n feed = itertools.islice(fh, 1000)\n candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=num_bands, hashbytes=4)\n num_candidates.append(len(candidates))\n\nfig, ax = plt.subplots(figsize=(8, 6))\nplt.bar(bands, num_candidates, align='center');\nplt.title('Number of candidate duplicate pairs found by LSH using 100 minhash fingerprint.');\nplt.xlabel('Number of bands');\nplt.ylabel('Number of candidate duplicates');\nplt.xticks(bands, bands);", "So the more promiscuous [4] version (20 bands per fingerprint) finds many more candidate pairs than the conservative 2 bands model. The first implication of this difference is that it leads to you having to do more comparisons to find the real duplicates. 
Let's see what that looks like in practice.\nWe'll slightly modify the candidate_duplicates function so that it stores the line number along with the document ID; that way we can retrieve the document contents more easily later on.", "def candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4):\n    sims = []\n    hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes)\n    if seeds % bands != 0:\n        raise ValueError('Seeds has to be a multiple of bands. {} % {} != 0'.format(seeds, bands))\n    \n    lshcache = cache.Cache(num_bands=bands, hasher=hasher)\n    for i_line, line in enumerate(document_feed):\n        line = line.decode('utf8')\n        docid, headline_text = line.split('\\t', 1)\n        fingerprint = hasher.fingerprint(headline_text.encode('utf8'))\n        \n        # in addition to storing the fingerprint store the line\n        # number and document ID to help analysis later on\n        lshcache.add_fingerprint(fingerprint, doc_id=(i_line, docid))\n\n    candidate_pairs = set()\n    for b in lshcache.bins:\n        for bucket_id in b:\n            if len(b[bucket_id]) > 1:\n                pairs_ = set(itertools.combinations(b[bucket_id], r=2))\n                candidate_pairs.update(pairs_)\n    \n    return candidate_pairs\n\nlines = []\nwith open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:\n    # read the first 1000 lines into memory so we can compare them\n    for line in itertools.islice(fh, 1000):\n        lines.append(line.decode('utf8'))\n    \n    # reset file pointer and do LSH\n    fh.seek(0)\n    feed = itertools.islice(fh, 1000)\n    candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=20, hashbytes=4)\n\n# go over all the generated candidates comparing their similarities\nsimilarities = []\nfor ((line_a, docid_a), (line_b, docid_b)) in candidates:\n    doc_a, doc_b = lines[line_a], lines[line_b]\n    shingles_a = shingles(lines[line_a])\n    shingles_b = shingles(lines[line_b])\n    \n    jaccard_sim = jaccard(shingles_a, shingles_b)\n    fingerprint_a = set(hasher.fingerprint(doc_a.encode('utf8')))\n    fingerprint_b = set(hasher.fingerprint(doc_b.encode('utf8')))\n    minhash_sim = len(fingerprint_a & fingerprint_b) / len(fingerprint_a | fingerprint_b)\n    similarities.append((docid_a, docid_b, jaccard_sim, minhash_sim))\n\nimport random\n\nprint('There are {} candidate duplicates in total'.format(len(candidates)))\nrandom.sample(similarities, k=15)", "So LSH with 20 bands indeed finds a lot of candidate duplicates (111 out of 1000), some of which - for instance (3256, 3186) above - are not all that similar. 
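Since LSH only proposes candidates, one option is to confirm each pair against the true Jaccard similarity before treating it as a duplicate. A minimal sketch of that filtering step (the 0.75 cut-off is an arbitrary choice for illustration, not a value from the original analysis):

```python
# Hedged sketch: keep only the candidate pairs whose true Jaccard similarity clears
# a threshold; `similarities` holds (docid_a, docid_b, jaccard_sim, minhash_sim)
# tuples built in the cell above.
likely_duplicates = [(a, b, jac) for a, b, jac, _ in similarities if jac >= 0.75]
print('{} of {} candidate pairs survive a 75% Jaccard filter'.format(
    len(likely_duplicates), len(similarities)))
```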
Let's see how many truly similar pairs LSH missed, given some similarity threshold.", "sims_all = np.zeros((1000, 1000), dtype=np.float64)\nfor i, line in enumerate(lines):\n    for j in range(i+1, len(lines)):\n        shingles_a = shingles(lines[i])\n        shingles_b = shingles(lines[j])\n        jaccard_sim = jaccard(shingles_a, shingles_b)\n        \n        # similarities are symmetric so we only care about the\n        # upper diagonal here and leave (j, i) to be 0\n        sims_all[i, j] = jaccard_sim\n\n# turn the candidates into a dictionary so we have easy access to\n# candidate pairs that were found\ncandidates_dict = {(line_a, line_b): (docid_a, docid_b) for ((line_a, docid_a), (line_b, docid_b)) in candidates}\nfound = 0\nfor i in range(len(lines)):\n    for j in range(i+1, len(lines)):\n        if sims_all[i, j] >= .9:\n            # documents i and j have an actual Jaccard similarity >= 90%\n            found += ((i, j) in candidates_dict or (j, i) in candidates_dict)\n\nprint('Out of {} pairs with similarity >= 90% {} were found, that\\'s {:.1%}'.format((sims_all >= .9).sum(), found, found / (sims_all >= .9).sum()))", "That seems pretty well in line with the <a href=\"#bands_rows\">figure</a> showing how setting bands and rows affects the probability of finding similar documents. So we're doing quite well in terms of the true positives; what about the false positives? 27 of the document pairs that were found were true positives, so the rest are false positives. Since LSH found 110 document pairs in total, $110-27 = 83$ pairs were incorrect - that's 83 pairs that were checked in vain, compared to the 499,500 pairs we would have had to go through for an all pairs comparison.\n499,500 is the number of entries above the diagonal of a $1000\\times1000$ matrix. Since document similarities are symmetric we only need to compare i to j, not j to i, and we never need to compare a document to itself, which leaves $\\frac{1000 \\times 999}{2}$ = 499,500 pairs.\nReferences\n\n[1] <cite>Reuters Corpora (RCV1, RCV2, TRC2)</cite> http://trec.nist.gov/data/reuters/reuters.html\n[2] <cite>Amazon product data</cite> http://jmcauley.ucsd.edu/data/amazon/\n[3] <cite>Mining Massive Datasets</cite> http://www.mmds.org http://infolab.stanford.edu/~ullman/mmds/ch3.pdf by Leskovec, Rajaraman and Ullman\n[4] <cite>promiscuous</cite> demonstrating or implying an unselective approach; indiscriminate or casual: the city fathers were promiscuous with their honours.", "# preprocess RCV1 to be contained in a single file\nimport glob, zipfile, re\nimport xml.etree.ElementTree as ET\n\nfiles = glob.glob('../data/rcv1/xml/*.zip')\nwith open('../data/rcv1/headline.text.txt', 'wb') as out:\n    for f in files:\n        zf = zipfile.ZipFile(f)\n        for zi in zf.namelist():\n            fh = zf.open(zi, 'r')\n            root = ET.fromstring(fh.read().decode('latin-1'))\n            itemid = root.attrib['itemid']\n            headline = root.find('./headline').text\n            text = ' '.join(root.find('./text').itertext())\n            text = re.sub('\\s+', ' ', text)\n            out.write(('{}\\t{} {}\\n'.format(itemid, headline, text)).encode('utf8'))" ]
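To summarise the trade-off in one place, the brute-force similarities and the LSH candidates can be combined into precision and recall figures. A sketch, reusing `sims_all`, `candidates_dict` and `lines` computed in the evaluation cells above and the same 90% threshold:

```python
# Hedged sketch: precision/recall of the 20-band LSH run against brute-force Jaccard.
true_pairs = {(i, j) for i in range(len(lines))
              for j in range(i + 1, len(lines)) if sims_all[i, j] >= .9}
found_pairs = {tuple(sorted(p)) for p in candidates_dict}
tp = len(true_pairs & found_pairs)
print('precision: {:.1%}, recall: {:.1%}'.format(
    tp / len(found_pairs), tp / len(true_pairs)))
```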
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
frankbearzou/Data-analysis
Star Wars survey/Star Wars survey.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "Data Exploration", "star_wars = pd.read_csv('star_wars.csv', encoding=\"ISO-8859-1\")\n\nstar_wars.head()\n\nstar_wars.columns", "Data Cleaning\nRemove invalid first column RespondentID which are NaN.", "star_wars = star_wars.dropna(subset=['RespondentID'])", "Change the second and third columns.", "star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].isnull().value_counts()\n\nstar_wars['Have you seen any of the 6 films in the Star Wars franchise?'].value_counts()", "The values for the second and third columns which are Have you seen any of the 6 films in the Star Wars franchise? and Do you consider yourself to be a fan of the Star Wars film franchise? respectively are Yes, No, NaN. We want to change them to True or False.", "star_wars['Have you seen any of the 6 films in the Star Wars franchise?'] = star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].map({'Yes': True, 'No': False})\n\nstar_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'] = star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].map({'Yes': True, 'No': False})", "Cleaning the columns from index 3 to 9.\nFrom the fourth column to ninth columns are checkbox questions:\n If values are the movie names: they have seen the movies.\n If values are NaN: they have not seen the movies. \nWe are going to convert the values of these columns to bool type.", "for col in star_wars.columns[3:9]:\n star_wars[col] = star_wars[col].apply(lambda x: False if pd.isnull(x) else True)", "Rename the columns from index 3 to 9 for better readibility.\nseen_1 means Star Wars Episode I, and so on.", "star_wars.rename(columns={'Which of the following Star Wars films have you seen? 
Please select all that apply.': 'seen_1', \\\n                          'Unnamed: 4': 'seen_2', \\\n                          'Unnamed: 5': 'seen_3', \\\n                          'Unnamed: 6': 'seen_4', \\\n                          'Unnamed: 7': 'seen_5', \\\n                          'Unnamed: 8': 'seen_6'}, inplace=True)", "Cleaning the columns from index 9 to 15.\nChanging data type to float.", "star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)", "Renaming column names.", "star_wars.rename(columns={'Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.': 'ranking_1', \\\n                          'Unnamed: 10': 'ranking_2', \\\n                          'Unnamed: 11': 'ranking_3', \\\n                          'Unnamed: 12': 'ranking_4', \\\n                          'Unnamed: 13': 'ranking_5', \\\n                          'Unnamed: 14': 'ranking_6'}, inplace=True)", "Cleaning the columns from index 15 to 29.", "star_wars.rename(columns={'Please state whether you view the following characters favorably, unfavorably, or are unfamiliar with him/her.': 'Luck Skywalker', \\\n                          'Unnamed: 16': 'Han Solo', \\\n                          'Unnamed: 17': 'Princess Leia Oragana', \\\n                          'Unnamed: 18': 'Obi Wan Kenobi', \\\n                          'Unnamed: 19': 'Yoda', \\\n                          'Unnamed: 20': 'R2-D2', \\\n                          'Unnamed: 21': 'C-3P0', \\\n                          'Unnamed: 22': 'Anakin Skywalker', \\\n                          'Unnamed: 23': 'Darth Vader', \\\n                          'Unnamed: 24': 'Lando Calrissian', \\\n                          'Unnamed: 25': 'Padme Amidala', \\\n                          'Unnamed: 26': 'Boba Fett', \\\n                          'Unnamed: 27': 'Emperor Palpatine', \\\n                          'Unnamed: 28': 'Jar Jar Binks'}, inplace=True)", "Data Analysis\nFinding The Most Seen Movie", "seen_sum = star_wars[['seen_1', 'seen_2', 'seen_3', 'seen_4', 'seen_5', 'seen_6']].sum()\n\nseen_sum\n\nseen_sum.idxmax()", "From the data above, we can see that the most seen movie is Episode V.", "ax = seen_sum.plot(kind='bar') \nfor p in ax.patches:\n    ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.01))\nplt.show()", "Finding The Highest Ranked Movie.", "ranking_mean = star_wars[['ranking_1', 'ranking_2', 'ranking_3', 'ranking_4', 'ranking_5', 'ranking_6']].mean()\n\nranking_mean\n\nranking_mean.idxmin()", "The highest ranked movie is ranking_5, which is Episode V.", "ranking_mean.plot(kind='bar')\nplt.show()", "Let's break down the data by Gender.", "males = star_wars[star_wars['Gender'] == 'Male']\nfemales = star_wars[star_wars['Gender'] == 'Female']", "The number of movies seen.", "males[males.columns[3:9]].sum().plot(kind='bar', title='male seen')\nplt.show()\n\nfemales[females.columns[3:9]].sum().plot(kind='bar', title='female seen')\nplt.show()", "The ranking of movies.", "males[males.columns[9:15]].mean().plot(kind='bar', title='Male Ranking')\nplt.show()\n\nfemales[females.columns[9:15]].mean().plot(kind='bar', title='Female Ranking')\nplt.show()", "From the charts above, we do not find a significant difference between genders.\nStar Wars Character Favorability Ratings", "star_wars['Luck Skywalker'].value_counts()\n\nstar_wars[star_wars.columns[15:29]].head()\n\nfav = star_wars[star_wars.columns[15:29]].dropna()\n\nfav.head()", "Convert fav to pivot table.", "fav_df_list = []\nfor col in fav.columns.tolist():\n    row = fav[col].value_counts()\n    d1 = pd.DataFrame(data={'favorably': row[0] + row[1], \\\n                            'neutral': row[2], \\\n                            'unfavorably': row[4] + row[5], \\\n                            'Unfamiliar': row[3]}, \\\n                      index=[col], \\\n                      columns=['favorably', 'neutral', 'unfavorably', 'Unfamiliar'])\n    fav_df_list.append(d1)\n\nfav_pivot = pd.concat(fav_df_list)\n\nfav_pivot\n\nfig = plt.figure()\nax = plt.subplot(111)\n\nfav_pivot.plot(kind='barh', stacked=True, figsize=(10,10), ax=ax)\n\n\n# Shrink current axis's height by 10% 
on the bottom\nbox = ax.get_position()\nax.set_position([box.x0, box.y0 + box.height * 0.1,\n box.width, box.height * 0.9])\n\n# Put a legend below current axis\nax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),\n fancybox=True, shadow=True, ncol=5)\n\nplt.show()", "Who Shot First?", "shot_first = star_wars['Which character shot first?'].value_counts()\n\nshot_first\n\nshot_sum = shot_first.sum()\n\nshot_first = shot_first.apply(lambda x: x / shot_sum * 100)\n\nshot_first\n\nax = shot_first.plot(kind='barh')\nfor p in ax.patches:\n ax.annotate(str(\"{0:.2f}%\".format(round(p.get_width(),2))), (p.get_width() * 1.005, p.get_y() + p.get_height() * 0.5))\nplt.show()" ]
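As an aside, pandas can produce the percentage view in one step; a small sketch that should be equivalent to the manual division above, assuming the same star_wars frame:

```python
# Hedged sketch: value_counts(normalize=True) returns fractions, so multiplying by
# 100 reproduces the percentages computed manually above.
shot_first_pct = star_wars['Which character shot first?'].value_counts(normalize=True) * 100
shot_first_pct.round(2)
```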
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/kfp-tekton-backend
samples/contrib/arena-samples/standalonejob/standalone_pipeline.ipynb
apache-2.0
[ "Arena Kubeflow Pipeline Notebook demo\nPrepare data volume\nYou should prepare data volume user-susan by following docs. \nAnd run arena data list to check if it's created.", "! arena data list", "Define the necessary environment variables and install the KubeFlow Pipeline SDK\nWe assume this notebook kernel has access to Python's site-packages and is in Python3.\nPlease fill in the below environment variables with you own settings.\n\nKFP_PACKAGE: The latest release of kubeflow pipeline platform library.\nKUBEFLOW_PIPELINE_LINK: The link to access the KubeFlow pipeline API.\nMOUNT: The mount configuration to map data above into the training job. The format is 'data:/directory'\nGPUs: The number of the GPUs for training.", "KFP_SERVICE=\"ml-pipeline.kubeflow.svc.cluster.local:8888\"\nKFP_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp/0.1.14/kfp.tar.gz'\nKFP_ARENA_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp-arena/kfp-arena-0.3.tar.gz'\nKUBEFLOW_PIPELINE_LINK = ''\nMOUNT=\"['user-susan:/training']\"\nGPUs=1", "Install the necessary python packages\nNote: Please change pip3 to the package manager that's used for this Notebook Kernel.", "!pip3 install $KFP_PACKAGE --upgrade", "Note: Install arena's python package", "!pip3 install $KFP_ARENA_PACKAGE --upgrade", "2. Define pipeline tasks using the kfp library.", "import arena\nimport kfp.dsl as dsl\n\[email protected](\n name='pipeline to run jobs',\n description='shows how to run pipeline jobs.'\n)\ndef sample_pipeline(learning_rate='0.01',\n dropout='0.9',\n model_version='1'):\n \"\"\"A pipeline for end to end machine learning workflow.\"\"\"\n\n # 1. prepare data\n prepare_data = arena.StandaloneOp(\n name=\"prepare-data\",\n\timage=\"byrnedo/alpine-curl\",\n data=MOUNT,\n\tcommand=\"mkdir -p /training/dataset/mnist && \\\n cd /training/dataset/mnist && \\\n curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-images-idx3-ubyte.gz && \\\n curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-labels-idx1-ubyte.gz && \\\n curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-images-idx3-ubyte.gz && \\\n curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-labels-idx1-ubyte.gz\")\n # 2. prepare source code\n prepare_code = arena.StandaloneOp(\n name=\"source-code\",\n image=\"alpine/git\",\n data=MOUNT,\n command=\"mkdir -p /training/models/ && \\\n cd /training/models/ && \\\n if [ ! -d /training/models/tensorflow-sample-code ]; then https://github.com/cheyang/tensorflow-sample-code.git; else echo no need download;fi\")\n\n # 3. train the models\n train = arena.StandaloneOp(\n name=\"train\",\n image=\"tensorflow/tensorflow:1.11.0-gpu-py3\",\n gpus=GPUs,\n data=MOUNT,\n command=\"echo %s; \\\n echo %s; \\\n python /training/models/tensorflow-sample-code/tfjob/docker/mnist/main.py --max_steps 500 --data_dir /training/dataset/mnist --log_dir /training/output/mnist\" % (prepare_data.output, prepare_code.output),\n metric_name=\"Train-accuracy\",\n metric_unit=\"PERCENTAGE\",\n )\n # 4. 
export the model\n    export_model = arena.StandaloneOp(\n        name=\"export-model\",\n        image=\"tensorflow/tensorflow:1.11.0-py3\",\n        data=MOUNT,\n        command=\"echo %s; \\\n          python /training/models/tensorflow-sample-code/tfjob/docker/mnist/export_model.py --model_version=%s --checkpoint_step=400 --checkpoint_path=/training/output/mnist /training/output/models\" % (train.output, model_version))\n\n\nlearning_rate = \"0.001\"\ndropout = \"0.8\"\nmodel_version = \"1\"\n\narguments = {\n    'learning_rate': learning_rate,\n    'dropout': dropout,\n    'model_version': model_version,\n}\n\nimport kfp\nclient = kfp.Client(host=KUBEFLOW_PIPELINE_LINK)\nrun = client.create_run_from_pipeline_func(sample_pipeline, arguments=arguments).run_info\n\nprint('The above run link assumes you ran this cell on a JupyterHub that is deployed on the same cluster. ' +\n      'The actual run link is ' + KUBEFLOW_PIPELINE_LINK + '/#/runs/details/' + run.id)" ]
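If you want the notebook to block until the pipeline finishes rather than just printing the run link, the kfp client can poll the run. A sketch, assuming the client object created above exposes wait_for_run_completion (available in recent kfp SDK releases; the one-hour timeout is an arbitrary choice):

```python
# Hedged sketch: wait up to an hour for the run started above and print its final status.
result = client.wait_for_run_completion(run.id, timeout=3600)
print('Run finished with status:', result.run.status)
```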
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
googledatalab/notebooks
samples/contrib/mlworkbench/text_classification_20newsgroup/Text Classification --- 20NewsGroup (small data).ipynb
apache-2.0
[ "<h1>About this Notebook</h1>\n\nThis notebook demonstrates the experience of using ML Workbench to create a machine learning model for text classification and setting it up for online prediction. Training the model is done \"locally\" inside Datalab. In the next notebook (Text Classification --- 20NewsGroup (large data)), it demonstrates how to do it by using Cloud ML Engine services.\nIf you have any feedback, please send them to [email protected].\nData\nThe 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics. The classification problem is to identify the newsgroup a post was summited to, given the text of the post.\nThere are a few versions of this dataset from different sources online. Below, we use the version within scikit-learn which is already split into a train and test/eval set. For a longer introduction to this dataset, see the scikit-learn website\nDownload Data", "import numpy as np\nimport pandas as pd\nimport os\nimport re\nimport csv\nfrom sklearn.datasets import fetch_20newsgroups\n\n# data will be downloaded. Note that an error message saying something like \"No handlers could be found for \n# logger sklearn.datasets.twenty_newsgroups\" might be printed, but this is not an error.\nnews_train_data = fetch_20newsgroups(subset='train', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes'))\nnews_test_data = fetch_20newsgroups(subset='test', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes'))", "Cleaning the Raw Data\nPrinting the 3rd element in the test dataset shows the data contains text with newlines, punctuation, misspellings, and other items common in text documents. To build a model, we will clean up the text by removing some of these issues.", "news_train_data.data[2], news_train_data.target_names[news_train_data.target[2]]\n\ndef clean_and_tokenize_text(news_data):\n \"\"\"Cleans some issues with the text data\n Args:\n news_data: list of text strings\n Returns:\n For each text string, an array of tokenized words are returned in a list\n \"\"\"\n cleaned_text = []\n for text in news_data:\n x = re.sub('[^\\w]|_', ' ', text) # only keep numbers and letters and spaces\n x = x.lower()\n x = re.sub(r'[^\\x00-\\x7f]',r'', x) # remove non ascii texts\n tokens = [y for y in x.split(' ') if y] # remove empty words\n tokens = ['[number]' if x.isdigit() else x for x in tokens] # convert all numbers to '[number]' to reduce vocab size.\n cleaned_text.append(tokens)\n return cleaned_text\n\nclean_train_tokens = clean_and_tokenize_text(news_train_data.data)\nclean_test_tokens = clean_and_tokenize_text(news_test_data.data)", "Get Vocabulary\nWe will need to filter the vocabulary to remove high frequency words and low frequency words.", "def get_unique_tokens_per_row(text_token_list):\n \"\"\"Collect unique tokens per row.\n Args:\n text_token_list: list, where each element is a list containing tokenized text\n Returns:\n One list containing the unique tokens in every row. For example, if row one contained\n ['pizza', 'pizza'] while row two contained ['pizza', 'cake', 'cake'], then the output list\n would contain ['pizza' (from row 1), 'pizza' (from row 2), 'cake' (from row 2)]\n \"\"\"\n words = []\n for row in text_token_list:\n words.extend(list(set(row)))\n return words\n\n# Make a plot where the x-axis is a token, and the y-axis is how many text documents\n# that token is in. 
\nwords = pd.DataFrame(get_unique_tokens_per_row(clean_train_tokens) , columns=['words'])\ntoken_frequency = words['words'].value_counts() # how many documents contain each token.\ntoken_frequency.plot(logy=True)\n\nvocab = token_frequency[np.logical_and(token_frequency < 1000, token_frequency > 10)]\nvocab.plot(logy=True)\n\ndef filter_text_by_vocab(news_data, vocab):\n \"\"\"Removes tokens if not in vocab.\n Args:\n news_data: list, where each element is a token list\n vocab: set containing the tokens to keep.\n Returns:\n List of strings containing the final cleaned text data\n \"\"\"\n text_strs = []\n for row in news_data:\n words_to_keep = [token for token in row if token in vocab or token == '[number]']\n text_strs.append(' '.join(words_to_keep))\n return text_strs\n\nclean_train_data = filter_text_by_vocab(clean_train_tokens, set(vocab.index))\nclean_test_data = filter_text_by_vocab(clean_test_tokens, set(vocab.index))\n\n# Check a few instances of cleaned data\nclean_train_data[:3]", "Save the Cleaned Data For Training", "!mkdir -p ./data\n\nwith open('./data/train.csv', 'w') as f:\n writer = csv.writer(f, lineterminator='\\n')\n for target, text in zip(news_train_data.target, clean_train_data):\n writer.writerow([news_train_data.target_names[target], text])\n \nwith open('./data/eval.csv', 'w') as f:\n writer = csv.writer(f, lineterminator='\\n')\n for target, text in zip(news_test_data.target, clean_test_data):\n writer.writerow([news_test_data.target_names[target], text]) \n \n# Also save the vocab, which will be useful in making new predictions.\nwith open('./data/vocab.txt', 'w') as f:\n vocab.to_csv(f)", "Create Model with ML Workbench\nThe MLWorkbench Magics are a set of Datalab commands that allow an easy code-free experience to training, deploying, and predicting ML models. This notebook will take the cleaned data from the previous notebook and build a text classification model. The MLWorkbench Magics are a collection of magic commands for each step in ML workflows: analyzing input data to build transforms, transforming data, training a model, evaluating a model, and deploying a model.\nFor details of each command, run with --help. For example, \"%%ml train --help\".\nWhen the dataset is small (like with the 20 newsgroup data), there is little benefit of using cloud services. This notebook will run the analyze, transform, and training steps locally. However, we will take the locally trained model and deploy it to ML Engine and show how to make real predictions on a deployed model. Every MLWorkbench magic can run locally or use cloud services (adding --cloud flag).\nThe next notebook (Text Classification --- 20NewsGroup (large data)) in this sequence shows the cloud version of every command, and gives the normal experience when building models are large datasets. However, we will still use the 20 newsgroup data.", "import google.datalab.contrib.mlworkbench.commands # This loads the '%%ml' magics", "First, define the dataset we are going to use for training.", "%%ml dataset create\nname: newsgroup_data\nformat: csv\ntrain: ./data/train.csv\neval: ./data/eval.csv\nschema:\n - name: news_label\n type: STRING\n - name: text\n type: STRING\n\n%%ml dataset explore\nname: newsgroup_data", "Step 1: Analyze\nThe first step in the MLWorkbench workflow is to analyze the data for the requested transformations. We are going to build a bag of words representation on the text and use this in a linear model. 
Therefore, the analyze step will compute the vocabularies and related statistics of the data for traing.", "%%ml analyze\noutput: ./analysis\ndata: newsgroup_data\nfeatures:\n news_label:\n transform: target\n text:\n transform: bag_of_words\n\n!ls ./analysis", "Step 2: Transform\nThis step is optional as training can start from csv data (the same data used in the analysis step). The transform step performs some transformations on the input data and saves the results to a special TensorFlow file called a TFRecord file containing TF.Example protocol buffers. This allows training to start from preprocessed data. If this step is not used, training would have to perform the same preprocessing on every row of csv data every time it is used. As TensorFlow reads the same data row multiple times during training, this means the same row would be preprocessed multiple times. By writing the preprocessed data to disk, we can speed up training. Because the the 20 newsgroups data is small, this step does not matter, but we do it anyway for illustration. This step is recommended if there are text column in a dataset, and required if there are image columns in a dataset.\nWe run the transform step for the training and eval data.", "!rm -rf ./transform\n\n%%ml transform --shuffle\noutput: ./transform\nanalysis: ./analysis\ndata: newsgroup_data\n\n# note: the errors_* files are all 0 size, which means no error.\n!ls ./transform/ -l -h", "Create a \"transformed dataset\" to use in next step.", "%%ml dataset create\nname: newsgroup_transformed\ntrain: ./transform/train-*\neval: ./transform/eval-*\nformat: transformed ", "Step 3: Training\nMLWorkbench automatically builds standard TensorFlow models without you having to write any TensorFlow code.", "# Training should use an empty output folder. So if you run training multiple times,\n# use different folders or remove the output from the previous run.\n!rm -fr ./train", "The following training step takes about 10~15 minutes.", "%%ml train\noutput: ./train\nanalysis: ./analysis/\ndata: newsgroup_transformed\nmodel_args:\n model: linear_classification\n top-n: 5", "Go to Tensorboard (link shown above) to monitor the training progress. Note that training stops when it detects accuracy is no longer increasing for eval data.", "# You can also plot the summary events which will be saved with the notebook.\n\nfrom google.datalab.ml import Summary\n\nsummary = Summary('./train')\nsummary.list_events()\n\nsummary.plot(['loss', 'accuracy'])", "The output of training is two models, one in training_output/model and another in training_output/evaluation_model. These tensorflow models are identical except the latter assumes the target column is part of the input and copies the target value to the output. Therefore, the latter is ideal for evaluation.", "!ls ./train/", "Step 4: Evaluation using batch prediction\nBelow, we use the evaluation model and run batch prediction locally. Batch prediction is needed for large datasets where the data cannot fit in memory. For demo purpose, we will use the training evaluation data again.", "%%ml batch_predict\nmodel: ./train/evaluation_model/\noutput: ./batch_predict\nformat: csv\ndata:\n csv: ./data/eval.csv\n\n# It creates a results csv file, and a results schema json file.\n!ls ./batch_predict", "Note that the output of prediction is a csv file containing the score for each label class. 'predicted_n' is the label for the nth largest score. 
We care about 'predicted', the final model prediction.", "!head -n 5 ./batch_predict/predict_results_eval.csv\n\n%%ml evaluate confusion_matrix --plot\ncsv: ./batch_predict/predict_results_eval.csv\n\n%%ml evaluate accuracy\ncsv: ./batch_predict/predict_results_eval.csv", "Step 5: BigQuery to analyze evaluate results\nSometimes you want to query your prediction/evaluation results using SQL. It is easy.", "# Create bucket\n!gsutil mb gs://bq-mlworkbench-20news-lab\n!gsutil cp -r ./batch_predict/predict_results_eval.csv gs://bq-mlworkbench-20news-lab\n\n# Use Datalab's Bigquery API to load CSV files into table.\n\nimport google.datalab.bigquery as bq\nimport json\n\nwith open('./batch_predict/predict_results_schema.json', 'r') as f:\n schema = json.load(f)\n\n# Create BQ Dataset\nbq.Dataset('newspredict').create()\n\n# Create the table\ntable = bq.Table('newspredict.result1').create(schema=schema, overwrite=True)\ntable.load('gs://bq-mlworkbench-20news-lab/predict_results_eval.csv', mode='overwrite',\n source_format='csv', csv_options=bq.CSVOptions(skip_leading_rows=1))", "Now, run any SQL queries on \"table newspredict.result1\". Below we query all wrong predictions.", "%%bq query\nSELECT * FROM newspredict.result1 WHERE predicted != target", "Prediction\nLocal Instant Prediction\nThe MLWorkbench also supports running prediction and displaying the results within the notebook. Note that we use the non-evaluation model below (./train/model) which takes input with no target column.", "%%ml predict\nmodel: ./train/model/\nheaders: text\ndata:\n - nasa\n - windows xp", "Why Does My Model Predict this? Prediction Explanation.\n\"%%ml explain\" gives you insights on what are important features in the prediction data that contribute positively or negatively to certain labels. We use LIME under \"%%ml explain\". (LIME is an open sourced library performing feature sensitivity analysis. It is based on the work presented in this paper. LIME is included in Datalab.)\nIn this case, we will check which words in text are contributing most to the predicted label.", "# Pick some data from eval csv file. 
They are cleaned text.\n# The truth labels for the following 3 instances are\n# - rec.autos\n# - comp.windows.x\n# - talk.politics.mideast\n\ninstance0 = ('little confused models [number] [number] heard le se someone tell differences far features ' +\n 'performance curious book value [number] model less book value usually words demand ' +\n 'year heard mid spring early summer best buy')\ninstance1 = ('hi requirement closing opening different display servers within x application manner display ' +\n 'associated client proper done during transition problems')\ninstance2 = ('attacking drive kuwait country whose citizens close blood business ties saudi citizens thinks ' +\n 'helped saudi arabia least eastern muslim country doing anything help kuwait protect saudi arabia ' +\n 'indeed masses citizens demonstrating favor butcher saddam killed muslims killing relatively rich ' +\n 'muslims nose west saudi arabia rolled iraqi invasion charge saudi arabia idea governments official ' +\n 'religion de facto de human nature always ones rise power world country citizens leader slick ' +\n 'operator sound guys angels posting edited stuff following friday york times reported group definitely ' +\n 'conservative followers house rule country enough reported besides complaining government conservative ' +\n 'enough asserted approx [number] [number] kingdom charge under saudi islamic law brings death penalty ' +\n 'diplomatic guy bin isn called severe punishment [number] women drove public while protest ban women ' +\n 'driving guy group said al said women fired jobs happen heard muslims ban women driving basis qur etc ' +\n 'yet folks ban women called choose rally behind hate women allowed tv radio immoral kingdom house neither ' +\n 'least nor favorite government earth restrict religious political lot among things likely replacements ' +\n 'going lot worse citizens country house feeling heat lately last six months read religious police ' +\n 'government western women fully stupid women imo sends wrong signals morality read cracked down few home ' +\n 'based religious posted government owned newspapers offering money turns group dare worship homes secret ' +\n 'place government grown try take wind conservative opposition things small taste happen guys house trying ' +\n 'long run others general west evil zionists rule hate west crowd')\n\ndata = [instance0, instance1, instance2]\n\n%%ml predict\nmodel: ./train/model/\nheaders: text\ndata: $data", "The first and second instances are predicted correctly. The third is wrong. Below we run \"%%ml explain\" to understand more.", "%%ml explain --detailview_only\nmodel: ./train/model\nlabels: rec.autos\ntype: text\ndata: $instance0\n\n%%ml explain --detailview_only\nmodel: ./train/model\nlabels: comp.windows.x\ntype: text\ndata: $instance1", "On instance 2, the top prediction result does not match truth. Predicted is \"talk.politics.guns\" while truth is \"talk.politics.mideast\". So let's analyze these two labels.", "%%ml explain --detailview_only\nmodel: ./train/model\nlabels: talk.politics.guns,talk.politics.mideast\ntype: text\ndata: $instance2", "Deploying Model to ML Engine\nNow that we have a trained model, have analyzed the results, and have tested the model output locally, we are ready to deploy it to the cloud for real predictions. \nDeploying a model requires the files are on GCS. 
The next few cells makes a bucket on GCS, copies the locally trained model, and deploys it.", "!gsutil -q mb gs://bq-mlworkbench-20news-lab\n\n# Move the regular model to GCS\n!gsutil -m cp -r ./train/model gs://bq-mlworkbench-20news-lab", "See this doc https://cloud.google.com/ml-engine/docs/how-tos/managing-models-jobs for a the definition of ML Engine models and versions. An ML Engine version runs predictions and is contained in a ML Engine model. We will create a new ML Engine model, and depoly the TensorFlow graph as a ML Engine version. This can be done using gcloud (see https://cloud.google.com/ml-engine/docs/how-tos/deploying-models), or Datalab which we use below.", "%%ml model deploy\npath: gs://bq-mlworkbench-20news-lab\nname: news.alpha", "How to Build Your Own Prediction Client\nA common task is to call a deployed model from different applications. Below is an example of writing a python client to run prediction. \nCovering model permissions topics is outside the scope of this notebook, but for more information see https://cloud.google.com/ml-engine/docs/tutorials/python-guide and https://developers.google.com/identity/protocols/application-default-credentials .", "from oauth2client.client import GoogleCredentials\nfrom googleapiclient import discovery\nfrom googleapiclient import errors\n\n# Store your project ID, model name, and version name in the format the API needs.\napi_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(\n your_project_ID=google.datalab.Context.default().project_id,\n model_name='news',\n version_name='alpha')\n\n# Get application default credentials (possible only if the gcloud tool is\n# configured on your machine). See https://developers.google.com/identity/protocols/application-default-credentials\n# for more info.\ncredentials = GoogleCredentials.get_application_default()\n\n# Build a representation of the Cloud ML API.\nml = discovery.build('ml', 'v1', credentials=credentials)\n\n# Create a dictionary containing data to predict.\n# Note that the data is a list of csv strings.\nbody = {\n 'instances': ['nasa',\n 'windows ex']}\n\n# Create a request\nrequest = ml.projects().predict(\n name=api_path,\n body=body)\n\nprint('The JSON request: \\n')\nprint(request.to_json())\n\n# Make the call.\ntry:\n response = request.execute()\n print('\\nThe response:\\n')\n print(json.dumps(response, indent=2))\nexcept errors.HttpError, err:\n # Something went wrong, print out some information.\n print('There was an error. Check the details:')\n print(err._get_reason())", "To demonstrate prediction client further, check API Explorer (https://developers.google.com/apis-explorer). it allows you to send raw HTTP requests to many Google APIs. 
This is useful for understanding the requests and responses, and helps you build your own client in your favorite language.\nPlease visit https://developers.google.com/apis-explorer/#search/ml%20engine/ml/v1/ml.projects.predict and enter the following values for each text box.", "# The output of this cell is placed in the name box\n# Store your project ID, model name, and version name in the format the API needs.\napi_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(\n    your_project_ID=google.datalab.Context.default().project_id,\n    model_name='news',\n    version_name='alpha')\nprint('Place the following in the name box')\nprint(api_path)", "The fields text box can be empty.\nNote that because we deployed the non-evaluation model, our deployed model takes a csv input which only has one column. In general, \"instances\" is a list of csv strings for models trained by MLWorkbench.\nClick in the request body box, and note a small drop down menu appears in the FAR RIGHT of the input box. Select \"Freeform editor\". Then enter the following in the request body box.", "print('Place the following in the request body box')\nrequest = {'instances': ['nasa', 'windows xp']}\nprint(json.dumps(request))", "Then click the \"Authorize and execute\" button. The prediction results are returned in the browser.\nCleaning up the deployed model", "%%ml model delete\nname: news.alpha\n\n%%ml model delete\nname: news\n\n# Delete the GCS bucket\n!gsutil -m rm -r gs://bq-mlworkbench-20news-lab\n\n# Delete BQ table\n\nbq.Dataset('newspredict').delete(delete_contents = True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td1a/td1a_cenonce_session4.ipynb
mit
[ "1A.2 - Modules, fichiers, expressions régulières\nLe langage Python est défini par un ensemble de règle, une grammaire. Seul, il n'est bon qu'à faire des calculs. Les modules sont des collections de fonctionnalités pour interagir avec des capteurs ou des écrans ou pour faire des calculs plus rapides ou plus complexes.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Fichiers\nLes fichiers permettent deux usages principaux :\n\nrécupérer des données d'une exécution du programme à l'autre (lorsque le programme s'arrête, toutes les variables sont perdues)\néchanger des informations avec d'autres programmes (Excel par exemple).\n\nLe format le plus souvent utilisé est le fichier plat, texte, txt, csv, tsv. C'est un fichier qui contient une information structurée sous forme de matrice, en ligne et colonne car c'est comme que les informations numériques se présentent le plus souvent. Un fichier est une longue séquence de caractères. Il a fallu choisir une convention pour dire que deux ensembles de caractères ne font pas partie de la même colonne ou de la même ligne. La convention la plus répandue est :\n\n\\t : séparateur de colonnes\n\\n : séparateur de lignes\n\nLe caractère \\ indique au langage python que le caractère qui suit fait partie d'un code. Vous trouverez la liste des codes : String and Bytes literals.\nAparté : aujourd'hui, lire et écrire des fichiers est tellement fréquent qu'il existe des outils qui font ça dans une grande variété de formats. Vous découvrirez cela lors de la séance 10. Il est utile pourtant de le faire au moins une fois soi-même pour comprendre la logique des outils et pour ne pas être bloqué dans les cas non prévus.\nEcrire et lire des fichiers est beaucoup plus long que de jouer avec des variables. Ecrire signifie qu'on enregistre les données sur le disque dur : elles passent du programme au disque dur (elles deviennent permanentes). 
Elles font le chemin inverse lors de la lecture.\nEcriture\nIl est important de retenir qu'un fichier texte ne peut recevoir que des chaînes de caractères.", "mat = [[1.0, 0.0],[0.0,1.0] ] # matrice de type liste de listes\nwith open (\"mat.txt\", \"w\") as f : # création d'un fichier en mode écriture\n for i in range (0,len (mat)) : # \n for j in range (0, len (mat [i])) : # \n s = str (mat [i][j]) # conversion en chaîne de caractères\n f.write (s + \"\\t\") #\n f.write (\"\\n\") # \n \n# on vérifie que le fichier existe : \nimport os\nprint([ _ for _ in os.listdir(\".\") if \"mat\" in _ ] )\n\n# la ligne précédente utilise le symbole _ : c'est une variable \n# le caractère _ est une lettre comme une autre\n# on pourrait écrire :\n# print([ fichier for fichier in os.listdir(\".\") if \"mat\" in fichier ] )\n# on utilise cette convention pour dire que cette variable n'a pas vocation à rester", "Le même programme mais écrit avec une écriture condensée :", "mat = [[1.0, 0.0],[0.0,1.0] ] # matrice de type liste de listes\nwith open (\"mat.txt\", \"w\") as f : # création d'un fichier\n s = '\\n'.join ( '\\t'.join( str(x) for x in row ) for row in mat )\n f.write ( s )\n \n# on vérifie que le fichier existe : \nprint([ _ for _ in os.listdir(\".\") if \"mat\" in _ ] ) ", "On regare les premières lignes du fichier mat2.txt :", "import pyensae\n%load_ext pyensae\n%head mat.txt", "Lecture", "with open (\"mat.txt\", \"r\") as f : # ouverture d'un fichier\n mat = [ row.strip(' \\n').split('\\t') for row in f.readlines() ]\nprint(mat)", "On retrouve les mêmes informations à ceci près qu'il ne faut pas oublier de convertir les nombres initiaux en float.", "with open (\"mat.txt\", \"r\") as f : # ouverture d'un fichier\n mat = [ [ float(x) for x in row.strip(' \\n').split('\\t') ] for row in f.readlines() ]\nprint(mat)", "Voilà qui est mieux. Le module os.path propose différentes fonctions pour manipuler les noms de fichiers. Le module os propose différentes fonctions pour manipuler les fichiers :", "import os\nfor f in os.listdir('.'):\n print (f)", "with\nDe façon pragmatique, l'instruction with permet d'écrire un code plus court d'une instruction : close. Les deux bouts de code suivant sont équivalents :", "with open(\"exemple_fichier.txt\", \"w\") as f:\n f.write(\"something\")\n\nf = open(\"exemple_fichier.txt\", \"w\")\nf.write(\"something\")\nf.close()", "L'instruction close ferme le fichier. A l'ouverture, le fichier est réservé par le programme Python, aucune autre application ne peut écrire dans le même fichier. Après l'instruction close, une autre application pour le supprimer, le modifier. Avec le mot clé with, la méthode close est implicitement appelée.\nà quoi ça sert ?\nOn écrit très rarement un fichier texte. Ce format est le seul reconnu par toutes les applications. Tous les logiciels, tous les langages proposent des fonctionnalités qui exportent les données dans un format texte. Dans certaines circonstances, les outils standards ne fonctionnent pas - trop grops volumes de données, problème d'encoding, caractère inattendu -. Il faut se débrouiller.\nExercice 1 : Excel $\\rightarrow$ Python $\\rightarrow$ Excel\nIl faut télécharger le fichier seance4_excel.xlsx qui contient une table de trois colonnes. Il faut :\n\nenregistrer le fichier au format texte,\nle lire sous python\ncréer une matrice carrée 3x3 où chaque valeur est dans sa case (X,Y),\nenregistrer le résultat sous format texte,\nle récupérer sous Excel. 
\n\nAutres formats de fichiers\nLes fichiers texte sont les plus simples à manipuler mais il existe d'autres formats classiques~:\n\nhtml : les pages web\nxml : données structurées\n[zip](http://fr.wikipedia.org/wiki/ZIP_(format_de_fichier), gz : données compressées\nwav, mp3, ogg : musique\nmp4, Vorbis : vidéo\n...\n\nModules\nLes modules sont des extensions du langages. Python ne sait quasiment rien faire seul mais il bénéficie de nombreuses extensions. On distingue souvent les extensions présentes lors de l'installation du langage (le module math) des extensions externes qu'il faut soi-même installer (numpy). Deux liens :\n\nmodules officiels\nmodules externes\n\nLe premier réflexe est toujours de regarder si un module ne pourrait pas vous être utile avant de commencer à programmer. Pour utiliser une fonction d'un module, on utilise l'une des syntaxes suivantes :", "import math\nprint (math.cos(1))\n\nfrom math import cos\nprint (cos(1))\n\nfrom math import * # cette syntaxe est déconseillée car il est possible qu'une fonction\nprint (cos(1)) # porte le même nom qu'une des vôtres", "Exercice 2 : trouver un module (1)\nAller à la page modules officiels (ou utiliser un moteur de recherche) pour trouver un module permettant de générer des nombres aléatoires. Créer une liste de nombres aléatoires selon une loi uniforme puis faire une permutation aléatoire de cette séquence.\nExercice 3 : trouver un module (2)\nTrouver un module qui vous permette de calculer la différence entre deux dates puis déterminer le jour de la semaine où vous êtes nés.\nModule qu'on crée soi-même\nIl est possible de répartir son programme en plusieurs fichiers. Par exemple, un premier fichier monmodule.py qui contient une fonction :", "# fichier monmodule.py\nimport math\n\ndef fonction_cos_sequence(seq) :\n return [ math.cos(x) for x in seq ]\n\nif __name__ == \"__main__\" :\n print (\"ce message n'apparaît que si ce programme est le point d'entrée\")", "La cellule suivante vous permet d'enregistrer le contenu de la cellule précédente dans un fichier appelée monmodule.py.", "code = \"\"\"\n# -*- coding: utf-8 -*-\nimport math\ndef fonction_cos_sequence(seq) :\n return [ math.cos(x) for x in seq ] \nif __name__ == \"__main__\" :\n print (\"ce message n'apparaît que si ce programme est le point d'entrée\")\n\"\"\"\nwith open(\"monmodule.py\", \"w\", encoding=\"utf8\") as f :\n f.write(code)", "Le second fichier :", "import monmodule\n\nprint ( monmodule.fonction_cos_sequence ( [ 1, 2, 3 ] ) )", "Note : Si le fichier monmodule.py est modifié, python ne recharge pas automatiquement le module si celui-ci a déjà été chargé. On peut voir la liste des modules en mémoire dans la variable sys.modules :", "import sys\nlist(sorted(sys.modules))[:10]", "Pour retirer le module de la mémoire, il faut l'enlever de sys.modules avec l'instruction del sys.modules['monmodule']. Python aura l'impression que le module monmodule.py est nouveau et il l'importera à nouveau.\nExercice 4 : son propre module\nQue se passe-t-il si vous remplacez if __name__ == \"__main__\": par if True :, ce qui équivaut à retirer la ligne if __name__ == \"__main__\": ?\nExpressions régulières\nPour la suite de la séance, on utilise comme préambule les instructions suivantes :", "import pyensae.datasource\ndiscours = pyensae.datasource.download_data('voeux.zip', website = 'xd')", "La documentation pour les expressions régulières est ici : regular expressions. 
Elles permettent de rechercher des motifs dans un texte :\n\n4 chiffres / 2 chiffres / 2 chiffres correspond au motif des dates, avec une expression régulière, il s'écrit : [0-9]{4}/[0-9]{2}/[0-9]{2}\nla lettre a répété entre 2 et 10 fois est un autre motif, il s'écrit : a{2,10}.", "import re # les expressions régulières sont accessibles via le module re\nexpression = re.compile(\"[0-9]{2}/[0-9]{2}/[0-9]{4}\")\ntexte = \"\"\"Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Ma seconde femme est morte le 10/11/63.\"\"\"\ncherche = expression.findall(texte)\nprint(cherche)", "Pourquoi la troisième date n'apparaît pas dans la liste de résultats ?\nExercice 5 : chercher un motif dans un texte\nOn souhaite obtenir toutes les séquences de lettres commençant par je ? Quel est le motif correspondant ? Il ne reste plus qu'à terminer le programme précédent.\nExercice 6 : chercher un autre motif dans un texte\nAvec la même expression régulière, rechercher indifféremment le mot securite ou insecurite.\nOn peut passer du temps à construire des expressions assez complexes surtout quand on oublie quelques Petites subtilités avec les expressions régulières en Python.\nExercice 7 : recherche les urls dans une page wikipédia\nOn pourra prendre comme exemple la page du programme Python.\nExercice 8 : construire un texte à motif\nA l'inverse des expressions régulières, des modules comme Mako ou Jinja2 permettent de construire simplement des documents qui suivent des règles. Ces outils sont très utilisés pour la construction de page web. On appelle cela faire du templating. Créer une page web qui affiche à l'aide d'un des modules la liste des dimanches de cette année." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
menpo/menpofit-notebooks
notebooks/DeformableModels/ConstrainedLocalModel/CLMs Basics.ipynb
bsd-3-clause
[ "Constrained Local Models - Basics\nThe aim of this notebook is to showcase how one can build and fit CLMs to images using menpofit.\nNote that this notebook assumes that the user has previously gone through the AAMs Basics notebook and he/she is already familiar with the basics of Menpo's Deformable Model Fitting framework explained in there.\n1. Loading data", "%matplotlib inline\nfrom pathlib import Path\n\n\npath_to_lfpw = Path('/vol/atlas/databases/lfpw')\n\nimport menpo.io as mio\n\ntraining_images = []\n# load landmarked images\nfor i in mio.import_images(path_to_lfpw / 'trainset', verbose=True):\n # crop image\n i = i.crop_to_landmarks_proportion(0.1)\n # convert it to grayscale if needed\n if i.n_channels == 3:\n i = i.as_greyscale(mode='luminosity')\n # append it to the list\n training_images.append(i)\n\nfrom menpowidgets import visualize_images\n\nvisualize_images(training_images)", "2. Build a CLM with default parameters\nBuilding a CLM using Menpo can be done using a single line of code.", "from menpofit.clm import CLM\n\n\nclm = CLM(\n training_images, \n verbose=True,\n group='PTS',\n diagonal=200\n)\n\nprint(clm)\n\nclm.view_clm_widget()", "3. Fit the previous CLM\nIn Menpo, CLMs can be fitted to images by creating Fitter objects around them. \nOne of the most popular algorithms for fitting CLMs is the Regularized Landmark Mean-Shift algorithm. In order to fit our CLM using this algorithm using Menpo, the user needs to define a GradientDescentCLMFitter object. This can be done again using a single line of code!!!", "from menpofit.clm import GradientDescentCLMFitter\n\nfitter = GradientDescentCLMFitter(clm, n_shape=[6, 12])", "Fitting a GradientDescentCLMFitter to an image is as simple as calling its fit method. Let's try it by fitting some images of the LFPW database test set!!!", "import menpo.io as mio\n\n# load test images\ntest_images = []\nfor i in mio.import_images(path_to_lfpw / 'testset', max_images=5, verbose=True):\n # crop image\n i = i.crop_to_landmarks_proportion(0.5)\n # convert it to grayscale if needed\n if i.n_channels == 3:\n i = i.as_greyscale(mode='luminosity')\n # append it to the list\n test_images.append(i)", "Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set.", "from menpofit.fitter import noisy_shape_from_bounding_box\n\nfitting_results = []\n\nfor i in test_images:\n gt_s = i.landmarks['PTS'].lms\n # generate perturbed landmarks\n s = noisy_shape_from_bounding_box(gt_s, gt_s.bounding_box())\n # fit image\n fr = fitter.fit_from_shape(i, s, gt_shape=gt_s) \n fitting_results.append(fr)\n\n # print fitting error\n print(fr)\n\nfrom menpowidgets import visualize_fitting_result\n\nvisualize_fitting_result(fitting_results)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Soil-Carbon-Coalition/atlasdata
Combining rows w groupby, transform, or multiIndex.ipynb
mit
[ "Transect\nGroupby and transform allow me to combine rows into a single 'transect' row.\nOr, use a multiIndex, a hierarchical index, so I can target specific cells using id and type. The index item for a multiIndex is a TUPLE.", "%matplotlib inline\nimport sys\nimport numpy as np\nimport pandas as pd\nimport json\nimport matplotlib.pyplot as plt\nfrom io import StringIO\nprint(sys.version)\nprint(\"Pandas:\", pd.__version__)\n\ndf = pd.read_csv('C:/Users/Peter/Documents/atlas/atlasdata/obs_types/transect.csv', parse_dates=['date'])\ndf = df.astype(dtype='str')# we don't need numbers in this dataset.\ndf=df.replace('nan','')\n#this turns dates into strings with the proper format for JSON:\n#df['date'] = df['date'].dt.strftime('%Y-%m-%d')\n\ndf.type = df.type.str.replace('\\*remonitoring notes','transect')\ndf.type = df.type.str.replace('\\*plot summary','transect')", "shift data to correct column\nusing loc for assignment: df.loc[destination condition, column] = df.loc[source]", "df.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column\n\n\ndf.loc[df.type.str.contains('lineminus'),['miscPhoto']]=df['url']\ndf.loc[df.type.str.contains('lineplus'),['miscPhoto']]=df['url']\ndf.loc[df.type.str.contains('misc'),['miscPhoto']]=df['url']\n\n\n#now to deal with type='photo'\nphotos = df[df.type=='photo']\nnonphotos = df[df.type != 'photo'] #we can concatenate these later\ngrouped = photos.groupby(['id','date'])\n\nphotos.shape\n\nvalues=grouped.groups.values()\nfor value in values:\n photos.loc[value[2],['type']] = 'misc'\n #photos.loc[value[1],['type']] = 'linephoto2'\n\nphotos.loc[photos.type=='linephoto1']\n\nfor name, group in grouped:\n print(grouped[name])\n\nphotos = df[df.type == 'photo']\nphotos.set_index(['id','date'],inplace=True)\nphotos.index[1]\n\nphotos=df[df.type=='photo']\nphotos.groupby(['id','date']).count()\n\nphotos.loc[photos.index[25],['type','note']]\n\n#combine photo captions\ndf['caption']=''\ndf.loc[(df.type.str.contains('lineminus'))|(df.type.str.contains('lineplus')),['caption']]=df['type'] + ' | ' + df['note'] \ndf.loc[df.type.str.contains('lineplus'),['caption']]=df['url']\ndf.loc[df.type.str.contains('misc'),['caption']]=df['url']\n\n\n\n\ndf['mystart'] = 'Baseline summary:'\ndf.loc[df.type =='transect',['site_description']]= df[['mystart','label1','value1','label2','value2','label3','value3','note']].apply(' | '.join, axis=1)\ndf.loc[df.type.str.contains('line-'),['linephoto1']]=df['url']\ndf.loc[df.type.str.contains('line\\+'),['linephoto2']]=df['url']#be sure to escape the +\ndf.loc[df.type.str.contains('linephoto1'),['linephoto1']]=df['url']\ndf.loc[df.type.str.contains('linephoto2'),['linephoto2']]=df['url']\n\ndf.loc[df.type == 'plants',['general_observations']]=df['note']", "use groupby and transform to fill the row", "#since we're using string methods, NaNs won't work\nmycols =['general_observations','mapPhoto','linephoto1','linephoto2','miscPhoto','site_description']\nfor item in mycols:\n df[item] = df[item].fillna('')\n\ndf.mapPhoto = df.groupby('id')['mapPhoto'].transform(lambda x: \"%s\" % ''.join(x))\n\ndf.linephoto1 = df.groupby(['id','date'])['linephoto1'].transform(lambda x: \"%s\" % ''.join(x))\ndf.linephoto2 = df.groupby(['id','date'])['linephoto2'].transform(lambda x: \"%s\" % ''.join(x))\ndf.miscPhoto = df.groupby(['id','date'])['miscPhoto'].transform(lambda x: \"%s\" % ''.join(x))\n\n\ndf['site_description'] = df['site_description'].str.strip()\n\ndf.to_csv('test.csv')\n#done to here. 
Next, figure out what to do with linephotos, unclassified photos, and their notes.\n#make column for photocaptions. When adding linephoto1, add 'note' and 'type' fields to caption column. E.g. 'linephoto1: 100line- | view east along transect.' Then join the rows in the groupby transform and add to site_description field.\n\ndf.shape\n\ndf[(df.type.str.contains('line\\+'))&(df.linephoto2.str.len()<50)]\n\nmaps.str.len().sort_values()\n", "shift data to correct row using a multi-Index", "ids = list(df['id'])#make a list of ids to iterate over, before the hierarchical index\n\n#df.type = df.type.map({'\\*plot summary':'transect','\\*remonitoring notes':'transect'})\n\ndf.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column\n\ndf.set_index(['id','type'],inplace=True) # hierarchical index so we can call locations\n\n#a hierarchical index uses a tuple. You can set values using loc.\n#this format: df.loc[destination] = df.loc[source].values[0]\nfor item in ids:\n df.loc[(item,'*plot summary'),'mapPhoto'] = df.loc[(item,'map'),'mapPhoto'].values[0]\n\n#generates a pink warning about performance, but oh well.\n\n#here we are using an expression in parens to test for a condition\n(df['type'].str.contains('\\s') & df['note'].notnull()).value_counts()\n\ndf.url = df.url.str.replace(' ','_');df.url\n\ndf.url.head()\n\ndf['newurl'] = df.url.str.replace\n\ndf.newurl.head()\n\n#for combining rows try something like this:\nprint(df.groupby('somecolumn')['temp variable'].apply(' '.join).reset_index())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ajgpitch/qutip-notebooks
examples/homodyned-Jaynes-Cummings-emission.ipynb
lgpl-3.0
[ "QuTiP example: Homodyned Jaynes-Cummings emission\nK.A. Fischer, Stanford University\nThis Jupyter notebook demonstrates how to simulate quantum statistics of homodyned emission from a detuned Jaynes-Cummings system. The\npurpose is to understand how well the first polariton of a dissipative Jaynes-Cummings system can act as an ideal two-level system. This notebook closely follows an example from my simulation paper, <a href=\"https://arxiv.org/abs/1611.01566\">An architecture for self-homodyned nonclassical light</a>, Phys. Rev. Applied 7, 044002 (2017).\nFor more information about QuTiP see the project web page: http://qutip.org/", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom qutip import *", "Introduction for the two-level system\nThe quantum two-level system (TLS) is the simplest possible model for quantum light-matter interaction. In the version we simulate here, the system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonain\n$$ H_\\mathrm{TLS} =\\hbar \\omega_0 \\sigma^\\dagger \\sigma + \\frac{\\hbar\\Omega_\\mathrm{TLS}(t)}{2}\\left( \\sigma\\textrm{e}^{-i\\omega_dt} + \\sigma^\\dagger \\textrm{e}^{i\\omega_dt}\\right),$$\nwhere $\\omega_0$ is the system's transition frequency, $\\sigma$ is the system's atomic lowering operator, $\\omega_d$ is the coherent state's center frequency, and $\\Omega_\\mathrm{TLS}(t)$ is the coherent state's driving strength.\nThe time-dependence can be removed to simplify the simulation by a rotating frame transformation, and is particularly simple when the driving field is resonant with the transition frequency ($\\omega_d=\\omega_0$). Then,\n$$ \\tilde{H}_\\mathrm{TLS} =\\frac{\\hbar\\Omega(t)}{2}\\left( \\sigma+ \\sigma^\\dagger \\right).$$\nSetup the two-level system properties", "# define system operators\ngamma = 1 # decay rate\nsm_TLS = destroy(2) # dipole operator\nc_op_TLS = [np.sqrt(gamma)*sm_TLS] # represents spontaneous emission\n\n# choose range of driving strengths to simulate\nOm_list_TLS = gamma*np.logspace(-2, 1, 300)\n\n# calculate steady-state density matricies for the driving strengths\nrho_ss_TLS = []\nfor Om in Om_list_TLS:\n H_TLS = Om * (sm_TLS + sm_TLS.dag())\n rho_ss_TLS.append(steadystate(H_TLS, c_op_TLS))", "The emission can be decomposed into a so-called coherent and incoherent portion. The coherent portion is simply due to the classical mean of the dipole moment, i.e.\n$$I_\\mathrm{c}=\\lim_{t\\rightarrow\\infty}\\Gamma\\langle\\sigma^\\dagger(t)\\rangle\\langle\\sigma(t)\\rangle,$$\nwhile the incoherent portion is due to the standard deviation of the dipole moment (which represents its quantum fluctuations), i.e.\n$$I_\\mathrm{inc}=\\lim_{t\\rightarrow\\infty}\\Gamma\\langle\\sigma^\\dagger(t)\\sigma(t)\\rangle-I_\\mathrm{c}.$$\nTogether, these emissions conspire in a way to result in zero second-order coherence for the two-level system, i.e. 
$g^{(2)}(0)=0$.", "# decompose the emitted light into the coherent and incoherent \n# portions\nI_c_TLS = expect(sm_TLS.dag(), rho_ss_TLS)*expect(sm_TLS, rho_ss_TLS)\nI_inc_TLS = expect(sm_TLS.dag()*sm_TLS, rho_ss_TLS) - I_c_TLS", "Visualize the incoherent and coherent emissions", "plt.semilogx(Om_list_TLS, abs(I_c_TLS), \n label='TLS $I_\\mathrm{c}$')\nplt.semilogx(Om_list_TLS, abs(I_inc_TLS), \n 'r', label='TLS $I_\\mathrm{inc}$')\nplt.xlabel('Driving strength [$\\Gamma$]')\nplt.ylabel('Normalized flux [$\\Gamma$]')\nplt.legend(loc=2);", "Introduction for the Jaynes-Cummings system\nThe quantum Jaynes-Cummings (JC) system represents one of the most fundamental models for quantum light-matter interaction, which models the interaction between a quantum two-level system (e.g. an atomic transition) and a single photonic mode. Here, the strong interaction between light and matter creates new quantum states known as polaritons in an anharmonic ladder of states. In a phenomenon known as photon blockade, the most anharmonic polariton is used as a two-level system to produce emission with $g^{(2)}(0)<1$. We will investigate how well the emission compares to that of a two-level system by comparing both its coherent and incoherent components as well as its $g^{(2)}(0)$.\nIn the version we simulate here, the Jaynes-Cummings system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonain\n$$ H =\\hbar \\omega_a a^\\dagger a + \\hbar \\left(\\omega_a+\\Delta\\right) \\sigma^\\dagger \\sigma+ \\hbar g\\left(a^\\dagger\\sigma +a\\sigma^\\dagger\\right) + \\frac{\\hbar\\Omega(t)}{2}\\left( a\\textrm{e}^{-i\\omega_dt} + a^\\dagger \\textrm{e}^{i\\omega_dt}\\right),$$\nwhere additionally $\\omega_a$ is the cavity's resonant frequency and $\\Delta$ is the cavity-atom detuning. We will investigate for finite $\\Delta$ because this increases the anharmonicity of the Jaynes-Cummings ladder. The time-dependence can additionally be removed to simplify the simulation by a rotating frame transformation in a very similar manner as before.\nSetup the JC system properties", "# truncate size of cavity's Fock space\nN = 15\n\n# setup system operators\nsm = tensor(destroy(2), qeye(N))\na = tensor(qeye(2), destroy(N))\n\n# define system parameters, barely into strong coupling regime\nkappa = 1\ng = 0.6 * kappa\ndetuning = 3 * g # cavity-atom detuning\ndelta_s = detuning/2 + np.sqrt(detuning ** 2 / 4 + g ** 2)\n\n# we only consider cavities in the good-emitter limit, where \n# the atomic decay is irrelevant\nc_op = [np.sqrt(kappa)*a]", "Effective polaritonic two-level system\nIn the ideal scenario, the most anharmonic polariton and the ground state form an ideal two-level system with effective emission rate of\n$$\\Gamma_\\mathrm{eff}= \\frac{\\kappa}{2}+2\\,\\textrm{Im} \\left{\\sqrt{ g^2-\\left( \\frac{\\kappa}{4}+\\frac{\\textbf{i}\\Delta}{2} \\right)^2 }\\right}.$$", "effective_gamma = kappa / 2 + 2 * np.imag(\n np.sqrt(g ** 2 - (kappa / 4 + 1j * detuning / 2) ** 2))\n\n# set driving strength based on the effective polariton's \n# emission rate (driving strength goes as sqrt{gamma})\nOm = 0.4 * np.sqrt(effective_gamma)", "Define reference system for homodyne interference\nFor the purposes of optimally homodyning the JC output, we wish to transmit light through a bare cavity (no atom involved) and calculate its coherent amplitude. 
(This of course could easily be analytically calculated but QuTiP certainly is trivially capable of such a calculation.)", "# reference cavity operator\na_r = destroy(N)\nc_op_r = [np.sqrt(kappa)*a_r]\n\n# reference cavity Hamiltonian, no atom coupling\nH_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r\n\n# solve for coherent state amplitude at driving strength Om\nrho_ss_c = steadystate(H_c, c_op_r)\nalpha = -expect(rho_ss_c, a_r)\nalpha_c = alpha.conjugate()", "Calculate JC emission\nThe steady-state emitted flux from the JC system is given by $T=\\kappa\\langle a^\\dagger a \\rangle$, however with an additional homodyne interference it is $T=\\langle b^\\dagger b \\rangle$, where the operator $b=\\sqrt{\\kappa}/2\\, a + \\beta$ is a new operator representing the interference between the JC emssion and a coherent state of amplitude $\\beta$.\nThe interference present in the operator $b$ now allows for the alteration of the measured portion of the coherently scattered light, though it leaves the incoherent portion unchanged since the incident flux has only a coherent portion. We're interested in studying the optimal homodyne interference to allow the JC emission to match the TLS emission as closely as possible. This optimum is determined from the above reference cavity, such that $\\beta=-\\sqrt{\\kappa}/2\\langle a_\\textrm{ref} \\rangle$.", "def calculate_rho_ss(delta_scan):\n H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \\\n delta_scan * (\n sm.dag() * sm + a.dag() * a) - detuning * sm.dag() * sm\n return steadystate(H, c_op)\n\n\ndelta_list = np.linspace(-6 * g, 9 * g, 200)\nrho_ss = parfor(calculate_rho_ss, delta_list)\n\n# calculate JC emission\nI = expect(a.dag()*a, rho_ss)\n\n# calculate JC emission homodyned with optimal state beta\nI_int = expect((a.dag() + alpha_c) * (a + alpha), rho_ss)", "Visualize the emitted flux with and without interference\nThe dashed black line shows the intensity without interference and the violet line shows the intensity with interference. The vertical gray line indicates the spectral position of the anharmonic polariton. 
Note its narrower linewidth due to the slower effective decay rate (more atom-like since we're in the good-emitter limit).", "plt.figure(figsize=(8,5))\n\nplt.plot(delta_list/g, I/effective_gamma,\n 'k', linestyle='dashed', label='JC')\nplt.plot(delta_list/g, I_int/effective_gamma,\n 'blueviolet', label='JC w/ interference')\nplt.vlines(delta_s/g, 0, 0.7, 'gray')\nplt.xlim(-6, 9)\nplt.ylim(0, 0.7)\nplt.xlabel('Detuning [g]')\nplt.ylabel('Noramlized flux [$\\Gamma_\\mathrm{eff}$]')\nplt.legend(loc=1);", "Calculate coherent/incoherent portions of emission from JC system and its $g^{(2)}(0)$\nWe note that\n$$g^{(2)}(0)=\\frac{\\langle a^\\dagger a^\\dagger a a \\rangle}{\\langle a^\\dagger a \\rangle^2}.$$", "Om_list = kappa*np.logspace(-2, 1, 300)*np.sqrt(effective_gamma)\n\ndef calculate_rho_ss(Om):\n H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \\\n delta_s*(sm.dag()*sm + a.dag()*a) - detuning*sm.dag()*sm\n return steadystate(H, c_op)\n\nrho_ss = parfor(calculate_rho_ss, Om_list)\n\n# decompose emission again into incoherent and coherent portions\nI_c = expect(a.dag(), rho_ss)*expect(a, rho_ss)\nI_inc = expect(a.dag()*a, rho_ss) - I_c\n\n# additionally calculate g^(2)(0)\ng20 = expect(a.dag()*a.dag()*a*a, rho_ss)/expect(a.dag()*a, rho_ss)**2", "Visualize the results\nThe dashed black line in the top figure represents the coherent portion of the emission and can clearly be seen to dominate the emission for large driving strengths. Here, the emission significantly deviates from that of a two-level system, which saturates by these driving strengths. The lack of saturation for the JC system occurs due to the harmonic ladder above the anharmonic polariton. Additionally, the $g^{(2)}(0)$ values are all quite large relative to the ideal TLS value of zero (bottom plot).", "plt.figure(figsize=(8,8))\n\nplt.subplot(211)\nplt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_c)/kappa,\n 'k', linestyle='dashed', label='JC $I_\\mathrm{c}$')\nplt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_inc)/kappa,\n 'r', linestyle='dashed', label='JC $I_\\mathrm{inc}$')\nplt.xlabel(r'Driving strength [$\\Gamma_\\mathrm{eff}$]')\nplt.ylabel('Normalized Flux [$\\kappa$]')\nplt.legend(loc=2)\n\nplt.subplot(212)\nplt.loglog(Om_list/np.sqrt(effective_gamma), g20,\n 'k', linestyle='dashed')\nlim = (1e-4, 2e0)\nplt.ylim(lim)\nplt.xlabel(r'Driving strength [$\\Gamma_\\mathrm{eff}$]')\nplt.ylabel('$g^{(2)}(0)$');", "Calculate homodyned JC emission\nNow we recalculate the coherent and incoherent portions as well as the $g^{(2)}(0)$ for the homodyned JC emission, but use the operator $b$ instead of $\\sqrt{\\kappa}/2\\,a$. 
Thus\n$$g^{(2)}(0)=\\frac{\\langle b^\\dagger b^\\dagger b b \\rangle}{\\langle b^\\dagger b \\rangle^2}.$$", "def calculate_rho_ss_c(Om):\n H_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r\n return steadystate(H_c, c_op_r)\n\nrho_ss_c = parfor(calculate_rho_ss_c, Om_list)\n\n# calculate list of interference values for all driving strengths\nalpha_list = -expect(rho_ss_c, a_r)\nalpha_c_list = alpha_list.conjugate()\n\n# decompose emission for all driving strengths\ng20_int = []\nI_c_int = []\nI_inc_int = []\nfor i, rho in enumerate(rho_ss):\n g20_int.append(\n expect((a.dag() + alpha_c_list[i]) * \n (a.dag() + alpha_c_list[i]) * \n (a + alpha_list[i]) * \n (a + alpha_list[i]),\n rho) /\n expect((a.dag() + alpha_c_list[i]) * \n (a + alpha_list[i]),\n rho)**2\n )\n I_c_int.append(expect(a.dag() + alpha_c_list[i], rho) * \n expect(a + alpha_list[i], rho))\n I_inc_int.append(expect(\n (a.dag() + alpha_c_list[i]) * \n (a + alpha_list[i]), rho) - I_c_int[-1])", "Calculate the results\nThe dashed red and blue lines, which represent the TLS decomposition are now matched well by the JC decomposition with optimal homodyne interference (red and blue). The dashed black line is shown again as a reminder of the JC system's coherent emission without interference, which does not saturate for large driving strengths. Additionally, with the interference the $g^{(2)}(0)$ value improves by many orders of magnitude.", "plt.figure(figsize=(8,8))\n\nplt.subplot(211)\nplt.semilogx(Om_list_TLS, abs(I_c_TLS),\n linestyle='dashed', label='TLS $I_\\mathrm{c}$')\nplt.semilogx(Om_list_TLS, abs(I_inc_TLS), 'r', \n linestyle='dashed', label='TLS $I_\\mathrm{inc}$')\nplt.semilogx(Om_list/np.sqrt(effective_gamma),\n abs(I_c/effective_gamma), 'k', linestyle='dashed', \n label='JC $I_\\mathrm{c}$')\nplt.semilogx(Om_list/np.sqrt(effective_gamma),\n abs(I_inc/effective_gamma), \n 'r', label='JC $I_\\mathrm{inc}$')\nplt.semilogx(Om_list/np.sqrt(effective_gamma),\n abs(I_c_int/effective_gamma),\n 'b', label='JC w/ homodyne $I_\\mathrm{c}$')\nplt.semilogx(Om_list/np.sqrt(effective_gamma),\n abs(I_inc_int/effective_gamma),\n 'r')\nplt.ylim(5e-4, 0.6)\nplt.xlabel(r'Driving strength [$\\Gamma_\\mathrm{eff}$]')\nplt.ylabel('Normalized flux [$\\Gamma_\\mathrm{eff}$]')\nplt.legend(loc=2)\n\nplt.subplot(212)\nplt.loglog(Om_list/np.sqrt(effective_gamma), g20,\n 'k', linestyle='dashed', label='JC')\nplt.loglog(Om_list/np.sqrt(effective_gamma), g20_int,\n 'blueviolet', label='JC w/ interference')\nplt.ylim(lim)\nplt.xlabel(r'Driving strength [$\\Gamma_\\mathrm{eff}$]')\nplt.ylabel(r'$g^{(2)}(0)$')\nplt.legend(loc=4);", "Second-order coherence with delay\nWe additionally consider the second-order coherence as a function of time delay, i.e.\n$$g^{(2)}(\\tau)=\\lim_{t\\rightarrow\\infty}\\frac{\\langle b^\\dagger(t)b^\\dagger(t+\\tau)b(t+\\tau)b(t)\\rangle}{\\langle b^\\dagger(t)b(t)\\rangle^2},$$\nand show how it is calculated in the context of homodyne interference.", "# first calculate the steady state\nH = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \\\n delta_s * (sm.dag() * sm + a.dag() * a) - \\\n detuning * sm.dag() * sm\nrho0 = steadystate(H, c_op)\n\ntaulist = np.linspace(0, 5/effective_gamma, 1000)\n\n# next evolve the states according the quantum regression theorem\n\n# ...with the b operator\ncorr_vec_int = expect(\n (a.dag() + alpha.conjugate()) * (a + alpha),\n mesolve(\n H, (a + alpha) * rho0 * (a.dag() + alpha.conjugate()),\n taulist, c_op, [],\n options=Options(atol=1e-13, rtol=1e-11)\n 
).states\n)\nn_int = expect(rho0, (a.dag() + alpha.conjugate()) * (a + alpha))\n\n# ...with the a operator\ncorr_vec = expect(\n a.dag() * a ,\n mesolve(\n H, a * rho0 * a.dag(),\n taulist, c_op, [],\n options=Options(atol=1e-12, rtol=1e-10)\n ).states\n)\nn = expect(rho0, a.dag() * a)\n\n# ...perform the same for the TLS comparison\nH_TLS = Om*(sm_TLS + sm_TLS.dag())*np.sqrt(effective_gamma)\nc_ops_TLS = [sm_TLS*np.sqrt(effective_gamma)]\nrho0_TLS = steadystate(H_TLS, c_ops_TLS)\ncorr_vec_TLS = expect(\n sm_TLS.dag() * sm_TLS,\n mesolve(\n H_TLS, sm_TLS * rho0_TLS * sm_TLS.dag(),\n taulist, c_ops_TLS, []\n ).states\n)\nn_TLS = expect(rho0_TLS, sm_TLS.dag() * sm_TLS)", "Visualize the comparison to TLS correlations\nAt a moderate driving strength, the JC correlation (dashed black line) is seen to significantly deviate from that of the TLS (dotted purple line). On the other hand, after the optimal homodyne inteference, the emission correlations (solid purple line) match the ideal correlations very well.", "plt.figure(figsize=(8,5))\n\nl1, = plt.plot(taulist*effective_gamma, corr_vec_TLS/n_TLS**2,\n 'blueviolet', linestyle='dotted', label='TLS')\nplt.plot(taulist*effective_gamma, corr_vec/n**2,\n 'k', linestyle='dashed', label='JC')\nplt.plot(taulist*effective_gamma, corr_vec_int/n_int**2,\n 'blueviolet', label='JC w/ interference')\nplt.xlabel('$\\\\tau$ [$1/\\Gamma_\\mathrm{eff}$]')\nplt.ylabel('$g^{(2)}(\\\\tau)$')\nplt.legend(loc=2);", "Versions", "from qutip.ipynbtools import version_table\n\nversion_table()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
anonyXmous/CapstoneProject
Mini_Project_Naive_Bayes.ipynb
unlicense
[ "Basic Text Classification with Naive Bayes\n\nIn the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please free to go to the original lab for additional exercises and solutions.", "%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom six.moves import range\n\n# Setup Pandas\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\n\n# Setup Seaborn\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")", "Table of Contents\n\nRotten Tomatoes Dataset\nExplore\n\n\nThe Vector Space Model and a Search Engine\nIn Code\n\n\nNaive Bayes\nMultinomial Naive Bayes and Other Likelihood Functions\nPicking Hyperparameters for Naive Bayes and Text Maintenance\n\n\nInterpretation\n\nRotten Tomatoes Dataset", "critics = pd.read_csv('./critics.csv')\n#let's drop rows with missing quotes\ncritics = critics[~critics.quote.isnull()]\ncritics.head()", "Explore", "n_reviews = len(critics)\nn_movies = critics.rtid.unique().size\nn_critics = critics.critic.unique().size\n\n\nprint(\"Number of reviews: {:d}\".format(n_reviews))\nprint(\"Number of critics: {:d}\".format(n_critics))\nprint(\"Number of movies: {:d}\".format(n_movies))\n\ndf = critics.copy()\ndf['fresh'] = df.fresh == 'fresh'\ngrp = df.groupby('critic')\ncounts = grp.critic.count() # number of reviews by each critic\nmeans = grp.fresh.mean() # average freshness for each critic\n \nmeans[counts > 100].hist(bins=10, edgecolor='w', lw=1)\nplt.xlabel(\"Average Rating per critic\")\nplt.ylabel(\"Number of Critics\")\nplt.yticks([0, 2, 4, 6, 8, 10]);", "<div class=\"span5 alert alert-info\">\n<h3>Exercise Set I</h3>\n<br/>\n<b>Exercise/Answers:</b> \n<br/>\n<li> Look at the histogram above. Tell a story about the average ratings per critic. \n<b> The average fresh ratings per critic is around 0.6 with a minimum ratings of 0.35 and max of 0.81 </b>\n<li> What shape does the distribution look like? \n<b> The shape looks like a normal distribution or bell shape </b>\n<li> What is interesting about the distribution? What might explain these interesting things?\n<b> </b>\n</div>\n\nThe Vector Space Model and a Search Engine\nAll the diagrams here are snipped from Introduction to Information Retrieval by Manning et. al. which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze.\nAlso check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec.\nLet us define the vector derived from document $d$ by $\\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry \"slot\" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. 
Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a corpus.\nTo define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So \"hello\" may be at index 5 and \"world\" at index 99.\nSuppose we have the following corpus:\nA Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.\nSuppose we treat each sentence as a document $d$. The vocabulary (often called the lexicon) is the following:\n$V = \\left{\\right.$ a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with$\\left.\\right}$\nThen the document\nA Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree\nmay be represented as the following sparse vector of word counts:\n$$\\bar V(d) = \\left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \\right)$$\nor more succinctly as\n[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),\n(26, 1), (30, 1), (31, 1)]\nalong with a dictionary\n{\n 0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes, \n 15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the, \n 30: tree, 31: vine, \n}\nThen, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary.\nNotice that this representation loses the relative ordering of the terms in the document. That is \"cat ate rat\" and \"rat ate cat\" are the same. Thus, this representation is also known as the Bag-Of-Words representation.\nHere is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:\n\nSuch a matrix is also catted a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other \"Natural Language Processing\" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove \"stopwords\" from our vocabulary, such as common words like \"the\". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application.\nFrom the book:\n\nThe standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\\bar V(d_1)$ and $\\bar V(d_2)$:\n\n$$S_{12} = \\frac{\\bar V(d_1) \\cdot \\bar V(d_2)}{|\\bar V(d_1)| \\times |\\bar V(d_2)|}$$\n\n\nThere is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. 
\n\n\n\nThe key idea now: to assign to each document d a score equal to the dot product:\n\n$$\\bar V(q) \\cdot \\bar V(d)$$\nThen we can use this simple Vector Model as a Search engine.\nIn Code", "from sklearn.feature_extraction.text import CountVectorizer\n\ntext = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']\nprint(\"Original text is\\n{}\".format('\\n'.join(text)))\n\nvectorizer = CountVectorizer(min_df=0)\n\n# call `fit` to build the vocabulary\nvectorizer.fit(text)\n\n# call `transform` to convert text to a bag of words\nx = vectorizer.transform(text)\n\n# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to \n# convert back to a \"normal\" numpy array\nx = x.toarray()\n\nprint(\"\")\nprint(\"Transformed text vector is \\n{}\".format(x))\n\n# `get_feature_names` tracks which word is associated with each column of the transformed x\nprint(\"\")\nprint(\"Words for each feature:\")\nprint(vectorizer.get_feature_names())\n\n# Notice that the bag of words treatment doesn't preserve information about the *order* of words, \n# just their frequency\n\ndef make_xy(critics, vectorizer=None):\n #Your code here \n if vectorizer is None:\n vectorizer = CountVectorizer()\n X = vectorizer.fit_transform(critics.quote)\n X = X.tocsc() # some versions of sklearn return COO format\n y = (critics.fresh == 'fresh').values.astype(np.int)\n return X, y\nX, y = make_xy(critics)", "Naive Bayes\nFrom Bayes' Theorem, we have that\n$$P(c \\vert f) = \\frac{P(c \\cap f)}{P(f)}$$\nwhere $c$ represents a class or category, and $f$ represents a feature vector, such as $\\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as\n$$P(c \\vert f) \\propto P(f \\vert c) P(c) $$\n$P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \\vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then:\n$$P(A \\cap B) = P(A) \\cdot P(B)$$\nThus, conditional independence implies\n$$P(f \\vert c) = \\prod_i P(f_i | c) $$\nwhere $f_i$ is an individual feature (a word in this example).\nTo make a classification, we then choose the class $c$ such that $P(c \\vert f)$ is maximal.\nThere is a small caveat when computing these probabilities. For floating point underflow we change the product into a sum by going into log space. This is called the LogSumExp trick. So:\n$$\\log P(f \\vert c) = \\sum_i \\log P(f_i \\vert c) $$\nThere is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \\vert c) = 0$ for that term, and thus $P(f \\vert c) = \\prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\\alpha$ to each count. 
This is called Laplace Smoothing.\n$$P(f_i \\vert c) = \\frac{N_{ic}+\\alpha}{N_c + \\alpha N_i}$$\nwhere $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\\alpha$ is sometimes called a regularization parameter.\nMultinomial Naive Bayes and Other Likelihood Functions\nSince we are modeling word counts, we are using variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution.\n$$P(f \\vert c) = \\frac{\\left( \\sum_i f_i \\right)!}{\\prod_i f_i!} \\prod_{f_i} P(f_i \\vert c)^{f_i} \\propto \\prod_{i} P(f_i \\vert c)$$\nwhere the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1.\nThere are many other variations of Naive Bayes, all which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use Gaussian Naive Bayes. First compute the mean and variance for each class $c$. Then the likelihood, $P(f \\vert c)$ is given as follows\n$$P(f_i = v \\vert c) = \\frac{1}{\\sqrt{2\\pi \\sigma^2_c}} e^{- \\frac{\\left( v - \\mu_c \\right)^2}{2 \\sigma^2_c}}$$\n<div class=\"span5 alert alert-info\">\n<h3>Exercise Set II</h3>\n\n<p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p>\n\n<ol>\n<li> split the data set into a training and test set\n<li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters.\n<li> train the classifier over the training set and test on the test set\n<li> print the accuracy scores for both the training and the test sets\n</ol>\n\nWhat do you notice? Is this a good classifier? If not, why not?\n<b>Noticed that the accuracy on test set is 100%.\nThe model perfectly predicted if the movie will be rated as fresh based on the reviews and this is a very good classifier\n</b>\n</div>", "# your turn\n# split the data set into a training and test set\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import MultinomialNB\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)\n\nclf = MultinomialNB()\nclf.fit(X_train, y_train)\n\nprint('accuracy score on training set: ', clf.score(X_train, y_train))\nprint('accuracy score on test set: ', clf.score(X_test, clf.predict(X_test)))\nprint('Noticed that the accuracy on test set is 100%.')\nprint('The model perfectly predicted if the movie will be rated as fresh based on the reviews')\nprint('This is a very good classifier')", "Picking Hyperparameters for Naive Bayes and Text Maintenance\nWe need to know what value to use for $\\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.\nFirst, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation:\n\nmin_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. 
This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.\n\n<div class=\"span5 alert alert-info\">\n<h3>Exercise Set III</h3>\n\n<p><b>ANSWERS:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p>\n\n<b> Done, please see below cell </b> \n<p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p>\n<b>The curve climbing steeply at 1 and starts to plateau at 60.\nmin_df=1 while max_df=60</b>\n</div>", "# Your turn.\n# contruct the frequency of words\nvectorizer = CountVectorizer(stop_words='english')\nX = vectorizer.fit_transform(critics.quote)\nword_freq_df = pd.DataFrame({'term': vectorizer.get_feature_names(), 'occurrences':np.asarray(X.sum(axis=0)).ravel().tolist()})\nword_freq_df['frequency'] = word_freq_df['occurrences']/np.sum(word_freq_df['occurrences'])\nword_freq_sorted=word_freq_df.sort_values('occurrences', ascending = False)\nword_freq_sorted.reset_index(drop=True, inplace=True)\nsum_words = len(word_freq_sorted)\n\n# create the cum frequency distribution\nsaved_cnt=0\ndf=[]\nfor i in range(1, 100): \n prev_cnt = len(word_freq_sorted[word_freq_sorted['occurrences']==i])\n saved_cnt += prev_cnt\n if i==1:\n df=pd.DataFrame([[i, prev_cnt, prev_cnt, prev_cnt/sum_words]], columns=['x', 'freq','cumfreq', 'percent'])\n else:\n df2=pd.DataFrame([[i, prev_cnt, saved_cnt, saved_cnt/sum_words]], columns=['x', 'freq','cumfreq', 'percent']) \n df = df.append(df2, ignore_index=True)\n\n# create the bar grapp \nplt.bar(df.x, df.percent, align='center', alpha=0.5)\nplt.xticks(range(0,101,10))\nplt.ylabel('Percentage of words that appears less than x')\nplt.xlabel('Document count of words (x)')\nplt.title('Cumulative percent distribution of words that appears in the reviews')\n \nplt.show()\n", "The parameter $\\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold.", "from sklearn.model_selection import KFold\ndef cv_score(clf, X, y, scorefunc):\n result = 0.\n nfold = 5\n for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times\n clf.fit(X[train], y[train]) # fit the classifier, passed is as clf.\n result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data\n return result / nfold # average", "We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. 
Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.\nThe custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function.", "def log_likelihood(clf, x, y):\n prob = clf.predict_log_proba(x)\n rotten = y == 0\n fresh = ~rotten\n return prob[rotten, 0].sum() + prob[fresh, 1].sum()", "We'll cross-validate over the regularization parameter $\\alpha$.\nLet's set up the train and test masks first, and then we can run the cross-validation procedure.", "from sklearn.model_selection import train_test_split\n_, itest = train_test_split(range(critics.shape[0]), train_size=0.7)\nmask = np.zeros(critics.shape[0], dtype=np.bool)\nmask[itest] = True\n", "<div class=\"span5 alert alert-info\">\n<h3>Exercise Set IV</h3>\n\n<p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p>\n<b> ANSWER: The function log_likelihood is the logarithmic value of the probability </b>\n<p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\\alpha$ that is too high?</p> <b>ANSWER: A large value of alpha will overfit the model </b>\n<p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p>\n<b/> ANSWER: the best `alpha` is equal to 1\n</div>", "from sklearn.naive_bayes import MultinomialNB\n\n#the grid of parameters to search over\nalphas = [.1, 1, 5, 10, 50]\nbest_min_df = 1 # YOUR TURN: put your value of min_df here.\n\n#Find the best value for alpha and min_df, and the best classifier\nbest_alpha = None\nmaxscore=-np.inf\nfor alpha in alphas: \n vectorizer = CountVectorizer(min_df=best_min_df) \n Xthis, ythis = make_xy(critics, vectorizer)\n Xtrainthis = Xthis[mask]\n ytrainthis = ythis[mask]\n # your turn\n clf = MultinomialNB(alpha)\n clf.fit(Xtrainthis, ytrainthis)\n score = cv_score(clf, Xtrainthis, ytrainthis, log_likelihood)\n if (best_alpha is None) or (score > best_score):\n print('cv_score for ', alpha, score ) \n best_score = score\n best_alpha = alpha\n \n\nprint(\"alpha: {}\".format(best_alpha))\n", "<div class=\"span5 alert alert-info\">\n<h3>Exercise Set V: Working with the Best Parameters</h3>\n\n<p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p>\n<b/> ANSWER: Yes, it is a better classifier since it improves the accuracy on test data from 72 (`alpha`= .1) to 74 percent (`alpha` = 1)\n</div>", "vectorizer = CountVectorizer(min_df=best_min_df)\nX, y = make_xy(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. 
Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: {:2f}\".format(training_accuracy))\nprint(\"Accuracy on test data: {:2f}\".format(test_accuracy))\n\nfrom sklearn.metrics import confusion_matrix\nprint(confusion_matrix(ytest, clf.predict(xtest)))\nprint(xtest.shape)", "Interpretation\nWhat are the strongly predictive features?\nWe use a neat trick to identify strongly predictive features (i.e. words). \n\nfirst, create a data set such that each row has exactly one feature. This is represented by the identity matrix.\nuse the trained classifier to make predictions on this matrix\nsort the rows by predicted probabilities, and pick the top and bottom $K$ rows", "words = np.array(vectorizer.get_feature_names())\n\nx = np.matrix(np.identity(xtest.shape[1]), copy=False)\nprobs = clf.predict_log_proba(x)[:, 0]\nind = np.argsort(probs)\n\ngood_words = words[ind[:10]]\nbad_words = words[ind[-10:]]\n\ngood_prob = probs[ind[:10]]\nbad_prob = probs[ind[-10:]]\n\nprint(\"Good words\\t P(fresh | word)\")\nfor w, p in list(zip(good_words, good_prob)):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))\n \nprint(\"Bad words\\t P(fresh | word)\")\nfor w, p in list(zip(bad_words, bad_prob)):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))", "<br/> <b>good words P(fresh | word) </b>\n <br/> touching 0.96\n <br/> delight 0.95\n <br/> delightful 0.95\n <br/> brilliantly 0.94\n <br/> energetic 0.94\n <br/> superb 0.94\n <br/> ensemble 0.93\n <br/> childhood 0.93\n <br/> engrossing 0.93\n <br/> absorbing 0.93\n <br/> <b> Bad words P(fresh | word) </b>\n <br/> sorry 0.13\n <br/> plodding 0.13\n <br/> dull 0.11\n <br/> bland 0.11\n <br/> disappointing 0.10\n <br/> forced 0.10\n <br/> uninspired 0.08\n <br/> pointless 0.07\n <br/> unfortunately 0.07\n <br/> stupid 0.06\n<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VI</h3>\n\n<p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent</p>\n\n</div>\n\nThe above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\\left( \\chi^2 \\right)$ method.\nPrediction Errors\nWe can see mis-predictions as well.", "x, y = make_xy(critics, vectorizer)\n\nprob = clf.predict_proba(x)[:, 0]\npredict = clf.predict(x)\n\nbad_rotten = np.argsort(prob[y == 0])[:5]\nbad_fresh = np.argsort(prob[y == 1])[-5:]\n\nprint(\"Mis-predicted Rotten quotes\")\nprint('---------------------------')\nfor row in bad_rotten:\n print(critics[y == 0].quote.iloc[row])\n print(\"\")\n\nprint(\"Mis-predicted Fresh quotes\")\nprint('--------------------------')\nfor row in bad_fresh:\n print(critics[y == 1].quote.iloc[row])\n print(\"\")", "<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VII: Predicting the Freshness for a New Review</h3>\n<br/>\n<div>\n<b>Exercise:</b>\n<ul>\n<li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'*\n<li> Is the result what you'd expect? Why (not)?\n<b/> The predicted result is \"Fresh\" which is not I expect. 
The word 'Not' is not taken into account thus the analysis mistakenly predicted it as \"Fresh\" based on the words remarkable, touching and superb which have a high probability of being a good review. The solution is to take the analysis into a bi-gram level which will take pair each words together and come up with an analysis based on consecutive pair of words. This will in effect see that the review is rotten since \"not remarkable\" will be taken as a negative review.\n</ul>\n</div>\n</div>", "#your turn\n# Predicting the Freshness for a New Review\ndocs_new = ['This movie is not remarkable, touching, or superb in any way']\nX_new = vectorizer.transform(docs_new)\nX_new = X_new.tocsc() \nstr = \"Fresh\" if clf.predict(X_new) == 1 else \"Rotten\"\nprint('\"', docs_new[0], '\"==> ', \"\", str)", "Aside: TF-IDF Weighting for Term Importance\nTF-IDF stands for \nTerm-Frequency X Inverse Document Frequency.\nIn the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word \"movie\" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus. There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in scikit-learn differs from that of most textbooks: \n$$\\mbox{TF-IDF}(t, d) = \\mbox{TF}(t, d)\\times \\mbox{IDF}(t) = n_{td} \\log{\\left( \\frac{\\vert D \\vert}{\\vert d : t \\in d \\vert} + 1 \\right)}$$\nwhere $n_{td}$ is the number of times term $t$ occurs in document $d$, $\\vert D \\vert$ is the number of documents, and $\\vert d : t \\in d \\vert$ is the number of documents that contain $t$", "# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction\n# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)\n \n", "<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VIII: Enrichment</h3>\n\n<p>\nThere are several additional things we could try. Try some of these as exercises:\n<ol>\n<li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because \"not good\" and \"so good\" mean very different things. 
On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse.\n<li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier.\n<li> Try adding supplemental features -- information about genre, director, cast, etc.\n<li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction.\n<li> Use TF-IDF weighting instead of word counts.\n</ol>\n</p>\n\n<b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.\n</div>\n\nBIGRAM USING NAIVE BAYES", "def print_top_words(model, feature_names, n_top_words):\n for topic_idx, topic in enumerate(model.components_):\n print(\"Topic #%d:\" % topic_idx)\n print(\" \".join([feature_names[i]\n for i in topic.argsort()[:-n_top_words - 1:-1]]))\n print()\n\n# Your turn\ndef make_xy_bigram(critics, bigram_vectorizer=None):\n #Your code here \n if bigram_vectorizer is None:\n bigram_vectorizer = CountVectorizer(ngram_range=(1, 2),token_pattern=r'\\b\\w+\\b', min_df=1)\n X = bigram_vectorizer.fit_transform(critics.quote)\n X = X.tocsc() # some versions of sklearn return COO format\n y = (critics.fresh == 'fresh').values.astype(np.int)\n return X, y\n\nvectorizer = CountVectorizer(ngram_range=(1, 2),\n token_pattern=r'\\b\\w+\\b', min_df=1, stop_words='english')\nX, y = make_xy_bigram(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: {:2f}\".format(training_accuracy))\nprint(\"Accuracy on test data: {:2f}\".format(test_accuracy))\n", "Using bigram from nltk package", "import itertools\nimport pandas as pd\nfrom nltk.collocations import BigramCollocationFinder \nfrom nltk.metrics import BigramAssocMeasures\n \ndef bigram_word_feats(words, score_fn=BigramAssocMeasures.chi_sq, n=200):\n bigram_finder = BigramCollocationFinder.from_words(words)\n bigrams = bigram_finder.nbest(score_fn, n)\n return dict([(ngram, True) for ngram in itertools.chain(words, bigrams)])\n\n\n\nimport collections\nimport nltk.classify.util, nltk.metrics\nfrom nltk import precision, recall\nfrom nltk.classify import NaiveBayesClassifier\nfrom nltk.corpus import movie_reviews\n\npos_review = critics[critics['fresh']=='fresh']\nneg_review = critics[critics['fresh']=='rotten']\n\nnegfeats = [(bigram_word_feats(row['quote'].split()),'neg') for index, row in neg_review.iterrows()]\nposfeats = [(bigram_word_feats(row['quote'].split()),'pos') for index, row in pos_review.iterrows()]\n \nnegcutoff = int(len(negfeats)*.7) \nposcutoff = int(len(posfeats)*.7) \n\ntrainfeats = negfeats[:negcutoff] + posfeats[:poscutoff] \ntestfeats = negfeats[negcutoff:] + posfeats[poscutoff:] \n \nclassifier = NaiveBayesClassifier.train(trainfeats) \nrefsets = collections.defaultdict(set) \ntestsets = collections.defaultdict(set) \n \nfor i, (feats, label) in enumerate(testfeats): \n refsets[label].add(i) \n observed = classifier.classify(feats) \n testsets[observed].add(i) \nclassifier.show_most_informative_features() \n", "Using RANDOM FOREST classifier instead of Naive Bayes", "from sklearn.model_selection import cross_val_score\nfrom 
sklearn.ensemble import RandomForestClassifier\n\nclf = RandomForestClassifier(n_estimators=10, max_depth=None,\n min_samples_split=2, random_state=0)\nscores = cross_val_score(clf, X, y)\nscores.mean() ", "Try adding supplemental features -- information about genre, director, cast, etc.", "# Create a random forest classifier. By convention, clf means 'classifier'\n#clf = RandomForestClassifier(n_jobs=2)\n\n# Train the classifier to take the training features and learn how they relate\n# to the training y (the species)\n#clf.fit(train[features], y)\n\ncritics.head()", "Use word2vec or Latent Dirichlet Allocation to group words into topics and use those topics for prediction.", "from sklearn.decomposition import NMF, LatentDirichletAllocation\n\nvectorizer = CountVectorizer(min_df=best_min_df)\nX, y = make_xy(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nlda = LatentDirichletAllocation(n_topics=10, max_iter=5,\n learning_method='online',\n learning_offset=50.,\n random_state=0)\nlda.fit(X)\n\nprint(\"\\nTopics in LDA model:\")\nfeature_names = vectorizer.get_feature_names()\nprint_top_words(lda, feature_names, n_top_words=20)\n\n", "Use TF-IDF weighting instead of word counts.", "# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction\n# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)\nX = Xtfidf.tocsc() # some versions of sklearn return COO format\ny = (critics.fresh == 'fresh').values.astype(np.int)\n\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: {:2f}\".format(training_accuracy))\nprint(\"Accuracy on test data: {:2f}\".format(test_accuracy)) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lepmik/nest-simulator
doc/model_details/aeif_models_implementation.ipynb
gpl-2.0
[ "NEST implementation of the aeif models\nHans Ekkehard Plesser and Tanguy Fardet, 2016-09-09\nThis notebook provides a reference solution for the Adaptive Exponential Integrate and Fire\n(AEIF) neuronal model and compares it with several numerical implementation using simpler solvers.\nIn particular this justifies the change of implementation in September 2016 to make the simulation\ncloser to the reference solution.\nPosition of the problem\nBasics\nThe equations governing the evolution of the AEIF model are\n$$\\left\\lbrace\\begin{array}{rcl}\n C_m\\dot{V} &=& -g_L(V-E_L) + g_L \\Delta_T e^{\\frac{V-V_T}{\\Delta_T}} + I_e + I_s(t) -w\\\n \\tau_s\\dot{w} &=& a(V-E_L) - w\n\\end{array}\\right.$$\nwhen $V < V_{peak}$ (threshold/spike detection).\nOnce a spike occurs, we apply the reset conditions:\n$$V = V_r \\quad \\text{and} \\quad w = w + b$$\nDivergence\nIn the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing (threshpassing), the argument of the exponential can become very large.\nThis can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\\Delta_T$ is small.\nTested solutions\nOld implementation (before September 2016)\nThe orginal solution that was adopted was to bind the exponential argument to be smaller that 10 (ad hoc value to be close to the original implementation in BRIAN).\nAs will be shown in the notebook, this solution does not converge to the reference LSODAR solution.\nNew implementation\nThe new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.\nWe will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.\nReference solution\nThe reference solution is implemented using the LSODAR solver which is described and compared in the following references:\n\nhttp://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)\nhttp://www.sciencedirect.com/science/article/pii/S0377042712000684\nhttp://www.radford.edu/~thompson/RP/rootfinding.pdf\nhttps://computation.llnl.gov/casc/nsde/pubs/u88007.pdf\nhttp://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf\nhttp://www.sciencedirect.com/science/article/pii/0377042789903348\nhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf\nhttps://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf\n\nTechnical details and requirements\nImplementation of the functions\n\nThe old and new implementations are reproduced using Scipy and are called by the scipy_aeif function\nThe NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.\nThe reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.\n\nRequirements\nTo run this notebook, you need:\n\nnumpy and scipy\nassimulo\nmatplotlib", "import numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15, 6)", "Scipy functions mimicking the NEST code\nRight hand side functions", "def rhs_aeif_new(y, _, p):\n '''\n New implementation bounding V < V_peak\n \n Parameters\n ----------\n y : list\n Vector containing the 
state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = min(y[0], p.Vpeak)\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw\n\n\ndef rhs_aeif_old(y, _, p):\n '''\n Old implementation bounding the argument of the\n exponential function (e_arg < 10.).\n \n Parameters\n ----------\n y : list\n Vector containing the state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = y[0]\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n e_arg = min((v-p.vT)/p.DeltaT, 10.)\n Ispike = p.gL * p.DeltaT * np.exp(e_arg)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw", "Complete model", "def scipy_aeif(p, f, simtime, dt):\n '''\n Complete aeif model using scipy `odeint` solver.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n fos : list\n List of dictionaries containing additional output\n information from `odeint`\n '''\n t = np.arange(0, simtime, dt) # time axis\n n = len(t) \n y = np.zeros((n, 2)) # V, w\n y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)\n y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)\n s = [] # spike times \n vs = [] # membrane potential at spike before reset\n ws = [] # w at spike before step\n fos = [] # full output dict from odeint()\n \n # imitate NEST: update time-step by time-step\n for k in range(1, n):\n \n # solve ODE from t_k-1 to t_k\n d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)\n y[k, :] = d[1, :]\n fos.append(fo)\n \n # check for threshold crossing\n if y[k, 0] >= p.Vpeak:\n s.append(t[k])\n vs.append(y[k, 0])\n ws.append(y[k, 1])\n \n y[k, 0] = p.Vreset # reset\n y[k, 1] += p.b # step\n \n return t, y, s, vs, ws, fos", "LSODAR reference solution\nSetting assimulo class", "from assimulo.solvers import LSODAR\nfrom assimulo.problem import Explicit_Problem\n\nclass Extended_Problem(Explicit_Problem):\n\n # need variables here for access\n sw0 = [ False ]\n ts_spikes = []\n ws_spikes = []\n Vs_spikes = []\n \n def __init__(self, p):\n self.p = p\n self.y0 = [self.p.EL, 5.] 
# V, w\n # reset variables\n self.ts_spikes = []\n self.ws_spikes = []\n self.Vs_spikes = []\n\n #The right-hand-side function (rhs)\n\n def rhs(self, t, y, sw):\n \"\"\"\n This is the function we are trying to simulate (aeif model).\n \"\"\"\n V, w = y[0], y[1]\n Ispike = 0.\n \n if self.p.DeltaT != 0.:\n Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)\n dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm\n dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w\n return np.array([dotV, dotW])\n\n # Sets a name to our function\n name = 'AEIF_nosyn'\n\n # The event function\n def state_events(self, t, y, sw):\n \"\"\"\n This is our function that keeps track of our events. When the sign\n of any of the events has changed, we have an event.\n \"\"\"\n event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike\n if event_0 < 0:\n if not self.ts_spikes:\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n return np.array([event_0])\n\n #Responsible for handling the events.\n def handle_event(self, solver, event_info):\n \"\"\"\n Event handling. This functions is called when Assimulo finds an event as\n specified by the event functions.\n \"\"\"\n ev = event_info\n event_info = event_info[0] # only look at the state events information.\n if event_info[0] > 0:\n solver.sw[0] = True\n solver.y[0] = self.p.Vreset\n solver.y[1] += self.p.b\n else:\n solver.sw[0] = False\n\n def initialize(self, solver):\n solver.h_sol=[]\n solver.nq_sol=[]\n\n def handle_result(self, solver, t, y):\n Explicit_Problem.handle_result(self, solver, t, y)\n # Extra output for algorithm analysis\n if solver.report_continuously:\n h, nq = solver.get_algorithm_data()\n solver.h_sol.extend([h])\n solver.nq_sol.extend([nq])", "LSODAR reference model", "def reference_aeif(p, simtime):\n '''\n Reference aeif model using LSODAR.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n h : list\n List of the minimal time increment at each step.\n '''\n #Create an instance of the problem\n exp_mod = Extended_Problem(p) #Create the problem\n exp_sim = LSODAR(exp_mod) #Create the solver\n\n exp_sim.atol=1.e-8\n exp_sim.report_continuously = True\n exp_sim.store_event_points = True\n\n exp_sim.verbosity = 30\n\n #Simulate\n t, y = exp_sim.simulate(simtime) #Simulate 10 seconds\n \n return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol", "Set the parameters and simulate the models\nParams (chose a dictionary)", "# Regular spiking\naeif_param = {\n 'V_reset': -58.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 420.,\n 'g_L': 11.,\n 'tau_w': 300.,\n 'E_L': -70.,\n 'Delta_T': 2.,\n 'a': 3.,\n 'b': 0.,\n 'C_m': 200.,\n 'V_m': -70., #! must be equal to E_L\n 'w': 5., #! 
must be equal to 5.\n 'tau_syn_ex': 0.2\n}\n\n# Bursting\naeif_param2 = {\n 'V_reset': -46.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 500.0,\n 'g_L': 10.,\n 'tau_w': 120.,\n 'E_L': -58.,\n 'Delta_T': 2.,\n 'a': 2.,\n 'b': 100.,\n 'C_m': 200.,\n 'V_m': -58., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\n# Close to chaos (use resol < 0.005 and simtime = 200)\naeif_param3 = {\n 'V_reset': -48.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 160.,\n 'g_L': 12.,\n 'tau_w': 130.,\n 'E_L': -60.,\n 'Delta_T': 2.,\n 'a': -11.,\n 'b': 30.,\n 'C_m': 100.,\n 'V_m': -60., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\nclass Params(object):\n '''\n Class giving access to the neuronal\n parameters.\n '''\n def __init__(self):\n self.params = aeif_param\n self.Vpeak = aeif_param[\"V_peak\"]\n self.Vreset = aeif_param[\"V_reset\"]\n self.gL = aeif_param[\"g_L\"]\n self.Cm = aeif_param[\"C_m\"]\n self.EL = aeif_param[\"E_L\"]\n self.DeltaT = aeif_param[\"Delta_T\"]\n self.tau_w = aeif_param[\"tau_w\"]\n self.a = aeif_param[\"a\"]\n self.b = aeif_param[\"b\"]\n self.vT = aeif_param[\"V_th\"]\n self.Ie = aeif_param[\"I_e\"]\n \np = Params()", "Simulate the 3 implementations", "# Parameters of the simulation\nsimtime = 100.\nresol = 0.01\n\nt_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resol)\nt_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resol)\nt_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)", "Plot the results\nZoom out", "fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"m\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"y\", label=\"w new\")\n\n# Show\nax.set_xlim([0., simtime])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([-20., 20.])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=6)\nax2.legend(loc=2)\nplt.show()", "Zoom in", "fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"y\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"m\", label=\"w new\")\n\nax.set_xlim([90., 92.])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([17.5, 18.5])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=5)\nax2.legend(loc=2)\nplt.show()", "Compare properties at spike times", "print(\"spike times:\\n-----------\")\nprint(\"ref\", np.around(s_ref, 3)) # ref lsodar\nprint(\"old\", np.around(s_old, 3))\nprint(\"new\", np.around(s_new, 3))\n\nprint(\"\\nV at spike time:\\n---------------\")\nprint(\"ref\", np.around(vs_ref, 3)) # ref lsodar\nprint(\"old\", np.around(vs_old, 3))\nprint(\"new\", np.around(vs_new, 3))\n\nprint(\"\\nw at spike time:\\n---------------\")\nprint(\"ref\", np.around(ws_ref, 3)) # ref lsodar\nprint(\"old\", np.around(ws_old, 
3))\nprint(\"new\", np.around(ws_new, 3))", "Size of minimal integration timestep", "plt.semilogy(t_ref, h_ref, label='Reference')\nplt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')\nplt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')\n\nplt.legend(loc=6)\nplt.show();", "Convergence towards LSODAR reference with step size\nZoom out", "plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nresolutions = (0.1, 0.01, 0.001)\ndi_res = {}\n\nfor resol in resolutions:\n t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resol)\n t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resol)\n di_res[resol] = (t_old, y_old, t_new, y_new)\n plt.plot(t_old, y_old[:,0], linestyle=\":\", label=\"V old, r={}\".format(resol))\n plt.plot(t_new, y_new[:,0], linestyle=\"--\", linewidth=1.5, label=\"V new, r={}\".format(resol))\nplt.xlim(0., simtime)\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();", "Zoom in", "plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nfor resol in resolutions:\n t_old, y_old = di_res[resol][:2]\n t_new, y_new = di_res[resol][2:]\n plt.plot(t_old, y_old[:,0], linestyle=\"--\", label=\"V old, r={}\".format(resol))\n plt.plot(t_new, y_new[:,0], linestyle=\"-.\", linewidth=2., label=\"V new, r={}\".format(resol))\nplt.xlim(90., 92.)\nplt.ylim([-62., 2.])\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.16/_downloads/plot_sensor_connectivity.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute all-to-all connectivity in sensor space\nComputes the Phase Lag Index (PLI) between all gradiometers and shows the\nconnectivity in 3D using the helmet geometry. The left visual stimulation data\nare used which produces strong connectvitiy in the right occipital sensors.", "# Author: Martin Luessi <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nfrom scipy import linalg\n\nimport mne\nfrom mne import io\nfrom mne.connectivity import spectral_connectivity\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\n\n# Pick MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Create epochs for the visual condition\nevent_id, tmin, tmax = 3, -0.2, 1.5 # need a long enough epoch for 5 cycles\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))\n\n# Compute connectivity for band containing the evoked response.\n# We exclude the baseline period\nfmin, fmax = 3., 9.\nsfreq = raw.info['sfreq'] # the sampling frequency\ntmin = 0.0 # exclude the baseline period\ncon, freqs, times, n_epochs, n_tapers = spectral_connectivity(\n epochs, method='pli', mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax,\n faverage=True, tmin=tmin, mt_adaptive=False, n_jobs=1)\n\n# the epochs contain an EOG channel, which we remove now\nch_names = epochs.ch_names\nidx = [ch_names.index(name) for name in ch_names if name.startswith('MEG')]\ncon = con[idx][:, idx]\n\n# con is a 3D array where the last dimension is size one since we averaged\n# over frequencies in a single band. 
Here we make it 2D\ncon = con[:, :, 0]\n\n# Now, visualize the connectivity in 3D\nfrom mayavi import mlab # noqa\n\nmlab.figure(size=(600, 600), bgcolor=(0.5, 0.5, 0.5))\n\n# Plot the sensor locations\nsens_loc = [raw.info['chs'][picks[i]]['loc'][:3] for i in idx]\nsens_loc = np.array(sens_loc)\n\npts = mlab.points3d(sens_loc[:, 0], sens_loc[:, 1], sens_loc[:, 2],\n color=(1, 1, 1), opacity=1, scale_factor=0.005)\n\n# Get the strongest connections\nn_con = 20 # show up to 20 connections\nmin_dist = 0.05 # exclude sensors that are less than 5cm apart\nthreshold = np.sort(con, axis=None)[-n_con]\nii, jj = np.where(con >= threshold)\n\n# Remove close connections\ncon_nodes = list()\ncon_val = list()\nfor i, j in zip(ii, jj):\n if linalg.norm(sens_loc[i] - sens_loc[j]) > min_dist:\n con_nodes.append((i, j))\n con_val.append(con[i, j])\n\ncon_val = np.array(con_val)\n\n# Show the connections as tubes between sensors\nvmax = np.max(con_val)\nvmin = np.min(con_val)\nfor val, nodes in zip(con_val, con_nodes):\n x1, y1, z1 = sens_loc[nodes[0]]\n x2, y2, z2 = sens_loc[nodes[1]]\n points = mlab.plot3d([x1, x2], [y1, y2], [z1, z2], [val, val],\n vmin=vmin, vmax=vmax, tube_radius=0.001,\n colormap='RdBu')\n points.module_manager.scalar_lut_manager.reverse_lut = True\n\n\nmlab.scalarbar(title='Phase Lag Index (PLI)', nb_labels=4)\n\n# Add the sensor names for the connections shown\nnodes_shown = list(set([n[0] for n in con_nodes] +\n [n[1] for n in con_nodes]))\n\nfor node in nodes_shown:\n x, y, z = sens_loc[node]\n mlab.text3d(x, y, z, raw.ch_names[picks[node]], scale=0.005,\n color=(0, 0, 0))\n\nview = (-88.7, 40.8, 0.76, np.array([-3.9e-4, -8.5e-3, -1e-2]))\nmlab.view(*view)" ]
[ "code", "markdown", "code", "markdown", "code" ]
deepmind/deepmind-research
enformer/enformer-usage.ipynb
apache-2.0
[ "Copyright 2021 DeepMind Technologies Limited\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nThis colab showcases the usage of the Enformer model published in\n\"Effective gene expression prediction from sequence by integrating long-range interactions\"\nŽiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R. Ledsam, Agnieszka Grabska-Barwinska, Kyle R. Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, David R. Kelley\nNote: This colab will not yet work since the model isn't yet publicly available. We are working on enabling this and will update the colab accordingly.\nSteps\nThis colab demonstrates how to\n- Make predictions with Enformer and reproduce Fig. 1d\n- Compute contribution scores and reproduce parts of Fig. 2a\n- Predict the effect of a genetic variant and reproduce parts of Fig. 3g\n- Score multiple variants in a VCF \nSetup\nStart the colab kernel with GPU: Runtime -> Change runtime type -> GPU", "import tensorflow as tf\n# Make sure the GPU is enabled \nassert tf.config.list_physical_devices('GPU'), 'Start the colab kernel with GPU: Runtime -> Change runtime type -> GPU'\n\n!pip install kipoiseq==0.5.2 --quiet > /dev/null\n# You can ignore the pyYAML error", "Imports", "import tensorflow_hub as hub\nimport joblib\nimport gzip\nimport kipoiseq\nfrom kipoiseq import Interval\nimport pyfaidx\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\ntransform_path = 'gs://dm-enformer/models/enformer.finetuned.SAD.robustscaler-PCA500-robustscaler.transform.pkl'\nmodel_path = 'https://tfhub.dev/deepmind/enformer/1'\nfasta_file = '/root/data/genome.fa'\nclinvar_vcf = '/root/data/clinvar.vcf.gz'\n\n# Download targets from Basenji2 dataset \n# Cite: Kelley et al Cross-species regulatory sequence activity prediction. PLoS Comput. Biol. 16, e1008050 (2020).\ntargets_txt = 'https://raw.githubusercontent.com/calico/basenji/master/manuscripts/cross2020/targets_human.txt'\ndf_targets = pd.read_csv(targets_txt, sep='\\t')\ndf_targets.head(3)", "Download files\nDownload and index the reference genome fasta file\nCredit to Genome Reference Consortium: https://www.ncbi.nlm.nih.gov/grc\nSchneider et al 2017 http://dx.doi.org/10.1101/gr.213611.116: Evaluation of GRCh38 and de novo haploid genome assemblies demonstrates the enduring quality of the reference assembly", "!mkdir -p /root/data\n!wget -O - http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz | gunzip -c > {fasta_file}\npyfaidx.Faidx(fasta_file)\n!ls /root/data", "Download the clinvar file. Reference:\nLandrum MJ, Lee JM, Benson M, Brown GR, Chao C, Chitipiralla S, Gu B, Hart J, Hoffman D, Jang W, Karapetyan K, Katz K, Liu C, Maddipatla Z, Malheiro A, McDaniel K, Ovetsky M, Riley G, Zhou G, Holmes JB, Kattman BL, Maglott DR. ClinVar: improving access to variant interpretations and supporting evidence. Nucleic Acids Res . 2018 Jan 4. 
PubMed PMID: 29165669 .", "!wget https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/clinvar.vcf.gz -O /root/data/clinvar.vcf.gz", "Code (double click on the title to show the code)", "# @title `Enformer`, `EnformerScoreVariantsNormalized`, `EnformerScoreVariantsPCANormalized`,\nSEQUENCE_LENGTH = 393216\n\nclass Enformer:\n\n def __init__(self, tfhub_url):\n self._model = hub.load(tfhub_url).model\n\n def predict_on_batch(self, inputs):\n predictions = self._model.predict_on_batch(inputs)\n return {k: v.numpy() for k, v in predictions.items()}\n\n @tf.function\n def contribution_input_grad(self, input_sequence,\n target_mask, output_head='human'):\n input_sequence = input_sequence[tf.newaxis]\n\n target_mask_mass = tf.reduce_sum(target_mask)\n with tf.GradientTape() as tape:\n tape.watch(input_sequence)\n prediction = tf.reduce_sum(\n target_mask[tf.newaxis] *\n self._model.predict_on_batch(input_sequence)[output_head]) / target_mask_mass\n\n input_grad = tape.gradient(prediction, input_sequence) * input_sequence\n input_grad = tf.squeeze(input_grad, axis=0)\n return tf.reduce_sum(input_grad, axis=-1)\n\n\nclass EnformerScoreVariantsRaw:\n\n def __init__(self, tfhub_url, organism='human'):\n self._model = Enformer(tfhub_url)\n self._organism = organism\n \n def predict_on_batch(self, inputs):\n ref_prediction = self._model.predict_on_batch(inputs['ref'])[self._organism]\n alt_prediction = self._model.predict_on_batch(inputs['alt'])[self._organism]\n\n return alt_prediction.mean(axis=1) - ref_prediction.mean(axis=1)\n\n\nclass EnformerScoreVariantsNormalized:\n\n def __init__(self, tfhub_url, transform_pkl_path,\n organism='human'):\n assert organism == 'human', 'Transforms only compatible with organism=human'\n self._model = EnformerScoreVariantsRaw(tfhub_url, organism)\n with tf.io.gfile.GFile(transform_pkl_path, 'rb') as f:\n transform_pipeline = joblib.load(f)\n self._transform = transform_pipeline.steps[0][1] # StandardScaler.\n \n def predict_on_batch(self, inputs):\n scores = self._model.predict_on_batch(inputs)\n return self._transform.transform(scores)\n\n\nclass EnformerScoreVariantsPCANormalized:\n\n def __init__(self, tfhub_url, transform_pkl_path,\n organism='human', num_top_features=500):\n self._model = EnformerScoreVariantsRaw(tfhub_url, organism)\n with tf.io.gfile.GFile(transform_pkl_path, 'rb') as f:\n self._transform = joblib.load(f)\n self._num_top_features = num_top_features\n \n def predict_on_batch(self, inputs):\n scores = self._model.predict_on_batch(inputs)\n return self._transform.transform(scores)[:, :self._num_top_features]\n\n\n# TODO(avsec): Add feature description: Either PCX, or full names.\n\n# @title `variant_centered_sequences`\n\nclass FastaStringExtractor:\n \n def __init__(self, fasta_file):\n self.fasta = pyfaidx.Fasta(fasta_file)\n self._chromosome_sizes = {k: len(v) for k, v in self.fasta.items()}\n\n def extract(self, interval: Interval, **kwargs) -> str:\n # Truncate interval if it extends beyond the chromosome lengths.\n chromosome_length = self._chromosome_sizes[interval.chrom]\n trimmed_interval = Interval(interval.chrom,\n max(interval.start, 0),\n min(interval.end, chromosome_length),\n )\n # pyfaidx wants a 1-based interval\n sequence = str(self.fasta.get_seq(trimmed_interval.chrom,\n trimmed_interval.start + 1,\n trimmed_interval.stop).seq).upper()\n # Fill truncated values with N's.\n pad_upstream = 'N' * max(-interval.start, 0)\n pad_downstream = 'N' * max(interval.end - chromosome_length, 0)\n return pad_upstream + sequence + 
pad_downstream\n\n def close(self):\n return self.fasta.close()\n\n\ndef variant_generator(vcf_file, gzipped=False):\n \"\"\"Yields a kipoiseq.dataclasses.Variant for each row in VCF file.\"\"\"\n def _open(file):\n return gzip.open(vcf_file, 'rt') if gzipped else open(vcf_file)\n \n with _open(vcf_file) as f:\n for line in f:\n if line.startswith('#'):\n continue\n chrom, pos, id, ref, alt_list = line.split('\\t')[:5]\n # Split ALT alleles and return individual variants as output.\n for alt in alt_list.split(','):\n yield kipoiseq.dataclasses.Variant(chrom=chrom, pos=pos,\n ref=ref, alt=alt, id=id)\n\n\ndef one_hot_encode(sequence):\n return kipoiseq.transforms.functional.one_hot_dna(sequence).astype(np.float32)\n\n\ndef variant_centered_sequences(vcf_file, sequence_length, gzipped=False,\n chr_prefix=''):\n seq_extractor = kipoiseq.extractors.VariantSeqExtractor(\n reference_sequence=FastaStringExtractor(fasta_file))\n\n for variant in variant_generator(vcf_file, gzipped=gzipped):\n interval = Interval(chr_prefix + variant.chrom,\n variant.pos, variant.pos)\n interval = interval.resize(sequence_length)\n center = interval.center() - interval.start\n\n reference = seq_extractor.extract(interval, [], anchor=center)\n alternate = seq_extractor.extract(interval, [variant], anchor=center)\n\n yield {'inputs': {'ref': one_hot_encode(reference),\n 'alt': one_hot_encode(alternate)},\n 'metadata': {'chrom': chr_prefix + variant.chrom,\n 'pos': variant.pos,\n 'id': variant.id,\n 'ref': variant.ref,\n 'alt': variant.alt}}\n\n# @title `plot_tracks`\n\ndef plot_tracks(tracks, interval, height=1.5):\n fig, axes = plt.subplots(len(tracks), 1, figsize=(20, height * len(tracks)), sharex=True)\n for ax, (title, y) in zip(axes, tracks.items()):\n ax.fill_between(np.linspace(interval.start, interval.end, num=len(y)), y)\n ax.set_title(title)\n sns.despine(top=True, right=True, bottom=True)\n ax.set_xlabel(str(interval))\n plt.tight_layout()", "Make predictions for a genetic sequenece", "model = Enformer(model_path)\n\nfasta_extractor = FastaStringExtractor(fasta_file)\n\n# @title Make predictions for an genomic example interval\ntarget_interval = kipoiseq.Interval('chr11', 35_082_742, 35_197_430) # @param\n\nsequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH)))\npredictions = model.predict_on_batch(sequence_one_hot[np.newaxis])['human'][0]\n\n# @title Plot tracks\ntracks = {'DNASE:CD14-positive monocyte female': predictions[:, 41],\n 'DNASE:keratinocyte female': predictions[:, 42],\n 'CHIP:H3K27ac:keratinocyte female': predictions[:, 706],\n 'CAGE:Keratinocyte - epidermal': np.log10(1 + predictions[:, 4799])}\nplot_tracks(tracks, target_interval)", "Contribution scores example", "# @title Compute contribution scores\ntarget_interval = kipoiseq.Interval('chr12', 54_223_589, 54_338_277) # @param\n\nsequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH)))\npredictions = model.predict_on_batch(sequence_one_hot[np.newaxis])['human'][0]\n\ntarget_mask = np.zeros_like(predictions)\nfor idx in [447, 448, 449]:\n target_mask[idx, 4828] = 1\n target_mask[idx, 5111] = 1\n# This will take some time since tf.function needs to get compiled.\ncontribution_scores = model.contribution_input_grad(sequence_one_hot.astype(np.float32), target_mask).numpy()\npooled_contribution_scores = tf.nn.avg_pool1d(np.abs(contribution_scores)[np.newaxis, :, np.newaxis], 128, 128, 'VALID')[0, :, 0].numpy()[1088:-1088]\n\ntracks = {'CAGE predictions': 
predictions[:, 4828],\n 'Enformer gradient*input': np.minimum(pooled_contribution_scores, 0.03)}\nplot_tracks(tracks, target_interval);", "Variant scoring example", "# @title Score the variant\nvariant = kipoiseq.Variant('chr16', 57025062, 'C', 'T', id='rs11644125') # @param\n\n# Center the interval at the variant\ninterval = kipoiseq.Interval(variant.chrom, variant.start, variant.start).resize(SEQUENCE_LENGTH)\nseq_extractor = kipoiseq.extractors.VariantSeqExtractor(reference_sequence=fasta_extractor)\ncenter = interval.center() - interval.start\n\nreference = seq_extractor.extract(interval, [], anchor=center)\nalternate = seq_extractor.extract(interval, [variant], anchor=center)\n\n# Make predictions for the refernece and alternate allele\nreference_prediction = model.predict_on_batch(one_hot_encode(reference)[np.newaxis])['human'][0]\nalternate_prediction = model.predict_on_batch(one_hot_encode(alternate)[np.newaxis])['human'][0]\n\n# @title Visualize some tracks\nvariant_track = np.zeros_like(reference_prediction[:, 0], dtype=bool)\nvariant_track[variant_track.shape[0] // 2] = True\ntracks = {'variant': variant_track,\n 'CAGE/neutrofils ref': reference_prediction[:, 4767],\n 'CAGE/neutrofils alt-ref': alternate_prediction[:, 4767] - reference_prediction[:, 4767],\n 'CHIP:H3K27ac:neutrophil ref': reference_prediction[:, 2280],\n 'CHIP:H3K27ac:neutrophil alt-ref': alternate_prediction[:, 2280] - reference_prediction[:, 2280],\n }\n\nplot_tracks(tracks, interval.resize(reference_prediction.shape[0] * 128), height=1)", "Score variants in a VCF file\nReport top 20 PCs", "enformer_score_variants = EnformerScoreVariantsPCANormalized(model_path, transform_path, num_top_features=20)\n\n# Score the first 5 variants from ClinVar\n# Lower-dimensional scores (20 PCs)\nit = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH,\n gzipped=True, chr_prefix='chr')\nexample_list = []\nfor i, example in enumerate(it):\n if i >= 5:\n break\n variant_scores = enformer_score_variants.predict_on_batch(\n {k: v[tf.newaxis] for k,v in example['inputs'].items()})[0]\n variant_scores = {f'PC{i}': score for i, score in enumerate(variant_scores)}\n example_list.append({**example['metadata'],\n **variant_scores})\n if i % 2 == 0:\n print(f'Done {i}')\ndf = pd.DataFrame(example_list)\ndf", "Report all 5,313 features (z-score normalized)", "enformer_score_variants_all = EnformerScoreVariantsNormalized(model_path, transform_path)\n\n# Score the first 5 variants from ClinVar\n# All Scores\nit = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH,\n gzipped=True, chr_prefix='chr')\nexample_list = []\nfor i, example in enumerate(it):\n if i >= 5:\n break\n variant_scores = enformer_score_variants_all.predict_on_batch(\n {k: v[tf.newaxis] for k,v in example['inputs'].items()})[0]\n variant_scores = {f'{i}_{name[:20]}': score for i, (name, score) in enumerate(zip(df_targets.description, variant_scores))}\n example_list.append({**example['metadata'],\n **variant_scores})\n if i % 2 == 0:\n print(f'Done {i}')\ndf = pd.DataFrame(example_list)\ndf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
johntanz/ROP
.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
gpl-2.0
[ "Masimo Analysis\nFor Pulse Ox. Analysis, make sure the data file is the right .csv format:\na) Headings on Row 1\nb) Open the csv file through Notepad or TextEdit and delete extra \nrow commas (non-printable characters)\nc) There are always Dates in Column A and Time in Column B. \nd) There might be a row that says \"Time Gap Present\". Delete this row from Notepad \nor TextEdit", "#the usual beginning\nimport pandas as pd\nimport numpy as np\nfrom pandas import Series, DataFrame\nfrom datetime import datetime, timedelta\nfrom pandas import concat\n\n#define any string with 'C' as NaN\ndef readD(val):\n if 'C' in val:\n return np.nan\n return val", "Import File into Python\nChange File Name!", "df = pd.read_csv('/Users/John/Dropbox/LLU/ROP/Pulse Ox/ROP018PO.csv',\n parse_dates={'timestamp': ['Date','Time']},\n index_col='timestamp',\n usecols=['Date', 'Time', 'SpO2', 'PR', 'PI', 'Exceptions'],\n na_values=['0'],\n converters={'Exceptions': readD}\n )\n\n#parse_dates tells the read_csv function to combine the date and time column \n#into one timestamp column and parse it as a timestamp.\n# pandas is smart enough to know how to parse a date in various formats\n\n#index_col sets the timestamp column to be the index.\n\n#usecols tells the read_csv function to select only the subset of the columns.\n#na_values is used to turn 0 into NaN\n\n#converters: readD is the dict that means any string with 'C' with be NaN (for PI)\n\n#dfclean = df[27:33][df[27:33].loc[:, ['SpO2', 'PR', 'PI', 'Exceptions']].apply(pd.notnull).all(1)]\n#clean the dataframe to get rid of rows that have NaN for PI purposes\ndf_clean = df[df.loc[:, ['PI', 'Exceptions']].apply(pd.notnull).all(1)]\n\n\"\"\"Pulse ox date/time is 1 mins and 32 seconds faster than phone. Have to correct for it.\"\"\"\n\nTC = timedelta(minutes=1, seconds=32)", "Set Date and Time of ROP Exam and Eye Drops", "df_first = df.first_valid_index() #get the first number from index\n\nY = pd.to_datetime(df_first) #convert index to datetime\n# Y = TIME DATA COLLECTION BEGAN / First data point on CSV\n\n# SYNTAX: \n# datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])\n\nW = datetime(2016, 1, 20, 7, 30)+TC\n# W = first eye drop dtarts\nX = datetime(2016, 1, 20, 8, 42)+TC\n# X = ROP Exam Started\nZ = datetime(2016, 1, 20, 8, 46)+TC\n# Z = ROP Exam Ended\n\ndf_last = df.last_valid_index() #get the last number from index\n\nQ = pd.to_datetime(df_last) \n\n# Q = TIME DATA COLLECTION ENDED / Last Data point on CSV", "Baseline Averages", "avg0PI = df_clean.PI[Y:W].mean()\navg0O2 = df.SpO2[Y:W].mean()\navg0PR = df.PR[Y:W].mean()\n\nprint 'Baseline Averages\\n', 'PI :\\t',avg0PI, '\\nSpO2 :\\t',avg0O2,'\\nPR :\\t',avg0PR,\n#df.std() for standard deviation", "Average q 5 Min for 1 hour after 1st Eye Drops", "# Every 5 min Average from start of eye drops to start of exam\n\ndef perdeltadrop(start, end, delta):\n rdrop = []\n curr = start\n while curr < end:\n rdrop.append(curr)\n curr += delta\n return rdrop\n \ndfdropPI = df_clean.PI[W:W+timedelta(hours=1)]\ndfdropO2 = df.SpO2[W:W+timedelta(hours=1)]\ndfdropPR = df.PR[W:W+timedelta(hours=1)]\nwindrop = timedelta(minutes=5)#make the range\nrdrop = perdeltadrop(W, W+timedelta(minutes=15), windrop)\n\navgdropPI = Series(index = rdrop, name = 'PI DurEyeD')\navgdropO2 = Series(index = rdrop, name = 'SpO2 DurEyeD')\navgdropPR = Series(index = rdrop, name = 'PR DurEyeD')\n\nfor i in rdrop:\n avgdropPI[i] = dfdropPI[i:(i+windrop)].mean()\n avgdropO2[i] = dfdropO2[i:(i+windrop)].mean()\n 
avgdropPR[i] = dfdropPR[i:(i+windrop)].mean()\n \nresultdrops = concat([avgdropPI, avgdropO2, avgdropPR], axis=1, join='inner')\nprint resultdrops\n", "Average Every 10 Sec During ROP Exam for first 4 minutes", "#AVERAGE DURING ROP EXAM FOR FIRST FOUR MINUTES\ndef perdelta1(start, end, delta):\n r1 = []\n curr = start\n while curr < end:\n r1.append(curr)\n curr += delta\n return r1\n\ndf1PI = df_clean.PI[X:X+timedelta(minutes=4)]\ndf1O2 = df.SpO2[X:X+timedelta(minutes=4)]\ndf1PR = df.PR[X:X+timedelta(minutes=4)]\nwin1 = timedelta(seconds=10) #any unit of time & make the range\n\nr1 = perdelta1(X, X+timedelta(minutes=4), win1)\n\n#make the series to store\navg1PI = Series(index = r1, name = 'PI DurEx')\navg1O2 = Series(index = r1, name = 'SpO2 DurEx')\navg1PR = Series(index = r1, name = 'PR DurEX')\n#average!\nfor i1 in r1:\n avg1PI[i1] = df1PI[i1:(i1+win1)].mean()\n avg1O2[i1] = df1O2[i1:(i1+win1)].mean()\n avg1PR[i1] = df1PR[i1:(i1+win1)].mean()\n\nresult1 = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner')\nprint result1\n", "Average Every 5 Mins Hour 1-2 After ROP Exam", "#AVERAGE EVERY 5 MINUTES ONE HOUR AFTER ROP EXAM\n\ndef perdelta2(start, end, delta):\n r2 = []\n curr = start\n while curr < end:\n r2.append(curr)\n curr += delta\n return r2\n\n# datetime(year, month, day, hour, etc.)\n\ndf2PI = df_clean.PI[Z:(Z+timedelta(hours=1))]\ndf2O2 = df.SpO2[Z:(Z+timedelta(hours=1))]\ndf2PR = df.PR[Z:(Z+timedelta(hours=1))]\nwin2 = timedelta(minutes=5) #any unit of time, make the range\n\nr2 = perdelta2(Z, (Z+timedelta(hours=1)), win2) #define the average using function\n\n#make the series to store\navg2PI = Series(index = r2, name = 'PI q5MinHr1')\navg2O2 = Series(index = r2, name = 'O2 q5MinHr1')\navg2PR = Series(index = r2, name = 'PR q5MinHr1')\n\n#average!\nfor i2 in r2:\n avg2PI[i2] = df2PI[i2:(i2+win2)].mean()\n avg2O2[i2] = df2O2[i2:(i2+win2)].mean()\n avg2PR[i2] = df2PR[i2:(i2+win2)].mean()\n\nresult2 = concat([avg2PI, avg2O2, avg2PR], axis=1, join='inner')\nprint result2", "Average Every 15 Mins Hour 2-3 After ROP Exam", "#AVERAGE EVERY 15 MINUTES TWO HOURS AFTER ROP EXAM\n\ndef perdelta3(start, end, delta):\n r3 = []\n curr = start\n while curr < end:\n r3.append(curr)\n curr += delta\n return r3\n\n# datetime(year, month, day, hour, etc.)\n\ndf3PI = df_clean.PI[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\ndf3O2 = df.SpO2[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\ndf3PR = df.PR[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\nwin3 = timedelta(minutes=15) #any unit of time, make the range\n\nr3 = perdelta3((Z+timedelta(hours=1)), (Z+timedelta(hours=2)), win3)\n\n#make the series to store\navg3PI = Series(index = r3, name = 'PI q15MinHr2')\navg3O2 = Series(index = r3, name = 'O2 q15MinHr2')\navg3PR = Series(index = r3, name = 'PR q15MinHr2')\n\n#average!\nfor i3 in r3:\n avg3PI[i3] = df3PI[i3:(i3+win3)].mean()\n avg3O2[i3] = df3O2[i3:(i3+win3)].mean()\n avg3PR[i3] = df3PR[i3:(i3+win3)].mean()\n \nresult3 = concat([avg3PI, avg3O2, avg3PR], axis=1, join='inner')\nprint result3\n", "Average Every 30 Mins Hour 3-4 After ROP Exam", "#AVERAGE EVERY 30 MINUTES THREE HOURS AFTER ROP EXAM\n\ndef perdelta4(start, end, delta):\n r4 = []\n curr = start\n while curr < end:\n r4.append(curr)\n curr += delta\n return r4\n\n# datetime(year, month, day, hour, etc.)\n\ndf4PI = df_clean.PI[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\ndf4O2 = df.SpO2[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\ndf4PR = df.PR[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\nwin4 = timedelta(minutes=30) 
#any unit of time, make the range\n\nr4 = perdelta4((Z+timedelta(hours=2)), (Z+timedelta(hours=3)), win4)\n\n#make the series to store\navg4PI = Series(index = r4, name = 'PI q30MinHr3')\navg4O2 = Series(index = r4, name = 'O2 q30MinHr3')\navg4PR = Series(index = r4, name = 'PR q30MinHr3')\n\n#average!\nfor i4 in r4:\n avg4PI[i4] = df4PI[i4:(i4+win4)].mean()\n avg4O2[i4] = df4O2[i4:(i4+win4)].mean()\n avg4PR[i4] = df4PR[i4:(i4+win4)].mean()\n \nresult4 = concat([avg4PI, avg4O2, avg4PR], axis=1, join='inner')\nprint result4\n", "Average Every Hour 4-24 Hours Post ROP Exam", "#AVERAGE EVERY 60 MINUTES 4-24 HOURS AFTER ROP EXAM\n\ndef perdelta5(start, end, delta):\n r5 = []\n curr = start\n while curr < end:\n r5.append(curr)\n curr += delta\n return r5\n\n# datetime(year, month, day, hour, etc.)\n\ndf5PI = df_clean.PI[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\ndf5O2 = df.SpO2[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\ndf5PR = df.PR[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\nwin5 = timedelta(minutes=60) #any unit of time, make the range\n\nr5 = perdelta5((Z+timedelta(hours=3)), (Z+timedelta(hours=24)), win5)\n\n#make the series to store\navg5PI = Series(index = r5, name = 'PI q60MinHr4+')\navg5O2 = Series(index = r5, name = 'O2 q60MinHr4+')\navg5PR = Series(index = r5, name = 'PR q60MinHr4+')\n\n#average!\nfor i5 in r5:\n avg5PI[i5] = df5PI[i5:(i5+win5)].mean()\n avg5O2[i5] = df5O2[i5:(i5+win5)].mean()\n avg5PR[i5] = df5PR[i5:(i5+win5)].mean()\n\nresult5 = concat([avg5PI, avg5O2, avg5PR], axis=1, join='inner')\nprint result5\n", "Mild, Moderate, and Severe Desaturation Events", "df_O2_pre = df[Y:W]\n\n\n#Find count of these ranges\nbelow = 0 # v <=80\nmiddle = 0 #v >= 81 and v<=84\nabove = 0 #v >=85 and v<=89\nls = []\n\nb_dict = {}\nm_dict = {}\na_dict = {}\n\nfor i, v in df_O2_pre['SpO2'].iteritems():\n \n if v <= 80: #below block\n \n if not ls: \n ls.append(v)\n else:\n if ls[0] >= 81: #if the range before was not below 80\n\n if len(ls) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2\n\n if ls[0] <= 84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n ls = [v]\n\n else: #old list wasn't long enough to count\n ls = [v]\n else: #if in the same range\n ls.append(v)\n \n elif v >= 81 and v<= 84: #middle block\n \n if not ls:\n ls.append(v)\n else:\n if ls[0] <= 80 or (ls[0]>=85 and ls[0]<= 89): #if not in the middle range\n if len(ls) >= 5: #if range was greater than 10 seconds\n\n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n ls = [v]\n else: #old list wasn't long enough to count\n ls = [v]\n\n else:\n ls.append(v)\n \n elif v >= 85 and v <=89: #above block\n \n if not ls:\n ls.append(v)\n else:\n if ls[0] <=84 : #if not in the above range\n\n if len(ls) >= 5: #if range was greater than \n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n else: #old list wasn't long enough to count\n ls = [v]\n else:\n ls.append(v)\n \n else: #v>90 or something else weird. 
start the list over\n ls = []\n#final list check\nif len(ls) >= 5:\n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n \nb_len = 0.0\nfor key, val in b_dict.iteritems():\n b_len += len(val)\n\nm_len = 0.0\nfor key, val in m_dict.iteritems():\n m_len += len(val)\n \na_len = 0.0\nfor key, val in a_dict.iteritems():\n a_len += len(val)\n \n\n \n\n #post exam duraiton length analysis\ndf_O2_post = df[Z:Q]\n\n\n#Find count of these ranges\nbelow2 = 0 # v <=80\nmiddle2= 0 #v >= 81 and v<=84\nabove2 = 0 #v >=85 and v<=89\nls2 = []\n\nb_dict2 = {}\nm_dict2 = {}\na_dict2 = {}\n\nfor i2, v2 in df_O2_post['SpO2'].iteritems():\n \n if v2 <= 80: #below block\n \n if not ls2: \n ls2.append(v2)\n else:\n if ls2[0] >= 81: #if the range before was not below 80\n\n if len(ls2) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2\n\n if ls2[0] <= 84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n ls2 = [v2]\n\n else: #old list wasn't long enough to count\n ls2 = [v2]\n else: #if in the same range\n ls2.append(v2)\n \n elif v2 >= 81 and v2<= 84: #middle block\n \n if not ls2:\n ls2.append(v2)\n else:\n if ls2[0] <= 80 or (ls2[0]>=85 and ls2[0]<= 89): #if not in the middle range\n if len(ls2) >= 5: #if range was greater than 10 seconds\n\n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n ls2 = [v2]\n else: #old list wasn't long enough to count\n ls2 = [v2]\n\n else:\n ls2.append(v2)\n \n elif v2 >= 85 and v2 <=89: #above block\n \n if not ls2:\n ls2.append(v2)\n else:\n if ls2[0] <=84 : #if not in the above range\n\n if len(ls2) >= 5: #if range was greater than \n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2 = [v2]\n elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n else: #old list wasn't long enough to count\n ls2 = [v2]\n else:\n ls2.append(v2)\n \n else: #v2>90 or something else weird. start the list over\n ls2 = []\n#final list check\nif len(ls2) >= 5:\n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2= [v2]\n elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n \nb_len2 = 0.0\nfor key, val2 in b_dict2.iteritems():\n b_len2 += len(val2)\n\nm_len2 = 0.0\nfor key, val2 in m_dict2.iteritems():\n m_len2 += len(val2)\n \na_len2 = 0.0\nfor key, val2 in a_dict2.iteritems():\n a_len2 += len(val2)\n\n#print results from count and min\n\nprint \"Desat Counts for X mins\\n\" \nprint \"Pre Mild Desat (85-89) Count: %s\\t\" %above, \"for %s min\" %((a_len*2)/60.)\nprint \"Pre Mod Desat (81-84) Count: %s\\t\" %middle, \"for %s min\" %((m_len*2)/60.) \nprint \"Pre Sev Desat (=< 80) Count: %s\\t\" %below, \"for %s min\\n\" %((b_len*2)/60.)\n\nprint \"Post Mild Desat (85-89) Count: %s\\t\" %above2, \"for %s min\" %((a_len2*2)/60.) 
\nprint \"Post Mod Desat (81-84) Count: %s\\t\" %middle2, \"for %s min\" %((m_len2*2)/60.) \nprint \"Post Sev Desat (=< 80) Count: %s\\t\" %below2, \"for %s min\\n\" %((b_len2*2)/60.) \n\n\n\nprint \"Data Recording Time!\"\nprint '*' * 10\nprint \"Pre-Exam Data Recording Length\\t\", X - Y # start of exam - first data point\nprint \"Post-Exam Data Recording Length\\t\", Q - Z #last data point - end of exam\nprint \"Total Data Recording Length\\t\", Q - Y #last data point - first data point\n\nPre = ['Pre',(X-Y)]\nPost = ['Post',(Q-Z)]\nTotal = ['Total',(Q-Y)]\nRTL = [Pre, Post, Total]\n\nPreMild = ['Pre Mild Desats \\t',(above), 'for', (a_len*2)/60., 'mins']\nPreMod = ['Pre Mod Desats \\t',(middle), 'for', (m_len*2)/60., 'mins']\nPreSev = ['Pre Sev Desats \\t',(below), 'for', (b_len*2)/60., 'mins']\nPreDesats = [PreMild, PreMod, PreSev]\n\nPostMild = ['Post Mild Desats \\t',(above2), 'for', (a_len2*2)/60., 'mins']\nPostMod = ['Post Mod Desats \\t',(middle2), 'for', (m_len2*2)/60., 'mins']\nPostSev = ['Post Sev Desats \\t',(below2), 'for', (b_len2*2)/60., 'mins']\nPostDesats = [PostMild, PostMod, PostSev]\n\n#creating a list for recording time length\n\n#did it count check sort correctly? get rid of the ''' if you want to check your values\n'''\nprint \"Mild check\"\nfor key, val in b_dict.iteritems():\n print all(i <=80 for i in val)\n\nprint \"Moderate check\"\nfor key, val in m_dict.iteritems():\n print all(i >= 81 and i<=84 for i in val)\n \nprint \"Severe check\"\nfor key, val in a_dict.iteritems():\n print all(i >= 85 and i<=89 for i in val)\n'''", "Export to CSV", "import csv\nclass excel_tab(csv.excel):\n delimiter = '\\t'\ncsv.register_dialect(\"excel_tab\", excel_tab)\n\nwith open('ROP018_PO.csv', 'w') as f: #CHANGE CSV FILE NAME, saves in same directory\n writer = csv.writer(f, dialect=excel_tab)\n #writer.writerow(['PI, O2, PR']) accidently found this out but using commas = gives me columns YAY! fix this\n #to make code look nice ok nice\n writer.writerow([avg0PI, ',PI Start'])\n for i in rdrop:\n writer.writerow([avgdropPI[i]]) #NEEDS BRACKETS TO MAKE IT SEQUENCE\n for i in r1:\n writer.writerow([avg1PI[i]])\n for i in r2:\n writer.writerow([avg2PI[i]])\n for i in r3:\n writer.writerow([avg3PI[i]])\n for i in r4:\n writer.writerow([avg4PI[i]])\n for i in r5:\n writer.writerow([avg5PI[i]])\n writer.writerow([avg0O2, ',SpO2 Start'])\n for i in rdrop:\n writer.writerow([avgdropO2[i]])\n for i in r1:\n writer.writerow([avg1O2[i]])\n for i in r2:\n writer.writerow([avg2O2[i]])\n for i in r3:\n writer.writerow([avg3O2[i]])\n for i in r4:\n writer.writerow([avg4O2[i]])\n for i in r5:\n writer.writerow([avg5O2[i]])\n writer.writerow([avg0PR, ',PR Start'])\n for i in rdrop:\n writer.writerow([avgdropPR[i]])\n for i in r1:\n writer.writerow([avg1PR[i]])\n for i in r2:\n writer.writerow([avg2PR[i]])\n for i in r3:\n writer.writerow([avg3PR[i]])\n for i in r4:\n writer.writerow([avg4PR[i]])\n for i in r5:\n writer.writerow([avg5PR[i]])\n writer.writerow(['Data Recording Time Length'])\n writer.writerows(RTL)\n writer.writerow(['Pre Desat Counts for X Minutes'])\n writer.writerows(PreDesats)\n writer.writerow(['Post Dest Counts for X Minutes'])\n writer.writerows(PostDesats)\n\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
poethacker/hello
Clustering.ipynb
apache-2.0
[ "Clustering Methods Covered Here\n\nK Means, \nHclus, \nDBSCAN, \nGaussian Mixture Models,\nBirch,\nminiBatch Kmeans\nMean Shift\n\nSilhouette Coefficient\nIf the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores:\na: The mean distance between a sample and all other points in the same class.\nb: The mean distance between a sample and all other points in the next nearest cluster.\nThe Silhouette Coefficient s for a single sample is then given as: b-a/(max(a,b)\nHomogeneity, completeness and V-measure\nthe following two desirable objectives for any cluster assignment:\n- homogeneity: each cluster contains only members of a single class.\n- completeness: all members of a given class are assigned to the same cluster.\nthose concept as scores homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better): Their harmonic mean called V-measure is computed by v_measure_score\nK Means Clustering\nThe KMeans algorithm clusters data by trying to separate samples in n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares. This algorithm requires the number of clusters to be specified. It scales well to large number of samples and has been used across a large range of application areas in many different fields.\nThe k-means algorithm divides a set of samples into disjoint clusters , each described by the mean of the samples in the cluster called the cluster “centroids”. The K-means algorithm aims to choose centroids that minimise the inertia, or within-cluster sum of squared criterion:\nInertia, or the within-cluster sum of squares criterion, can be recognized as a measure of how internally coherent clusters are. It suffers from various drawbacks:\nInertia makes the assumption that clusters are convex and isotropic, which is not always the case. It responds poorly to elongated clusters, or manifolds with irregular shapes.\nInertia is not a normalized metric. But in very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called “curse of dimensionality”). Running a dimensionality reduction algorithm such as PCA prior to k-means clustering can alleviate this problem and speed up the computations.", "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom collections import Counter\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.cluster import KMeans\nfrom sklearn import metrics\nfrom sklearn.metrics import pairwise_distances\nfrom sklearn.cluster import AgglomerativeClustering\nfrom sklearn.cluster import DBSCAN\n\nclusdf=pd.read_csv('C:\\\\Users\\\\ajaohri\\\\Desktop\\\\ODSP\\\\data\\\\plantTraits.csv')", "https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html\nUsage\ndata(plantTraits)\nFormat\nA data frame with 136 observations on the following 31 variables.\n\n\npdias\nDiaspore mass (mg)\n\n\nlongindex\nSeed bank longevity\n\n\ndurflow\nFlowering duration\n\n\nheight\nPlant height, an ordered factor with levels 1 < 2 < ... 
< 8.\n\n\nbegflow\nTime of first flowering, an ordered factor with levels 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9\n\n\nmycor\nMycorrhizas, an ordered factor with levels 0never < 1 sometimes< 2always\n\n\nvegaer\naerial vegetative propagation, an ordered factor with levels 0never < 1 present but limited< 2important.\n\n\nvegsout\nunderground vegetative propagation, an ordered factor with 3 levels identical to vegaer above.\n\n\nautopoll\nselfing pollination, an ordered factor with levels 0never < 1rare < 2 often< the rule3\n\n\ninsects\ninsect pollination, an ordered factor with 5 levels 0 < ... < 4.\n\n\nwind\nwind pollination, an ordered factor with 5 levels 0 < ... < 4.\n\n\nlign\na binary factor with levels 0:1, indicating if plant is woody.\n\n\npiq\na binary factor indicating if plant is thorny.\n\n\nros\na binary factor indicating if plant is rosette.\n\n\nsemiros\nsemi-rosette plant, a binary factor (0: no; 1: yes).\n\n\nleafy\nleafy plant, a binary factor.\n\n\nsuman\nsummer annual, a binary factor.\n\n\nwinan\nwinter annual, a binary factor.\n\n\nmonocarp\nmonocarpic perennial, a binary factor.\n\n\npolycarp\npolycarpic perennial, a binary factor.\n\n\nseasaes\nseasonal aestival leaves, a binary factor.\n\n\nseashiv\nseasonal hibernal leaves, a binary factor.\n\n\nseasver\nseasonal vernal leaves, a binary factor.\n\n\neveralw\nleaves always evergreen, a binary factor.\n\n\neverparti\nleaves partially evergreen, a binary factor.\n\n\nelaio\nfruits with an elaiosome (dispersed by ants), a binary factor.\n\n\nendozoo\nendozoochorous fruits, a binary factor.\n\n\nepizoo\nepizoochorous fruits, a binary factor.\n\n\naquat\naquatic dispersal fruits, a binary factor.\n\n\nwindgl\nwind dispersed fruits, a binary factor.\n\n\nunsp\nunspecialized mechanism of seed dispersal, a binary factor.", "clusdf = clusdf.drop(\"Unnamed: 0\", axis=1)\n\nclusdf.head()\n\nclusdf.info()\n\n#missing values\nclusdf.apply(lambda x: sum(x.isnull().values), axis = 0) \n\nclusdf.head(20)\n\nclusdf=clusdf.fillna(clusdf.mean())", "To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices.\nAn external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dudoit et al., 2002).\nInternal indices are used to measure the goodness of a clustering structure without external information (Tseng et al., 2005", "from sklearn.decomposition import PCA\nfrom sklearn.preprocessing import scale\n\nclusdf_scale = scale(clusdf)\nn_samples, n_features = clusdf_scale.shape\n\nn_samples, n_features\n\nreduced_data = PCA(n_components=2).fit_transform(clusdf_scale)\n\n#assuming height to be Y variable to be predicted\n#n_digits = len(np.unique(clusdf.height))\n#From R Cluster sizes:\n#[1] \"26 29 5 32\"\nn_digits=4\n\nkmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)\nkmeans.fit(reduced_data)\n\nclusdf.head(20)\n\n# Plot the decision boundary. For that, we will assign a color to each\nh=0.02\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\n# Obtain labels for each point in mesh. 
Use last trained model.\nZ = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\n# Plot the centroids as a white X\ncentroids = kmeans.cluster_centers_\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('K-means clustering on the digits dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()\n\nkmeans = KMeans(n_clusters=4, random_state=0).fit(reduced_data)\n\nkmeans.labels_\n\n np.unique(kmeans.labels_, return_counts=True)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.hist(kmeans.labels_)\nplt.show()\n\nkmeans.cluster_centers_\n\nmetrics.silhouette_score(reduced_data, kmeans.labels_, metric='euclidean')", "Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred.\nDrawbacks\nContrary to inertia, MI-based measures require the knowledge of the ground truth classes while almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).\nHierarchical clustering\nHierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample.\nThe AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together. The linkage criteria determines the metric used for the merge strategy:\n\nWard minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.\nMaximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.\nAverage linkage minimizes the average of the distances between all observations of pairs of clusters.\nSingle linkage minimizes the distance between the closest observations of pairs of clusters.", "clustering = AgglomerativeClustering(n_clusters=4).fit(reduced_data)\n\nclustering \n\nclustering.labels_\n\n np.unique(clustering.labels_, return_counts=True)\n\nfrom scipy.cluster.hierarchy import dendrogram, linkage\n\n\nZ = linkage(reduced_data)\n\ndendrogram(Z)\n#dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='y',orientation='top')\nplt.show()\n\nmetrics.silhouette_score(reduced_data, clustering.labels_, metric='euclidean')", "DBSCAN\nThe DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, which are samples that are in areas of high density. 
A cluster is therefore a set of core samples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but are not themselves core samples). There are two parameters to the algorithm, min_samples and eps, which define formally what we mean when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster.\nMore formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a distance of eps, which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space. A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples, finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster.", "db = DBSCAN().fit(reduced_data)\n\ndb\n\ndb.labels_\n\nclusdf.shape\n\nreduced_data.shape\n\nreduced_data[:10,:2]\n\nfor i in range(0, reduced_data.shape[0]):\n if db.labels_[i] == 0:\n c1 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='r',marker='+')\n elif db.labels_[i] == 1:\n c2 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='g',marker='o')\n elif db.labels_[i] == -1:c3 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='b',marker='*')\n \nplt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2','Noise'])\nplt.title('DBSCAN finds 2 clusters and noise')\nplt.show()\n ", "Gaussian mixture models\na mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with \"mixture distributions\" relate to deriving the properties of the overall population from those of the sub-populations, \"mixture models\" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.\nsklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided.\nA Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. 
One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.\nScikit-learn implements different classes to estimate Gaussian mixture models, that correspond to different estimation strategies.\ncite- https://jakevdp.github.io/PythonDataScienceHandbook/05.12-gaussian-mixtures.html", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport numpy as np\n\nclusdf.head()\n\nreduced_data\n\n# Plot the data with K Means Labels\nfrom sklearn.cluster import KMeans\nkmeans = KMeans(4, random_state=0)\nlabels = kmeans.fit(reduced_data).predict(reduced_data)\nplt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis');\n\nX=reduced_data\n\nfrom sklearn.cluster import KMeans\nfrom scipy.spatial.distance import cdist\n\ndef plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None):\n labels = kmeans.fit_predict(X)\n\n # plot the input data\n ax = ax or plt.gca()\n ax.axis('equal')\n ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)\n\n # plot the representation of the KMeans model\n centers = kmeans.cluster_centers_\n radii = [cdist(X[labels == i], [center]).max()\n for i, center in enumerate(centers)]\n for c, r in zip(centers, radii):\n ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1))\n\nkmeans = KMeans(n_clusters=4, random_state=0)\nplot_kmeans(kmeans, X)\n\nrng = np.random.RandomState(13)\nX_stretched = np.dot(X, rng.randn(2, 2))\n\nkmeans = KMeans(n_clusters=4, random_state=0)\nplot_kmeans(kmeans, X_stretched)\n\nfrom sklearn.mixture import GMM\ngmm = GMM(n_components=4).fit(X)\nlabels = gmm.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');\n\nprobs = gmm.predict_proba(X)\nprint(probs[:5].round(3))\n\nsize = 50 * probs.max(1) ** 2 # square emphasizes differences\nplt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size);\n\nfrom matplotlib.patches import Ellipse\n\ndef draw_ellipse(position, covariance, ax=None, **kwargs):\n \"\"\"Draw an ellipse with a given position and covariance\"\"\"\n ax = ax or plt.gca()\n \n # Convert covariance to principal axes\n if covariance.shape == (2, 2):\n U, s, Vt = np.linalg.svd(covariance)\n angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))\n width, height = 2 * np.sqrt(s)\n else:\n angle = 0\n width, height = 2 * np.sqrt(covariance)\n \n # Draw the Ellipse\n for nsig in range(1, 4):\n ax.add_patch(Ellipse(position, nsig * width, nsig * height,\n angle, **kwargs))\n \ndef plot_gmm(gmm, X, label=True, ax=None):\n ax = ax or plt.gca()\n labels = gmm.fit(X).predict(X)\n if label:\n ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)\n else:\n ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2)\n ax.axis('equal')\n \n w_factor = 0.2 / gmm.weights_.max()\n for pos, covar, w in zip(gmm.means_, gmm.covars_, gmm.weights_):\n draw_ellipse(pos, covar, alpha=w * w_factor)\n\ngmm = GMM(n_components=4, random_state=42)\nplot_gmm(gmm, X)\n\ngmm = GMM(n_components=4, covariance_type='full', random_state=42)\nplot_gmm(gmm, X_stretched)\n\nfrom sklearn.datasets import make_moons\nXmoon, ymoon = make_moons(200, noise=.05, random_state=0)\nplt.scatter(Xmoon[:, 0], Xmoon[:, 1]);\n\ngmm2 = GMM(n_components=2, covariance_type='full', random_state=0)\nplot_gmm(gmm2, Xmoon)\n\ngmm16 = GMM(n_components=16, covariance_type='full', random_state=0)\nplot_gmm(gmm16, Xmoon, label=False)", "mixture of 16 Gaussians serves not to 
find separated clusters of data, but rather to model the overall distribution of the input data", " \n%matplotlib inline\nn_components = np.arange(1, 21)\nmodels = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon)\n for n in n_components]\n\nplt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')\nplt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')\nplt.legend(loc='best')\nplt.xlabel('n_components')\nplt.show()", "The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8.\nBIRCH\nThe Birch (Balanced Iterative Reducing and Clustering using Hierarchies ) builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially lossy compressed to a set of Characteristic Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Characteristic Feature subclusters (CF Subclusters) and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children.\nThe CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory. This information includes:\n\nNumber of samples in a subcluster.\nLinear Sum - A n-dimensional vector holding the sum of all samples\nSquared Sum - Sum of the squared L2 norm of all samples.\nCentroids - To avoid recalculation linear sum / n_samples.\nSquared norm of the centroids.\n\nIt is a memory-efficient, online-learning algorithm provided as an alternative to MiniBatchKMeans. It constructs a tree data structure with the cluster centroids being read off the leaf. These can be either the final cluster centroids or can be provided as input to another clustering algorithm such as AgglomerativeClustering.", "from sklearn.cluster import Birch\n\n\nX = reduced_data\nbrc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,compute_labels=True)\nbrc.fit(X) \n\n\nbrc.predict(X)\n\nlabels = brc.predict(X)\nplt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis');\nplt.show()", "# Mini Batch K-Means\nThe MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm.\nThe algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast to k-means, this is done on a per-sample basis.", "from sklearn.cluster import MiniBatchKMeans\nimport numpy as np\nX = reduced_data \n # manually fit on batches\nkmeans = MiniBatchKMeans(n_clusters=2,random_state=0,batch_size=6)\nkmeans = kmeans.partial_fit(X[0:6,:])\nkmeans = kmeans.partial_fit(X[6:12,:])\nkmeans.cluster_centers_\n\nkmeans.predict(X)\n\n# fit on the whole data\nkmeans = MiniBatchKMeans(n_clusters=4,random_state=0,batch_size=6,max_iter=10).fit(X)\nkmeans.cluster_centers_\n\nkmeans.predict(X)\n\n# Plot the decision boundary. 
For that, we will assign a color to each\nh=0.02\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\n# Obtain labels for each point in mesh. Use last trained model.\nZ = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\n# Plot the centroids as a white X\ncentroids = kmeans.cluster_centers_\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('K-means clustering on the digits dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "Mean Shift\nMeanShift clustering aims to discover blobs in a smooth density of samples. It is a centroid based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.\nMean shift clustering using a flat kernel.\nMean shift clustering aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.\nSeeding is performed using a binning technique for scalability.", "print(__doc__)\n\nimport numpy as np\nfrom sklearn.cluster import MeanShift, estimate_bandwidth\nfrom sklearn.datasets.samples_generator import make_blobs\n\n# #############################################################################\n# Generate sample data\ncenters = [[1, 1], [-1, -1], [1, -1]]\nX = reduced_data\n\n# #############################################################################\n# Compute clustering with MeanShift\n\n# The following bandwidth can be automatically detected using\nbandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)\n\nms = MeanShift(bandwidth=bandwidth, bin_seeding=True)\nms.fit(X)\nlabels = ms.labels_\ncluster_centers = ms.cluster_centers_\n\nlabels_unique = np.unique(labels)\nn_clusters_ = len(labels_unique)\n\nprint(\"number of estimated clusters : %d\" % n_clusters_)\n\n# #############################################################################\n# Plot result\nimport matplotlib.pyplot as plt\nfrom itertools import cycle\n\nplt.figure(1)\nplt.clf()\n\ncolors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')\nfor k, col in zip(range(n_clusters_), colors):\n my_members = labels == k\n cluster_center = cluster_centers[k]\n plt.plot(X[my_members, 0], X[my_members, 1], col + '.')\n plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=14)\nplt.title('Estimated number of clusters: %d' % n_clusters_)\nplt.show()", "knowledge of the ground truth class assignments labels_true and \nour clustering algorithm assignments of the same samples labels_pred\nhttps://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation\n- 
adjusted Rand index is a function that measures the similarity of the two assignments\n\nthe Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations.\n\nThe following two desirable objectives for any cluster assignment:\n- homogeneity: each cluster contains only members of a single class.\n- completeness: all members of a given class are assigned to the same cluster.\nWe can turn those concept as scores.Both are bounded below by 0.0 and above by 1.0 (higher is better)\n\nhomogeneity_score and \ncompleteness_score. \n\nTheir harmonic mean called V-measure is computed by \n- v_measure_score\n\nThe Silhouette Coefficient is defined for each sample and is composed of two scores:\na: The mean distance between a sample and all other points in the same class.\nb: The mean distance between a sample and all other points in the next nearest cluster.", "from sklearn import metrics\nfrom sklearn.metrics import pairwise_distances\nfrom sklearn import datasets\ndataset = datasets.load_iris()\nX = dataset.data\ny = dataset.target\n\nimport numpy as np\nfrom sklearn.cluster import KMeans\nkmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)\nlabels = kmeans_model.labels_\n\n\n\nlabels_true=y\nlabels_pred=labels\n\nfrom sklearn import metrics\nmetrics.adjusted_rand_score(labels_true, labels_pred) \n\nfrom sklearn import metrics\nmetrics.adjusted_mutual_info_score(labels_true, labels_pred)\n\nmetrics.homogeneity_score(labels_true, labels_pred)\n\n\nmetrics.completeness_score(labels_true, labels_pred) \n\nmetrics.v_measure_score(labels_true, labels_pred) \n\nmetrics.silhouette_score(X, labels, metric='euclidean')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/probability
tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TFP Probabilistic Layers: Regression\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this example we show how to fit regression models using TFP's \"probabilistic layers.\"\nDependencies & Prerequisites", "#@title Import { display-mode: \"form\" }\n\n\nfrom pprint import pprint\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\n\nimport tensorflow_probability as tfp\n\nsns.reset_defaults()\n#sns.set_style('whitegrid')\n#sns.set_context('talk')\nsns.set_context(context='talk',font_scale=0.7)\n\n%matplotlib inline\n\ntfd = tfp.distributions", "Make things Fast!\nBefore we dive in, let's make sure we're using a GPU for this demo. \nTo do this, select \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\nThe following snippet will verify that we have access to a GPU.", "if tf.test.gpu_device_name() != '/device:GPU:0':\n print('WARNING: GPU device not found.')\nelse:\n print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))", "Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)\nMotivation\nWouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,", "negloglik = lambda y, rv_y: -rv_y.log_prob(y)", "Well not only is it possible, but this colab shows how! 
(In context of linear regression problems.)", "#@title Synthesize dataset.\nw0 = 0.125\nb0 = 5.\nx_range = [-20, 60]\n\ndef load_dataset(n=150, n_tst=150):\n np.random.seed(43)\n def s(x):\n g = (x - x_range[0]) / (x_range[1] - x_range[0])\n return 3 * (0.25 + g**2.)\n x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]\n eps = np.random.randn(n) * s(x)\n y = (w0 * x * (1. + np.sin(x)) + b0) + eps\n x = x[..., np.newaxis]\n x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)\n x_tst = x_tst[..., np.newaxis]\n return y, x, x_tst\n\ny, x, x_tst = load_dataset()", "Case 1: No Uncertainty", "# Build model.\nmodel = tf.keras.Sequential([\n tf.keras.layers.Dense(1),\n tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),\n])\n\n# Do inference.\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)\nmodel.fit(x, y, epochs=1000, verbose=False);\n\n# Profit.\n[print(np.squeeze(w.numpy())) for w in model.weights];\nyhat = model(x_tst)\nassert isinstance(yhat, tfd.Distribution)\n\n#@title Figure 1: No uncertainty.\nw = np.squeeze(model.layers[-2].kernel.numpy())\nb = np.squeeze(model.layers[-2].bias.numpy())\n\nplt.figure(figsize=[6, 1.5]) # inches\n#plt.figure(figsize=[8, 5]) # inches\nplt.plot(x, y, 'b.', label='observed');\nplt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);\nplt.ylim(-0.,17);\nplt.yticks(np.linspace(0, 15, 4)[1:]);\nplt.xticks(np.linspace(*x_range, num=9));\n\nax=plt.gca();\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data', 0))\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n#ax.spines['left'].set_smart_bounds(True)\n#ax.spines['bottom'].set_smart_bounds(True)\nplt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))\n\nplt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)", "Case 2: Aleatoric Uncertainty", "# Build model.\nmodel = tf.keras.Sequential([\n tf.keras.layers.Dense(1 + 1),\n tfp.layers.DistributionLambda(\n lambda t: tfd.Normal(loc=t[..., :1],\n scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),\n])\n\n# Do inference.\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)\nmodel.fit(x, y, epochs=1000, verbose=False);\n\n# Profit.\n[print(np.squeeze(w.numpy())) for w in model.weights];\nyhat = model(x_tst)\nassert isinstance(yhat, tfd.Distribution)\n\n#@title Figure 2: Aleatoric Uncertainty\nplt.figure(figsize=[6, 1.5]) # inches\nplt.plot(x, y, 'b.', label='observed');\n\nm = yhat.mean()\ns = yhat.stddev()\n\nplt.plot(x_tst, m, 'r', linewidth=4, label='mean');\nplt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');\nplt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');\n\nplt.ylim(-0.,17);\nplt.yticks(np.linspace(0, 15, 4)[1:]);\nplt.xticks(np.linspace(*x_range, num=9));\n\nax=plt.gca();\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data', 0))\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n#ax.spines['left'].set_smart_bounds(True)\n#ax.spines['bottom'].set_smart_bounds(True)\nplt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))\n\nplt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)", "Case 3: Epistemic Uncertainty", "# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.\ndef posterior_mean_field(kernel_size, bias_size=0, dtype=None):\n n = kernel_size + 
bias_size\n c = np.log(np.expm1(1.))\n return tf.keras.Sequential([\n tfp.layers.VariableLayer(2 * n, dtype=dtype),\n tfp.layers.DistributionLambda(lambda t: tfd.Independent(\n tfd.Normal(loc=t[..., :n],\n scale=1e-5 + tf.nn.softplus(c + t[..., n:])),\n reinterpreted_batch_ndims=1)),\n ])\n\n# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.\ndef prior_trainable(kernel_size, bias_size=0, dtype=None):\n n = kernel_size + bias_size\n return tf.keras.Sequential([\n tfp.layers.VariableLayer(n, dtype=dtype),\n tfp.layers.DistributionLambda(lambda t: tfd.Independent(\n tfd.Normal(loc=t, scale=1),\n reinterpreted_batch_ndims=1)),\n ])\n\n# Build model.\nmodel = tf.keras.Sequential([\n tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),\n tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),\n])\n\n# Do inference.\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)\nmodel.fit(x, y, epochs=1000, verbose=False);\n\n# Profit.\n[print(np.squeeze(w.numpy())) for w in model.weights];\nyhat = model(x_tst)\nassert isinstance(yhat, tfd.Distribution)\n\n#@title Figure 3: Epistemic Uncertainty\nplt.figure(figsize=[6, 1.5]) # inches\nplt.clf();\nplt.plot(x, y, 'b.', label='observed');\n\nyhats = [model(x_tst) for _ in range(100)]\navgm = np.zeros_like(x_tst[..., 0])\nfor i, yhat in enumerate(yhats):\n m = np.squeeze(yhat.mean())\n s = np.squeeze(yhat.stddev())\n if i < 25:\n plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)\n avgm += m\nplt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)\n\nplt.ylim(-0.,17);\nplt.yticks(np.linspace(0, 15, 4)[1:]);\nplt.xticks(np.linspace(*x_range, num=9));\n\nax=plt.gca();\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data', 0))\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n#ax.spines['left'].set_smart_bounds(True)\n#ax.spines['bottom'].set_smart_bounds(True)\nplt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))\n\nplt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)", "Case 4: Aleatoric & Epistemic Uncertainty", "# Build model.\nmodel = tf.keras.Sequential([\n tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),\n tfp.layers.DistributionLambda(\n lambda t: tfd.Normal(loc=t[..., :1],\n scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),\n])\n\n# Do inference.\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)\nmodel.fit(x, y, epochs=1000, verbose=False);\n\n# Profit.\n[print(np.squeeze(w.numpy())) for w in model.weights];\nyhat = model(x_tst)\nassert isinstance(yhat, tfd.Distribution)\n\n#@title Figure 4: Both Aleatoric & Epistemic Uncertainty\nplt.figure(figsize=[6, 1.5]) # inches\nplt.plot(x, y, 'b.', label='observed');\n\nyhats = [model(x_tst) for _ in range(100)]\navgm = np.zeros_like(x_tst[..., 0])\nfor i, yhat in enumerate(yhats):\n m = np.squeeze(yhat.mean())\n s = np.squeeze(yhat.stddev())\n if i < 15:\n plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)\n plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);\n plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);\n avgm += m\nplt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', 
linewidth=4)\n\nplt.ylim(-0.,17);\nplt.yticks(np.linspace(0, 15, 4)[1:]);\nplt.xticks(np.linspace(*x_range, num=9));\n\nax=plt.gca();\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data', 0))\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n#ax.spines['left'].set_smart_bounds(True)\n#ax.spines['bottom'].set_smart_bounds(True)\nplt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))\n\nplt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)", "Case 5: Functional Uncertainty", "#@title Custom PSD Kernel\nclass RBFKernelFn(tf.keras.layers.Layer):\n def __init__(self, **kwargs):\n super(RBFKernelFn, self).__init__(**kwargs)\n dtype = kwargs.get('dtype', None)\n\n self._amplitude = self.add_variable(\n initializer=tf.constant_initializer(0),\n dtype=dtype,\n name='amplitude')\n \n self._length_scale = self.add_variable(\n initializer=tf.constant_initializer(0),\n dtype=dtype,\n name='length_scale')\n\n def call(self, x):\n # Never called -- this is just a layer so it can hold variables\n # in a way Keras understands.\n return x\n\n @property\n def kernel(self):\n return tfp.math.psd_kernels.ExponentiatedQuadratic(\n amplitude=tf.nn.softplus(0.1 * self._amplitude),\n length_scale=tf.nn.softplus(5. * self._length_scale)\n )\n\n# For numeric stability, set the default floating-point dtype to float64\ntf.keras.backend.set_floatx('float64')\n\n# Build model.\nnum_inducing_points = 40\nmodel = tf.keras.Sequential([\n tf.keras.layers.InputLayer(input_shape=[1]),\n tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),\n tfp.layers.VariationalGaussianProcess(\n num_inducing_points=num_inducing_points,\n kernel_provider=RBFKernelFn(),\n event_shape=[1],\n inducing_index_points_initializer=tf.constant_initializer(\n np.linspace(*x_range, num=num_inducing_points,\n dtype=x.dtype)[..., np.newaxis]),\n unconstrained_observation_noise_variance_initializer=(\n tf.constant_initializer(np.array(0.54).astype(x.dtype))),\n ),\n])\n\n# Do inference.\nbatch_size = 32\nloss = lambda y, rv_y: rv_y.variational_loss(\n y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])\nmodel.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)\nmodel.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)\n\n# Profit.\nyhat = model(x_tst)\nassert isinstance(yhat, tfd.Distribution)\n\n#@title Figure 5: Functional Uncertainty\n\ny, x, _ = load_dataset()\n\nplt.figure(figsize=[6, 1.5]) # inches\nplt.plot(x, y, 'b.', label='observed');\n\nnum_samples = 7\nfor i in range(num_samples):\n sample_ = yhat.sample().numpy()\n plt.plot(x_tst,\n sample_[..., 0].T,\n 'r',\n linewidth=0.9,\n label='ensemble means' if i == 0 else None);\n\nplt.ylim(-0.,17);\nplt.yticks(np.linspace(0, 15, 4)[1:]);\nplt.xticks(np.linspace(*x_range, num=9));\n\nax=plt.gca();\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data', 0))\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n#ax.spines['left'].set_smart_bounds(True)\n#ax.spines['bottom'].set_smart_bounds(True)\nplt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))\n\nplt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jamesnw/wtb-data
notebooks/Style Similarity.ipynb
mit
[ "# Style Similarity\n\n# Import libraries\nimport numpy as np\nimport pandas as pd\n# Import the data\nimport WTBLoad\nwtb = WTBLoad.load()", "Question: I want to know how similar 2 style are. I really like Apricot Blondes, and I want to see what other styles Apricot would go in. Perhaps it would be good in a German Pils.\nHow to get there: The dataset shows the percentage of votes that said a style-addition combo would likely taste good. So, we can compare the votes on each addition for any two styles, and see how similar they are.", "import math\n# Square the difference of each row, and then return the mean of the column. \n# This is the average difference between the two.\n# It will be higher if they are different, and lower if they are similar\ndef similarity(styleA, styleB):\n diff = np.square(wtb[styleA] - wtb[styleB])\n return diff.mean()\n\nres = []\n# Loop through each addition pair\nwtb = wtb.T\nfor styleA in wtb.columns:\n for styleB in wtb.columns:\n # Skip if styleA and combo B are the same. \n # To prevent duplicates, skip if A is after B alphabetically\n if styleA != styleB and styleA < styleB:\n res.append([styleA, styleB, similarity(styleA, styleB)])\ndf = pd.DataFrame(res, columns=[\"styleA\", \"styleB\", \"similarity\"])", "Top 10 most similar styles", "df.sort_values(\"similarity\").head(10)", "10 Least Similar styles", "df.sort_values(\"similarity\", ascending=False).head(10)", "Similarity of a specific combo", "def comboSimilarity(styleA, styleB):\n # styleA needs to be before styleB alphabetically\n if styleA > styleB:\n addition_temp = styleA\n styleA = styleB\n styleB = addition_temp\n return df.loc[df['styleA'] == styleA].loc[df['styleB'] == styleB]\ncomboSimilarity('Blonde Ale', 'German Pils')", "But is that good or bad? How does it compare to others?", "df.describe()", "We can see that Blonde Ales and German Pils are right between the mean and 50th percentile, so it's not a bad idea, but it's not a good idea either.\nWe can also take a look at this visually to confirm.", "%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nn, bins, patches = plt.hist(df['similarity'], bins=50)\n\nsimilarity = float(comboSimilarity('Blonde Ale', 'German Pils')['similarity'])\n\n# Find the histogram bin that holds the similarity between the two\ntarget = np.argmax(bins>similarity)\npatches[target].set_fc('r')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
apache-2.0
[ "TensorFlow Transfer Learning\nThis notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.\nLearning Objectives\n\nKnow how to apply image augmentation\nKnow how to download and use a TensorFlow Hub module as a layer in Keras.", "import os\nimport pathlib\n\nimport IPython.display as display\nimport matplotlib.pylab as plt\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_hub as hub\nfrom PIL import Image\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import (\n Conv2D,\n Dense,\n Dropout,\n Flatten,\n MaxPooling2D,\n Softmax,\n)", "Exploring the data\nAs usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.\nThe below tf.keras.utils.get_file command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.", "data_dir = tf.keras.utils.get_file(\n \"flower_photos\",\n \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\",\n untar=True,\n)\n\n# Print data path\nprint(\"cd\", data_dir)", "We can use python's built in pathlib tool to get a sense of this unstructured data.", "data_dir = pathlib.Path(data_dir)\n\nimage_count = len(list(data_dir.glob(\"*/*.jpg\")))\nprint(\"There are\", image_count, \"images.\")\n\nCLASS_NAMES = np.array(\n [item.name for item in data_dir.glob(\"*\") if item.name != \"LICENSE.txt\"]\n)\nprint(\"These are the available classes:\", CLASS_NAMES)", "Let's display the images so we can see what our model will be trying to learn.", "roses = list(data_dir.glob(\"roses/*\"))\n\nfor image_path in roses[:3]:\n display.display(Image.open(str(image_path)))", "Building the dataset\nKeras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.\nWe have already prepared these images to be stored on the cloud in gs://cloud-ml-data/img/flower_photos/. The images are randomly split into a training set with 90% data and an iterable with 10% data listed in CSV files:\nTraining set: train_set.csv\nEvaluation set: eval_set.csv \nExplore the format and contents of the train.csv by running:", "!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n | head -5 > /tmp/input.csv\n!cat /tmp/input.csv\n\n!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | \\\n sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt\n!cat /tmp/labels.txt", "Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.\nThankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. 
Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.\nWe'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.", "IMG_HEIGHT = 224\nIMG_WIDTH = 224\nIMG_CHANNELS = 3\n\nBATCH_SIZE = 32\n# 10 is a magic number tuned for local training of this dataset.\nSHUFFLE_BUFFER = 10 * BATCH_SIZE\nAUTOTUNE = tf.data.experimental.AUTOTUNE\n\nVALIDATION_IMAGES = 370\nVALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE\n\ndef decode_img(img, reshape_dims):\n # Convert the compressed string to a 3D uint8 tensor.\n img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)\n # Use `convert_image_dtype` to convert to floats in the [0,1] range.\n img = tf.image.convert_image_dtype(img, tf.float32)\n # Resize the image to the desired size.\n return tf.image.resize(img, reshape_dims)", "Is it working? Let's see!\nTODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.", "img = tf.io.read_file(\n \"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg\"\n)\n\n# Uncomment to see the image string.\n# print(img)\nimg = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])\nplt.imshow(img.numpy());", "One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.", "def decode_csv(csv_row):\n record_defaults = [\"path\", \"flower\"]\n filename, label_string = tf.io.decode_csv(csv_row, record_defaults)\n image_bytes = tf.io.read_file(filename=filename)\n label = tf.math.equal(CLASS_NAMES, label_string)\n return image_bytes, label", "Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few:\n\ntf.image.random_crop - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.\ntf.image.random_flip_left_right - Randomly flips the image horizontally\ntf.image.random_brightness - Randomly adjusts how dark or light the image is.\ntf.image.random_contrast - Randomly adjusts image contrast.\n\nTODO 1.b: Add the missing parameters from the random augment functions.", "MAX_DELTA = 63.0 / 255.0 # Change brightness by at most 17.7%\nCONTRAST_LOWER = 0.2\nCONTRAST_UPPER = 1.8\n\n\ndef read_and_preprocess(image_bytes, label, random_augment=False):\n if random_augment:\n img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])\n img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])\n img = tf.image.random_flip_left_right(img)\n img = tf.image.random_brightness(img, MAX_DELTA)\n img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)\n else:\n img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])\n return img, label\n\n\ndef read_and_preprocess_with_augment(image_bytes, label):\n return read_and_preprocess(image_bytes, label, random_augment=True)", "Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.\n.cache is key here. 
It will store the dataset in memory", "def load_dataset(csv_of_filenames, batch_size, training=True):\n dataset = (\n tf.data.TextLineDataset(filenames=csv_of_filenames)\n .map(decode_csv)\n .cache()\n )\n\n if training:\n dataset = (\n dataset.map(read_and_preprocess_with_augment)\n .shuffle(SHUFFLE_BUFFER)\n .repeat(count=None)\n ) # Indefinately.\n else:\n dataset = dataset.map(read_and_preprocess).repeat(\n count=1\n ) # Each photo used once.\n\n # Prefetch prepares the next set of batches while current batch is in use.\n return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)", "We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.", "train_path = \"gs://cloud-ml-data/img/flower_photos/train_set.csv\"\ntrain_data = load_dataset(train_path, 1)\nitr = iter(train_data)", "TODO 1.c: Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?", "image_batch, label_batch = next(itr)\nimg = image_batch[0]\nplt.imshow(img)\nprint(label_batch[0])", "Note: It may take a 4-5 minutes to see result of different batches. \nMobileNetV2\nThese flower photos are much larger than handwritting recognition images in MNIST. They are about 10 times as many pixels per axis and there are three color channels, making the information here over 200 times larger!\nHow do our current techniques stand up? Copy your best model architecture over from the <a href=\"2_mnist_models.ipynb\">MNIST models lab</a> and see how well it does after training for 5 epochs of 50 steps.\nTODO 2.a Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.", "eval_path = \"gs://cloud-ml-data/img/flower_photos/eval_set.csv\"\nnclasses = len(CLASS_NAMES)\nhidden_layer_1_neurons = 400\nhidden_layer_2_neurons = 100\ndropout_rate = 0.25\nnum_filters_1 = 64\nkernel_size_1 = 3\npooling_size_1 = 2\nnum_filters_2 = 32\nkernel_size_2 = 3\npooling_size_2 = 2\n\nlayers = [\n Conv2D(\n num_filters_1,\n kernel_size=kernel_size_1,\n activation=\"relu\",\n input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS),\n ),\n MaxPooling2D(pooling_size_1),\n Conv2D(num_filters_2, kernel_size=kernel_size_2, activation=\"relu\"),\n MaxPooling2D(pooling_size_2),\n Flatten(),\n Dense(hidden_layer_1_neurons, activation=\"relu\"),\n Dense(hidden_layer_2_neurons, activation=\"relu\"),\n Dropout(dropout_rate),\n Dense(nclasses),\n Softmax(),\n]\n\nold_model = Sequential(layers)\nold_model.compile(\n optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"]\n)\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)\n\nold_model.fit(\n train_ds,\n epochs=5,\n steps_per_epoch=5,\n validation_data=eval_ds,\n validation_steps=VALIDATION_STEPS,\n)", "If your model is like mine, it learns a little bit, slightly better then random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem then MNIST, so wouldn't we need a larger model? But how big do we need to make it?\nEnter Transfer Learning. Why not take advantage of someone else's hard work? 
We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.\nTensorflow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's compare how a model trained on ImageNet data compares to one built from scratch.\nThe tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand paramaters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.\nTODO 2.b: Add a Hub Keras Layer at the top of the model using the handle provided.", "module_selection = \"mobilenet_v2_100_224\"\nmodule_handle = \"https://tfhub.dev/google/imagenet/{}/feature_vector/4\".format(\n module_selection\n)\n\ntransfer_model = tf.keras.Sequential(\n [\n hub.KerasLayer(module_handle, trainable=False),\n tf.keras.layers.Dropout(rate=0.2),\n tf.keras.layers.Dense(\n nclasses,\n activation=\"softmax\",\n kernel_regularizer=tf.keras.regularizers.l2(0.0001),\n ),\n ]\n)\ntransfer_model.build((None,) + (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))\ntransfer_model.summary()", "Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!\nMoment of truth. Let's compile this new model and see how it compares to our MNIST architecture.", "transfer_model.compile(\n optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"]\n)\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)\n\ntransfer_model.fit(\n train_ds,\n epochs=5,\n steps_per_epoch=5,\n validation_data=eval_ds,\n validation_steps=VALIDATION_STEPS,\n)", "Alright, looking better!\nStill, there's clear room to improve. Data bottlenecks are especially prevalent with image data due to the size of the image files. There's much to consider such as the computation of augmenting images and the bandwidth to transfer images between machines.\nThink life is too short, and there has to be a better way? In the next lab, we'll blast away these problems by developing a cloud strategy to train with TPUs!\nBonus Exercise\nKeras has a local way to do distributed training, but we'll be using a different technique in the next lab. Want to give the local way a try? Check out this excellent blog post to get started. Or want to go full-blown Keras? It also has a number of pre-trained models ready to use.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aliasvishnu/TensorFlow-Creative-Applications
Training a network with TensorFlow/.ipynb_checkpoints/Sine wave predictor-checkpoint.ipynb
gpl-3.0
[ "Building a neural network with TensorFlow\nIn this module we are going to build a neural network for regression. Regression is the prediction of a real-valued number given some inputs.", "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Let's generate some data, in this case, a noisy sine wave as plotted below", "n_observations = 1000\nxs = np.linspace(-3.0, 3.0, n_observations)\nys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)\n\nplt.scatter(xs, ys, alpha=0.15, marker = '+')\nplt.show()\n# alpha makes the points transparent and marker changes it from dots to +'s", "We are going to use placeholders from now on. Placeholder for X and Y are as follows", "X = tf.placeholder(tf.float32, name = 'X')\nY = tf.placeholder(tf.float32, name = 'Y')\n\nsess = tf.InteractiveSession()\n\nn = tf.random_normal([1000]).eval()\nn_ = tf.random_normal([1000], stddev = 0.1).eval()\n\nplt.hist(n) # plt.hist(n, 20) gives answer with 20 buckets\nplt.hist(n_) # We need initial values much closer to 0 for initializing the weights", "We need two parameters, weight W and bias B for our model", "W = tf.Variable(tf.random_normal([1], stddev=0.1), name = 'weight')\nB = tf.Variable(0.0, name = 'bias')", "We need to define model, and a cost function", "# Perceptron model (or Linear regression)\nY_ = X*W + B \n\ndef distance(y, y_):\n return tf.abs(y-y_)\n\n# cost = distance(Y_, tf.sin(X))\ncost = tf.reduce_mean(distance(Y_, Y))\n\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)", "Now we have defined the variables, we need to run the code\nBefore we run the code, we must also run the variables using tf.initialize_all_variables() to give an initial value to W and B.", "n_iterations = 100\nsess.run(tf.initialize_all_variables())\nfor _ in range(n_iterations):\n sess.run(optimizer, feed_dict = {X:xs, Y:ys})\n training_cost = sess.run(cost, feed_dict = {X:xs, Y:ys})\n \n # This is how to print the values mid execution\n print training_cost, sess.run(W), sess.run(B)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
monicathieu/cu-psych-r-tutorial
public/tutorials/python/3_descriptives/lesson.ipynb
mit
[ "Descriptive statistics\n\nGoals of this lesson\nStudents will learn:\n\nHow to group and categorize data in Python\nHow to generative descriptive statistics in Python", "# load packages we will be using for this lesson\nimport pandas as pd", "0. Open dataset and load package\nThis dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.", "# use pd.read_csv to open data into python\ndf = pd.read_csv(\"uncapher_2016_repeated_measures_dataset.csv\")", "1. Familiarize yourself with the data\nQuick review from data cleaning: take a look at the basic data structure, number of rows and columns.", "df.head()\n\ndf.shape\n\ndf.columns", "2. Selecting relevant variables\nSometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.", "df = df[[\"subjNum\", \"groupStatus\", \"adhd\", \"hitRate\", \"faRate\", \"dprime\"]]\ndf.head()", "3. Basic Descriptives\nSummarizing data\nLet's learn how to make simple tables of summary statistics.\nFirst, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.", "df.describe()", "3. Grouping data\nNext, we will learn how to group data based on certain variables of interest.\nWe will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.", "df.groupby([\"groupStatus\"]).mean()", "We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low). \nWe will first make a factor for ADHD (median-split), and add it as a grouping variable using the cut() function in pandas:", "df[\"adhdF\"] = pd.cut(df[\"adhd\"],bins=2,labels=[\"Low\",\"High\"])", "Then we'll check how evenly split these groups are by using groupby() the size() functions:", "df.groupby([\"groupStatus\",\"adhdF\"]).size()", "Then we'll calculate some summary info about these groups:", "df.groupby([\"groupStatus\",\"adhdF\"]).mean()", "A note on piping / stringing commands together\nIn R, we often use the pipe %&gt;% to string a series of steps together. We can do the same in python with many functions in a row\nThis is how we're able to take the output of df.groupby([\"groupStatus\",\"adhdF\"]) and then send that output into the mean() function\n\n5. Extra: Working with a long dataset\nThis is a repeated measures (\"long\") dataset, with multiple rows per subject. This makes things a bit tricker, but we are going to show you some tools for how to work with \"long\" datasets.\nHow many unique subjects are in the data?", "subList = df[\"subjNum\"].unique()\nnSubs = len(subList)\nnSubs", "How many trials were there per subject?", "nTrialsPerSubj = df.groupby([\"subjNum\"]).size().reset_index(name=\"nTrials\")\nnTrialsPerSubj.head()", "Combine summary statistics with the full data frame\nFor some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. 
We can do this by summarizing the data in a new data frame and then merging it with the full data.", "avgHR = df.groupby([\"subjNum\"])[\"hitRate\"].mean().reset_index(name=\"avgHR\")\navgHR.head()\n\ndf = df.merge(avgHR,on=\"subjNum\")\ndf.head()", "You should now have an avgHR column in df, which will repeat within each subject, but vary across subjects." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Naereen/notebooks
agreg/Mémoisation_en_Python_et_OCaml.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Mémoïsation,-en-Python-et-en-OCaml\" data-toc-modified-id=\"Mémoïsation,-en-Python-et-en-OCaml-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Mémoïsation, en Python et en OCaml</a></div><div class=\"lev2 toc-item\"><a href=\"#En-Python\" data-toc-modified-id=\"En-Python-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>En Python</a></div><div class=\"lev3 toc-item\"><a href=\"#Exemples-de-fonctions-à-mémoïser\" data-toc-modified-id=\"Exemples-de-fonctions-à-mémoïser-111\"><span class=\"toc-item-num\">1.1.1&nbsp;&nbsp;</span>Exemples de fonctions à mémoïser</a></div><div class=\"lev3 toc-item\"><a href=\"#Mémoïsation-générique,-non-typée\" data-toc-modified-id=\"Mémoïsation-générique,-non-typée-112\"><span class=\"toc-item-num\">1.1.2&nbsp;&nbsp;</span>Mémoïsation générique, non typée</a></div><div class=\"lev3 toc-item\"><a href=\"#Essais\" data-toc-modified-id=\"Essais-113\"><span class=\"toc-item-num\">1.1.3&nbsp;&nbsp;</span>Essais</a></div><div class=\"lev3 toc-item\"><a href=\"#Mémoïsation-générique-et-typée\" data-toc-modified-id=\"Mémoïsation-générique-et-typée-114\"><span class=\"toc-item-num\">1.1.4&nbsp;&nbsp;</span>Mémoïsation générique et typée</a></div><div class=\"lev3 toc-item\"><a href=\"#Bonus-:-on-peut-utiliser-la-syntaxe-d'un-décorateur-en-Python\" data-toc-modified-id=\"Bonus-:-on-peut-utiliser-la-syntaxe-d'un-décorateur-en-Python-115\"><span class=\"toc-item-num\">1.1.5&nbsp;&nbsp;</span>Bonus : on peut utiliser la syntaxe d'un décorateur en Python</a></div><div class=\"lev3 toc-item\"><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-116\"><span class=\"toc-item-num\">1.1.6&nbsp;&nbsp;</span>Conclusion</a></div><div class=\"lev2 toc-item\"><a href=\"#En-OCaml\" data-toc-modified-id=\"En-OCaml-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>En OCaml</a></div><div class=\"lev3 toc-item\"><a href=\"#Préliminaires\" data-toc-modified-id=\"Préliminaires-121\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>Préliminaires</a></div><div class=\"lev3 toc-item\"><a href=\"#Exemples-de-fonctions-à-mémoïser\" data-toc-modified-id=\"Exemples-de-fonctions-à-mémoïser-122\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>Exemples de fonctions à mémoïser</a></div><div class=\"lev3 toc-item\"><a href=\"#Mémoïsation-pour-des-fonctions-d'un-argument\" data-toc-modified-id=\"Mémoïsation-pour-des-fonctions-d'un-argument-123\"><span class=\"toc-item-num\">1.2.3&nbsp;&nbsp;</span>Mémoïsation pour des fonctions d'un argument</a></div><div class=\"lev3 toc-item\"><a href=\"#Essais\" data-toc-modified-id=\"Essais-124\"><span class=\"toc-item-num\">1.2.4&nbsp;&nbsp;</span>Essais</a></div><div class=\"lev3 toc-item\"><a href=\"#Exemple-de-la-suite-de-Fibonacci\" data-toc-modified-id=\"Exemple-de-la-suite-de-Fibonacci-125\"><span class=\"toc-item-num\">1.2.5&nbsp;&nbsp;</span>Exemple de la suite de Fibonacci</a></div><div class=\"lev3 toc-item\"><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-126\"><span class=\"toc-item-num\">1.2.6&nbsp;&nbsp;</span>Conclusion</a></div>\n\n# Mémoïsation, en Python et en OCaml\n\nCe document montre deux exemples d'implémentations d'un procédé générique (mais basique) de [mémoïsation](https://fr.wikipedia.org/wiki/M%C3%A9mo%C3%AFsation) en [Python](https://python.org/) et en [OCaml](https://ocaml.org/)\n\n## En Python\n\n### Exemples de fonctions à mémoïser\nOn commence avec des fonctions inutilement lentes :", "from time import sleep\n\ndef f1(n):\n 
sleep(3)\n return n + 3\n\ndef f2(n):\n sleep(4)\n return n * n\n\n%timeit f1(10)\n\n%timeit f2(10)", "Mémoïsation générique, non typée\nC'est étrangement court !", "def memo(f):\n memoire = {} # dictionnaire vide, {} ou dict()\n def memo_f(n): # nouvelle fonction\n if n not in memoire: # verification\n memoire[n] = f(n) # stockage\n return memoire[n] # lecture\n return memo_f # ==> f memoisée !", "Essais", "memo_f1 = memo(f1)\n\nprint(\"3 secondes...\")\nprint(memo_f1(10)) # 13, 3 secondes après\nprint(\"0 secondes !\")\nprint(memo_f1(10)) # instantanné !\n\n# différent de ces deux lignes !\n\nprint(\"3 secondes...\")\nprint(memo(f1)(10))\nprint(\"3 secondes...\")\nprint(memo(f1)(10)) # 3 secondes aussi !\n\n%timeit memo_f1(10) # instantanné !", "Et :", "memo_f2 = memo(f2)\n\nprint(\"4 secondes...\")\nprint(memo_f2(10)) # 100, 4 secondes après\nprint(\"0 secondes !\")\nprint(memo_f2(10)) # instantanné !\n\n%timeit memo_f2(10) # instantanné !", "Mémoïsation générique et typée\nCe n'est pas tellement plus compliquée de typer la mémoïsation.", "def memo_avec_type(f):\n memoire = {} # dictionnaire vide, {} ou dict()\n def memo_f_avec_type(n):\n if (type(n), n) not in memoire:\n memoire[(type(n), n)] = f(n)\n return memoire[(type(n), n)]\n return memo_f_avec_type", "Avantage, on obtient un résultat plus cohérent \"au niveau de la reproducibilité des résultats\", par exemple :", "def fonction_sur_entiers_ou_flottants(n):\n if isinstance(n, int):\n return 'Int'\n elif isinstance(n, float):\n return 'Float'\n else:\n return '?'\n\ntest0 = fonction_sur_entiers_ou_flottants\nprint(test0(1))\nprint(test0(1.0)) # résultat correct !\nprint(test0(\"1\"))\n\ntest1 = memo(fonction_sur_entiers_ou_flottants)\nprint(test1(1))\nprint(test1(1.0)) # résultat incorrect !\nprint(test1(\"1\"))\n\ntest2 = memo_avec_type(fonction_sur_entiers_ou_flottants)\nprint(test2(1))\nprint(test2(1.0)) # résultat correct !\nprint(test2(\"1\"))", "Bonus : on peut utiliser la syntaxe d'un décorateur en Python", "def fibo(n):\n if n <= 1: return 1\n else: return fibo(n-1) + fibo(n-2)\n\nprint(\"Test de fibo() non mémoisée :\")\nfor n in range(10):\n print(\"F_{} = {}\".format(n, fibo(n)))", "Cette fonction récursive est terriblement lente !", "%timeit fibo(35)\n\n# version plus rapide !\n@memo\ndef fibo2(n):\n if n <= 1: return 1\n else: return fibo2(n-1) + fibo2(n-2)\n\nprint(\"Test de fibo() mémoisée (plus rapide) :\")\nfor n in range(10):\n print(\"F_{} = {}\".format(n, fibo2(n)))\n\n%timeit fibo2(35)", "Autre exemple, ou le gain de temps est moins significatif.", "def factorielle(n):\n if n <= 0: return 0\n elif n == 1: return 1\n else: return n * factorielle(n-1)\n\nprint(\"Test de factorielle() non mémoisée :\")\nfor n in range(10):\n print(\"{}! = {}\".format(n, factorielle(n)))\n\n%timeit factorielle(30)\n\n@memo\ndef factorielle2(n):\n if n <= 0: return 0\n elif n == 1: return 1\n else: return n * factorielle2(n-1)\n\nprint(\"Test de factorielle() mémoisée :\")\nfor n in range(10):\n print(\"{}! 
= {}\".format(n, factorielle2(n)))\n\n%timeit factorielle2(30)", "Conclusion\nEn Python, c'est facile, avec des dictionnaires génériques et une syntaxe facilitée avec un décorateur.\nBonus : ce décorateur est dans la bibliothèque standard dans le module functools !", "from functools import lru_cache # lru = least recently updated\n\n@lru_cache(maxsize=None)\ndef fibo3(n):\n if n <= 1: return 1\n else: return fibo3(n-1) + fibo3(n-2)\n\nprint(\"Test de fibo() mémoisée avec functools.lru_cache (plus rapide) :\")\nfor n in range(10):\n print(\"F_{} = {}\".format(n, fibo3(n)))\n\n%timeit fibo2(35)\n\n%timeit fibo3(35)\n\n%timeit fibo2(70)\n\n%timeit fibo3(70)", "(On obtient presque les mêmes performances que notre implémentation manuelle)\n\nEn OCaml\nJe traite exactement les mêmes exemples.\n\nJ'expérimente l'utilisation de deux kernels Jupyter différents pour afficher des exemples de codes écrits dans deux langages dans le même notebook... Ce n'est pas très propre mais ça marche.\n\nPréliminaires\nQuelques fonctions nécessaires pour ces exemples :", "let print = Format.printf;;\nlet sprintf = Format.sprintf;;\nlet time = Unix.time;;\nlet sleep n = Sys.command (sprintf \"sleep %i\" n);;\n\nlet timeit (repet : int) (f : 'a -> 'a) (x : 'a) () : float =\n let time0 = time () in\n for _ = 1 to repet do\n ignore (f x);\n done;\n let time1 = time () in\n (time1 -. time0 ) /. (float_of_int repet)\n;;", "Exemples de fonctions à mémoïser", "let f1 n =\n ignore (sleep 3);\n n + 2\n;;\n\nlet _ = f1 10;; (* 13, après 3 secondes *)\n\ntimeit 3 f1 10 ();; (* 3 secondes *)", "Et un autre exemple similaire :", "let f2 n =\n ignore (sleep 4);\n n * n\n;;\n\nlet _ = f2 10;; (* 100, après 3 secondes *)\n\ntimeit 3 f2 10 ();; (* 4 secondes *)", "Mémoïsation pour des fonctions d'un argument\nOn utilise le module Hashtbl de la bibliothèque standard.", "let memo f =\n let memoire = Hashtbl.create 128 in (* taille 128 par defaut *)\n let memo_f n =\n if Hashtbl.mem memoire n then (* lecture *)\n Hashtbl.find memoire n\n else begin\n let res = f n in (* calcul *)\n Hashtbl.add memoire n res; (* stockage *)\n res\n end\n in\n memo_f (* nouvelle fonction *)\n;;", "Essais\nDeux exemples :", "let memo_f1 = memo f1 ;;\nlet _ = memo_f1 10 ;; (* 3 secondes *)\nlet _ = memo_f1 10 ;; (* instantanné *)\n\ntimeit 100 memo_f1 20 ();; (* 0.03 secondes *)\n\nlet memo_f2 = memo f2 ;;\nlet _ = memo_f2 10 ;; (* 4 secondes *)\nlet _ = memo_f2 10 ;; (* instantanné *)\n\ntimeit 100 memo_f2 20 ();; (* 0.04 secondes *)", "Ma fonction timeit fait un nombre paramétrique de répétitions sur des entrées non aléatoires, donc le temps moyen observé dépend du nombre de répétitions !", "timeit 10000 memo_f2 50 ();; (* 0.04 secondes *)", "Exemple de la suite de Fibonacci", "let rec fibo = function\n | 0 | 1 -> 1\n | n -> (fibo (n - 1)) + (fibo (n - 2))\n;;\n\nfibo 40;;\n\ntimeit 10 fibo 40 ();; (* 4.2 secondes ! *)", "Et avec la mémoïsation automatique :", "let memo_fibo = memo fibo;;\n\nmemo_fibo 40;;\n\ntimeit 10 memo_fibo 41 ();; (* 0.7 secondes ! *)", "Conclusion\nEn OCaml, ce n'était pas trop dur non plus en utilisant une table de hachage (dictionnaire), disponibles dans le module Hashtbl.\nOn est confronté à une limitation de Caml, à savoir que la la fonction memo_f doit être bien typée pour être renvoyée par memo f donc memo ne peut pas avoir un type générique : il faut écrire un décorateur de fonction pour chaque signature bien connue de la fonction qu'on veut mémoïser..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iannesbitt/ml_bootcamp
Python-Crash-Course/Python Crash Course Exercises .ipynb
mit
[ "Python Crash Course Exercises\nThis is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp\nExercises\nAnswer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.\n What is 7 to the power of 4?", "7**4", "Split this string:\ns = \"Hi there Sam!\"\n\ninto a list.", "s = 'Hi there Sam!'\n\ns.split()", "Given the variables:\nplanet = \"Earth\"\ndiameter = 12742\n\n Use .format() to print the following string: \nThe diameter of Earth is 12742 kilometers.", "planet = \"Earth\"\ndiameter = 12742\n\n'The diameter of {} is {} kilometers.'.format(planet,diameter)", "Given this nested list, use indexing to grab the word \"hello\"", "lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]\n\nlst[3][1][2][0]", "Given this nested dictionary grab the word \"hello\". Be prepared, this will be annoying/tricky", "d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}\n\nd['k1'][3]['tricky'][3]['target'][3]", "What is the main difference between a tuple and a list?", "# Tuple is immutable, list items can be changed", "Create a function that grabs the email website domain from a string in the form: \[email protected]\n\nSo for example, passing \"[email protected]\" would return: domain.com", "def domainGet(inp):\n return inp.split('@')[1]\n\ndomainGet('[email protected]')", "Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.", "def findDog(inp):\n return 'dog' in inp.lower().split()\n\nfindDog('Is there a dog here?')", "Create a function that counts the number of times the word \"dog\" occurs in a string. Again ignore edge cases.", "def countDog(inp):\n dog = 0\n for x in inp.lower().split():\n if x == 'dog':\n dog += 1\n return dog\n\ncountDog('This dog runs faster than the other dog dude!')", "Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:\nseq = ['soup','dog','salad','cat','great']\n\nshould be filtered down to:\n['soup','salad']", "seq = ['soup','dog','salad','cat','great']\n\nlist(filter(lambda item:item[0]=='s',seq))", "Final Problem\nYou are driving a little too fast, and a police officer stops you. Write a function\n to return one of 3 possible results: \"No ticket\", \"Small ticket\", or \"Big Ticket\". \n If your speed is 60 or less, the result is \"No Ticket\". If speed is between 61 \n and 80 inclusive, the result is \"Small Ticket\". If speed is 81 or more, the result is \"Big Ticket\". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all \n cases.", "def caught_speeding(speed, is_birthday):\n if is_birthday:\n speed = speed - 5\n if speed > 80:\n return 'Big Ticket'\n elif speed > 60:\n return 'Small Ticket'\n else:\n return 'No Ticket'\n\ncaught_speeding(81,True)\n\ncaught_speeding(81,False)", "Great job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davidparks21/qso_lya_detection_pipeline
lucid_work/notebooks/feature_visualization.ipynb
mit
[ "Feature Visualization\n\nThis notebook does basic feature visualization of David Parks DLA CNN Model\n\nInstall imports, define and load model", "# Imports\n\nimport numpy as np\nimport tensorflow as tf\nimport scipy.ndimage as nd\nimport time\nimport imageio\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nimport lucid.modelzoo.vision_models as models\nfrom lucid.misc.io import show\nimport lucid.optvis.objectives as objectives\nimport lucid.optvis.param as param\nimport lucid.optvis.render as render\nimport lucid.optvis.transform as transform\n\nfrom lucid.optvis.objectives import wrap_objective, _dot, _dot_cossim\nfrom lucid.optvis.transform import standard_transforms, crop_or_pad_to, pad, jitter, random_scale, random_rotate\n\nfrom lucid.modelzoo.vision_base import Model\n\nclass DLA(Model):\n model_path = '../protobufs/full_model_8_13.pb'\n image_shape = [1, 400]\n image_value_range = [0, 1]\n input_name = 'x'\n\nmodel = DLA()\nmodel.load_graphdef()\n\nLAYERS = { 'conv1': ['Conv2D', 100],\n 'conv1_relu': ['Relu', 100],\n 'pool1': ['MaxPool', 100],\n 'conv2': ['Conv2D_1', 96],\n 'conv2_relu': ['Relu_1', 96],\n 'pool2': ['MaxPool_1', 96],\n 'conv3': ['Conv2D_2', 96],\n 'conv3_relu': ['Relu_2', 96],\n 'pool3': ['MaxPool_2', 96]}", "Simple 3D Visualizations of a neuron\n\nCreate 3D visualizations", "# Specify param.image size to work with our models input, must be a multiple of 400.\nparam_f = lambda: param.image(120, h=120, channels=3)\n\n# std_transforms = [\n# pad(2, mode=\"constant\", constant_value=.5),\n# jitter(2)]\n# transforms = std_transforms + [crop_or_pad_to(*model.image_shape[:2])]\ntransforms = []\n\n# Specify the objective\n\n# neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n)\n# obj = neuron(0)\nchannel = lambda n: objectives.channel(LAYERS['pool1'][0], n)\nobj = channel(0)\n\n# Specify the number of optimzation steps, will output image at each step\nthresholds = (1, 2, 4, 8, 16, 32, 64, 128, 256, 512)\n\n\n# Render the objevtive\nimgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms)\nshow([nd.zoom(img[0], [1,1,1], order=0) for img in imgs])\n\n# test = np.array(imgs)\n# test = test.reshape(400)\n# test = test[0:400:1]\n\n# fig = plt.figure(frameon=False);\n# ax = plt.Axes(fig, [0, 0, 1, 1]);\n# ax.set_axis_off();\n# fig.add_axes(ax);\n# ax.plot(test, 'black');\n# ax.set(xlim=(0, 400));\n# ax.set(ylim=(0,1))", "Simple 1D visualizations", "# Specify param.image size\nparam_f = lambda: param.image(400, h=1, channels=1)\n\ntransforms = []\n\n# Specify the objective\n\n# neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n)\n# obj = neuron(0)\nchannel = lambda n: objectives.channel(LAYERS['pool1'][0], n)\nobj = channel(0)\n\n# Specify the number of optimzation steps,\nthresholds = (128,)\n\n\n# Render the objevtive\nimgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms, verbose=False)\n\n# Display visualization\n\ntest = np.array(imgs)\ntest = test.reshape(400)\ntest = test[0:400:1]\n\nfig = plt.figure(frameon=False);\nax = plt.Axes(fig, [0, 0, 1, 1]);\nax.set_axis_off();\nfig.add_axes(ax);\nax.plot(test, 'black');\nax.set(xlim=(0, 400));\nax.set(ylim=(0,1))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
aschaffn/phys202-2015-work
assignments/assignment03/NumpyEx03.ipynb
mit
[ "Numpy Exercise 3\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va", "Geometric Brownian motion\nHere is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.", "def brownian(maxt, n):\n \"\"\"Return one realization of a Brownian (Wiener) process with n steps and a max time of t.\"\"\"\n t = np.linspace(0.0,maxt,n)\n h = t[1]-t[0]\n Z = np.random.normal(0.0,1.0,n-1)\n dW = np.sqrt(h)*Z\n W = np.zeros(n)\n W[1:] = dW.cumsum()\n return t, W", "Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.", "t,W = brownian(1.0, 1000)\n\nassert isinstance(t, np.ndarray)\nassert isinstance(W, np.ndarray)\nassert t.dtype==np.dtype(float)\nassert W.dtype==np.dtype(float)\nassert len(t)==len(W)==1000", "Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.", "plt.plot(t,W)\nplt.xlabel(\"$t$\")\nplt.ylabel(\"$W(t)$\")\n\nassert True # this is for grading", "Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.", "dW = np.diff(W)\ndW.mean(), dW.std()\n\nassert len(dW)==len(W)-1\nassert dW.dtype==np.dtype(float)", "Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:\n$$\nX(t) = X_0 e^{((\\mu - \\sigma^2/2)t + \\sigma W(t))}\n$$\nUse Numpy ufuncs and no loops in your function.", "def geo_brownian(t, W, X0, mu, sigma):\n \"\"\"Return X(t) for geometric brownian motion with drift mu, volatility sigma.\"\"\"\n exponent = 0.5 * t * (mu - sigma)**2 + sigma * W \n return X0 * np.exp(exponent)\n\nassert True # leave this for grading", "Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\\mu=0.5$ and $\\sigma=0.3$ with the Wiener process you computed above.\nVisualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.", "plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3))\nplt.xlabel(\"$t$\")\nplt.ylabel(\"$X(t)$\")\n\nassert True # leave this for grading" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
OpenWeavers/openanalysis
doc/Langauge/04-Control Structures.ipynb
gpl-3.0
[ "Control Structures\nControl Structures construct a fundamental part of language along with syntax,semantics and core libraries. It is the Control Structures which makes the program more lively. Since they contol the flow of execution of program, they are named Control Structures\nif statement\nUsage:\npython\n if condition:\n statement_1\n statement_2\n ...\n statement_n\n<div class=\"alert alert-info\">\n**Note** \n\n\nIn `Python`, block of code means, the lines with same indentation( i.e., same number of tabs or spaces before it). Here `statement_1` upto `statement_n` are in `if` block. This enhances the code readability\n</div>\nExample:", "response = input(\"Enter an integer : \")\nnum = int(response)\nif num % 2 == 0:\n print(\"{} is an even number\".format(num))", "<div class=\"alert alert-info\">\n\n**Note** Typecasting\n\n\n\n`int(response)` converted the string `response` to integer. If user enters anything other than integer, `ValueError` is raised\n\n</div>\n\nif-else statement\nUsage:\npython\n if condition:\n statement_1\n statement_2\n ...\n statement_n\n else:\n statement_1\n statement_2\n ...\n statement_n\nExample:", "response = input(\"Enter an integer : \")\nnum = int(response)\nif num % 2 == 0:\n print(\"{} is an even number\".format(num))\nelse:\n print(\"{} is an odd number\".format(num))", "Single Line if-else\nThis serves as a replacement for ternery operator avaliable in C\nUsage:\nC ternery\nc\n result = (condition) ? value_true : value_false\nPython Single Line if else \npython\n result = value_true if condition else value_false \nExample:", "response = input(\"Enter an integer : \")\nnum = int(response)\nresult = \"even\" if num % 2 == 0 else \"odd\"\nprint(\"{} is {} number\".format(num,result))", "if-else ladder\nUsage:\npython \n if condition_1:\n statements_1\n elif condition_2:\n statements_2\n elif condition_3:\n statements_3\n ...\n ...\n ...\n elif condition_n:\n statements_n\n else:\n statements_last\n<div class=\"alert alert-info\">\n**Note** \n\n\n`Python` uses `elif` instead of `else if` like in `C`,`Java` or `C#`\n</div>\nExample:", "response = input(\"Enter an integer (+ve or -ve) : \")\nnum = int(response)\nif num > 0:\n print(\"{} is +ve\".format(num))\nelif num == 0:\n print(\"Zero\")\nelse:\n print(\"{} is -ve\".format(num))", "<div class=\"alert alert-info\">\n**Note**: No `switch-case`\n\n\nThere is no `switch-case` structure in Python. 
It can be realized using `if-else ladder` or any other ways\n</div>\n\nwhile loop\nUsage:\npython\n while condition:\n statement_1\n statement_2\n ...\n statement_n\nExample:", "response = input(\"Enter an integer : \")\nnum = int(response)\nprev,current = 0,1\ni = 0\nwhile i < num:\n prev,current = current,prev + current\n print('Fib[{}] = {}'.format(i,current),end=',')\n i += 1", "<div class=\"alert alert-info\">\n**Note**\n\n\n- Multiple assignments in single statement can be done\n-`Python` doesn't support `++` and `--` operators as in `C`\n- There is no `do-while` loop in Python\n\n</div>\n\nfor loop\nUsage:\npython\n for object in collection:\n do_something_with_object\n<div class=\"alert alert-info\">\n**Notes**\n\n\n- `C` like `for(init;test;modify)` is not supported in Python\n- Python provides `range` object for iterating over numbers\n\nUsage of `range` object:\n\n```python\n x = range(start = 0,stop,step = 1)\n```\n\nnow `x` can be iterated, and it generates numbers including `start` excluding `stop` differing in the steps of `step`\n</div>\n\nExample:", "for i in range(10):\n print(i, end=',')\n\nfor i in range(2,10,3):\n print(i, end=',')\n\nresponse = input(\"Enter an integer : \")\nnum = int(response)\nprev,current = 0,1\nfor i in range(num):\n prev,current = current,prev + current\n print('Fib[{}] = {}'.format(i,current),end=',')", "<div class=\"alert alert-info\">\n**Note**\n\n\n\nLoop control statements `break` and `continue` work in the same way as they work in `C`\n</div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
grfiv/MNIST
svm.scikit/svm_poly_pca.scikit_benchmark.ipynb
mit
[ "MNIST digit recognition using SVC and PCA with Polynomial kernel\n> Using optimal parameters, fit to BOTH original and deskewed data", "from __future__ import division\nimport os, time, math\nimport cPickle as pickle\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy\nimport csv\n\nfrom operator import itemgetter\nfrom tabulate import tabulate\n\nfrom print_imgs import print_imgs # my own function to print a grid of square images\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.utils import shuffle\nfrom sklearn.decomposition import PCA\n\nfrom sklearn.svm import SVC\n\nfrom sklearn.cross_validation import StratifiedKFold\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.grid_search import RandomizedSearchCV\n\nfrom sklearn.metrics import classification_report, confusion_matrix\n\nnp.random.seed(seed=1009)\n\n%matplotlib inline\n\n#%qtconsole ", "Where's the data?", "file_path = '../data/'\n\ntrain_img_deskewed_filename = 'train-images_deskewed.csv'\ntrain_img_original_filename = 'train-images.csv'\n\ntest_img_deskewed_filename = 't10k-images_deskewed.csv'\ntest_img_original_filename = 't10k-images.csv'\n \ntrain_label_filename = 'train-labels.csv'\ntest_label_filename = 't10k-labels.csv'", "How much of the data will we use?", "portion = 1.0 # set to less than 1.0 for testing; set to 1.0 to use the entire dataset", "Read the training images and labels, both original and deskewed", "# read both trainX files\nwith open(file_path + train_img_original_filename,'r') as f:\n data_iter = csv.reader(f, delimiter = ',')\n data = [data for data in data_iter]\ntrainXo = np.ascontiguousarray(data, dtype = np.float64) \n\nwith open(file_path + train_img_deskewed_filename,'r') as f:\n data_iter = csv.reader(f, delimiter = ',')\n data = [data for data in data_iter]\ntrainXd = np.ascontiguousarray(data, dtype = np.float64)\n\n# vertically concatenate the two files\ntrainX = np.vstack((trainXo, trainXd))\n\ntrainXo = None\ntrainXd = None\n\n# read trainY twice and vertically concatenate\nwith open(file_path + train_label_filename,'r') as f:\n data_iter = csv.reader(f, delimiter = ',')\n data = [data for data in data_iter]\ntrainYo = np.ascontiguousarray(data, dtype = np.int8) \ntrainYd = np.ascontiguousarray(data, dtype = np.int8)\n\ntrainY = np.vstack((trainYo, trainYd)).ravel()\n\ntrainYo = None\ntrainYd = None\ndata = None\n\n# shuffle trainX & trainY\ntrainX, trainY = shuffle(trainX, trainY, random_state=0)\n\n# use less data if specified\nif portion < 1.0:\n trainX = trainX[:portion*trainX.shape[0]]\n trainY = trainY[:portion*trainY.shape[0]]\n\n \nprint(\"trainX shape: {0}\".format(trainX.shape))\nprint(\"trainY shape: {0}\\n\".format(trainY.shape))\n\nprint(trainX.flags)", "Read the DESKEWED test images and labels", "# read testX\nwith open(file_path + test_img_deskewed_filename,'r') as f:\n data_iter = csv.reader(f, delimiter = ',')\n data = [data for data in data_iter]\ntestX = np.ascontiguousarray(data, dtype = np.float64) \n\n# read testY\nwith open(file_path + test_label_filename,'r') as f:\n data_iter = csv.reader(f, delimiter = ',')\n data = [data for data in data_iter]\ntestY = np.ascontiguousarray(data, dtype = np.int8)\n\n# shuffle testX, testY\ntestX, testY = shuffle(testX, testY, random_state=0)\n\n# use a smaller dataset if specified\nif portion < 1.0:\n testX = testX[:portion*testX.shape[0]]\n testY = testY[:portion*testY.shape[0]]\n\nprint(\"testX shape: {0}\".format(testX.shape))\nprint(\"testY shape: 
{0}\".format(testY.shape))", "Use the smaller, fewer images for testing\nPrint a sample", "print_imgs(images = trainX, \n actual_labels = trainY, \n predicted_labels = trainY,\n starting_index = np.random.randint(0, high=trainY.shape[0]-36, size=1)[0],\n size = 6)", "PCA dimensionality reduction", "t0 = time.time()\n\npca = PCA(n_components=0.85, whiten=True)\n\ntrainX = pca.fit_transform(trainX)\ntestX = pca.transform(testX)\n\nprint(\"trainX shape: {0}\".format(trainX.shape))\nprint(\"trainY shape: {0}\\n\".format(trainY.shape))\nprint(\"testX shape: {0}\".format(testX.shape))\nprint(\"testY shape: {0}\".format(testY.shape))\n\nprint(\"\\ntime in minutes {0:.2f}\".format((time.time()-t0)/60))", "SVC Parameter Settings", "# default parameters for SVC\n# ==========================\ndefault_svc_params = {}\n\ndefault_svc_params['C'] = 1.0 # penalty\ndefault_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C\n # set to 'auto' for unbalanced classes\ndefault_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid'\n\ndefault_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable\n # use of 'sigmoid' is discouraged\ndefault_svc_params['shrinking'] = True # Whether to use the shrinking heuristic. \ndefault_svc_params['probability'] = False # Whether to enable probability estimates. \ndefault_svc_params['tol'] = 0.001 # Tolerance for stopping criterion. \ndefault_svc_params['cache_size'] = 200 # size of the kernel cache (in MB).\n\ndefault_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit. \n \ndefault_svc_params['verbose'] = False \ndefault_svc_params['degree'] = 3 # 'poly' only\ndefault_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only\n\n\n# set the parameters for the classifier\n# =====================================\nsvc_params = dict(default_svc_params)\n\nsvc_params['cache_size'] = 2000\nsvc_params['probability'] = True\nsvc_params['kernel'] = 'poly'\nsvc_params['C'] = 1.0\nsvc_params['gamma'] = 0.1112\nsvc_params['degree'] = 3\nsvc_params['coef0'] = 1\n\n\n# create the classifier itself\n# ============================\nsvc_clf = SVC(**svc_params)", "Fit the training data", "t0 = time.time()\n\nsvc_clf.fit(trainX, trainY)\n\nprint(\"\\ntime in minutes {0:.2f}\".format((time.time()-t0)/60))", "Predict the test set and analyze the result", "target_names = [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\n\npredicted_values = svc_clf.predict(testX)\ny_true, y_pred = testY, predicted_values\n\nprint(classification_report(y_true, y_pred, target_names=target_names))\n\ndef plot_confusion_matrix(cm, \n target_names,\n title='Proportional Confusion matrix', \n cmap=plt.cm.Paired): \n \"\"\"\n given a confusion matrix (cm), make a nice plot\n see the skikit-learn documentation for the original done for the iris dataset\n \"\"\"\n plt.figure(figsize=(8, 6))\n plt.imshow((cm/cm.sum(axis=1)), interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(target_names))\n plt.xticks(tick_marks, target_names, rotation=45)\n plt.yticks(tick_marks, target_names)\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n \ncm = confusion_matrix(y_true, y_pred) \n\nprint(cm)\nmodel_accuracy = sum(cm.diagonal())/len(testY)\nmodel_misclass = 1 - model_accuracy\nprint(\"\\nModel accuracy: {0}, model misclass rate: {1}\".format(model_accuracy, model_misclass))\n\nplot_confusion_matrix(cm, 
target_names)", "Learning Curves\nsee http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html\n\nThe score is the model accuracy\n\nThe red line shows how well the model fits the data it was trained on: \n\na high score indicates low bias ... the model does fit the training data\nit's not unusual for the red line to start at 1.00 and decline slightly\n\n\na low score indicates the model does not fit the training data ... more predictor variables are ususally indicated, or a different model\n\n\n\nThe green line shows how well the model predicts the test data: if it's rising then it means more data to train on will produce better predictions", "t0 = time.time()\n\nfrom sklearn.learning_curve import learning_curve\nfrom sklearn.cross_validation import ShuffleSplit\n\n\ndef plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,\n n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):\n \"\"\"\n Generate a simple plot of the test and training learning curve.\n\n Parameters\n ----------\n estimator : object type that implements the \"fit\" and \"predict\" methods\n An object of that type which is cloned for each validation.\n\n title : string\n Title for the chart.\n\n X : array-like, shape (n_samples, n_features)\n Training vector, where n_samples is the number of samples and\n n_features is the number of features.\n\n y : array-like, shape (n_samples) or (n_samples, n_features), optional\n Target relative to X for classification or regression;\n None for unsupervised learning.\n\n ylim : tuple, shape (ymin, ymax), optional\n Defines minimum and maximum yvalues plotted.\n\n cv : integer, cross-validation generator, optional\n If an integer is passed, it is the number of folds (defaults to 3).\n Specific cross-validation objects can be passed, see\n sklearn.cross_validation module for the list of possible objects\n\n n_jobs : integer, optional\n Number of jobs to run in parallel (default 1).\n \"\"\"\n plt.figure(figsize=(8, 6))\n plt.title(title)\n if ylim is not None:\n plt.ylim(*ylim)\n plt.xlabel(\"Training examples\")\n plt.ylabel(\"Score\")\n \n train_sizes, train_scores, test_scores = learning_curve(\n estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)\n \n train_scores_mean = np.mean(train_scores, axis=1)\n train_scores_std = np.std(train_scores, axis=1)\n test_scores_mean = np.mean(test_scores, axis=1)\n test_scores_std = np.std(test_scores, axis=1)\n \n plt.grid()\n\n plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\n train_scores_mean + train_scores_std, alpha=0.1,\n color=\"r\")\n plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\n test_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\n plt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\n label=\"Training score\")\n plt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\n label=\"Cross-validation score\")\n plt.tight_layout()\n\n plt.legend(loc=\"best\")\n return plt\n\nC_gamma = \"C=\"+str(np.round(svc_params['C'],4))+\", gamma=\"+str(np.round(svc_params['gamma'],6))\ntitle = \"Learning Curves (SVM, Poly, \" + C_gamma + \")\"\n\nplot_learning_curve(estimator = svc_clf, \n title = title, \n X = trainX, \n y = trainY, \n ylim = (0.85, 1.01), \n cv = ShuffleSplit(n = trainX.shape[0], \n n_iter = 5, \n test_size = 0.2, \n random_state=0), \n n_jobs = 8)\n\nplt.show()\n\nprint(\"\\ntime in minutes {0:.2f}\".format((time.time()-t0)/60))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Custom training and batch prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.\nDataset\nThe dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.\nObjective\nIn this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.\nThe steps performed include:\n\nCreate a Vertex AI custom job for training a model.\nTrain a TensorFlow model.\nMake a batch prediction.\nCleanup resources.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest (preview) version of Vertex SDK for Python.", "import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n! pip install {USER_FLAG} --upgrade google-cloud-aiplatform", "Install the latest GA version of google-cloud-storage library as well.", "! pip install {USER_FLAG} --upgrade google-cloud-storage", "Install the pillow library for loading images.", "! pip install {USER_FLAG} --upgrade pillow", "Install the numpy library for manipulation of image data.", "! 
pip install {USER_FLAG} --upgrade numpy", "Restart the kernel\nOnce you've installed everything, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nSelect a GPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime --> Change runtime type > GPU\"\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI API and Compute Engine API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\nif not os.getenv(\"IS_TESTING\"):\n # Get your Google Cloud project ID from gcloud\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "import sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not IS_GOOGLE_CLOUD_NOTEBOOK:\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport Vertex SDK for Python\nImport the Vertex SDK for Python into your Python environment and initialize it.", "import os\nimport sys\n\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import gapic as aip\n\naiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)", "Set hardware accelerators\nYou can set hardware accelerators for both training and prediction.\nSet the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nSee the locations where accelerators are available.\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. 
If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.", "TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)\n\nDEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)", "Set pre-built containers\nVertex AI provides pre-built containers to run training and prediction.\nFor the latest list, see Pre-built containers for training and Pre-built containers for prediction", "TRAIN_VERSION = \"tf-gpu.2-1\"\nDEPLOY_VERSION = \"tf2-gpu.2-1\"\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)", "Set machine types\nNext, set the machine types to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Tutorial\nNow you are ready to start creating your own custom-trained model with CIFAR10.\nTrain a model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nDefine the command args for the training script\nPrepare the command-line arguments to pass to your training script.\n- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY\" : The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.", "JOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NGPU or TRAIN_NGPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nCMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n]", "Training script\nIn the next cell, you will write the contents of the training script, task.py. In summary:\n\nGet the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. 
This variable is set by the training service.\nLoads CIFAR10 dataset from TF Datasets (tfds).\nBuilds a model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps\nSaves the trained model (save(MODEL_DIR)) to the specified model directory.", "%%writefile task.py\n# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n # Scaling CIFAR10 data from (0, 255] to (0., 1.]\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\n\n# Build the Keras model\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\n# Train the model\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\nMODEL_DIR = os.getenv(\"AIP_MODEL_DIR\")\n\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, 
epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(MODEL_DIR)", "Train the model\nDefine your custom training job on Vertex AI.\nUse the CustomTrainingJob class to define the job, which takes the following parameters:\n\ndisplay_name: The user-defined name of this training pipeline.\nscript_path: The local path to the training script.\ncontainer_uri: The URI of the training container image.\nrequirements: The list of Python package dependencies of the script.\nmodel_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.\n\nUse the run function to start training, which takes the following parameters:\n\nargs: The command line arguments to be passed to the Python script.\nreplica_count: The number of worker replicas.\nmodel_display_name: The display name of the Model if the script produces a managed Model.\nmachine_type: The type of machine to use for training.\naccelerator_type: The hardware accelerator type.\naccelerator_count: The number of accelerators to attach to a worker replica.\n\nThe run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.", "job = aiplatform.CustomTrainingJob(\n display_name=JOB_NAME,\n script_path=\"task.py\",\n container_uri=TRAIN_IMAGE,\n requirements=[\"tensorflow_datasets==1.3.0\"],\n model_serving_container_image_uri=DEPLOY_IMAGE,\n)\n\nMODEL_DISPLAY_NAME = \"cifar10-\" + TIMESTAMP\n\n# Start the training\nif TRAIN_GPU:\n model = job.run(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_type=TRAIN_GPU.name,\n accelerator_count=TRAIN_NGPU,\n )\nelse:\n model = job.run(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_count=0,\n )", "Make a batch prediction request\nSend a batch prediction request to your deployed model.\nGet test data\nDownload images from the CIFAR dataset and preprocess them.\nDownload the test images\nDownload the provided set of images from the CIFAR dataset:", "# Download the images\n! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .", "Preprocess the images\nBefore you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.\nx_test:\nNormalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.\ny_test:\nYou can extract the labels from the image filenames. 
Each image's filename format is \"image_{LABEL}_{IMAGE_NUMBER}.jpg\"", "import numpy as np\nfrom PIL import Image\n\n# Load image data\nIMAGE_DIRECTORY = \"cifar_test_images\"\n\nimage_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(\".jpg\")]\n\n# Decode JPEG images into numpy arrays\nimage_data = [\n np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files\n]\n\n# Scale and convert to expected format\nx_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]\n\n# Extract labels from image name\ny_test = [int(file.split(\"_\")[1]) for file in image_files]", "Prepare data for batch prediction\nBefore you can run the data through batch prediction, you need to save the data into one of a few possible formats.\nFor this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this:\n\nIn a file, write each instance as JSON on its own line.\nUpload this file to Cloud Storage.\n\nFor more details on batch prediction input formats: https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input", "import json\n\nBATCH_PREDICTION_INSTANCES_FILE = \"batch_prediction_instances.jsonl\"\n\nBATCH_PREDICTION_GCS_SOURCE = (\n BUCKET_NAME + \"/batch_prediction_instances/\" + BATCH_PREDICTION_INSTANCES_FILE\n)\n\n# Write instances at JSONL\nwith open(BATCH_PREDICTION_INSTANCES_FILE, \"w\") as f:\n for x in x_test:\n f.write(json.dumps(x) + \"\\n\")\n\n# Upload to Cloud Storage bucket\n! gsutil cp $BATCH_PREDICTION_INSTANCES_FILE $BATCH_PREDICTION_GCS_SOURCE\n\nprint(\"Uploaded instances to: \", BATCH_PREDICTION_GCS_SOURCE)", "Send the prediction request\nTo make a batch prediction request, call the model object's batch_predict method with the following parameters: \n- instances_format: The format of the batch prediction request file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- prediction_format: The format of the batch prediction response file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- job_display_name: The human readable name for the prediction job.\n - gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\n- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.\n- model_parameters: Additional filtering parameters for serving prediction results.\n- machine_type: The type of machine to use for training.\n- accelerator_type: The hardware accelerator type.\n- accelerator_count: The number of accelerators to attach to a worker replica.\n- starting_replica_count: The number of compute instances to initially provision.\n- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.\nCompute instance scaling\nYou can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.\nIf you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. 
Refer to the pricing page to understand the costs of autoscaling with multiple nodes.", "MIN_NODES = 1\nMAX_NODES = 1\n\n# The name of the job\nBATCH_PREDICTION_JOB_NAME = \"cifar10_batch-\" + TIMESTAMP\n\n# Folder in the bucket to write results to\nDESTINATION_FOLDER = \"batch_prediction_results\"\n\n# The Cloud Storage bucket to upload results to\nBATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + \"/\" + DESTINATION_FOLDER\n\n# Make SDK batch_predict method call\nbatch_prediction_job = model.batch_predict(\n instances_format=\"jsonl\",\n predictions_format=\"jsonl\",\n job_display_name=BATCH_PREDICTION_JOB_NAME,\n gcs_source=BATCH_PREDICTION_GCS_SOURCE,\n gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,\n model_parameters=None,\n machine_type=DEPLOY_COMPUTE,\n accelerator_type=DEPLOY_GPU,\n accelerator_count=DEPLOY_NGPU,\n starting_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n sync=True,\n)", "Retrieve batch prediction results\nWhen the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.\nLet's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.", "RESULTS_DIRECTORY = \"prediction_results\"\nRESULTS_DIRECTORY_FULL = RESULTS_DIRECTORY + \"/\" + DESTINATION_FOLDER\n\n# Create missing directories\nos.makedirs(RESULTS_DIRECTORY, exist_ok=True)\n\n# Get the Cloud Storage paths for each result\n! gsutil -m cp -r $BATCH_PREDICTION_GCS_DEST_PREFIX $RESULTS_DIRECTORY\n\n# Get most recently modified directory\nlatest_directory = max(\n [\n os.path.join(RESULTS_DIRECTORY_FULL, d)\n for d in os.listdir(RESULTS_DIRECTORY_FULL)\n ],\n key=os.path.getmtime,\n)\n\n# Get downloaded results in directory\nresults_files = []\nfor dirpath, subdirs, files in os.walk(latest_directory):\n for file in files:\n if file.startswith(\"prediction.results\"):\n results_files.append(os.path.join(dirpath, file))\n\n# Consolidate all the results into a list\nresults = []\nfor results_file in results_files:\n # Download each result\n with open(results_file, \"r\") as file:\n results.extend([json.loads(line) for line in file.readlines()])", "Evaluate results\nYou can then run a quick evaluation on the prediction results:\n\nnp.argmax: Convert each list of confidence levels to a label\nCompare the predicted labels to the actual labels\nCalculate accuracy as correct/total\n\nTo improve the accuracy, try training for a higher number of epochs.", "y_predicted = [np.argmax(result[\"prediction\"]) for result in results]\n\ncorrect = sum(y_predicted == np.array(y_test))\naccuracy = len(y_predicted)\nprint(\n f\"Correct predictions = {correct}, Total predictions = {accuracy}, Accuracy = {correct/accuracy}\"\n)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nTraining Job\nModel\nCloud Storage Bucket", "delete_training_job = True\ndelete_model = True\n\n# Warning: Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\n# Delete the training job\njob.delete()\n\n# Delete the 
model\nmodel.delete()\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil -m rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
elenduuche/deep-learning
intro-to-rnns/.ipynb_checkpoints/Anna KaRNNa-checkpoint.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.", "with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)", "Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.", "text[:100]", "And we can see the characters encoded as integers.", "chars[:100]", "Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.", "np.max(chars)+1", "Making training and validation batches\nNow I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. 
This will keep 90% of the batches in the training set, the other 10% in the validation set.", "def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the first split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y", "Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.", "train_x, train_y, val_x, val_y = split_data(chars, 10, 50)\n\ntrain_x.shape", "Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:", "train_x[:,:50]", "I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. 
In this way we can feed batches to the network and the cell states will continue through on each batch.", "def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]", "Building the model\nBelow is a function where I build the graph for the network.", "def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n \n # Keep probability placeholder for drop out layers\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # One-hot encoding the input and target characters\n x_one_hot = tf.one_hot(inputs, num_classes)\n y_one_hot = tf.one_hot(targets, num_classes)\n\n ### Build the RNN layers\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n ### Run the data through the RNN layers\n # This makes a list where each element is on step in the sequence\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n \n # Run each sequence step through the RNN and collect the outputs\n outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n final_state = state\n \n # Reshape output so it's a bunch of rows, one output row for each step for each batch\n seq_output = tf.concat(outputs, axis=1)\n output = tf.reshape(seq_output, [-1, lstm_size])\n \n # Now connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))\n softmax_b = tf.Variable(tf.zeros(num_classes))\n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and batch\n logits = tf.matmul(output, softmax_w) + softmax_b\n \n # Use softmax to get the probabilities for predicted characters\n preds = tf.nn.softmax(logits, name='predictions')\n \n # Reshape the targets to match the logits\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n cost = tf.reduce_mean(loss)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes\n # NOTE: I'm using a namedtuple here because I think they are cool\n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for 
each in export_nodes])\n \n return graph", "Hyperparameters\nHere I'm defining the hyperparameters for the network. \n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\nThe number of parameters in your model. This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the size of your training and validation splits are also parameters. 
Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.", "batch_size = 100\nnum_steps = 100 \nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001\nkeep_prob = 0.5", "Training\nTime for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt", "epochs = 20\n# Save every N iterations\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/______.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: keep_prob,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/i{}_l{}_v{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))", "Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables", "tf.train.get_checkpoint_state('checkpoints')", "Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. 
To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)", "Here, pass in the path to a checkpoint and sample from the network.", "checkpoint = \"checkpoints/____.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jlandmann/oggm
docs/notebooks/flowline_model.ipynb
gpl-3.0
[ "<img src=\"https://raw.githubusercontent.com/OGGM/oggm/master/docs/_static/logo.png\" width=\"40%\" align=\"left\">\nGetting started with flowline models: idealized experiments\nIn this notebook we are going to explore the basic functionalities of OGGM flowline model(s). For this purpose we are going to used simple, \"idealized\" glaciers, run with simple linear mass-balance profiles.", "# The commands below are just importing the necessary modules and functions\n# Plot defaults\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (9, 6) # Default plot size\n# Scientific packages\nimport numpy as np\n# Constants\nfrom oggm.cfg import SEC_IN_YEAR, A\n# OGGM models\nfrom oggm.core.models.massbalance import LinearMassBalanceModel\nfrom oggm.core.models.flowline import FluxBasedModel\nfrom oggm.core.models.flowline import VerticalWallFlowline, TrapezoidalFlowline, ParabolicFlowline\n# This is to set a default parameter to a function. Just ignore it for now\nfrom functools import partial\nFlowlineModel = partial(FluxBasedModel, inplace=False)", "Basics\nSet-up a simple run with a constant linear bed. We will first define the bed:\nGlacier bed", "# This is the bed rock, linearily decreasing from 3000m altitude to 1000m, in 200 steps\nnx = 200\nbed_h = np.linspace(3400, 1400, nx)\n# At the begining, there is no glacier so our glacier surface is at the bed altitude\nsurface_h = bed_h\n# Let's set the model grid spacing to 100m (needed later)\nmap_dx = 100\n\n# plot this\nplt.plot(bed_h, color='k', label='Bedrock')\nplt.plot(surface_h, label='Initial glacier')\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');", "Now we have to decide how wide our glacier is, and what it the shape of its bed. For a start, we will use a \"u-shaped\" bed (see the documentation), with a constant width of 300m:", "# The units of widths is in \"grid points\", i.e. 3 grid points = 300 m in our case\nwidths = np.zeros(nx) + 3.\n# Define our bed\ninit_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx)", "The init_flowline variable now contains all deometrical information needed by the model. It can give access to some attributes, which are quite useless for a non-existing glacier:", "print('Glacier length:', init_flowline.length_m)\nprint('Glacier area:', init_flowline.area_km2)\nprint('Glacier volume:', init_flowline.volume_km3)", "Mass balance\nThen we will need a mass balance model. In our case this will be a simple linear mass-balance, defined by the equilibrium line altitude and an altitude gradient (in [mm m$^{-1}$]):", "# ELA at 3000m a.s.l., gradient 4 mm m-1\nmb_model = LinearMassBalanceModel(3000, grad=4)", "The mass-balance model gives you the mass-balance for any altitude you want, in units [m s$^{-1}$]. 
Let us compute the annual mass-balance along the glacier profile:", "annual_mb = mb_model.get_mb(surface_h) * SEC_IN_YEAR\n\n# Plot it\nplt.plot(annual_mb, bed_h, color='C2', label='Mass-balance')\nplt.xlabel('Annual mass-balance (m yr-1)')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');", "Model run\nNow that we have all the ingredients to run the model, we just have to initialize it:", "# The model requires the initial glacier bed, a mass-balance model, and an initial time (the year y0)\nmodel = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.)", "We can now run the model for 150 years and see how the output looks like:", "model.run_until(150)\n# Plot the initial conditions first:\nplt.plot(init_flowline.bed_h, color='k', label='Bedrock')\nplt.plot(init_flowline.surface_h, label='Initial glacier')\n# The get the modelled flowline (model.fls[-1]) and plot it's new surface\nplt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format(model.yr))\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');", "Let's print out a few infos about our glacier:", "print('Year:', model.yr)\nprint('Glacier length (m):', model.length_m)\nprint('Glacier area (km2):', model.area_km2)\nprint('Glacier volume (km3):', model.volume_km3)", "Note that the model time is now 150. Runing the model with the sane input will do nothing:", "model.run_until(150)\nprint('Year:', model.yr)\nprint('Glacier length (m):', model.length_m)", "If we want to compute longer, we have to set the desired date:", "model.run_until(500)\n# Plot the initial conditions first:\nplt.plot(init_flowline.bed_h, color='k', label='Bedrock')\nplt.plot(init_flowline.surface_h, label='Initial glacier')\n# The get the modelled flowline (model.fls[-1]) and plot it's new surface\nplt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format(model.yr))\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');\n\nprint('Year:', model.yr)\nprint('Glacier length (m):', model.length_m)\nprint('Glacier area (km2):', model.area_km2)\nprint('Glacier volume (km3):', model.volume_km3)", "Note that in order to store some intermediate steps of the evolution of the glacier, it might be useful to make a loop:", "# Reinitialize the model\nmodel = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.)\n# Year 0 to 600 in 6 years step\nyrs = np.arange(0, 600, 5)\n# Array to fill with data\nnsteps = len(yrs)\nlength = np.zeros(nsteps)\nvol = np.zeros(nsteps)\n# Loop\nfor i, yr in enumerate(yrs):\n model.run_until(yr)\n length[i] = model.length_m\n vol[i] = model.volume_km3\n# I store the final results for later use\nsimple_glacier_h = model.fls[-1].surface_h", "We can now plot the evolution of the glacier length and volume with time:", "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))\nax1.plot(yrs, length);\nax1.set_xlabel('Years')\nax1.set_ylabel('Length (m)');\nax2.plot(yrs, vol);\nax2.set_xlabel('Years')\nax2.set_ylabel('Volume (km3)');", "A first experiment\nOk, now we have seen the basics. Will will now define a simple experiment, in which we will now make the glacier wider at the top (in the accumulation area). 
This is a common situation for valley glaciers.", "# We define the widths as before:\nwidths = np.zeros(nx) + 3.\n# But we now make our glacier 600 me wide fir the first grid points:\nwidths[0:15] = 6\n# Define our new bed\nwider_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx)", "We will now run our model with the new inital conditions, and store the output in a new variable for comparison:", "# Reinitialize the model with the new input\nmodel = FlowlineModel(wider_flowline, mb_model=mb_model, y0=0.)\n# Array to fill with data\nnsteps = len(yrs)\nlength_w = np.zeros(nsteps)\nvol_w = np.zeros(nsteps)\n# Loop\nfor i, yr in enumerate(yrs):\n model.run_until(yr)\n length_w[i] = model.length_m\n vol_w[i] = model.volume_km3\n# I store the final results for later use\nwider_glacier_h = model.fls[-1].surface_h", "Compare the results:", "# Plot the initial conditions first:\nplt.plot(init_flowline.bed_h, color='k', label='Bedrock')\n# Then the final result\nplt.plot(simple_glacier_h, label='Simple glacier')\nplt.plot(wider_glacier_h, label='Wider glacier')\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))\nax1.plot(yrs, length, label='Simple glacier');\nax1.plot(yrs, length_w, label='Wider glacier');\nax1.legend(loc='best')\nax1.set_xlabel('Years')\nax1.set_ylabel('Length (m)');\nax2.plot(yrs, vol, label='Simple glacier');\nax2.plot(yrs, vol_w, label='Wider glacier');\nax2.legend(loc='best')\nax2.set_xlabel('Years')\nax2.set_ylabel('Volume (km3)');", "Ice flow parameters\nThe ice flow parameters are going to have a strong influence on the behavior of the glacier. The default in OGGM is to set Glen's creep parameter A to the \"standard value\" defined by Cuffey and Patterson:", "# Default in OGGM\nprint(A)", "We can change this and see what happens:", "# Reinitialize the model with the new parameter\nmodel = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A / 10)\n# Array to fill with data\nnsteps = len(yrs)\nlength_s1 = np.zeros(nsteps)\nvol_s1 = np.zeros(nsteps)\n# Loop\nfor i, yr in enumerate(yrs):\n model.run_until(yr)\n length_s1[i] = model.length_m\n vol_s1[i] = model.volume_km3\n# I store the final results for later use\nstiffer_glacier_h = model.fls[-1].surface_h\n\n# And again\nmodel = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A * 10)\n# Array to fill with data\nnsteps = len(yrs)\nlength_s2 = np.zeros(nsteps)\nvol_s2 = np.zeros(nsteps)\n# Loop\nfor i, yr in enumerate(yrs):\n model.run_until(yr)\n length_s2[i] = model.length_m\n vol_s2[i] = model.volume_km3\n# I store the final results for later use\nsofter_glacier_h = model.fls[-1].surface_h\n\n# Plot the initial conditions first:\nplt.plot(init_flowline.bed_h, color='k', label='Bedrock')\n# Then the final result\nplt.plot(simple_glacier_h, label='Default A')\nplt.plot(stiffer_glacier_h, label='A / 10')\nplt.plot(softer_glacier_h, label='A * 10')\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');", "In his seminal paper, Oerlemans also uses a so-called \"sliding parameter\", representing basal sliding. 
In OGGM this parameter is set to 0 per default, but it can be modified at whish:", "# Change sliding to use Oerlemans value:\nmodel = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A, fs=5.7e-20)\n# Array to fill with data\nnsteps = len(yrs)\nlength_s3 = np.zeros(nsteps)\nvol_s3 = np.zeros(nsteps)\n# Loop\nfor i, yr in enumerate(yrs):\n model.run_until(yr)\n length_s3[i] = model.length_m\n vol_s3[i] = model.volume_km3\n# I store the final results for later use\nsliding_glacier_h = model.fls[-1].surface_h\n\n# Plot the initial conditions first:\nplt.plot(init_flowline.bed_h, color='k', label='Bedrock')\n# Then the final result\nplt.plot(simple_glacier_h, label='Default')\nplt.plot(sliding_glacier_h, label='Sliding glacier')\nplt.xlabel('Grid points')\nplt.ylabel('Altitude (m)')\nplt.legend(loc='best');", "More experiments for self-study\nThese simple models of glacier evolution are extremely useful tools to learn about the behavior of glaciers. Here is a non-exhaustive list of questions that one could address with this simple model:\n- study the model code and try to find out how the equations are solved numerically\n- more maritime conditions lead to steeper mass balance gradients. Vary the mass balance gradient and examine the response time and equilibrium glacier profiles for various values of the mass balance gradient.\n- apply a periodically varying mass balance forcing. How long does the period T need to be chosen to let the glacier get close to equilibrium with the prescribed climate? Make a plot of the ELA versus glacier length or volume. Can you explain the hysteresis?\n- study a glacier with an overdeepening in the bed profile. Find equilibrium lengths and profiles by stepwise or slow linear changes in the mass balance forcing. Can you find two different equilibrium glaciers which are subject to the same mass balance forcing? Can you explain what is going on?\n- A surging glacier can be represented by the model by periodically (typically every 100 years for a period of 10 years) increasing the sliding factor (by a factor of 10 or so). Study the effect of a varying sliding parameter on the glacier geometry. Compare the mean equilibrium length and volume of surging and non-surging glaciers under the same climatic conditions.\n- Apply a random white-noise perturbation to the mass balance profile. What is the relation between the standard deviation of the noise and the variability of volume and length for different glaciers (e.g. steep and flat glaciers)?\n- ..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
judithfan/graphcomm
experiments/recog/preprocess_sketches.ipynb
mit
[ "%matplotlib inline\nimport os\nimport numpy as np\nfrom PIL import Image\nimport matplotlib\nfrom matplotlib import pyplot,pylab\nplt = pyplot\nimport scipy\nfrom __future__ import division\nimport seaborn as sns\nsns.set_style('white')\nimport string\nimport pandas as pd", "purpose\n\nupload sketches to S3\nbuild stimulus dictionary and write to database\n\nupload sketches to s3", "upload_dir = './sketch'\n\nimport boto\nrunThis = 0\nif runThis:\n conn = boto.connect_s3()\n b = conn.create_bucket('sketchpad_basic_pilot2_sketches')\n all_files = [i for i in os.listdir(upload_dir) if i != '.DS_Store']\n for a in all_files:\n print a\n k = b.new_key(a)\n k.set_contents_from_filename(os.path.join(upload_dir,a))\n k.set_acl('public-read')", "build stimulus dictionary", "## read in experimental metadata file\npath_to_metadata = '../../analysis/sketchpad_basic_pilot2_group_data.csv'\nmeta = pd.read_csv(path_to_metadata)\n\n## clean up and add filename column\nmeta2 = meta.drop(['svg','png','Unnamed: 0'],axis=1)\nfilename = []\ngames = []\nfor i,row in meta2.iterrows():\n filename.append('gameID_{}_trial_{}.png'.format(row['gameID'],row['trialNum']))\n games.append([])\nmeta2['filename'] = filename\nmeta2['games'] = games\n\n## write out metadata to json file\nstimdict = meta2.to_dict(orient='records')\nimport json\nwith open('sketchpad_basic_recog_meta.js', 'w') as fout:\n json.dump(stimdict, fout)\n\nJ = json.loads(open('sketchpad_basic_recog_meta.js',mode='ru').read())\nassert len(J)==len(meta2)\n\n'{} unique games.'.format(len(np.unique(meta2.gameID.values)))", "upload stim dictionary to mongo (db = 'stimuli', collection='sketchpad_basic_recog')", "# set vars \nauth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user\npswd = auth.values[0][0]\nuser = 'sketchloop'\nhost = 'rxdhawkins.me' ## cocolab ip address\n\n# have to fix this to be able to analyze from local\nimport pymongo as pm\nconn = pm.MongoClient('mongodb://sketchloop:' + pswd + '@127.0.0.1')\n\ndb = conn['stimuli']\ncoll = db['sketchpad_basic_pilot2_sketches']\n\n## actually add data now to the database\nfor (i,j) in enumerate(J):\n if i%100==0:\n print ('%d of %d' % (i,len(J)))\n coll.insert_one(j)\n\n## How many sketches have been retrieved at least once? 
equivalent to: coll.find({'numGames':{'$exists':1}}).count()\ncoll.find({'numGames':{'$gte':0}}).count()\n\n## stashed away handy querying things\n\n# coll.find({'numGames':{'$gte':1}}).sort('trialNum')[0]\n\n# from bson.objectid import ObjectId\n# coll.find({'_id':ObjectId('5a9a003d47e3d54db0bf33cc')}).count()", "crop 3d objects", "import os\nfrom PIL import Image\n\ndef RGBA2RGB(image, color=(255, 255, 255)):\n \"\"\"Alpha composite an RGBA Image with a specified color.\n\n Simpler, faster version than the solutions above.\n\n Source: http://stackoverflow.com/a/9459208/284318\n\n Keyword Arguments:\n image -- PIL RGBA Image object\n color -- Tuple r, g, b (default 255, 255, 255)\n\n \"\"\"\n image.load() # needed for split()\n background = Image.new('RGB', image.size, color)\n background.paste(image, mask=image.split()[3]) # 3 is the alpha channel\n return background\n\ndef load_and_crop_image(path, dest='object_cropped', imsize=224):\n im = Image.open(path)\n# if np.array(im).shape[-1] == 4:\n# im = RGBA2RGB(im)\n \n # crop to sketch only\n arr = np.asarray(im)\n if len(arr.shape)==2:\n w,h = np.where(arr!=127)\n else:\n w,h,d = np.where(arr!=127) # where the image is not white \n if len(h)==0:\n print(path) \n xlb = min(h)\n xub = max(h)\n ylb = min(w)\n yub = max(w)\n lb = min([xlb,ylb])\n ub = max([xub,yub]) \n im = im.crop((lb, lb, ub, ub)) \n im = im.resize((imsize, imsize), Image.ANTIALIAS)\n objname = path.split('/')[-1]\n if not os.path.exists(dest):\n os.makedirs(dest)\n im.save(os.path.join(dest,objname))\n\nrun_this = 0\nif run_this:\n ## actually crop images now\n data_dir = './object'\n allobjs = ['./object/' + i for i in os.listdir(data_dir)]\n for o in allobjs:\n load_and_crop_image(o)\n\nrun_this = 0\nif run_this:\n ## rename objects in folder\n data_dir = './object'\n allobjs = [data_dir + '/' + i for i in os.listdir(data_dir) if i != '.DS_Store']\n for o in allobjs:\n if len(o.split('_'))==4:\n os.rename(o, os.path.join(data_dir, o.split('/')[-1].split('_')[2] + '.png'))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stevetjoa/stanford-mir
audio_representation.ipynb
mit
[ "%matplotlib inline\nimport numpy, scipy, matplotlib.pyplot as plt, IPython.display as ipd\nimport librosa, librosa.display\nimport stanford_mir; stanford_mir.init()", "&larr; Back to Index\nAudio Representation\nIn performance, musicians convert sheet music representations into sound which is transmitted through the air as air pressure oscillations. In essence, sound is simply air vibrating (Wikipedia). Sound vibrates through the air as longitudinal waves, i.e. the oscillations are parallel to the direction of propagation.\nAudio refers to the production, transmission, or reception of sounds that are audible by humans. An audio signal is a representation of sound that represents the fluctuation in air pressure caused by the vibration as a function of time. Unlike sheet music or symbolic representations, audio representations encode everything that is necessary to reproduce an acoustic realization of a piece of music. However, note parameters such as onsets, durations, and pitches are not encoded explicitly. This makes converting from an audio representation to a\nsymbolic representation a difficult and ill-defined task.\nWaveforms and the Time Domain\nThe basic representation of an audio signal is in the time domain. \nLet's listen to a file:", "x, sr = librosa.load('audio/c_strum.wav')\nipd.Audio(x, rate=sr)", "(If you get an error using librosa.load, you may need to install ffmpeg.)\nThe change in air pressure at a certain time is graphically represented by a pressure-time plot, or simply waveform.\nTo plot a waveform, use librosa.display.waveplot:", "plt.figure(figsize=(15, 5))\nlibrosa.display.waveplot(x, sr, alpha=0.8)", "Digital computers can only capture this data at discrete moments in time. The rate at which a computer captures audio data is called the sampling frequency (often abbreviated fs) or sampling rate (often abbreviated sr). For this workshop, we will mostly work with a sampling frequency of 44100 Hz, the sampling rate of CD recordings.\nTimbre: Temporal Indicators\nTimbre is the quality of sound that distinguishes the tone of different instruments and voices even if the sounds have the same pitch and loudness.\nOne characteristic of timbre is its temporal evolution. The envelope of a signal is a smooth curve that approximates the amplitude extremes of a waveform over time.\nEnvelopes are often modeled by the ADSR model (Wikipedia) which describes four phases of a sound: attack, decay, sustain, release. \nDuring the attack phase, the sound builds up, usually with noise-like components over a broad frequency range. Such a noise-like short-duration sound at the start of a sound is often called a transient.\nDuring the decay phase, the sound stabilizes and reaches a steady periodic pattern.\nDuring the sustain phase, the energy remains fairly constant.\nDuring the release phase, the sound fades away.\nThe ADSR model is a simplification and does not necessarily model the amplitude envelopes of all sounds.", "ipd.Image(\"https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/ADSR_parameter.svg/640px-ADSR_parameter.svg.png\")", "Timbre: Spectral Indicators\nAnother property used to characterize timbre is the existence of partials and their relative strengths. Partials are the dominant frequencies in a musical tone with the lowest partial being the fundamental frequency.\nThe partials of a sound are visualized with a spectrogram. A spectrogram shows the intensity of frequency components over time. 
(See Fourier Transform and Short-Time Fourier Transform for more.)\nPure Tone\nLet's synthesize a pure tone at 1047 Hz, concert C6:", "T = 2.0 # seconds\nf0 = 1047.0\nsr = 22050\nt = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable\nx = 0.1*numpy.sin(2*numpy.pi*f0*t)\nipd.Audio(x, rate=sr)", "Display the spectrum of the pure tone:", "X = scipy.fft(x[:4096])\nX_mag = numpy.absolute(X) # spectral magnitude\nf = numpy.linspace(0, sr, 4096) # frequency variable\nplt.figure(figsize=(14, 5))\nplt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum\nplt.xlabel('Frequency (Hz)')", "Oboe\nLet's listen to an oboe playing a C6:", "x, sr = librosa.load('audio/oboe_c6.wav')\nipd.Audio(x, rate=sr)\n\nprint(x.shape)", "Display the spectrum of the oboe:", "X = scipy.fft(x[10000:14096])\nX_mag = numpy.absolute(X)\nplt.figure(figsize=(14, 5))\nplt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum\nplt.xlabel('Frequency (Hz)')", "Clarinet\nLet's listen to a clarinet playing a concert C6:", "x, sr = librosa.load('audio/clarinet_c6.wav')\nipd.Audio(x, rate=sr)\n\nprint(x.shape)\n\nX = scipy.fft(x[10000:14096])\nX_mag = numpy.absolute(X)\nplt.figure(figsize=(14, 5))\nplt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum\nplt.xlabel('Frequency (Hz)')", "Notice the difference in the relative amplitudes of the partial components. All three signals have approximately the same pitch and fundamental frequency, yet their timbres differ.\n&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
h-mayorquin/hopfield_sequences
notebooks/2016-12-11(Study of connectivity distribution).ipynb
mit
[ "Study of connectivity distribution\nThis notebook is to study the connectivity distrubtion", "from __future__ import print_function\nimport sys\nsys.path.append('../')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nimport seaborn as sns\n\nfrom hopfield import Hopfield\n\n%matplotlib inline\nsns.set(font_scale=2.0)\n\nprng = np.random.RandomState(seed=100)\nnormalize = True\nT = 1.0\nn_store = 7\n\nN_samples = 1000", "Dependency in how many states are added\nHere we see whether the distribution of synaptic influences depends on the state of the vector.\nFirst we start with only two", "n_dim = 400\n\nnn = Hopfield(n_dim=n_dim, T=T, prng=prng)\nlist_of_patterns = nn.generate_random_patterns(n_dim)\nnn.train(list_of_patterns, normalize=normalize)", "We generate two random patterns and test whether the field (h) is dependent on the initial state. We see very similar normal behavior for both of them", "x = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\ny = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\n\nfig = plt.figure(figsize=(16, 12))\nax = fig.add_subplot(111)\nax.hist(x)\nax.hist(y)\nprint(np.std(x))", "We now then try this with 10 patterns to see how the distributions behave", "fig = plt.figure(figsize=(16, 12))\nax = fig.add_subplot(111)\n\nfor i in range(10):\n x = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\n ax.hist(x, alpha=0.5)", "We see that the normal distribution is mainted, then we calculate the field h (result of the np.dot(w, s) calculation) for a bunch of different initial random states and concatenate the results to see how the whole distribution looks like", "n_dim = 400\n\nnn = Hopfield(n_dim=n_dim, T=T, prng=prng)\nlist_of_patterns = nn.generate_random_patterns(n_dim)\nnn.train(list_of_patterns, normalize=normalize)\n\n\nx = np.empty(n_dim)\n\nfor i in range(N_samples):\n h = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\n x = np.concatenate((x, h))\n\n\nfig = plt.figure(figsize=(16, 12))\nax = fig.add_subplot(111)\nn, bins, patches = ax.hist(x, bins=30)\n\nprint(np.var(x))\nprint(nn.sigma)", "Dependence on network size\nNow we test test how the histogram looks for different sizes", "n_dimensions = [200, 800, 2000, 5000]\nfig = plt.figure(figsize=(16, 12))\ngs = gridspec.GridSpec(2, 2)\n\nfor index, n_dim in enumerate(n_dimensions):\n\n nn = Hopfield(n_dim=n_dim, T=T, prng=prng)\n list_of_patterns = nn.generate_random_patterns(n_store)\n nn.train(list_of_patterns, normalize=normalize)\n\n x = np.empty(n_dim)\n for i in range(N_samples):\n h = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\n x = np.concatenate((x, h))\n \n ax = fig.add_subplot(gs[index//2, index%2])\n ax.set_xlim([-1, 1])\n ax.set_title('n_dim = ' + str(n_dim) + ' std = ' + str(np.std(x)))\n \n weights = np.ones_like(x)/float(len(x))\n n, bins, patches = ax.hist(x, bins=30, weights=weights, normed=False)", "Now we calculate the variance of the h vector as a function of the dimension", "n_dimensions = np.logspace(1, 4, num=20)\nvariances = []\nstandar_deviations = []\n\nfor index, n_dim in enumerate(n_dimensions):\n print('number', index, 'of', n_dimensions.size, ' n_dim =', n_dim)\n\n n_dim = int(n_dim)\n \n\n nn = Hopfield(n_dim=n_dim, T=T, prng=prng)\n list_of_patterns = nn.generate_random_patterns(n_store)\n nn.train(list_of_patterns, normalize=normalize)\n\n x = np.empty(n_dim)\n for i in range(N_samples):\n h = np.dot(nn.w, np.sign(prng.normal(size=n_dim)))\n x = np.concatenate((x, h))\n \n variances.append(np.var(x))\n 
standar_deviations.append(np.std(x))\n\n\nfig = plt.figure(figsize=(16, 12))\nax = fig.add_subplot(111)\nax.semilogx(n_dimensions, variances,'*-', markersize=16, label='var')\nax.semilogx(n_dimensions, standar_deviations, '*-', markersize=16, label='std')\nax.axhline(y=nn.sigma, color='k', label='nn.sigma')\nax.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/probability
discussion/examples/TFP_and_Jax.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Probability Authors.", "#@title ##### Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TFP, backed by Jax\nJax-backed TFP is a work in progress, but many distributions and bijectors are currently working! How do you use the alternative backend?\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/discussion/examples/TFP_and_Jax.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/discussion/examples/TFP_and_Jax.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nImporting", "# Importing the TFP with Jax backend\n!pip3 install -q 'tfp-nightly[jax]' tf-nightly-cpu # We (currently) still require TF, but TF's smaller CPU build will work.\nimport tensorflow_probability as tfp\ntfp = tfp.experimental.substrates.jax\ntf = tfp.tf2jax\n\n# Standard TFP Imports\ntfd = tfp.distributions\ntfb = tfp.bijectors\ntfpk = tfp.math.psd_kernels\n\n# Jax imports\nimport jax\nimport jax.numpy as np\nfrom jax import random\n\n# Other imports\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style='white')", "TF Interface to Jax\nWe've reimplemented the TF API, but with Jax functions instead of TF functions and DeviceArrays instead of TF Tensors.", "tf.ones(5)\n\ntf.matmul(tf.ones([1, 2]), tf.ones([2, 4]))", "Some differences:\nShapes are tuples, not TensorShapes", "tf.ones(5).shape", "Randomness is stateless, like in Jax and requires Jax PRNGKeys to operate.", "tf.random.stateless_uniform([1, 2], seed=random.PRNGKey(0))", "Placeholders don't exist.", "tf.compat.v1.placeholder_with_default(tf.ones(5), (5,))", "Math libraries\nTFP's math libraries are now largely working, i.e. 
tfp.math\nBijectors\nMost bijectors have tests passing!\nUnary bijectors", "bij = tfb.Shift(1.)(tfb.Scale(3.))\nprint(bij.forward(np.ones(5)))\nprint(bij.inverse(np.ones(5)))", "Meta bijectors", "b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None)\nprint(b.forward(x=[0., 0., 0.]))\nprint(b.inverse(y=[[1., 0], [.5, 2]]))\n\nb = tfb.Chain([tfb.Exp(), tfb.Softplus()])\n# or: \n# b = tfb.Exp()(tfb.Softplus())\nprint(b.forward(-np.ones(5)))", "MCMC coming soon\nWe are migrating TFP's random samplers to be internally-stateless, then will update MCMC to support JAX.\nSome don't work yet\n\nFor example: FFJORD, MAF (WIP), Real NVP\n\nDistributions\nWhen sampling, we need to pass in a seed.", "dist = tfd.Normal(loc=0., scale=1.)\nprint(dist.sample(seed=random.PRNGKey(0)))", "Jax distributions obey the same batching semantics as their TensorFlow counterparts.", "dist = tfd.Normal(np.zeros(5), np.ones(5))\ns = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))\nprint(dist.log_prob(s).shape)\n\ndist = tfd.Independent(tfd.Normal(np.zeros(5), np.ones(5)), 1)\ns = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))\nprint(dist.log_prob(s).shape)", "Most meta distributions are working!", "dist = tfd.TransformedDistribution(\n tfd.MultivariateNormalDiag(tf.zeros(5), tf.ones(5)),\n tfb.Exp())\n# or:\n# dist = tfb.Exp()(tfd.MultivariateNormalDiag(tf.zeros(5), tf.ones(5)))\ns = dist.sample(sample_shape=2, seed=random.PRNGKey(0))\nprint(s)\nprint(dist.log_prob(s).shape)", "Gaussian processes and PSD kernels also work.", "k1, k2, k3 = random.split(random.PRNGKey(0), 3)\nobservation_noise_variance = 0.01\nf = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)\nobservation_index_points = tf.random.stateless_uniform(\n [50], minval=-1.,maxval= 1., seed=k1)[..., np.newaxis]\nobservations = f(observation_index_points) + tfd.Normal(loc=0., scale=np.sqrt(observation_noise_variance)).sample(seed=k2)\n\nindex_points = np.linspace(-1., 1., 100)[..., np.newaxis]\n\nkernel = tfpk.ExponentiatedQuadratic(length_scale=0.1)\n\ngprm = tfd.GaussianProcessRegressionModel(\n kernel=kernel,\n index_points=index_points,\n observation_index_points=observation_index_points,\n observations=observations,\n observation_noise_variance=observation_noise_variance)\n\nsamples = gprm.sample(10, seed=k3)\nfor i in range(10):\n plt.plot(index_points, samples[i])\nplt.show()", "Works in progress:\n\nMaking all bijector/distribution tests pass (at around 90% now)\nMaking bijectors/distributions convertible to/from Pytrees\nMCMC (and a push for stateless sampling at large in TFP)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DOV-Vlaanderen/pydov
docs/notebooks/search_bodem.ipynb
mit
[ "Example of DOV search methods for soil data (bodemgegevens)\n\nUse cases explained below\n\nIntroduction to the bodem-objects\nGet bodemsites in a bounding box\nGet bodemlocaties with specific properties\nGet all direct and indirect bodemobservaties linked to a bodemlocatie\nGet all bodemobservaties in a bodemmonster\nFind all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep\nCalculate carbon stock in Ghent in the layer 0 - 23 cm", "%matplotlib inline\n\nimport inspect, sys\nimport warnings; warnings.simplefilter('ignore')\n\n# check pydov path\nimport pydov", "Get information about the datatype 'Bodemlocatie'\nOther datatypes are also possible:\n* Bodemsite: BodemsiteSearch\n* Bodemmonster: BodemmonsterSearch\n* Bodemobservatie: BodemobservatieSearch", "from pydov.search.bodemlocatie import BodemlocatieSearch\nbodemlocatie = BodemlocatieSearch()", "A description is provided for the 'Bodemlocatie' datatype:", "bodemlocatie.get_description()", "The different fields that are available for objects of the 'Bodemlocatie' datatype can be requested with the get_fields() method:", "fields = bodemlocatie.get_fields()\n\n# print available fields\nfor f in fields.values():\n print(f['name'])", "You can get more information of a field by requesting it from the fields dictionary:\n* name: name of the field\n* definition: definition of this field\n* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.\n* notnull: whether the field is mandatory or not\n* type: datatype of the values of this field", "fields['type']", "Optionally, if the values of the field have a specific domain the possible values are listed as values:", "fields['type']['values']", "Example use cases\nGet bodemsites in a bounding box\nGet data for all the bodemsites that are geographically located completely within the bounds of the specified box.\nThe coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.\nThe same methods can be used for other bodem objects.", "from pydov.search.bodemsite import BodemsiteSearch\nbodemsite = BodemsiteSearch()\n\nfrom pydov.util.location import Within, Box\n\ndf = bodemsite.search(location=Within(Box(148000, 160800, 160000, 169500)))\ndf.head()", "The dataframe contains a list of bodemsites. The available data are flattened to represent unique attributes per row of the dataframe.\nUsing the pkey_bodemsite field one can request the details of this bodemsite in a webbrowser:", "for pkey_bodemsite in set(df.pkey_bodemsite):\n print(pkey_bodemsite)", "Get bodemlocaties with specific properties\nNext to querying bodem objects based on their geographic location within a bounding box, we can also search for bodem objects matching a specific set of properties. 
\nThe same methods can be used for all bodem objects.\nFor this we can build a query using a combination of the 'Bodemlocatie' fields and operators provided by the WFS protocol.\nA list of possible operators can be found below:", "[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]", "In this example we build a query using the PropertyIsEqualTo operator to find all bodemlocaties with bodemstreek 'zandstreek'.\nWe use max_features=10 to limit the results to 10.", "from owslib.fes import PropertyIsEqualTo\n\nquery = PropertyIsEqualTo(propertyname='bodemstreek',\n literal='Zandstreek')\ndf = bodemlocatie.search(query=query, max_features=10)\n\ndf.head()", "Once again we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:", "for pkey_bodemlocatie in set(df.pkey_bodemlocatie):\n print(pkey_bodemlocatie)", "Get all direct and indirect bodemobservaties in bodemlocatie\nGet all bodemobservaties in a specific bodemlocatie.\nDirect means bodemobservaties directly linked with a bodemlocatie.\nIndirect means bodemobservaties linked with child-objects of the bodemlocatie, like bodemmonsters.", "from pydov.search.bodemobservatie import BodemobservatieSearch\nfrom pydov.search.bodemlocatie import BodemlocatieSearch\nbodemobservatie = BodemobservatieSearch()\nbodemlocatie = BodemlocatieSearch()\n\nfrom owslib.fes import PropertyIsEqualTo\nfrom pydov.util.query import Join\n\nbodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', literal='VMM_INF_52'), \n return_fields=('pkey_bodemlocatie',))\n\nbodemobservaties = bodemobservatie.search(query=Join(bodemlocaties, 'pkey_bodemlocatie'))\nbodemobservaties.head()", "Get all bodemobservaties in a bodemmonster\nGet all bodemobservaties linked with a bodemmonster", "from pydov.search.bodemmonster import BodemmonsterSearch\nbodemmonster = BodemmonsterSearch()\n\nbodemmonsters = bodemmonster.search(query=PropertyIsEqualTo(propertyname = 'identificatie', literal='A0057359'),\n return_fields=('pkey_bodemmonster',))\n\nbodemobservaties = bodemobservatie.search(query=Join(bodemmonsters, on = 'pkey_parent', using='pkey_bodemmonster'))\nbodemobservaties.head()", "Find all soil locations with a given soil classification\nGet all soil locations with a given soil classification:", "from owslib.fes import PropertyIsEqualTo\nfrom pydov.util.query import Join\n\nfrom pydov.search.bodemclassificatie import BodemclassificatieSearch\nfrom pydov.search.bodemlocatie import BodemlocatieSearch\n\nbodemclassificatie = BodemclassificatieSearch()\nbl_Scbz = bodemclassificatie.search(query=PropertyIsEqualTo('bodemtype', 'Scbz'), return_fields=['pkey_bodemlocatie'])\n\nbodemlocatie = BodemlocatieSearch()\nbl = bodemlocatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie'))\nbl.head()", "We can also get their observations:", "from pydov.search.bodemobservatie import BodemobservatieSearch\n\nbodemobservatie = BodemobservatieSearch()\nobs = bodemobservatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie'), max_features=10)\nobs.head()", "Get all depth intervals and observations from a soil location", "from pydov.search.bodemlocatie import BodemlocatieSearch\nfrom pydov.search.bodemdiepteinterval import BodemdiepteintervalSearch\nfrom pydov.util.query import Join\nfrom owslib.fes import PropertyIsEqualTo\n\nbodemlocatie = BodemlocatieSearch()\nbodemdiepteinterval = BodemdiepteintervalSearch()\n\nbodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', 
literal='VMM_INF_52'),\n return_fields=('pkey_bodemlocatie',))\n\nbodemdiepteintervallen = bodemdiepteinterval.search(\n query=Join(bodemlocaties, on='pkey_bodemlocatie'))\nbodemdiepteintervallen", "And get their observations:", "from pydov.search.bodemobservatie import BodemobservatieSearch\n\nbodemobservatie = BodemobservatieSearch()\n\nbodemobservaties = bodemobservatie.search(query=Join(\n bodemdiepteintervallen, on='pkey_parent', using='pkey_diepteinterval'))\nbodemobservaties.head()", "Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep\nGet boundaries of East-Flanders by using a WFS", "from owslib.etree import etree\nfrom owslib.wfs import WebFeatureService\nfrom pydov.util.location import (\n GmlFilter,\n Within,\n)\n\nprovinciegrenzen = WebFeatureService(\n 'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs',\n version='1.1.0')\n\nprovincie_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Oost-Vlaanderen')\nprovincie_poly = provinciegrenzen.getfeature(\n typename='VRBG:Refprv',\n filter=etree.tostring(provincie_filter.toXML()).decode(\"utf8\")).read()", "Get bodemobservaties in East-Flanders with the requested properties", "from owslib.fes import PropertyIsEqualTo\nfrom owslib.fes import And\n\nfrom pydov.search.bodemobservatie import BodemobservatieSearch\n\nbodemobservatie = BodemobservatieSearch()\n\n# Select only layers with the boundaries 10-30\nbodemobservaties = bodemobservatie.search(\n location=GmlFilter(provincie_poly, Within),\n query=And([\n PropertyIsEqualTo(propertyname=\"parameter\", literal=\"Organische C - percentage\"),\n PropertyIsEqualTo(propertyname=\"diepte_tot_cm\", literal = '30'),\n PropertyIsEqualTo(propertyname=\"diepte_van_cm\", literal = '0')\n ]))\n\n\nbodemobservaties.head()", "Now we have all observations with the requested properties. 
\nNext we need to link them with the bodemlocatie", "from pydov.search.bodemlocatie import BodemlocatieSearch\nfrom pydov.util.query import Join\nimport pandas as pd\n\n# Find bodemlocatie information for all observations\nbodemlocatie = BodemlocatieSearch()\nbodemlocaties = bodemlocatie.search(query=Join(bodemobservaties, on = 'pkey_bodemlocatie', using='pkey_bodemlocatie'))\n\n# remove x, y, mv_mtaw from observatie dataframe to prevent duplicates while merging\nbodemobservaties = bodemobservaties.drop(['x', 'y', 'mv_mtaw'], axis=1)\n\n# Merge the bodemlocatie information together with the observation information\nmerged = pd.merge(bodemobservaties, bodemlocaties, on=\"pkey_bodemlocatie\", how='left')\n\nmerged.head()", "To export the results to CSV, you can use for example: \npython\nmerged.to_csv(\"test.csv\")\nWe can plot also the results on a map\nThis can take some time!", "import folium\nfrom folium.plugins import MarkerCluster\nfrom pyproj import Transformer\n\n# convert the coordinates to lat/lon for folium\ndef convert_latlon(x1, y1):\n transformer = Transformer.from_crs(\"epsg:31370\", \"epsg:4326\", always_xy=True)\n x2,y2 = transformer.transform(x1, y1)\n return x2, y2\n\n#convert coordinates to wgs84\nmerged['lon'], merged['lat'] = zip(*map(convert_latlon, merged['x'], merged['y']))\n\n# Get only location and value\nloclist = merged[['lat', 'lon']].values.tolist()\n\n# initialize the Folium map on the centre of the selected locations, play with the zoom until ok\nfmap = folium.Map(location=[merged['lat'].mean(), merged['lon'].mean()], zoom_start=10)\nmarker_cluster = MarkerCluster().add_to(fmap)\nfor loc in range(0, len(loclist)):\n popup = 'Bodemlocatie: ' + merged['pkey_bodemlocatie'][loc] \n popup = popup + '<br> Bodemobservatie: ' + merged['pkey_bodemobservatie'][loc]\n popup = popup + '<br> Value: ' + merged['waarde'][loc] + \"%\"\n folium.Marker(loclist[loc], popup=popup).add_to(marker_cluster)\nfmap", "Calculate carbon stock in Ghent in the layer 0 - 23 cm\nAt the moment, there are no bulkdensities available. 
As soon as there are observations with bulkdensities, this example can be used to calculate a carbon stock in a layer.\nGet boundaries of Ghent using WFS", "from owslib.etree import etree\nfrom owslib.fes import PropertyIsEqualTo\nfrom owslib.wfs import WebFeatureService\nfrom pydov.util.location import (\n GmlFilter,\n Within,\n)\n\nstadsgrenzen = WebFeatureService(\n 'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs',\n version='1.1.0')\n\ngent_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Gent')\ngent_poly = stadsgrenzen.getfeature(\n typename='VRBG:Refgem',\n filter=etree.tostring(gent_filter.toXML()).decode(\"utf8\")).read()\n", "First get all observations in Ghent for organisch C percentage in requested layer", "from owslib.fes import PropertyIsEqualTo, PropertyIsGreaterThan, PropertyIsLessThan\nfrom owslib.fes import And\n\nfrom pydov.search.bodemobservatie import BodemobservatieSearch\n\nbodemobservatie = BodemobservatieSearch()\n\n# all layers intersect the layer 0-23cm\ncarbon_observaties = bodemobservatie.search(\n location=GmlFilter(gent_poly, Within),\n query=And([\n PropertyIsEqualTo(propertyname=\"parameter\", literal=\"Organische C - percentage\"),\n PropertyIsGreaterThan(propertyname=\"diepte_tot_cm\", literal = '0'),\n PropertyIsLessThan(propertyname=\"diepte_van_cm\", literal = '23')\n ]),\n return_fields=('pkey_bodemlocatie', 'waarde'))\ncarbon_observaties = carbon_observaties.rename(columns={\"waarde\": \"organic_c_percentage\"})\ncarbon_observaties.head()\n", "Then get all observations in Ghent for bulkdensity in requested layer", "density_observaties = bodemobservatie.search(\n location=GmlFilter(gent_poly, Within),\n query=And([\n PropertyIsEqualTo(propertyname=\"parameter\", literal=\"Bulkdensiteit - gemeten\"),\n PropertyIsGreaterThan(propertyname=\"diepte_tot_cm\", literal = '0'),\n PropertyIsLessThan(propertyname=\"diepte_van_cm\", literal = '23')\n ]),\n return_fields=('pkey_bodemlocatie', 'waarde'))\n\ndensity_observaties = density_observaties.rename(columns={\"waarde\": \"bulkdensity\"})\ndensity_observaties.head()", "Merge results together based on their bodemlocatie. Only remains the records where both parameters exists", "import pandas as pd\n\nmerged = pd.merge(carbon_observaties, density_observaties, on=\"pkey_bodemlocatie\")\n\nmerged.head()", "Filter Aardewerk soil locations\nSince we know that Aardewerk soil locations make use of a specific suffix, a query could be built filtering these out.\nSince we only need to match a partial string in the name, we will build a query using the PropertyIsLike operator to find all Aardewerk bodemlocaties.\nWe use max_features=10 to limit the results to 10.", "from owslib.fes import PropertyIsLike\n\nquery = PropertyIsLike(propertyname='naam',\n literal='KART_PROF_%', wildCard='%')\ndf = bodemlocatie.search(query=query, max_features=10)\n\ndf.head()", "As seen in the soil data example, we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:", "for pkey_bodemlocatie in set(df.pkey_bodemlocatie):\n print(pkey_bodemlocatie)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
histogrammar/histogrammar-python
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
apache-2.0
[ "Histogrammar advanced tutorial\nHistogrammar is a Python package that allows you to make histograms from numpy arrays, and pandas and spark dataframes. (There is also a scala backend for Histogrammar.) \nThis advanced tutorial shows how to:\n- work with spark dataframes, \n- make many histograms at ones, which is one of the nice features of histogrammar, and how to configure that. For example how to set bin specifications, or how to deal with a time-axis.\nEnjoy!", "%%capture\n# install histogrammar (if not installed yet)\nimport sys\n\n!\"{sys.executable}\" -m pip install histogrammar\n\nimport histogrammar as hg\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib", "Data generation\nLet's first load some data!", "# open a pandas dataframe for use below\nfrom histogrammar import resources\ndf = pd.read_csv(resources.data(\"test.csv.gz\"), parse_dates=[\"date\"])\n\ndf.head()", "What about Spark DataFrames?\nNo problem! We can easily perform the same steps on a Spark DataFrame. One important thing to note there is that we need to include a jar file when we create our Spark session. This is used by spark to create the histograms using Histogrammar. The jar file will be automatically downloaded the first time you run this command.", "# download histogrammar jar files if not already installed, used for histogramming of spark dataframe\ntry:\n from pyspark.sql import SparkSession\n from pyspark.sql.functions import col\n from pyspark import __version__ as pyspark_version\n pyspark_installed = True\nexcept ImportError:\n print(\"pyspark needs to be installed for this example\")\n pyspark_installed = False\n\n# this is the jar file for spark 3.0\n# for spark 2.X, in the jars string, for both jar files change \"_2.12\" into \"_2.11\".\n\nif pyspark_installed:\n scala = '2.12' if int(pyspark_version[0]) >= 3 else '2.11'\n hist_jar = f'io.github.histogrammar:histogrammar_{scala}:1.0.20'\n hist_spark_jar = f'io.github.histogrammar:histogrammar-sparksql_{scala}:1.0.20'\n \n spark = SparkSession.builder.config(\n \"spark.jars.packages\", f'{hist_spark_jar},{hist_jar}'\n ).getOrCreate()\n\n sdf = spark.createDataFrame(df)", "Filling histograms with spark\nFilling histograms with spark dataframes is just as simple as it is with pandas dataframes.", "# example: filling from a pandas dataframe\nhist = hg.SparselyHistogram(binWidth=100, quantity='transaction')\nhist.fill.numpy(df)\nhist.plot.matplotlib();\n\n# for spark you will need this spark column function:\nif pyspark_installed:\n from pyspark.sql.functions import col", "Let's make the same histogram but from a spark dataframe. 
There are just two differences:\n- When declaring a histogram, always set quantity to col('columns_name') instead of 'columns_name'\n- When filling the histogram from a dataframe, use the fill.sparksql() method instead of fill.numpy().", "# example: filling from a pandas dataframe\nif pyspark_installed:\n hist = hg.SparselyHistogram(binWidth=100, quantity=col('transaction'))\n hist.fill.sparksql(sdf)\n hist.plot.matplotlib();", "Apart from these two differences, all functionality is the same between pandas and spark histograms!\nLike pandas, we can also do directly from the dataframe:", "if pyspark_installed:\n h2 = sdf.hg_SparselyProfileErr(25, col('longitude'), col('age'))\n h2.plot.matplotlib();\n\nif pyspark_installed:\n h3 = sdf.hg_TwoDimensionallySparselyHistogram(25, col('longitude'), 10, col('latitude'))\n h3.plot.matplotlib();", "All examples below also work with spark dataframes.\nMaking many histograms at once\nHistogrammar has a nice method to make many histograms in one go. See here.\nBy default automagical binning is applied to make the histograms.", "hists = df.hg_make_histograms()\n\n# histogrammar has made histograms of all features, using an automated binning.\nhists.keys()\n\nh = hists['transaction']\nh.plot.matplotlib();\n\n# you can select which features you want to histogram with features=:\nhists = df.hg_make_histograms(features = ['longitude', 'age', 'eyeColor'])\n\n# you can also make multi-dimensional histograms\n# here longitude is the first axis of each histogram.\nhists = df.hg_make_histograms(features = ['longitude:age', 'longitude:age:eyeColor'])", "Working with timestamps", "# Working with a dedicated time axis, make histograms of each feature over time.\nhists = df.hg_make_histograms(time_axis=\"date\")\n\nhists.keys()\n\nh2 = hists['date:age']\nh2.plot.matplotlib();", "Histogrammar does not support pandas' timestamps natively, but converts timestamps into nanoseconds since 1970-1-1.", "h2.bin_edges()", "The datatype shows the datetime though:", "h2.datatype\n\n# convert these back to timestamps with:\npd.Timestamp(h2.bin_edges()[0])\n\n# For the time axis, you can set the binning specifications with time_width and time_offset:\nhists = df.hg_make_histograms(time_axis=\"date\", time_width='28d', time_offset='2014-1-4', features=['date:isActive', 'date:age'])\n\nhists['date:isActive'].plot.matplotlib();", "Setting binning specifications", "# histogram selections. Here 'date' is the first axis of each histogram.\nfeatures=[\n 'date', 'latitude', 'longitude', 'age', 'eyeColor', 'favoriteFruit', 'transaction'\n]\n\n# Specify your own binning specifications for individual features or combinations thereof.\n# This bin specification uses open-ended (\"sparse\") histograms; unspecified features get\n# auto-binned. 
The time-axis binning, when specified here, needs to be in nanoseconds.\nbin_specs={\n 'longitude': {'binWidth': 10.0, 'origin': 0.0},\n 'latitude': {'edges': [-100, -75, -25, 0, 25, 75, 100]},\n 'age': {'num': 100, 'low': 0, 'high': 100},\n 'transaction': {'centers': [-1000, -500, 0, 500, 1000, 1500]},\n 'date': {'binWidth': pd.Timedelta('4w').value, 'origin': pd.Timestamp('2015-1-1').value}\n}\n\n\n# this binning specification is making:\n# - a sparse histogram for: longitude\n# - an irregular binned histogram for: latitude\n# - a closed-range evenly spaced histogram for: age\n# - a histogram centered around bin centers for: transaction\nhists = df.hg_make_histograms(features=features, bin_specs=bin_specs)\n\nhists.keys()\n\nhists['transaction'].plot.matplotlib();\n\n# all available bin specifications are (just examples):\n\nbin_specs = {'x': {'bin_width': 1, 'bin_offset': 0}, # SparselyBin histogram\n 'y': {'num': 10, 'low': 0.0, 'high': 2.0}, # Bin histogram\n 'x:y': [{}, {'num': 5, 'low': 0.0, 'high': 1.0}], # SparselyBin vs Bin histograms\n 'a': {'edges': [0, 2, 10, 11, 21, 101]}, # IrregularlyBin histogram\n 'b': {'centers': [1, 6, 10.5, 16, 20, 100]}, # CentrallyBin histogram\n 'c': {'max': True}, # Maximize histogram\n 'd': {'min': True}, # Minimize histogram\n 'e': {'sum': True}, # Sum histogram\n 'z': {'deviate': True}, # Deviate histogram\n 'f': {'average': True}, # Average histogram\n 'a:f': [{'edges': [0, 10, 101]}, {'average': True}], # IrregularlyBin vs Average histograms\n 'g': {'thresholds': [0, 2, 10, 11, 21, 101]}, # Stack histogram \n 'h': {'bag': True}, # Bag histogram\n }\n\n# to set binning specs for a specific 2d histogram, you can do this:\n# if these are not provided, the 1d binning specifications are picked up for 'a:f'\nbin_specs = {'a:f': [{'edges': [0, 10, 101]}, {'average': True}]}\n\n# For example \nfeatures = ['latitude:age', 'longitude:age', 'age', 'longitude']\n\nbin_specs = {\n 'latitude': {'binWidth': 25},\n 'longitude': {'edges': [-100, -75, -25, 0, 25, 75, 100]},\n 'age': {'deviate': True},\n 'longitude:age': [{'binWidth': 25}, {'average': True}],\n}\n\nhists = df.hg_make_histograms(features=features, bin_specs=bin_specs)\n\nh = hists['latitude:age']\nh.bins\n\nhists['longitude:age'].plot.matplotlib();" ]
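The comment above notes that time-axis bin specifications have to be given in nanoseconds. As a small aside (not part of the original notebook), here is a minimal pandas-only sketch of where those numbers come from: pandas stores datetimes and timedeltas internally as int64 nanoseconds since 1970-01-01, so `.value` yields exactly the quantities that `binWidth` and `origin` expect for the `date` axis. The particular width and offset below are just illustrative.

```python
import pandas as pd

# pandas represents timestamps/timedeltas internally as int64 nanoseconds,
# so .value converts them to the units expected by the time-axis bin_specs
bin_width_ns = pd.Timedelta('28d').value        # 28 days  -> 2419200000000000 ns
origin_ns = pd.Timestamp('2015-01-01').value    # 2015-01-01 -> 1420070400000000000 ns

print(bin_width_ns, origin_ns)
```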
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deepfield/ibis
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
apache-2.0
[ "Extending Ibis Part 2: Adding a New Reduction Expression\nThis notebook will show you how to add a new reduction operation (bitwise_and) to an existing backend (PostgreSQL).\nA reduction operation is a function that maps $N$ rows to 1 row, for example the sum function.\nDescription\nWe're going to add a bitwise_and function to ibis. bitwise_and computes the logical AND of the individual bits of an integer.\nFor example,\n```\n 0101\n 0111\n 0011\n& 1101\n\n0001\n```\nStep 1: Define the Operation\nLet's define the bitwise_and operation as a function that takes any integer typed column as input and returns an integer\nhaskell\nbitwise_and :: Column Int -&gt; Int", "import ibis.expr.datatypes as dt\nimport ibis.expr.rules as rlz\n\nfrom ibis.expr.operations import Reduction, Arg\n\n\nclass BitwiseAnd(Reduction):\n arg = Arg(rlz.column(rlz.integer))\n where = Arg(rlz.boolean, default=None)\n output_type = rlz.scalar_like('arg')", "We just defined a BitwiseAnd class that takes one integer column as input, and returns a scalar output of the same type as the input. This matches both the requirements of a reduction and the spepcifics of the function that we want to implement.\nNote: It is very important that you write the correct argument rules and output type here. The expression will not work otherwise.\nStep 2: Define the API\nBecause every reduction in ibis has the ability to filter out values during aggregation (a typical feature in databases and analytics tools), to make an expression out of BitwiseAnd we need to pass an additional argument: where to our BitwiseAnd constructor.", "from ibis.expr.types import IntegerColumn # not IntegerValue! reductions are only valid on columns\n\n\ndef bitwise_and(integer_column, where=None):\n return BitwiseAnd(integer_column, where=where).to_expr()\n\n\nIntegerColumn.bitwise_and = bitwise_and", "Interlude: Create some expressions using bitwise_and", "import ibis\n\nt = ibis.table([('bigint_col', 'int64'), ('string_col', 'string')], name='t')\n\nt.bigint_col.bitwise_and()\n\nt.bigint_col.bitwise_and(t.string_col == '1')", "Step 3: Turn the Expression into SQL", "import sqlalchemy as sa\n\n\[email protected](BitwiseAnd)\ndef compile_sha1(translator, expr):\n # pull out the arguments to the expression\n arg, where = expr.op().args\n \n # compile the argument\n compiled_arg = translator.translate(arg)\n \n # call the appropriate postgres function\n agg = sa.func.bit_and(compiled_arg)\n \n # handle a non-None filter clause\n if where is not None:\n return agg.filter(translator.translate(where))\n return agg", "Step 4: Putting it all Together\nConnect to the ibis_testing database\nNOTE:\nTo be able to execute the rest of this notebook you need to run the following command from your ibis clone:\nsh\nci/build.sh", "con = ibis.postgres.connect(\n user='postgres',\n host='postgres',\n password='postgres',\n database='ibis_testing'\n)", "Create and execute a bitwise_and expression", "t = con.table('functional_alltypes')\nt\n\nexpr = t.bigint_col.bitwise_and()\nexpr\n\nsql_expr = expr.compile()\nprint(sql_expr)\n\nexpr.execute()", "Let's see what a bitwise_and call looks like with a where argument", "expr = t.bigint_col.bitwise_and(where=(t.bigint_col == 10) | (t.bigint_col == 40))\nexpr\n\nresult = expr.execute()\nresult", "Let's confirm that taking bitwise AND of 10 and 40 is in fact 8", "10 & 40\n\nprint(' {:0>8b}'.format(10))\nprint('& {:0>8b}'.format(40))\nprint('-' * 10)\nprint(' {:0>8b}'.format(10 & 40))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
claudiuskerth/PhDthesis
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
mit
[ "Table of Contents\n<p>", "from ipyparallel import Client\n\ncl = Client()\n\ncl.ids\n\n%%px --local\n\n# run whole cell on all engines a well as in the local IPython session\n\nimport numpy as np\n\nimport sys\n\nsys.path.insert(0, '/home/claudius/Downloads/dadi')\n\nimport dadi\n\n%ll dadiExercises/\n\n%less dadiExercises/EryPar.unfolded.2dsfs.dadi_format\n\n# import 2D unfolded spectrum\n\nsfs2d_unfolded = dadi.Spectrum.from_file('dadiExercises/EryPar.unfolded.2dsfs.dadi_format')\n\n%page sfs2d_unfolded\n\nsfs2d_unfolded.sample_sizes\n\n# add population labels\nsfs2d_unfolded.pop_ids = [\"ery\", \"par\"]", "For the estimation of the 2D SFS, realSFS has only taken sites that had data from at least 9 individuals in each population (see assembly.sh, lines 1423 onwards).", "sfs2d_unfolded.S()", "The 2D spectrum contains counts from 60k sites that are variable in par or ery or both.", "import pylab\n\n%matplotlib inline\n\n# note this needs to be in the same cell as the dadi plotting function call to take effect\npylab.rcParams['font.size'] = 14.0\npylab.rcParams['figure.figsize'] = [12.0, 10.0]\n\ndadi.Plotting.plot_single_2d_sfs(sfs2d_unfolded, vmin=1, cmap=pylab.cm.jet)\n\n%psource dadi.Plotting.plot_single_2d_sfs", "More colormaps", "sfs2d_folded = sfs2d_unfolded.fold()\n\n# plot the folded GLOBAL minor allele frequency spectrum\n\ndadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1, cmap=pylab.cm.jet)\n\n# setting the smallest grid size slightly larger than the largest population sample size (36)\npts_l = [40, 50, 60]", "The fitting of parameters for various 1D models to the SFS's of par and ery has indicated the following:\n- ery has undergone a population size increase by >20 fold (between about 1-2 $\\times2N_{ref}$ generations ago) and later (<1 $\\times2N_{ref}$ generations ago) a decrease to about 15% of the ancient populations size\n- par has undergone only one size change to <10% of the ancient population size, this is inferred to have happened in the distant past, about 2-4 ($\\times 2N_{ref}$) generations ago\nI think it would be good to incorporate this information in the specification of a more complex 2D model.", "%pinfo dadi.Demographics2D.split_mig", "There are a couple of built-in models that I could use, but I think I need a custom model here that includes the information from the 1D model fitting.\nI would like to write a model function that specifies an ancient split between ery and par, then a population decline in par that lasts until the present and later an exponential growth in ery that is more recently followed by a population decline.\nAn alternative model to test would be a population decline in the ancestral population, followed by the split between, later population increase in ery which is more recently followed by a population decline.", "def split_1grow_2decline_1decline_nomig((nu1s, nu2s, nu2f, nu1b, nu1f, Ts, T2, Tb, Tf), (n1, n2), pts):\n \"\"\"\n model function: specifies an ancient split, followed by growth in pop1 and \n decline in pop2, later also decline in pop1\n \n nu1s: rel. size of pop1 after split\n nu2s: rel. size of pop2 after split\n nu2f: final rel. size for pop2\n nu1b: rel. size of pop1 after first size change\n nu1f: final rel. 
size of pop1\n Ts: time between population split and size change in pop2\n T2: time between size change in pop2 and first size change in pop1\n Tb: time between first and second size change in pop1\n Tf: time between second size change in pop1 and present\n \n The population split happened Tf+Tb+T2+Ts (x2N) generations in the past.\n \n n1,n2: sample sizes\n pts: number of grid points to use in extrapolation\n \"\"\"\n \n # define grid\n xx = yy = dadi.Numerics.default_grid(pts)\n \n # phi for the equilibrium ancestral pop\n phi = dadi.PhiManip.phi_1D(xx)\n \n # population split into pop1 and pop2\n phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)\n \n # stepwise change in size for pop1 and pop2 after split\n phi = dadi.Integration.two_pops(phi, xx, Ts, nu2=nu2s, nu1=nu1s, m12=0, m21=0)\n # stepwise change in size for pop2 only\n phi = dadi.Integration.two_pops(phi, xx, T2, nu2=nu2f, nu1=nu1s, m12=0, m21=0)\n # stepwise change in size for pop1 only\n phi = dadi.Integration.two_pops(phi, xx, Tb, nu2=nu2f, nu1=nu1b, m12=0, m21=0)\n # stepwise change in size for pop1 only\n phi = dadi.Integration.two_pops(phi, xx, Tf, nu2=nu2f, nu1=nu1f, m12=0, m21=0)\n \n # calculate spectrum\n sfs = dadi.Spectrum.from_phi(phi, (n1, n2), (xx, yy))\n return sfs", "I wonder which population dadi assumes to be pop1. In the sfs2d spectrum object, ery is pop1 and par is pop2.", "?dadi.PhiManip.phi_1D_to_2D\n\n# create link to function that specifies the model\nfunc = split_1grow_2decline_1decline_nomig\n\n# create extrapolating version of the model function\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\n?split_1grow_2decline_1decline_nomig\n\nnu1s = 0.5\nnu2s = 0.5\nnu2f = 0.05\nnu1b = 40\nnu1f = 0.15\nTs = 0.1\nT2 = 0.1\nTb = 0.1\nTf = 0.1\n\nsfs2d_folded.sample_sizes\n\nmodel_spectrum = func_ex((nu1s, nu2s, nu2f, nu1b, nu1f, Ts, T2, Tb, Tf), sfs2d_folded.sample_sizes, pts_l)\n\ntheta = dadi.Inference.optimal_sfs_scaling(model_spectrum.fold(), sfs2d_folded)\n\ntheta\n\ndadi.Plotting.plot_2d_comp_multinom(model_spectrum.fold(), sfs2d_folded, vmin=1)", "I think this indicates that I need to allow the split to be more recent." ]
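Before reworking the model, one way to put a number on how poor the current fit is would be the multinomial log-likelihood from dadi's Inference module; comparing this value between parameterisations would complement the residual plot. This is a minimal sketch, assuming the `model_spectrum` and `sfs2d_folded` objects defined above are still in scope.

```python
# fold the model to match the folded data spectrum
model_folded = model_spectrum.fold()

# multinomial log-likelihood of the data given the model (higher is better)
ll_model = dadi.Inference.ll_multinom(model_folded, sfs2d_folded)
ll_model
```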
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-2/cmip6/models/sandbox-2/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-2\nSource ID: SANDBOX-2\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-2', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momemtum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
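Editor's note: the ES-DOC cells above all follow the same fill-in pattern, so a single hedged illustration may be useful. Each generated cell already contains its `DOC.set_id` call; only the `DOC.set_value` line needs to be added. The sketch below reuses property IDs and valid choices quoted in the cells themselves, assumes `DOC` is the pyesdoc `NotebookOutput` created in the notebook's setup cell, and uses purely hypothetical values that do not describe any real ocean configuration.

```python
# Illustrative sketch only -- hypothetical values, not a real model description.
# Assumes `DOC` is the pyesdoc NotebookOutput instance from the notebook's setup cell.

# ENUM property: the string must match one of the listed valid choices exactly.
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
DOC.set_value("Isoneutral")

# BOOLEAN property: pass an unquoted Python True/False.
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
DOC.set_value(True)

# INTEGER property (here in m2/s): pass a plain number (value is a placeholder).
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
DOC.set_value(1000)
```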
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
spacecoffin/GravelKicker
Notebooks/Experiments/17_01_10_regress_w_gendy_dists.ipynb
apache-2.0
[ "I realized I made a mistake in the design of the Gendy GK module: the module is based on the premise that all arguments to a synth are real valued. The Gendy parameters ampdist and durdist are categorical variables and as such violate the assumption.", "import sys\nsys.path.append('/Users/spacecoffin/Development')\n\nimport GravelKicker as gk\nimport librosa\nimport numpy as np\nimport os\nimport pandas as pd\n\nfrom datetime import datetime\nfrom supriya.tools import nonrealtimetools\n\nthis_dir = '/Users/spacecoffin/Development/GravelKicker/__gen_files'\n\npmtx = gk.generator.gendy1.gen_params(rows=100)\n\ndf = gk.generator.gendy1.format_params(pmtx)\n\ndf.to_pickle()\n\npmtx", "Generation/rendering timing\n~$0.189$ seconds per example/.aiff.\n18.9s for 100", "%time\n\nfor i, row in df.iterrows():\n \n session = nonrealtimetools.Session()\n \n builder = gk.generator.gendy1.make_builder(row)\n \n out = gk.generator.gendy1.build_out(builder)\n \n synthdef = builder.build()\n \n with session.at(0):\n synth_a = session.add_synth(duration=10, synthdef=synthdef)\n \n gk.util.render_session(session, this_dir, row[\"hash\"])", "Feature extraction timing\n~$0.88$ seconds per example/.aiff.\n1m 28s for 100", "%timeit\n\nfor i, row in df.iterrows():\n \n y, sr = librosa.load(os.path.join(this_dir, \"aif_files\", row[\"hash\"] + \".aiff\"))\n \n _y_normed = librosa.util.normalize(y)\n _mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)\n _cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))\n \n _mfcc_mean = gk.feature_extraction.get_stats(_mfcc)[\"mean\"]\n \n X_row = np.append(_mfcc_mean, _cent)\n \n if i==0:\n X_mtx = X_row\n else:\n X_mtx = np.vstack((X_mtx, X_row))", "Thought: For feature extraction, it would probably be faster to extract all time domain vectors $y$ into a NumPy array and perform the necessary LibROSA operations across the rows of the vector, possibly leveraging under-the-hood efficiencies. 
\n\"1min 43s per loop\" below", "for i, row in df.iterrows():\n \n session = nonrealtimetools.Session()\n \n builder = gk.generator.gendy1.make_builder(row)\n \n out = gk.generator.gendy1.build_out(builder)\n \n synthdef = builder.build()\n \n with session.at(0):\n synth_a = session.add_synth(duration=10, synthdef=synthdef)\n \n gk.util.render_session(session, this_dir, row[\"hash\"])\n \n y, sr = librosa.load(os.path.join(this_dir, \"aif_files\", row[\"hash\"] + \".aiff\"))\n \n _y_normed = librosa.util.normalize(y)\n _mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)\n _cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))\n \n _mfcc_mean = gk.feature_extraction.get_stats(_mfcc)[\"mean\"]\n \n X_row = np.append(_mfcc_mean, _cent)\n \n if i==0:\n X_mtx = X_row\n else:\n X_mtx = np.vstack((X_mtx, X_row))\n\nX_mtx.shape\n\ndef col_rename_4_mfcc(c):\n if (c < 13):\n return \"mfcc_mean_{}\".format(c)\n else:\n return \"spectral_centroid\"\n\npd.DataFrame(X_mtx).rename_axis(lambda c: col_rename_4_mfcc(c), axis=1)\n\nfrom sklearn import linear_model\nfrom sklearn import model_selection\nfrom sklearn import preprocessing\n\nimport sklearn as sk\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\npmtx.shape\n\nX_mtx.shape\n\nX_mtx[0]\n\nX_train, X_test, y_train, y_test = sk.model_selection.train_test_split(\n X_mtx, pmtx, test_size=0.4, random_state=1)\n\n# Create linear regression objectc\nregr = linear_model.LinearRegression()\n\n# Train the model using the training sets\nregr.fit(X_train, y_train)\n\n# The coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean squared error\nprint(\"Mean squared error: %.2f\"\n % np.mean((regr.predict(X_test) - y_test) ** 2))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(X_test, y_test))", "Preprocessing", "# Scale data\nstandard_scaler = sk.preprocessing.StandardScaler()\nX_scaled = standard_scaler.fit_transform(X_mtx)\n#Xte_s = standard_scaler.transform(X_test)\n\nrobust_scaler = sk.preprocessing.RobustScaler()\nX_rscaled = robust_scaler.fit_transform(X_mtx)\n#Xte_r = robust_scaler.transform(X_test)\n\nX_scaled.mean(axis=0)\n\nX_scaled.mean(axis=0).mean()\n\nX_scaled.std(axis=0)\n\nX_train, X_test, y_train, y_test = sk.model_selection.train_test_split(\n X_scaled, pmtx, test_size=0.4, random_state=1)\n\n# Create linear regression objectc\nregr = linear_model.LinearRegression()\n\n# Train the model using the training sets\nregr.fit(X_train, y_train)\n\n# The coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean squared error\nprint(\"Mean squared error: %.2f\"\n % np.mean((regr.predict(X_test) - y_test) ** 2))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(X_test, y_test))\n\ny_test[0]\n\nX_test[0]\n\nregr.predict(X_test[0])\n\ny_test[0]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NeuroDataDesign/seelviz
Tony/ipynb/Ilastik on Raw and HistEq Fear199 Data.ipynb
apache-2.0
[ "Ilastik for 10/31/16 Week\nFrom last week, I was able to generate 3D TIFF slices and image classifiers on Fear199 downsampled data. However, my problems were that:\n1) The TIFF slices were odd, cigar-shaped tubes.\n2) I was unable to generate a significant classifier using the existing data because of the weird image layout.\n3) I had trouble loading in the TIFF stack despite having generated one via ImageJ \nWhat I did this week was:\n1) Figure out why my original data was the odd cigar-shaped data. \n2) Correctly generate a subset of TIFF slices for Fear199. \n3) Generate a pixel-based object classifier. \nWhat I need help with/still need to learn:\n1) How to interpret/better validate my classifier results (currently have hdf5/TIFF output, how can I validate this?)\n2) How to apply this to density mapping\nTask 1: Why was my original data cigar-shaped?\nWhen downloading the image from ndreg, there were two different approaches to generating the numpy array. I've shown both below:", "## Script used to download nii run on Docker\nfrom ndreg import *\nimport matplotlib\nimport ndio.remote.neurodata as neurodata\nimport nibabel as nb\ninToken = \"Fear199\"\nnd = neurodata()\nprint(nd.get_metadata(inToken)['dataset']['voxelres'].keys())\ninImg = imgDownload(inToken, resolution=5)\nimgWrite(inImg, \"./Fear199.nii\")\n\n## Method 1:\nimport os\nimport numpy as np\nfrom PIL import Image\nimport nibabel as nib\nimport scipy.misc\nTokenName = 'Fear199.nii'\nimg = nib.load(TokenName)\n\n## Convert into np array (or memmap in this case)\ndata = img.get_data()\nprint data.shape\nprint type(data)\n\n## Method 2:\nrawData = sitk.GetArrayFromImage(inImg) ## convert to simpleITK image to normal numpy ndarray\nprint type(rawData)", "In a nutshell, method 1 generates an array with shape (x, y, z) -- specifically, (540, 717, 1358). The method 2 generates a numpy array with shape (z, y, x) -- specifically, (1358, 717, 540). Since we want the first column to be z slices, the original method was granting me x-slices (hence the cigar-tube dimensions).\nIn order to interconvert, we can either just use the rawData approach after directly calling from ndstore, or we can take our numpy array after loading from nibabel and use numpy's swapaxes method to just swap two of the dimensions (shown below).", "## if we have (i, j, k), we want (k, j, i) (converts nibabel format to sitk format)\nnew_im = newer_img.swapaxes(0,2) # just swap i and k", "Task 2: Generating raw TIFF slices.\nNow that I have appropiate coordinates, I generated a subset of TIFF slices to run the training module for the image classifier. Using the script here:", "plane = 0;\nfor plane in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 100, 101, 102, 103, 104):\n output = np.asarray(rawData[plane])\n ## Save as TIFF for Ilastik\n scipy.misc.toimage(output).save('RAWoutfile' + TokenName + 'ITK' + str(plane) + '.tiff')", "As shown above, I generated data for the first 13 planes, and then some subset of planes from z = 100 to z = 104. I then trained the classifier on the 0 through 12 data, and used the 100 to 104 slices to validate my results. 
Below I've included images of the raw and histogram equalized images at one specific slice to just show the raw images I was working with.\nRaw Fear 199 at Z = 100:\n\nHistogram Equalized Fear 199 at Z = 100:\n\nIn order to generate the histogram equalized TIFF slices, I wrote a Jupyter notebook here:\nhttps://github.com/NeuroDataDesign/seelviz/blob/gh-pages/Tony/ipynb/generate%2BhistEQ.ipynb\nTask 3: Generate a pixel-based object-classifier for both the raw and the histogram equalized data.\nFrom there, I generated a pixel-based object-classifier for both the raw and the histogram equalized data. Shown first are images demonstrating me generating the classifier for the raw data. Then I show the images of me generating the classifier for the histogram equalized data.\nFirst, the raw data classifier. I load in planes 0-12 TIFF slices and generate a classifier by training it to select for borders, background, and individual bright points:\nLoading Raw Training Data\n\nSelecting Borders in Raw Training Data\n\nSelecting Bright Points in Raw Training Data\n\nCloseup of Feature Selection in Raw Training Data\n\nObject Output for Raw Training Data\n\nIn order to run Ilastik headlessly on 3D classified data, use globstring syntax to tell ilastik which images to combine for each volume. EG:\nFor me to get it to work, I made a folder called \"runtime\" where each file was named WITHOUT the underscores. Make sure to add the * to get it to work, also.\nRepeating the process for the histogram equalized brains, we get:\nLoading HistEq Training Data\n\nSelecting Borders and Background in HistEq Training Data\n\nSelecting Bright Points in HistEq Training Data\n\nObject Output for HistEq Training Data\n\nAgain running our classifier using the headless display (command below), we can generate a TIFF probability mapping.\nHowever, upon analyzing the object probabilities, I discovered that both are just black, blank TIFFs. Apparently, these stacks each had 0 objects.\nObject Output for New Data (both)\n\nProblems:\nAs I mentioned briefly previously, I spent a lot of time fiddling with the inputs/outputs such that I could then run the classifier on new data. After running the object classifier on the new TIFF slice in both cases, my TIFF version of the probability map is completely black. Evidently, this indicates one of two problems:\n 1) I have not generated an object classifier (unlikely, given that I can see objects showing up on the predictions in Ilastik)\n 2) I am improperly using the headless display to call my classifier (likely, given the fiddling I went through to get that step to work). \nAn obvious workaround would be loading the new data inside the batch predictions menu built into Ilastik. I spent time trying to do so, but I ran into some odd challenges. When I loaded the individual TIFF slices, they're obviously the wrong dimensions (since they're each x-y-greyscale, while the classifier takes z-y-x-greyscale). When I created a TIFF stack, Ilastik threw a completely mysterious error (that, after Googling, seems to be resolved by redownloading an older version of Ilastik). I tried to do this, but that didn't work - the issue still happens to be open on their Github page.\nJustifications for Sample Size\nAs per my deliverable, I'm supposed to provide reasoning for my sample sizes and trials. 
I chose 12 samples because:\n 1) There are over 1300 total Z slices, and I need a manageable subset\n 2) When I started with 6 and 10 slices, the classifier didn't find any objects (see last step) \n12 was the minimum number of slices at which any raw data objects showed up in the classifier, which is why I used 12 for the raw data.\nAs for the HistEq reasoning, I went with what worked for the raw data to keep the two runs consistent.\nFuture goals:\nEventually, we want our methodology to do something similar to this: http://ilastik.org/documentation/counting/counting. We want to be able to use our pixel-based classifier to estimate the number of neurons in a given 3D image.\nHowever, much to my chagrin, this counting workflow currently does NOT support 3D data -- it only supports 2D data. We will need to do some tweaking to get it to work for our volumes." ]
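Editor's note: the axis-order explanation in Task 1 -- nibabel's `get_data()` giving (x, y, z) while `sitk.GetArrayFromImage` gives (z, y, x) -- is easy to sanity-check with a tiny NumPy sketch. The array below is a zero-filled stand-in using the Fear199 dimensions quoted above, not the actual volume.

```python
import numpy as np

# Zero-filled stand-in with the nibabel-style shape (x, y, z) = (540, 717, 1358).
nib_style = np.zeros((540, 717, 1358), dtype=np.uint8)

# Swapping axes 0 and 2 reorders the dimensions to (z, y, x), matching the array
# returned by sitk.GetArrayFromImage, so indexing axis 0 now yields true z-slices.
sitk_style = nib_style.swapaxes(0, 2)

print(nib_style.shape)   # (540, 717, 1358)
print(sitk_style.shape)  # (1358, 717, 540)
```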
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JonasHarnau/apc
apc/vignettes/vignette_mesothelioma.ipynb
gpl-3.0
[ "Mesothelioma Mortality\nWe loosely replicate the empirical parts of Martinez Miranda et al. (2015).\nFirst, import the package", "import apc\n\n# Turn off FutureWarning from statsmodels\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)", "Take a first look at the data", "data = apc.asbestos()\ndata.head()", "Set up a model and attach the data to it.", "model = apc.Model()\nmodel.data_from_df(data)", "Now we look at a first plot of the data. We plot the response over each of the three time-scales.", "model.plot_data_sums(figsize=(10,4))", "Martinez Miranda et al. (2015), drop age groups older than 89 due to sparsity. We redo the plot looking exclusively at data for these age groups.", "model.sub_model(age_from_to=(80,None)).plot_data_sums(figsize=(10,4))", "We can see that there is indeed a sharp drop towards the end of the sample. Thus, we set up a sub-model that does not include these groups.", "model = model.sub_model(age_from_to=(None, 89))", "To confirm, we take a look at the data in vector form as organized by data_from_df:", "model.data_vector.tail()", "Success! The oldest age groups have been removed.\nNext, we plot the data of one time-scale within another.", "model.plot_data_within(figsize=(10,8), logy=True)", "From the cohort within period plot (bottom middle), we can see that mortality seems to slowly taper off for the 1917-1938 cohorts while that for the 1939-1960 appears to still be rising.\nNext, we fit a deviance table of a Poisson model to the data.", "model.fit_table('poisson_response')\nmodel.deviance_table", "We see that an age-period-cohort model cannot be rejected with a p-value of 0.85 (against a saturated model with as many parameters as observations). The same holds for an age-cohort model with a p-value of 0.78. A reduction from an age-period-cohort to an age-cohort model yields a p-value of 0.03. Miranda et al. point out that it may still be acceptable to use this model since it eases forecasting substantially: it makes it unnecessary to extrapolate the period parameters into the future which would introduce another source of uncertainty. Further, simpler models often seem to be beneficial for forecasting.\nRemark: see Nielsen (2014) for an explanation of the individual predictors.\nWe thus fit an age-cohort model to the data.", "model.fit('poisson_response', 'AC')", "We can now plot the parameters and their standard errors.", "model.plot_parameters(around_coef=False)", "The level combined with the two linear trends specify a plane. The detrended double sums of double differences in the bottom row show deviations over and above this plane. To obtain the fitted value of the linear predictor for a given age and cohort, we add together the level, and the value of the linear trends and detrended double sums at the relevant age and cohort. The fitted value for the response would be the exponential of this value. The detrended double sums start and end in zero by design. \nWe can move on to look at a residual plot.", "model.plot_residuals('deviance')", "We would like this plot to look like white noise. While this looks quite reasonable for the upper age groups, the pattern appears somewhat different for the lower ages up to about 37. It seems that the fit there is generally somewhat better. From what we saw above, this may relate to the fact that the death counts for these age groups are quite low so that predicting something close to zero is going to give good results. 
Fitting these cells seems somewhat 'easier'.\nNext, we move on to forecast from the model. The idea is to forecast mortality for future periods based on parameter estimates that are already available from the data. We can visualize that in a heatmap plot in age-cohort space.", "model.plot_data_heatmaps(space='AC')", "Here, the idea is to fill in the empty values in the bottom right triangle. Estimates for age and cohort effects for these cells are already available. Since we do not have a period effect in the model, this is all we need.\nWe can now forecast from the model.", "model.forecast()", "This call generated (distribution) forecasts for individual cells, as well as aggregated by age, period and cohort. In the heatmap plot above, these correspond to row, column and (counter-)diagonal sums in the lower right triangle, respectively. Finally, a forecast for the total, that is the sum over all cells in the triangle, is available.\nWe find the peak in the point forecasts.", "peak_year = model.forecasts['Period']['point_forecast'].idxmax()\nprint('Peak year is {}.'.format(peak_year))\nmodel.forecasts['Period'].loc[peak_year-2:peak_year+2]", "We can see that the generated arrays include not just the point forecast but also standard errors (broken down into process and estimation error) and quantile forecasts.\nNext, we plot the forecasts aggregated by period.", "model.plot_forecast()", "The plot includes one and two standard error bands. If we look closely, we can see that the fit seems to be somewhat worse for the last couple periods before the sample ends. One way to correct this is apply intercept correction. Martinez Miranda et al. (2015) suggest to multiply the point forecasts by the ratio of the last realization to the last fitted value.", "final_realized = model.data_vector.sum(level='Period').sort_index().iloc[-1][0]\nfinal_fitted = model.fitted_values.sum(level='Period').sort_index().iloc[-1]\nprint('Death counts for last period: {}'.format(final_realized))\nprint('Fitted value for last period: {:.2f}'.format(final_fitted))\nprint('Intercept correction factor: {:.2f}'.format(final_realized/final_fitted))", "We take a look at the plot with intercept correction, limiting our attention toe the period from 1990 to 2040.", "model.plot_forecast(ic=True, from_to=(1990,2040))", "This plot does have a more natural appearance, lacking the jump at the end of the sample.\nWe can also look at forecasts over a different time scale, for example by age. In this case, we already have some data available for all age groups under consideration. We may then be more interested not just in the forecasts, but maybe in the sum of response and forecast.", "model.plot_forecast(by='Age', aggregate=True)", "We can see that the forecast tells us to expect a shift in the mortality peak from age around 70 to the late 70s.\nReferences\n\nMartínez Miranda, M. D., Nielsen, B., & Nielsen, J. P. (2015). Inference and forecasting in the age-period-cohort model with unknown exposure with an application to mesothelioma mortality. Journal of the Royal Statistical Society: Series A (Statistics in Society), 178(1), 29–55.\nNielsen, B. (2014). Deviance analysis of age-period-cohort models. Nuffield Discussion Paper, (W03). Download" ]
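Editor's note: since the intercept correction described above is just a multiplicative rescaling of the point forecasts, it can also be applied by hand. The sketch below assumes the fitted `model` from the cells above with `model.forecast()` already called, and only reuses attributes that appear in this vignette.

```python
# Sketch: manual intercept correction, reusing objects defined earlier in the vignette.
final_realized = model.data_vector.sum(level='Period').sort_index().iloc[-1][0]
final_fitted = model.fitted_values.sum(level='Period').sort_index().iloc[-1]
ic_factor = final_realized / final_fitted   # ratio of last realization to last fitted value

corrected = model.forecasts['Period']['point_forecast'] * ic_factor
print('Peak year with intercept correction: {}'.format(corrected.idxmax()))
```

This appears to be the same rescaling that `plot_forecast(ic=True)` applies to the point forecasts in the plot shown above.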
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/hadgem3-gc31-mm/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: HADGEM3-GC31-MM\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:15\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. 
Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. 
Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jasontlam/snorkel
tutorials/intro/Intro_Tutorial_2.ipynb
apache-2.0
[ "Intro. to Snorkel: Extracting Spouse Relations from the News\nPart II: Generating and modeling noisy training labels\nIn this part of the tutorial, we will write labeling functions which express various heuristics, patterns, and weak supervision strategies to label our data.\nIn most real-world settings, hand-labeled training data is prohibitively expensive and slow to collect. A common scenario, though, is to have access to tons of unlabeled training data, and have some idea of how to label it programmatically. For example:\n\nWe may be able to think of text patterns that would indicate two people mentioned in a sentence are married, such as seeing the word \"spouse\" between the mentions.\nWe may have access to an external knowledge base (KB) that lists some known pairs of married people, and can use these to heuristically label some subset of our data.\n\nOur labeling functions will capture these types of strategies. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.\nThis will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\n\n# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE\n# Note that this is necessary for parallel execution amongst other things...\n# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'\n\nimport numpy as np\nfrom snorkel import SnorkelSession\nsession = SnorkelSession()", "We repeat our definition of the Spouse Candidate subclass from Parts II and III.", "from snorkel.models import candidate_subclass\n\nSpouse = candidate_subclass('Spouse', ['person1', 'person2'])", "Using a labeled development set\nIn our setting here, we will use the phrase \"development set\" to refer to a small set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions.\nIn our case, we already loaded existing labels for a development set (split 1), so we can load them again now:", "from snorkel.annotations import load_gold_labels\n\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)", "Creating and Modeling a Noisy Training Set\nOur biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps:\n\n\nCreating labeling functions (LFs): This is where most of our development time would actually go into if this were a real application. Labeling functions encode our heuristics and weak supervision signals to generate (noisy) labels for our training candidates.\n\n\nApplying the LFs: Here, we actually use them to label our candidates!\n\n\nTraining a generative model of our training set: Here we learn a model over our LFs, learning their respective accuracies automatically. This will allow us to combine them into a single, higher-quality label set.\n\n\nWe'll also add some detail on how to go about developing labeling functions and then debugging our model of them to improve performance.\n1. 
Creating Labeling Functions\nIn Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing labeling functions (LFs) (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below.\nA labeling function is just a Python function that accepts a Candidate and returns 1 to mark the Candidate as true, -1 to mark the Candidate as false, and 0 to abstain from labeling the Candidate (note that the non-binary classification setting is covered in the advanced tutorials!).\nIn the next stages of the Snorkel pipeline, we'll train a model to learn the accuracies of the labeling functions and reweight them accordingly, and then use them to train a downstream model. It turns out that by doing this, we can get high-quality models even with lower-quality labeling functions. So they don't need to be perfect! Now on to writing some:", "import re\nfrom snorkel.lf_helpers import (\n get_left_tokens, get_right_tokens, get_between_tokens,\n get_text_between, get_tagged_text,\n)", "Pattern-based LFs\nThese LFs express some common sense text patterns which indicate that a person pair might be married. For example, LF_husband_wife looks for words from the spouses set between the person mentions, and LF_same_last_name checks to see if the two people have the same last name (but aren't the same whole name).", "spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'}\nfamily = {'father', 'mother', 'sister', 'brother', 'son', 'daughter',\n 'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'}\nfamily = family | {f + '-in-law' for f in family}\nother = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'}\n\n# Helper function to get last name\ndef last_name(s):\n name_parts = s.split(' ')\n return name_parts[-1] if len(name_parts) > 1 else None \n\ndef LF_husband_wife(c):\n return 1 if len(spouses.intersection(get_between_tokens(c))) > 0 else 0\n\ndef LF_husband_wife_left_window(c):\n if len(spouses.intersection(get_left_tokens(c[0], window=2))) > 0:\n return 1\n elif len(spouses.intersection(get_left_tokens(c[1], window=2))) > 0:\n return 1\n else:\n return 0\n \ndef LF_same_last_name(c):\n p1_last_name = last_name(c.person1.get_span())\n p2_last_name = last_name(c.person2.get_span())\n if p1_last_name and p2_last_name and p1_last_name == p2_last_name:\n if c.person1.get_span() != c.person2.get_span():\n return 1\n return 0\n\ndef LF_no_spouse_in_sentence(c):\n return -1 if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else 0\n\ndef LF_and_married(c):\n return 1 if 'and' in get_between_tokens(c) and 'married' in get_right_tokens(c) else 0\n \ndef LF_familial_relationship(c):\n return -1 if len(family.intersection(get_between_tokens(c))) > 0 else 0\n\ndef LF_family_left_window(c):\n if len(family.intersection(get_left_tokens(c[0], window=2))) > 0:\n return -1\n elif len(family.intersection(get_left_tokens(c[1], window=2))) > 0:\n return -1\n else:\n return 0\n\ndef LF_other_relationship(c):\n return -1 if len(other.intersection(get_between_tokens(c))) > 0 else 0", "Distant Supervision LFs\nIn addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. 
Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these.", "import bz2\n\n# Function to remove special characters from text\ndef strip_special(s):\n return ''.join(c for c in s if ord(c) < 128)\n\n# Read in known spouse pairs and save as set of tuples\nwith bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f:\n known_spouses = set(\n tuple(strip_special(x.decode('utf-8')).strip().split(',')) for x in f.readlines()\n )\n# Last name pairs for known spouses\nlast_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)])\n \ndef LF_distant_supervision(c):\n p1, p2 = c.person1.get_span(), c.person2.get_span()\n return 1 if (p1, p2) in known_spouses or (p2, p1) in known_spouses else 0\n\ndef LF_distant_supervision_last_names(c):\n p1, p2 = c.person1.get_span(), c.person2.get_span()\n p1n, p2n = last_name(p1), last_name(p2)\n return 1 if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else 0", "For later convenience we group the labeling functions into a list.", "LFs = [\n LF_distant_supervision, LF_distant_supervision_last_names, \n LF_husband_wife, LF_husband_wife_left_window, LF_same_last_name,\n LF_no_spouse_in_sentence, LF_and_married, LF_familial_relationship, \n LF_family_left_window, LF_other_relationship\n]", "Developing Labeling Functions\nAbove, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF:", "def LF_wife_in_sentence(c):\n \"\"\"A simple example of a labeling function\"\"\"\n return 1 if 'wife' in c.get_parent().words else 0", "One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. For example, we can easily get every candidate that this LF labels as true:", "labeled = []\nfor c in session.query(Spouse).filter(Spouse.split == 1).all():\n if LF_wife_in_sentence(c) != 0:\n labeled.append(c)\nprint(\"Number labeled:\", len(labeled))", "We can then easily put this into the Viewer as usual (try it out!):\nSentenceNgramViewer(labeled, session)\nWe also have a simple helper function for getting the empirical accuracy of a single LF with respect to the development set labels for example. This function also returns the evaluation buckets of the candidates (true positive, false positive, true negative, false negative):", "from snorkel.lf_helpers import test_LF\ntp, fp, tn, fn = test_LF(session, LF_wife_in_sentence, split=1, annotator_name='gold')", "2. Applying the Labeling Functions\nNext, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. Note that this will delete any existing Labels and LabelKeys for this candidate set. We start by setting up the class:", "from snorkel.annotations import LabelAnnotator\nlabeler = LabelAnnotator(lfs=LFs)", "Finally, we run the labeler. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. 
Again, this can be run in parallel, given an appropriate database like Postgres is being used:", "np.random.seed(1701)\n%time L_train = labeler.apply(split=0)\nL_train", "If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too:", "%time L_train = labeler.load_matrix(session, split=0)\nL_train", "Note that the returned matrix is a special subclass of the scipy.sparse.csr_matrix class, with some special features which we demonstrate below:", "L_train.get_candidate(session, 0)\n\nL_train.get_key(session, 0)", "We can also view statistics about the resulting label matrix.\n\nCoverage is the fraction of candidates that the labeling function emits a non-zero label for.\nOverlap is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a non-zero label for.\nConflict is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a conflicting non-zero label for.", "L_train.lf_stats(session)", "3. Fitting the Generative Model\nNow, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.", "from snorkel.learning import GenerativeModel\n\ngen_model = GenerativeModel()\ngen_model.train(L_train, epochs=100, decay=0.95, step_size=0.1 / L_train.shape[0], reg_param=1e-6)\n\ngen_model.weights.lf_accuracy", "We now apply the generative model to the training candidates to get the noise-aware training label set. We'll refer to these as the training marginals:", "train_marginals = gen_model.marginals(L_train)", "We'll look at the distribution of the training marginals:", "import matplotlib.pyplot as plt\nplt.hist(train_marginals, bins=20)\nplt.show()", "We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model:", "gen_model.learned_lf_stats()", "Using the Model to Iterate on Labeling Functions\nNow that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set:", "L_dev = labeler.apply_existing(split=1)", "And finally, we get the score of the generative model:", "tp, fp, tn, fn = gen_model.error_analysis(session, L_dev, L_gold_dev)", "Interpreting Generative Model Performance\nAt this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here! \nIn fact, it is probably somewhat overfit to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a blind test set (i.e. one we never looked at during development).\nDoing Some Error Analysis\nAt this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. 
To do this, we can again just use the Viewer:", "from snorkel.viewer import SentenceNgramViewer\n\n# NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook\n# You should ignore this!\nimport os\nif 'CI' not in os.environ:\n sv = SentenceNgramViewer(fn, session)\nelse:\n sv = None\n\nsv\n\nc = sv.get_selected() if sv else list(fp.union(fn))[0]\nc", "We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax:", "c.labels", "We can also now explore some of the additional functionalities of the lf_stats method for our dev set LF labels, L_dev: we can plug in the gold labels that we have, and the accuracies that our generative model has learned:", "L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])", "Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy.\nSaving our training labels\nFinally, we'll save the training_marginals, which are our probabilistic training labels, so that we can use them in the next tutorial to train our end extraction model:", "from snorkel.annotations import save_marginals\n%time save_marginals(session, L_train, train_marginals)", "Next, in Part III, we'll use these probabilistic training labels to train a deep neural network." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anabranch/data_analysis_with_python_and_pandas
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
apache-2.0
[ "NumPy Array Basics - Vectorization", "import sys\nprint(sys.version)\nimport numpy as np\nprint(np.__version__)\n\nnpa = np.random.random_integers(0,50,20)", "Now I’ve harped on about vectorization in the last couple of videos and I’ve told you that it’s great but I haven’t shown you how it’s so great.\nHere are the two powerful reasons\n- Concise\n- Efficient\nThe fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.\nYou can read more here:\nhttps://en.wikipedia.org/wiki/Array_programming", "npa", "With vectorization we can apply changes to the entire array extremely efficiently, no more for loops. If we want to double the array, we just multiply by 2 if we want to cube it we just cube it.", "npa * 2\n\nnpa ** 3\n\n[x * 2 for x in npa]", "So who cares? Again it’s going to be efficiency thing just like boolean selection Let’s try something a bit more complex.\nDefine a function named new_func that cubes the value if it is less than 5 and squares it if it is greater or equal to 5.", "def new_func(numb):\n if numb < 10:\n return numb**3\n else:\n return numb**2\n\nnew_func(npa)", "However we can’t just pass in the whole vector because we’re going to get this array ambiguity.", "?np.vectorize", "We need to vectorize this operation and we do that with np.vectorize\nWe can then apply that to our entire array and it takes care of the complexity for us. We can think in terms of the data without having to think about each individual element.", "vect_new_func = np.vectorize(new_func)\n\ntype(vect_new_func)\n\nvect_new_func(npa)\n\n[new_func(x) for x in npa]", "It's also much faster to vectorize operations and while these are simple examples the benefits will become apparent as we continue through this course.\nthis has changed since python3 and the list comprehension has gotten much faster. However, this doesn't mean that vectorization is slower, just that it's a bit heavier because it places a lot more tools at your disposal like we'll see in the next video.", "%timeit [new_func(x) for x in npa]\n%timeit vect_new_func(npa)\n\nnpa2 = np.random.random_integers(0,100,20*1000)", "Speed comparisons with size.", "%timeit [new_func(x) for x in npa2]\n%timeit vect_new_func(npa2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
markomanninen/PLCParser
Hy -level PLCParser.ipynb
mit
[ "<img src=\"plcparser_icon.png\" />\n<center>$A \\land B \\land C$</center>\nPLCParser implemented in Hy ~language\nSee also:\n\nPropositional Logic Clause Parser main project hub\nJupyter notebook with Calypso Hy kernel\n\nMagics\nLoad extension for using magics on the document:", "%load_ext hyPLCParser\n#%reload_ext hyPLCParser", "Line magics\nPrefix and infix support", "%plc (1 and? 1)\n\n%plc (True ⊕ False ⊕ True ⊕ False ⊕ True)\n\n%plc ( ( ∧ 1 1 1 ) ∨ ( ∧ 1 1 0 ) )", "Cell magics\nPrefix and infix support", "%%plc\n#$(1 and? (or? 0 1))", "Registering additional operators", "%%plc\n; register + sign for infix notation\n#>+\n; evaluate code\n#$(1 + (2 + (3)))", "Adding more complex custom operators", "%%plc\n\n; use operator macro to add mean operator with custom symbol\n(defoperator mean x̄ [&rest args]\n  (/ (sum args) (len args)))\n\n; try prefix notation with nested structure\n(print (x̄ 1 2 3 4))\n(print (x̄ 1 2 (x̄ 3 4)))\n\n; note that infix notation in cell magics needs to be prefixed with \n; #$ reader macro marker while in line magics it is not required\n(print #$(1 x̄ 2 x̄ 3 x̄ 4))", "Order of precedence\nBy default order of precedence is from left to right. Here we will use defoperators to define additional operators beyond logical ones. Then for variety we use defmixfix macro to evaluate clause. First evaluation will give 9 as an answer because evaluation is started from 1 + 2 and then that is multiplied", "%%plc\n\n(defoperators * +)\n(print \"First\"\n (defmixfix 1 + 2 * 3))\n\n(defprecedence * +)\n(print \"Second\"\n (defmixfix 1 + 2 * 3))", "Mixing Hy and Python on same cell", "# the first line is hy code supporting infix and prefix logical clauses\n%plc ( 1 and? 1 or? (0) )\n\n# the second line is python code. this is possible because above code is line magics\n[a for a in (1, 2, 3)]", "Normal Hy language support", "%%plc\n\n; just define a function ...\n(defn f [x] (print x))\n\n; ... and call it\n(f 3.1416)\n\n; cant use python code in plc cell magics!\n\n%%plc\n\n; set up variables\n(setv A True B True C True)\n(setv clause \"( A ∧ B ∧ C )\")\n\n; use variables on clause\n(print clause \"=\" #$( A ∧ B ∧ C ))", "The MIT License\nCopyright © 2017 Marko Manninen" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
agile-geoscience/gio
docs/userguide/Read_OpendTect_horizons.ipynb
apache-2.0
[ "Read OpendTect horizons\nThe best way to export horizons from OpendTect is with these options:\n\nx/y and inline/crossline\nwith header (single or multi-line, it doesn't matter)\nchoose all the attributes you want\n\nOn the last point, if you choose multiple horizons in one file, you can only have one attribute in the file. \nIL/XL only, single-line header, multiple attributes", "import gio\n\nds = gio.read_odt('data/OdT/3d_horizon/Segment_ILXL_Single-line-header.dat')\nds\n\nds['twt'].plot()", "IL/XL and XY, multi-line header, multiple attributes\nLoad everything (default)\nX and Y are loaded as cdp_x and cdp_y, to be consistent with the seisnc standard in segysak.", "ds = gio.read_odt('../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat')\nds\n\nimport matplotlib.pyplot as plt\n\nplt.scatter(ds.coords['cdp_x'], ds.coords['cdp_y'], s=5)", "Load only inline, crossline, TWT\nThere is only one attribute here: Z, which is the two-way time of the horizon.\nNote that when loading data from OpendTect, you always get an xarray.Dataset, even if there's only a single attribute. This is because the format supports multiple grids and we didn't want you to have to guess what a given file would produce.", "fname = '../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat'\nnames = ['Inline', 'Crossline', 'Z'] # Must match OdT DAT file.\n\nds = gio.read_odt(fname, names=names)\nds", "XY only\nIf you have a file with no IL/XL, gio can try to load data using only X and Y:\n\nIf there's a header you can load any number of attributes.\nIf there's no header, you can only one attribute (e.g. TWT) automagically...\nOR, if there's no header, you can provide names to tell gio what everything is.\n\ngio must create fake inline and crossline numbers; you can provide an origin and a step size. For example, notice above that the true inline and crossline numbers are:\n\ninline: 376, 378, 380, etc.\ncrossline: 812, 814, 816, etc.\n\nSo we can pass an origin of (376, 812) and a step of (2, 2) to mimic these.\nHeader present", "fname = '../data/OdT/3d_horizon/Segment_XY_Single-line-header.dat'\n\nds = gio.read_odt(fname, origin=(376, 812), step=(2, 2))\nds\n\nds['twt'].plot()", "No header, more than one attribute: raises an error", "fname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat'\n\nds = gio.read_odt(fname)\nds\n\n# Raises an error:\n\nfname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat'\n\nds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])\nds\n\nds['twt'].plot()", "Sparse data\nSometimes a surface only exists at a few points, e.g. a 3D seismic interpretation grid. In general, loading data like this is completely safe if you have inline and xline locations. If you only have (x, y) locations, gio will attempt to load it, but you should inspect the result carefullly.", "fname = '../data/OdT/3d_horizon/Nimitz_Salmon_XY-and-ILXL_Single-line-header.dat'\n\nds = gio.read_odt(fname)\nds\n\nds['twt'].plot.imshow()", "There's some sort of artifact with the default plot style, which uses pcolormesh I think.", "ds['twt'].plot()", "Multiple horizons in one file\nYou can export multiple horizons from OpendTect. These will be loaded as one xarray.Dataset as different Data variables. 
(The actual attribute you exported from OdT is always called Z; this information is not retained in the xarray.)", "fname = '../../gio-dev/data/OdT/3d_horizon/multi_horizon/Multi_header_H2_and_H4_X_Y_iL_xL_Z_in_sec.dat'\n\nds = gio.read_odt(fname)\nds\n\nds['F3_Demo_2_FS6'].plot()\n\nds['F3_Demo_4_Truncation'].plot()", "Multi-horizon, no header\nUnfortunately, OdT exports (x, y) in the first two columns, meaning you can't assume that columns 3 and 4 are inline, crossline. So if there's no header, and XY as well as inline/xline, you have to give the column names:", "import gio\n\nfname = '../data/OdT/3d_horizon/Test_Multi_XY-and-ILXL_Z-only.dat'\n\nds = gio.read_odt(fname, names=['Horizon', 'X', 'Y', 'Inline', 'Crossline', 'Z'])\nds", "Undefined values\nThese are exported as '1e30' by default. You can override this (not add to it, which is the default pandas behaviour) by passing one or more na_values.", "fname = '../data/OdT/3d_horizon/Segment_XY_No-header_NULLs.dat'\n\nds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])\nds\n\nds['twt'].plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JAmarel/Phys202
Matplotlib/Matplotlib.ipynb
mit
[ "Visualization with Matplotlib\nLearning Objectives: Learn how to make basic plots using Matplotlib's pylab API and how to use the Matplotlib documentation.\nThis notebook focuses only on the Matplotlib API, rather that the broader question of how you can use this API to make effective and beautiful visualizations.\nImports\nThe following imports should be used in all of your notebooks where Matplotlib in used:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Overview\nThe following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.\nFigures and Axes\n\nIn Matplotlib a single visualization is a Figure.\nA Figure can have multiple areas, called subplots. Each subplot is an Axes.\nIf you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.\nAll plotting commands apply to the current Figure and Axes.\n\nThe following functions can be used to create and manage Figure and Axes objects.\nFunction | Description \n:-----------------|:----------------------------------------------------------\nfigure | Creates a new Figure\ngca | Get the current Axes instance\nsavefig | Save the current Figure to a file\nsca | Set the current Axes instance\nsubplot | Create a new subplot Axes for the current Figure\nsubplots | Create a new Figure and a grid of subplots Axes\nPlotting Functions\nOnce you have created a Figure and one or more Axes objects, you can use the following function to put data onto that Axes.\nFunction | Description\n:-----------------|:--------------------------------------------\nbar | Make a bar plot\nbarh | Make a horizontal bar plot\nboxplot | Make a box and whisker plot\ncontour | Plot contours\ncontourf | Plot filled contours\nhist | Plot a histogram\nhist2d | Make a 2D histogram plot\nimshow | Display an image on the axes\nmatshow | Display an array as a matrix\npcolor | Create a pseudocolor plot of a 2-D array\npcolormesh | Plot a quadrilateral mesh\nplot | Plot lines and/or markers\nplot_date | Plot with data with dates\npolar | Make a polar plot\nscatter | Make a scatter plot of x vs y\nPlot modifiers\nYou can then use the following functions to modify your visualization.\nFunction | Description\n:-----------------|:---------------------------------------------------------------------\nannotate | Create an annotation: a piece of text referring to a data point\nbox | Turn the Axes box on or off\nclabel | Label a contour plot\ncolorbar | Add a colorbar to a plot\ngrid | Turn the Axes grids on or off\nlegend | Place a legend on the current Axes\nloglog | Make a plot with log scaling on both the x and y axis\nsemilogx | Make a plot with log scaling on the x axis \nsemilogy | Make a plot with log scaling on the y axis\nsubplots_adjust | Tune the subplot layout\ntick_params | Change the appearance of ticks and tick labels\nticklabel_format| Change the ScalarFormatter used by default for linear axes\ntight_layout | Automatically adjust subplot parameters to give specified padding\ntext | Add text to the axes\ntitle | Set a title of the current axes\nxkcd | Turns on XKCD sketch-style drawing mode\nxlabel | Set the x axis label of the current axis\nxlim | Get or set the x limits of the current axes\nxticks | Get or set the x-limits of the current tick locations and labels\nylabel | Set the y axis label of the current axis\nylim | Get or set the y-limits of the current axes\nyticks | Get or set the y-limits of the current tick locations and labels\nBasic plotting\nFor now, 
we will work with basic line plots (plt.plot) to show how the Matplotlib pylab plotting API works. In this case, we don't create a Figure so Matplotlib does that automatically.", "t = np.linspace(0, 10.0, 100)\nplt.plot(t, np.sin(t))\nplt.xlabel('Time')\nplt.ylabel('Signal')\nplt.title('My Plot'); # supress text output", "Basic plot modification\nWith a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.", "f = plt.figure(figsize=(9,6)) # 9\" x 6\", default is 8\" x 5.5\"\n\nplt.plot(t, np.sin(t), 'r.');\nplt.xlabel('x')\nplt.ylabel('y')", "Here is a list of the single character color strings:\nb: blue\n g: green\n r: red\n c: cyan\n m: magenta\n y: yellow\n k: black\n w: white\nThe following will show all of the line and marker styles:", "from matplotlib import lines\nlines.lineStyles.keys()\n\nfrom matplotlib import markers\nmarkers.MarkerStyle.markers.keys()", "To change the plot's limits, use xlim and ylim:", "plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')\nplt.xlim(-1.0, 11.0)\nplt.ylim(-1.0, 1.0)", "You can change the ticks along a given axis by using xticks, yticks and tick_params:", "plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')\nplt.xlim(0.0, 10.0)\nplt.ylim(-1.0, 1.0)\nplt.xticks([0,5,10], ['zero','five','10'])\nplt.tick_params(axis='y', direction='inout', length=10)", "Box and grid\nYou can enable a grid or disable the box. Notice that the ticks and tick labels remain.", "plt.plot(np.random.rand(100), 'b-')\nplt.grid(True)\nplt.box(False)", "Multiple series\nMultiple calls to a plotting function will all target the current Axes:", "plt.plot(t, np.sin(t), label='sin(t)')\nplt.plot(t, np.cos(t), label='cos(t)')\nplt.xlabel('t')\nplt.ylabel('Signal(t)')\nplt.ylim(-1.5, 1.5)\nplt.xlim(right=12.0)\nplt.legend()", "Subplots\nSubplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.\nThe first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure:", "plt.subplot(2,1,1) # 2 rows x 1 col, plot 1\nplt.plot(t, np.exp(0.1*t))\nplt.ylabel('Exponential')\n\nplt.subplot(2,1,2) # 2 rows x 1 col, plot 2\nplt.plot(t, t**2)\nplt.ylabel('Quadratic')\nplt.xlabel('x')\n\nplt.tight_layout()", "In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:", "f, ax = plt.subplots(2, 2,figsize=(15,5))\n\nfor i in range(2):\n for j in range(2):\n plt.sca(ax[i,j])\n plt.plot(np.random.rand(20))\n plt.xlabel('x')\n plt.ylabel('y')\n\nplt.tight_layout()", "The subplots function also makes it easy to pass arguments to Figure and to share axes:", "f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))\n\nfor i in range(2):\n for j in range(2):\n plt.sca(ax[i,j])\n plt.plot(np.random.rand(20))\n if i==1:\n plt.xlabel('x')\n if j==0:\n plt.ylabel('y')\n\nplt.tight_layout()", "More marker and line styling\nAll plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. 
For more information see:\n\nControlling line properties\nSpecifying colors", "plt.plot(t, np.sin(t), marker='o', color='darkblue',\n         linestyle='--', alpha=0.3, markersize=10)", "Resources\n\nMatplotlib Documentation, Matplotlib developers.\nMatplotlib Gallery, Matplotlib developers.\nMatplotlib List of Plotting Commands, Matplotlib developers.\nAnatomyOfMatplotlib, Benjamin Root.\nMatplotlib Tutorial, J.R. Johansson." ]
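To tie together the pylab functions listed in the tables above (figure, plot, xlabel, legend, grid, savefig), here is a short, self-contained recap sketch. It is not part of the original notebook, and the figure size and the output filename 'pylab_recap.png' are arbitrary choices for illustration.

```python
# A minimal recap of the pylab workflow covered above: create a Figure,
# plot two labelled series, annotate the axes, and save the result.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10.0, 100)

plt.figure(figsize=(8, 5))                     # create a new Figure
plt.plot(t, np.sin(t), 'b-', label='sin(t)')   # first series (blue solid line)
plt.plot(t, np.cos(t), 'r--', label='cos(t)')  # second series (red dashed line)
plt.xlabel('t')
plt.ylabel('Signal')
plt.title('Recap of the pylab workflow')
plt.legend()
plt.grid(True)
plt.savefig('pylab_recap.png', dpi=150)        # save the current Figure to a file
```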
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jnobre/lxmls-toolkit-2017
.ipynb_checkpoints/Lxmls_Day5-checkpoint.ipynb
mit
[ "LXMLS 2017 - Day 5\nDeep Learning I\nDeep learning is the name behind the latest wave of neural network research. This is a very old topic, dating\nfrom the first half of the 20th century, that has attained formidable impact in the machine learning community\nrecently. There is nothing particularly difficult in deep learning. You have already visited all the mathematical\nprinciples you need in the first days of the labs of this school. At their core, deep learning models are\njust functions mapping vector inputs x to vector outputs y, constructed by composing linear and non-linear\nfunctions. This composition can be expressed in the form of a computation graph, where each node applies a\nfunction to its inputs and passes the result as its output. The parameters of the model are the weights given\nto the different inputs in nodes applying linear functions. This vaguely resembles synapse strengths in human\nneural networks, hence the name artificial neural networks.\nDue to their compositional nature, gradient methods and the chain rule can be applied learn the parameters\nof these models regardless of their complexity. See Section for a refresh on the basic concept. We will also refer\nto the gradient learning methods introduced in Section 1.4.4. Today we will focus on feed-forward networks.\nTomorrow we will extended today’s class to recurrent neural networks (RNNs).\nSome of the changes that led to the surge of deep learning are not only improvements on the existing neural\nnetwork algorithms, but also the increase in the amount of data available and computing power. In particular,\nthe use of Graphical Processing Units (GPUs) has allowed neural networks to be applied to very large datasets.\nWorking with GPUs is not trivial as it requires dealing with specialized hardware. Luckily, as it is often the\ncase, we are one Python import away from solving this problem.\nFor the particular case of deep learning, there is a growing number of python toolboxes available that allow\nyou to design custom computational graphs for GPUs as e.g. Theano1 or TensorFlow2\n.\nIn these labs we will be working with Theano. Theano allows us to express computation graphs symbolically\nin terms of basic algebraic operations. It also automatically computes gradients and produces CUDAcompatible\ncode for GPUs. The exercises are designed to gain a low-level understanding of Theano. 
If you\nare only looking forward to utilize pre-designed models, the Keras toolbox3 provides high-level operations\ncompatible with both Theano and TensorFlow.\nExercise 5.1 - Start by loading Amazon sentiment corpus used in day 1", "import numpy as np\nimport lxmls.readers.sentiment_reader as srs\nscr = srs.SentimentCorpus(\"books\")\ntrain_x = scr.train_X.T\ntrain_y = scr.train_y[:, 0]\ntest_x = scr.test_X.T\ntest_y = scr.test_y[:, 0]", "Go to lxmls/deep learning/mlp.py:class NumpyMLP:def grads() and complete the code of the NumpyMLP class with\nthe Backpropagation recursion that we just saw.", " def grads(self, x, y):\n \"\"\"\n Computes the gradients of the network with respect to cross entropy\n error cost\n \"\"\"\n\n # Run forward and store activations for each layer\n activations = self.forward(x, all_outputs=True)\n\n # For each layer in reverse store the gradients for each parameter\n nabla_params = [None] * (2*self.n_layers)\n\n for n in np.arange(self.n_layers-1, -1, -1):\n\n # Get weigths and bias (always in even and odd positions)\n # Note that sometimes we need the weight from the next layer\n W = self.params[2*n]\n b = self.params[2*n+1]\n if n != self.n_layers-1:\n W_next = self.params[2*(n+1)]\n\n # Solving Exercise 5.1\n if n < self.n_layers - 1 :\n ent = np.dot(W_next.T, ent )\n ent *= activations[ n ] * (1 - activations[ n ] ) # This is correct but confusing n+1 is n in the guide\n else: # NOTE: This assumes cross entropy cost\n if self.actvfunc[ n ] == 'sigmoid':\n ent = ( activations[ n ] - y ) / y.shape[ 0 ]\n elif self.actvfunc[ n ] == 'softmax':\n I = index2onehot( y , W.shape[ 0 ] )\n ent = (activations[ n ] - I ) / y.shape[ 0 ]\n\n nabla_W = np.zeros( W.shape )\n for l in np.arange( ent.shape[ 1 ] ):\n if n == 0:\n nabla_W += np.outer( ent[ :, l ], x[ :, l] )\n else:\n nabla_W += np.outer( ent[ :, l ], activations[ n - 1] [ :, l ] )\n nabla_b = np.sum( ent , 1, keepdims=True )\n\n #End of the solution 5.1\n\n # Store the gradients\n nabla_params[2*n] = nabla_W\n nabla_params[2*n+1] = nabla_b\n\n return nabla_params", "Once you are done. Try different network geometries by increasing the number of layers and layer sizes e.g.", "# Neural network modules\nimport lxmls.deep_learning.mlp as dl\nimport lxmls.deep_learning.sgd as sgd\n# Model parameters\ngeometry = [train_x.shape[0], 20, 2]\nactvfunc = ['sigmoid', 'softmax']\n# Instantiate model\nmlp = dl.NumpyMLP(geometry, actvfunc)", "You can test the different hyperparameters", "# Model parameters\nn_iter = 5\nbsize = 5\nlrate = 0.01\n# Train\nsgd.SGD_train(mlp, n_iter, bsize=bsize, lrate=lrate, train_set=(train_x, train_y))\nacc_train = sgd.class_acc(mlp.forward(train_x), train_y)[0]\nacc_test = sgd.class_acc(mlp.forward(test_x), test_y)[0]\nprint \"MLP (%s) Amazon Sentiment Accuracy train: %f test: %f\" % (geometry, acc_train,acc_test)", "5.4.4 Some final reflections on Backpropagation\nIf you are new to the neural network topic, this is about the most important piece of theory you should learn\nabout deep learning. Here are some reflections that you should keep in mind.\n* Thanks to the multi-layer structure and the chain rule, Backpropagation allows models that compose\nlinear and non-linear functions with any depth (in principle5\n).\n* The formulas are also valid for other cost functions and output layer non-linearities with minor modifi-\ncations. It is only necessary to compute the equivalent of Eq. 5.13.\n* The formulas are also valid for hidden non-linearities other than the sigmoid. 
Element-wise non-linear\ntransformations still allow the simplification in Eq. 5.21. With little effort it is also possible to deal with\nother cases.\n* However, there is an important limitation: unlike the log-linear models, the optimization problem is non\nconvex. This removes some formal guarantees, most importantly we can get trapped in local minima\nduring training\n5.5 Deriving gradients and GPU code with Theano\n5.5.1 An Introduction to Theano\nAs you may have observed, the speed of SGD training for MLPs slows down considerably when we increase the number of layers. One reason for this is that the code that we use here is not very optimized. It is thought for you to learn the basic principles. Even if the code was more optimized, it would still be very slow for reasonable network sizes. The cost of computing each linear layer is proportional to the dimensionality of the previous and current layers, which in most cases will be rather large.\nFor this reason most deep learning applications use Graphics Processing Units (GPU) in their computations. This specialized hardware is normally used to accelerate computer graphics, but can also be used for general computation intensive tasks. However, we need to deal with specific interfaces and operations in order to use a GPU. This is where Theano comes in. Theano is a multidimensional symbolic expression python module\nwith focus on neural networks. It will provide us with the following nice features:\n\nSymbolic expressions: Express the operations of the MLP (forward pass, cost) symbolically, as mathematical operations rather than explicit code\nSymbolic Differentiation: As a consequence of the previous feature, we can compute gradients of arbitrary mathematical functions automatically.\nGPU integration: The code will be ready to work on a GPU, provided that you have one and it is active within Theano. It will also be faster on normal CPUs since the symbolic operations are compiled to C\ncode.\nTheano is focused on Deep Learning, with an active community and several tutorials easily available. However, this does not come at a free price. There are a number of limitations\nSymbolic algebra is more difficult to debug, as we can not easily step in at each operation.\nWorking with CPU and GPU code implies that we have to be more careful about the types of the variables.\nTheano tends to output long error messages. However, once you get used to it, error messages accurately point the source of the problem.\nHandling recurrent neural networks is much simpler than in Numpy but it still implies working with complicated constructs that are complicated to debug.\n\nExercise 5.2 Get in contact with Theano. Learn the difference between a symbolic representation and a function. Start\nby implementing the first layer of our previous MLP in Numpy", "# Numpy code\nx = test_x # Test set\nW1, b1 = mlp.params[:2] # Weights and bias of fist layer\nz1 = np.dot(W1, x) + b1 # Linear transformation\ntilde_z1 = 1/(1+np.exp(-z1)) # Non-linear transformation", "Now we will implement this in Theano. We start by creating the variables over which we will produce the operations. 
For\nexample the symbolic input is defined as", "# Theano code.\n# NOTE: We use undescore to denote symbolic equivalents to Numpy variables.\n# This is no Python convention!.\nimport theano\nimport theano.tensor as T\n_x = T.matrix('x')\n\n_W1 = theano.shared(value=W1, name='W1', borrow=True)\n_b1 = theano.shared(value=b1, name='b1', borrow=True,broadcastable=(False, True))\n", "Important: One of the main differences between Numpy and theano data is broadcast. In Numpy if we sum an array\nwith shape (N, M) to one with shape (1, M), the second array will be copied N times to form a (N, M) matrix. This is\nknown as broadcasting. In Theano this is not automatic. You need to specify broadcasting explicitly. This is important\nfor example when using a bias, which will be copied to match the number of examples in the batch. In other cases, like\nwhen using variables for recurrent neural networks, broadcast has to be set to False. Broadcast is one of the typical source of errors when you start working with Theano. Keep this in mind.\nNow lets describe the operations we want to do with the variables. Again only symbolically. This is done by replacing\nour usual operations by Theano symbolic ones when necessary e. g. the internal product dot() or the sigmoid. Some\noperations like e.g. + are automatically recognized by Theano (operator overloading).", "_z1 = T.dot(_W1, _x) + _b1\n_tilde_z1 = T.nnet.sigmoid(_z1)\n# Keep in mind that naming variables is useful when debugging\n_z1.name = 'z1'\n_tilde_z1.name = 'tilde_z1'\n\n# Show computation graph\nprint \"\\nThis is my symbolic perceptron\\n\"\ntheano.printing.debugprint(_tilde_z1)\n\n# Compile\nlayer1 = theano.function([_x], _tilde_z1)\n\n# Check Numpy and Theano mactch\nif np.allclose(tilde_z1, layer1(x.astype(theano.config.floatX))):\n print \"\\nNumpy and Theano Perceptrons are equivalent\"\nelse:\n set_trace()\n # raise ValueError, \"Numpy and Theano Perceptrons are different\"", "5.5.2 Symbolic Forward Pass\nExercise 5.3 Complete the method forward() inside of the lxmls/deep learning/mlp.py:class TheanoMLP. Note that this is called only once at the initialization of the class. To debug your implementation put a breakpoint at the init function call. Hint: Note that this is very similar to NumpyMLP.forward(). You just need to keep track of the symbolic variable representing the output of the network after each layer is applied and compile the function at the end. After you are finished instantiate a Theano class and check that Numpy and Theano forward pass are the same.", "def _forward(self, x, all_outputs=False):\n \"\"\"\n Symbolic forward pass\n\n all_outputs = True return symbolic input and intermediate activations\n \"\"\"\n\n # This will store activations at each layer and the input. This is\n # needed to compute backpropagation\n if all_outputs:\n activations = [x]\n\n # Input\n tilde_z = x\n\n # ----------\n # Solution to Exercise 5.3\n for n in range(self.n_layers):\n\n # Get weigths and bias (always in even and odd positions)\n W = self.params[2*n]\n b = self.params[2*n+1]\n\n z = T.dot(W, tilde_z) + b # Linear transformation\n\n # see e.g. 
theano.printing.debugprint(tilde_z)\n z.name = 'z%d' % (n+1)\n\n # Non-linear transformation\n if self.actvfunc[n] == \"sigmoid\":\n tilde_z = T.nnet.sigmoid( z )\n elif self.actvfunc[n] == \"softmax\":\n tilde_z = T.nnet.softmax( z.T ).T\n\n tilde_z.name = 'tilde_z%d' % (n+1) # Name variable\n\n if all_outputs:\n activations.append(tilde_z)\n # End of solution to Exercise 5.3\n # ----------\n\n if all_outputs:\n tilde_z = activations\n\n return tilde_z\n\n\nmlp_a = dl.NumpyMLP(geometry, actvfunc)\nmlp_b = dl.TheanoMLP(geometry, actvfunc)", "To help debugging in Theano is sometimes useful to switch off the optimizer. This helps Theano point out which part\nof the Python code generated the error", "theano.config.optimizer='None'\n\nassert np.allclose(mlp_a.forward(test_x), mlp_b.forward(test_x)),\"ERROR: Numpy and Theano forward passes differ\"", "5.5.3 Symbolic Differentiation\nExercise 5.4 We first see an example that does not use any of the code in TheanoMLP but rather continues from what\nyou wrote in Ex. 5.2. In this exercise you completed a sigmoid layer with Theano. To get some values for the weights\nwe used the first layer of the network you trained in 5.2. Now we are going to use the second layer as well. This is thus assuming that your network in 5.2 has only two layers e.g. the recommended geometry (I, 20, 2). Make sure this is the case before starting this exercise.\nFor the sake of clarity, lets write here the part of Ex. 5.2 that we had completed", "# Get the values from our MLP\nW1, b1 = mlp.params[:2] # Weights and bias of fist layer\n# First layer symbolic variables\n_x = T.matrix('x')\n_W1 = theano.shared(value=W1, name='W1', borrow=True)\n_b1 = theano.shared(value=b1, name='b1', borrow=True, broadcastable=(False, True))\n# First layer symbolic expressions\n_z1 = T.dot(_W1, _x) + _b1\n_tilde_z1 = T.nnet.sigmoid(_z1)", "Now we just need to complete this with the second layer, using a softmax non-linearity", "W2, b2 = mlp.params[2:] # Weights and bias of second (and last!) layer\n# Second layer symbolic variables\n_W2 = theano.shared(value=W2, name='W2', borrow=True)\n_b2 = theano.shared(value=b2, name='b2', borrow=True, broadcastable=(False, True))\n# Second layer symbolic expressions\n_z2 = T.dot(_W2, _tilde_z1) + _b2\n# NOTE: Theano softmax does not support T.nnet.softmax(_z2, axis=1) this is a workaround\n_tilde_z2 = T.nnet.softmax(_z2.T).T", "With this, we could compile a function to obtain the output of the network symb tilde z2 for a given input symb x. In\nthis exercise we are however interested in obtaining the misclassification cost. This is given in Eq: 5.5. First we are going\nto need the symbolic variable for the correct output", "_y = T.ivector('y')", "The minus posterior probability of the class given the input is the same as selecting the k(m)-th softmax output, were\nk(m) is the index of the correct class for xm. If we want to do this for a vector y containing M different examples, we can\nwrite this as", "_F = -T.mean(T.log(_tilde_z2[_y, T.arange(_y.shape[0])]))", "Now obtaining a function that computes the gradient could not be easier", "_nabla_F = T.grad(_F, _W1)\nnabla_F = theano.function([_x, _y], _nabla_F)", "5.5.4 Symbolic mini-batch update\nExercise 5.5 Define the updates list. This is a list where each element is a tuple of a parameter and the update rule to be\napplied that parameter. In this case we are defining the SGD update rule, but take into account that using more complex\nupdate rules like e.g. 
momentum or adam implies just replacing the last line of the following snippet.", "W2, b2 = mlp_a.params[2:4]\n\n# Second layer symbolic variables\n_W2 = theano.shared(value=W2, name='W2', borrow=True)\n_b2 = theano.shared(value=b2, name='b2', borrow=True,\n broadcastable=(False, True))\n_z2 = T.dot(_W2, _tilde_z1) + _b2\n_tilde_z2 = T.nnet.softmax(_z2.T).T\n\n# Ground truth\n_y = T.ivector('y')\n\n# Cost\n_F = -T.mean(T.log(_tilde_z2[_y, T.arange(_y.shape[0])]))\n\n# Gradient\n_nabla_F = T.grad(_F, _W1)\nnabla_F = theano.function([_x, _y], _nabla_F)\n\n# Print computation graph\nprint \"\\nThis is my softmax classification cost\\n\"\ntheano.printing.debugprint(_F)", "Exercise 5.6", "import time\n\n# Understanding the mini-batch function and givens/updates parameters\n\n# Numpy\ngeometry = [train_x.shape[0], 20, 2]\nactvfunc = ['sigmoid', 'softmax']\nmlp_a = dl.NumpyMLP(geometry, actvfunc)\n#\ninit_t = time.clock()\nsgd.SGD_train(mlp_a, n_iter, bsize=bsize, lrate=lrate, train_set=(train_x, train_y))\nprint \"\\nNumpy version took %2.2f sec\" % (time.clock() - init_t)\nacc_train = sgd.class_acc(mlp_a.forward(train_x), train_y)[0]\nacc_test = sgd.class_acc(mlp_a.forward(test_x), test_y)[0]\nprint \"Amazon Sentiment Accuracy train: %f test: %f\\n\" % (acc_train, acc_test)\n\n# Theano grads\nmlp_b = dl.TheanoMLP(geometry, actvfunc)\ninit_t = time.clock()\nsgd.SGD_train(mlp_b, n_iter, bsize=bsize, lrate=lrate, train_set=(train_x, train_y))\nprint \"\\nCompiled gradient version took %2.2f sec\" % (time.clock() - init_t)\nacc_train = sgd.class_acc(mlp_b.forward(train_x), train_y)[0]\nacc_test = sgd.class_acc(mlp_b.forward(test_x), test_y)[0]\nprint \"Amazon Sentiment Accuracy train: %f test: %f\\n\" % (acc_train, acc_test)\n\n# Theano compiled batch\n\n# Cast data into the types and shapes used in the theano graph\n# IMPORTANT: This is the main source of errors when beginning with theano\ntrain_x = train_x.astype(theano.config.floatX)\ntrain_y = train_y.astype('int32')\n\n# Model\nmlp_c = dl.TheanoMLP(geometry, actvfunc)\n\n# Define givens variables to be used in the batch update\n# Get symbolic variables returning a mini-batch of data\n\n# Define updates variable. This is a list of gradient descent updates\n# The output is a list following theano.function updates parameter. This\n# consists on a list of tuples with each parameter and update rule\n_x = T.matrix('x')\n_y = T.ivector('y')\n_F = mlp_c._cost(_x, _y)\nupdates = [(par, par - lrate*T.grad(_F, par)) for par in mlp_c.params]\n\n#\n# Define the batch update function. This will return the cost of each batch\n# and update the MLP parameters at the same time using updates\nbatch_up = theano.function([_x, _y], _F, updates=updates)\nn_batch = int(np.ceil(float(train_x.shape[1])/bsize)) \n#\n\ninit_t = time.clock()\nsgd.SGD_train(mlp_c, n_iter, batch_up=batch_up, n_batch=n_batch, bsize=bsize,\n train_set=(train_x, train_y))\nprint \"\\nTheano compiled batch update version took %2.2f sec\" % (time.clock() - init_t)\ninit_t = time.clock()\n\nacc_train = sgd.class_acc(mlp_c.forward(train_x), train_y)[0]\nacc_test = sgd.class_acc(mlp_c.forward(test_x), test_y)[0]\nprint \"Amazon Sentiment Accuracy train: %f test: %f\\n\" % (acc_train, acc_test)" ]
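Exercise 5.1 above implements the backpropagation recursion by hand. A standard way to gain confidence in such an implementation is a finite-difference gradient check. The sketch below is plain NumPy and deliberately does not use the lxmls toolkit classes; it checks a single sigmoid unit with a squared-error cost, and the shapes, seed, and tolerances are arbitrary choices for illustration.

```python
# Finite-difference check of an analytic gradient, written in plain NumPy.
# Model: a single sigmoid layer p = sigmoid(W x + b) with squared-error cost.
import numpy as np

def forward(W, b, x):
    return 1.0 / (1.0 + np.exp(-(W.dot(x) + b)))

def cost(W, b, x, y):
    return 0.5 * np.sum((forward(W, b, x) - y) ** 2)

def analytic_grad_W(W, b, x, y):
    p = forward(W, b, x)
    delta = (p - y) * p * (1.0 - p)   # chain rule through the sigmoid
    return np.outer(delta, x)         # dCost/dW

def numeric_grad_W(W, b, x, y, eps=1e-6):
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (cost(Wp, b, x, y) - cost(Wm, b, x, y)) / (2.0 * eps)
    return g

rng = np.random.RandomState(0)
W, b = rng.randn(2, 3), rng.randn(2)
x, y = rng.randn(3), np.array([0.0, 1.0])
# Should print True: the analytic and numeric gradients agree to ~1e-5
print(np.allclose(analytic_grad_W(W, b, x, y), numeric_grad_W(W, b, x, y), atol=1e-5))
```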
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thushear/MLInAction
kaggle/cs228-python-tutorial.ipynb
apache-2.0
[ "CS228 Python Tutorial\nAdapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).\nIntroduction\nPython is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.\nWe expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.\nSome of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).\nIn this tutorial, we will cover:\n\nBasic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes\nNumpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting\nMatplotlib: Plotting, Subplots, Images\nIPython: Creating notebooks, Typical workflows\n\nBasics of Python\nPython is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:", "def quicksort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) / 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quicksort(left) + middle + quicksort(right)\n\nprint quicksort([3,6,8,10,1,2,1])", "Python versions\nThere are currently two different supported versions of Python, 2.7 and 3.4. Somewhat confusingly, Python 3.0 introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.4 and vice versa. 
For this class all code will use Python 2.7.\nYou can check your Python version at the command line by running python --version.\nBasic data types\nNumbers\nIntegers and floats work as you would expect from other languages:", "x = 3\nprint x, type(x)\n\nprint x + 1 # Addition;\nprint x - 1 # Subtraction;\nprint x * 2 # Multiplication;\nprint x ** 2 # Exponentiation;\n\nx += 1\nprint x # Prints \"4\"\nx *= 2\nprint x # Prints \"8\"\n\ny = 2.5\nprint type(y) # Prints \"<type 'float'>\"\nprint y, y + 1, y * 2, y ** 2 # Prints \"2.5 3.5 5.0 6.25\"", "Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.\nBooleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&amp;&amp;, ||, etc.):", "t, f = True, False\nprint type(t) # Prints \"<type 'bool'>\"", "Now we let's look at the operations:", "print t and f # Logical AND;\nprint t or f # Logical OR;\nprint not t # Logical NOT;\nprint t != f # Logical XOR;", "Strings", "hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint hello, len(hello)\n\nhw = hello + ' ' + world # String concatenation\nprint hw # prints \"hello world\"\n\nhw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting\nprint hw12 # prints \"hello world 12\"", "String objects have a bunch of useful methods; for example:", "s = \"hello\"\nprint s.capitalize() # Capitalize a string; prints \"Hello\"\nprint s.upper() # Convert a string to uppercase; prints \"HELLO\"\nprint s.rjust(7) # Right-justify a string, padding with spaces; prints \" hello\"\nprint s.center(7) # Center a string, padding with spaces; prints \" hello \"\nprint s.replace('l', '(ell)') # Replace all instances of one substring with another;\n # prints \"he(ell)(ell)o\"\nprint ' world '.strip() # Strip leading and trailing whitespace; prints \"world\"", "You can find a list of all string methods in the documentation.\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:", "xs = [3, 1, 2] # Create a list\nprint xs, xs[2]\nprint xs[-1] # Negative indices count from the end of the list; prints \"2\"\n\nxs[2] = 'foo' # Lists can contain elements of different types\nprint xs\n\nxs.append('bar') # Add a new element to the end of the list\nprint xs \n\nx = xs.pop() # Remove and return the last element of the list\nprint x, xs ", "As usual, you can find all the gory details about lists in the documentation.\nSlicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:", "nums = range(5) # range is a built-in function that creates a list of integers\nprint nums # Prints \"[0, 1, 2, 3, 4]\"\nprint nums[2:4] # Get a slice from index 2 to 4 (exclusive); prints \"[2, 3]\"\nprint nums[2:] # Get a slice from index 2 to the end; prints \"[2, 3, 4]\"\nprint nums[:2] # Get a slice from the start to index 2 (exclusive); prints \"[0, 1]\"\nprint nums[:] # Get a slice of the whole list; prints [\"0, 1, 2, 3, 4]\"\nprint nums[:-1] # Slice indices can be negative; prints [\"0, 1, 2, 3]\"\nnums[2:4] = [8, 9] # Assign a new sublist to a slice\nprint nums # Prints \"[0, 1, 8, 8, 4]\"", 
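"Not shown in the slicing cell above, but often useful: slices also accept an optional step argument (start:stop:step). The short example below is added here for illustration and is not part of the original tutorial:", "nums = range(10) # [0, 1, ..., 9]\nprint nums[::2] # Every second element; prints \"[0, 2, 4, 6, 8]\"\nprint nums[1::2] # Start at index 1, step 2; prints \"[1, 3, 5, 7, 9]\"\nprint nums[::-1] # A reversed copy; prints \"[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]\"\nprint nums[7:2:-2] # Negative step with start/stop; prints \"[7, 5, 3]\"", 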
"Loops\nYou can loop over the elements of a list like this:", "animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print animal", "If you want access to the index of each element within the body of a loop, use the built-in enumerate function:", "animals = ['cat', 'dog', 'monkey']\nfor idx, animal in enumerate(animals):\n print '#%d: %s' % (idx + 1, animal)", "List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:", "nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nprint squares", "You can make this code simpler using a list comprehension:", "nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nprint squares", "List comprehensions can also contain conditions:", "nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\nprint even_squares", "Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:", "d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint d['cat'] # Get an entry from a dictionary; prints \"cute\"\nprint 'cat' in d # Check if a dictionary has a given key; prints \"True\"\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nprint d['fish'] # Prints \"wet\"\n\nprint d['monkey'] # KeyError: 'monkey' not a key of d\n\nprint d.get('monkey', 'N/A') # Get an element with a default; prints \"N/A\"\nprint d.get('fish', 'N/A') # Get an element with a default; prints \"wet\"\n\ndel d['fish'] # Remove an element from a dictionary\nprint d.get('fish', 'N/A') # \"fish\" is no longer a key; prints \"N/A\"", "You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print 'A %s has %d legs' % (animal, legs)", "If you want access to keys and their corresponding values, use the iteritems method:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.iteritems():\n print 'A %s has %d legs' % (animal, legs)", "Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:", "nums = [0, 1, 2, 3, 4]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\nprint even_num_to_square", "Sets\nA set is an unordered collection of distinct elements. 
As a simple example, consider the following:", "animals = {'cat', 'dog'}\nprint 'cat' in animals # Check if an element is in a set; prints \"True\"\nprint 'fish' in animals # prints \"False\"\n\n\nanimals.add('fish') # Add an element to a set\nprint 'fish' in animals\nprint len(animals) # Number of elements in a set;\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint len(animals) \nanimals.remove('cat') # Remove an element from a set\nprint len(animals) ", "Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:", "animals = {'cat', 'dog', 'fish'}\nfor idx, animal in enumerate(animals):\n print '#%d: %s' % (idx + 1, animal)\n# Prints \"#1: fish\", \"#2: dog\", \"#3: cat\"", "Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:", "from math import sqrt\nprint {int(sqrt(x)) for x in range(30)}", "Tuples\nA tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:", "d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys\nt = (5, 6) # Create a tuple\nprint type(t)\nprint d[t] \nprint d[(1, 2)]\n\nt[0] = 1", "Functions\nPython functions are defined using the def keyword. For example:", "def sign(x):\n if x > 0:\n return 'positive'\n elif x < 0:\n return 'negative'\n else:\n return 'zero'\n\nfor x in [-1, 0, 1]:\n print sign(x)", "We will often define functions to take optional keyword arguments, like this:", "def hello(name, loud=False):\n if loud:\n print 'HELLO, %s' % name.upper()\n else:\n print 'Hello, %s!' % name\n\nhello('Bob')\nhello('Fred', loud=True)", "Classes\nThe syntax for defining classes in Python is straightforward:", "class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print 'HELLO, %s!' % self.name.upper()\n else:\n print 'Hello, %s' % self.name\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method; prints \"Hello, Fred\"\ng.greet(loud=True) # Call an instance method; prints \"HELLO, FRED!\"", "Numpy\nNumpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.\nTo use Numpy, we first need to import the numpy package:", "import numpy as np", "Arrays\nA numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. 
The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.\nWe can initialize numpy arrays from nested Python lists, and access elements using square brackets:", "a = np.array([1, 2, 3]) # Create a rank 1 array\nprint type(a), a.shape, a[0], a[1], a[2]\na[0] = 5 # Change an element of the array\nprint a \n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nprint b\n\nprint b.shape \nprint b[0, 0], b[0, 1], b[1, 0]", "Numpy also provides many functions to create arrays:", "a = np.zeros((2,2)) # Create an array of all zeros\nprint a\n\nb = np.ones((1,2)) # Create an array of all ones\nprint b\n\nc = np.full((2,2), 7) # Create a constant array\nprint c \n\nd = np.eye(2) # Create a 2x2 identity matrix\nprint d\n\ne = np.random.random((2,2)) # Create an array filled with random values\nprint e", "Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:", "import numpy as np\n\n# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\n\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint b", "A slice of an array is a view into the same data, so modifying it will modify the original array.", "print a[0, 1] \nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint a[0, 1] ", "You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:", "# Create the following rank 2 array with shape (3, 4)\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint a", "Two ways of accessing the data in the middle row of the array.\nMixing integer indexing with slices yields an array of lower rank,\nwhile using only slices yields an array of the same rank as the\noriginal array:", "row_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint row_r1, row_r1.shape \nprint row_r2, row_r2.shape\nprint row_r3, row_r3.shape\n\n# We can make the same distinction when accessing columns of an array:\ncol_r1 = a[:, 1]\ncol_r2 = a[:, 1:2]\nprint col_r1, col_r1.shape\nprint\nprint col_r2, col_r2.shape", "Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. 
Here is an example:", "a = np.array([[1,2], [3, 4], [5, 6]])\n\n# An example of integer array indexing.\n# The returned array will have shape (3,) and \nprint a[[0, 1, 2], [0, 1, 0]]\n\n# The above example of integer array indexing is equivalent to this:\nprint np.array([a[0, 0], a[1, 1], a[2, 0]])\n\n# When using integer array indexing, you can reuse the same\n# element from the source array:\nprint a[[0, 0], [1, 1]]\n\n# Equivalent to the previous integer array indexing example\nprint np.array([a[0, 1], a[0, 1]])", "One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:", "# Create a new array from which we will select elements\na = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nprint a\n\n# Create an array of indices\nb = np.array([0, 2, 0, 1])\n\n# Select one element from each row of a using the indices in b\nprint a[np.arange(4), b] # Prints \"[ 1 6 7 11]\"\n\n# Mutate one element from each row of a using the indices in b\na[np.arange(4), b] += 10\nprint a", "Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:", "import numpy as np\n\na = np.array([[1,2], [3, 4], [5, 6]])\n\nbool_idx = (a > 2) # Find the elements of a that are bigger than 2;\n # this returns a numpy array of Booleans of the same\n # shape as a, where each slot of bool_idx tells\n # whether that element of a is > 2.\n\nprint bool_idx\n\n# We use boolean array indexing to construct a rank 1 array\n# consisting of the elements of a corresponding to the True values\n# of bool_idx\nprint a[bool_idx]\n\n# We can do all of the above in a single concise statement:\nprint a[a > 2]", "For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.\nDatatypes\nEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:", "x = np.array([1, 2]) # Let numpy choose the datatype\ny = np.array([1.0, 2.0]) # Let numpy choose the datatype\nz = np.array([1, 2], dtype=np.int64) # Force a particular datatype\n\nprint x.dtype, y.dtype, z.dtype", "You can read all about numpy datatypes in the documentation.\nArray math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:", "x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum; both produce the array\nprint x + y\nprint np.add(x, y)\n\n# Elementwise difference; both produce the array\nprint x - y\nprint np.subtract(x, y)\n\n# Elementwise product; both produce the array\nprint x * y\nprint np.multiply(x, y)\n\n# Elementwise division; both produce the array\n# [[ 0.2 0.33333333]\n# [ 0.42857143 0.5 ]]\nprint x / y\nprint np.divide(x, y)\n\n# Elementwise square root; produces the array\n# [[ 1. 1.41421356]\n# [ 1.73205081 2. ]]\nprint np.sqrt(x)", "Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. 
dot is available both as a function in the numpy module and as an instance method of array objects:", "x = np.array([[1,2],[3,4]])\ny = np.array([[5,6],[7,8]])\n\nv = np.array([9,10])\nw = np.array([11, 12])\n\n# Inner product of vectors; both produce 219\nprint v.dot(w)\nprint np.dot(v, w)\n\n# Matrix / vector product; both produce the rank 1 array [29 67]\nprint x.dot(v)\nprint np.dot(x, v)\n\n# Matrix / matrix product; both produce the rank 2 array\n# [[19 22]\n# [43 50]]\nprint x.dot(y)\nprint np.dot(x, y)", "Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:", "x = np.array([[1,2],[3,4]])\n\nprint np.sum(x) # Compute sum of all elements; prints \"10\"\nprint np.sum(x, axis=0) # Compute sum of each column; prints \"[4 6]\"\nprint np.sum(x, axis=1) # Compute sum of each row; prints \"[3 7]\"", "You can find the full list of mathematical functions provided by numpy in the documentation.\nApart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:", "print x\nprint x.T\n\nv = np.array([[1,2,3]])\nprint v \nprint v.T", "Broadcasting\nBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.\nFor example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:", "# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = np.empty_like(x) # Create an empty matrix with the same shape as x\n\n# Add the vector v to each row of the matrix x with an explicit loop\nfor i in range(4):\n y[i, :] = x[i, :] + v\n\nprint y", "This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:", "vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other\nprint vv # Prints \"[[1 0 1]\n # [1 0 1]\n # [1 0 1]\n # [1 0 1]]\"\n\ny = x + vv # Add x and vv elementwise\nprint y", "Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. 
Consider this version, using broadcasting:", "import numpy as np\n\n# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = x + v # Add v to each row of x using broadcasting\nprint y", "The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.\nBroadcasting two arrays together follows these rules:\n\nIf the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.\nThe two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.\nThe arrays can be broadcast together if they are compatible in all dimensions.\nAfter broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.\nIn any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension\n\nIf this explanation does not make sense, try reading the explanation from the documentation or this explanation.\nFunctions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.\nHere are some applications of broadcasting:", "# Compute outer product of vectors\nv = np.array([1,2,3]) # v has shape (3,)\nw = np.array([4,5]) # w has shape (2,)\n# To compute an outer product, we first reshape v to be a column\n# vector of shape (3, 1); we can then broadcast it against w to yield\n# an output of shape (3, 2), which is the outer product of v and w:\n\nprint np.reshape(v, (3, 1)) * w\n\n# Add a vector to each row of a matrix\nx = np.array([[1,2,3], [4,5,6]])\n# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),\n# giving the following matrix:\n\nprint x + v\n\n# Add a vector to each column of a matrix\n# x has shape (2, 3) and w has shape (2,).\n# If we transpose x then it has shape (3, 2) and can be broadcast\n# against w to yield a result of shape (3, 2); transposing this result\n# yields the final result of shape (2, 3) which is the matrix x with\n# the vector w added to each column. Gives the following matrix:\n\nprint (x.T + w).T\n\n# Another solution is to reshape w to be a row vector of shape (2, 1);\n# we can then broadcast it directly against x to produce the same\n# output.\nprint x + np.reshape(w, (2, 1))\n\n# Multiply a matrix by a constant:\n# x has shape (2, 3). Numpy treats scalars as arrays of shape ();\n# these can be broadcast together to shape (2, 3), producing the\n# following array:\nprint x * 2", "Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\nMatplotlib\nMatplotlib is a plotting library. 
In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.", "import matplotlib.pyplot as plt", "By running this special IPython command, we will be displaying plots inline:", "%matplotlib inline", "Plotting\nThe most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:", "# Compute the x and y coordinates for points on a sine curve\nx = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y)", "With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:", "y_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y_sin)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])", "Subplots\nYou can plot different things in the same figure using the subplot function. Here is an example:", "# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Set up a subplot grid that has height 2 and width 1,\n# and set the first such subplot as active.\nplt.subplot(2, 1, 1)\n\n# Make the first plot\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Set the second subplot as active, and make the second plot.\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()", "You can read much more about the subplot function in the documentation." ]
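The overview at the start of this tutorial also lists Images under Matplotlib, but that part is not shown above. As a small supplement (not from the original tutorial), here is one way to display a numpy array as an image with imshow; the synthetic RGB gradient is an arbitrary example chosen so that no image file is needed on disk.

```python
# Build a synthetic RGB image as a (height, width, 3) float array in [0, 1]
# and display it with imshow.
import numpy as np
import matplotlib.pyplot as plt

h, w = 64, 64
img = np.zeros((h, w, 3))
img[..., 0] = np.linspace(0.0, 1.0, w)            # red increases left to right
img[..., 1] = np.linspace(0.0, 1.0, h)[:, None]   # green increases top to bottom

plt.imshow(img)
plt.title('A synthetic RGB gradient')
plt.axis('off')
plt.show()
```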
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Boialex/MIPT-ML
hw5/hw1_Modules.ipynb
gpl-3.0
[ "import numpy as np\nfrom scipy.optimize import check_grad\nfrom gradient_check import eval_numerical_gradient_array\n\ndef rel_error(x, y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))", "Module is an abstract class which defines fundamental methods necessary for a training a neural network. You do not need to change anything here, just read the comments.", "class Module(object):\n def __init__ (self):\n self.output = None\n self.gradInput = None\n self.training = True\n \"\"\"\n Basically, you can think of a module as of a something (black box) \n which can process `input` data and produce `ouput` data.\n This is like applying a function which is called `forward`: \n \n output = module.forward(input)\n \n The module should be able to perform a backward pass: to differentiate the `forward` function. \n More, it should be able to differentiate it if is a part of chain (chain rule).\n The latter implies there is a gradient from previous step of a chain rule. \n \n gradInput = module.backward(input, gradOutput)\n \"\"\"\n \n def forward(self, input):\n \"\"\"\n Takes an input object, and computes the corresponding output of the module.\n \"\"\"\n return self.updateOutput(input)\n\n def backward(self,input, gradOutput):\n \"\"\"\n Performs a backpropagation step through the module, with respect to the given input.\n \n This includes \n - computing a gradient w.r.t. `input` (is needed for further backprop),\n - computing a gradient w.r.t. parameters (to update parameters while optimizing).\n \"\"\"\n self.updateGradInput(input, gradOutput)\n self.accGradParameters(input, gradOutput)\n return self.gradInput\n \n\n def updateOutput(self, input):\n \"\"\"\n Computes the output using the current parameter set of the class and input.\n This function returns the result which is stored in the `output` field.\n \n Make sure to both store the data in `output` field and return it. \n \"\"\"\n \n # The easiest case:\n \n # self.output = input \n # return self.output\n \n pass\n\n def updateGradInput(self, input, gradOutput):\n \"\"\"\n Computing the gradient of the module with respect to its own input. \n This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.\n \n The shape of `gradInput` is always the same as the shape of `input`.\n \n Make sure to both store the gradients in `gradInput` field and return it.\n \"\"\"\n \n # The easiest case:\n \n # self.gradInput = gradOutput \n # return self.gradInput\n \n pass \n \n def accGradParameters(self, input, gradOutput):\n \"\"\"\n Computing the gradient of the module with respect to its own parameters.\n No need to override if module has no parameters (e.g. ReLU).\n \"\"\"\n pass\n \n def zeroGradParameters(self): \n \"\"\"\n Zeroes `gradParams` variable if the module has params.\n \"\"\"\n pass\n \n def getParameters(self):\n \"\"\"\n Returns a list with its parameters. \n If the module does not have parameters return empty list. \n \"\"\"\n return []\n \n def getGradParameters(self):\n \"\"\"\n Returns a list with gradients with respect to its parameters. \n If the module does not have parameters return empty list. 
\n \"\"\"\n return []\n \n def training(self):\n \"\"\"\n Sets training mode for the module.\n Training and testing behaviour differs for Dropout, BatchNorm.\n \"\"\"\n self.training = True\n \n def evaluate(self):\n \"\"\"\n Sets evaluation mode for the module.\n Training and testing behaviour differs for Dropout, BatchNorm.\n \"\"\"\n self.training = False\n \n def __repr__(self):\n \"\"\"\n Pretty printing. Should be overrided in every module if you want \n to have readable description. \n \"\"\"\n return \"Module\"", "Sequential container\nDefine a forward and backward pass procedures.", "class Sequential(Module):\n \"\"\"\n This class implements a container, which processes `input` data sequentially. \n \n `input` is processed by each module (layer) in self.modules consecutively.\n The resulting array is called `output`. \n \"\"\"\n \n def __init__ (self):\n super(Sequential, self).__init__()\n self.modules = []\n \n def add(self, module):\n \"\"\"\n Adds a module to the container.\n \"\"\"\n self.modules.append(module)\n\n def updateOutput(self, input):\n \"\"\"\n Basic workflow of FORWARD PASS:\n \n y_0 = module[0].forward(input)\n y_1 = module[1].forward(y_0)\n ...\n output = module[n-1].forward(y_{n-2}) \n \n \n Just write a little loop. \n \"\"\"\n\n # Your code goes here. ################################################\n self.y = [np.array(input)]\n for module in self.modules:\n self.y.append(module.forward(self.y[-1]))\n self.y.pop(0)\n self.output = self.y[-1]\n return self.output\n\n def backward(self, input, gradOutput):\n \"\"\"\n Workflow of BACKWARD PASS:\n \n g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)\n g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})\n ...\n g_1 = module[1].backward(y_0, g_2) \n gradInput = module[0].backward(input, g_1) \n \n \n !!!\n \n To ech module you need to provide the input, module saw while forward pass, \n it is used while computing gradients. \n Make sure that the input for `i-th` layer the output of `module[i]` (just the same input as in forward pass) \n and NOT `input` to this Sequential module. \n \n !!!\n \n \"\"\"\n # Your code goes here. ################################################\n g = np.array(gradOutput)\n self.y = [np.array(input)] + self.y\n for i, module in enumerate(reversed(self.modules)):\n g = module.backward(self.y[-i - 2], g)\n self.gradInput = g\n self.y.pop(0)\n return self.gradInput\n \n\n def zeroGradParameters(self): \n for module in self.modules:\n module.zeroGradParameters()\n \n def getParameters(self):\n \"\"\"\n Should gather all parameters in a list.\n \"\"\"\n return [x.getParameters() for x in self.modules]\n \n def getGradParameters(self):\n \"\"\"\n Should gather all gradients w.r.t parameters in a list.\n \"\"\"\n return [x.getGradParameters() for x in self.modules]\n \n def __repr__(self):\n string = \"\".join([str(x) + '\\n' for x in self.modules])\n return string\n \n def __getitem__(self,x):\n return self.modules.__getitem__(x)", "Layers\n\ninput: batch_size x n_feats1\noutput: batch_size x n_feats2", "class Linear(Module):\n \"\"\"\n A module which applies a linear transformation \n A common name is fully-connected layer, InnerProductLayer in caffe. 
\n \n The module should work with 2D input of shape (n_samples, n_feature).\n \"\"\"\n def __init__(self, n_in, n_out):\n super(Linear, self).__init__()\n \n # This is a nice initialization\n stdv = 1./np.sqrt(n_in)\n self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))\n self.b = np.random.uniform(-stdv, stdv, size = n_out)\n \n self.gradW = np.zeros_like(self.W)\n self.gradb = np.zeros_like(self.b)\n \n def updateOutput(self, input):\n # Your code goes here. ################################################\n self.output = np.add(input.dot(self.W.T), self.b)\n return self.output\n \n def updateGradInput(self, input, gradOutput):\n # Your code goes here. ################################################\n self.gradInput = gradOutput.dot(self.W)\n return self.gradInput\n \n def accGradParameters(self, input, gradOutput):\n # Your code goes here. ################################################\n self.gradW = gradOutput.T.dot(input)\n self.gradb = gradOutput.sum(axis=0)\n \n def zeroGradParameters(self):\n self.gradW.fill(0)\n self.gradb.fill(0)\n \n def getParameters(self):\n return [self.W, self.b]\n \n def getGradParameters(self):\n return [self.gradW, self.gradb]\n \n def __repr__(self):\n s = self.W.shape\n q = 'Linear %d -> %d' %(s[1],s[0])\n return q\n\ninput_dim = 3\noutput_dim = 2\n\nx = np.random.randn(5, input_dim)\nw = np.random.randn(output_dim, input_dim)\nb = np.random.randn(output_dim)\ndout = np.random.randn(5, output_dim)\nlinear = Linear(input_dim, output_dim)\n\ndef update_W_matrix(new_W):\n linear.W = new_W\n return linear.forward(x)\n\ndef update_bias(new_b):\n linear.b = new_b\n return linear.forward(x)\n\ndx = linear.backward(x, dout)\ndx_num = eval_numerical_gradient_array(lambda x: linear.forward(x), x, dout)\ndw_num = eval_numerical_gradient_array(update_W_matrix, w, dout)\ndb_num = eval_numerical_gradient_array(update_bias, b, dout)\nprint 'Testing Linear_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, linear.gradW)\nprint 'db error: ', rel_error(db_num, linear.gradb)", "This one is probably the hardest but as others only takes 5 lines of code in total. \n- input: batch_size x n_feats\n- output: batch_size x n_feats", "class SoftMax(Module):\n def __init__(self):\n super(SoftMax, self).__init__()\n \n def updateOutput(self, input):\n # start with normalization for numerical stability\n self.output = np.subtract(input, input.max(axis=1, keepdims=True))\n \n # Your code goes here. ################################################\n self.output = np.exp(self.output)\n out_sum = self.output.sum(axis=1, keepdims=True)\n self.output = np.divide(self.output, out_sum)\n return self.output\n \n def updateGradInput(self, input, gradOutput):\n # Your code goes here. ################################################\n batch_size, n_feats = self.output.shape\n a = self.output.reshape(batch_size, n_feats, -1)\n b = self.output.reshape(batch_size, -1, n_feats)\n self.gradInput = np.multiply(gradOutput.reshape(batch_size, -1, n_feats), \n np.subtract(np.multiply(np.eye(n_feats), a),\n np.multiply(a, b))).sum(axis=2)\n return self.gradInput\n \n def __repr__(self):\n return \"SoftMax\"\n\nsoft_max = SoftMax()\nx = np.random.randn(5, 3)\ndout = np.random.randn(5, 3)\ndx_numeric = eval_numerical_gradient_array(lambda x: soft_max.forward(x), x, dout)\ndx = soft_max.backward(x, dout)\n\n# The error should be around 1e-10\nprint 'Testing SoftMax grad:'\nprint 'dx error: ', rel_error(dx_numeric, dx)", "Implement dropout. 
The idea and implementation is really simple: just multimply the input by $Bernoulli(p)$ mask. \nThis is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout.\nWhile training (self.training == True) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. self.output = input.\n\ninput: batch_size x n_feats\noutput: batch_size x n_feats", "class Dropout(Module):\n def __init__(self, p=0.5):\n super(Dropout, self).__init__()\n \n self.p = p\n self.mask = None\n \n def updateOutput(self, input):\n # Your code goes here. ################################################\n if self.training:\n self.mask = np.random.binomial(1, self.p, size=len(input))\n else:\n self.mask = np.ones(len(input))\n self.mask = self.mask.reshape(len(self.mask), -1)\n self.output = np.multiply(input, self.mask)\n return self.output\n \n def updateGradInput(self, input, gradOutput):\n # Your code goes here. ################################################\n self.gradInput = np.multiply(gradOutput, self.mask)\n return self.gradInput\n \n def __repr__(self):\n return \"Dropout\"", "Activation functions\nHere's the complete example for the Rectified Linear Unit non-linearity (aka ReLU):", "class ReLU(Module):\n def __init__(self):\n super(ReLU, self).__init__()\n \n def updateOutput(self, input):\n self.output = np.maximum(input, 0)\n return self.output\n \n def updateGradInput(self, input, gradOutput):\n self.gradInput = np.multiply(gradOutput , input > 0)\n return self.gradInput\n \n def __repr__(self):\n return \"ReLU\"", "Implement Leaky Rectified Linear Unit. Expriment with slope.", "class LeakyReLU(Module):\n def __init__(self, slope = 0.03):\n super(LeakyReLU, self).__init__()\n \n self.slope = slope\n \n def updateOutput(self, input):\n # Your code goes here. ################################################\n self.output = input.copy()\n self.output[self.output < 0] *= self.slope\n return self.output\n \n def updateGradInput(self, input, gradOutput):\n # Your code goes here. ################################################\n self.gradInput = gradOutput.copy()\n self.gradInput[input < 0] *= self.slope\n return self.gradInput\n \n def __repr__(self):\n return \"LeakyReLU\"", "Criterions\nCriterions are used to score the models answers.", "class Criterion(object):\n def __init__ (self):\n self.output = None\n self.gradInput = None\n \n def forward(self, input, target):\n \"\"\"\n Given an input and a target, compute the loss function \n associated to the criterion and return the result.\n \n For consistency this function should not be overrided,\n all the code goes in `updateOutput`.\n \"\"\"\n return self.updateOutput(input, target)\n\n def backward(self, input, target):\n \"\"\"\n Given an input and a target, compute the gradients of the loss function\n associated to the criterion and return the result. \n\n For consistency this function should not be overrided,\n all the code goes in `updateGradInput`.\n \"\"\"\n return self.updateGradInput(input, target)\n \n def updateOutput(self, input, target):\n \"\"\"\n Function to override.\n \"\"\"\n return self.output\n\n def updateGradInput(self, input, target):\n \"\"\"\n Function to override.\n \"\"\"\n return self.gradInput \n\n def __repr__(self):\n \"\"\"\n Pretty printing. Should be overrided in every module if you want \n to have readable description. 
\n \"\"\"\n return \"Criterion\"", "The MSECriterion, which is the basic L2 (squared-error) loss usually used for regression, is implemented here for you.", "class MSECriterion(Criterion):\n def __init__(self):\n super(MSECriterion, self).__init__()\n \n def updateOutput(self, input, target): \n self.output = np.sum(np.power(np.subtract(input, target), 2)) / input.shape[0]\n return self.output \n \n def updateGradInput(self, input, target):\n self.gradInput = np.subtract(input, target) * 2 / input.shape[0]\n return self.gradInput\n\n def __repr__(self):\n return \"MSECriterion\"", "Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (target) in that formula, remember that targets are one-hot encoded; this fact simplifies the computations a lot. Note that criterions are the only places where you divide by the batch size.", "class ClassNLLCriterion(Criterion):\n def __init__(self):\n super(ClassNLLCriterion, self).__init__()\n \n def updateOutput(self, input, target): \n \n # Use this trick to avoid numerical errors\n eps = 1e-15 \n input_clamp = np.clip(input, eps, 1 - eps)\n \n # Your code goes here. ################################################\n self.output = -np.sum(np.multiply(target, np.log(input_clamp))) / len(input)\n return self.output\n\n def updateGradInput(self, input, target):\n \n # Use this trick to avoid numerical errors\n input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )\n \n # Your code goes here. ################################################\n self.gradInput = np.subtract(input_clamp, target) / len(input)\n return self.gradInput\n \n def __repr__(self):\n return \"ClassNLLCriterion\"" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
snth/ctdeep
PyconZA-2016.ipynb
mit
[ "Deep Learning in Python\n\nGet the code: github.com/snth/ctdeep\n\nTobias Brandt\n<img src=\"img/argon_logo.png\" align=left width=200>\n<!-- <img src=\"http://www.argonassetmanagement.co.za/css/img/logo.png\" align=left width=200> -->\n\nAbout Me\n\nex-physicist, quant, pythonista\ngithub.com/snth\[email protected]\nMember of Cape Town Deep Learning Meetup\n\nTutorial Outline\n\nDeep Learning in Python is simple and powerful!\nAn introduction to (Artificial) Neural Networks\nAn introduction to Deep Learning\n\nRequirements\n\nPython 3.4 (or legacy Python 2.7)\nKeras >= 1.0.0\nTheano or Tensorflow\ngit clone https://github.com/snth/ctdeep.git\n\nDeep Learning in Python is simple", "import keras\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\n\nbatch_size = 128\nnb_classes = 10\nnb_epoch = 15\n\n# input image dimensions\nimg_rows, img_cols = 28, 28\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\n# %load ..\\keras\\examples\\mnist_cnn.py\n'''Trains a simple convnet on the MNIST dataset.\n\nGets to 99.25% test accuracy after 12 epochs\n(there is still a lot of margin for parameter tuning).\n16 seconds per epoch on a GRID K520 GPU.\n'''\n\nfrom __future__ import print_function\nimport numpy as np\nnp.random.seed(1337) # for reproducibility\n\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\nfrom keras import backend as K\n\n# the data, shuffled and split between train and test sets\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n(images_train, labels_train), (images_test, labels_test) = (X_train, y_train), (X_test, y_test)\n\nif K.image_dim_ordering() == 'th':\n X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)\n X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)\n input_shape = (1, img_rows, img_cols)\nelse:\n X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)\n X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)\n input_shape = (img_rows, img_cols, 1)\n\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_train /= 255\nX_test /= 255\nprint('X_train shape:', X_train.shape)\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices\nY_train = np_utils.to_categorical(y_train, nb_classes)\nY_test = np_utils.to_categorical(y_test, nb_classes)\n\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Dropout(0.25))\n\nmodel.add(Flatten())\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\n\nprint(\"Compiling the model ...\")\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adadelta',\n metrics=['accuracy'])\n#model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n# verbose=1, validation_data=(X_test, Y_test))\n\nimport os\nfor epoch in range(1, nb_epoch+1):\n weights_path = \"models/mnist_cnn_{}_weights.h5\".format(epoch)\n if os.path.exists(weights_path):\n print(\"Loading precomputed weights 
for epoch {} ...\".format(epoch))\n model.load_weights(weights_path)\n print('Evaluating the model on the test set ...')\n score = model.evaluate(X_test, Y_test, verbose=1)\n print('Test score:', score[0])\n print('Test accuracy:', score[1])\n else:\n print(\"Fitting the model for epoch {} ...\".format(epoch))\n model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=1,\n validation_data=(X_test, Y_test), verbose=1)\n model.save_weights(weights_path)\n\nprint('Evaluating the model on the test set ...')\nscore = model.evaluate(X_test, Y_test, verbose=1)\nprint('Test score:', score[0])\nprint('Test accuracy:', score[1])", "So building and training Neural Networks in Python in simple!\nBut it is also powerful!\nNeural Style Transfer: github.com/titu1994/Neural-Style-Transfer\n<font size=20>\n<table border=\"0\"><tr>\n<td><img src=\"img/golden_gate.jpg\" width=250></td>\n<td>+</td>\n<td><img src=\"img/starry_night.jpg\" width=250></td>\n<td>=</td>\n<td><img src=\"img/golden_gate_iteration_20.png\" width=250></td>\n</tr></table>\n</font>\nNeural Networks\n<!-- \n\n[Yann LeCun Slides](https://drive.google.com/folderview?id=0BxKBnD5y2M8NclFWSXNxa0JlZTg&usp=drive_web) ([local](pdf/000c-yann-lecun-lecon-inaugurale-college-de-france-20160204.pdf))\n\n![Intelligent Systems](img/LeCun_flight.png) -->\n\n<img src='img/LeCun_flight.png' align='middle' width=800>\nA neuron looks something like this\n\nSymbolically we can represent the key parts we want to model as\n\nIn order to build an artifical \"brain\" we need to connect together many neurons in a \"neural network\"\n\nWe can model the response of each neuron with various activation functions\n\nTraining a Neural Network\n<!--\n\n![Perceptron Model](img/perceptron_node.png) -->\n\n<img src='img/perceptron_node.png' align='middle' width=400>\nMathematically the activation of each neuron can be represented by\n\nwhere $W$ and $b$ are the weights and bias respectively.\nLoss Function\n<!--\n![Loss Function](img/loss_function.png)\n-->\n<img src='img/loss_function.png' width=800>\n\n\nNeural Networks in Python\nKeras\n\nHigh level library for specifying and training neural networks\nCan use Theano or TensorFlow as backend\n\nKeras makes Neural Networks awesome!\nTheano\n\nPython library that provides efficient (low-level) tools for working with Neural Networks\nIn particular:\nAutomatic Differentiation (AD)\nCompiled computation graphs\nGPU accelerated computation\n\n\n\nTensorflow\n\nDeep Learning framework by Google\n\nThe MNIST Dataset\n\n70,000 handwritten digits\n60,000 for training\n10,000 for testing\n\n\nAs 28x28 pixel images", "from __future__ import absolute_import, print_function, division\nfrom ipywidgets import interact, interactive, widgets\nimport numpy as np\nnp.random.seed(1337) # for reproducibility", "Let's load some data", "from keras.datasets import mnist\n#(images_train, labels_train), (images_test, labels_test) = mnist.load_data()\nprint(\"Data shapes:\")\nprint('images',images_train.shape)\nprint('labels', labels_train.shape)", "and then visualise it", "%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\ndef plot_mnist_digit(image, figsize=None):\n \"\"\" Plot a single MNIST image.\"\"\"\n fig = plt.figure()\n ax = fig.add_subplot(1, 1, 1)\n if figsize:\n ax.set_figsize(*figsize)\n ax.matshow(image, cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n plt.show()\n\ndef plot_1_by_2_images(image, reconstruction, figsize=None):\n fig = plt.figure(figsize=figsize)\n ax = 
fig.add_subplot(1, 2, 1)\n ax.matshow(image, cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n ax = fig.add_subplot(1, 2, 2)\n ax.matshow(reconstruction, cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n plt.show()\n\ndef plot_10_by_10_images(images, figsize=None):\n \"\"\" Plot 100 MNIST images in a 10 by 10 table. Note that we crop\n the images so that they appear reasonably close together. The\n image is post-processed to give the appearance of being continued.\"\"\"\n fig = plt.figure(figsize=figsize)\n #images = [image[3:25, 3:25] for image in images]\n #image = np.concatenate(images, axis=1)\n for x in range(10):\n for y in range(10):\n ax = fig.add_subplot(10, 10, 10*y+x+1)\n ax.matshow(images[10*y+x], cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n plt.show()\n\ndef plot_10_by_20_images(left, right, figsize=None):\n \"\"\" Plot 100 MNIST images next to their reconstructions\"\"\"\n fig = plt.figure(figsize=figsize)\n for x in range(10):\n for y in range(10):\n ax = fig.add_subplot(10, 21, 21*y+x+1)\n ax.matshow(left[10*y+x], cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n ax = fig.add_subplot(10, 21, 21*y+11+x+1)\n ax.matshow(right[10*y+x], cmap = matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([]))\n plt.show()\n\nplot_10_by_10_images(images_train, figsize=(8,8))\n\ndef draw_image(i):\n plot_mnist_digit(images_train[i])\n print('label:', labels_train[i])\n\ninteract(draw_image, i=(0, len(images_train)-1))\nNone", "Data Preprocessing\nTransform \"images\" to \"features\" ...\nMost machine learning algorithms expect a flat array of numbers", "def to_features(X):\n return X.reshape(-1, 784).astype(\"float32\") / 255.0\n\ndef to_images(X):\n return (X*255.0).astype('uint8').reshape(-1, 28, 28)\n\nprint('data shape:', images_train.shape, images_train.dtype)\nprint('features shape', to_features(images_train).shape, to_features(images_train).dtype)", "Split the data into a \"training\" and \"test\" set ...", "#(images_train, labels_train), (images_test, labels_test) = mnist.load_data()\nX_train = to_features(images_train)\nX_test = to_features(images_test)\nprint(X_train.shape, 'training samples')\nprint(X_test.shape, 'test samples')", "Transform the labels to a \"one-hot\" encoding ...", "# The labels need to be transformed into class indicators\nfrom keras.utils import np_utils\ny_train = np_utils.to_categorical(labels_train, nb_classes=10)\ny_test = np_utils.to_categorical(labels_test, nb_classes=10)\nprint('labels_train:', labels_train.shape, labels_train.dtype)\nprint('y_train:', y_test.shape, y_train.dtype)", "For example, let's inspect the first 2 labels:", "print('labels_train[:2]:\\n', labels_train[:2][:, np.newaxis])\nprint('y_train[:2]\\n', y_train[:2])", "Simple Multi-Layer Perceptron (MLP)\nThe simplest kind of Artificial Neural Network is as Multi-Layer Perceptron (MLP) with a single hidden layer.", "# Neural Network Architecture Parameters\nnb_input = 784\nnb_hidden = 512\nnb_output = 10\n# Training Parameters\nnb_epoch = 1\nbatch_size = 128", "First we define the \"architecture\" of the network", "from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\n\nmlp = Sequential()\n\nmlp.add(Dense(output_dim=nb_hidden, input_dim=nb_input, init='uniform'))\nmlp.add(Activation('sigmoid'))\n\nmlp.add(Dense(output_dim=nb_output, input_dim=nb_hidden, 
init='uniform'))\nmlp.add(Activation('softmax'))", "then we compile it. This takes the symbolic computational graph of the model and compiles it an efficient implementation which can then be used to train and evaluate the model. \nNote that we have to specify what loss/objective function we want to use as well which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent.", "mlp.compile(loss='categorical_crossentropy', optimizer='SGD',\n metrics=[\"accuracy\"])", "Next we train the model on our training data. Watch the loss, which is the objective function which we are minimising, and the estimated accuracy of the model.", "mlp.fit(X_train, y_train, \n batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=1)", "Once the model is trained, we can evaluate its performance on the test data.", "mlp.evaluate(X_test, y_test)\n\n#plot_10_by_10_images(images_test, figsize=(8,8))\n\ndef draw_mlp_prediction(j):\n plot_mnist_digit(to_images(X_test)[j])\n prediction = mlp.predict_classes(X_test[j:j+1], verbose=False)[0]\n print('predict:', prediction, '\\tactual:', labels_test[j])\n\ninteract(draw_mlp_prediction, j=(0, len(X_test)-1))\nNone", "Deep Learning\nWhy do we want Deep Neural Networks?\nUniversal Approximation Theorem\nThe theorem thus states that simple neural networks can represent \na wide variety of interesting functions when given appropriate parameters;\nhowever, it does not touch upon the algorithmic learnability of those parameters.\n\nPower of combinations\nOn the (Small) Number of Atoms in the Universe\nOn the number of Go positions\nWhile <a href=\"https://www.theguardian.com/technology/2016/mar/09/google-deepmind-alphago-ai-defeats-human-lee-sedol-first-game-go-contest\">discussing</a> the complexity of the game of Go, <a href=\"https://en.wikipedia.org/wiki/Demis_Hassabis\">Demis Hassabis</a> said:\n<blockquote>\n<i>There are more possible Go positions than there are atoms in the universe.</i>\n</blockquote>\n\nA Go board has 19 &times; 19 points, each of which can be empty or occupied by black or white, so there are 3<sup>(19 &times; 19)</sup> <tt>&cong;</tt> 10<sup>172</sup> possible board positions, but \"only\" about 10<sup>170</sup> of those positions are legal.\n<p>The crucial idea is, that as a number of <i>physical things</i>, 10<sup>80</sup> is a really big number. But as a number of <i>combinations</i> of things, 10<sup>80</sup> is a rather small number. It doesn't take a universe of stuff to get up to 10<sup>80</sup> combinations; we can get there with, for example, a passphrase field that is 40 characters long:\n\n<blockquote id=\"passphrase\">\n<tt>a correct horse battery staple troubador</tt>\n</blockquote>\n\n### On the number of digital pictures ###\n\nThere is an art project to display every possible picture. Surely that would take a long time, because there must be many possible pictures. But how many?\n\nWe will assume the color model known as True Color, in which each pixel can be one of 2^24 ≅ 17 million distinct colors. The digital camera shown below left has 12 million pixels, and we'll also consider much smaller pictures: the array below middle, with 300 pixels, and the array below right with just 12 pixels; shown are some of the possible pictures:\n\n<img src=\"img/norvig_atoms.png\" align=\"center\">\n\n**Quiz: Which of these produces a number of pictures similar to the number of atoms in the universe?**\n\n**Answer: An array of n pixels produces (17 million)^n different pictures. 
(17 million)^12 ≅ 10^86, so the tiny 12-pixel array produces a million times more pictures than the number of atoms in the universe!**\n\nHow about the 300 pixel array? It can produce 10^2167 pictures. You may think the number of atoms in the universe is big, but that's just peanuts to the number of pictures in a 300-pixel array. And 12M pixels? 10^86696638 pictures. Fuggedaboutit!\n\nSo the number of possible pictures is really, really, really big. And the number of atoms in the universe is looking relatively small, at least as a number of combinations.\n\n### ==> The Curse of Dimensionality!\n\n### Feature Hierarchies\n\n<!--\n![Feature Hierarchy](img/feature_hierarchy.png)\n-->\n\n<img src=\"img/feature_hierarchy.png\" width=800>\n\n# A Deeper MLP\n\nNext we build a two-layer MLP with the same number of hidden nodes, half in each layer.", "from keras.models import Sequential\nnb_layers = 2\nmlp2 = Sequential()\n# add hidden layers\nfor i in range(nb_layers):\n mlp2.add(Dense(output_dim=nb_hidden//nb_layers, input_dim=nb_input if i==0 else nb_hidden//nb_layers, init='uniform'))\n mlp2.add(Activation('sigmoid'))\n# add output layer\nmlp2.add(Dense(output_dim=nb_output, input_dim=nb_hidden//nb_layers, init='uniform'))\nmlp2.add(Activation('softmax'))\n\nmlp2.compile(loss='categorical_crossentropy', optimizer='SGD',\n metrics=[\"accuracy\"])\n\nmlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=1)", "Did you notice anything about the accuracy? Let's train it some more.", "mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=1)\n\nmlp2.evaluate(X_test, y_test)", "Autoencoders\nHinton 2006 (local)", "from IPython.display import HTML\nHTML('<iframe src=\"pdf/Hinton2006-science.pdf\" width=800 height=400></iframe>')\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Dropout\n\nprint('nb_input =', nb_input)\nprint('nb_hidden =', nb_hidden)\nae = Sequential()\n# encoder\nae.add(Dense(nb_hidden, input_dim=nb_input, init='uniform'))\nae.add(Activation('sigmoid'))\n# decoder\nae.add(Dense(nb_input, input_dim=nb_hidden, init='uniform'))\nae.add(Activation('sigmoid'))\n\nae.compile(loss='mse', optimizer='SGD')\n\nnb_epoch = 1\nae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)\n\nplot_10_by_20_images(images_test, to_images(ae.predict(X_test)),\n figsize=(10,5))\n\nfrom keras.optimizers import SGD\nsgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)\n\nae.compile(loss='mse', optimizer=sgd)\nnb_epoch = 1\nae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)\n\nplot_10_by_20_images(images_test, to_images(ae.predict(X_test)),\n figsize=(10,5))\n\ndef draw_ae_prediction(j):\n X_plot = X_test[j:j+1]\n prediction = ae.predict(X_plot, verbose=False)\n plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])\n\ninteract(draw_ae_prediction, j=(0, len(X_test)-1))\nNone", "A better Autoencoder", "from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Dropout\n\ndef make_autoencoder(nb_input=nb_input, nb_hidden=nb_hidden,\n activation='sigmoid', init='uniform'):\n ae = Sequential()\n # encoder\n ae.add(Dense(nb_hidden, input_dim=nb_input, init=init))\n ae.add(Activation(activation))\n # decoder\n ae.add(Dense(nb_input, input_dim=nb_hidden, init=init))\n ae.add(Activation(activation))\n return ae\n\nnb_epoch = 1\nae2 = make_autoencoder(activation='sigmoid', init='glorot_uniform')\nae2.compile(loss='mse', 
optimizer='adam')\nae2.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)\n\nplot_10_by_20_images(images_test, to_images(ae2.predict(X_test)), figsize=(10,5))\n\ndef draw_ae2_prediction(j):\n X_plot = X_test[j:j+1]\n prediction = ae2.predict(X_plot, verbose=False)\n plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])\n\ninteract(draw_ae2_prediction, j=(0, len(X_test)-1))\nNone", "Stacked Autoencoder", "from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Dropout\n\nclass StackedAutoencoder(object):\n \n def __init__(self, layers, mode='autoencoder',\n activation='sigmoid', init='uniform', final_activation='softmax',\n dropout=0.2, optimizer='SGD', metrics=None):\n self.layers = layers\n self.mode = mode\n self.activation = activation\n self.final_activation = final_activation\n self.init = init\n self.dropout = dropout\n self.optimizer = optimizer\n self.metrics = metrics\n self._model = None\n \n self.build()\n self.compile()\n \n def _add_layer(self, model, i, is_encoder):\n if is_encoder:\n input_dim, output_dim = self.layers[i], self.layers[i+1]\n activation = self.final_activation if i==len(self.layers)-2 else self.activation\n else:\n input_dim, output_dim = self.layers[i+1], self.layers[i]\n activation = self.activation\n model.add(Dense(output_dim=output_dim,\n input_dim=input_dim,\n init=self.init))\n model.add(Activation(activation))\n \n def build(self):\n self.encoder = Sequential()\n self.decoder = Sequential()\n self.autoencoder = Sequential()\n for i in range(len(self.layers)-1):\n self._add_layer(self.encoder, i, True)\n self._add_layer(self.autoencoder, i, True)\n #if i<len(self.layers)-2:\n # self.autoencoder.add(Dropout(self.dropout))\n\n # Note that the decoder layers are in reverse order\n for i in reversed(range(len(self.layers)-1)):\n self._add_layer(self.decoder, i, False)\n self._add_layer(self.autoencoder, i, False)\n \n def compile(self):\n print(\"Compiling the encoder ...\")\n self.encoder.compile(loss='categorical_crossentropy', optimizer=self.optimizer, metrics=self.metrics)\n print(\"Compiling the decoder ...\")\n self.decoder.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)\n print(\"Compiling the autoencoder ...\")\n return self.autoencoder.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)\n \n def fit(self, X_train, Y_train, batch_size, nb_epoch, verbose=1):\n result = self.autoencoder.fit(X_train, Y_train,\n batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=verbose)\n # copy the weights to the encoder\n for i, l in enumerate(self.encoder.layers):\n l.set_weights(self.autoencoder.layers[i].get_weights())\n for i in range(len(self.decoder.layers)):\n self.decoder.layers[-1-i].set_weights(self.autoencoder.layers[-1-i].get_weights())\n return result\n \n def pretrain(self, X_train, batch_size, nb_epoch, verbose=1):\n for i in range(len(self.layers)-1):\n # Greedily train each layer\n print(\"Now pretraining layer {} [{}-->{}]\".format(i+1, self.layers[i], self.layers[i+1]))\n ae = Sequential()\n self._add_layer(ae, i, True)\n #ae.add(Dropout(self.dropout))\n self._add_layer(ae, i, False)\n ae.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)\n ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=verbose)\n # Then lift the training data up one layer\n print(\"\\nTransforming data from\", X_train.shape, \"to\", (X_train.shape[0], self.layers[i+1]))\n enc = Sequential()\n self._add_layer(enc, i, True)\n 
enc.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)\n enc.layers[0].set_weights(ae.layers[0].get_weights())\n enc.layers[1].set_weights(ae.layers[1].get_weights())\n X_train = enc.predict(X_train, verbose=verbose)\n print(\"\\nShape check:\", X_train.shape, \"\\n\")\n # Then copy the learned weights\n self.encoder.layers[2*i].set_weights(ae.layers[0].get_weights())\n self.encoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())\n self.autoencoder.layers[2*i].set_weights(ae.layers[0].get_weights())\n self.autoencoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())\n self.decoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())\n self.decoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())\n self.autoencoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())\n self.autoencoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())\n \n \n def evaluate(self, X_test, Y_test):\n return self.autoencoder.evaluate(X_test, Y_test)\n \n def predict(self, X, verbose=False):\n return self.autoencoder.predict(X, verbose=verbose)\n\n def _get_paths(self, name):\n model_path = \"models/{}_model.yaml\".format(name)\n weights_path = \"models/{}_weights.hdf5\".format(name)\n return model_path, weights_path\n\n def save(self, name='autoencoder'):\n model_path, weights_path = self._get_paths(name)\n open(model_path, 'w').write(self.autoencoder.to_yaml())\n self.autoencoder.save_weights(weights_path, overwrite=True)\n \n def load(self, name='autoencoder'):\n model_path, weights_path = self._get_paths(name)\n self.autoencoder = keras.models.model_from_yaml(open(model_path))\n self.autoencoder.load_weights(weights_path)\n\nnb_epoch = 3\nsae = StackedAutoencoder(layers=[nb_input, 500, 150, 50, 10],\n activation='sigmoid',\n final_activation='softmax',\n init='uniform',\n dropout=0.25,\n optimizer='SGD') # replace with 'adam', 'relu', 'glorot_uniform'\n\nsae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)\n\nplot_10_by_20_images(images_test, to_images(sae.predict(X_test)), figsize=(10,5))\n\ndef draw_sae_prediction(j):\n X_plot = X_test[j:j+1]\n prediction = sae.predict(X_plot, verbose=False)\n plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])\n print(sae.encoder.predict(X_plot, verbose=False)[0])\n\ninteract(draw_sae_prediction, j=(0, len(X_test)-1))\nNone\n\nsae.pretrain(X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)", "Visualising the Filters", "def visualise_filter(model, layer_index, filter_index):\n from keras import backend as K\n\n # build a loss function that maximizes the activation\n # of the nth filter on the layer considered\n layer_output = model.layers[layer_index].get_output()\n loss = K.mean(layer_output[:, filter_index])\n\n # compute the gradient of the input picture wrt this loss\n input_img = model.layers[0].input\n grads = K.gradients(loss, input_img)[0]\n\n # normalization trick: we normalize the gradient\n grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)\n\n # this function returns the loss and grads given the input picture\n iterate = K.function([input_img], [loss, grads])\n\n # we start from a gray image with some noise\n input_img_data = np.random.random((1,nb_input,))\n # run gradient ascent for 20 steps\n step = 1\n for i in range(100):\n loss_value, grads_value = iterate([input_img_data])\n input_img_data += grads_value * step\n\n #print(\"Current loss value:\", loss_value)\n if loss_value <= 0.:\n # some filters get stuck to 0, we can skip them\n break\n 
print(\"Current loss value:\", loss_value)\n\n # decode the resulting input image\n if loss_value>0:\n #return input_img_data[0]\n return input_img_data\n else:\n raise ValueError(loss_value)\n\n\ndef draw_filter(i):\n flt = visualise_filter(mlp, 3, 4)\n #print(flt)\n plot_mnist_digit(to_images(flt)[0])\n\ninteract(draw_filter, i=[0, 9])", "We're hiring! (bitbucket.org/argonasset/opportunities)\n<img src='img/argon_website.png' align='center' width=600>\nThank you\n\nCome join the Cape Town Deep Learrning Meetup!\nGet the code: github.com/snth/ctdeep\nWe're hiring --> bitbucket.org/argonasset/opportunities\nEmail me: [email protected]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
parrt/msan501
notes/aliasing.ipynb
mit
[ "Data aliasing\nOne of the trickiest things about programming is figuring out exactly what data a variable refers to. Remember that we use names like data and salary to represent memory cells holding data values. The names are easier to remember than the physical memory addresses, but we can get fooled. For example, it's obvious that two variables x and y can both have the same integer value 7:", "x = y = 7\nprint(x,y)", "But, did you know that they are both referring to the same 7 object? In other words, variables in Python are always references or pointers to data so the variables are not technically holding the value. Pointers are like phone numbers that \"point at\" phones but pointers themselves are not the phone itself.\nWe can uncover this secret level of indirection using the built-in id(x) function that returns the physical memory address pointed out by x. To demonstrate that, let's ask what x and y point at:", "x = y = 7\nprint(id(x))\nprint(id(y))", "Wow! They are the same. That number represents the memory location where Python has stored the shared 7 object.\nOf course, as programmers we don't think of these atomic elements as referring to the same object; just keep in mind that they do. We are more likely to view them as copies of the same number, as lolviz shows visually:", "from lolviz import *\ncallviz(varnames=['x','y'])", "Let's verify that the same thing happens for strings:", "name = 'parrt'\nuserid = name # userid now points at the same memory as name\nprint(id(name))\nprint(id(userid))", "Ok, great, so we are in fact sharing the same memory address to hold the string 'parrt' and both of the variable names point at that same shared space. We call this aliasing, in the language implementation business.\nThings only get freaky when we start changing shared data. This can't happen with integers and strings because they are immutable (can't be changed). Let's look at two identical copies of a single list:", "you = [1,3,5]\nme = [1,3,5]\nprint(id(you))\nprint(id(me))\ncallviz(varnames=['you','me'])", "Those lists have the same value but live a different memory addresses. They are not aliased; they are not shared. Consequently, changing one does not change the other:", "you = [1,3,5]\nme = [1,3,5]\nprint(you, me)\nyou[0] = 99\nprint(you, me)", "On the other hand, let's see what happens if we make you and me share the same copy of the list (point at the same memory location):", "you = [1,3,5]\nme = you\nprint(id(you))\nprint(id(me))\nprint(you, me)\ncallviz(varnames=['you','me'])", "Now, changing one appears to change the other, but in fact both simply refer to the same location in memory:", "you[0] = 99\nprint(you, me)\ncallviz(varnames=['you','me'])", "Don't confuse changing the pointer to the list with changing the list elements:", "you = [1,3,5]\nme = you\ncallviz(varnames=['you','me'])\n\nme = [9,7,5] # doesn't affect `you` at all\nprint(you)\nprint(me)\ncallviz(varnames=['you','me'])", "This aliasing of data happens a great deal when we pass lists or other data structures to functions. Passing list Quantity to a function whose argument is called data means that the two are aliased. We'll look at this in more detail in the \"Visibility of symbols\" section of Organizing your code with functions.\nShallow copies", "X = [[1,2],[3,4]]\nY = X.copy() # shallow copy\ncallviz(varnames=['X','Y'])\n\nX[0][1] = 99\ncallviz(varnames=['X','Y'])\nprint(Y)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
erickpeirson/statistical-computing
Linear and Quadratic Discriminant Analysis.ipynb
cc0-1.0
[ "%pylab inline\n\nimport numpy as np\nfrom scipy import stats, optimize\nimport pandas as pd", "Linear Discriminant Analysis\nPredict groupings in continuous data.", "X = np.linspace(0, 20, 100)\ndef f(x):\n if x < 7:\n return 'a', 2. + np.random.random()\n elif x < 14:\n return 'b', 4 + np.random.random()\n else:\n return 'c', 6 + np.random.random()\nK, Y = zip(*[f(x) for x in X])\ncolors = plt.get_cmap('Set1')\ncategories = ['a', 'b', 'c']\nplt.scatter(X, Y, c=[colors(categories.index(k)*20) for k in K])\nplt.show()", "LDA is like inverted ANOVA: ANOVA looks for differences in a continuous response among categories, whereas LDA infers categories using a continuous predictor.", "bycategory = [ [Y[i] for i in xrange(len(Y)) if K[i] == k] for k in categories ]\n\nplt.figure(figsize=(10, 5))\nplt.subplot(121)\nplt.boxplot(bycategory)\nplt.ylim(0, 8)\nplt.title('ANOVA')\n\nplt.subplot(122)\nplt.boxplot(bycategory, 0, 'rs', 0)\nplt.title('LDA')\nplt.xlim(0, 8)\n\nplt.show()", "LDA assumes that the variance in each group is the same, and that the predictor(s) are normally distributed for each group. In other words, different $\\mu_k$, one shared $\\sigma$.", "X = [np.linspace(0, 7, 50),\n np.linspace(2, 10, 50),\n np.linspace(7, 16, 50)]\n\nplt.figure(figsize=(10, 4))\nk = 1\nfor x in X:\n mu_k = x.mean()\n plt.plot(x, stats.norm.pdf(x, loc=mu_k))\n plt.plot([mu_k, mu_k], [0, 0.5], c='k')\n plt.text(mu_k + 0.2, 0.5, \"$\\mu_%i$\" % k, size=18)\n k += 1\nplt.ylim(0, 0.75)\n\nplt.xlabel('Predictor', size=18)\nplt.ylabel('Probability', size=18)\nplt.show()", "Recall Bayes Theorem: this allows us to \"flip\" the predictor and the response.\n$P(A|B) = \\frac{P(B|A)P(A)}{P(B|A) P(A) P(B|\\bar{A}) P(\\bar{A})}$\nTherefore the probability of group $k$ given the continuous predictor B is:\n$P(A_k|B) = \\frac{P(B|A_k) P(A_k)}{\\sum_{l=1}^k P(B|A_l) P(A_l)}$\nThe probability that a value $X=x$ came from group $Y=k$ is:\n$P(Y=k|X=x) = \\frac{f(x|Y=k)\\pi(Y=k)}{\\sum_{l=1}^k f(x|Y=l)\\pi(Y=l)}$\nWhere $\\pi(Y=k)$ is the probability of $Y=k$ regardless of $x$. This is just the relative representation of each group:\n$\\pi(Y=k) = \\frac{n_k}{\\sum_{l=1}^k n_l}$\nAnd $f(x|Y=k) = f_k(x)$ is the PDF for group $k$:\n$f_k(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{\\frac{-(x-\\mu_k)^2}{2\\sigma^2}}$\nTherefore:\n$P(Y=k|X=x) = \\frac{\\frac{1}{\\sqrt{2\\pi}\\sigma}e^{\\frac{-(x-\\mu_k)^2}{2\\sigma^2}}\\pi(Y=k)}{\\sum_{l=1}^k[ \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{\\frac{-(x-\\mu_l)^2}{2\\sigma^2}} \\pi(Y=l)]}$\nAssume for the sake of algebra that each $k \\in K$ is represented equally: $\\pi(Y=k) = \\frac{1}{K}$. So:\n$P(Y=k|X=x) = \\frac{e^{\\frac{-(x-\\mu_k)^2}{2\\sigma^2}}}{\\sum_{l=1}^k e^{\\frac{-(x-\\mu_l)^2}{2\\sigma^2}}}$\nDiscriminant function\nOur prediction should be the category with the largest probability at $x$. In other words we want to choose category $k$ that maximizes $P(Y=k|X=x)$. We can therefore ignore the denomenator in the equation above. This is the same as maximizing:\n$e^\\frac{-(x-\\mu_k)^2}{2\\sigma^2}$\nSince $\\log$ is monotonic, this is the same as maximizing:\n$\\frac{-(x-\\mu_k)^2}{2\\sigma^2} = \\frac{-(x^2 - 2x\\mu_k + \\mu_k)^2}{2\\sigma^2}$\n...which is the same as maximizing:\n$\\delta(x) = \\frac{2x\\mu_k}{2\\sigma^2} - \\frac{\\mu_k^2}{2\\sigma^2} $\n$\\delta(x)$ is the discriminant function. 
In Quadratic Discriminant Analysis, the $x$ term becomes $x^2$.\nIn the $k=2$ case, the boundary point $x^*$ (where our predictions flip) is found by setting:\n$\\delta_1(x^*) = \\delta_2(x^*)$\n$\\frac{2x\\mu_1}{2\\sigma^2} - \\frac{\\mu_1^2}{2\\sigma^2} = \\frac{2x\\mu_2}{2\\sigma^2} - \\frac{\\mu_2^2}{2\\sigma^2}$\n...\n$x^* = \\frac{\\mu_1 + \\mu_2}{2}$\n...which is precisely halfway between the two means. This makes sense, since the variance is equal in both groups.\nIn practice, we estimate $\\mu_k$ with the group sample mean, $\\hat{\\mu}_k = \\bar{x}_k$.", "class LDAModel_1D(object):\n \"\"\"\n Linear Discriminant Analysis with one predictor.\n \n Parameters\n ----------\n X_bound : list\n Boundary points between categories in ``K_ordered``.\n K_ordered : list\n Categories, ordered by mean.\n \"\"\"\n def __init__(self, mu, sigma, K_labels):\n assert len(mu) == len(sigma)\n assert len(K_labels) == len(mu)\n \n self.K = len(K_labels)\n self.K_labels = K_labels\n self.mu = mu\n self.sigma = sigma\n \n def find_bounds(self):\n # Order the categories by mean; each boundary is halfway between adjacent means.\n order = np.argsort(np.array(self.mu))\n self.K_ordered = [self.K_labels[i] for i in order]\n mu_ordered = [self.mu[i] for i in order]\n self.X_bound = []\n for i in xrange(1, self.K):\n mu_0, mu_1 = mu_ordered[i-1], mu_ordered[i]\n self.X_bound.append(mu_0 + ((mu_1 - mu_0)/2.))\n \n def _predict(self, x):\n for i in xrange(self.K):\n if i == 0:\n comp = lambda x: x <= self.X_bound[0]\n elif i == self.K - 1:\n comp = lambda x: x >= self.X_bound[-1]\n else:\n comp = lambda x: self.X_bound[i-1] < x < self.X_bound[i]\n if comp(x):\n return self.K_ordered[i]\n \n def predict(self, x, criterion=None):\n if criterion:\n return self.K_labels[criterion(self.posterior(x))]\n return self.K_labels[np.argmax(self.posterior(x))]\n \n def posterior(self, x):\n post_values = [stats.norm.pdf(x, loc=self.mu[i], scale=self.sigma[i]) \n for i in xrange(self.K)]\n return [pv/sum(post_values) for pv in post_values]\n\ndef lda(K_x, X):\n \"\"\"\n Calculate the boundary points between categories.\n \n Parameters\n ----------\n K_x : list\n Known category for each observation.\n X : list\n Observations of a continuous variable.\n \n Returns\n -------\n model : :class:`.LDAModel_1D`\n \"\"\"\n \n K = set(K_x)\n X_grouped = {k:[] for k in list(K)}\n for k, x in zip(K_x, X):\n X_grouped[k].append(x)\n K_labels, mu = zip(*[(k, mean(v)) for k,v in X_grouped.iteritems()])\n sigma = [mean([np.var(v) for v in X_grouped.values()]) for i in xrange(len(K_labels))]\n\n return LDAModel_1D(mu, sigma, K_labels)\n\nX = np.linspace(0, 20, 100)\ndef f(x):\n if x < 7:\n return 'a', 2. 
+ np.random.random()\n elif x < 14:\n return 'b', 4 + np.random.random()\n else:\n return 'c', 6 + np.random.random()\nK, Y = zip(*[f(x) for x in X])\n\nmodel = lda(K, X)", "Iris Example", "iris = pd.read_csv('data/iris.csv')\n\niris_training = pd.concat([iris[iris.Species == 'setosa'].sample(25, random_state=8675309),\n iris[iris.Species == 'versicolor'].sample(25, random_state=8675309),\n iris[iris.Species == 'virginica'].sample(25, random_state=8675309)])\n\niris_test = iris.loc[iris.index.difference(iris_training.index)]\n\niris_training.groupby('Species')['Sepal.Length'].hist()\nplt.show()\n\nmodel = lda(iris_training.Species, iris_training['Sepal.Length'])\n\npredictions = np.array([model.predict(x) for x in iris_test['Sepal.Length']])\ntruth = iris_test['Species'].values\n\nresults = pd.DataFrame(np.array([predictions, truth]).T, \n columns=['Prediction', 'Truth'])\nvcounts = results.groupby('Prediction').Truth.value_counts()\nvcounts_dense = np.zeros((3,3))\nfor i in xrange(model.K):\n k_i = model.K_labels[i]\n for j in xrange(model.K):\n k_j = model.K_labels[j]\n try:\n vcounts_dense[i,j] = vcounts[k_i][k_j]\n except KeyError:\n pass\ncomparison = pd.DataFrame(vcounts_dense, columns=model.K_labels)\ncomparison['Truth'] = model.K_labels\ncomparison \n\nx = stats.norm.rvs(loc=4, scale=1.3, size=200)\n\ndef qda(K_x, X):\n K = set(K_x)\n X_grouped = {k:[] for k in list(K)}\n for k, x in zip(K_x, X):\n X_grouped[k].append(x)\n \n # Maximize f to find mu and sigma\n params_k = {}\n for k, x in X_grouped.iteritems():\n guess = (np.mean(x), np.std(x))\n \n # Variance must be greater than 0.\n constraints = {'type': 'eq', 'fun': lambda params: params[1] > 0}\n f = lambda params: np.sum(((-1.*(x - params[0])**2)/(2.*params[1]**2)) - np.log(params[1]*np.sqrt(2.*np.pi)))\n params_k[k] = optimize.minimize(lambda params: -1.*f(params), guess, constraints=constraints).x\n\n K_ordered = np.array(params_k.keys())[np.argsort(np.array(zip(*params_k.values())[0]))]\n X_bound = []\n for i in xrange(1, len(K_ordered)):\n k_0, k_1 = K_ordered[i-1], K_ordered[i]\n mu_0, sigma2_0 = params_k[k_0]\n mu_1, sigma2_1 = params_k[k_1]\n delta_0 = lambda x: ((-1.*(x - mu_0)**2)/(2.*sigma2_0**2)) - np.log(sigma2_0*np.sqrt(2.*np.pi))\n delta_1 = lambda x: ((-1.*(x - mu_1)**2)/(2.*sigma2_1**2)) - np.log(sigma2_1*np.sqrt(2.*np.pi))\n bound = lambda x: np.abs(delta_0(x) - delta_1(x))\n o = optimize.minimize(bound, mu_0 + (mu_1-mu_0))\n X_bound.append(o.x)\n\n mu, sigma = zip(*params_k.values())\n return LDAModel_1D(mu, sigma, params_k.keys())\n\nqmodel = qda(iris_training.Species, iris_training['Sepal.Length'])", "$P(Y=k|X=x) \\propto \\frac{-(x-\\mu_k)^2}{2\\sigma_k^2} - log(\\sigma_k\\sqrt{2\\pi})$\n$-\\sum_{x \\in X} \\frac{-(x-\\mu_k)^2}{2\\sigma_k^2} - log(\\sigma_k\\sqrt{2\\pi})$\n$\\bigg|\\bigg(\\frac{-(x-\\mu_k)^2}{2\\sigma_k^2} - log(\\sigma_k\\sqrt{2\\pi})\\bigg) - \\bigg(\\frac{-(x-\\mu_{k'})^2}\n{2\\sigma_{k'}^2} - log(\\sigma_{k'}\\sqrt{2\\pi})\\bigg)\\bigg|$", "qpredictions = np.array([qmodel.predict(x) for x in iris_test['Sepal.Length']])\n\nplt.figure(figsize=(15, 5))\nX_ = np.linspace(0, 20, 200)\niris_training.groupby('Species')['Sepal.Length'].hist()\n# iris_test.groupby('Species')['Sepal.Length'].hist()\n\nax = plt.gca()\nax2 = ax.twinx()\nfor k in qmodel.K_labels:\n i = qmodel.K_labels.index(k)\n ax2.plot(X_, stats.norm.pdf(X_, loc=qmodel.mu[i], scale=qmodel.sigma[i]), \n label='{0}, $\\mu={1}$, $\\sigma={2}$'.format(k, qmodel.mu[i], qmodel.sigma[i]), lw=4)\n\n\nplt.legend(loc=2)\nplt.xlim(2, 
9)\nplt.show()\n\nresults = pd.DataFrame(np.array([qpredictions, truth]).T, \n columns=['Prediction', 'Truth'])\nvcounts = results.groupby('Prediction').Truth.value_counts()\nvcounts_dense = np.zeros((3,3))\nfor i in xrange(qmodel.K):\n k_i = qmodel.K_labels[i]\n for j in xrange(qmodel.K):\n k_j = qmodel.K_labels[j]\n try:\n vcounts_dense[i,j] = vcounts[k_i][k_j]\n except KeyError:\n pass\ncomparison = pd.DataFrame(vcounts_dense, columns=qmodel.K_labels)\ncomparison['Truth'] = qmodel.K_labels\ncomparison \n\nc = np.array(zip(qpredictions, truth)).T\nfloat((c[0] == c[1]).sum())/c.shape[1]\n\nHemocrit = pd.read_csv('data/Hemocrit.csv')\n\nmodel = lda(Hemocrit.status, Hemocrit.hemocrit)", "The default approach was to predict 'Cheat' when $P(Cheater\\big|X) > 0.5$.", "# Histogram of hemocrit values for cheaters and non-cheaters.\nHemocrit[Hemocrit.status == 'Cheat'].hemocrit.hist(histtype='step')\nHemocrit[Hemocrit.status == 'Clean'].hemocrit.hist(histtype='step')\nplt.ylim(0, 40)\nplt.ylabel('N')\n\n# Probability of being a cheater (or not) as a function of hemocrit.\nax = plt.gca()\nax2 = ax.twinx()\n\nR = np.linspace(0, 100, 500)\npost = np.array([model.posterior(r) for r in R])\nax2.plot(R, post[:, 0], label=model.K_labels[0])\nax2.plot(R, post[:, 1], label=model.K_labels[1])\nplt.ylabel('P(Y=k)')\n\nplt.xlabel('Hemocrit')\nplt.legend()\nplt.xlim(40, 60)\nplt.title('Criterion: P > 0.5')\nplt.show()", "Confusion matrix:", "predictions = [model.predict(h) for h in Hemocrit.hemocrit]\ntruth = Hemocrit.status.values\nconfusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))\n\nconfusion.groupby('Prediction').Truth.value_counts()", "Trying the same thing, but with QDA:", "qmodel = qda(Hemocrit.status, Hemocrit.hemocrit)\nqpredictions = np.array([qmodel.predict(h) for h in Hemocrit.hemocrit])\ntruth = Hemocrit.status.values\nqconfusion = pd.DataFrame(np.array([qpredictions, truth]).T, columns=('Prediction', 'Truth'))", "Receiver Operating Characteristic (ROC) curve\nProvides a visual summary of the confusion matrix over a range of criteria. Given a confusion matrix, $N=TN+FP$, $P=TP+FN$.", "plt.figure(figsize=(5, 5))\nplt.text(0.25, 0.75, 'TN', size=18)\nplt.text(0.75, 0.75, 'FP', size=18)\nplt.text(0.25, 0.25, 'FN', size=18)\nplt.text(0.75, 0.25, 'TP', size=18)\n\nplt.xticks([0.25, 0.75], ['Neg', 'Pos'], size=20)\nplt.yticks([0.25, 0.75], ['Pos', 'Neg'], size=20)\nplt.ylabel('Truth', size=24)\nplt.xlabel('Prediction', size=24)\nplt.title('Confusion Matrix', size=26)\nplt.show()", "The true positive rate, or Power (or Sensitivity), is $\\frac{TP}{P}$ and the Type 1 error is $\\frac{FP}{N}$. The ROC curve shows Power vs. Type 1 error. 
Ideally, we can achieve a high true positive rate at a very low false positive rate:", "plt.figure()\nX = np.linspace(0., 0.5, 200)\nf = lambda x: 0.001 if x < 0.01 else 0.8\nplt.plot(X, map(f, X))\nplt.ylabel('True positive rate (power)')\nplt.xlabel('False positive rate (type 1 error)')\nplt.show()", "With the hemocrit example:", "ROC = []\nC = []\nfor p in np.arange(0.5, 1.0, 0.005):\n criterion = lambda posterior: 0 if posterior[0] > p else 1\n predictions = [model.predict(h, criterion) for h in Hemocrit.hemocrit]\n truth = Hemocrit.status.values\n confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))\n \n FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0]\n N = confusion[confusion['Truth'] == 'Clean'].shape[0]\n FP_rate = float(FP)/N\n\n TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0]\n P = confusion[confusion['Truth'] == 'Cheat'].shape[0]\n TP_rate = float(TP)/P\n ROC.append((FP_rate, TP_rate))\n C.append(p)\n\nplt.title('ROC curve for LDA')\nFP_rate, TP_rate = zip(*ROC)\nplt.plot(FP_rate, TP_rate)\nfor i in xrange(0, len(FP_rate), 10):\n plt.plot(FP_rate[i], TP_rate[i], 'ro')\n plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i])\nplt.xlim(-0.01, 0.14)\nplt.ylim(0, .7)\nplt.ylabel('True positive rate (power)')\nplt.xlabel('False positive rate (type 1 error)')\nplt.show()\n\nQROC = []\nC = []\nfor p in np.arange(0.5, 1.0, 0.005):\n criterion = lambda posterior: 0 if posterior[0] > p else 1\n predictions = [qmodel.predict(h, criterion) for h in Hemocrit.hemocrit]\n truth = Hemocrit.status.values\n confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))\n \n FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0]\n N = confusion[confusion['Truth'] == 'Clean'].shape[0]\n FP_rate = float(FP)/N\n\n TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0]\n P = confusion[confusion['Truth'] == 'Cheat'].shape[0]\n TP_rate = float(TP)/P\n QROC.append((FP_rate, TP_rate))\n C.append(p)\n\nplt.title('ROC curve for QDA')\nFP_rate, TP_rate = zip(*QROC)\nplt.plot(FP_rate, TP_rate)\nfor i in xrange(0, len(FP_rate), 10):\n plt.plot(FP_rate[i], TP_rate[i], 'ro')\n plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i])\nplt.xlim(-0.01, 0.14)\nplt.ylim(0, .7)\nplt.ylabel('True positive rate (power)')\nplt.xlabel('False positive rate (type 1 error)')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tjwei/HackNTU_Data_2017
Week11/DIY_AI/FeedForward-Forward Propagation.ipynb
mit
[ "import numpy as np\n\n%run magic.ipynb", "Feedforward Network\n一樣有輸入 x, 輸出 y。 但是中間預測、計算的樣子有點不同。\n<img src=\"https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif\" />\n模型是這樣的\n一樣考慮輸入是四維向量,輸出有 3 個類別。\n我們的輸入 $x=\\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\end{pmatrix} $ 是一個向量,我們看成 column vector 好了\n第 0 層\n而 Weight: $ \nW^{(0)} = \\begin{pmatrix} W^{(0)}0 \\ W^{(0)}_1 \\ W^{(0)}_2 \\ W^{(0)}_3 \\ W^{(0)}_4 \\ W^{(0)}_5 \\end{pmatrix} =\n\\begin{pmatrix} \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}{0,3}\\ \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}{0,3}\\ \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}{0,3}\\ \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}{0,3}\\ \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}{0,3}\\ \nW^{(0)}{0,0} & W^{(0)}{0,1} & W^{(0)}{0,2} & W^{(0)}_{0,3}\n \\end{pmatrix} $\nBias: $b^{(0)}=\\begin{pmatrix} b^{(0)}_0 \\ b^{(0)}_1 \\ b^{(0)}_2 \\ b^{(0)}_3 \\ b^{(0)}_4 \\ b^{(0)}_5 \\end{pmatrix} $ \n我們先計算\"線性輸出\" $ c^{(0)} = \\begin{pmatrix} c^{(0)}_0 \\ c^{(0)}_1 \\ c^{(0)}_2 \\ c^{(0)}_3 \\ c^{(0)}_4 \\ c^{(0)}_5 \\end{pmatrix} = W^{(0)}x+b^{(0)} =\n\\begin{pmatrix} W^{(0)}_0 x + b^{(0)}_0 \\ W^{(0)}_1 x + b^{(0)}_1 \\ W^{(0)}_2 x + b^{(0)}_2 \\\nW^{(0)}_3 x + b^{(0)}_3 \\ W^{(0)}_4 x + b^{(0)}_4 \\ W^{(0)}_5 x + b^{(0)}_5 \\end{pmatrix} $, \n然後再將結果逐項對一個非線性的函數 $f$ 最後得到一個向量。\n$d^{(0)} = \\begin{pmatrix} d^{(0)}_0 \\ d^{(0)}_1 \\ d^{(0)}_2 \\ d^{(0)}_3 \\ d^{(0)}_4 \\ d^{(0)}_5 \\end{pmatrix} \n = f({W x + b}) = \\begin{pmatrix} f(c^{(0)}_0) \\ f(c^{(0)}_1) \\ f(c^{(0)}_2) \\ f(c^{(0)}_3) \\ f(c^{(0)}_4) \\ f(c^{(0)}_5) \\end{pmatrix} $\n這裡的 $f$ 常常會用 sigmoid , tanh,或者 ReLU ( https://en.wikipedia.org/wiki/Activation_function )。\n第 1 層\n這裡接到輸出,其實和 softmax regression 一樣。\n只是輸入變成 $d^{(0)}, Weight 和 Bias 現在叫做 W^{(1)} 和 b^{(1)} \n因為維度改變,現在 W^{(1)} 是 3x6 的矩陣。 後面接到的輸出都一樣。\n所以線性輸出\n$ c^{(1)} = W^{(1)} d^{(0)} + b^{(1)} $\n$ d^{(1)} = e^{c^{(1)}} $\n當輸入為 x, 最後的 softmax 預測類別是 i 的機率為\n$q_i = Predict_{W^{(0)}, W^{(1)}, b^{(0)}, b^{(1)}}(Y=i|x) = \\frac {d^{(1)}_i} {\\sum_j d^{(1)}_j}$\n合起來看,就是 $q = \\frac {d^{(1)}} {\\sum_j d^{(1)}_j}$\n問題\n我們簡化一下算式,如果 $W^{(0)}, W^{(1)}, b^{(0)}, b^{(1)}$ 設定為 $A, b, C, d$ (C, d 與前面無關),請簡化成為一個算式。\n可以利用 softmax function 的符號\n$\\sigma (\\mathbf {z} ){j}={\\frac {e^{z{j}}}{\\sum k e^{z{k}}}}$", "# 參考答案\n%run solutions/ff_oneline.py", "任務:計算最後的猜測機率 $q$\n設定:輸入 4 維, 輸出 3 維, 隱藏層 6 維\n* 設定一些權重 $A,b,C,d$ (隨意自行填入,或者用 np.random.randint(-2,3, size=...))\n* 設定輸入 $x$ (隨意自行填入,或者用 np.random.randint(-2,3, size=...))\n* 自行定義 relu, sigmoid 函數 (Hint: np.maximum)\n* 算出隱藏層 $z$\n* 自行定義 softmax\n* 算出最後的 q", "# 請在這裡計算\nnp.random.seed(1234)\n\n\n\n# 參考答案,設定權重\n%run -i solutions/ff_init_variables.py\ndisplay(A)\ndisplay(b)\ndisplay(C)\ndisplay(d)\ndisplay(x)\n\n# 參考答案 定義 relu, sigmoid 及計算 z\n%run -i solutions/ff_compute_z.py\ndisplay(z_relu)\ndisplay(z_sigmoid)\n\n# 參考答案 定義 softmax 及計算 q\n%run -i solutions/ff_compute_q.py\ndisplay(q_relu)\ndisplay(q_sigmoid)\n", "練習\n設計一個網路:\n* 輸入是二進位 0 ~ 15\n* 輸出依照對於 3 的餘數分成三類", "# Hint 下面產生數字 i 的 2 進位向量\ni = 13\nx = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)\nx\n\n# 請在這裡計算\n\n\n\n# 參考解答\n%run -i solutions/ff_mod3.py", "練習\n設計一個網路來判斷井字棋是否有連成直線(只需要判斷其中一方即可):\n* 輸入是 9 維向量,0 代表空格,1 代表有下子 \n* 輸出是二維(softmax)或一維(sigmoid)皆可,用來代表 True, False\n有連線的例子\n```\nX\nX__\nXXX\nXXX\nXX_\n_XX\nX\n_XX\nX\n```\n沒連線的例子\n```\nXX_\nX__\n_XX\nX\nXX_\nX_X\n__X\nXX\n_X\n```", "# 請在這裡計算\n\n\n\n#參考答案\n%run -i solutions/ff_tic_tac_toe.py\n\n# 測試你的答案\ndef my_result(x):\n # return 0 means no, 1 means yes\n return 
(C@relu(A@x+b)+d).argmax()\n # or sigmoid based\n # return (C@relu(A@x+b)+d) > 0\n\ndef truth(x):\n x = x.reshape(3,3)\n return (x.all(axis=0).any() or\n x.all(axis=1).any() or\n x.diagonal().all() or\n x[::-1].diagonal().all())\n\nfor i in range(512):\n x = np.array([[(i>>j)&1] for j in range(9)])\n assert my_result(x) == truth(x)\nprint(\"test passed\")" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
QInfer/qinfer-examples
qinfer-1.0-paper.ipynb
agpl-3.0
[ "QInfer: Statistical inference software for quantum applications\nExamples and Figures\nChristopher Granade, Christopher Ferrie, Ian Hincks, Steven Casagrande, Thomas Alexander, Jonathan Gross, Michal Kononenko, Yuval Sanders\nPreamble\n\nThis section contains commands needed for formatting as a Jupyter Notebook.", "from __future__ import division, print_function\n%matplotlib inline\n\nfrom qinfer import *\nimport os\nimport numpy as np\nfrom scipy.linalg import expm\nimport matplotlib.pyplot as plt\n\ntry:\n plt.style.use('ggplot-rq')\nexcept IOError:\n try:\n plt.style.use('ggplot')\n except:\n raise RuntimeError('Cannot set the style. Likely cause is out of date matplotlib; >= 1.4 required.')\n\npaperfig_path = os.path.abspath(os.path.join('..', 'fig'))\n\ndef paperfig(name):\n plt.savefig(os.path.join(paperfig_path, name + '.png'), format='png', dpi=200)\n plt.savefig(os.path.join(paperfig_path, name + '.pdf'), format='pdf', bbox_inches='tight')", "Applications in Quantum Information\n\nPhase and Frequency Learning", ">>> from qinfer import *\n>>> model = SimplePrecessionModel()\n>>> prior = UniformDistribution([0, 1])\n>>> n_particles = 2000\n>>> n_experiments = 100\n>>> updater = SMCUpdater(model, n_particles, prior)\n>>> heuristic = ExpSparseHeuristic(updater)\n>>> true_params = prior.sample()\n>>> for idx_experiment in range(n_experiments):\n... experiment = heuristic()\n... datum = model.simulate_experiment(true_params, experiment)\n... updater.update(datum, experiment)\n>>> print(updater.est_mean())\n\nmodel = SimplePrecessionModel()\nprior = UniformDistribution([0, 1])\nupdater = SMCUpdater(model, 2000, prior)\nheuristic = ExpSparseHeuristic(updater)\ntrue_params = prior.sample()\nest_hist = []\nfor idx_experiment in range(100):\n experiment = heuristic()\n datum = model.simulate_experiment(true_params, experiment)\n updater.update(datum, experiment)\n est_hist.append(updater.est_mean())\nplt.plot(est_hist, label='Est.')\nplt.hlines(true_params, 0, 100, label='True')\nplt.legend(ncol=2)\nplt.xlabel('# of Experiments Performed')\nplt.ylabel(r'$\\omega$')\npaperfig('freq-est-updater-loop')", "State and Process Tomography", "import matplotlib\n\nos.path.join(matplotlib.get_configdir(), 'stylelib')\n\n>>> from qinfer import *\n>>> from qinfer.tomography import *\n>>> basis = pauli_basis(1) # Single-qubit Pauli basis.\n>>> model = TomographyModel(basis)\n>>> prior = GinibreReditDistribution(basis)\n>>> updater = SMCUpdater(model, 8000, prior)\n>>> heuristic = RandomPauliHeuristic(updater)\n>>> true_state = prior.sample()\n>>>\n>>> for idx_experiment in range(500):\n>>> experiment = heuristic()\n>>> datum = model.simulate_experiment(true_state, experiment)\n>>> updater.update(datum, experiment)\n>>> \n>>> plt.figure(figsize=(4, 4))\n>>> plot_rebit_posterior(updater, true_state=true_state, rebit_axes=[1, 3], legend=False, region_est_method='hull')\n>>> plt.legend(ncol=1, numpoints=1, scatterpoints=1, bbox_to_anchor=(1.9, 0.5), loc='right')\n>>> plt.xticks([-1, 0, 1])\n>>> plt.yticks([-1, 0, 1])\n>>> plt.xlabel(r'$\\operatorname{Tr}(\\sigma_x \\rho)$')\n>>> plt.ylabel(r'$\\operatorname{Tr}(\\sigma_z \\rho)$')\n>>> paperfig('rebit-tomo')", "Randomized Benchmarking", ">>> from qinfer import *\n>>> import numpy as np\n>>> p, A, B = 0.95, 0.5, 0.5\n>>> ms = np.linspace(1, 800, 201).astype(int)\n>>> signal = A * p ** ms + B\n>>> n_shots = 25\n>>> counts = np.random.binomial(p=signal, n=n_shots)\n>>> data = np.column_stack([counts, ms, n_shots * np.ones_like(counts)])\n>>> mean, cov = 
simple_est_rb(data, n_particles=12000, p_min=0.8)\n>>> print(mean, np.sqrt(np.diag(cov)))\n\nfrom qinfer import *\nimport numpy as np\np, A, B = 0.95, 0.5, 0.5\nms = np.linspace(1, 800, 201).astype(int)\nsignal = A * p ** ms + B\nn_shots = 25\ncounts = np.random.binomial(p=signal, n=n_shots)\ndata = np.column_stack([counts, ms, n_shots * np.ones_like(counts)])\nmean, cov, extra = simple_est_rb(data, n_particles=12000, p_min=0.8, return_all=True)\n\nfig, axes = plt.subplots(ncols=2, figsize=(8, 3))\n\nplt.sca(axes[0])\nextra['updater'].plot_posterior_marginal(range_max=1)\nplt.xlim(xmax=1)\nylim = plt.ylim(ymin=0)\nplt.vlines(p, *ylim)\nplt.ylim(*ylim);\nplt.legend(['Posterior', 'True'], loc='upper left', ncol=1)\n\nplt.sca(axes[1])\nextra['updater'].plot_covariance()\n\npaperfig('rb-simple-est')", "Additional Functionality\n\nDerived Models", ">>> from qinfer import *\n>>> import numpy as np\n>>> model = BinomialModel(SimplePrecessionModel())\n>>> n_meas = 25\n>>> prior = UniformDistribution([0, 1])\n>>> updater = SMCUpdater(model, 2000, prior)\n>>> true_params = prior.sample()\n>>> for t in np.linspace(0.1,20,20):\n... experiment = np.array([(t, n_meas)], dtype=model.expparams_dtype)\n... datum = model.simulate_experiment(true_params, experiment)\n... updater.update(datum, experiment)\n>>> print(updater.est_mean())\n\nmodel = BinomialModel(SimplePrecessionModel())\nn_meas = 25\nprior = UniformDistribution([0, 1])\nupdater = SMCUpdater(model, 2000, prior)\nheuristic = ExpSparseHeuristic(updater)\ntrue_params = prior.sample()\nest_hist = []\nfor t in np.linspace(0.1, 20, 20):\n experiment = np.array([(t, n_meas)], dtype=model.expparams_dtype)\n datum = model.simulate_experiment(true_params, experiment)\n updater.update(datum, experiment)\n est_hist.append(updater.est_mean())\nplt.plot(est_hist, label='Est.')\nplt.hlines(true_params, 0, 20, label='True')\nplt.legend(ncol=2)\nplt.xlabel('# of Times Sampled (25 measurements/ea)')\nplt.ylabel(r'$\\omega$')\npaperfig('derived-model-updater-loop')", "Time-Dependent Models", ">>> from qinfer import *\n>>> import numpy as np\n>>> prior = UniformDistribution([0, 1])\n>>> true_params = np.array([[0.5]])\n>>> n_particles = 2000\n>>> model = RandomWalkModel(\n... BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2))\n>>> updater = SMCUpdater(model, n_particles, prior)\n>>> t = np.pi / 2\n>>> n_meas = 40\n>>> expparams = np.array([(t, n_meas)], dtype=model.expparams_dtype)\n>>> for idx in range(1000):\n... datum = model.simulate_experiment(true_params, expparams)\n... true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1)\n... 
updater.update(datum, expparams)\n\nprior = UniformDistribution([0, 1])\ntrue_params = np.array([[0.5]])\nmodel = RandomWalkModel(BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2))\nupdater = SMCUpdater(model, 2000, prior)\nexpparams = np.array([(np.pi / 2, 40)], dtype=model.expparams_dtype)\n\ndata_record = []\ntrajectory = []\nestimates = []\n\nfor idx in range(1000):\n datum = model.simulate_experiment(true_params, expparams)\n true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1)\n \n updater.update(datum, expparams)\n\n data_record.append(datum)\n trajectory.append(true_params[0, 0])\n estimates.append(updater.est_mean()[0])\n\nts = 40 * np.pi / 2 * np.arange(len(data_record)) / 1e3\nplt.plot(ts, trajectory, label='True')\nplt.plot(ts, estimates, label='Estimated')\nplt.xlabel(u'$t$ (µs)')\nplt.ylabel(r'$\\omega$ (GHz)')\nplt.legend(ncol=2)\npaperfig('time-dep-rabi')", "Performance and Robustness Testing", "performance = perf_test_multiple(\n # Use 100 trials to estimate expectation over data.\n 100, \n # Use a simple precession model both to generate,\n # data, and to perform estimation.\n SimplePrecessionModel(),\n # Use 2,000 particles and a uniform prior.\n 2000, UniformDistribution([0, 1]),\n # Take 50 measurements with $t_k = ab^k$.\n 50, ExpSparseHeuristic\n)\n# Calculate the Bayes risk by taking a mean over the trial index.\nrisk = np.mean(performance['loss'], axis=0)\nplt.semilogy(risk)\nplt.xlabel('# of Experiments Performed')\nplt.ylabel('Bayes Risk')\npaperfig('bayes-risk')", "Parallelization\nHere, we demonstrate parallelization with ipyparallel and the DirectViewParallelizedModel model.\nFirst, create a model which is not designed to be useful, but rather to be expensive to evaluate a single likelihood.", "class ExpensiveModel(FiniteOutcomeModel):\n \"\"\"\n The likelihood of this model randomly generates \n a dim-by-dim conjugate-symmetric matrix for every expparam and \n modelparam, exponentiates it, and returns \n the overlap with the |0> state.\n \"\"\"\n def __init__(self, dim=36):\n super(ExpensiveModel, self).__init__()\n self.dim=dim\n \n @property\n def n_modelparams(self): \n return 2\n @property\n def expparams_dtype(self): \n return 'float'\n def n_outcomes(self, expparams): \n return 2\n def are_models_valid(self, mps):\n return np.ones(mps.shape).astype(bool)\n \n def prob(self):\n # random symmetric matrix\n mat = np.random.rand(self.dim, self.dim)\n mat += mat.T\n # and exponentiate resulting square matrix\n mat = expm(1j * mat)\n # compute overlap with |0> state\n return np.abs(mat[0,0])**2\n \n def likelihood(self, outcomes, mps, eps): \n \n # naive for loop.\n pr0 = np.empty((mps.shape[0], eps.shape[0]))\n for idx_eps in range(eps.shape[0]):\n for idx_mps in range(mps.shape[0]):\n pr0[idx_mps, idx_eps] = self.prob()\n \n # compute the prob of each outcome by taking pr0 or 1-pr0\n return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, pr0)", "Now, we can use Jupyter's %timeit magic to see how long it takes, for example, to compute the likelihood 5x1000x10=50000 times.", "emodel = ExpensiveModel(dim=16)\n%timeit -q -o -n1 -r1 emodel.likelihood(np.array([0,1,0,0,1]), np.zeros((1000,1)), np.zeros((10,1)))", "Next, we initialize the Client which communicates with the parallel processing engines.\nIn the accompaning paper, this code was run on a single machine with dual \"Intel(R) Xeon(R) CPU X5675 @ 3.07GHz\" processors, for a total of 12 physical cores, and therefore, 24 engines were online.", "# Do 
not demand that ipyparallel be installed, or ipengines be running;\n# instead, fail silently.\nrun_parallel = True\ntry:\n from ipyparallel import Client\n import dill\n rc = Client() # set profile here if desired\n dview = rc[:]\n dview.execute('from qinfer import *')\n dview.execute('from scipy.linalg import expm')\n print(\"Number of engines available: {}\".format(len(dview)))\nexcept:\n run_parallel = False\n print('Parallel Engines or libraries could not be initialized; Parallel section will not be evaluated.')", "Finally, we run the parallel tests, looping over different numbers of engines used.", "if run_parallel:\n par_n_particles = 5000\n \n par_test_outcomes = np.array([0,1,0,0,1])\n par_test_modelparams = np.zeros((par_n_particles, 1)) # only the shape matters\n par_test_expparams = np.zeros((10, 1)) # only the shape matters\n \n def compute_L(model):\n model.likelihood(par_test_outcomes, par_test_modelparams, par_test_expparams)\n \n serial_time = %timeit -q -o -n1 -r1 compute_L(emodel)\n serial_time = serial_time.all_runs[0]\n \n n_engines = np.arange(2,len(dview)+1,2)\n par_time = np.zeros(n_engines.shape[0])\n \n for idx_ne, ne in enumerate(n_engines):\n dview_test = rc[:ne]\n dview_test.use_dill()\n par_model = DirectViewParallelizedModel(emodel, dview_test, serial_threshold=1)\n\n result = %timeit -q -o -n1 -r1 compute_L(par_model)\n par_time[idx_ne] = result.all_runs[0]", "And plot the results.", "if run_parallel:\n fig = plt.figure()\n plt.plot(np.concatenate([[1], n_engines]), np.concatenate([[serial_time], par_time])/serial_time,'-o')\n ax = plt.gca()\n ax.set_xscale('log', basex=2)\n ax.set_yscale('log', basey=2)\n plt.xlim([0.8, np.max(n_engines)+2])\n plt.ylim([2**-4,1.2])\n plt.xlabel('# Engines')\n plt.ylabel('Normalized Computation Time')\n par_xticks = [1,2,4,8,12,16,24]\n ax.set_xticks(par_xticks)\n ax.set_xticklabels(par_xticks)\n paperfig('parallel-likelihood')", "Appendices\n\nCustom Models", "from qinfer import FiniteOutcomeModel\nimport numpy as np\n\nclass MultiCosModel(FiniteOutcomeModel):\n\n @property\n def n_modelparams(self):\n return 2\n\n @property\n def is_n_outcomes_constant(self):\n return True\n\n def n_outcomes(self, expparams):\n return 2\n\n def are_models_valid(self, modelparams):\n return np.all(np.logical_and(modelparams > 0, modelparams <= 1), axis=1)\n\n @property\n def expparams_dtype(self):\n return [('ts', 'float', 2)]\n\n def likelihood(self, outcomes, modelparams, expparams):\n super(MultiCosModel, self).likelihood(outcomes, modelparams, expparams)\n pr0 = np.empty((modelparams.shape[0], expparams.shape[0]))\n\n w1, w2 = modelparams.T\n t1, t2 = expparams['ts'].T\n\n for idx_model in range(modelparams.shape[0]):\n for idx_experiment in range(expparams.shape[0]):\n pr0[idx_model, idx_experiment] = (\n np.cos(w1[idx_model] * t1[idx_experiment] / 2) *\n np.cos(w2[idx_model] * t2[idx_experiment] / 2)\n ) ** 2\n\n return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, pr0)\n\n>>> mcm = MultiCosModel()\n>>> modelparams = np.dstack(np.mgrid[0:1:100j,0:1:100j]).reshape(-1, 2)\n>>> expparams = np.empty((81,), dtype=mcm.expparams_dtype)\n>>> expparams['ts'] = np.dstack(np.mgrid[1:10,1:10] * np.pi / 2).reshape(-1, 2)\n>>> D = mcm.simulate_experiment(modelparams, expparams, repeat=2)\n>>> print(isinstance(D, np.ndarray))\nTrue\n>>> D.shape == (2, 10000, 81)\nTrue" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/compass
packages/propensity/12.cleanup.ipynb
apache-2.0
[ "# Copyright 2022 Google LLC.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "12. Cleanup BigQuery artifacts\nThis notebook helps to clean up interim tables generated while executing notebooks from 01 to 09.\nImport required modules", "# Add custom utils module to Python environment.\nimport os\nimport sys\nsys.path.append(os.path.abspath(os.pardir))\n\nfrom google.cloud import bigquery\nfrom utils import helpers", "Set parameters", "# Get GCP configurations.\nconfigs = helpers.get_configs('config.yaml')\ndest_configs = configs.destination\n\n# GCP project ID where queries and other computation will be run.\nPROJECT_ID = dest_configs.project_id\n# BigQuery dataset name to store query results (if needed).\nDATASET_NAME = dest_configs.dataset_name", "List all tables in the BigQuery Dataset", "# Initialize BigQuery Client.\nbq_client = bigquery.Client()\n\nall_tables = []\nfor table in bq_client.list_tables(DATASET_NAME):\n all_tables.append(table.table_id)\n\nprint(all_tables)", "Remove list of tables\nSelect table names from the printed out list in above cell.", "# Define specific tables to remove from the dataset.\ntables_to_delete = ['table1', 'table2']\n\n# Or uncomment below to remove all tables in the dataset.\n# tables_to_delete = all_tables\n\n# Remove tables from BigQuery dataset.\nfor table_id in tables_to_delete:\n bq_client.delete_table(f'{PROJECT_ID}.{DATASET_NAME}.{table_id}')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Algorithms
Python/Chapter-06/Calculator-Frame.ipynb
gpl-2.0
[ "%%HTML\n<style>\n.container { width:100% } \n</style>", "Normally, I would just write\n%run Stack.ipynb\nhere. As this does not work in Deepnote, I have included the implementation of the class Stack here.", "class Stack:\n def __init__(self):\n self.mStackElements = []\n\n def push(self, e):\n self.mStackElements.append(e)\n\n def pop(self):\n assert len(self.mStackElements) > 0, \"popping empty stack\"\n self.mStackElements = self.mStackElements[:-1]\n\n def top(self):\n assert len(self.mStackElements) > 0, \"top of empty stack\"\n return self.mStackElements[-1]\n\n def isEmpty(self):\n return self.mStackElements == []\n\n def copy(self):\n C = Stack()\n C.mStackElements = self.mStackElements[:]\n return C\n\n def __str__(self):\n C = self.copy()\n result = C._convert()\n return result\n\n def _convert(self):\n if self.isEmpty():\n return '|'\n t = self.top()\n self.pop()\n return self._convert() + ' ' + str(t) + ' |'\n\ndef createStack(L):\n S = Stack()\n n = len(L)\n for i in range(n):\n S.push(L[i])\n return S", "The Shunting Yard Algorithm (Operator Precedence Parsing)", "import re", "The function $\\texttt{isWhiteSpace}(s)$ checks whether $s$ contains only blanks and tabulators.", "def isWhiteSpace(s):\n whitespace = re.compile(r'[ \\t]+')\n return whitespace.fullmatch(s)", "The function $\\texttt{toFloat}(s)$ tries to convert the string $s$ to a floating point number. If this works out, this number is returned. Otherwise, the string $s$ is returned unchanged.", "def toFloat(s):\n try:\n return float(s) \n except ValueError:\n return s\n\ntoFloat('0.123')\n\ntoFloat('+')", "The module re provides support for <a href='https://en.wikipedia.org/wiki/Regular_expression'>regular expressions</a>. These are needed for\n<em style=\"color:blue;\">tokenizing</em> a string.\nThe function $\\texttt{tokenize}(s)$ takes a string and splits this string into a list of tokens. 
Whitespace is discarded.", "def tokenize(s):\n regExp = r'''\n 0|[1-9][0-9]* | # integer\n (?:0|[1-9][0-9])+[.][0-9]+ | # floating point number\n \\*\\* | # power operator\n [-+*/()] | # arithmetic operators and parentheses\n [ \\t] | # white space\n sqrt | # square root\n sin | # sine function\n cos | # cosine function\n tan | # tangent function\n asin | # arcus sine\n acos | # arcus cosine\n atan | # arcus tangent\n exp | # exponential function\n log | # natural logarithm\n x | # variable\n e | # Euler's number\n pi # π\n '''\n L = [toFloat(t) for t in re.findall(regExp, s, flags=re.VERBOSE) if not isWhiteSpace(t)]\n return list(reversed(L))\n\ntokenize('x**2 - 2')", "The module math provides a number of mathematical functions like exp, sin, log etc.", "import math", "The function $\\texttt{findZero}(f, a, b, n)$ takes a function $f$ and two numbers $a$ and $b$ such that\n\n$a < b$,\n$f(a) \\leq 0$, and \n$0 \\leq f(b)$.\n\nIt uses the bisection method to find a number $x \\in [a, b]$ such that $f(x) \\approx 0$.", "def findZero(f, a, b, n):\n assert a < b , f'{a} has to be less than b'\n assert f(a) * f(b) <= 0, f'f({a}) * f({b}) > 0'\n if f(a) <= 0 <= f(b):\n for k in range(n):\n c = 0.5 * (a + b) \n # print(f'f({c}) = {f(c)}, {b-a}')\n if f(c) < 0:\n a = c\n elif f(c) > 0:\n b = c\n else:\n return c\n else:\n for k in range(n):\n c = 0.5 * (a + b) \n # print(f'f({c}) = {f(c)}, {b-a}')\n if f(c) > 0:\n a = c\n elif f(c) < 0:\n b = c\n else:\n return c\n return (a + b) / 2\n\ndef f(x):\n return 2 - x ** 2\n\nr = findZero(f, 0, 2, 54)\nr\n\nr * r", "The function $\\texttt{precedence}(o)$ calculates the precedence of the operator $o$.", "def precedence(op):\n \"your code here\"\n assert False, f'unknown operator in precedence: {op}'", "The function $\\texttt{isUnaryOperator}(o)$ returns True if $o$ is a unary operator.", "def isUnaryOperator(op):\n \"your code here\"", "The function $\\texttt{isConstOperator}(o)$ returns True if $o$ is a constant like e or pi. \nThe variable x is also considered as a constant operator.", "def isConstOperator(op):\n \"your code here\"", "The function $\\texttt{isLeftAssociative}(o)$ returns True if $o$ is left associative.", "def isLeftAssociative(op):\n \"your code here\"\n assert False, f'unknown operator in isLeftAssociative: {op}'", "The function $\\texttt{evalBefore}(o_1, o_2)$ receives two strings representing arithmetical operators. It returns True if the operator $o_1$ should be evaluated before the operator $o_2$ in an arithmetical expression of the form $a \\;\\texttt{o}_1\\; b \\;\\texttt{o}_2\\; c$. 
In order to determine whether $o_1$ should be evaluated before $o_2$ it uses the \n<em style=\"color:blue\">precedence</em> and the <em style=\"color:blue\">associativity</em> of the operators.\nIts behavior is specified by the following rules:\n- $\\texttt{precedence}(o_1) > \\texttt{precedence}(o_2) \\rightarrow \\texttt{evalBefore}(\\texttt{o}_1, \\texttt{o}_2) = \\texttt{True}$,\n- $o_1 = o_2 \\wedge \\neg\\texttt{isUnaryOperator}(o_1)\\rightarrow \\texttt{evalBefore}(\\texttt{o}_1, \\texttt{o}_2) = \\texttt{isLeftAssociative}(o_1)$,\n- $\\texttt{isUnaryOperator}(o_1) \\wedge \\texttt{isUnaryOperator}(o_2) \\rightarrow \n \\texttt{evalBefore}(\\texttt{o}_1, \\texttt{o}_2) = \\texttt{False}$,\n- $\\texttt{precedence}(o_1) = \\texttt{precedence}(o_2) \\wedge o_1 \\not= o_2 \\wedge \n \\neg\\texttt{isUnaryOperator}(o_1) \\rightarrow \n \\texttt{evalBefore}(\\texttt{o}_1, \\texttt{o}_2) = \\texttt{True}$,\n- $\\texttt{precedence}(o_1) < \\texttt{precedence}(o_2) \\rightarrow \\texttt{evalBefore}(\\texttt{o}_1, \\texttt{o}_2) = \\texttt{False}$.", "def evalBefore(stackOp, nextOp):\n \"your code here\"\n assert False, f'incomplete case distinction in evalBefore({stackOp}, {nextOp})'", "The class Calculator supports three member variables:\n - the token stack mTokens,\n - the operator stack mOperators,\n - the argument stack mArguments,\n - the floating point number mValue, which is the current value of x.\nThe constructor takes a list of tokens TL and initializes the token stack with these \ntokens.", "class Calculator:\n def __init__(self, TL, x):\n self.mTokens = createStack(TL)\n self.mOperators = Stack()\n self.mArguments = Stack()\n self.mValue = x", "The method __str__ is used to convert an object of class Calculator to a string.", "def toString(self):\n return '\\n'.join(['_'*50, \n 'TokenStack: ' + str(self.mTokens), \n 'Arguments: ' + str(self.mArguments), \n 'Operators: ' + str(self.mOperators), \n '_'*50])\n\nCalculator.__str__ = toString\ndel toString", "The function $\\texttt{evaluate}(\\texttt{self})$ evaluates the expression that is given by the tokens on the mTokenStack.\nThere are two phases:\n1. The first phase is the <em style=\"color:blue\">reading phase</em>. In this phase\n the tokens are removed from the token stack mTokens.\n2. The second phase is the <em style=\"color:blue\">evaluation phase</em>. In this phase,\n the remaining operators on the operator stack mOperators are evaluated. Note that some operators are already \n evaluated in the reading phase.\nWe can describe what happens in the reading phase using \n<em style=\"color:blue\">rewrite rules</em> that describe how the three stacks mTokens, mArguments and mOperators\nare changed in each step. Here, a step is one iteration of the first while-loop of the function evaluate.\nThe following rewrite rules are executed until the token stack mTokens is empty.\n1. 
If the token on top of the token stack is an integer, it is removed from the token stack and pushed onto the argument stack.\n The operator stack remains unchanged in this case.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{token} ] & \\wedge \\\n \\texttt{isInteger}(\\texttt{token}) & \\Rightarrow \\[0.2cm]\n \\texttt{mArguments}' = \\texttt{mArguments} + [\\texttt{token}] & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mOperators}' = \\texttt{mOperators}\n \\end{array} \n $$\n Here, the primed variable $\\texttt{mArguments}'$ refers to the argument stack after $\\texttt{token}$\n has been pushed onto it.\nIn the following rules we implicitly assume that the token on top of the token stack is not an integer but \n rather a parenthesis or a proper operator. In order to be more concise, we suppress this precondition from the \n following rewrite rules.\n2. If the operator stack is empty, the next token is pushed onto the operator stack.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{op} ] & \\wedge \\\n \\texttt{mOperators} = [] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperators} + [\\texttt{op}] & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArguments} \n \\end{array} \n $$\n3. If the next token is an opening parenthesis, this parenthesis token is pushed onto the operator stack.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{'('} ] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperators} + [\\texttt{'('}] & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArguments} \n \\end{array} \n $$\n4. If the next token is a closing parenthesis and the operator on top of the operator stack is an opening parenthesis, then both \n parentheses are removed.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{')'} ] & \\wedge \\\n \\texttt{mOperators} =\\texttt{mOperatorsRest} + [\\texttt{'('}] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperatorsRest} & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArguments} \n \\end{array} \n $$\n5. If the next token is a closing parenthesis but the operator on top of the operator stack is not an opening parenthesis, \n the operator on top of the operator stack is evaluated. Note that the token stack is not changed in this case.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{')'} ] & \\wedge \\\n \\texttt{mOperatorsRest} + [\\texttt{op}] & \\wedge \\\n \\texttt{op} \\not= \\texttt{'('} & \\wedge \\\n \\texttt{mArguments} = \\texttt{mArgumentsRest} + [\\texttt{lhs}, \\texttt{rhs}] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperatorsRest} & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokens} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArgumentsRest} + [\\texttt{lhs} \\;\\texttt{op}\\; \\texttt{rhs}]\n \\end{array} \n $$\n Here, the expression $\\texttt{lhs} \\;\\texttt{op}\\; \\texttt{rhs}$ denotes evaluating the operator $\\texttt{op}$ with the arguments\n $\\texttt{lhs}$ and $\\texttt{rhs}$.\n6. 
If the token on top of the operator stack is an opening parenthesis, then the operator on top of the token stack\n is pushed onto the operator stack.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [\\texttt{op}] & \\wedge \\\n \\texttt{op} \\not= \\texttt{')'} & \\wedge \\\n \\texttt{mOperators} = \\texttt{mOperatorsRest} + [\\texttt{'('}] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperator}' = \\texttt{mOperator} + [\\texttt{op}] & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArguments}\n \\end{array} \n $$\nIn the remaining cases neither the token on top of the token stack nor the operator on top of the operator stack can be\n a parenthesis. The following rules will implicitly assume that this is the case.\n7. If the operator on top of the operator stack needs to be evaluated before the operator on top of the token stack,\n the operator on top of the operator stack is evaluated.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [o_2] & \\wedge \\\n \\texttt{mOperatorsRest} + [o_1] & \\wedge \\\n \\texttt{evalBefore}(o_1, o_2) & \\wedge \\ \n \\texttt{mArguments} = \\texttt{mArgumentsRest} + [\\texttt{lhs}, \\texttt{rhs}] & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperatorRest} & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokens} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArgumentsRest} + [\\texttt{lhs} \\;o_1\\; \\texttt{rhs}]\n \\end{array} \n $$\n8. Otherwise, the operator on top of the token stack is pushed onto the operator stack.\n $$\\begin{array}{lc}\n \\texttt{mTokens} = \\texttt{mTokensRest} + [o_2] & \\wedge \\\n \\texttt{mOperators} = \\texttt{mOperatorsRest} + [o_1] & \\wedge \\\n \\neg \\texttt{evalBefore}(o_1, o_2) & \\Rightarrow \\[0.2cm]\n \\texttt{mOperators}' = \\texttt{mOperators} + [o_2] & \\wedge \\\n \\texttt{mTokens}' = \\texttt{mTokensRest} & \\wedge \\\n \\texttt{mArguments}' = \\texttt{mArguments}\n \\end{array} \n $$\nIn every step of the evaluation phase we \n- remove one operator from the operator stack, \n- remove its arguments from the argument stack, \n- evaluate the operator, and \n- push the result back on the argument stack.", "def evaluate(self):\n \"your code here\"\n return self.mArguments.top()\n \nCalculator.evaluate = evaluate\ndel evaluate", "The method $\\texttt{popAndEvaluate}(\\texttt{self})$ removes an operator from the operator stack and removes the corresponding arguments from the \narguments stack. It evaluates the operator and pushes the result on the argument stack.", "def popAndEvaluate(self):\n \"your code here\"\n \nCalculator.popAndEvaluate = popAndEvaluate\ndel popAndEvaluate", "The function testEvaluateExpr takes three arguments:\n- s is a string that can be interpreted as an arithmetic expression.\n This string might contain the variable $x$. In this arithmetic expression,\n unary function symbols need not be be followed by parenthesis.\n- t is a string that contains an arithmetic expression. The syntax\n of this expression has to follow the rules of the programming language\n python.\n- x is a floating point value. This value is supposed to be the value of\n the variable $x$ that might occur in s and t. \nThe function evaluates susing the class Calculator, while t is evaluated \nusing the predefined function eval. 
If the results differ, an exception is raised.", "def testEvaluateExpr(s, t, x):\n TL = tokenize(s)\n C = Calculator(TL, x)\n r1 = C.evaluate()\n r2 = eval(t, { 'math': math }, { 'x': x })\n assert r1 == r2, f'{r1} != {r2}'\n\ntestEvaluateExpr('sin cos x', 'math.sin(math.cos(x))', 0)\n\ntestEvaluateExpr('sin x**2', 'math.sin(math.pi)**2', math.pi)\n\ntestEvaluateExpr('log e ** x + 1 * 2 - 3', 'math.log(math.e**x) + 1 * 2 - 3 ', 1)", "The function computeZero takes three arguments:\n* s is a string that can be interpreted as a function $f$ of the variable x.\n For example, s could be equal to 'x * x - 2.0'.\n* left and right are floating point numbers.\nIt is required that the function $f$ changes signs in the interval $[\\texttt{left}, \\texttt{right}]$.\nThen computeZero returns a floating point value $x_0$ such that $f(x_0) \\approx 0$.", "def computeZero(s, left, right):\n TL = tokenize(s)\n\n def f(x):\n c = Calculator(TL, x)\n return c.evaluate()\n\n return findZero(f, left, right, 54);", "The cell below should output the number 0.7390851332151607.", "computeZero('log exp x - cos(sqrt(x**2))', 0, 1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csdms/coupling
docs/demos/cem_and_waves.ipynb
mit
[ "<img src=\"../_static/pymt-logo-header-text.png\">\nCoastline Evolution Model + Waves\n\nLink to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/cem_and_waves.ipynb\nInstall command: $ conda install notebook pymt_cem\n\nThis example explores how to use a BMI implementation to couple the Waves component with the Coastline Evolution Model component.\nLinks\n\nCEM source code: Look at the files that have deltas in their name.\nCEM description on CSDMS: Detailed information on the CEM model.\n\nInteracting with the Coastline Evolution Model BMI using Python\nSome magic that allows us to view images within the notebook.", "%matplotlib inline\nimport numpy as np", "Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!", "from pymt import models\n\ncem, waves = models.Cem(), models.Waves()", "Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.", "waves.get_output_var_names()\n\ncem.get_input_var_names()", "We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,\n\"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity\"\n\nQuite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one).\nOK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.", "args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)\ncem.initialize(*args)\nargs = waves.setup()\nwaves.initialize(*args)", "Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.", "def plot_coast(spacing, z):\n import matplotlib.pyplot as plt\n \n xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3\n ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3\n\n plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')\n plt.colorbar().ax.set_ylabel('Water Depth (m)')\n plt.xlabel('Along shore (km)')\n plt.ylabel('Cross shore (km)')", "It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.", "grid_id = cem.get_var_grid('sea_water__depth')\nspacing = cem.get_grid_spacing(grid_id)\nshape = cem.get_grid_shape(grid_id)\nz = np.empty(shape)\ncem.get_value('sea_water__depth', out=z)\nplot_coast(spacing, z)", "Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.", "qs = np.zeros_like(z)\nqs[0, 100] = 750", "The CSDMS Standard Name for this variable is:\n\"land_surface_water_sediment~bedload__mass_flow_rate\"\n\nYou can get an idea of the units based on the quantity part of the name. \"mass_flow_rate\" indicates mass per time. 
You can double-check this with the BMI method function get_var_units.", "cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')\n\nwaves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .3)\nwaves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)\ncem.set_value(\"sea_surface_water_wave__height\", 2.)\ncem.set_value(\"sea_surface_water_wave__period\", 7.)", "Set the bedload flux and run the model.", "for time in range(3000):\n waves.update()\n angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')\n\n cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update()\n\ncem.get_value('sea_water__depth', out=z)\n\nplot_coast(spacing, z)", "Let's add another sediment source with a different flux and update the model.", "qs[0, 150] = 500\nfor time in range(3750):\n waves.update()\n angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')\n cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)\n\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update()\n \ncem.get_value('sea_water__depth', out=z)\n\nplot_coast(spacing, z)", "Here we shut off the sediment supply completely.", "qs.fill(0.)\nfor time in range(4000):\n waves.update()\n angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')\n cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)\n\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update()\n \ncem.get_value('sea_water__depth', out=z)\n\nplot_coast(spacing, z)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tabakg/potapov_interpolation
Dispersion_relation_chi_2_voxels_approach.ipynb
gpl-3.0
[ "Solving phase and frequency matching conditions for $\\chi^{(2)}$ with voxels of increasing resolution.\nApplying voxels of increasing resolution to $\\chi^{(3)}$.\nRefractive index from:\nhttp://refractiveindex.info/?shelf=main&book=LiNbO3&page=Zelmon-o\nsee also the notebook Dispersion_relation_chi_2_and_interpolations", "import sympy as sp\nimport numpy as np\nimport scipy.constants\nfrom sympy.utilities.autowrap import ufuncify\nimport time\nimport itertools\n#from scipy import interpolate\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import init_printing\ninit_printing() \nimport random\n\ndef plot_arr(arr):\n fig = plt.figure(figsize=(15,15))\n ax = fig.add_subplot(111)\n cax = ax.matshow(np.asmatrix(arr), interpolation='nearest')\n fig.colorbar(cax)\n plt.show()\n\n## from https://www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods\n\ndef timeit(method):\n def timed(*args, **kw):\n ts = time.time()\n result = method(*args, **kw)\n te = time.time()\n print '%r %2.2f sec' % \\\n (method.__name__, te-ts)\n return result\n return timed\n\nlambd,nu,nu1,nu2,nu3,nu4 = sp.symbols('lambda nu nu_1 nu_2 nu_3 nu_4')\nl2 = lambd **2\n\ndef n_symb(pol='o'):\n s = 1.\n if pol == 'o':\n s += 2.6734 * l2 / (l2 - 0.01764)\n s += 1.2290 * l2 / (l2 - 0.05914)\n s += 12.614 * l2 / (l2 - 474.6)\n else:\n s += 2.9804 * l2 / (l2 - 0.02047)\n s += 0.5981 * l2 / (l2 - 0.0666)\n s += 8.9543 * l2 / (l2 - 416.08)\n return sp.sqrt(s)\n\ndef k_symb(symbol=nu,pol='o'):\n '''k is accurate for nu inputs between 6-60.'''\n return ((n_symb(pol=pol) * symbol )\n .subs(lambd,scipy.constants.c / (symbol*1e7))) ## / scipy.constants.c\n\nexpressions = [k_symb(nu1,pol='o'), k_symb(nu2,pol='o'), k_symb(nu3,pol='e')]\n\ndispersion_difference_function = sum(expressions)\ndispersion_difference_function = dispersion_difference_function.subs(nu3,-nu1-nu2)\n\ndispersion_difference_function\n\nk_of_nu1_nu2 = ufuncify([nu1,nu2],dispersion_difference_function)", "Find maximum derivative of dispersion_difference_function over a range. This could be used as a bound for $\\epsilon$ to guarantee results.", "def find_max_der(expression,symbol,input_range):\n expr_der = sp.diff(expression,symbol)\n expr_def_func = ufuncify([symbol],expr_der)\n return max(abs(expr_def_func(input_range)))\n\n## Apply the triangle inequality over a range of nus\n\nnus = np.asarray([6.+ i*5e-2 for i in range(1+int(1e3))])\nmax_derivative = sum([find_max_der(exp,om,nus) for om,exp in zip([nu1,nu2,nu3],expressions)])\n\nmax_derivative", "Methods for systematic search over ranges\nDefinitions:\nbase -- The number base to use, i.e. the factor to increase the grid resolution at each step.\nstarting_i -- index of starting step. 0 means we use a grid of size base by base.\nmax_i -- final index.\neps -- desired resolution at step max_i\nDescription\nTo look for solutions more efficiently, we can recursively enhance the resolution of the grid in which we are looking. At each step, decrease the cutoff eps_current by some factor (for now let's make it base). For the set of voxels in each step that are close enough to a solution of the equation, increase the resolution by a factor of base and examine the resulting smaller voxels. 
Continue until the last step.", "eps = 0.00002\nstarting_i = 0\nmax_i = 4\nbase = 10\n\nmin_value = 6.\nmax_value = 20.\n\n@timeit\ndef setup_ranges(max_i,base):\n ranges= {}\n for i in range(max_i+1):\n ranges[i] = np.linspace(min_value,max_value,1+pow(base,i+1))\n return ranges", "Note: How to obtain the index of $\\nu_3$.", "i = 2\n\n1+pow(base,i+1)\n\nnp.linspace(min_value,max_value,1+pow(base,i+1))\n\nspacing = (max_value-min_value)/ pow(base,i+1)\nspacing\n\nnum_indices_from_zero = min_value / spacing\nnum_indices_from_zero\n\nranges[i]\n\nsample_index = solution_containing_voxels[2].keys()[1000]\n\nsample_index\n\nranges[2][(sum(sample_index)+int(num_indices_from_zero))]", "Main methods used", "@timeit\ndef initial_voxels(max_i,base,starting_i,eps):\n solution_containing_voxels = {}\n eps_current = eps * pow(base,max_i-starting_i)\n solution_containing_voxels[starting_i] = {}\n \n for i1,om1 in enumerate(ranges[starting_i]):\n for i2,om2 in enumerate(ranges[starting_i]):\n err = k_of_nu1_nu2(om1,om2)\n if abs(err) < eps_current:\n solution_containing_voxels[starting_i][i1,i2] = err\n return solution_containing_voxels\n\n@timeit\ndef add_high_res_voxels(max_i,base,starting_i,eps,solution_containing_voxels):\n for i in range(starting_i+1,max_i+1):\n eps_current = eps * pow(base,max_i-i)\n solution_containing_voxels[i] = {}\n for (i1,i2) in solution_containing_voxels[i-1]:\n step_size = int(base/2)\n max_length = pow(base,i+1)\n for i1_new in range(max(0,i1*base-step_size),min(max_length,i1*base+step_size+1)):\n for i2_new in range(max(0,i2*base-step_size),min(max_length,i2*base+step_size+1)):\n err = k_of_nu1_nu2(ranges[i][i1_new],ranges[i][i2_new])\n if abs(err) < eps_current:\n solution_containing_voxels[i][i1_new,i2_new] = err \n\n@timeit\ndef plot_voxels(solution_containing_voxels,i):\n voxels = np.zeros((1+pow(base,i+1),1+pow(base,i+1)))\n for (i1,i2) in solution_containing_voxels[i]:\n voxels[i1,i2] = 1\n plot_arr(voxels)\n\ndef voxel_solutions(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps):\n solution_containing_voxels = initial_voxels(ranges,k_of_nu1_nu2,max_i,\n base,starting_i,eps)\n add_high_res_voxels(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps,\n solution_containing_voxels)\n return solution_containing_voxels\n\nranges = setup_ranges(max_i,base)\n\nsolution_containing_voxels = initial_voxels(max_i,base,starting_i,eps)\n\nadd_high_res_voxels(max_i,base,starting_i,eps,solution_containing_voxels)\n\nplot_voxels(solution_containing_voxels,0)\n\nplot_voxels(solution_containing_voxels,1)\n\nplot_voxels(solution_containing_voxels,2)\n\n## Number of solutions found for each resolution:\n\nfor i in range(0,5):\n print len(solution_containing_voxels[i])\n\n## Number of solutions found for each resolution:\nplt.title('Number of voxels per step')\nplt.semilogy([(len(solution_containing_voxels[i])) for i in range(0,max_i)])", "Different bases comparison\nBase 10", "eps = 0.006\nstarting_i = 0\nmax_i = 2\nbase = 10\n\n## maximum grid length:\n1+pow(base,max_i+1)\n\nranges = setup_ranges(max_i,base)\nsolution_containing_voxels = initial_voxels(max_i,base,starting_i,eps)\nadd_high_res_voxels(max_i,base,starting_i,eps,solution_containing_voxels)\n\n## Number of solutions found for each resolution:\n\nfor i in range(0,max_i+1):\n print len(solution_containing_voxels[i])\n\n## Number of solutions found for each resolution:\nplt.title('Number of voxels per step')\nplt.semilogy([(len(solution_containing_voxels[i])) for i in 
range(0,max_i+1)])\n\nplot_voxels(solution_containing_voxels,max_i)", "Base 2", "eps = 0.006\nstarting_i = 0\nmax_i = 9\nbase = 2\n\n## maximum grid length:\n1+pow(base,max_i+1)\n\nranges = setup_ranges(max_i,base)\nsolution_containing_voxels = initial_voxels(max_i,base,starting_i,eps)\nadd_high_res_voxels(max_i,base,starting_i,eps,solution_containing_voxels)\n\n## Number of solutions found for each resolution:\n\nfor i in range(0,max_i+1):\n print len(solution_containing_voxels[i])\n\n## Number of solutions found for each resolution:\nplt.title('Number of voxels per step')\nplt.semilogy([(len(solution_containing_voxels[i])) for i in range(0,max_i+1)])\n\nplot_voxels(solution_containing_voxels,max_i)", "Discussion\nThe number of solution voxels increases by a factor of base at each step. This happens because the function being optimize is close to linear near the solutions and because we decrease eps_current by a factor of base at each step. As a result, the total number of voxels increases by a factor of base**2 at each step, but the thickness of the solution voxel surface decreases by a factor of base.\nThe cost of the algorithm is dominated by the last step. This is because the number of voxels increases approximately by a factor of base at each step, and the computational cost at each step is the number of voxels from the previous step multiplied by base**2. As such, the algorithm runtime is essentially the number of solution points. Notice in the above experiments the runtime was similar for different bases used. \nA more careful analysis assuming a large number of points and the scaling described above gives a geometric series for the runtime, where the sum starts at the last step ($b$ stands for base and $p$ stands for number of points):\n\\begin{align}\nb^2(\\frac{p}{b} + \\frac{p}{b^2} + ...)\n=\np\\left( b + 1 + \\frac{1}{b} + ...\\right)\n\\approx \\frac{pb^2}{b-1}.\n\\end{align}\nThe other factor contributing to the runtime is breaking away from the scaling law in the above discussion. \nUsing the same technique for $\\chi^{(3)}$.\nLet's extend the above search technique to using four-wave mixing.\nThe problems here will be larger, so runtime may be more important. For this reason, I tested different values of epsilon at each stage. Using a smaller epsilon will improve the runtime, but may miss correct solutions. Compare the number of solutions and runtime to the methods used in the notebook Dispersion_relation_two_approaches. The complexity is essentially linear in the number of solutions, but here the constant factor may be large if epsilon is not chosen carefully at each step. For this reason one may prefer to use the other techniques for $\\chi^{(3)}$ problems. 
The number of solutions in the method below converges to the number of solutions using the other two methods.", "eps = 2e-4\nstarting_i = 0\nmax_i = 1\nbase = 10\n\nrelative_scalings = [4,4,10]\n\nphi1_min = 30.\nphi1_max = 34.\nranges1 = {}\nfor i in range(0,2):\n ranges1[i] = np.linspace(phi1_min,phi1_max,relative_scalings[0]*pow(base,i+1)+1)\n\nphi2_min = -13\nphi2_max = -9\nranges2 = {}\nfor i in range(0,2):\n ranges2[i] = np.linspace(phi2_min,phi2_max,relative_scalings[1]*pow(base,i+1)+1)\n\nnu3_min = -26.\nnu3_max = -16.\nranges3 = {}\nfor i in range(0,2):\n ranges3[i] = np.linspace(nu3_min,nu3_max,relative_scalings[2]*pow(base,i+1)+1)\n\nprint len(ranges1[1]),len(ranges2[1]),len(ranges3[1])\n\nphi1, phi2 = sp.symbols('phi_1 phi_2')\n\nex1 = (k_symb(nu1,pol='e')+k_symb(nu2,pol='e')).expand().subs({nu1:(phi1 + phi2)/2, nu2: (phi1-phi2)/2})\n\nex2 = -(k_symb(nu3,pol='e')+k_symb(nu4,pol='e')).expand().subs(nu4,-phi1-nu3)\n\nf_phi12_nu3 =ufuncify([phi1,phi2,nu3], ex1-ex2)\n\n@timeit\ndef initial_voxels_4wv(max_i,base,starting_i,eps,eps_factor = None):\n\n if eps_factor is None:\n eps_factor = pow(base,max_i-starting_i)\n eps_current = eps * eps_factor\n \n solution_containing_voxels = {}\n solution_containing_voxels[starting_i] = {}\n \n for i1,om1 in enumerate(ranges1[starting_i]):\n for i2,om2 in enumerate(ranges2[starting_i]):\n for i3,om3 in enumerate(ranges3[starting_i]):\n err = f_phi12_nu3(om1,om2,om3)\n if abs(err) < eps_current:\n solution_containing_voxels[starting_i][i1,i2,i3] = err\n return solution_containing_voxels\n\n@timeit\ndef add_high_res_voxels_4wv(max_i,base,starting_i,eps,solution_containing_voxels):\n for i in range(starting_i+1,max_i+1):\n eps_current = eps * pow(base,max_i-i)\n solution_containing_voxels[i] = {}\n for (i1,i2,i3) in solution_containing_voxels[i-1]:\n step_size = int(base/2)\n max_length = pow(base,i+1)\n for i1_new in range(max(0,i1*base-step_size),min(relative_scalings[0]*max_length,i1*base+step_size+1)):\n for i2_new in range(max(0,i2*base-step_size),min(relative_scalings[1]*max_length,i2*base+step_size+1)):\n for i3_new in range(max(0,i3*base-step_size),min(relative_scalings[2]*max_length,i3*base+step_size+1)):\n err = f_phi12_nu3(ranges1[i][i1_new],ranges2[i][i2_new],ranges3[i][i3_new])\n if abs(err) < eps_current:\n solution_containing_voxels[i][i1_new,i2_new,i3_new] = err \n\neps_factors = [1.5,2.,2.5,3.,3.5,4.]\nnum_found = {}\n\nfor eps_factor in eps_factors:\n solution_containing_voxels_4wv = initial_voxels_4wv(max_i,base,starting_i,eps,eps_factor = eps_factor)\n print 'big voxels: ', len(solution_containing_voxels_4wv[0].keys())\n add_high_res_voxels_4wv(max_i,base,starting_i,eps,solution_containing_voxels_4wv)\n num_found[eps_factor] = len(solution_containing_voxels_4wv[1].keys())\n print 'little voxels: ', num_found[eps_factor]\n\nplt.title('number of solutions found as a function of the epsilon multiplier for the voxel method')\nplt.plot(eps_factors, [num_found[eps] for eps in eps_factors] )", "Finding solutions in the voxels:\nIn general the nus to be considered do not lie on a grid. 
For this reason it is necessary to find which nus lie in each voxel.\nSetup for the experiment", "eps = 0.006\nstarting_i = 0\nmax_i = 2\nbase = 10\n\nranges = setup_ranges(max_i,base)\nsolution_containing_voxels = initial_voxels(max_i,base,starting_i,eps)\nadd_high_res_voxels(max_i,base,starting_i,eps,solution_containing_voxels)\n\ni = 2 ## where to draw points from.\n\nscale = 0.1 ## scale on random noise to add to points\nDelta = ranges[i][1] - ranges[i][0] ## Delta here was made with ranges[0] spacing\nrange_min = ranges[i][0]\nranges_perturbed = [num+random.random()*scale for num in ranges[2]] ## make \n\nvalues = [ int(round( (el - range_min) / Delta)) for el in ranges_perturbed]\n\ndef make_dict_values_to_lists_of_inputs(values,inputs):\n D = {}\n for k, v in zip(values,inputs):\n D.setdefault(k, []).append(v)\n return D\n\nD = make_dict_values_to_lists_of_inputs(values,ranges_perturbed)\n\nsolution_nus = []\nfor indices in solution_containing_voxels[i].keys(): \n if all([ind in D for ind in indices]): ## check all indices are in D,\n for it in itertools.product(*[D[ind] for ind in indices]): ## get all nus in the voxel.\n solution_nus.append(it)\n\nplt.figure(figsize=(13,13))\nplt.title('perturbed grid points still in the solution voxels')\nplt.scatter([el[0] for el in solution_nus],[el[1] for el in solution_nus])", "Development for method to be used in the package", "def setup_ranges(max_i,base,min_value = 6.,max_value = 11.):\n ranges= {}\n for i in range(max_i+1):\n ranges[i] = np.linspace(min_value,max_value,1+pow(base,i+1))\n return ranges\n\n@timeit\ndef initial_voxels(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps):\n solution_containing_voxels = {}\n eps_current = eps * pow(base,max_i-starting_i)\n solution_containing_voxels[starting_i] = {}\n\n for i1,om1 in enumerate(ranges[starting_i]):\n for i2,om2 in enumerate(ranges[starting_i]):\n err = k_of_nu1_nu2(om1,om2)\n if abs(err) < eps_current:\n solution_containing_voxels[starting_i][i1,i2] = err\n return solution_containing_voxels\n\n@timeit\ndef add_high_res_voxels(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps,solution_containing_voxels):\n for i in range(starting_i+1,max_i+1):\n eps_current = eps * pow(base,max_i-i)\n solution_containing_voxels[i] = {}\n for (i1,i2) in solution_containing_voxels[i-1]:\n step_size = int(base/2)\n max_length = pow(base,i+1)\n for i1_new in range(max(0,i1*base-step_size),min(max_length,i1*base+step_size+1)):\n for i2_new in range(max(0,i2*base-step_size),min(max_length,i2*base+step_size+1)):\n err = k_of_nu1_nu2(ranges[i][i1_new],ranges[i][i2_new])\n if abs(err) < eps_current:\n solution_containing_voxels[i][i1_new,i2_new] = err\n\n@timeit\ndef plot_voxels(solution_containing_voxels,i):\n voxels = np.zeros((1+pow(base,i+1),1+pow(base,i+1)))\n for (i1,i2) in solution_containing_voxels[i]:\n voxels[i1,i2] = 1\n plot_arr(voxels)\n\ndef voxel_solutions(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps):\n solution_containing_voxels = initial_voxels(ranges,k_of_nu1_nu2,max_i,\n base,starting_i,eps)\n add_high_res_voxels(ranges,k_of_nu1_nu2,max_i,base,starting_i,eps,\n solution_containing_voxels)\n return solution_containing_voxels\n\ndef generate_k_func(pols=(1,1,-1),n_symb = None):\n\n lambd,nu,nu1,nu2,nu3,nu4 = sp.symbols(\n 'lambda nu nu_1 nu_2 nu_3 nu_4')\n l2 = lambd **2\n\n if n_symb is None:\n def n_symb(pol=1):\n '''Valid for lambda between 0.5 and 5. 
(units are microns)'''\n s = 1.\n if pol == 1:\n s += 2.6734 * l2 / (l2 - 0.01764)\n s += 1.2290 * l2 / (l2 - 0.05914)\n s += 12.614 * l2 / (l2 - 474.6)\n else:\n s += 2.9804 * l2 / (l2 - 0.02047)\n s += 0.5981 * l2 / (l2 - 0.0666)\n s += 8.9543 * l2 / (l2 - 416.08)\n return sp.sqrt(s)\n\n def k_symb(symbol=nu,pol=1):\n '''k is accurate for nu inputs between 6-60 (units are 1e13 Hz).'''\n return ((n_symb(pol=pol) * symbol )\n .subs(lambd,scipy.constants.c / (symbol*1e7)))\n\n expressions = [k_symb(nu1,pols[0]),\n k_symb(nu2,pols[1]),\n k_symb(nu3,pols[2])]\n dispersion_difference_function = sum(expressions)\n dispersion_difference_function = dispersion_difference_function.subs(\n nu3,-nu1-nu2)\n k_of_nu1_nu2 = ufuncify([nu1,nu2],\n dispersion_difference_function)\n return k_of_nu1_nu2\n\npols = (1,1,-1)\nk_of_nu1_nu2 = generate_k_func(pols)\n\neps = 0.006\nstarting_i = 0\nmax_i = 2\nbase = 10\n\nmin_value = 6.\nmax_value = 20.\nranges = setup_ranges(max_i,base,min_value,max_value)\n\n\npos_nus_lst = np.random.uniform(min_value,max_value,1000) ## 100 random values\n\nDelta = ranges[max_i][1] - ranges[max_i][0] ## spacing in grid used\n\n## get index values\nvalues = [ int(round( (freq - min_value) / Delta)) for freq in pos_nus_lst]\n\n## make a dict to remember which frequencies belong in which grid voxel.\ngrid_indices_to_unrounded = make_dict_values_to_lists_of_inputs(values,pos_nus_lst)\ngrid_indices_to_ham_index = make_dict_values_to_lists_of_inputs(values,range(len(pos_nus_lst)))\n\nsolution_containing_voxels = voxel_solutions(ranges,k_of_nu1_nu2,\n max_i,base,starting_i,eps)\n\n## Let's figure out which indices we can expect for nu3\nspacing = (max_value-min_value)/ pow(base,max_i+1)\nnum_indices_from_zero = min_value / spacing ## float, round up or down\n\nsolutions_nu1_and_nu2 = solution_containing_voxels[max_i].keys()\n\nsolution_indices = []\nfor indices in solutions_nu1_and_nu2:\n for how_to_round_last_index in range(2):\n last_index = (sum(indices)\n + int(num_indices_from_zero)\n + how_to_round_last_index)\n if last_index < 0 or last_index >= len(ranges[max_i]):\n print \"breaking!\"\n break\n current_grid_indices = (indices[0],indices[1],last_index)\n if all([ind in grid_indices_to_ham_index for ind in current_grid_indices]):\n for it in itertools.product(*[grid_indices_to_ham_index[ind] for ind in current_grid_indices]):\n solution_indices.append(it)\n\nlen(solutions_nu1_and_nu2)\n\nlen(solution_indices)\n\nsample_indices = solution_indices[random.randint(0,len(solution_indices)-1)]\n\nsample_indices\n\npos_nus_lst[sample_indices[0]], pos_nus_lst[sample_indices[1]], pos_nus_lst[sample_indices[2]] \n\npos_nus_lst[sample_indices[0]]+pos_nus_lst[sample_indices[1]] - pos_nus_lst[sample_indices[2]] \n\nnp.zeros((0,0))\n\ntuple(map(lambda z: int(np.sign(z)),(4.5,5.5,-3.4)))\n\n1e-2\n\nimport math\ndef make_dict_values_to_lists_of_inputs(values,inputs):\n '''\n Make a dictionary mapping value to lists of corresponding inputs.\n\n Args:\n values (list of floats):\n Values in a list, corresponding to the inputs.\n inputs (list of floats):\n Inputs in a list.\n\n Returns:\n D (dict):\n dictionary mapping value to lists of corresponding inputs.\n '''\n D = {}\n for k, v in zip(values,inputs):\n if not math.isnan(k):\n D.setdefault(k, []).append(v)\n return D\n\nmake_dict_values_to_lists_of_inputs([float('NaN'),3,4,5],[1,1,2,3])\n\nmath.isnan(float('NaN'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yangliuy/yangliuy.github.io
markdown_generator/talks.ipynb
mit
[ "Talks markdown generator for academicpages\nTakes a TSV of talks with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in talks.py. Run either from the markdown_generator folder after replacing talks.tsv with one containing your data.\nTODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.", "import pandas as pd\nimport os", "Data format\nThe TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.\n\nFields that cannot be blank: title, url_slug, date. All else can be blank. type defaults to \"Talk\" \ndate must be formatted as YYYY-MM-DD.\nurl_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. \nThe .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]\nThe combination of url_slug and date must be unique, as it will be the basis for your filenames\n\n\n\nThis is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).", "!cat talks.tsv", "Import TSV\nPandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \\t.\nI found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.", "talks = pd.read_csv(\"talks.tsv\", sep=\"\\t\", header=0)\ntalks", "Escape special characters\nYAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.", "html_escape_table = {\n \"&\": \"&amp;\",\n '\"': \"&quot;\",\n \"'\": \"&apos;\"\n }\n\ndef html_escape(text):\n if type(text) is str:\n return \"\".join(html_escape_table.get(c,c) for c in text)\n else:\n return \"False\"", "Creating the markdown files\nThis is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each type. 
It does the YAML metadata first, then does the description for the individual page.", "loc_dict = {}\n\nfor row, item in talks.iterrows():\n \n md_filename = str(item.date) + \"-\" + item.url_slug + \".md\"\n html_filename = str(item.date) + \"-\" + item.url_slug \n year = item.date[:4]\n \n md = \"---\\ntitle: \\\"\" + item.title + '\"\\n'\n md += \"collection: talks\" + \"\\n\"\n \n if len(str(item.type)) > 3:\n md += 'type: \"' + item.type + '\"\\n'\n else:\n md += 'type: \"Talk\"\\n'\n \n md += \"permalink: /talks/\" + html_filename + \"\\n\"\n \n if len(str(item.venue)) > 3:\n md += 'venue: \"' + item.venue + '\"\\n'\n \n if len(str(item.location)) > 3:\n md += \"date: \" + str(item.date) + \"\\n\"\n \n if len(str(item.location)) > 3:\n md += 'location: \"' + str(item.location) + '\"\\n'\n \n md += \"---\\n\"\n \n \n if len(str(item.talk_url)) > 3:\n md += \"\\n[More information here](\" + item.talk_url + \")\\n\" \n \n \n if len(str(item.description)) > 3:\n md += \"\\n\" + html_escape(item.description) + \"\\n\"\n \n \n md_filename = os.path.basename(md_filename)\n #print(md)\n \n with open(\"../_talks/\" + md_filename, 'w') as f:\n f.write(md)", "These files are in the talks directory, one directory below where we're working from.", "!ls ../_talks\n\n!cat ../_talks/2013-03-01-tutorial-1.md" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AstroHackWeek/AstroHackWeek2017
day1/notebooks/demo-KNN.ipynb
mit
[ "K-nearest neighbors", "import numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('notebook.mplstyle')\n%matplotlib inline\nfrom scipy.stats import mode", "Let's imagine we measure 2 quantities, $x_1$ and $x_2$ for some objects, and we know the classes that these objects belong to, e.g., \"star\", 0, or \"galaxy\", 1 (maybe we classified these objects by hand, or knew through some other means). We now observe ($x_1$, $x_2$) for some new object and want to know whether it belongs in class 0 or 1.\nWe'll first generate some fake data with known classes:", "a = np.random.multivariate_normal([1., 0.5], \n [[4., 0.],\n [0., 0.25]], size=512)\n\nb = np.random.multivariate_normal([10., 8.], \n [[1., 0.],\n [0., 25]], size=1024)\n\nX = np.vstack((a,b))\ny = np.concatenate((np.zeros(len(a)), \n np.ones(len(b))))\n\nX.shape, y.shape\n\nplt.figure(figsize=(6,6))\n\nplt.scatter(X[:,0], X[:,1], c=y, cmap='RdBu', marker='.', alpha=0.4)\n\nplt.xlim(-10, 20)\nplt.ylim(-10, 20)\n\nplt.title('Training data')\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')\n\nplt.tight_layout()", "We now observe a new point, and would like to know which class it belongs to:", "np.random.seed(42)\nnew_pt = np.random.uniform(-10, 20, size=2)\n\nplt.figure(figsize=(6,6))\n\nplt.scatter(X[:,0], X[:,1], c=y, cmap='RdBu', marker='.', alpha=0.5, linewidth=0)\nplt.scatter(new_pt[0], new_pt[1], marker='+', color='g', s=100, linewidth=3)\n\nplt.xlim(-10, 20)\nplt.ylim(-10, 20)\n\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')\n\nplt.tight_layout()", "KNN works by predicting the class of a new point based on the classes of the K training data points closest to the new point. The two things that can be customized about this method are K, the number of points to use, and the distance metric used to compute the distances between the new point and the training data. If the dimensions in your data are measured with different units or with very different measurement uncertainties, you might need to be careful with the way you choose this metric. For simplicity, we'll start by fixing K=16 and use a Euclidean distance to see how this works in practice:", "K = 16\n\ndef distance(pts1, pts2):\n pts1 = np.atleast_2d(pts1)\n pts2 = np.atleast_2d(pts2)\n return np.sqrt( (pts1[:,0]-pts2[:,0])**2 + (pts1[:,1]-pts2[:,1])**2)\n\n# compute the distance between all training data points and the new point\ndists = distance(X, new_pt)\n\n# get the classes (from the training data) of the K nearest points\nnearest_classes = y[np.argsort(dists)[:K]]\n\nnearest_classes", "All of the closest points are from class 1, so we would classify the new point as class=1. If there is a mixture of possible classes, take the class with more neighbors. If it's a tie, choose a class at random. That's it! 
Let's see how to use the KNN classifier in scikit-learn:", "from sklearn.neighbors import KNeighborsClassifier\n\nclf = KNeighborsClassifier(n_neighbors=16)\nclf.fit(X, y)\n\nclf.predict(new_pt.reshape(1, -1)) # input has to be 2D", "Let's visualize the decision boundary of this classifier by evaluating the predicted class for a grid of trial data:", "grid_1d = np.linspace(-10, 20, 256)\ngrid_x1, grid_x2 = np.meshgrid(grid_1d, grid_1d)\ngrid = np.stack((grid_x1.ravel(), grid_x2.ravel()), axis=1)\n\ny_grid = clf.predict(grid)\n\nplt.figure(figsize=(6,6))\n\nplt.pcolormesh(grid_x1, grid_x2, y_grid.reshape(grid_x1.shape), \n cmap='Set3', alpha=1.)\n\nplt.scatter(X[:,0], X[:,1], marker='.', alpha=0.65, linewidth=0)\n\nplt.xlim(-10, 20)\nplt.ylim(-10, 20)\n\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')\n\nplt.tight_layout()", "KNN is very simple, but is very fast and is therefore useful in problems with large or wide datasets.\n\nLet's now look at a more complicated example where the training data classes overlap significantly:", "a = np.random.multivariate_normal([6., 0.5], \n [[8., 0.],\n [0., 0.25]], size=512)\n\nb = np.random.multivariate_normal([10., 4.], \n [[2., 0.],\n [0., 8]], size=1024)\n\nX2 = np.vstack((a,b))\ny2 = np.concatenate((np.zeros(len(a)), \n np.ones(len(b))))\n\nplt.figure(figsize=(6,6))\n\nplt.scatter(X2[:,0], X2[:,1], c=y2, cmap='RdBu', marker='.', alpha=0.4)\n\nplt.xlim(-10, 20)\nplt.ylim(-10, 20)\n\nplt.title('Training data')\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')\n\nplt.tight_layout()", "What does the decision boundary look like in this case, as a function of the number of neighbors, K:", "for K in [4, 16, 64, 256]:\n clf2 = KNeighborsClassifier(n_neighbors=K)\n clf2.fit(X2, y2)\n\n y_grid2 = clf2.predict(grid)\n \n plt.figure(figsize=(6,6))\n\n plt.pcolormesh(grid_x1, grid_x2, y_grid2.reshape(grid_x1.shape), \n cmap='Set3', alpha=1.)\n\n plt.scatter(X2[:,0], X2[:,1], marker='.', alpha=0.65, linewidth=0)\n\n plt.xlim(-10, 20)\n plt.ylim(-10, 20)\n\n plt.xlabel('$x_1$')\n plt.ylabel('$x_2$')\n \n plt.title(\"$K={0}$\".format(K))\n\n plt.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/starthinker
colabs/monthly_budget_mover.ipynb
apache-2.0
[ "DV360 Monthly Budget Mover\nApply the previous month's budget/spend delta to the current month. Aggregate up the budget and spend from the previous month of each category declared then apply the delta of the spend and budget equally to each Line Item under that Category.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. 
Enter DV360 Monthly Budget Mover Recipe Parameters\n\nNo changes can be made in DV360 from the start to the end of this process\nMake sure there is budget information for the current and previous month's IOs in DV360\nMake sure the provided spend report has spend data for every IO in the previous month\nSpend report must contain 'Revenue (Adv Currency)' and 'Insertion Order ID'\nThere are no duplicate IO Ids in the categories outlined below\nThis process must be run during the month of the budget it is updating\nIf you receive a 502 error then you must separate your jobs into two, because there is too much information being pulled in the sdf\nManually run this job\nOnce the job has completed, go to the table for the new sdf and export it to a csv\nTake the new sdf and upload it into DV360\nModify the values below for your use case; this can be done multiple times, then click play.", "FIELDS = {\n 'recipe_timezone':'America/Los_Angeles', # Timezone for report dates.\n 'recipe_name':'', # Table to write to.\n 'auth_write':'service', # Credentials used for writing data.\n 'auth_read':'user', # Credentials used for reading data.\n 'partner_id':'', # The DV360 partner id.\n 'budget_categories':'{}', # A dictionary to show which IO Ids go under which Category. {\"CATEGORY1\":[12345,12345,12345], \"CATEGORY2\":[12345,12345]}\n 'filter_ids':[], # Comma separated list of filter ids for the request.\n 'excluded_ios':'', # A comma separated list of Insertion Order Ids that should be excluded from the budget calculations\n 'version':'5', # The sdf version to be returned.\n 'is_colab':True, # Are you running this in Colab? (This will store the files in Colab instead of Bigquery)\n 'dataset':'', # Dataset that you would like your output tables to be produced in.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute DV360 Monthly Budget Mover\nThis does NOT need to be modified unless you are changing the recipe; just click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dataset':{\n 'description':'Create a dataset where data will be combined and transformed for upload.',\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'dataset':{'field':{'name':'dataset','kind':'string','order':1,'description':'Place where tables will be created in BigQuery.'}}\n }\n },\n {\n 'dbm':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'report':{\n 'timeout':90,\n 'filters':{\n 'FILTER_ADVERTISER':{\n 'values':{'field':{'name':'filter_ids','kind':'integer_list','order':7,'default':'','description':'The comma separated list of Advertiser Ids.'}}\n }\n },\n 'body':{\n 'timezoneCode':{'field':{'name':'recipe_timezone','kind':'timezone','description':'Timezone for report dates.','default':'America/Los_Angeles'}},\n 'metadata':{\n 'title':{'field':{'name':'recipe_name','kind':'string','prefix':'Monthly_Budget_Mover_','order':1,'description':'Name of report in DV360, should be unique.'}},\n 'dataRange':'PREVIOUS_MONTH',\n 'format':'CSV'\n },\n 'params':{\n 'type':'TYPE_GENERAL',\n 'groupBys':[\n 'FILTER_ADVERTISER_CURRENCY',\n 'FILTER_INSERTION_ORDER'\n ],\n 'metrics':[\n 'METRIC_REVENUE_ADVERTISER'\n ]\n }\n }\n },\n 'delete':False\n }\n },\n {\n 'monthly_budget_mover':{\n 'auth':'user',\n 'is_colab':{'field':{'name':'is_colab','kind':'boolean','default':True,'order':7,'description':'Are you running this in Colab? (This will store the files in Colab instead of Bigquery)'}},\n 'report_name':{'field':{'name':'recipe_name','kind':'string','prefix':'Monthly_Budget_Mover_','order':1,'description':'Name of report in DV360, should be unique.'}},\n 'budget_categories':{'field':{'name':'budget_categories','kind':'json','order':3,'default':'{}','description':'A dictionary to show which IO Ids go under which Category. 
{\"CATEGORY1\":[12345,12345,12345], \"CATEGORY2\":[12345,12345]}'}},\n 'excluded_ios':{'field':{'name':'excluded_ios','kind':'integer_list','order':4,'description':'A comma separated list of Inserion Order Ids that should be exluded from the budget calculations'}},\n 'sdf':{\n 'auth':'user',\n 'version':{'field':{'name':'version','kind':'choice','order':6,'default':'5','description':'The sdf version to be returned.','choices':['SDF_VERSION_5','SDF_VERSION_5_1']}},\n 'partner_id':{'field':{'name':'partner_id','kind':'integer','order':1,'description':'The sdf file types.'}},\n 'file_types':'INSERTION_ORDER',\n 'filter_type':'FILTER_TYPE_ADVERTISER_ID',\n 'read':{\n 'filter_ids':{\n 'single_cell':True,\n 'values':{'field':{'name':'filter_ids','kind':'integer_list','order':4,'default':[],'description':'Comma separated list of filter ids for the request.'}}\n }\n },\n 'time_partitioned_table':False,\n 'create_single_day_table':False,\n 'dataset':{'field':{'name':'dataset','kind':'string','order':6,'default':'','description':'Dataset to be written to in BigQuery.'}},\n 'table_suffix':''\n },\n 'out_old_sdf':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':8,'default':'','description':'Dataset that you would like your output tables to be produced in.'}},\n 'table':{'field':{'name':'recipe_name','kind':'string','prefix':'SDF_OLD_','description':'Table to write to.'}},\n 'schema':[\n ],\n 'skip_rows':0,\n 'disposition':'WRITE_TRUNCATE'\n },\n 'file':'/content/old_sdf.csv'\n },\n 'out_new_sdf':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':8,'default':'','description':'Dataset that you would like your output tables to be produced in.'}},\n 'table':{'field':{'name':'recipe_name','kind':'string','prefix':'SDF_NEW_','description':'Table to write to.'}},\n 'schema':[\n ],\n 'skip_rows':0,\n 'disposition':'WRITE_TRUNCATE'\n },\n 'file':'/content/new_sdf.csv'\n },\n 'out_changes':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':8,'default':'','description':'Dataset that you would like your output tables to be produced in.'}},\n 'table':{'field':{'name':'recipe_name','kind':'string','prefix':'SDF_BUDGET_MOVER_LOG_','description':'Table to write to.'}},\n 'schema':[\n ],\n 'skip_rows':0,\n 'disposition':'WRITE_TRUNCATE'\n },\n 'file':'/content/log.csv'\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]