hexsha (stringlengths 40..40) | size (int64 6..14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6..260) | max_stars_repo_name (stringlengths 6..119) | max_stars_repo_head_hexsha (stringlengths 40..41) | max_stars_repo_licenses (list) | max_stars_count (int64 1..191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24..24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24..24 ⌀) | max_issues_repo_path (stringlengths 6..260) | max_issues_repo_name (stringlengths 6..119) | max_issues_repo_head_hexsha (stringlengths 40..41) | max_issues_repo_licenses (list) | max_issues_count (int64 1..67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24..24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24..24 ⌀) | max_forks_repo_path (stringlengths 6..260) | max_forks_repo_name (stringlengths 6..119) | max_forks_repo_head_hexsha (stringlengths 40..41) | max_forks_repo_licenses (list) | max_forks_count (int64 1..105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24..24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24..24 ⌀) | avg_line_length (float64 2..1.04M) | max_line_length (int64 2..11.2M) | alphanum_fraction (float64 0..1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4a598a143b9c4fd81671567f0c66041d4b2c03d3 | 752,883 | ipynb | Jupyter Notebook | notebooks/matplotlib-cookbook.ipynb | victorzhang-mars/msds593 | bb283824cac01ab43df68578f5a187d6a127b839 | ["MIT"] | null | null | null | notebooks/matplotlib-cookbook.ipynb | victorzhang-mars/msds593 | bb283824cac01ab43df68578f5a187d6a127b839 | ["MIT"] | null | null | null | notebooks/matplotlib-cookbook.ipynb | victorzhang-mars/msds593 | bb283824cac01ab43df68578f5a187d6a127b839 | ["MIT"] | null | null | null | 584.990676 | 337,280 | 0.945946 |
[
[
[
"# Basic Matplotlib cookbook\n\nBy [Terence Parr](https://parrt.cs.usfca.edu). If you like visualization in machine learning, check out my stuff at [explained.ai](https://explained.ai).\n\nThis notebook shows you how to generate basic versions of the common plots you'll need.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
]
],
[
[
"# Get some sample data",
"_____no_output_____"
]
],
[
[
"df_cars = pd.read_csv(\"data/cars.csv\")\ndf_cars.head()",
"_____no_output_____"
],
[
"# Get average miles per gallon for each car with the same number of cylinders\navg_mpg = df_cars.groupby('CYL').mean()['MPG']\navg_mpg",
"_____no_output_____"
],
[
"avg_wgt = df_cars.groupby('CYL').mean()['WGT'] # do the same for average weight",
"_____no_output_____"
],
[
"# Get average miles per gallon for each car with the same weight\navg_mpg_per_wgt = df_cars.groupby('WGT').mean()['MPG']\navg_mpg_per_wgt",
"_____no_output_____"
],
[
"# Get the unique list of cylinders in numerical order\ncyl = sorted(df_cars['CYL'].unique())\ncyl",
"_____no_output_____"
],
[
"# Get a list of all mpg values for three specific cylinder sizes\ncyl4 = df_cars[df_cars['CYL']==4]['MPG'].values\ncyl6 = df_cars[df_cars['CYL']==6]['MPG'].values\ncyl8 = df_cars[df_cars['CYL']==8]['MPG'].values",
"_____no_output_____"
],
[
"cyl4[0:20]",
"_____no_output_____"
]
],
[
[
"## The most common plots\n\nThis section shows how to draw very basic plots using the recommended template:\n\n```\nfig, ax = plt.subplots(figsize=(width,height))\nax.plottype(args)\nplt.show()\n```\n\nThe default plot style is not particularly beautiful nor informative, but we have to learn the basics first.",
"_____no_output_____"
],
[
"### Histogram of car weight visualized as barchart",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.hist(df_cars['WGT'])\nplt.show()",
"_____no_output_____"
]
],
[
[
"Changing the number of bins is sometimes a good idea; it's a matter of sending in a new parameter:",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.hist(df_cars['WGT'], bins=20)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Line plot of number of cylinders vs average miles per gallon",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.plot(cyl, avg_mpg)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Scatterplot of weight versus miles per gallon",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.scatter(df_cars['WGT'], df_cars['MPG'])\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that if you try to use `plot()` it gives you a screwed up plot; line drawing is not appropriate for data with multiple Y values per X value. Instead, the ",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.plot(df_cars['WGT'], df_cars['MPG'])\nax.set_title(\"OOOPS!\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Line plot of average miles per gallon grouped by weight",
"_____no_output_____"
],
[
"If we want to use a line plot, we should plot the weight versus average miles per gallon at that weight.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.plot(avg_mpg_per_wgt)\nplt.show()",
"_____no_output_____"
]
],
[
[
"I'm using a trick here. Note that `avg_mpg_per_wgt` is a series, which has an index (WGT) and the value (MPG) so I can pass this as a single parameter to matplotlib. matplotlib is flexible enough to recognize this and pull out the X and Y coordinates automatically for us.",
"_____no_output_____"
]
],
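As a minimal, self-contained illustration of that trick (made-up weight/MPG numbers standing in for the cars data, not the actual values), matplotlib pulls the x coordinates out of the Series index:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# The index plays the role of WGT, the values play the role of average MPG
s = pd.Series([25.0, 20.0, 15.0], index=[2000, 3000, 4000])

fig, ax = plt.subplots(figsize=(3, 2))
line, = ax.plot(s)           # a single Series argument: index -> x, values -> y
xs = list(line.get_xdata())  # the x coordinates come from s.index
```

Passing `(s.index, s.values)` explicitly would draw the identical line; the single-argument form is just less typing.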
[
[
"avg_mpg_per_wgt",
"_____no_output_____"
]
],
[
[
"### Bar chart of average miles per gallon grouped by number of cylinders",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.bar(cyl, avg_mpg)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Box plot of miles per gallon grouped by number of cylinders\n\nA box plot needs a collection of values for each X coordinate, and we are passing in three lists.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2))\nax.boxplot([cyl4,cyl6,cyl8])\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Violin plot of miles per gallon grouped by number of cylinders\n\nAs with box plot, we need a collection of values for each X coordinate. All we've done here is to change the function we're calling.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2))\nax.violinplot([cyl4,cyl6,cyl8])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Creating a grid of plots",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(6,6)) # make one subplot (ax) on the figure\naxes = axes.flatten() # it comes out as a 2D matrix; convert to a vector\n\naxes[0].hist(df_cars['WGT'])\naxes[1].plot(cyl, avg_mpg)\naxes[2].scatter(df_cars['WGT'], df_cars['MPG'])\naxes[3].plot(avg_mpg_per_wgt)\naxes[4].bar(cyl, avg_mpg)\naxes[5].boxplot([cyl4,cyl6,cyl8])\n\nplt.tight_layout() # I add this anytime I have a grid as it \"does the right thing\"\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Adding a title and labels to axes\n\nAt a minimum, plots should always have labels on the axes and, regardless of the plot type, we can set the X and Y labels on the matplotlib canvas with two method calls. We can even set the overall title easily with another call.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.hist(df_cars['WGT'])\nax.set_xlabel(\"Weight (lbs)\")\nax.set_ylabel(\"Count at that weight\")\nax.set_title(\"Weight histogram\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Dual Y axes for single X axis\n\nWhen you want to plot to curves on the same graph that have the same X but different Y scales, it's a good idea to use dual Y axes. All it takes is a call to `twinx()` on your main canvas (`ax` variable) to get another canvas to draw on:",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax_wgt = ax.twinx()\nax.plot(cyl, avg_mpg)\nax_wgt.plot(cyl, avg_wgt)\nax.set_ylabel(\"MPG\")\nax_wgt.set_ylabel(\"WGT\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"We should be using different colors for those curves, but we'll look at that in another notebook.\n\nDual axes should be used infrequently but sometimes it's necessary for space reasons, so I'm showing it here.",
"_____no_output_____"
],
[
"## Displaying images\n\nDisplaying an image using matplotlib is done using function `imshow()`. First, we load a picture of Terence enjoying his childhood using the PIL library:",
"_____no_output_____"
]
],
[
[
"from PIL import Image\nfig, ax = plt.subplots(1, 1, figsize=(3, 4))\nmud = Image.open(\"images/mud.jpg\")\nplt.imshow(mud)\nax.axis('off') # don't show x, y axes\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Matrices as images",
"_____no_output_____"
],
[
"When you start doing machine learning, particularly deep learning, one of the first examples is to classify handwritten digits. I have created a sample CSV of these digits we can easily load into a data frame. Each row is a flattened array of 28x28=784 values for a single handwritten digit image, where values are in 0..1:",
"_____no_output_____"
]
],
[
[
"df_digits = pd.read_csv('https://mlbook.explained.ai/data/mnist-10k-sample.csv.zip')\ntrue_digits = df_digits['digit']\ndf_images = df_digits.drop('digit', axis=1) # ignore the true digit number\ndf_images.head(3)",
"_____no_output_____"
]
],
[
[
"Just as we did with a jpg image, we can treat a 2D matrix as an image and display it. Let's pull the first row, reshape to be a 28x28 matrix, and display using greyscale:",
"_____no_output_____"
]
],
[
[
"six_img_as_row = df_images.iloc[0].values # digit '3' is first row\nimg28x28 = six_img_as_row.reshape(28,28) # unflatten as 2D array\nfig, ax = plt.subplots(1, 1, figsize=(2,2))\nax.imshow(img28x28, cmap='binary')\nax.axis('off') # don't show x, y axes\nplt.show()",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(nrows=2, ncols=10, figsize=(8, 1.6))\n\nfor i, ax in enumerate(axes.flatten()):\n img_as_row = df_images.iloc[i].values\n img28x28 = img_as_row.reshape(28,28)\n ax.axis('off') # don't show x, y axes\n ax.imshow(img28x28, cmap='binary')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Heat maps\n\nIt's often difficult to look at a matrix of numbers and recognize patterns or see salient features. A good way to look for patterns is to visualize the matrix (or vector) as a heat map where each value gets a color on a spectrum.\n\nAs data, let's ask pandas for the correlation between every pair of columns:",
"_____no_output_____"
]
],
[
[
"C = df_cars.corr()\nC",
"_____no_output_____"
]
],
[
[
"Then we can display the absolute value of those correlations as the spectrum of blues:",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 1, figsize=(4, 4))\n\nC = np.abs(C)\n\n# Use vmin to set white (lowest color) to be the min value\nax.imshow(C, cmap='Blues', vmin=np.min(C.values))\n\n# Add correlation to each box\nfor i in range(4):\n for j in range(4):\n if i!=j:\n ax.text(i, j, f\"{C.iloc[i,j]:.2f}\", horizontalalignment='center')\n \nax.set_xticks(range(4))\nax.set_xticklabels(list(C.columns))\nax.set_yticks(range(4))\nax.set_yticklabels(list(C.columns))\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Saving plots as images\n\nYou can save plots in a variety of formats. `svg` and `pdf` are good ones because these files are actually a set of commands needed to redraw the image and so can be scaled very nicely. `png` and `gif` will be much smaller typically but have fixed resolution.\n\nInstead of calling `show()`, we use `savefig()` (but the image still appears in the notebook as well as storing it on the disk in the current working directory):",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure\nax.hist(df_cars['WGT'])\nax.set_xlabel(\"Weight (lbs)\")\nax.set_ylabel(\"Count at that weight\")\nax.set_title(\"Weight histogram\")\n\nplt.savefig(\"histo.pdf\", bbox_inches='tight', pad_inches=0)",
"_____no_output_____"
]
],
[
[
"The `bbox_inches='tight', pad_inches=0` parameters are something I use all the time to make sure there is no padding around the image. When I incorporate an image into a paper or something, I can add my own padding. it just gives us more control.\n\nOn your mac, use the Finder to go to the directory holding this notebook and you should see `histo.pdf`.",
"_____no_output_____"
],
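The same `savefig()` call works for the raster formats too; the one extra argument worth knowing about for `png` is `dpi`, since raster formats have a fixed resolution. A minimal sketch (random stand-in data rather than the cars data):

```python
import os
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(3, 2))
ax.hist(np.random.default_rng(0).normal(3000, 500, size=400))
ax.set_xlabel("Weight (lbs)")
ax.set_title("Weight histogram")

# dpi controls the pixel resolution of raster formats like png
plt.savefig("histo.png", dpi=300, bbox_inches='tight', pad_inches=0)
```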
[
"## Exercise\n\n\n1. Create your own notebook and retype all of these examples so that you start to memorize the details. Of course, once you have typed in the template a few times, you can cut-and-paste those parts:\n```\nfig, ax = plt.subplots(figsize=(2,1.5))\n...\nplt.show()\n```\n1. Add axis labels and a title to a few of the plots.\n1. Make sure that you can save at least one of the figures in each of `pdf` and `png` formats.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a59b21e4731712c7f487f82912f38b59956e7b3 | 540,185 | ipynb | Jupyter Notebook | forcing/ice---sediment-content.ipynb | brogalla/Mn-sea-ice-paper | dd8b4002e6baabfaf319432726dc98c2c74f8e67 | ["Apache-2.0"] | null | null | null | forcing/ice---sediment-content.ipynb | brogalla/Mn-sea-ice-paper | dd8b4002e6baabfaf319432726dc98c2c74f8e67 | ["Apache-2.0"] | null | null | null | forcing/ice---sediment-content.ipynb | brogalla/Mn-sea-ice-paper | dd8b4002e6baabfaf319432726dc98c2c74f8e67 | ["Apache-2.0"] | null | null | null | 1,203.084633 | 526,772 | 0.955615 |
[
[
[
"# Parameterization for sediment released by sea-ice",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom mpl_toolkits.basemap import Basemap, cm\nimport netCDF4 as nc\nimport datetime as dt\nimport pickle\nimport scipy.ndimage as ndimage\nimport xarray as xr\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"##### Parameters",
"_____no_output_____"
]
],
[
[
"# Domain dimensions:\nimin, imax = 1479, 2179\njmin, jmax = 159, 799",
"_____no_output_____"
],
[
"# Home-made colormap:\nN = 256\nvals_cont = np.ones((N, 4))\nvals_cont[:, 0] = np.linspace(117/N, 1, N)\nvals_cont[:, 1] = np.linspace(82/N, 1, N)\nvals_cont[:, 2] = np.linspace(60/N, 1, N)\nsed_cmap = matplotlib.colors.ListedColormap(vals_cont).reversed()",
"_____no_output_____"
]
],
[
[
"##### Load files",
"_____no_output_____"
]
],
[
[
"# ANHA12 grid\nmesh = nc.Dataset('/ocean/brogalla/GEOTRACES/data/ANHA12/ANHA12_mesh1.nc')\nmesh_lon = np.array(mesh.variables['nav_lon'])\nmesh_lat = np.array(mesh.variables['nav_lat'])\ntmask = np.array(mesh.variables['tmask'])\nland_mask = np.ma.masked_where((tmask[0,:,:,:] > 0.1), tmask[0,:,:,:])",
"_____no_output_____"
]
],
[
[
"##### Functions:",
"_____no_output_____"
]
],
[
[
"def load_tracks(filename):\n nemo_file = nc.Dataset(filename)\n\n traj = np.array(nemo_file.variables['trajectory']) # dimensions: number of particles, tracks\n time = np.array(nemo_file.variables['time']) # units: seconds\n lat = np.array(nemo_file.variables['lat']) # degrees North\n lon = np.array(nemo_file.variables['lon']) # degrees East\n\n return traj, time, lon, lat",
"_____no_output_____"
],
[
"def check_laptev(CB_traj, CB_lon, CB_lat, CB_time):\n # does the parcel spend time in the laptev sea in the fall?\n\n # Define boundary latitudes and longitudes for the Laptev Sea region\n trajS_bdy1 = 68; trajN_bdy1 = 74;\n trajE_bdy1 = -170; trajW_bdy1 = -210;\n\n trajS_bdy2 = 70; trajN_bdy2 = 75;\n trajE_bdy2 = -185; trajW_bdy2 = -230;\n \n Laptev_particle = False\n # At each time step:\n for timestep in range(0,len(CB_traj)):\n if ((CB_lon[timestep] < trajE_bdy1) & (CB_lon[timestep] > trajW_bdy1) \\\n & (CB_lat[timestep] < trajN_bdy1) & (CB_lat[timestep] > trajS_bdy1)) or \\\n ((CB_lon[timestep] < trajE_bdy2) & (CB_lon[timestep] > trajW_bdy2) \\\n & (CB_lat[timestep] < trajN_bdy2) & (CB_lat[timestep] > trajS_bdy2)):\n\n start_time = dt.datetime(2015,12,31) - dt.timedelta(seconds=CB_time[0])\n current_time = start_time - dt.timedelta(seconds=CB_time[timestep])\n\n # And is the parcel on the shelf in the fall?\n if current_time.month in [9,10,11,12]:\n Laptev_particle = True\n break\n \n return Laptev_particle",
"_____no_output_____"
],
[
"def parcel_origin(CB_lon, CB_lat, CB_time, CB_traj):\n\n dim_parc = int((CB_lon.shape[0]/12)/np.ceil(CB_lon.shape[1]/(4*365))) # bottom converts 6 hour to days \n dim_time = int(12*((CB_lon.shape[0]/dim_parc)/12))\n\n particles_origin = np.zeros((dim_parc,dim_time))\n # --- Russian shelf in fall = 1\n # --- else = 0\n\n for release_time in range(0,dim_time):\n for location in range(0,dim_parc):\n ind = location + release_time*dim_parc\n lon_loc = CB_lon[ind,:]\n lat_loc = CB_lat[ind,:]\n time_loc = CB_time[ind,:]\n traj_loc = CB_traj[ind,:]\n\n Laptev_particle = check_laptev(traj_loc, lon_loc, lat_loc, time_loc)\n\n if Laptev_particle:\n particles_origin[location, release_time] = 1\n\n return particles_origin",
"_____no_output_____"
],
[
"def interp_np(nav_lon, nav_lat, var_in, lon_ANHA12, lat_ANHA12):\n # Interpolate some field to ANHA12 grid\n \n from scipy.interpolate import griddata\n LatLonPair = (nav_lon, nav_lat)\n var_out = griddata(LatLonPair, var_in, (lon_ANHA12, lat_ANHA12), method='cubic')\n # Take nearest neighbour interpolation to fill nans\n var_fill = griddata(LatLonPair, var_in, (lon_ANHA12, lat_ANHA12), method='nearest')\n \n # fill nans with constant value (0.1)\n var_out[np.isnan(var_out)] = var_fill[np.isnan(var_out)]\n return var_out",
"_____no_output_____"
]
],
[
[
"Parameterization components:\n\n1) Ice melt:\n - if (ice production < 0) --> ice is melting \n - units of ice melt, iiceprod, are in m/kt (180 s timestep)\n - convert m/kt to m/s\n - multiply iiceprod by the grid box area to get a volume of melt\n2) Sediment forcing\n - sediment content forcing field: units of grams of sediment / m3 of ice\n - background sediment content amount (include higher on shelf regions)\n - Laptev Sea sediment amounts\n - multiply forcing field by sediment content \n - multiply sediment forcing field by ice melt (m3) to get grams of sediment\n - add sediment to surface grid box + solubility, Mn content",
"_____no_output_____"
],
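The unit conversion in step (1) can be sketched as below. The variable names (`iiceprod` for ice production, `e1t`/`e2t` for grid-cell dimensions) follow the conventions mentioned in the notes, but this is an illustrative sketch with toy numbers, not the code actually used to build the forcing.

```python
import numpy as np

def melt_volume_m3_per_s(iiceprod, e1t, e2t, timestep_s=180.0):
    """Convert ice production (m per 180 s model timestep) to a melt-volume rate (m3/s).

    Negative ice production means the ice is melting; growth is ignored.
    """
    melt_rate = np.where(iiceprod < 0.0, -iiceprod, 0.0) / timestep_s  # m/timestep -> m/s
    return melt_rate * e1t * e2t  # times grid-cell area (m2) -> m3/s

# Toy example: 0.018 m of melt per timestep over a 5 km x 5 km grid cell
vol = melt_volume_m3_per_s(np.array([-0.018]), 5000.0, 5000.0)  # -> 2500 m3/s
```

Multiplying this volume by the sediment forcing field (g of sediment per m3 of ice) then gives grams of sediment released into the surface grid box, as in step (2).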
[
"### (2) Sediment forcing field",
"_____no_output_____"
],
[
"Load parcel trajectories",
"_____no_output_____"
]
],
[
[
"CB_traj, CB_time, CB_lon, CB_lat = load_tracks('/ocean/brogalla/GEOTRACES/parcels/trials/'+\\\n 'Particles_CB-20200205-extended-region2.nc')",
"_____no_output_____"
],
[
"particles_origin = parcel_origin(CB_lon, CB_lat, CB_time, CB_traj)",
"_____no_output_____"
],
[
"dim_parc = int((CB_lon.shape[0]/12)/np.ceil(CB_lon.shape[1]/(4*365)))\ndim_lons = len(set(CB_lon[0:dim_parc,0]))\n\nproportion_laptev = np.empty(CB_lon[0:dim_parc,0].shape)\n\nfor location in range(0,dim_parc):\n proportion_laptev[location] = np.sum(particles_origin[location,:])/particles_origin.shape[1]",
"_____no_output_____"
],
[
"parcel_lons = CB_lon[0:186, 0]\nparcel_lats = CB_lat[0:186, 0]",
"_____no_output_____"
]
],
[
[
"Forcing field dimensions",
"_____no_output_____"
]
],
[
[
"forcing_lons = mesh_lon[:,:]\nforcing_lats = mesh_lat[:,:]\nforcing_sed = np.zeros(forcing_lons.shape)",
"_____no_output_____"
]
],
[
[
"Interpolate Canada Basin proportions:",
"_____no_output_____"
]
],
[
[
"forcing_sed = interp_np(parcel_lons, parcel_lats, proportion_laptev, forcing_lons, forcing_lats)",
"_____no_output_____"
],
[
"forcing_sed[forcing_sed < 0] = 0\n\n# North of Nares Strait\nforcing_sed[(forcing_lons < -50) & (forcing_lons > -95) & (forcing_lats > 78) & (forcing_lats < 83.5)] = 0.03\n\n# CAA background rate\nforcing_sed[(forcing_lons >-128) & (forcing_lons < -45) & (forcing_lats < 77) & (forcing_lats > 60)] = 0.03\n\n# Beaufort Shelf background rate\nforcing_sed[(forcing_lons <-128) & (forcing_lats < 71.3) & (forcing_lats > 68)] = 0.02",
"_____no_output_____"
],
[
"Z2 = ndimage.gaussian_filter(forcing_sed, sigma=16, order=0)",
"_____no_output_____"
],
[
"# Zero the forcing field outside of the domain:\nZ2[0:imin, :] = 0; Z2[imax:-1, :] = 0;\nZ2[:, 0:jmin] = 0; Z2[:, jmax:-1] = 0;",
"_____no_output_____"
],
[
"fig, ax1, proj1 = pickle.load(open('/ocean/brogalla/GEOTRACES/pickles/mn-reference.pickle','rb'))\n\nx_model, y_model = proj1(forcing_lons, forcing_lats)\nCS1 = proj1.contourf(x_model, y_model, Z2, vmin=0.0, vmax=0.3, levels=np.arange(0,0.45,0.025), cmap=sed_cmap)\n\nx_sub, y_sub = proj1(mesh_lon, mesh_lat)\nproj1.plot(x_sub[imin:imax,jmax], y_sub[imin:imax,jmax], 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imin:imax,jmax].T, y_sub[imin:imax,jmax].T, 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imin:imax,jmin], y_sub[imin:imax,jmin], 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imin:imax,jmin].T, y_sub[imin:imax,jmin].T, 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imin,jmin:jmax], y_sub[imin,jmin:jmax], 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imin,jmin:jmax].T, y_sub[imin,jmin:jmax].T, 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imax,jmin:jmax], y_sub[imax,jmin:jmax], 'k-', lw=1.0,zorder=5)\nproj1.plot(x_sub[imax,jmin:jmax].T, y_sub[imax,jmin:jmax].T, 'k-', lw=1.0,zorder=5)\n\nx_parcel, y_parcel = proj1(parcel_lons, parcel_lats)\nproj1.scatter(x_parcel, y_parcel, s=20, zorder=2, c=proportion_laptev, edgecolor='k', \\\n cmap=sed_cmap, vmin=0, vmax=0.3, linewidths=0.3)\n\ncbaxes1 = fig.add_axes([0.52, 0.73, 0.33, 0.031]) \nCB1 = plt.colorbar(CS1, cax=cbaxes1, orientation='horizontal', ticks=np.arange(0,1.1,0.1))\nCB1.ax.tick_params(labelsize=7)\nCB1.outline.set_linewidth(1.0)\nCB1.ax.set_title('Proportion of shelf sediments in sea ice', fontsize=7)",
"_____no_output_____"
]
],
[
[
"save to forcing field:",
"_____no_output_____"
]
],
[
[
"file_write = xr.Dataset(\n {'prop_shelf': ((\"y\",\"x\"), Z2)}, \n coords = {\n \"y\": np.zeros(2400),\n \"x\": np.zeros(1632),\n },\n attrs = {\n 'long_name':'Proportion of shelf sediments in ice',\n 'units':'none',\n }\n)\n\nfile_write.to_netcdf('/ocean/brogalla/GEOTRACES/data/ice_sediment-20210722.nc')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a59c4b3716809d92e2b0ed67d1798f815127648 | 16,397 | ipynb | Jupyter Notebook | labs/lab-3/notebooks/monitor-with-appinsights.ipynb | anacaroliness9/azure-data-science-e2e | 6424fcc520ac85764d3151a7fc582880c20c795e | ["MIT"] | null | null | null | labs/lab-3/notebooks/monitor-with-appinsights.ipynb | anacaroliness9/azure-data-science-e2e | 6424fcc520ac85764d3151a7fc582880c20c795e | ["MIT"] | null | null | null | labs/lab-3/notebooks/monitor-with-appinsights.ipynb | anacaroliness9/azure-data-science-e2e | 6424fcc520ac85764d3151a7fc582880c20c795e | ["MIT"] | null | null | null | 16,397 | 16,397 | 0.669147 |
[
[
[
"# Enable application insights and add custom logs in your endpoint",
"_____no_output_____"
],
[
"## Get your Azure ML Workspace",
"_____no_output_____"
]
],
[
[
"!pip install azureml-core",
"_____no_output_____"
],
[
"import azureml\nfrom azureml.core import Workspace\nimport mlflow.azureml\n\nworkspace_name = '<YOUR-WORKSPACE>'\nresource_group = '<YOUR-RESOURCE-GROUP>'\nsubscription_id = '<YOUR-SUBSCRIPTION-ID>'\n\nworkspace = Workspace.get(name = workspace_name,\n resource_group = resource_group,\n subscription_id = subscription_id)",
"_____no_output_____"
]
],
[
[
"## Customize your entry script to add some custom logs",
"_____no_output_____"
]
],
[
[
"%%writefile /dbfs/models/churn-prediction/score.py\n\nimport mlflow\nimport json\nimport pandas as pd\nimport os\nimport xgboost as xgb\nimport time\nimport datetime\n\n# Called when the deployed service starts\ndef init():\n global model\n global train_stats\n\n # Get the path where the deployed model can be found.\n model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), './churn-prediction')\n \n # Load model\n model = mlflow.xgboost.load_model(model_path)\n\n# Handle requests to the service\ndef run(rawdata):\n try:\n data = pd.read_json(rawdata, orient = 'split')\n data_xgb = xgb.DMatrix(data) \n \n start_time = datetime.datetime.now()\n # Return the prediction\n prediction = predict(data_xgb)\n end_time = datetime.datetime.now()\n \n print(f'TOTAL_TIME (ms): {end_time - start_time}') # TRACK IN APP INSIGHTS\n \n info = json.dumps({\"payload\": rawdata})\n print(f'OUR_PAYLOAD: {info}') # TRACK IN APP INSIGHTS\n \n return prediction\n \n except Exception as e:\n error = str(e)\n print (f'ERROR: {error + time.strftime(\"%H:%M:%S\")}') # TRACK IN APP INSIGHTS\n raise Exception(error)\n\ndef predict(data):\n prediction = model.predict(data)[0]\n return {\"churn-prediction\": str(prediction)}",
"_____no_output_____"
]
],
[
[
"## Define your inference config (same we already did)",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import InferenceConfig\nfrom azureml.core.environment import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\n# Create the environment\nenv = Environment(name='xgboost_env')\n\nconda_dep = CondaDependencies('/dbfs/models/churn-prediction/conda.yaml')\n\n# Define the packages needed by the model and scripts\nconda_dep.add_pip_package(\"azureml-defaults\")\n\n# Adds dependencies to PythonSection of myenv\nenv.python.conda_dependencies=conda_dep\n\ninference_config = InferenceConfig(entry_script=\"/dbfs/models/churn-prediction/score.py\",\n environment=env)",
"_____no_output_____"
]
],
[
[
"## Get the registered model",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import Model\n\nmodel_name = 'churn-model'\nmodel_azure = Model.list(workspace = workspace, name = model_name)[0]",
"_____no_output_____"
]
],
[
[
"## New deployment on AKS. Now with App insights enable",
"_____no_output_____"
]
],
[
[
"from azureml.core.webservice import AksWebservice\nfrom azureml.core.compute import AksCompute\n\nendpoint_name = 'api-churn-prod'\naks_name = 'aks-e2e-ds'\n\naks_target = AksCompute(workspace, aks_name)\naks_config = AksWebservice.deploy_configuration(enable_app_insights = True)\n\naks_service = Model.deploy(workspace=workspace,\n name=endpoint_name,\n models=[model_azure],\n inference_config=inference_config,\n deployment_config=aks_config,\n deployment_target=aks_target,\n overwrite=True)\n\naks_service.wait_for_deployment(show_output = True)\nprint(aks_service.state)",
"_____no_output_____"
]
],
[
[
"### Call the API and see the results in the `Application Insights`",
"_____no_output_____"
]
],
[
[
"import requests\n\npayload1='{\"columns\":[\"Idade\",\"RendaMensal\",\"PercentualUtilizacaoLimite\",\"QtdTransacoesNegadas\",\"AnosDeRelacionamentoBanco\",\"JaUsouChequeEspecial\",\"QtdEmprestimos\",\"NumeroAtendimentos\",\"TMA\",\"IndiceSatisfacao\",\"Saldo\",\"CLTV\"],\"data\":[[21,9703,1.0,5.0,12.0,0.0,1.0,100,300,2,6438,71]]}'\n\npayload2='{\"columns\":[\"Idade\",\"RendaMensal\",\"PercentualUtilizacaoLimite\",\"QtdTransacoesNegadas\",\"AnosDeRelacionamentoBanco\",\"JaUsouChequeEspecial\",\"QtdEmprestimos\",\"NumeroAtendimentos\",\"TMA\",\"IndiceSatisfacao\",\"Saldo\",\"CLTV\"],\"data\":[[21,9703,1.0,5.0,12.0,0.0,1.0,1,5,5,6438,71]]}'\n\nheaders = {\n 'Content-Type': 'application/json'\n}\n\nprod_service_key = aks_service.get_keys()[0] if len(aks_service.get_keys()) > 0 else None\n\nheaders[\"Authorization\"] = \"Bearer {service_key}\".format(service_key=prod_service_key)\n\nfor count in range(5):\n print(f'Predição: {count}')\n response1 = requests.request(\"POST\", aks_service.scoring_uri, headers=headers, data=payload1)\n response2 = requests.request(\"POST\", aks_service.scoring_uri, headers=headers, data=payload2)\n\n print(response1.text)\n print(response2.text)",
"_____no_output_____"
]
],
[
[
"### Let's try to simulate some errors as well",
"_____no_output_____"
]
],
[
[
"payload3='{\"columns\":[\"Idade\",\"RendaMensalERRO\",\"PercentualUtilizacaoLimite\",\"QtdTransacoesNegadas\",\"AnosDeRelacionamentoBanco\",\"JaUsouChequeEspecial\",\"QtdEmprestimos\",\"NumeroAtendimentos\",\"TMA\",\"IndiceSatisfacao\",\"Saldo\",\"CLTV\"],\"data\":[[21,9703,1.0,5.0,12.0,0.0,1.0,1,5,5,6438,71]]}'\n\nfor count in range(10):\n response1 = requests.request(\"POST\", aks_service.scoring_uri, headers=headers, data=payload3)\n print(response1.text)\n print('\\n')",
"_____no_output_____"
]
],
[
[
"## Update your endpoint to enable/disable `application insights`",
"_____no_output_____"
],
[
"You can enable the `Application Insights` on an existing endpoint as well, to do this just run the code bellow passing True/False value.",
"_____no_output_____"
],
[
"### Get your endpoint (the ACI/AKS endpoint your already deployed)",
"_____no_output_____"
]
],
[
[
"from azureml.core.webservice import Webservice\n\nendpoint_name = 'api-churn-prod'\naks_service= Webservice(workspace, endpoint_name)",
"_____no_output_____"
]
],
[
[
"### Enable or disable `Application Insights`",
"_____no_output_____"
]
],
[
[
"aks_service.update(enable_app_insights=True)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a59d10f930fb59444c8b2eb32a4fe1398fd9ca8 | 16,886 | ipynb | Jupyter Notebook | markdown_generator/talks.ipynb | peach-lucien/peach-lucien.github.io | 338fb3fa07bfa265b58e14be25827f00089b8241 | ["MIT"] | null | null | null | markdown_generator/talks.ipynb | peach-lucien/peach-lucien.github.io | 338fb3fa07bfa265b58e14be25827f00089b8241 | ["MIT"] | null | null | null | markdown_generator/talks.ipynb | peach-lucien/peach-lucien.github.io | 338fb3fa07bfa265b58e14be25827f00089b8241 | ["MIT"] | null | null | null | 38.377273 | 427 | 0.485609 |
[
[
[
"# Talks markdown generator for academicpages\n\nTakes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.\n\nTODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport os",
"_____no_output_____"
]
],
[
[
"## Data format\n\nThe TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.\n\n- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to \"Talk\" \n- `date` must be formatted as YYYY-MM-DD.\n- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. \n - The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`\n - The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames\n\nThis is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).",
"_____no_output_____"
]
],
[
[
"!cat talks.tsv",
"title\ttype\turl_slug\tvenue\tdate\tlocation\ttalk_url\tdescription\r\nHighly Comparative Graph Analysis\tTalk\tcomplexnet\tThe 9th International Conference on Complex Networks and their Applications\t2020-12-03\tMadrid, Spain (Virtual)\t\t\r\nOvershooting behaviours in networks\tTalk\tcoxic\tCOXIC\t2020-12-14\tLondon, UK\t\t\r\nGood practices in distributed and online learning\tPanel discussion\tifest\tiFest 2019\t2019-08-12\tAlexandria, US\t\t\r\nTremor Analysis in Essential Tremor patients\tTalk\tiop\tComplexity in the 21st Century. Institute of Physics\t2019-07-18\tLondon, UK\t\t\r\nUsing time-series engagement data to predict student Performance\tTalk\tgmac\tGMAC Leadership conference\t2019-01-12\tFort Lauderdale, US\t\t\r\nImperial College Business School\tTalk\troundtable\tBusiness School Round Table\t2018-11-27\tLondon, UK\t\t\r\nLearning analytics dashboard and student engagement behaviours\tTalk\tfome\tFOME Oslo\t2018-11-15\tOslo, Norway\t\t\r\nPredicting patient tremor response to TACS\tTalk\tcmph\tCentre for Mathematical Precision Healthcare\t2018-10-20\tLondon, UK\t\t\r\nPredicting the effect of mutations on protein dynamics using graph theory\tPoster\ttokyotech\tTokyo Tech-Imperial College workshop – Bioscience and its interface with technology\t2016-11-05\tTokyo, Japan\t\t\r\nMarkov Stability of Protein Structures\tPoster\tbiophys\tConformational ensembles from experimental data and computer simulations, Biophysical Society\t2017-08-02\tBerlin, Germany\t\t\r\n"
]
],
[
[
"## Import TSV\n\nPandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\\t`.\n\nI found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.",
"_____no_output_____"
]
],
[
[
"talks = pd.read_csv(\"talks.tsv\", sep=\"\\t\", header=0)\ntalks",
"_____no_output_____"
]
],
[
[
"## Escape special characters\n\nYAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.",
"_____no_output_____"
]
],
[
[
"html_escape_table = {\n \"&\": \"&\",\n '\"': \""\",\n \"'\": \"'\"\n }\n\ndef html_escape(text):\n if type(text) is str:\n return \"\".join(html_escape_table.get(c,c) for c in text)\n else:\n return \"False\"",
"_____no_output_____"
]
],
[
[
"## Creating the markdown files\n\nThis is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.",
"_____no_output_____"
]
],
[
[
"loc_dict = {}\n\nfor row, item in talks.iterrows():\n \n md_filename = str(item.date) + \"-\" + item.url_slug + \".md\"\n html_filename = str(item.date) + \"-\" + item.url_slug \n year = item.date[:4]\n \n md = \"---\\ntitle: \\\"\" + item.title + '\"\\n'\n md += \"collection: talks\" + \"\\n\"\n \n if len(str(item.type)) > 3:\n md += 'type: \"' + item.type + '\"\\n'\n else:\n md += 'type: \"Talk\"\\n'\n \n md += \"permalink: /talks/\" + html_filename + \"\\n\"\n \n if len(str(item.venue)) > 3:\n md += 'venue: \"' + item.venue + '\"\\n'\n \n if len(str(item.date)) > 3:\n md += \"date: \" + str(item.date) + \"\\n\"\n \n if len(str(item.location)) > 3:\n md += 'location: \"' + str(item.location) + '\"\\n'\n \n md += \"---\\n\"\n \n \n if len(str(item.talk_url)) > 3:\n md += \"\\n[More information here](\" + item.talk_url + \")\\n\" \n \n \n if len(str(item.description)) > 3:\n md += \"\\n\" + html_escape(item.description) + \"\\n\"\n \n \n md_filename = os.path.basename(md_filename)\n #print(md)\n \n with open(\"../_talks/\" + md_filename, 'w') as f:\n f.write(md)",
"_____no_output_____"
]
],
[
[
"These files are in the talks directory, one directory below where we're working from.",
"_____no_output_____"
]
],
[
[
"!ls ../_talks",
"2016-11-05-tokyotech.md 2018-11-27-roundtable.md 2020-12-03-complexnet.md\r\n2017-08-02-biophys.md\t 2019-01-12-gmac.md\t 2020-12-14-coxic.md\r\n2018-10-20-cmph.md\t 2019-07-18-iop.md\r\n2018-11-15-fome.md\t 2019-08-12-ifest.md\r\n"
],
[
"!cat ../_talks/2013-03-01-tutorial-1.md",
"cat: ../_talks/2013-03-01-tutorial-1.md: No such file or directory\r\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a59e840d57cc3737010c74f44f341bcbcf7d1aa
| 37,947 |
ipynb
|
Jupyter Notebook
|
iris_classification.ipynb
|
yoonputer/test_deeplearning
|
520b833474b195c54f0106395008015d6a70f64c
|
[
"Apache-2.0"
] | null | null | null |
iris_classification.ipynb
|
yoonputer/test_deeplearning
|
520b833474b195c54f0106395008015d6a70f64c
|
[
"Apache-2.0"
] | null | null | null |
iris_classification.ipynb
|
yoonputer/test_deeplearning
|
520b833474b195c54f0106395008015d6a70f64c
|
[
"Apache-2.0"
] | null | null | null | 41.023784 | 7,342 | 0.498221 |
[
[
[
"<a href=\"https://colab.research.google.com/github/yoonputer/test_deeplearning/blob/master/iris_classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets",
"_____no_output_____"
],
[
"iris = datasets.load_iris()\niris.feature_names",
"_____no_output_____"
],
[
"import pandas as pd\ndf_iris = pd.DataFrame(iris.data)\ndf_iris.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 150 entries, 0 to 149\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 0 150 non-null float64\n 1 1 150 non-null float64\n 2 2 150 non-null float64\n 3 3 150 non-null float64\ndtypes: float64(4)\nmemory usage: 4.8 KB\n"
],
[
"import sqlite3\nconnect = sqlite3.connect('./db.sqlite3')\ndf_iris.to_sql('iris_resource', connect, if_exists='append', index=False)",
"_____no_output_____"
],
[
"df_load = pd.read_sql_query('select * from iris_resource',connect)\ndf_load.head(4)",
"_____no_output_____"
],
[
"x_data = df_load.to_numpy()\nx_data.shape",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"y_data = iris.target\ny_data, np.unique(y_data)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"x_train, x_val,y_train, y_val = train_test_split(x_data, y_data)",
"_____no_output_____"
],
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"model = tf.keras.Sequential()\n\nmodel.add(tf.keras.Input(shape=(4,))) # input layer \n\nmodel.add(tf.keras.layers.Dense(64, activation='relu')) # hidden layer\nmodel.add(tf.keras.layers.Dense(64, activation='relu')) # hidden layer\n\nmodel.add(tf.keras.layers.Dense(3, activation='softmax')) # output layer\n\nmodel.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy',metrics=['acc'])",
"WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.\n"
],
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 64) 320 \n_________________________________________________________________\ndense_1 (Dense) (None, 64) 4160 \n_________________________________________________________________\ndense_2 (Dense) (None, 3) 195 \n=================================================================\nTotal params: 4,675\nTrainable params: 4,675\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# model.fit(x_data, y_data, epochs=50, validation_split=0.3) # train\nmodel.fit(x_train, y_train, epochs=50, validation_data=(x_val,y_val)) # train",
"Epoch 1/50\n4/4 [==============================] - 1s 67ms/step - loss: 1.2282 - acc: 0.3036 - val_loss: 1.0995 - val_acc: 0.3158\nEpoch 2/50\n4/4 [==============================] - 0s 8ms/step - loss: 1.0296 - acc: 0.5179 - val_loss: 0.9590 - val_acc: 0.5000\nEpoch 3/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.9053 - acc: 0.5982 - val_loss: 0.8373 - val_acc: 0.8684\nEpoch 4/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.7972 - acc: 0.8661 - val_loss: 0.7387 - val_acc: 0.7895\nEpoch 5/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.7161 - acc: 0.7232 - val_loss: 0.6635 - val_acc: 0.6842\nEpoch 6/50\n4/4 [==============================] - 0s 8ms/step - loss: 0.6502 - acc: 0.6875 - val_loss: 0.6037 - val_acc: 0.7632\nEpoch 7/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.5966 - acc: 0.7679 - val_loss: 0.5532 - val_acc: 0.8684\nEpoch 8/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.5512 - acc: 0.8571 - val_loss: 0.5135 - val_acc: 0.9474\nEpoch 9/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.5150 - acc: 0.9286 - val_loss: 0.4824 - val_acc: 0.9737\nEpoch 10/50\n4/4 [==============================] - 0s 8ms/step - loss: 0.4858 - acc: 0.9107 - val_loss: 0.4535 - val_acc: 0.8947\nEpoch 11/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.4605 - acc: 0.9107 - val_loss: 0.4293 - val_acc: 0.9474\nEpoch 12/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.4380 - acc: 0.9375 - val_loss: 0.4114 - val_acc: 0.9737\nEpoch 13/50\n4/4 [==============================] - 0s 11ms/step - loss: 0.4226 - acc: 0.9643 - val_loss: 0.3921 - val_acc: 0.9737\nEpoch 14/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.4026 - acc: 0.9196 - val_loss: 0.3784 - val_acc: 0.8684\nEpoch 15/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.3963 - acc: 0.8571 - val_loss: 0.3621 - val_acc: 0.9474\nEpoch 16/50\n4/4 
[==============================] - 0s 8ms/step - loss: 0.3753 - acc: 0.9196 - val_loss: 0.3520 - val_acc: 0.9737\nEpoch 17/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.3661 - acc: 0.9554 - val_loss: 0.3420 - val_acc: 0.9737\nEpoch 18/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.3549 - acc: 0.9554 - val_loss: 0.3258 - val_acc: 0.9737\nEpoch 19/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.3409 - acc: 0.9464 - val_loss: 0.3148 - val_acc: 0.9737\nEpoch 20/50\n4/4 [==============================] - 0s 8ms/step - loss: 0.3314 - acc: 0.9732 - val_loss: 0.3055 - val_acc: 0.9737\nEpoch 21/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.3172 - acc: 0.9732 - val_loss: 0.2920 - val_acc: 0.9737\nEpoch 22/50\n4/4 [==============================] - 0s 11ms/step - loss: 0.3094 - acc: 0.9554 - val_loss: 0.2817 - val_acc: 0.9737\nEpoch 23/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.3036 - acc: 0.9643 - val_loss: 0.2771 - val_acc: 1.0000\nEpoch 24/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2914 - acc: 0.9643 - val_loss: 0.2652 - val_acc: 1.0000\nEpoch 25/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2799 - acc: 0.9732 - val_loss: 0.2528 - val_acc: 0.9737\nEpoch 26/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.2745 - acc: 0.9554 - val_loss: 0.2443 - val_acc: 0.9737\nEpoch 27/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.2637 - acc: 0.9643 - val_loss: 0.2370 - val_acc: 1.0000\nEpoch 28/50\n4/4 [==============================] - 0s 8ms/step - loss: 0.2569 - acc: 0.9643 - val_loss: 0.2316 - val_acc: 1.0000\nEpoch 29/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2512 - acc: 0.9464 - val_loss: 0.2196 - val_acc: 1.0000\nEpoch 30/50\n4/4 [==============================] - 0s 12ms/step - loss: 0.2360 - acc: 0.9732 - val_loss: 0.2089 - val_acc: 0.9737\nEpoch 31/50\n4/4 
[==============================] - 0s 15ms/step - loss: 0.2335 - acc: 0.9643 - val_loss: 0.2006 - val_acc: 0.9737\nEpoch 32/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2284 - acc: 0.9643 - val_loss: 0.1928 - val_acc: 0.9737\nEpoch 33/50\n4/4 [==============================] - 0s 11ms/step - loss: 0.2198 - acc: 0.9643 - val_loss: 0.1964 - val_acc: 0.9737\nEpoch 34/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2129 - acc: 0.9643 - val_loss: 0.1792 - val_acc: 0.9737\nEpoch 35/50\n4/4 [==============================] - 0s 11ms/step - loss: 0.2046 - acc: 0.9643 - val_loss: 0.1721 - val_acc: 0.9737\nEpoch 36/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.2031 - acc: 0.9554 - val_loss: 0.1731 - val_acc: 1.0000\nEpoch 37/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.1937 - acc: 0.9554 - val_loss: 0.1610 - val_acc: 1.0000\nEpoch 38/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1860 - acc: 0.9643 - val_loss: 0.1556 - val_acc: 0.9737\nEpoch 39/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1850 - acc: 0.9643 - val_loss: 0.1502 - val_acc: 1.0000\nEpoch 40/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1780 - acc: 0.9643 - val_loss: 0.1504 - val_acc: 1.0000\nEpoch 41/50\n4/4 [==============================] - 0s 11ms/step - loss: 0.1739 - acc: 0.9643 - val_loss: 0.1408 - val_acc: 1.0000\nEpoch 42/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1735 - acc: 0.9643 - val_loss: 0.1355 - val_acc: 0.9737\nEpoch 43/50\n4/4 [==============================] - 0s 12ms/step - loss: 0.1675 - acc: 0.9643 - val_loss: 0.1333 - val_acc: 1.0000\nEpoch 44/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.1625 - acc: 0.9643 - val_loss: 0.1384 - val_acc: 1.0000\nEpoch 45/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1610 - acc: 0.9732 - val_loss: 0.1241 - val_acc: 1.0000\nEpoch 46/50\n4/4 
[==============================] - 0s 14ms/step - loss: 0.1562 - acc: 0.9643 - val_loss: 0.1200 - val_acc: 1.0000\nEpoch 47/50\n4/4 [==============================] - 0s 16ms/step - loss: 0.1540 - acc: 0.9732 - val_loss: 0.1224 - val_acc: 1.0000\nEpoch 48/50\n4/4 [==============================] - 0s 12ms/step - loss: 0.1528 - acc: 0.9732 - val_loss: 0.1136 - val_acc: 1.0000\nEpoch 49/50\n4/4 [==============================] - 0s 10ms/step - loss: 0.1456 - acc: 0.9732 - val_loss: 0.1157 - val_acc: 1.0000\nEpoch 50/50\n4/4 [==============================] - 0s 9ms/step - loss: 0.1473 - acc: 0.9554 - val_loss: 0.1122 - val_acc: 1.0000\n"
]
],
[
[
"# Evaluation",
"_____no_output_____"
]
],
[
[
"#model.evaluate(x_data, y_data) # - loss: 0.4124 - acc: 0.6800",
"_____no_output_____"
],
[
"model.evaluate(x_data, y_data)",
"5/5 [==============================] - 0s 3ms/step - loss: 0.1342 - acc: 0.9733\n"
],
[
"from sklearn.metrics import classification_report, confusion_matrix",
"_____no_output_____"
],
[
"y_pred = model.predict(x_data)\ny_pred.shape, y_pred[4]",
"_____no_output_____"
],
[
"import numpy as np\ny_pred_argmax = np.argmax(y_pred, axis=1)\ny_pred_argmax.shape, y_pred_argmax[4]",
"_____no_output_____"
],
[
"y_data.shape, y_data[4]",
"_____no_output_____"
],
[
"print(classification_report(y_data, y_pred_argmax)) # the labels are ordered by class, so the data should be shuffled (e.g. with KFold)",
" precision recall f1-score support\n\n 0 1.00 1.00 1.00 50\n 1 0.98 0.94 0.96 50\n 2 0.94 0.98 0.96 50\n\n accuracy 0.97 150\n macro avg 0.97 0.97 0.97 150\nweighted avg 0.97 0.97 0.97 150\n\n"
],
[
"y_data # a sequential split would give skewed 0/1/2 class splits",
"_____no_output_____"
],
[
"confusion_matrix(y_data, y_pred_argmax)",
"_____no_output_____"
],
[
"import seaborn as sns\nsns.heatmap(confusion_matrix(y_data, y_pred_argmax), annot=True)",
"_____no_output_____"
]
],
[
[
"-> The counts should be concentrated on the diagonal for a good result (class 2 was barely predicted correctly before)\n \n-> This result is fine because it was re-run after shuffling and fixing the data",
"_____no_output_____"
],
[
"# Service",
"_____no_output_____"
]
],
[
[
"x_data[25], y_data[25]",
"_____no_output_____"
],
[
"pred = model.predict([[5. , 3. , 1.6, 0.2]])\npred",
"_____no_output_____"
],
[
"import numpy as np\nnp.argmax(pred)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5a05f9f4f9792691daceda14d3efd25c911858
| 451,651 |
ipynb
|
Jupyter Notebook
|
tools_pandas.ipynb
|
TeaCoffeeBreak/handson-ml2
|
f0fde1ebef95f0858060ccabb75c2542ea1ea5e5
|
[
"Apache-2.0"
] | null | null | null |
tools_pandas.ipynb
|
TeaCoffeeBreak/handson-ml2
|
f0fde1ebef95f0858060ccabb75c2542ea1ea5e5
|
[
"Apache-2.0"
] | null | null | null |
tools_pandas.ipynb
|
TeaCoffeeBreak/handson-ml2
|
f0fde1ebef95f0858060ccabb75c2542ea1ea5e5
|
[
"Apache-2.0"
] | null | null | null | 37.3327 | 21,978 | 0.496084 |
[
[
[
"**Tools - pandas**\n\n*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as an in-memory 2D table (like a spreadsheet, with column names and row labels). Many features available in Excel are available programmatically, such as creating pivot tables, computing columns based on other columns, plotting graphs, etc. You can also group rows by column value, or join tables much like in SQL. Pandas is also great at handling time series.*\n\nPrerequisites:\n* NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now.",
"_____no_output_____"
],
[
"# Setup",
"_____no_output_____"
],
[
"First, let's import `pandas`. People usually import it as `pd`:",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# `Series` objects\nThe `pandas` library contains these useful data structures:\n* `Series` objects, which we will discuss now. A `Series` object is a 1D array, similar to a column in a spreadsheet (with a column name and row labels).\n* `DataFrame` objects. This is a 2D table, similar to a spreadsheet (with column names and row labels).\n* `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. These are less used, so we will not discuss them here.",
"_____no_output_____"
],
[
"## Creating a `Series`\nLet's start by creating our first `Series` object!",
"_____no_output_____"
]
],
[
[
"s = pd.Series([2,-1,3,5])\ns",
"_____no_output_____"
]
],
[
[
"## Similar to a 1D `ndarray`\n`Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nnp.exp(s)",
"_____no_output_____"
]
],
[
[
"Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s:",
"_____no_output_____"
]
],
[
[
"s + [1000,2000,3000,4000]",
"_____no_output_____"
]
],
[
[
"Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`. This is called *broadcasting*:",
"_____no_output_____"
]
],
[
[
"s + 1000",
"_____no_output_____"
]
],
[
[
"The same is true for all binary operations such as `*` or `/`, and even conditional operations:",
"_____no_output_____"
]
],
[
[
"s < 0",
"_____no_output_____"
]
],
[
[
"## Index labels\nEach item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the rank of the item in the `Series` (starting at `0`) but you can also set the index labels manually:",
"_____no_output_____"
]
],
[
[
"s2 = pd.Series([68, 83, 112, 68], index=[\"alice\", \"bob\", \"charles\", \"darwin\"])\ns2",
"_____no_output_____"
]
],
[
[
"You can then use the `Series` just like a `dict`:",
"_____no_output_____"
]
],
[
[
"s2[\"bob\"]",
"_____no_output_____"
]
],
[
[
"You can still access the items by integer location, like in a regular array:",
"_____no_output_____"
]
],
[
[
"s2[1]",
"_____no_output_____"
]
],
[
[
"To make it clear when you are accessing by label or by integer location, it is recommended to always use the `loc` attribute when accessing by label, and the `iloc` attribute when accessing by integer location:",
"_____no_output_____"
]
],
[
[
"s2.loc[\"bob\"]",
"_____no_output_____"
],
[
"s2.iloc[1]",
"_____no_output_____"
]
],
[
[
"Slicing a `Series` also slices the index labels:",
"_____no_output_____"
]
],
[
[
"s2.iloc[1:3]",
"_____no_output_____"
]
],
[
[
"This can lead to unexpected results when using the default numeric labels, so be careful:",
"_____no_output_____"
]
],
[
[
"surprise = pd.Series([1000, 1001, 1002, 1003])\nsurprise",
"_____no_output_____"
],
[
"surprise_slice = surprise[2:]\nsurprise_slice",
"_____no_output_____"
]
],
[
[
"Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice:",
"_____no_output_____"
]
],
[
[
"try:\n surprise_slice[0]\nexcept KeyError as e:\n print(\"Key error:\", e)",
"Key error: 0\n"
]
],
[
[
"But remember that you can access elements by integer location using the `iloc` attribute. This illustrates another reason why it's always better to use `loc` and `iloc` to access `Series` objects:",
"_____no_output_____"
]
],
[
[
"surprise_slice.iloc[0]",
"_____no_output_____"
]
],
[
[
"## Init from `dict`\nYou can create a `Series` object from a `dict`. The keys will be used as index labels:",
"_____no_output_____"
]
],
[
[
"weights = {\"alice\": 68, \"bob\": 83, \"colin\": 86, \"darwin\": 68}\ns3 = pd.Series(weights)\ns3",
"_____no_output_____"
]
],
[
[
"You can control which elements you want to include in the `Series` and in what order by explicitly specifying the desired `index`:",
"_____no_output_____"
]
],
[
[
"s4 = pd.Series(weights, index = [\"colin\", \"alice\"])\ns4",
"_____no_output_____"
]
],
[
[
"## Automatic alignment\nWhen an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels.",
"_____no_output_____"
]
],
[
[
"print(s2.keys())\nprint(s3.keys())\n\ns2 + s3",
"Index(['alice', 'bob', 'charles', 'darwin'], dtype='object')\nIndex(['alice', 'bob', 'colin', 'darwin'], dtype='object')\n"
]
],
[
[
"The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `\"colin\"` is missing from `s2` and `\"charles\"` is missing from `s3`, these items have a `NaN` result value (i.e. Not-a-Number, meaning *missing*).\n\nAutomatic alignment is very handy when working with data that may come from various sources with varying structure and missing items. But if you forget to set the right index labels, you can have surprising results:",
"_____no_output_____"
]
],
[
[
"s5 = pd.Series([1000,1000,1000,1000])\nprint(\"s2 =\", s2.values)\nprint(\"s5 =\", s5.values)\n\ns2 + s5",
"s2 = [ 68 83 112 68]\ns5 = [1000 1000 1000 1000]\n"
]
],
[
[
"Pandas could not align the `Series`, since their labels do not match at all, hence the full `NaN` result.",
"_____no_output_____"
],
[
"## Init with a scalar\nYou can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar.",
"_____no_output_____"
]
],
[
[
"meaning = pd.Series(42, [\"life\", \"universe\", \"everything\"])\nmeaning",
"_____no_output_____"
]
],
[
[
"## `Series` name\nA `Series` can have a `name`:",
"_____no_output_____"
]
],
[
[
"s6 = pd.Series([83, 68], index=[\"bob\", \"alice\"], name=\"weights\")\ns6",
"_____no_output_____"
]
],
[
[
"## Plotting a `Series`\nPandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)). Just import matplotlib and call the `plot()` method:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\ntemperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5]\ns7 = pd.Series(temperatures, name=\"Temperature\")\ns7.plot()\nplt.show()",
"_____no_output_____"
]
],
[
[
"There are *many* options for plotting your data. It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code.",
"_____no_output_____"
],
[
"# Handling time\nMany datasets have timestamps, and pandas is awesome at manipulating such data:\n* it can represent periods (such as 2016Q3) and frequencies (such as \"monthly\"),\n* it can convert periods to actual timestamps, and *vice versa*,\n* it can resample data and aggregate values any way you like,\n* it can handle timezones.\n\n## Time range\nLet's start by creating a time series using `pd.date_range()`. This returns a `DatetimeIndex` containing one datetime per hour for 12 hours starting on October 29th 2016 at 5:30pm.",
"_____no_output_____"
]
],
[
[
"dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H')\ndates",
"_____no_output_____"
]
],
[
[
"This `DatetimeIndex` may be used as an index in a `Series`:",
"_____no_output_____"
]
],
[
[
"temp_series = pd.Series(temperatures, dates)\ntemp_series",
"_____no_output_____"
]
],
[
[
"Let's plot this series:",
"_____no_output_____"
]
],
[
[
"temp_series.plot(kind=\"bar\")\n\nplt.grid(True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Resampling\nPandas lets us resample a time series very simply. Just call the `resample()` method and specify a new frequency:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_2H = temp_series.resample(\"2H\")\ntemp_series_freq_2H",
"_____no_output_____"
]
],
[
[
"The resampling operation is actually a deferred operation, which is why we did not get a `Series` object, but a `DatetimeIndexResampler` object instead. To actually perform the resampling operation, we can simply call the `mean()` method: Pandas will compute the mean of every pair of consecutive hours:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_2H = temp_series_freq_2H.mean()",
"_____no_output_____"
]
],
[
[
"Let's plot the result:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_2H.plot(kind=\"bar\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note how the values have automatically been aggregated into 2-hour periods. If we look at the 6-8pm period, for example, we had a value of `5.1` at 6:30pm, and `6.1` at 7:30pm. After resampling, we just have one value of `5.6`, which is the mean of `5.1` and `6.1`. Rather than computing the mean, we could have used any other aggregation function, for example we can decide to keep the minimum value of each period:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_2H = temp_series.resample(\"2H\").min()\ntemp_series_freq_2H",
"_____no_output_____"
]
],
[
[
"Or, equivalently, we could use the `apply()` method instead:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_2H = temp_series.resample(\"2H\").apply(np.min)\ntemp_series_freq_2H",
"_____no_output_____"
]
],
[
[
"## Upsampling and interpolation\nThis was an example of downsampling. We can also upsample (ie. increase the frequency), but this creates holes in our data:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_15min = temp_series.resample(\"15Min\").mean()\ntemp_series_freq_15min.head(n=10) # `head` displays the top n values",
"_____no_output_____"
]
],
[
[
"One solution is to fill the gaps by interpolating. We just call the `interpolate()` method. The default is to use linear interpolation, but we can also select another method, such as cubic interpolation:",
"_____no_output_____"
]
],
[
[
"temp_series_freq_15min = temp_series.resample(\"15Min\").interpolate(method=\"cubic\")\ntemp_series_freq_15min.head(n=10)",
"_____no_output_____"
],
[
"temp_series.plot(label=\"Period: 1 hour\")\ntemp_series_freq_15min.plot(label=\"Period: 15 minutes\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Timezones\nBy default datetimes are *naive*: they are not aware of timezones, so 2016-10-30 02:30 might mean October 30th 2016 at 2:30am in Paris or in New York. We can make datetimes timezone *aware* by calling the `tz_localize()` method:",
"_____no_output_____"
]
],
[
[
"temp_series_ny = temp_series.tz_localize(\"America/New_York\")\ntemp_series_ny",
"_____no_output_____"
]
],
[
[
"Note that `-04:00` is now appended to all the datetimes. This means that these datetimes refer to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) - 4 hours.\n\nWe can convert these datetimes to Paris time like this:",
"_____no_output_____"
]
],
[
[
"temp_series_paris = temp_series_ny.tz_convert(\"Europe/Paris\")\ntemp_series_paris",
"_____no_output_____"
]
],
[
[
"You may have noticed that the UTC offset changes from `+02:00` to `+01:00`: this is because France switches to winter time at 3am that particular night (time goes back to 2am). Notice that 2:30am occurs twice! Let's go back to a naive representation (if you log some data hourly using local time, without storing the timezone, you might get something like this):",
"_____no_output_____"
]
],
[
[
"temp_series_paris_naive = temp_series_paris.tz_localize(None)\ntemp_series_paris_naive",
"_____no_output_____"
]
],
[
[
"Now `02:30` is really ambiguous. If we try to localize these naive datetimes to the Paris timezone, we get an error:",
"_____no_output_____"
]
],
[
[
"try:\n temp_series_paris_naive.tz_localize(\"Europe/Paris\")\nexcept Exception as e:\n print(type(e))\n print(e)",
"<class 'pytz.exceptions.AmbiguousTimeError'>\nCannot infer dst time from Timestamp('2016-10-30 02:30:00'), try using the 'ambiguous' argument\n"
]
],
[
[
"Fortunately using the `ambiguous` argument we can tell pandas to infer the right DST (Daylight Saving Time) based on the order of the ambiguous timestamps:",
"_____no_output_____"
]
],
[
[
"temp_series_paris_naive.tz_localize(\"Europe/Paris\", ambiguous=\"infer\")",
"_____no_output_____"
]
],
[
[
"## Periods\nThe `pd.period_range()` function returns a `PeriodIndex` instead of a `DatetimeIndex`. For example, let's get all quarters in 2016 and 2017:",
"_____no_output_____"
]
],
[
[
"quarters = pd.period_range('2016Q1', periods=8, freq='Q')\nquarters",
"_____no_output_____"
]
],
[
[
"Adding a number `N` to a `PeriodIndex` shifts the periods by `N` times the `PeriodIndex`'s frequency:",
"_____no_output_____"
]
],
[
[
"quarters + 3",
"_____no_output_____"
]
],
[
[
"The `asfreq()` method lets us change the frequency of the `PeriodIndex`. All periods are lengthened or shortened accordingly. For example, let's convert all the quarterly periods to monthly periods (zooming in):",
"_____no_output_____"
]
],
[
[
"quarters.asfreq(\"M\")",
"_____no_output_____"
]
],
[
[
"By default, `asfreq()` zooms on the end of each period. We can tell it to zoom on the start of each period instead:",
"_____no_output_____"
]
],
[
[
"quarters.asfreq(\"M\", how=\"start\")",
"_____no_output_____"
]
],
[
[
"And we can zoom out:",
"_____no_output_____"
]
],
[
[
"quarters.asfreq(\"A\")",
"_____no_output_____"
]
],
[
[
"Of course we can create a `Series` with a `PeriodIndex`:",
"_____no_output_____"
]
],
[
[
"quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters)\nquarterly_revenue",
"_____no_output_____"
],
[
"quarterly_revenue.plot(kind=\"line\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can convert periods to timestamps by calling `to_timestamp`. By default this will give us the first day of each period, but by setting `how` and `freq`, we can get the last hour of each period:",
"_____no_output_____"
]
],
[
[
"last_hours = quarterly_revenue.to_timestamp(how=\"end\", freq=\"H\")\nlast_hours",
"_____no_output_____"
]
],
[
[
"And back to periods by calling `to_period`:",
"_____no_output_____"
]
],
[
[
"last_hours.to_period()",
"_____no_output_____"
]
],
[
[
"Pandas also provides many other time-related functions that we recommend you check out in the [documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). To whet your appetite, here is one way to get the last business day of each month in 2016, at 9am:",
"_____no_output_____"
]
],
[
[
"months_2016 = pd.period_range(\"2016\", periods=12, freq=\"M\")\none_day_after_last_days = months_2016.asfreq(\"D\") + 1\nlast_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay()\nlast_bdays.to_period(\"H\") + 9",
"_____no_output_____"
]
],
[
[
"# `DataFrame` objects\nA DataFrame object represents a spreadsheet, with cell values, column names and row index labels. You can define expressions to compute columns based on other columns, create pivot-tables, group rows, draw graphs, etc. You can see `DataFrame`s as dictionaries of `Series`.\n\n## Creating a `DataFrame`\nYou can create a DataFrame by passing a dictionary of `Series` objects:",
"_____no_output_____"
]
],
[
[
"people_dict = {\n \"weight\": pd.Series([68, 83, 112], index=[\"alice\", \"bob\", \"charles\"]),\n \"birthyear\": pd.Series([1984, 1985, 1992], index=[\"bob\", \"alice\", \"charles\"], name=\"year\"),\n \"children\": pd.Series([0, 3], index=[\"charles\", \"bob\"]),\n \"hobby\": pd.Series([\"Biking\", \"Dancing\"], index=[\"alice\", \"bob\"]),\n}\npeople = pd.DataFrame(people_dict)\npeople",
"_____no_output_____"
]
],
[
[
"A few things to note:\n* the `Series` were automatically aligned based on their index,\n* missing values are represented as `NaN`,\n* `Series` names are ignored (the name `\"year\"` was dropped),\n* `DataFrame`s are displayed nicely in Jupyter notebooks, woohoo!",
"_____no_output_____"
],
[
"You can access columns pretty much as you would expect. They are returned as `Series` objects:",
"_____no_output_____"
]
],
[
[
"people[\"birthyear\"]",
"_____no_output_____"
]
],
[
[
"You can also get multiple columns at once:",
"_____no_output_____"
]
],
[
[
"people[[\"birthyear\", \"hobby\"]]",
"_____no_output_____"
]
],
[
[
"If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. For example:",
"_____no_output_____"
]
],
[
[
"d2 = pd.DataFrame(\n people_dict,\n columns=[\"birthyear\", \"weight\", \"height\"],\n index=[\"bob\", \"alice\", \"eugene\"]\n )\nd2",
"_____no_output_____"
]
],
[
[
"Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, or a list of lists, and specify the column names and row index labels separately:",
"_____no_output_____"
]
],
[
[
"values = [\n [1985, np.nan, \"Biking\", 68],\n [1984, 3, \"Dancing\", 83],\n [1992, 0, np.nan, 112]\n ]\nd3 = pd.DataFrame(\n values,\n columns=[\"birthyear\", \"children\", \"hobby\", \"weight\"],\n index=[\"alice\", \"bob\", \"charles\"]\n )\nd3",
"_____no_output_____"
]
],
[
[
"To specify missing values, you can either use `np.nan` or NumPy's masked arrays:",
"_____no_output_____"
]
],
[
[
"masked_array = np.ma.asarray(values, dtype=object)\nmasked_array[(0, 2), (1, 2)] = np.ma.masked\nd3 = pd.DataFrame(\n    masked_array,\n    columns=[\"birthyear\", \"children\", \"hobby\", \"weight\"],\n    index=[\"alice\", \"bob\", \"charles\"]\n )\nd3",
"_____no_output_____"
]
],
[
[
"Instead of an `ndarray`, you can also pass a `DataFrame` object:",
"_____no_output_____"
]
],
[
[
"d4 = pd.DataFrame(\n d3,\n columns=[\"hobby\", \"children\"],\n index=[\"alice\", \"bob\"]\n )\nd4",
"_____no_output_____"
]
],
[
[
"It is also possible to create a `DataFrame` with a dictionary (or list) of dictionaries (or lists):",
"_____no_output_____"
]
],
[
[
"people = pd.DataFrame({\n \"birthyear\": {\"alice\":1985, \"bob\": 1984, \"charles\": 1992},\n \"hobby\": {\"alice\":\"Biking\", \"bob\": \"Dancing\"},\n \"weight\": {\"alice\":68, \"bob\": 83, \"charles\": 112},\n \"children\": {\"bob\": 3, \"charles\": 0}\n})\npeople",
"_____no_output_____"
]
],
[
[
"## Multi-indexing\nIf all columns are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels. For example:",
"_____no_output_____"
]
],
[
[
"d5 = pd.DataFrame(\n {\n (\"public\", \"birthyear\"):\n {(\"Paris\",\"alice\"):1985, (\"Paris\",\"bob\"): 1984, (\"London\",\"charles\"): 1992},\n (\"public\", \"hobby\"):\n {(\"Paris\",\"alice\"):\"Biking\", (\"Paris\",\"bob\"): \"Dancing\"},\n (\"private\", \"weight\"):\n {(\"Paris\",\"alice\"):68, (\"Paris\",\"bob\"): 83, (\"London\",\"charles\"): 112},\n (\"private\", \"children\"):\n {(\"Paris\", \"alice\"):np.nan, (\"Paris\",\"bob\"): 3, (\"London\",\"charles\"): 0}\n }\n)\nd5",
"_____no_output_____"
]
],
[
[
"You can now get a `DataFrame` containing all the `\"public\"` columns very simply:",
"_____no_output_____"
]
],
[
[
"d5[\"public\"]",
"_____no_output_____"
],
[
"d5[\"public\", \"hobby\"] # Same result as d5[\"public\"][\"hobby\"]",
"_____no_output_____"
]
],
[
[
"## Dropping a level\nLet's look at `d5` again:",
"_____no_output_____"
]
],
[
[
"d5",
"_____no_output_____"
]
],
[
[
"There are two levels of columns, and two levels of indices. We can drop a column level by calling `droplevel()` (the same goes for indices):",
"_____no_output_____"
]
],
[
[
"d5.columns = d5.columns.droplevel(level = 0)\nd5",
"_____no_output_____"
]
],
[
[
"## Transposing\nYou can swap columns and indices using the `T` attribute:",
"_____no_output_____"
]
],
[
[
"d6 = d5.T\nd6",
"_____no_output_____"
]
],
[
[
"## Stacking and unstacking levels\nCalling the `stack()` method will push the lowest column level after the lowest index:",
"_____no_output_____"
]
],
[
[
"d7 = d6.stack()\nd7",
"_____no_output_____"
]
],
[
[
"Note that many `NaN` values appeared. This makes sense because many new combinations did not exist before (eg. there was no `bob` in `London`).\n\nCalling `unstack()` will do the reverse, once again creating many `NaN` values.",
"_____no_output_____"
]
],
[
[
"d8 = d7.unstack()\nd8",
"_____no_output_____"
]
],
[
[
"If we call `unstack` again, we end up with a `Series` object:",
"_____no_output_____"
]
],
[
[
"d9 = d8.unstack()\nd9",
"_____no_output_____"
]
],
[
[
"The `stack()` and `unstack()` methods let you select the `level` to stack/unstack. You can even stack/unstack multiple levels at once:",
"_____no_output_____"
]
],
[
[
"d10 = d9.unstack(level = (0,1))\nd10",
"_____no_output_____"
]
],
[
[
"## Most methods return modified copies\nAs you may have noticed, the `stack()` and `unstack()` methods do not modify the object they apply to. Instead, they work on a copy and return that copy. This is true of most methods in pandas.",
"_____no_output_____"
],
[
"## Accessing rows\nLet's go back to the `people` `DataFrame`:",
"_____no_output_____"
]
],
[
[
"people",
"_____no_output_____"
]
],
[
[
"The `loc` attribute lets you access rows instead of columns. The result is a `Series` object in which the `DataFrame`'s column names are mapped to row index labels:",
"_____no_output_____"
]
],
[
[
"people.loc[\"charles\"]",
"_____no_output_____"
]
],
[
[
"You can also access rows by integer location using the `iloc` attribute:",
"_____no_output_____"
]
],
[
[
"people.iloc[2]",
"_____no_output_____"
]
],
[
[
"You can also get a slice of rows, and this returns a `DataFrame` object:",
"_____no_output_____"
]
],
[
[
"people.iloc[1:3]",
"_____no_output_____"
]
],
[
[
"Finally, you can pass a boolean array to get the matching rows:",
"_____no_output_____"
]
],
[
[
"people[np.array([True, False, True])]",
"_____no_output_____"
]
],
[
[
"This is most useful when combined with boolean expressions:",
"_____no_output_____"
]
],
[
[
"people[people[\"birthyear\"] < 1990]",
"_____no_output_____"
]
],
[
[
"## Adding and removing columns\nYou can generally treat `DataFrame` objects like dictionaries of `Series`, so the following work fine:",
"_____no_output_____"
]
],
[
[
"people",
"_____no_output_____"
],
[
"people[\"age\"] = 2018 - people[\"birthyear\"] # adds a new column \"age\"\npeople[\"over 30\"] = people[\"age\"] > 30 # adds another column \"over 30\"\nbirthyears = people.pop(\"birthyear\")\ndel people[\"children\"]\n\npeople",
"_____no_output_____"
],
[
"birthyears",
"_____no_output_____"
]
],
[
[
"When you add a new column as a `Series`, it is aligned on the `DataFrame`'s index: missing rows are filled with NaN, and extra rows are ignored:",
"_____no_output_____"
]
],
[
[
"people[\"pets\"] = pd.Series({\"bob\": 0, \"charles\": 5, \"eugene\":1}) # alice is missing, eugene is ignored\npeople",
"_____no_output_____"
]
],
[
[
"When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert()` method:",
"_____no_output_____"
]
],
[
[
"people.insert(1, \"height\", [172, 181, 185])\npeople",
"_____no_output_____"
]
],
[
[
"## Assigning new columns\nYou can also create new columns by calling the `assign()` method. Note that this returns a new `DataFrame` object, the original is not modified:",
"_____no_output_____"
]
],
[
[
"people.assign(\n body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n has_pets = people[\"pets\"] > 0\n)",
"_____no_output_____"
]
],
[
[
"Note that you cannot access columns created within the same assignment:",
"_____no_output_____"
]
],
[
[
"try:\n people.assign(\n body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n overweight = people[\"body_mass_index\"] > 25\n )\nexcept KeyError as e:\n print(\"Key error:\", e)",
"Key error: 'body_mass_index'\n"
]
],
[
[
"The solution is to split this assignment in two consecutive assignments:",
"_____no_output_____"
]
],
[
[
"d6 = people.assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\nd6.assign(overweight = d6[\"body_mass_index\"] > 25)",
"_____no_output_____"
]
],
[
[
"Having to create a temporary variable `d6` is not very convenient. You may want to just chain the assignment calls, but it does not work because the `people` object is not actually modified by the first assignment:",
"_____no_output_____"
]
],
[
[
"try:\n (people\n .assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\n .assign(overweight = people[\"body_mass_index\"] > 25)\n )\nexcept KeyError as e:\n print(\"Key error:\", e)",
"Key error: 'body_mass_index'\n"
]
],
[
[
"But fear not, there is a simple solution. You can pass a function to the `assign()` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter:",
"_____no_output_____"
]
],
[
[
"(people\n .assign(body_mass_index = lambda df: df[\"weight\"] / (df[\"height\"] / 100) ** 2)\n .assign(overweight = lambda df: df[\"body_mass_index\"] > 25)\n)",
"_____no_output_____"
]
],
[
[
"Problem solved!",
"_____no_output_____"
],
[
"## Evaluating an expression\nA great feature supported by pandas is expression evaluation. This relies on the `numexpr` library which must be installed.",
"_____no_output_____"
]
],
[
[
"people.eval(\"weight / (height/100) ** 2 > 25\")",
"_____no_output_____"
]
],
[
[
"Assignment expressions are also supported. Let's set `inplace=True` to directly modify the `DataFrame` rather than getting a modified copy:",
"_____no_output_____"
]
],
[
[
"people.eval(\"body_mass_index = weight / (height/100) ** 2\", inplace=True)\npeople",
"_____no_output_____"
]
],
[
[
"You can use a local or global variable in an expression by prefixing it with `'@'`:",
"_____no_output_____"
]
],
[
[
"overweight_threshold = 30\npeople.eval(\"overweight = body_mass_index > @overweight_threshold\", inplace=True)\npeople",
"_____no_output_____"
]
],
[
[
"## Querying a `DataFrame`\nThe `query()` method lets you filter a `DataFrame` based on a query expression:",
"_____no_output_____"
]
],
[
[
"people.query(\"age > 30 and pets == 0\")",
"_____no_output_____"
]
],
[
[
"## Sorting a `DataFrame`\nYou can sort a `DataFrame` by calling its `sort_index` method. By default it sorts the rows by their index label, in ascending order, but let's reverse the order:",
"_____no_output_____"
]
],
[
[
"people.sort_index(ascending=False)",
"_____no_output_____"
]
],
[
[
"Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`:",
"_____no_output_____"
]
],
[
[
"people.sort_index(axis=1, inplace=True)\npeople",
"_____no_output_____"
]
],
[
[
"To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by:",
"_____no_output_____"
]
],
[
[
"people.sort_values(by=\"age\", inplace=True)\npeople",
"_____no_output_____"
]
],
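`sort_values` also accepts a list of columns, with per-column sort directions. A minimal sketch (the small `df` below is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"a": [2, 1, 2], "b": [1, 3, 0]})
# sort by "a" ascending, then break ties on "b" descending
print(df.sort_values(by=["a", "b"], ascending=[True, False]))
```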
[
[
"## Plotting a `DataFrame`\nJust like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`.\n\nFor example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method:",
"_____no_output_____"
]
],
[
[
"people.plot(kind = \"line\", x = \"body_mass_index\", y = [\"height\", \"weight\"])\nplt.show()",
"_____no_output_____"
]
],
[
[
"You can pass extra arguments supported by matplotlib's functions. For example, we can create a scatter plot and pass it a list of sizes using the `s` argument of matplotlib's `scatter()` function:",
"_____no_output_____"
]
],
[
[
"people.plot(kind = \"scatter\", x = \"height\", y = \"weight\", s=[40, 120, 200])\nplt.show()",
"_____no_output_____"
]
],
[
[
"Again, there are way too many options to list here: the best option is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code.",
"_____no_output_____"
],
[
"## Operations on `DataFrame`s\nAlthough `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this:",
"_____no_output_____"
]
],
[
[
"grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]])\ngrades = pd.DataFrame(grades_array, columns=[\"sep\", \"oct\", \"nov\"], index=[\"alice\",\"bob\",\"charles\",\"darwin\"])\ngrades",
"_____no_output_____"
]
],
[
[
"You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values:",
"_____no_output_____"
]
],
[
[
"np.sqrt(grades)",
"_____no_output_____"
]
],
[
[
"Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`. This is called *broadcasting*:",
"_____no_output_____"
]
],
[
[
"grades + 1",
"_____no_output_____"
]
],
[
[
"Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...) operations:",
"_____no_output_____"
]
],
[
[
"grades >= 5",
"_____no_output_____"
]
],
[
[
"Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object:",
"_____no_output_____"
]
],
[
[
"grades.mean()",
"_____no_output_____"
]
],
[
[
"The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`:",
"_____no_output_____"
]
],
[
[
"(grades > 5).all()",
"_____no_output_____"
]
],
[
[
"Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row). For example, let's find out which students had all grades greater than `5`:",
"_____no_output_____"
]
],
[
[
"(grades > 5).all(axis = 1)",
"_____no_output_____"
]
],
[
[
"The `any` method returns `True` if any value is True. Let's see who got at least one grade 10:",
"_____no_output_____"
]
],
[
[
"(grades == 10).any(axis = 1)",
"_____no_output_____"
]
],
[
[
"If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`'s rows. For example, let's subtract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`:",
"_____no_output_____"
]
],
[
[
"grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50]",
"_____no_output_____"
]
],
[
[
"We subtracted `7.75` from all September grades, `8.75` from October grades and `7.50` from November grades. It is equivalent to subtracting this `DataFrame`:",
"_____no_output_____"
]
],
[
[
"pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns)",
"_____no_output_____"
]
],
[
[
"If you want to subtract the global mean from every grade, here is one way to do it:",
"_____no_output_____"
]
],
[
[
"grades - grades.values.mean() # subtracts the global mean (8.00) from all grades",
"_____no_output_____"
]
],
[
[
"## Automatic alignment\nSimilar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December:",
"_____no_output_____"
]
],
[
[
"bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]])\nbonus_points = pd.DataFrame(bonus_array, columns=[\"oct\", \"nov\", \"dec\"], index=[\"bob\",\"colin\", \"darwin\", \"charles\"])\nbonus_points",
"_____no_output_____"
],
[
"grades + bonus_points",
"_____no_output_____"
]
],
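Besides fixing things up after the fact, the arithmetic methods (`add`, `sub`, `mul`, ...) accept a `fill_value` that substitutes for missing entries on one side before computing; positions missing from both sides stay `NaN`. A minimal sketch with made-up frames:

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({"x": [1.0, np.nan]}, index=["r1", "r2"])
b = pd.DataFrame({"x": [10.0, 20.0], "y": [5.0, 6.0]}, index=["r2", "r3"])

# missing entries on either side are treated as 0 before adding;
# cells missing from both frames (here "y" at "r1") stay NaN
result = a.add(b, fill_value=0)
print(result)
```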
[
[
"Looks like the addition worked in some cases but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`). Then adding `NaN` to a number results in `NaN`, hence the result.\n\n## Handling missing data\nDealing with missing data is a frequent task when working with real-life data. Pandas offers a few tools to handle missing data.\n \nLet's try to fix the problem above. For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna()` method:",
"_____no_output_____"
]
],
[
[
"(grades + bonus_points).fillna(0)",
"_____no_output_____"
]
],
[
[
"It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros:",
"_____no_output_____"
]
],
[
[
"fixed_bonus_points = bonus_points.fillna(0)\nfixed_bonus_points.insert(0, \"sep\", 0)\nfixed_bonus_points.loc[\"alice\"] = 0\ngrades + fixed_bonus_points",
"_____no_output_____"
]
],
[
[
"That's much better: although we made up some data, we have not been too unfair.\n\nAnother way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again:",
"_____no_output_____"
]
],
[
[
"bonus_points",
"_____no_output_____"
]
],
[
[
"Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`).",
"_____no_output_____"
]
],
[
[
"bonus_points.interpolate(axis=1)",
"_____no_output_____"
]
],
[
[
"Bob had 0 bonus points in October, and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate, this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation.",
"_____no_output_____"
]
],
[
[
"better_bonus_points = bonus_points.copy()\nbetter_bonus_points.insert(0, \"sep\", 0)\nbetter_bonus_points.loc[\"alice\"] = 0\nbetter_bonus_points = better_bonus_points.interpolate(axis=1)\nbetter_bonus_points",
"_____no_output_____"
]
],
[
[
"Great, now we have reasonable bonus points everywhere. Let's find out the final grades:",
"_____no_output_____"
]
],
[
[
"grades + better_bonus_points",
"_____no_output_____"
]
],
[
[
"It is slightly annoying that the September column ends up on the right. This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `\"dec\"` column), so to make things predictable, pandas orders the final columns alphabetically. To fix this, we can simply add the missing column before adding:",
"_____no_output_____"
]
],
[
[
"grades[\"dec\"] = np.nan\nfinal_grades = grades + better_bonus_points\nfinal_grades",
"_____no_output_____"
]
],
[
[
"There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well I guess some teachers probably do). So let's call the `dropna()` method to get rid of rows that are full of `NaN`s:",
"_____no_output_____"
]
],
[
[
"final_grades_clean = final_grades.dropna(how=\"all\")\nfinal_grades_clean",
"_____no_output_____"
]
],
[
[
"Now let's remove columns that are full of `NaN`s by setting the `axis` argument to `1`:",
"_____no_output_____"
]
],
[
[
"final_grades_clean = final_grades_clean.dropna(axis=1, how=\"all\")\nfinal_grades_clean",
"_____no_output_____"
]
],
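`dropna()` is not limited to all-or-nothing: a `thresh` argument keeps only rows (or columns) with at least that many non-`NaN` values. A minimal sketch with a made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, np.nan], "b": [1.0, 2.0, np.nan]})
# keep rows that have at least one non-NaN value: the all-NaN last row is dropped
print(df.dropna(thresh=1))
```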
[
[
"## Aggregating with `groupby`\nSimilar to the SQL language, pandas allows grouping your data into groups to run calculations over each group.\n\nFirst, let's add some extra data about each person so we can group them, and let's go back to the `final_grades` `DataFrame` so we can see how `NaN` values are handled:",
"_____no_output_____"
]
],
[
[
"final_grades[\"hobby\"] = [\"Biking\", \"Dancing\", np.nan, \"Dancing\", \"Biking\"]\nfinal_grades",
"_____no_output_____"
]
],
[
[
"Now let's group data in this `DataFrame` by hobby:",
"_____no_output_____"
]
],
[
[
"grouped_grades = final_grades.groupby(\"hobby\")\ngrouped_grades",
"_____no_output_____"
]
],
[
[
"We are ready to compute the average grade per hobby:",
"_____no_output_____"
]
],
[
[
"grouped_grades.mean()",
"_____no_output_____"
]
],
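The `NaN`-skipping behavior of aggregations like `mean` is controlled by a `skipna` argument (default `True`). A minimal sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.mean())              # NaN is skipped by default
print(s.mean(skipna=False))  # NaN propagates when skipna=False
```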
[
[
"That was easy! Note that the `NaN` values have simply been skipped when computing the means.",
"_____no_output_____"
],
[
"## Pivot tables\nPandas supports spreadsheet-like [pivot tables](https://en.wikipedia.org/wiki/Pivot_table) that allow quick data summarization. To illustrate this, let's create a simple `DataFrame`:",
"_____no_output_____"
]
],
[
[
"bonus_points",
"_____no_output_____"
],
[
"more_grades = final_grades_clean.stack().reset_index()\nmore_grades.columns = [\"name\", \"month\", \"grade\"]\nmore_grades[\"bonus\"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0]\nmore_grades",
"_____no_output_____"
]
],
[
[
"Now we can call the `pd.pivot_table()` function for this `DataFrame`, asking to group by the `name` column. By default, `pivot_table()` computes the mean of each numeric column:",
"_____no_output_____"
]
],
[
[
"pd.pivot_table(more_grades, index=\"name\")",
"_____no_output_____"
]
],
[
[
"We can change the aggregation function by setting the `aggfunc` argument, and we can also specify the list of columns whose values will be aggregated:",
"_____no_output_____"
]
],
[
[
"pd.pivot_table(more_grades, index=\"name\", values=[\"grade\",\"bonus\"], aggfunc=np.max)",
"_____no_output_____"
]
],
[
[
"We can also specify the `columns` to aggregate over horizontally, and request the grand totals for each row and column by setting `margins=True`:",
"_____no_output_____"
]
],
[
[
"pd.pivot_table(more_grades, index=\"name\", values=\"grade\", columns=\"month\", margins=True)",
"_____no_output_____"
]
],
[
[
"Finally, we can specify multiple index or column names, and pandas will create multi-level indices:",
"_____no_output_____"
]
],
[
[
"pd.pivot_table(more_grades, index=(\"name\", \"month\"), margins=True)",
"_____no_output_____"
]
],
[
[
"## Overview functions\nWhen dealing with large `DataFrame`s, it is useful to get a quick overview of their content. Pandas offers a few functions for this. First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. Notice how Jupyter displays only the corners of the `DataFrame`:",
"_____no_output_____"
]
],
[
[
"much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26))\nlarge_df = pd.DataFrame(much_data, columns=list(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"))\nlarge_df[large_df % 16 == 0] = np.nan\nlarge_df.insert(3,\"some_text\", \"Blabla\")\nlarge_df",
"_____no_output_____"
]
],
[
[
"The `head()` method returns the top 5 rows:",
"_____no_output_____"
]
],
[
[
"large_df.head()",
"_____no_output_____"
]
],
[
[
"Of course there's also a `tail()` function to view the bottom 5 rows. You can pass the number of rows you want:",
"_____no_output_____"
]
],
[
[
"large_df.tail(n=2)",
"_____no_output_____"
]
],
[
[
"The `info()` method prints out a summary of each column's contents:",
"_____no_output_____"
]
],
[
[
"large_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 10000 entries, 0 to 9999\nData columns (total 27 columns):\nA 8823 non-null float64\nB 8824 non-null float64\nC 8824 non-null float64\nsome_text 10000 non-null object\nD 8824 non-null float64\nE 8822 non-null float64\nF 8824 non-null float64\nG 8824 non-null float64\nH 8822 non-null float64\nI 8823 non-null float64\nJ 8823 non-null float64\nK 8822 non-null float64\nL 8824 non-null float64\nM 8824 non-null float64\nN 8822 non-null float64\nO 8824 non-null float64\nP 8824 non-null float64\nQ 8824 non-null float64\nR 8823 non-null float64\nS 8824 non-null float64\nT 8824 non-null float64\nU 8824 non-null float64\nV 8822 non-null float64\nW 8824 non-null float64\nX 8824 non-null float64\nY 8822 non-null float64\nZ 8823 non-null float64\ndtypes: float64(26), object(1)\nmemory usage: 2.1+ MB\n"
]
],
[
[
"Finally, the `describe()` method gives a nice overview of the main aggregated values over each column:\n* `count`: number of non-null (not NaN) values\n* `mean`: mean of non-null values\n* `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values\n* `min`: minimum of non-null values\n* `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values\n* `max`: maximum of non-null values",
"_____no_output_____"
]
],
[
[
"large_df.describe()",
"_____no_output_____"
]
],
[
[
"# Saving & loading\nPandas can save `DataFrame`s to various backends, including file formats such as CSV, Excel, JSON, HTML and HDF5, or to a SQL database. Let's create a `DataFrame` to demonstrate this:",
"_____no_output_____"
]
],
[
[
"my_df = pd.DataFrame(\n [[\"Biking\", 68.5, 1985, np.nan], [\"Dancing\", 83.1, 1984, 3]], \n columns=[\"hobby\",\"weight\",\"birthyear\",\"children\"],\n index=[\"alice\", \"bob\"]\n)\nmy_df",
"_____no_output_____"
]
],
[
[
"## Saving\nLet's save it to CSV, HTML and JSON:",
"_____no_output_____"
]
],
[
[
"my_df.to_csv(\"my_df.csv\")\nmy_df.to_html(\"my_df.html\")\nmy_df.to_json(\"my_df.json\")",
"_____no_output_____"
]
],
[
[
"Done! Let's take a peek at what was saved:",
"_____no_output_____"
]
],
[
[
"for filename in (\"my_df.csv\", \"my_df.html\", \"my_df.json\"):\n print(\"#\", filename)\n with open(filename, \"rt\") as f:\n print(f.read())\n print()\n",
"# my_df.csv\n,hobby,weight,birthyear,children\nalice,Biking,68.5,1985,\nbob,Dancing,83.1,1984,3.0\n\n\n# my_df.html\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>hobby</th>\n <th>weight</th>\n <th>birthyear</th>\n <th>children</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>alice</th>\n <td>Biking</td>\n <td>68.5</td>\n <td>1985</td>\n <td>NaN</td>\n </tr>\n <tr>\n <th>bob</th>\n <td>Dancing</td>\n <td>83.1</td>\n <td>1984</td>\n <td>3.0</td>\n </tr>\n </tbody>\n</table>\n\n# my_df.json\n{\"hobby\":{\"alice\":\"Biking\",\"bob\":\"Dancing\"},\"weight\":{\"alice\":68.5,\"bob\":83.1},\"birthyear\":{\"alice\":1985,\"bob\":1984},\"children\":{\"alice\":null,\"bob\":3.0}}\n\n"
]
],
[
[
"Note that the index is saved as the first column (with no name) in a CSV file, as `<th>` tags in HTML and as keys in JSON.\n\nSaving to other formats works very similarly, but some formats require extra libraries to be installed. For example, saving to Excel requires the openpyxl library:",
"_____no_output_____"
]
],
[
[
"try:\n my_df.to_excel(\"my_df.xlsx\", sheet_name='People')\nexcept ImportError as e:\n print(e)",
"No module named 'openpyxl'\n"
]
],
[
[
"## Loading\nNow let's load our CSV file back into a `DataFrame`:",
"_____no_output_____"
]
],
[
[
"my_df_loaded = pd.read_csv(\"my_df.csv\", index_col=0)\nmy_df_loaded",
"_____no_output_____"
]
],
[
[
"As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well. We can also read data straight from the Internet. For example, let's load all U.S. cities from [simplemaps.com](http://simplemaps.com/):",
"_____no_output_____"
]
],
[
[
"us_cities = None\ntry:\n csv_url = \"http://simplemaps.com/files/cities.csv\"\n us_cities = pd.read_csv(csv_url, index_col=0)\n us_cities = us_cities.head()\nexcept IOError as e:\n print(e)\nus_cities",
"<urlopen error [Errno 8] nodename nor servname provided, or not known>\n"
]
],
[
[
"There are more options available, in particular regarding datetime format. Check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) for more details.",
"_____no_output_____"
],
[
"# Combining `DataFrame`s\n\n## SQL-like joins\nOne powerful feature of pandas is its ability to perform SQL-like joins on `DataFrame`s. Various types of joins are supported: inner joins, left/right outer joins and full joins. To illustrate this, let's start by creating a couple of simple `DataFrame`s:",
"_____no_output_____"
]
],
[
[
"city_loc = pd.DataFrame(\n [\n [\"CA\", \"San Francisco\", 37.781334, -122.416728],\n [\"NY\", \"New York\", 40.705649, -74.008344],\n [\"FL\", \"Miami\", 25.791100, -80.320733],\n [\"OH\", \"Cleveland\", 41.473508, -81.739791],\n [\"UT\", \"Salt Lake City\", 40.755851, -111.896657]\n ], columns=[\"state\", \"city\", \"lat\", \"lng\"])\ncity_loc",
"_____no_output_____"
],
[
"city_pop = pd.DataFrame(\n [\n [808976, \"San Francisco\", \"California\"],\n [8363710, \"New York\", \"New-York\"],\n [413201, \"Miami\", \"Florida\"],\n [2242193, \"Houston\", \"Texas\"]\n ], index=[3,4,5,6], columns=[\"population\", \"city\", \"state\"])\ncity_pop",
"_____no_output_____"
]
],
[
[
"Now let's join these `DataFrame`s using the `merge()` function:",
"_____no_output_____"
]
],
[
[
"pd.merge(left=city_loc, right=city_pop, on=\"city\")",
"_____no_output_____"
]
],
[
[
"Note that both `DataFrame`s have a column named `state`, so in the result they got renamed to `state_x` and `state_y`.\n\nAlso, note that Cleveland, Salt Lake City and Houston were dropped because they don't exist in *both* `DataFrame`s. This is the equivalent of a SQL `INNER JOIN`. If you want a `FULL OUTER JOIN`, where no city gets dropped and `NaN` values are added, you must specify `how=\"outer\"`:",
"_____no_output_____"
]
],
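The `state_x`/`state_y` names come from `merge()`'s `suffixes` parameter (its default is `("_x", "_y")`). A minimal sketch of overriding it, using two throwaway frames rather than the notebook's `city_loc`/`city_pop`:

```python
import pandas as pd

# Two frames sharing both the join key ("city") and a clashing column ("state")
left = pd.DataFrame({"city": ["Miami", "New York"], "state": ["FL", "NY"]})
right = pd.DataFrame({"city": ["Miami", "New York"], "state": ["Florida", "New-York"]})

# Custom suffixes replace the default "_x"/"_y" on the clashing column
merged = pd.merge(left, right, on="city", suffixes=("_loc", "_pop"))
print(merged.columns.tolist())  # ['city', 'state_loc', 'state_pop']
```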
[
[
"all_cities = pd.merge(left=city_loc, right=city_pop, on=\"city\", how=\"outer\")\nall_cities",
"_____no_output_____"
]
],
[
[
"Of course `LEFT OUTER JOIN` is also available by setting `how=\"left\"`: only the cities present in the left `DataFrame` end up in the result. Similarly, with `how=\"right\"` only cities in the right `DataFrame` appear in the result. For example:",
"_____no_output_____"
]
],
[
[
"pd.merge(left=city_loc, right=city_pop, on=\"city\", how=\"right\")",
"_____no_output_____"
]
],
[
[
"If the key to join on is actually in one (or both) `DataFrame`'s index, you must use `left_index=True` and/or `right_index=True`. If the key column names differ, you must use `left_on` and `right_on`. For example:",
"_____no_output_____"
]
],
[
[
"city_pop2 = city_pop.copy()\ncity_pop2.columns = [\"population\", \"name\", \"state\"]\npd.merge(left=city_loc, right=city_pop2, left_on=\"city\", right_on=\"name\")",
"_____no_output_____"
]
],
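When the key lives in the index rather than in a column, `left_index=True`/`right_index=True` play the same role as `left_on`/`right_on`. A minimal sketch on toy frames (illustrative values only):

```python
import pandas as pd

loc = pd.DataFrame({"lat": [25.7911, 40.705649]},
                   index=["Miami", "New York"])
pop = pd.DataFrame({"population": [413201, 8363710]},
                   index=["Miami", "New York"])

# Join on both indexes; no key column is needed at all
joined = pd.merge(left=loc, right=pop, left_index=True, right_index=True)
print(joined)
```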
[
[
"## Concatenation\nRather than joining `DataFrame`s, we may just want to concatenate them. That's what `concat()` is for:",
"_____no_output_____"
]
],
[
[
"result_concat = pd.concat([city_loc, city_pop])\nresult_concat",
"_____no_output_____"
]
],
[
[
"Note that this operation aligned the data horizontally (by columns) but not vertically (by rows). In this example, we end up with multiple rows having the same index (eg. 3). Pandas handles this rather gracefully:",
"_____no_output_____"
]
],
[
[
"result_concat.loc[3]",
"_____no_output_____"
]
],
[
[
"Or you can tell pandas to just ignore the index:",
"_____no_output_____"
]
],
[
[
"pd.concat([city_loc, city_pop], ignore_index=True)",
"_____no_output_____"
]
],
[
[
"Notice that when a column does not exist in a `DataFrame`, it acts as if it was filled with `NaN` values. If we set `join=\"inner\"`, then only columns that exist in *both* `DataFrame`s are returned:",
"_____no_output_____"
]
],
[
[
"pd.concat([city_loc, city_pop], join=\"inner\")",
"_____no_output_____"
]
],
[
[
"You can concatenate `DataFrame`s horizontally instead of vertically by setting `axis=1`:",
"_____no_output_____"
]
],
[
[
"pd.concat([city_loc, city_pop], axis=1)",
"_____no_output_____"
]
],
[
[
"In this case it really does not make much sense because the indices do not align well (eg. Cleveland and San Francisco end up on the same row, because they shared the index label `3`). So let's reindex the `DataFrame`s by city name before concatenating:",
"_____no_output_____"
]
],
[
[
"pd.concat([city_loc.set_index(\"city\"), city_pop.set_index(\"city\")], axis=1)",
"_____no_output_____"
]
],
[
[
"This looks a lot like a `FULL OUTER JOIN`, except that the `state` columns were not renamed to `state_x` and `state_y`, and the `city` column is now the index.",
"_____no_output_____"
],
[
"The `append()` method is a useful shorthand for concatenating `DataFrame`s vertically:",
"_____no_output_____"
]
],
[
[
"city_loc.append(city_pop)",
"_____no_output_____"
]
],
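Note that `append()` was deprecated in pandas 1.4 and removed in pandas 2.0, so on recent versions the same result is obtained with `concat()`. A sketch with throwaway frames:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3]}, index=[5])

# Equivalent of the removed a.append(b): stack rows, keep the original indexes
stacked = pd.concat([a, b])
print(stacked)
```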
[
[
"As always in pandas, the `append()` method does *not* actually modify `city_loc`: it works on a copy and returns the modified copy.",
"_____no_output_____"
],
[
"# Categories\nIt is quite frequent to have values that represent categories, for example `1` for female and `2` for male, or `\"A\"` for Good, `\"B\"` for Average, `\"C\"` for Bad. These categorical values can be hard to read and cumbersome to handle, but fortunately pandas makes it easy. To illustrate this, let's take the `city_pop` `DataFrame` we created earlier, and add a column that represents a category:",
"_____no_output_____"
]
],
[
[
"city_eco = city_pop.copy()\ncity_eco[\"eco_code\"] = [17, 17, 34, 20]\ncity_eco",
"_____no_output_____"
]
],
[
[
"Right now the `eco_code` column is full of apparently meaningless codes. Let's fix that. First, we will create a new categorical column based on the `eco_code`s:",
"_____no_output_____"
]
],
[
[
"city_eco[\"economy\"] = city_eco[\"eco_code\"].astype('category')\ncity_eco[\"economy\"].cat.categories",
"_____no_output_____"
]
],
[
[
"Now we can give each category a meaningful name:",
"_____no_output_____"
]
],
[
[
"city_eco[\"economy\"].cat.categories = [\"Finance\", \"Energy\", \"Tourism\"]\ncity_eco",
"_____no_output_____"
]
],
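Assigning directly to `.cat.categories`, as above, was later deprecated; on recent pandas the supported spelling is `rename_categories()`, which maps the sorted categories to new labels positionally. A sketch on a throwaway `Series` (not the notebook's `city_eco`):

```python
import pandas as pd

codes = pd.Series([17, 17, 34, 20]).astype("category")
# Sorted categories are [17, 20, 34]; labels are assigned in that order
economy = codes.cat.rename_categories(["Finance", "Energy", "Tourism"])
print(economy.tolist())  # ['Finance', 'Finance', 'Tourism', 'Energy']
```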
[
[
"Note that categorical values are sorted according to their categorical order, *not* their alphabetical order:",
"_____no_output_____"
]
],
[
[
"city_eco.sort_values(by=\"economy\", ascending=False)",
"_____no_output_____"
]
],
[
[
"# What next?\nAs you probably noticed by now, pandas is quite a large library with *many* features. Although we went through the most important features, there is still a lot to discover. Probably the best way to learn more is to get your hands dirty with some real-life data. It is also a good idea to go through pandas' excellent [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html), in particular the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html).",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
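A quick sanity check of the min-max scaling performed by `normalize()` on a toy array (illustrative values only):

```python
import numpy as np

def normalize(image):
    # Same min-max scaling as in the notebook: maps values into [0, 1]
    return (image - image.min()) / (image.max() - image.min())

img = np.array([[0.0, 127.5], [255.0, 63.75]], dtype=np.float32)
out = normalize(img)
print(out.min(), out.max())  # 0.0 1.0
```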
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5a1db133d6d2f0eeccf181a4b850a914d8345b
| 35,264 |
ipynb
|
Jupyter Notebook
|
alvarezclaudia/Taller 1.ipynb
|
spulido99/NetworksAnalysis
|
3f6f585305f5825e25488bae8c6b427bc18436c6
|
[
"MIT"
] | null | null | null |
alvarezclaudia/Taller 1.ipynb
|
spulido99/NetworksAnalysis
|
3f6f585305f5825e25488bae8c6b427bc18436c6
|
[
"MIT"
] | null | null | null |
alvarezclaudia/Taller 1.ipynb
|
spulido99/NetworksAnalysis
|
3f6f585305f5825e25488bae8c6b427bc18436c6
|
[
"MIT"
] | null | null | null | 248.338028 | 31,950 | 0.910702 |
[
[
[
"# Importando módulos necesarios \nimport matplotlib.pyplot as plt\nimport numpy as np \nfrom scipy import stats \nimport seaborn as sns",
"_____no_output_____"
],
[
"# Ejercicio 1\n# Graficando Beta\na, b = 0.5, 0.5 # parametros de forma\nbeta = stats.beta(a,b)\nx = np.linspace(beta.ppf(0.01),\n beta.ppf(0.99), 100)\nfp = beta.pdf(x) # Función de Probabilidad\nplt.plot(x, fp)\n\na, b = 5, 1 # parametros de forma\nbeta = stats.beta(a,b)\nx = np.linspace(beta.ppf(0.01),\n beta.ppf(0.99), 100)\nfp = beta.pdf(x) # Función de Probabilidad\nplt.plot(x, fp)\n\na, b = 1, 3 # parametros de forma\nbeta = stats.beta(a,b)\nx = np.linspace(beta.ppf(0.01),\n beta.ppf(0.99), 100)\nfp = beta.pdf(x) # Función de Probabilidad\nplt.plot(x, fp)\n\na, b = 2, 2 # parametros de forma\nbeta = stats.beta(a,b)\nx = np.linspace(beta.ppf(0.01),\n beta.ppf(0.99), 100)\nfp = beta.pdf(x) # Función de Probabilidad\nplt.plot(x, fp)\n\na, b = 2, 5 # parametros de forma\nbeta = stats.beta(a,b)\nx = np.linspace(beta.ppf(0.01),\n beta.ppf(0.99), 100)\nfp = beta.pdf(x) # Función de Probabilidad\nplt.plot(x, fp)\n\n\nplt.title('Distribución Beta')\nplt.ylabel('probabilidad')\nplt.xlabel('valores')\nplt.show()",
"_____no_output_____"
],
[
"# Ejercicio 2\naleatorios = beta.rvs(1000) # genera aleatorios\nprint(np.mean(aleatorios))\nprint(np.median(aleatorios))\nprint(stats.mode(aleatorios))\nprint(np.std(aleatorios))\nprint(stats.skew(aleatorios))",
"0.517524681241\n0.550301911315\nModeResult(mode=array([ 1.]), count=array([17]))\n0.411891138469\n-0.06540722343219814\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4a5a229582ff8dfa6dcea81fb305fb1fdd944d84
| 607 |
ipynb
|
Jupyter Notebook
|
testing/dummy-v4-py2.ipynb
|
vlad17/IPython-2to3
|
6a27e355ab572b9d22229bbc866a83bbe1080043
|
[
"MIT"
] | 3 |
2018-03-01T06:54:30.000Z
|
2018-11-06T01:14:35.000Z
|
testing/dummy-v4-py2.ipynb
|
vlad17/IPython-2to3
|
6a27e355ab572b9d22229bbc866a83bbe1080043
|
[
"MIT"
] | 1 |
2016-12-22T07:53:41.000Z
|
2016-12-30T16:29:40.000Z
|
testing/dummy-v4-py2.ipynb
|
vlad17/IPython-2to3
|
6a27e355ab572b9d22229bbc866a83bbe1080043
|
[
"MIT"
] | 4 |
2016-12-22T06:36:51.000Z
|
2019-06-10T21:57:35.000Z
| 16.861111 | 34 | 0.518946 |
[
[
[
"print \"test\"",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a5a2a5245040e08d96cbad646a93d79b5c1c05f
| 838,546 |
ipynb
|
Jupyter Notebook
|
src/Best_Model.ipynb
|
VVRud/roman_clf
|
e38e2db74ea5be3abc159f29fc94ce196331d369
|
[
"MIT"
] | null | null | null |
src/Best_Model.ipynb
|
VVRud/roman_clf
|
e38e2db74ea5be3abc159f29fc94ce196331d369
|
[
"MIT"
] | null | null | null |
src/Best_Model.ipynb
|
VVRud/roman_clf
|
e38e2db74ea5be3abc159f29fc94ce196331d369
|
[
"MIT"
] | null | null | null | 871.669439 | 412,356 | 0.947677 |
[
[
[
"import os\nimport sys\nfrom PIL import Image\nimport numpy as np\nimport shutil\n\nsys.path.extend(['..'])\n\nfrom utils.config import process_config\n\nimport tensorflow as tf\nfrom tensorflow.layers import (conv2d, max_pooling2d, average_pooling2d, batch_normalization, dropout, dense)\nfrom tensorflow.nn import (relu, softmax, leaky_relu)\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.utils import shuffle",
"_____no_output_____"
],
[
"# Paths to use later\nDATA = '../data_splitted/'\nCONF = '../configs/roman.json'",
"_____no_output_____"
]
],
[
[
"# Configs creating",
"_____no_output_____"
]
],
[
[
"config_tf = tf.ConfigProto(allow_soft_placement=True)\nconfig_tf.gpu_options.allow_growth = True\nconfig_tf.gpu_options.per_process_gpu_memory_fraction = 0.95",
"_____no_output_____"
],
[
"config = process_config(CONF)",
"_____no_output_____"
]
],
[
[
"# Necessary functions",
"_____no_output_____"
]
],
[
[
"def normalize(image):\n return (image - image.min()) / (image.max() - image.min())\n\ndef shuffle_sim(a, b):\n assert a.shape[0] == a.shape[0], 'Shapes must be equal'\n \n ind = np.arange(a.shape[0])\n np.random.shuffle(ind)\n return a[ind], b[ind]",
"_____no_output_____"
],
[
"def read_train_test(path_to_data):\n data = {}\n for dset in ['train', 'test']:\n path_ = os.path.join(path_to_data, dset)\n X, Y = [], []\n classes = [d for d in os.listdir(path_) if os.path.isdir(os.path.join(path_, d))]\n classes.sort()\n \n for cl in classes:\n y = np.zeros((1, 8), dtype=np.int32)\n y[0, int(cl) - 1] = 1\n \n cl_path = os.path.join(path_, cl)\n filenames = [os.path.join(cl_path, pict) for pict in os.listdir(cl_path) if pict.endswith('.jpg')]\n \n for im in filenames:\n image = np.asarray(Image.open(im), dtype=np.float32)\n X.append(normalize(image).reshape((1, image.shape[0], image.shape[1], image.shape[2])))\n Y.append(y)\n \n a, b = shuffle_sim(np.concatenate(X), np.concatenate(Y))\n data[dset] = ([a, b])\n return data",
"_____no_output_____"
]
],
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"class Model():\n \"\"\"\n Model class represents one object of model.\n\n :param config: Parsed config file.\n :param session_config: Formed session config file, if necessary.\n\n :return Model\n \"\"\"\n \n def __init__(self, config, session_config=None):\n\n # Configuring session\n self.config = config\n if session_config is not None:\n self.sess = tf.Session(config=session_config)\n else:\n self.sess = tf.Session()\n\n # Creating inputs to network\n with tf.name_scope('inputs'):\n self.x = tf.placeholder(\n dtype=tf.float32,\n shape=(None, config.image_size, config.image_size, 3))\n self.y = tf.placeholder(dtype=tf.int32, shape=(None, 8))\n self.training = tf.placeholder(dtype=tf.bool, shape=())\n\n # Creating epoch counter\n self.global_epoch = tf.Variable(\n 0, name='global_epoch', trainable=False, dtype=tf.int32)\n self.step = tf.assign(self.global_epoch, self.global_epoch + 1)\n\n # Building model\n if self.config.write_histograms:\n self.histograms = {}\n self.__build_model()\n\n # Summary writer\n self.summ_writer_train = tf.summary.FileWriter(\n config.train_summary_dir, graph=self.sess.graph)\n self.summ_writer_test = tf.summary.FileWriter(config.test_summary_dir)\n\n self.sess.run(tf.global_variables_initializer())\n\n # Saver\n self.saver = tf.train.Saver(max_to_keep=1, name='saver')\n\n def __initialize_local(self):\n \"\"\"\n Initialize local tensorflow variables.\n \n :return None\n \"\"\"\n\n self.sess.run(tf.local_variables_initializer())\n if self.config.write_histograms:\n self.histograms = {}\n\n def __add_histogram(self, scope, name, var):\n \"\"\"\n Add histograms to summary scope.\n \n :param scope: Scope object.\n :param name: Name of variable.\n :param var: Variable to add to histograms.\n \n :return None\n \"\"\"\n\n dict_var = scope.name + '/' + name\n hist = self.histograms.get(dict_var, None)\n if hist is not None:\n self.histograms[dict_var] = tf.concat([hist, var], 0)\n else:\n self.histograms[dict_var] = var\n 
tf.summary.histogram(name, self.histograms[dict_var])\n\n def __block(self,\n inp,\n ch,\n num,\n c_ker=[(3, 3), (3, 3)],\n c_str=[(1, 1), (1, 1)],\n act=relu,\n mp_ker=(2, 2),\n mp_str=(2, 2),\n mode='conc'):\n \"\"\"\n Create single convolution block of network.\n \n :param inp: Input Tensor of shape (batch_size, inp_size, inp_size, channels).\n :param ch: Number of channels to have in output Tensor.\n (If mode is 'conc', number of channels will be ch * 2)\n :param num: Number of block for variable scope.\n :param c_ker: List of tuples with shapes of kernels for each convolution operation. \n :param c_str: List of tuples with shapes of strides for each convolution operation.\n :param act: Activation function.\n :param mp_ker: Tuple-like pooling layer kernel size.\n :param mp_str: Tuple-like pooling layer stride size.\n :param mode: One of ['conc', 'mp', 'ap'] modes, where 'mp' and 'ap' are max- and average- \n pooling respectively, and 'conc' - concatenate mode. \n \n :return Transformed Tensor\n \"\"\"\n\n with tf.variable_scope('block_' + str(num)) as name:\n conv1 = conv2d(inp, ch, c_ker[0], strides=c_str[0])\n bn = batch_normalization(conv1)\n out = act(bn)\n if config.use_dropout_block:\n out = dropout(\n out, config.dropout_rate_block, training=self.training)\n print(out.shape)\n\n conv2 = conv2d(out, ch, c_ker[1], strides=c_str[1])\n bn = batch_normalization(conv2)\n out = act(bn)\n print(out.shape)\n\n if config.write_histograms:\n self.__add_histogram(name, 'conv1', conv1)\n self.__add_histogram(name, 'conv2', conv2)\n\n if mode == 'mp':\n out = max_pooling2d(out, mp_ker, strides=mp_str)\n elif mode == 'ap':\n out = average_pooling2d(out, mp_ker, mp_str)\n elif mode == 'conc':\n mp = max_pooling2d(out, mp_ker, strides=mp_str)\n ap = average_pooling2d(out, mp_ker, mp_str)\n out = tf.concat([mp, ap], -1)\n else:\n raise ValueError('Unknown mode.')\n\n print(out.shape)\n return out\n\n def __build_model(self):\n \"\"\"\n Build model.\n \n :return None\n 
\"\"\"\n \n with tf.name_scope('layers'):\n out = self.__block(self.x, 16, 1, mode='conc')\n out = self.__block(out, 32, 2, mode='conc')\n out = self.__block(out, 64, 3, mode='conc')\n out = self.__block(out, 256, 4, c_str=[(1, 1), (2, 2)], mode='mp')\n\n dim = np.prod(out.shape[1:])\n out = tf.reshape(out, [-1, dim])\n print(out.shape)\n\n with tf.variable_scope('dense') as scope:\n dense_l = dense(out, 128)\n out = batch_normalization(dense_l)\n out = leaky_relu(out, alpha=0.01)\n if config.use_dropout_dense:\n out = dropout(\n out,\n rate=config.dropout_rate_dense,\n training=self.training)\n print(out.shape)\n\n self.predictions = dense(out, 8, activation=softmax)\n\n if self.config.write_histograms:\n self.__add_histogram(scope, 'dense', dense_l)\n self.__add_histogram(scope, 'pred', self.predictions)\n\n with tf.name_scope('metrics'):\n amax_labels = tf.argmax(self.y, 1)\n amax_pred = tf.argmax(self.predictions, 1)\n\n cur_loss = tf.losses.softmax_cross_entropy(self.y,\n self.predictions)\n self.loss, self.loss_update = tf.metrics.mean(cur_loss)\n\n cur_acc = tf.reduce_mean(\n tf.cast(tf.equal(amax_labels, amax_pred), dtype=tf.float32))\n self.acc, self.acc_update = tf.metrics.mean(cur_acc)\n\n self.optimize = tf.train.AdamOptimizer(\n self.config.learning_rate).minimize(cur_loss)\n\n tf.summary.scalar('loss', self.loss)\n tf.summary.scalar('accuracy', self.acc)\n\n self.summary = tf.summary.merge_all()\n\n def train(self, dat, epochs, dat_v=None, batch=None):\n \"\"\"\n Train model on data.\n \n :param dat: List of data to train on, like [X, y].\n Where X is an array with size (None, image_size, image_size, 3) and\n y is an array with size (None ,8).\n :param epochs: Number of epochs to run training procedure.\n :param dat_v: List of data to validate on, like [X, y].\n Where X is an array with size (None, image_size, image_size, 3) and\n y is an array with size (None ,8).\n :param batch: Batch size to train on.\n \n :return None\n \"\"\"\n \n if batch is not 
None:\n steps = int(np.ceil(dat[0].shape[0] / batch))\n else:\n batch = dat[0].shape[0]\n steps = 1\n\n for epoch in range(epochs):\n self.__initialize_local()\n summary = tf.summary.Summary()\n\n for step in range(steps):\n start = step * batch\n finish = (\n step + 1) * batch if step + 1 != steps else dat[0].shape[0]\n\n _, _, _ = self.sess.run(\n [self.loss_update, self.acc_update, self.optimize],\n feed_dict={\n self.x: dat[0][start:finish],\n self.y: dat[1][start:finish],\n self.training: True\n })\n\n summary, loss, acc, ep = self.sess.run(\n [self.summary, self.loss, self.acc, self.step])\n self.summ_writer_train.add_summary(summary, ep)\n print(\n 'EP: {:3d}\\tLOSS: {:.10f}\\tACC: {:.10f}\\t'.format(\n ep, loss, acc),\n end='')\n\n if dat_v is not None:\n self.test(dat_v, batch=batch)\n else:\n print()\n\n def test(self, dat, batch=None):\n \"\"\"\n Test model on specific data.\n \n :param dat: List of data to test on, like [X, y].\n Where X is an array with size (None, image_size, image_size, 3) and\n y is an array with size (None ,8)..\n :param batch: Batch size to use.\n \n :return None\n \"\"\"\n \n if batch is not None:\n steps = int(np.ceil(dat[0].shape[0] / batch))\n else:\n steps = 1\n batch = dat[0].shape[0]\n\n self.__initialize_local()\n for step in range(steps):\n start = step * batch\n finish = (\n step + 1) * batch if step + 1 != steps else dat[0].shape[0]\n\n _, _ = self.sess.run(\n [self.loss_update, self.acc_update],\n feed_dict={\n self.x: dat[0][start:finish],\n self.y: dat[1][start:finish],\n self.training: False\n })\n\n summary, loss, acc, ep = self.sess.run(\n [self.summary, self.loss, self.acc, self.global_epoch])\n self.summ_writer_test.add_summary(summary, ep)\n print('VALID\\tLOSS: {:.10f}\\tACC: {:.10f}'.format(loss, acc))\n\n def predict_proba(self, data, batch=None):\n \"\"\"\n Predict probability of each class.\n \n :param data: An array to predict on with shape (None, image_size, image_size, 3)\n :param batch: Batch size to 
use.\n \n :return Array of predictions with shape (None, 8)\n \"\"\"\n \n if batch is not None:\n steps = int(np.ceil(data.shape[0] / batch))\n else:\n steps = 1\n batch = data.shape[0]\n\n self.__initialize_local()\n\n preds_arr = []\n for step in range(steps):\n start = step * batch\n finish = (step + 1) * batch if step + 1 != steps else data.shape[0]\n\n preds = self.sess.run(\n self.predictions,\n feed_dict={\n self.x: data[start:finish],\n self.y: np.zeros((finish - start, 8)),\n self.training: False\n })\n preds_arr.append(preds)\n\n return np.concatenate(preds_arr)\n\n def save_model(self, model_path=None):\n \"\"\"\n Save model weights to the folder with weights.\n \n :param model_path: String-like path to save in.\n \n :return None\n \"\"\"\n \n gstep = self.sess.run(self.global_epoch)\n if model_path is not None:\n self.saver.save(self.sess, model_path + 'model')\n else:\n self.saver.save(self.sess, config.checkpoint_dir + 'model')\n\n def load_model(self, model_path=None):\n \"\"\"\n Load model weights.\n \n :param model_path: String-like path to load from.\n \n :return None\n \"\"\"\n \n if model_path is not None:\n meta = [\n os.path.join(filename) for filename in os.listdir(model_path)\n if filename.endswith('.meta')\n ][0]\n self.saver = tf.train.import_meta_graph(\n os.path.join(model_path, meta))\n self.saver.restore(self.sess,\n tf.train.latest_checkpoint(model_path))\n else:\n meta = [\n os.path.join(filename)\n for filename in os.listdir(self.config.checkpoint_dir)\n if filename.endswith('.meta')\n ][0]\n self.saver = tf.train.import_meta_graph(\n os.path.join(self.config.checkpoint_dir, meta))\n self.saver.restore(\n self.sess,\n tf.train.latest_checkpoint(self.config.checkpoint_dir))\n\n def plot_misclassified(self, data, batch=None):\n \"\"\"\n Create and display a plot with misclassified images.\n \n :param data:\n :param batch:\n \n :return None\n \"\"\"\n \n predicted = np.argmax(self.predict_proba(data[0], batch), axis=1)\n real = 
np.argmax(data[1], axis=1)\n\n matches = (real != predicted)\n mism_count = np.sum(matches.astype(np.int32))\n images = data[0][matches]\n misk = predicted[matches]\n\n columns = 4\n rows = int(np.ceil(mism_count / columns))\n fig = plt.figure(figsize=(16, rows * 3))\n fig.suptitle(\n '{} photos were misclassiified.'.format(mism_count),\n fontsize=16,\n fontweight='bold')\n for i in range(mism_count):\n s_fig = fig.add_subplot(rows, columns, i + 1)\n s_fig.set_title('Classified as {}'.format(misk[i] + 1))\n plt.imshow(images[i])\n plt.show()\n\n def close(self):\n \"\"\"\n Close a session of model to load next one.\n \n :retun None\n \"\"\"\n \n self.sess.close()\n tf.reset_default_graph()",
"_____no_output_____"
],
[
"# Creating model object\nm = Model(config, config_tf)",
"(?, 126, 126, 16)\n(?, 124, 124, 16)\n(?, 62, 62, 32)\n(?, 60, 60, 32)\n(?, 58, 58, 32)\n(?, 29, 29, 64)\n(?, 27, 27, 64)\n(?, 25, 25, 64)\n(?, 12, 12, 128)\n(?, 10, 10, 256)\n(?, 4, 4, 256)\n(?, 2, 2, 256)\n(?, 1024)\n(?, 128)\n"
],
[
"# Loading previous model's weights\nm.load_model()",
"INFO:tensorflow:Restoring parameters from ../weights/4bl_batch/model\n"
],
[
"# Reading train and test data\ndat = read_train_test(DATA)",
"_____no_output_____"
],
[
"# Training model\n# IT IS NOT NECESSARY HERE, BUT THERE ARE LOGS FOR YOU TO SEE\nm.train(dat['train'], dat_v=dat['test'], epochs=100, batch=512)",
"EP: 952\tLOSS: 1.2879177332\tACC: 0.9855833650\tVALID\tLOSS: 1.2836978436\tACC: 0.9905711412\nEP: 953\tLOSS: 1.2981737852\tACC: 0.9756662846\tVALID\tLOSS: 1.2852492332\tACC: 0.9891567826\nEP: 954\tLOSS: 1.2948708534\tACC: 0.9793747663\tVALID\tLOSS: 1.2887247801\tACC: 0.9842739701\nEP: 955\tLOSS: 1.2962471247\tACC: 0.9779562950\tVALID\tLOSS: 1.2872936726\tACC: 0.9866648912\nEP: 956\tLOSS: 1.3014912605\tACC: 0.9723750353\tVALID\tLOSS: 1.2948012352\tACC: 0.9786503315\nEP: 957\tLOSS: 1.3054916859\tACC: 0.9677904248\tVALID\tLOSS: 1.2848935127\tACC: 0.9895945787\nEP: 958\tLOSS: 1.3040245771\tACC: 0.9699753523\tVALID\tLOSS: 1.2893249989\tACC: 0.9846612215\nEP: 959\tLOSS: 1.3023107052\tACC: 0.9717136621\tVALID\tLOSS: 1.2870891094\tACC: 0.9870015979\nEP: 960\tLOSS: 1.2985929251\tACC: 0.9752367139\tVALID\tLOSS: 1.2847663164\tACC: 0.9892578125\nEP: 961\tLOSS: 1.2961081266\tACC: 0.9780907035\tVALID\tLOSS: 1.2826277018\tACC: 0.9912109375\nEP: 962\tLOSS: 1.2914628983\tACC: 0.9824852347\tVALID\tLOSS: 1.2857968807\tACC: 0.9881802201\nEP: 963\tLOSS: 1.2894526720\tACC: 0.9844383597\tVALID\tLOSS: 1.2843693495\tACC: 0.9896955490\nEP: 964\tLOSS: 1.2928112745\tACC: 0.9808859825\tVALID\tLOSS: 1.2814817429\tACC: 0.9926252365\nEP: 965\tLOSS: 1.2883582115\tACC: 0.9857054353\tVALID\tLOSS: 1.2841055393\tACC: 0.9901838303\nEP: 966\tLOSS: 1.2880012989\tACC: 0.9861009717\tVALID\tLOSS: 1.2822120190\tACC: 0.9916486740\nEP: 967\tLOSS: 1.2869940996\tACC: 0.9869431257\tVALID\tLOSS: 1.2851006985\tACC: 0.9896955490\nEP: 968\tLOSS: 1.2878240347\tACC: 0.9863451123\tVALID\tLOSS: 1.2819479704\tACC: 0.9921369553\nEP: 969\tLOSS: 1.2874555588\tACC: 0.9864208102\tVALID\tLOSS: 1.2827582359\tACC: 0.9911603928\nEP: 970\tLOSS: 1.2878898382\tACC: 0.9860252738\tVALID\tLOSS: 1.2836091518\tACC: 0.9901838303\nEP: 971\tLOSS: 1.2856369019\tACC: 0.9885130525\tVALID\tLOSS: 1.2853077650\tACC: 0.9886180162\nEP: 972\tLOSS: 1.2922159433\tACC: 0.9818285108\tVALID\tLOSS: 1.2830560207\tACC: 0.9906721115\nEP: 973\tLOSS: 
1.2859270573\tACC: 0.9880247712\tVALID\tLOSS: 1.2837991714\tACC: 0.9901838303\nEP: 974\tLOSS: 1.2916237116\tACC: 0.9821653962\tVALID\tLOSS: 1.2892332077\tACC: 0.9846612215\nEP: 975\tLOSS: 1.2954666615\tACC: 0.9787180424\tVALID\tLOSS: 1.2870831490\tACC: 0.9866648912\nEP: 976\tLOSS: 1.2900664806\tACC: 0.9839964509\tVALID\tLOSS: 1.2840120792\tACC: 0.9896955490\nEP: 977\tLOSS: 1.2843030691\tACC: 0.9897460938\tVALID\tLOSS: 1.2824499607\tACC: 0.9916486740\nEP: 978\tLOSS: 1.2837888002\tACC: 0.9900829196\tVALID\tLOSS: 1.2808463573\tACC: 0.9931640625\nEP: 979\tLOSS: 1.2877776623\tACC: 0.9861473441\tVALID\tLOSS: 1.2840666771\tACC: 0.9896955490\nEP: 980\tLOSS: 1.2859190702\tACC: 0.9877172709\tVALID\tLOSS: 1.2863013744\tACC: 0.9875909090\nEP: 981\tLOSS: 1.2873561382\tACC: 0.9868164062\tVALID\tLOSS: 1.2893266678\tACC: 0.9842739701\nEP: 982\tLOSS: 1.2909884453\tACC: 0.9832176566\tVALID\tLOSS: 1.2860469818\tACC: 0.9881297350\nEP: 983\tLOSS: 1.2915121317\tACC: 0.9825655818\tVALID\tLOSS: 1.2916061878\tACC: 0.9827586412\nEP: 984\tLOSS: 1.2942686081\tACC: 0.9796189070\tVALID\tLOSS: 1.2880814075\tACC: 0.9862270951\nEP: 985\tLOSS: 1.2901180983\tACC: 0.9837059379\tVALID\tLOSS: 1.2855560780\tACC: 0.9881297350\nEP: 986\tLOSS: 1.2868233919\tACC: 0.9871239066\tVALID\tLOSS: 1.2818808556\tACC: 0.9921875000\nEP: 987\tLOSS: 1.2831482887\tACC: 0.9908323884\tVALID\tLOSS: 1.2829189301\tACC: 0.9907226562\nEP: 988\tLOSS: 1.2854635715\tACC: 0.9885887504\tVALID\tLOSS: 1.2836046219\tACC: 0.9901838303\nEP: 989\tLOSS: 1.2868323326\tACC: 0.9867870212\tVALID\tLOSS: 1.2847268581\tACC: 0.9891567826\nEP: 990\tLOSS: 1.2851003408\tACC: 0.9891821146\tVALID\tLOSS: 1.2830617428\tACC: 0.9906721115\nEP: 991\tLOSS: 1.2820918560\tACC: 0.9916698337\tVALID\tLOSS: 1.2811940908\tACC: 0.9926252365\nEP: 992\tLOSS: 1.2827718258\tACC: 0.9912279248\tVALID\tLOSS: 1.2809362411\tACC: 0.9931135178\nEP: 993\tLOSS: 1.2849892378\tACC: 0.9889549613\tVALID\tLOSS: 1.2847273350\tACC: 0.9890052676\nEP: 994\tLOSS: 1.2861536741\tACC: 
0.9880371094\tVALID\tLOSS: 1.2819882631\tACC: 0.9921369553\nEP: 995\tLOSS: 1.2831975222\tACC: 0.9909080863\tVALID\tLOSS: 1.2819807529\tACC: 0.9916486740\nEP: 996\tLOSS: 1.2827003002\tACC: 0.9913793802\tVALID\tLOSS: 1.2813835144\tACC: 0.9926252365\nEP: 997\tLOSS: 1.2831983566\tACC: 0.9908323884\tVALID\tLOSS: 1.2825347185\tACC: 0.9915981889\nEP: 998\tLOSS: 1.2847633362\tACC: 0.9891991019\tVALID\tLOSS: 1.2818758488\tACC: 0.9921369553\nEP: 999\tLOSS: 1.2823209763\tACC: 0.9916405082\tVALID\tLOSS: 1.2828958035\tACC: 0.9911603928\nEP: 1000\tLOSS: 1.2863641977\tACC: 0.9876878858\tVALID\tLOSS: 1.2884838581\tACC: 0.9855872989\nEP: 1001\tLOSS: 1.2883522511\tACC: 0.9855663180\tVALID\tLOSS: 1.2863037586\tACC: 0.9876919389\nEP: 1002\tLOSS: 1.2842767239\tACC: 0.9896703959\tVALID\tLOSS: 1.2808315754\tACC: 0.9931135178\nEP: 1003\tLOSS: 1.2846721411\tACC: 0.9893041849\tVALID\tLOSS: 1.2857190371\tACC: 0.9881802201\nEP: 1004\tLOSS: 1.2820070982\tACC: 0.9919139743\tVALID\tLOSS: 1.2810074091\tACC: 0.9926757812\nEP: 1005\tLOSS: 1.2827556133\tACC: 0.9914720654\tVALID\tLOSS: 1.2836546898\tACC: 0.9901838303\nEP: 1006\tLOSS: 1.2827738523\tACC: 0.9912279248\tVALID\tLOSS: 1.2808687687\tACC: 0.9931640625\nEP: 1007\tLOSS: 1.2828266621\tACC: 0.9912279248\tVALID\tLOSS: 1.2831186056\tACC: 0.9911603928\nEP: 1008\tLOSS: 1.2828083038\tACC: 0.9912573099\tVALID\tLOSS: 1.2811347246\tACC: 0.9931135178\nEP: 1009\tLOSS: 1.2861758471\tACC: 0.9880077243\tVALID\tLOSS: 1.2873544693\tACC: 0.9866143465\nEP: 1010\tLOSS: 1.2844421864\tACC: 0.9893211722\tVALID\tLOSS: 1.2845959663\tACC: 0.9896450639\nEP: 1011\tLOSS: 1.2878317833\tACC: 0.9859201908\tVALID\tLOSS: 1.2873858213\tACC: 0.9866648912\nEP: 1012\tLOSS: 1.2880508900\tACC: 0.9859958887\tVALID\tLOSS: 1.2818953991\tACC: 0.9926757812\nEP: 1013\tLOSS: 1.2960480452\tACC: 0.9780150056\tVALID\tLOSS: 1.2877219915\tACC: 0.9862775803\nEP: 1014\tLOSS: 1.3054500818\tACC: 0.9682956934\tVALID\tLOSS: 1.2905068398\tACC: 0.9837352037\nEP: 1015\tLOSS: 1.2946072817\tACC: 
0.9792820215\tVALID\tLOSS: 1.2869136333\tACC: 0.9871531725\nEP: 1016\tLOSS: 1.2947911024\tACC: 0.9791429639\tVALID\tLOSS: 1.2840819359\tACC: 0.9896955490\nEP: 1017\tLOSS: 1.2946341038\tACC: 0.9789915681\tVALID\tLOSS: 1.2842190266\tACC: 0.9896955490\nEP: 1018\tLOSS: 1.2930077314\tACC: 0.9810497165\tVALID\tLOSS: 1.2858110666\tACC: 0.9881802201\nEP: 1019\tLOSS: 1.2892383337\tACC: 0.9845140576\tVALID\tLOSS: 1.2842018604\tACC: 0.9896450639\nEP: 1020\tLOSS: 1.2878122330\tACC: 0.9862694144\tVALID\tLOSS: 1.2798794508\tACC: 0.9941406250\nEP: 1021\tLOSS: 1.2852730751\tACC: 0.9885717630\tVALID\tLOSS: 1.2815167904\tACC: 0.9926252365\nEP: 1022\tLOSS: 1.2881864309\tACC: 0.9859789014\tVALID\tLOSS: 1.2888340950\tACC: 0.9850990176\nEP: 1023\tLOSS: 1.2896485329\tACC: 0.9841014743\tVALID\tLOSS: 1.2803854942\tACC: 0.9936017990\nEP: 1024\tLOSS: 1.2830625772\tACC: 0.9910131693\tVALID\tLOSS: 1.2800238132\tACC: 0.9940395951\nEP: 1025\tLOSS: 1.2841651440\tACC: 0.9897167087\tVALID\tLOSS: 1.2795692682\tACC: 0.9945783615\nEP: 1026\tLOSS: 1.2844251394\tACC: 0.9895189404\tVALID\tLOSS: 1.2819960117\tACC: 0.9919854403\nEP: 1027\tLOSS: 1.2874171734\tACC: 0.9865428805\tVALID\tLOSS: 1.2818830013\tACC: 0.9921369553\nEP: 1028\tLOSS: 1.2856018543\tACC: 0.9884960055\tVALID\tLOSS: 1.2832597494\tACC: 0.9907226562\nEP: 1029\tLOSS: 1.2820061445\tACC: 0.9918676615\tVALID\tLOSS: 1.2813777924\tACC: 0.9926252365\nEP: 1030\tLOSS: 1.2821303606\tACC: 0.9915771484\tVALID\tLOSS: 1.2814967632\tACC: 0.9925747514\nEP: 1031\tLOSS: 1.2829972506\tACC: 0.9910594821\tVALID\tLOSS: 1.2815092802\tACC: 0.9926252365\nEP: 1032\tLOSS: 1.2816469669\tACC: 0.9924316406\tVALID\tLOSS: 1.2809025049\tACC: 0.9931135178\nEP: 1033\tLOSS: 1.2807883024\tACC: 0.9931346774\tVALID\tLOSS: 1.2808302641\tACC: 0.9931640625\nEP: 1034\tLOSS: 1.2797570229\tACC: 0.9942626953\tVALID\tLOSS: 1.2798932791\tACC: 0.9941406250\nEP: 1035\tLOSS: 1.2796481848\tACC: 0.9944017529\tVALID\tLOSS: 1.2823084593\tACC: 0.9916992188\nEP: 1036\tLOSS: 1.2814104557\tACC: 
0.9926757812\tVALID\tLOSS: 1.2817100286\tACC: 0.9921875000\nEP: 1037\tLOSS: 1.2794224024\tACC: 0.9946752787\tVALID\tLOSS: 1.2815309763\tACC: 0.9921369553\nEP: 1038\tLOSS: 1.2836717367\tACC: 0.9901586771\tVALID\tLOSS: 1.2823622227\tACC: 0.9916486740\nEP: 1039\tLOSS: 1.2808567286\tACC: 0.9931346774\tVALID\tLOSS: 1.2807531357\tACC: 0.9931135178\nEP: 1040\tLOSS: 1.2874259949\tACC: 0.9864548445\tVALID\tLOSS: 1.2950398922\tACC: 0.9791386127\nEP: 1041\tLOSS: 1.2905901670\tACC: 0.9832176566\tVALID\tLOSS: 1.2831114531\tACC: 0.9906216264\nEP: 1042\tLOSS: 1.2853623629\tACC: 0.9884496927\tVALID\tLOSS: 1.2819544077\tACC: 0.9921369553\nEP: 1043\tLOSS: 1.2849322557\tACC: 0.9890600443\tVALID\tLOSS: 1.2803577185\tACC: 0.9935513139\nEP: 1044\tLOSS: 1.2802917957\tACC: 0.9936523438\tVALID\tLOSS: 1.2787667513\tACC: 0.9951171875\n"
],
[
"# Saving model weights\n# Not necessary here if you didn't train the model\nm.save_model()",
"_____no_output_____"
]
],
[
[
"# Plotting misclassified images",
"_____no_output_____"
]
],
[
[
"m.plot_misclassified(dat['train'], batch=256)",
"_____no_output_____"
],
[
"m.plot_misclassified(dat['test'], batch=256)",
"_____no_output_____"
],
[
"m.close()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5a3a19d2417bf6f50d3a163e1e20cef74606f9
| 1,016,587 |
ipynb
|
Jupyter Notebook
|
xnaturev2-image-classifier-step-by-step.ipynb
|
epcervantes7/XNatureV2-Image-Classifier
|
dea94aafe768874f5547114309779c64f5cf8126
|
[
"MIT"
] | null | null | null |
xnaturev2-image-classifier-step-by-step.ipynb
|
epcervantes7/XNatureV2-Image-Classifier
|
dea94aafe768874f5547114309779c64f5cf8126
|
[
"MIT"
] | null | null | null |
xnaturev2-image-classifier-step-by-step.ipynb
|
epcervantes7/XNatureV2-Image-Classifier
|
dea94aafe768874f5547114309779c64f5cf8126
|
[
"MIT"
] | null | null | null | 564.456968 | 387,396 | 0.935534 |
[
[
[
"# Some useful functions",
"_____no_output_____"
]
],
[
[
"import time\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.utils.multiclass import unique_labels\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn import svm\nfrom keras.models import Sequential\nfrom keras.layers import Conv2D,MaxPooling2D, MaxPool2D\nfrom keras.layers import Activation, Dense, Flatten, Dropout\nfrom sklearn.ensemble import IsolationForest\nfrom sklearn.neighbors import LocalOutlierFactor\ndef plot_history(history):\n \"\"\"\n This function plot training history of a model \n \"\"\"\n plt.figure(1) \n plt.subplot(211) \n plt.plot(history.history['accuracy'])\n plt.plot(history.history['val_accuracy']) \n plt.title('model accuracy') \n plt.ylabel('accuracy')\n plt.xlabel('epoch') \n plt.legend(['train', 'test'], loc='upper left') \n plt.ylim(0.0, 1.1)\n # summarize history for loss \n\n plt.subplot(212) \n plt.plot(history.history['loss']) \n plt.plot(history.history['val_loss']) \n plt.title('model loss') \n plt.ylabel('loss') \n plt.xlabel('epoch') \n plt.legend(['train', 'test'], loc='upper left') \n plt.ylim(0.0, 1.1)\n \n plt.show()\n\n\ndef find_outliers(data,outliers_fraction,n_neighbors):\n \"\"\"\n This function finds and plots outliers using the Local Outlier Factor method \n \"\"\"\n # Example settings\n n_samples = data.shape[0]\n n_outliers = int(outliers_fraction * n_samples)\n n_inliers = n_samples - n_outliers\n\n # define outlier/anomaly detection methods to be compared\n anomaly_algorithms = [(\"Local Outlier Factor\", LocalOutlierFactor(\n n_neighbors=n_neighbors, contamination=outliers_fraction))]\n\n # Define datasets\n blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2)\n datasets = [data]\n\n # # Compare given classifiers under given settings\n xx, yy = np.meshgrid(np.linspace(-10000, 40000, 150),\n np.linspace(-10000, 40000, 150))\n\n# plt.figure(figsize=(5,5))\n plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, 
wspace=.05,\n hspace=.01)\n\n plot_num = 1\n rng = np.random.RandomState(42)\n\n for i_dataset, X_ in enumerate(datasets):\n for name, algorithm in anomaly_algorithms:\n t0 = time.time()\n algorithm.fit(X_)\n t1 = time.time()\n plt.subplot(len(datasets), len(anomaly_algorithms), plot_num)\n if i_dataset == 0:\n plt.title(name)\n\n # fit the data and tag outliers\n if name == \"Local Outlier Factor\":\n y_pred = algorithm.fit_predict(X_)\n else:\n y_pred = algorithm.fit(X).predict(X_)\n\n # plot the levels lines and the points\n if name != \"Local Outlier Factor\": # LOF does not implement predict\n Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='black')\n# print(y_pred)\n colors = np.array(['b', 'y'])\n plt.scatter(X_[:, 0], X_[:, 1],alpha=0.5, color=colors[(y_pred + 1) // 2])\n plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),\n transform=plt.gca().transAxes, size=15,\n horizontalalignment='right')\n plot_num += 1\n\n plt.show()\n return y_pred\n\n\ndef plot_confusion_matrix(y_true, y_pred, classes,\n normalize=False,\n title=None,\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if not title:\n if normalize:\n title = 'Normalized confusion matrix'\n else:\n title = 'Confusion matrix, without normalization'\n\n # Compute confusion matrix\n cm = confusion_matrix(y_true, y_pred)\n # Only use the labels that appear in the data\n classes = classes[unique_labels(y_true, y_pred)]\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n# print(cm)\n\n fig, ax = plt.subplots()\n im = ax.imshow(cm, interpolation='nearest', cmap=cmap)\n ax.figure.colorbar(im, ax=ax)\n\n # Loop over data dimensions and create text annotations.\n fmt = '.2f' if normalize else 
'd'\n thresh = cm.max() / 2.\n for i in range(cm.shape[0]):\n for j in range(cm.shape[1]):\n ax.text(j, i, format(cm[i, j], fmt),\n ha=\"center\", va=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n fig.tight_layout()\n return ax\n\n\ndef get_model():\n \"\"\"\n This function creates and compile a Sequential model used as classifier\n \"\"\"\n model = Sequential()\n model.add(Conv2D(filters = 32, kernel_size = 2,input_shape=(SIZE,SIZE,1),padding='same'))\n model.add(Conv2D(filters = 32,kernel_size = 2,activation= 'relu',padding='same'))\n model.add(Activation('relu'))\n model.add(MaxPool2D(pool_size=2))\n\n model.add(Conv2D(filters = 64,kernel_size = 2,activation= 'relu',padding='same'))\n model.add(MaxPool2D(pool_size=2))\n\n model.add(Conv2D(filters = 128,kernel_size = 2,activation= 'relu',padding='same'))\n model.add(MaxPool2D(pool_size=2))\n\n model.add(Dropout(0.3))\n model.add(Flatten())\n model.add(Dense(128))\n model.add(Activation('relu'))\n model.add(Dropout(0.4))\n model.add(Dense(8,activation = 'softmax'))\n model.compile(loss='categorical_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n print('Compiled!')\n return model\n\nnp.set_printoptions(precision=2)",
"Using TensorFlow backend.\n"
]
],
[
[
"# Loading original dataset\nI used the load_files function from the sklearn.datasets package to load the original dataset.\n",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_files\nimport numpy as np\n\ndata_dir = '../input/xnaturev2/XNature/'\n\n# loading file names and their respective target labels into numpy array! \ndef load_dataset(path):\n data = load_files(path)\n files = np.array(data['filenames'])\n targets = np.array(data['target'])\n target_labels = np.array(data['target_names'])\n return files,targets,target_labels\ndata, labels,target_labels = load_dataset(data_dir)\nprint('Loading complete!')\nprint('Data set size : ' , data.shape[0])",
"Loading complete!\nData set size : 2984\n"
]
],
[
[
"# 1. Prepare data",
"_____no_output_____"
],
[
"Here I load the images and convert them to grayscale, then perform a PCA to visualize the data and find outliers, if any exist.",
"_____no_output_____"
]
],
[
[
"#again prepare data load files and labels\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.utils import np_utils\nfrom keras.preprocessing.image import array_to_img, img_to_array, load_img\nfrom skimage.color import rgb2gray\nSIZE=100\ndef convert_image_to_array(files):\n images_as_array=[]\n for file in files:\n # Convert to Numpy Array\n images_as_array.append(rgb2gray(img_to_array(load_img(file,target_size=(SIZE, SIZE)))))\n \n return images_as_array\n\nX = np.array(convert_image_to_array(data))\nX=X.reshape(X.shape[0],X.shape[1],X.shape[2],1) #\nprint('Original set shape : ',X.shape)\n\n\nprint('1st original image shape ',X[0].shape)\nno_of_classes = len(np.unique(labels))\ny = np_utils.to_categorical(labels,no_of_classes)\n\n",
"Original set shape : (2984, 100, 100, 1)\n1st original image shape (100, 100, 1)\n"
]
],
[
[
"> ## Outlier removal\n\nA simple visualization can help identify outliers; in this case I used PCA.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt \n%matplotlib inline\nfrom sklearn.decomposition import PCA\npca = PCA(2) # 100*100*3 from 64 to 2 dimensions\nprojected = pca.fit_transform(X.reshape(X.shape[0],SIZE*SIZE*1))\nscatter=plt.scatter(projected[:, 0], projected[:, 1],\n c=labels,cmap=plt.cm.get_cmap('Set1', 8), edgecolor='none', alpha=0.8)\nplt.title(\"PCA\")\nplt.xlabel('component 1')\nplt.ylabel('component 2')\nplt.colorbar()\n# plt.legend(handles=scatter.legend_elements()[0], labels=list(target_labels),loc='upper right', bbox_to_anchor=(1.5, 1))\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"Some points look like outliers, so I will use LocalOutlierFactor to remove possible outliers.",
"_____no_output_____"
]
],
[
[
"#plot outliers and show corresponding iamges\ny_pred=find_outliers(projected,0.001,27)\noutliers=X[y_pred==-1]\nlbs=y[y_pred==-1]\nfor ol,lb in zip(outliers,lbs): \n print(target_labels[np.argmax([lb])])\n plt.imshow(ol.reshape(SIZE,SIZE),cmap='gray')\n plt.show()\n ",
"_____no_output_____"
]
],
[
[
"### Remove outliers",
"_____no_output_____"
]
],
[
[
"X=X[y_pred!=-1]\ny=y[y_pred!=-1]\nprint(X.shape)\nprint(y.shape)",
"(2981, 100, 100, 1)\n(2981, 8)\n"
]
],
[
[
"Split data into training and testing datasets",
"_____no_output_____"
]
],
[
[
"#split data into training and test sets\nfrom sklearn.model_selection import train_test_split\n# x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33,shuffle=True)\nx_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33,shuffle=True, random_state=42)\nx_train = x_train.astype('float32')/255\nx_test = x_test.astype('float32')/255\nprint('Training set shape : ',x_train.shape)\nprint('Testing set shape : ',x_test.shape)",
"Training set shape : (1997, 100, 100, 1)\nTesting set shape : (984, 100, 100, 1)\n"
]
],
[
[
"## Imbalance analysis\nA simple bar chart shows how the classes are imbalanced. The class knife has many more occurrences than the other classes.",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot as plt\n%matplotlib inline\nfrom collections import Counter\nimport pandas as pd\nD = Counter(np.argmax(y_train,axis=1))\nplt.title(\"number of occurrences by class\")\nplt.bar(range(len(D)), D.values(), align='center')\nplt.xticks(range(len(D)), target_labels[list(D.keys())])\nplt.show()",
"_____no_output_____"
]
],
[
[
"In this case the class Knife has much more data than the others, which could cause overfitting and misinterpretation of results.\n\nTo eliminate this imbalance bias we need to balance the dataset. We can use different balancing methods, either through manual augmentation or through functions like balanced_batch_generator. The simplest would be undersampling the n-1 larger classes down to the number of elements in the smallest class, or oversampling the n-1 smaller classes up to the number of elements in the largest class.\n\nAnother well-known, easy way to address the imbalance problem is to add weights to the classes during training, as follows:",
"_____no_output_____"
]
],
[
[
"from sklearn.utils import class_weight\n\ny_numbers=y_train.argmax(axis=1)\nclass_weights = class_weight.compute_class_weight('balanced',\n np.unique(y_numbers),\n y_numbers)\nclass_weights = dict(enumerate(class_weights))\nclass_weights",
"_____no_output_____"
]
],
[
[
"To help with the imbalance task, scikit-learn has a function that calculates the weight of each class.",
"_____no_output_____"
],
[
"# 2. Train and package model",
"_____no_output_____"
],
[
"Therefore, we only need to adjust some parameters and pass the class weights during the training of our model.",
"_____no_output_____"
]
],
[
[
"#train the model\n\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.callbacks import EarlyStopping\nfrom keras.callbacks import ReduceLROnPlateau\nfrom keras import backend as K\n\nmodel = get_model()\nmodel.summary()\n\nno_of_classes = len(np.unique(labels))\nbatch_size = 32\nes = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)\ncheckpointer = ModelCheckpoint(filepath = 'cnn_xnatureV2_balanced_weight.hdf5', verbose = 1, save_best_only = True)\nreduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=3, verbose=1, min_lr=0.00005)\n\nhistory = model.fit(x_train,y_train,\n batch_size = 32,\n epochs=30,\n validation_split=0.2,\n class_weight=class_weights,\n callbacks = [es,checkpointer,reduce_lr],\n verbose=1, shuffle=True)",
"Compiled!\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 100, 100, 32) 160 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 100, 100, 32) 4128 \n_________________________________________________________________\nactivation_1 (Activation) (None, 100, 100, 32) 0 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 50, 50, 32) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 50, 50, 64) 8256 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 25, 25, 64) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 25, 25, 128) 32896 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 12, 12, 128) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 18432) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 128) 2359424 \n_________________________________________________________________\nactivation_2 (Activation) (None, 128) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 8) 1032 \n=================================================================\nTotal params: 2,405,896\nTrainable params: 2,405,896\nNon-trainable params: 0\n_________________________________________________________________\nTrain on 1597 samples, validate on 400 samples\nEpoch 1/30\n1597/1597 
[==============================] - 29s 18ms/step - loss: 1.1371 - accuracy: 0.5911 - val_loss: 0.2041 - val_accuracy: 0.9375\n\nEpoch 00001: val_loss improved from inf to 0.20415, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 2/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.2935 - accuracy: 0.8967 - val_loss: 0.0534 - val_accuracy: 0.9625\n\nEpoch 00002: val_loss improved from 0.20415 to 0.05337, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 3/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.1415 - accuracy: 0.9555 - val_loss: 0.0507 - val_accuracy: 0.9375\n\nEpoch 00003: val_loss improved from 0.05337 to 0.05072, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 4/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0911 - accuracy: 0.9681 - val_loss: 0.0185 - val_accuracy: 0.9950\n\nEpoch 00004: val_loss improved from 0.05072 to 0.01846, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 5/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.0423 - accuracy: 0.9781 - val_loss: 0.0132 - val_accuracy: 0.9950\n\nEpoch 00005: val_loss improved from 0.01846 to 0.01317, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 6/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0433 - accuracy: 0.9856 - val_loss: 0.0183 - val_accuracy: 0.9650\n\nEpoch 00006: val_loss did not improve from 0.01317\nEpoch 7/30\n1597/1597 [==============================] - 29s 18ms/step - loss: 0.0512 - accuracy: 0.9887 - val_loss: 0.0081 - val_accuracy: 0.9975\n\nEpoch 00007: val_loss improved from 0.01317 to 0.00808, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 8/30\n1597/1597 [==============================] - 29s 18ms/step - loss: 0.0172 - accuracy: 0.9919 - val_loss: 0.0113 - val_accuracy: 0.9725\n\nEpoch 00008: val_loss did not improve from 0.00808\nEpoch 9/30\n1597/1597 [==============================] - 29s 18ms/step - 
loss: 0.0106 - accuracy: 0.9919 - val_loss: 0.0034 - val_accuracy: 1.0000\n\nEpoch 00009: val_loss improved from 0.00808 to 0.00335, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 10/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.0136 - accuracy: 0.9925 - val_loss: 0.0028 - val_accuracy: 0.9975\n\nEpoch 00010: val_loss improved from 0.00335 to 0.00275, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 11/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0143 - accuracy: 0.9950 - val_loss: 0.0056 - val_accuracy: 0.9975\n\nEpoch 00011: val_loss did not improve from 0.00275\nEpoch 12/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.0063 - accuracy: 0.9956 - val_loss: 5.9041e-04 - val_accuracy: 1.0000\n\nEpoch 00012: val_loss improved from 0.00275 to 0.00059, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 13/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0102 - accuracy: 0.9975 - val_loss: 0.0041 - val_accuracy: 1.0000\n\nEpoch 00013: val_loss did not improve from 0.00059\nEpoch 14/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0262 - accuracy: 0.9931 - val_loss: 0.0016 - val_accuracy: 1.0000\n\nEpoch 00014: val_loss did not improve from 0.00059\nEpoch 15/30\n1597/1597 [==============================] - 29s 18ms/step - loss: 0.0079 - accuracy: 0.9987 - val_loss: 0.0274 - val_accuracy: 0.9975\n\nEpoch 00015: val_loss did not improve from 0.00059\n\nEpoch 00015: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.\nEpoch 16/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 7.1842e-04 - accuracy: 1.0000 - val_loss: 2.2670e-04 - val_accuracy: 1.0000\n\nEpoch 00016: val_loss improved from 0.00059 to 0.00023, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 17/30\n1597/1597 [==============================] - 29s 18ms/step - loss: 4.4948e-04 - accuracy: 0.9994 - val_loss: 
3.5735e-05 - val_accuracy: 1.0000\n\nEpoch 00017: val_loss improved from 0.00023 to 0.00004, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 18/30\n1597/1597 [==============================] - 29s 18ms/step - loss: 0.0146 - accuracy: 0.9994 - val_loss: 3.1464e-05 - val_accuracy: 1.0000\n\nEpoch 00018: val_loss improved from 0.00004 to 0.00003, saving model to cnn_xnatureV2_balanced_weight.hdf5\nEpoch 19/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 2.2207e-04 - accuracy: 1.0000 - val_loss: 7.5342e-05 - val_accuracy: 1.0000\n\nEpoch 00019: val_loss did not improve from 0.00003\nEpoch 20/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 2.3503e-04 - accuracy: 0.9994 - val_loss: 8.3691e-05 - val_accuracy: 1.0000\n\nEpoch 00020: val_loss did not improve from 0.00003\n\nEpoch 00020: ReduceLROnPlateau reducing learning rate to 5e-05.\nEpoch 21/30\n1597/1597 [==============================] - 28s 18ms/step - loss: 0.0067 - accuracy: 0.9987 - val_loss: 4.1686e-04 - val_accuracy: 1.0000\n\nEpoch 00021: val_loss did not improve from 0.00003\nEpoch 22/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 1.8912e-04 - accuracy: 1.0000 - val_loss: 5.6690e-04 - val_accuracy: 1.0000\n\nEpoch 00022: val_loss did not improve from 0.00003\nEpoch 23/30\n1597/1597 [==============================] - 28s 17ms/step - loss: 0.0146 - accuracy: 0.9994 - val_loss: 2.9878e-04 - val_accuracy: 1.0000\n\nEpoch 00023: val_loss did not improve from 0.00003\nEpoch 00023: early stopping\n"
],
[
"plot_history(history)",
"_____no_output_____"
]
],
[
[
"# 3. Testing model",
"_____no_output_____"
]
],
[
[
"# load the weights that yielded the best validation accuracy\n# model.load_weights('cnn_xnatureV2_balanced_weight.hdf5')\n# evaluate and print test accuracy\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('\\n', 'Test accuracy:', score[1])\n",
"\n Test accuracy: 1.0\n"
],
[
"# plotting some predictions\ny_pred = model.predict(x_test)\nfig = plt.figure(figsize=(16, 9))\nfor i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):\n    ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])\n    ax.imshow(np.squeeze(x_test[idx]),cmap='gray')\n    pred_idx = np.argmax(y_pred[idx])\n    true_idx = np.argmax(y_test[idx])\n    ax.set_title(\"{} ({})\".format(target_labels[pred_idx], target_labels[true_idx]),\n                 color=(\"green\" if pred_idx == true_idx else \"red\"))",
"_____no_output_____"
],
[
"\nplot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels,\n title='Confusion matrix, without normalization')\n\n# # Plot normalized confusion matrix\nplot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, normalize=True,\n title='Normalized confusion matrix')\n\nplt.show()\n\nfrom sklearn.metrics import classification_report\nprint(classification_report(y_test.argmax(axis=1), y_pred.argmax(axis=1)))",
"Confusion matrix, without normalization\nNormalized confusion matrix\n"
],
[
"#show classification errors\nei=y_test.argmax(axis=1)!=y_pred.argmax(axis=1)\nim_err=x_test[ei]\nact=y_test[ei]\npre=y_pred[ei]\nfor er,a,p in zip(im_err,act,pre):\n plt.title(target_labels[np.argmax(p)]+\"/\"+target_labels[np.argmax(a)])\n plt.imshow(er.reshape(SIZE,SIZE),cmap='gray')\n plt.show()",
"_____no_output_____"
]
],
[
[
"# 4. Model Validation\nI used KFold with K=10 and 10 epochs to validate the model; for each split I recompute the class weights. To evaluate the validation, the confusion matrix for each split is presented.\n\nThe final result was ... \n\n<span style=\"color:blue\">Accuracy mean: *99.698%* std: 0.381</span>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn.model_selection import KFold\nfrom keras import backend as K\nfrom sklearn.utils import class_weight\n\n\n\nno_of_classes = len(np.unique(labels))\nbatch_size = 32\nkfold = KFold(n_splits=10, shuffle=True, random_state=7)\ncvscores=[]\n\nfor train_index, test_index in kfold.split(X,y):\n print(\"TRAIN:\", len(train_index), \"TEST:\", len(test_index))\n x_train, x_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n x_train = x_train.astype('float32')/255\n x_test = x_test.astype('float32')/255\n reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=3, verbose=1, min_lr=0.0001)\n y_numbers=y_train.argmax(axis=1)\n class_weights = class_weight.compute_class_weight('balanced', np.unique(y_numbers), y_numbers)\n class_weights = dict(enumerate(class_weights))\n print(class_weight)\n model=get_model()\n history = model.fit(x_train,y_train,\n batch_size = 32,\n epochs=10,\n validation_split=0.2,\n class_weight=class_weights,\n callbacks = [reduce_lr],\n verbose=1, shuffle=True)\n # evaluate the model\n y_pred = model.predict(x_test)\n plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels,\n title='Confusion matrix, without normalization')\n\n # # Plot normalized confusion matrix\n plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, normalize=True,\n title='Normalized confusion matrix')\n\n plt.show()\n scores = model.evaluate(x_test, y_test, verbose=0)\n print(\"%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100))\n cvscores.append(scores[1] * 100)\nprint(np.mean(cvscores), np.std(cvscores))",
"TRAIN: 2682 TEST: 299\n<module 'sklearn.utils.class_weight' from '/opt/conda/lib/python3.6/site-packages/sklearn/utils/class_weight.py'>\nCompiled!\nTrain on 2145 samples, validate on 537 samples\nEpoch 1/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.9495 - accuracy: 0.7030 - val_loss: 0.2099 - val_accuracy: 0.9423\nEpoch 2/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.1669 - accuracy: 0.9217 - val_loss: 0.0913 - val_accuracy: 0.9832\nEpoch 3/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.1188 - accuracy: 0.9557 - val_loss: 0.1119 - val_accuracy: 0.9870\nEpoch 4/10\n2145/2145 [==============================] - 38s 18ms/step - loss: 0.0491 - accuracy: 0.9767 - val_loss: 0.1735 - val_accuracy: 0.9926\nEpoch 5/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0369 - accuracy: 0.9814 - val_loss: 0.0530 - val_accuracy: 0.9963\nEpoch 6/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0295 - accuracy: 0.9851 - val_loss: 0.0522 - val_accuracy: 0.9963\nEpoch 7/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0199 - accuracy: 0.9911 - val_loss: 0.1637 - val_accuracy: 0.9944\nEpoch 8/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0075 - accuracy: 0.9953 - val_loss: 0.4562 - val_accuracy: 0.9944\nEpoch 9/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0245 - accuracy: 0.9958 - val_loss: 0.2518 - val_accuracy: 0.9963\n\nEpoch 00009: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.\nEpoch 10/10\n2145/2145 [==============================] - 37s 17ms/step - loss: 0.0074 - accuracy: 0.9967 - val_loss: 0.1537 - val_accuracy: 0.9963\nConfusion matrix, without normalization\nNormalized confusion matrix\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5a5d757b15ac6dc1f2ae56ef7d1de91d9f2db9
| 61,137 |
ipynb
|
Jupyter Notebook
|
notebooks/Word2Vec.ipynb
|
mico-boje/document-summarizer
|
4b24994ed7889b41bca71367a24c15013837b9df
|
[
"FTL"
] | null | null | null |
notebooks/Word2Vec.ipynb
|
mico-boje/document-summarizer
|
4b24994ed7889b41bca71367a24c15013837b9df
|
[
"FTL"
] | null | null | null |
notebooks/Word2Vec.ipynb
|
mico-boje/document-summarizer
|
4b24994ed7889b41bca71367a24c15013837b9df
|
[
"FTL"
] | null | null | null | 117.12069 | 6,632 | 0.742725 |
[
[
[
"import gensim.downloader as api\nimport gensim\nfrom gensim.models import Phrases\nfrom gensim.models import KeyedVectors, Word2Vec\nimport numpy as np\nimport nltk\nfrom nltk.corpus import stopwords\nimport string\nfrom sklearn.metrics.pairwise import cosine_similarity\nimport networkx as nx\nimport ast\nimport json",
"_____no_output_____"
],
[
"filename = r'/home/miboj/NLP/document-summarizer/data/processed/articles.json'\nfile = open(filename, encoding='ascii', errors='ignore')\ntext = file.read()\nfile.close()\n\nd = ast.literal_eval(text)",
"_____no_output_____"
],
[
"# The articles were already read and parsed into `d` in the previous cell;\n# just keep the first ten as working samples\nsamples = d[0:10]",
"_____no_output_____"
],
[
"print(samples[1])",
"{'content': ['The Japan Air Self-Defense Force (JASDF) has inducted its first Kawasaki RC-2 ELINT platform (left) into service at Iruma AB in western Tokyo. The type will replace the older NAMC YS-11EB (right) in the role. ', 'pic.twitter.com/0cdb213Phq', ' MELBOURNE, Australia The Japan Air Self-Defense Force has inducted the first of a new intelligence-gathering aircraft into service, following a two-year flight test program.', ' The JASDF announced the induction of the RC-2 electronic intelligence, or elint, gathering aircraft at a ceremony held at Iruma Air Base in the western suburbs of the Japanese capital Tokyo on Oct. 1. ', \" The RC-2 is based on Kawasaki Heavy Industries' C-2 airlifter and has been heavily modified with multiple aircraft fairings that contain antennas for detecting, receiving and classifying electronic emissions. The aircraft made its maiden flight in early 2018, though the variant had been in development since at least 2015.\", ' Since that time, it underwent a series of flight tests conducted by the JASDF Air Development and Test Wing at Iruma, where the forces elint squadron is based. ', ' The RC-2 will replace the four NAMC YS-11EBs currently serving with the squadron, although its unknown if the new platform will replace the YS-11EBs on a one-for-one basis. The Defense Ministrys ', 'latest budget request', ', released on the same day of the RC-2s induction, seeks $67.2 million to acquire more of the specialized elint systems by purchasing an unspecified number of RC-2s.', ' Japan is also seeking to recapitalize its standoff jamming capability. The latest budget request sought $144.9 million to develop a new standoff jammer aircraft, with the accompanying graphic released by the ministry suggesting that it will also be based on the C-2. ', ' This aircraft will likely replace the two YS-11EAs and possibly the sole Kawasaki EC-1 in service with the JASDFs Electronic Warfare Squadron, which is also based at Iruma. 
', ' The EC-1 is based on the older Kawasaki C-1 that Japan is slowly replacing with the C-2. The latest budget request is seeking $487.5 million to acquire two more of the airlifters in the coming fiscal year. ', ' Japan has acquired the C-2 at a relatively slow rate, with seven aircraft funded for fiscal 2014 through fiscal 2018. Fiscal 2019 received no funding for the effort.', ' In recent years, the country has also flirted with the idea of buying Lockheed Martin C-130J Super Hercules airlifters from the United States as a cheaper option.', ' The U.S. ally is also seeking a further $47.4 million to develop a new elint collection system that will eventually go on a new platform to replace the four Lockheed EP-3C Orion aircraft currently operated by the Japan Maritime Self-Defense Force.']}\n"
],
[
"tokens_list = []\nfor i in d:\n for sen in i['content']:\n tokens_list.append(sen)",
"_____no_output_____"
],
[
"import time\nstart_time = time.time()\nsentences = []\nword_count = 0\nstpwrds = stopwords.words('english') + list(string.punctuation) + ['—', '“', '”', \"'\", \"’\"]\nfor e, i in enumerate(tokens_list):\n words = []\n a = nltk.word_tokenize(i)\n for word in a:\n if word not in stpwrds:\n words.append(word)\n word_count += 1\n sentences.append(words)\nprint(\"--- %s seconds ---\" % (time.time() - start_time))",
"--- 30.925939798355103 seconds ---\n"
],
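The tokenise-and-filter step above relies on NLTK's word tokenizer and stop-word list. The self-contained sketch below mimics the same idea with a regex tokenizer and a toy stop-word set (`stop_words` here is an invented subset, not NLTK's full English list).

```python
import re

# Toy stop-word subset standing in for NLTK's English list (assumption).
stop_words = {"the", "a", "of", "and"}

def tokenize(sentence):
    # Lowercase, split on non-alphanumeric runs, then drop stop words.
    words = re.findall(r"[A-Za-z0-9']+", sentence.lower())
    return [w for w in words if w not in stop_words]

print(tokenize("The induction of the RC-2 aircraft"))
# ['induction', 'rc', '2', 'aircraft']
```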
[
"print(len(sentences))\nprint(word_count)",
"103928\n1758362\n"
],
[
"\"\"\"\nsg - Training algorithm: 1 for skip-gram, 0 for CBOW\nhs - If 1, hierarchical softmax will be used for model training. If 0, and negative is non-zero, negative sampling will be used.\n\"\"\"\nmodel = Word2Vec(sentences, size=100, window=5, workers=12, sg=1, hs=1, compute_loss=True)",
"_____no_output_____"
],
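With `sg=1`, skip-gram trains on (centre, context) word pairs drawn from a sliding window over each sentence. A minimal sketch of that pair extraction (illustrative only, not gensim's internal implementation):

```python
def skipgram_pairs(tokens, window=2):
    # For each centre word, emit one (centre, context) pair per neighbour
    # inside the window on either side.
    pairs = []
    for i, centre in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((centre, tokens[j]))
    return pairs

print(skipgram_pairs(["japan", "air", "force"], window=1))
# [('japan', 'air'), ('air', 'japan'), ('air', 'force'), ('force', 'air')]
```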
[
"model.most_similar('weapon')",
"<ipython-input-9-ddf5759f98d5>:1: DeprecationWarning: Call to deprecated `most_similar` (Method will be removed in 4.0.0, use self.wv.most_similar() instead).\n model.most_similar('weapon')\n"
],
[
"model = model.wv",
"_____no_output_____"
],
[
"import re\ndef remove_empty_string(input_string):\n    # Trim trailing spaces where two space-terminated fragments meet\n    for e, i in enumerate(input_string):\n        try:\n            if i[-1] == ' ' and input_string[e+1][-1] == ' ':\n                input_string[e] = i.rstrip()\n        except IndexError:\n            print('Out of index')\n    joined_string = ''.join(input_string)\n    # Collapse runs of multiple spaces into one\n    joined_string = re.sub(r' {2,}', ' ', joined_string)\n    sentences = nltk.sent_tokenize(joined_string)\n    return sentences\n",
"_____no_output_____"
],
[
"raw_string = [\" ROME — Defying reports that their planned partnership is \", \"doomed to fail\", \", France’s Naval Group and \", \"Italy’s Fincantieri\", \" have announced a joint venture to build and export naval vessels. \", \" The two \", \"state-controlled shipyards\", \" said they were forming a 50-50 joint venture after months of talks to integrate their activities. The move comes as Europe’s fractured shipbuilding industry faces stiffer global competition. \", \" The firms said in a statement that the deal would allow them to “jointly prepare winning offers for binational programs and export market,” as well as create joint supply chains, research and testing. \", \" Naval Group and Fincantieri first announced talks on cooperation last year after the latter negotiated a controlling share in French shipyard STX. But the deal was reportedly losing momentum due to resistance from French industry and a political row between France and Italy over migrants. \", \" The new deal falls short of the 10 percent share swap predicted by French Economy and Finance Minister Bruno Le Maire earlier this year, and far short of the total integration envisaged by Fincantieri CEO Giuseppe Bono. \", \" The statement called the joint venture the “first steps” toward the creation of an alliance that would create “a more efficient and competitive European shipbuilding industry.”\", \" Naval Group CEO Hervé Guillou, speaking at the Euronaval trade expo in Paris on Oct. 
24, said the alliance is based on “two countries sharing a veritable naval ambition.”\", \" The joint venture is necessary because the “context of the global market has changed drastically,” he added, specifically mentioning new market entrants Russia, China, Singapore, Ukraine, India and Turkey.\", \"Sign up for the Early Bird Brief, the defense industry's most comprehensive news and information, straight to your inbox.\", \"By giving us your email, you are opting in to the Early Bird Brief.\", \" When asked about an initial product to be tackled under the alliance, Guillou acknowledged: “The answer is simple: there is nothing yet.”\", \" However, the firms said they are working toward a deal to build four logistics support ships for the French Navy, which will be based on an Italian design. \", \"Competition flares up for the follow-on portion of a deal previously won by the French shipbuilder.\", \" The firms also plan to jointly bid next year on work for midlife upgrades for Horizon frigates, which were built by France and Italy and are in service with both navies. The work would include providing a common combat management system. \", \" The statement was cautious about future acceleration toward integration. “A Government-to-Government Agreement would be needed to ensure the protection of sovereign assets, a fluid collaboration between the French and Italian teams and encourage further coherence of the National assistance programs, which provide a framework and support export sales,” the statement said.\", \" But the firms were optimistic the deal would be “a great opportunity for both groups and their eco-systems, by enhancing their ability to better serve the Italian and French navies, to capture new export contracts, to increase research funding and, ultimately, improve the competitiveness of both French and Italian naval sectors.”\", \" \", \"Sebastian Sprenger\", \" in Paris contributed to this report.\"]\nsentences = remove_empty_string(raw_string)",
"_____no_output_____"
],
[
"# Sanity check: dimensionality of a single word vector\na = model['ROME']\na.shape",
"_____no_output_____"
],
[
"def get_embedding(sentences):\n    # Represent each sentence as the average of its in-vocabulary word vectors\n    embeddings = []\n    for i in sentences:\n        temp = []\n        for word in nltk.word_tokenize(i):\n            if word in model.vocab:\n                temp.append(model[word])\n        if temp:\n            embeddings.append(np.array(sum(temp) / len(temp)))\n        else:\n            embeddings.append(np.zeros(model.vector_size))\n    return np.array(embeddings)",
"_____no_output_____"
],
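`get_embedding` represents each sentence as the average of its in-vocabulary word vectors. The toy sketch below shows the same idea without gensim; `toy_vectors` is an invented two-dimensional vocabulary, not real embeddings.

```python
# Invented 2-D "word vectors" standing in for the trained Word2Vec model.
toy_vectors = {
    "navy": [1.0, 0.0],
    "ship": [0.8, 0.2],
    "tax":  [0.0, 1.0],
}

def sentence_vector(tokens, vectors):
    # Average the vectors of in-vocabulary tokens; unknown words are skipped.
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        return [0.0, 0.0]
    return [sum(v[d] for v in known) / len(known) for d in range(len(known[0]))]

print(sentence_vector(["the", "navy", "ship"], toy_vectors))  # roughly [0.9, 0.1]
```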
[
"def get_sim_matrix(sentences, sentence_vectors):\n    # Pairwise cosine similarity between sentence vectors (diagonal left at 0)\n    sim_mat = np.zeros([len(sentences), len(sentences)])\n    for i in range(len(sentences)):\n        for j in range(len(sentences)):\n            if i != j:\n                sim_mat[i][j] = cosine_similarity(sentence_vectors[i].reshape(1, -1), sentence_vectors[j].reshape(1, -1))[0, 0]\n    return sim_mat",
"_____no_output_____"
],
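scikit-learn's `cosine_similarity` computes dot(a, b) / (|a| |b|). A plain-Python equivalent of the similarity-matrix construction above, on invented vectors:

```python
import math

def cosine(a, b):
    # Cosine similarity with a zero-vector guard.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vecs = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]  # invented sentence vectors
sim = [[0.0 if i == j else cosine(vecs[i], vecs[j])
        for j in range(len(vecs))] for i in range(len(vecs))]
print(round(sim[0][1], 4))  # 0.7071
```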
[
"def get_pagerank(sim_mat):\n nx_graph = nx.from_numpy_array(sim_mat)\n scores = nx.pagerank(nx_graph)\n return scores",
"_____no_output_____"
],
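networkx's `pagerank` is, at heart, damped power iteration over the out-degree-normalised similarity graph. A hedged, dependency-free sketch (the 3x3 matrix below is made up for illustration, not derived from real embeddings):

```python
def pagerank(sim, d=0.85, iters=50):
    # Damped power iteration: each node redistributes its score along its
    # outgoing (row-normalised) edge weights.
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            rank = 0.0
            for i in range(n):
                out = sum(sim[i])
                if out:
                    rank += scores[i] * sim[i][j] / out
            new.append((1 - d) / n + d * rank)
        scores = new
    return scores

sim = [
    [0.0, 0.9, 0.1],
    [0.9, 0.0, 0.3],
    [0.1, 0.3, 0.0],
]
scores = pagerank(sim)
# Sentence 2, with the weakest links, ends up with the lowest score.
print(scores)
```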
[
"def get_summary(num_sentences, scores, sentences):\n    # Keep the highest-scoring sentences, concatenated in score order\n    ranked_sentences = sorted(((scores[i], s) for i, s in enumerate(sentences)), reverse=True)\n    summary = ''\n    for i in range(num_sentences):\n        summary += ranked_sentences[i][1]\n        summary += \" \"\n    return summary",
"_____no_output_____"
],
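`get_summary` then simply keeps the k highest-scoring sentences. A tiny worked example with invented scores and sentences:

```python
# Invented PageRank scores and sentences for illustration.
scores = {0: 0.2, 1: 0.5, 2: 0.3}
sentences = ["Alpha.", "Bravo.", "Charlie."]

# Pair each sentence with its score, sort descending, keep the top two.
ranked = sorted(((scores[i], s) for i, s in enumerate(sentences)), reverse=True)
summary = " ".join(s for _, s in ranked[:2])
print(summary)  # Bravo. Charlie.
```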
[
"a = \"\"\"The statement called the joint venture the “first steps” toward the creation of an alliance that would create “a more efficient and competitive European shipbuilding industry.” Naval Group CEO Hervé Guillou, speaking at the Euronaval trade expo in Paris on Oct. 24, said the alliance is based on “two countries sharing a veritable naval ambition.” The joint venture is necessary because the “context of the global market has changed drastically,” he added, specifically mentioning new market entrants Russia, China, Singapore, Ukraine, India and Turkey.Sign up for the Early Bird Brief, the defense industry's most comprehensive news and information, straight to your inbox.By giving us your email, you are opting in to the Early Bird Brief.\nBut the firms were optimistic the deal would be “a great opportunity for both groups and their eco-systems, by enhancing their ability to better serve the Italian and French navies, to capture new export contracts, to increase research funding and, ultimately, improve the competitiveness of both French and Italian naval sectors.” Sebastian Sprenger in Paris contributed to this report.\nWhen asked about an initial product to be tackled under the alliance, Guillou acknowledged: “The answer is simple: there is nothing yet.” However, the firms said they are working toward a deal to build four logistics support ships for the French Navy, which will be based on an Italian design.\nThe firms also plan to jointly bid next year on work for midlife upgrades for Horizon frigates, which were built by France and Italy and are in service with both navies.\"\"\"",
"_____no_output_____"
],
[
"summary_samples = []\nsummary_len = []\nfor i in samples:\n i = remove_empty_string(i['content'])\n embeddings = get_embedding(i)\n sim_mat = get_sim_matrix(i, embeddings)\n scores = get_pagerank(sim_mat)\n sentence_length = len(i)\n summary = get_summary(int(sentence_length*0.3), scores, i)\n summary_samples.append(summary)\n summary_len.append(int(sentence_length*0.3))",
"_____no_output_____"
],
[
"sorted_summaries = []\nfor e, i in enumerate(summary_samples):\n    summary_sents = nltk.sent_tokenize(i)\n    original_sents = remove_empty_string(samples[e]['content'])\n    # Restore the original article order for the selected sentences\n    res = [s for s in original_sents if s in summary_sents]\n    sorted_summaries.append(res)",
"Out of index\n"
],
[
"for e, i in enumerate(sorted_summaries):\n print(e, \": \")\n print(\"len original: \", len(remove_empty_string(samples[e]['content'])))\n print(\"Summary len: \", summary_len[e])\n summary = \"\"\n for sen in i:\n summary += sen\n summary += \" \"\n print(summary)",
"0 : \nlen original: 17\nSummary len: 5\nI could not be more pleased with the confirmation of Gen. Allvin as the new vice chief of staff, Brown said. Allvin will succeed the current vice chief, Gen. Seve Wilson, who is expected to retire after 39 years in uniform. Wilson has served in his role since July 2016 and is the longest-serving vice chief in Air Force history. Gen. David Allvin is an experienced pilot who has commanded a n umber of wings, including the 438th Air Expeditionary Wing in Afghanistan. Allvin also commanded the 618th Air and Space Operations Center at Scott Air Force Base, Illinois, and served as director of strategy, plans and policy at U.S. European Commands headquarters at Stuttgart-Vaihingen, Germany. \n1 : \nlen original: 18\nSummary len: 5\npic.twitter.com/0cdb213Phq MELBOURNE, Australia The Japan Air Self-Defense Force has inducted the first of a new intelligence-gathering aircraft into service, following a two-year flight test program. The latest budget request sought $144.9 million to develop a new standoff jammer aircraft, with the accompanying graphic released by the ministry suggesting that it will also be based on the C-2. This aircraft will likely replace the two YS-11EAs and possibly the sole Kawasaki EC-1 in service with the JASDFs Electronic Warfare Squadron, which is also based at Iruma. The latest budget request is seeking $487.5 million to acquire two more of the airlifters in the coming fiscal year. The U.S. ally is also seeking a further $47.4 million to develop a new elint collection system that will eventually go on a new platform to replace the four Lockheed EP-3C Orion aircraft currently operated by the Japan Maritime Self-Defense Force. \n2 : \nlen original: 14\nSummary len: 4\nWe are helping the Air Force optimize the value of our 5G and other networking capabilities at these 3 bases and stand ready to work with them to extend these services across the entirety of the Air Force if they so choose. 
The Air Force made the award using so-called Other Transaction Agreements under its Enterprise IT as a Service (EITaaS) program, which aims to increase network speeds and modernize IT infrastructure at the services bases to support multi-domain operations. We think it is vital to test commercially provided services like 5G and software-based networking-as-a-service capabilities as we explore ways to help us innovate and improve our global air, space and cyber readiness, said Col. Justin K. Collins, deputy of the Air Forces enterprise IT & cyber infrastructure division. \n3 : \nlen original: 10\nSummary len: 3\n WASHINGTON The U.S. Air Force has awarded a nearly $40 million contract toParsons to produce ground vehicles that can clear mines or unexploded ordnance from airfields using a laser. The system is made up of a Cougar MRAP, Parsons' three-kilowatt ZEUS laser weapon, and an arm assembly that can move debris or other objects out of the way. The idea behind the RADBO is to allow airmen to clear threats from current or future airfields hardly the laser warfare capability sought by Pentagon planners for decades, but still a potentially important step, as it represents the first DoD ground-based laser system to be ordered into full production. \n4 : \nlen original: 15\nSummary len: 4\nIn order to turn a manned aircraft into an unmanned one, AFRL simply replaces the human pilot with a robot who interacts with the aircraft controls the same way a human would: it can pull the yoke, press pedals to control rudders and brakes, adjust the throttle and flip switches. And if users determine that they want to return the aircraft to a manned mission, ROBOpilot is simply removed and the pilots' seats are reinstalled. Since then, ROBOpilot has been cleared to fly again and installed in a new Cessna 206. \n5 : \nOut of index\nlen original: 39\nSummary len: 11\nWASHINGTON Building on its Switchblade 300 loitering missile legacy with the U.S. 
Army, AeroVironment is releasing a family of capabilities to include its new Switchblade 600, a larger version suited to go up against armored targets at greater ranges in denied and degraded environments. AeroVironment has provided the tube-launched, rucksack-portable Switchblade to the Army for roughly a decade, delivering thousands of them into theater, but the company sealed the largest loitering munitions deal to date with the service in May a $146 million contract, funded at $76 million for the first year, to supply the 300 version of the system for the Lethal Miniature Aerial Missile Systems program. The ability to identify a threat on the battlefield, assess it, neutralize the threat with an extremely high degree of precision, with low to no collateral damage, while always having the option of waving off the mission and reengaging the same or different target, is at the core of our solution sets and capabilities, Nawabi said, and were going beyond that. Department of Defense customers wanted the same features of the 300, but with greater effects, Todd Hanning, AeroVironments product line manager for tactical missile systems, said during the same event. This all-in-one, man-portable solution includes everything required to successfully launch, fly, track and engage non-line-of-sight targets with lethal effects. The 50-pound system can be set up and operational in less than 10 minutes and is designed to be capable of launching from ground, air or mobile platforms, providing superior force overmatch while minimizing exposure to enemy direct and indirect fires, Hanning said. We believe itll be the smartest loitering missile in the market.Battlefield payload delivery, including for lifesaving medical supply, is likely going to be an option commanders regularly seek from drones. AeroVironment began developing the 600 as a new class of loitering missiles to meet a set of requirements in an Army development program called Single Multi-Mission Attack Missile. 
AeroVironment is also continuing to find ways to integrate Switchblade into air and ground platforms. AeroVironment is also teaming with Kratos Defense and Security Systems to demonstrate a high-speed, long-range unmanned combat air vehicle that serves as a mothership to deliver large quantities of Switchblade 300s that can provide a mesh network of information back to a ground control station to tactically execute multiple attack scenarios cooperatively and to overwhelm and disable enemy systems, Hanning said. While AeroVironment is not one of the initial companies developing capabilities within the Armys Future Vertical Lift Air-Launched Effects, or FVL ALE, portfolio, we definitely see a way for AeroVironment to participate in that and really be a player in that market knowing that Switchblade 600 is definitely designed for air-launched effects, air-launched capability, Hush said, and thats something that well continue to work on and look at the opportunity to be a part of that effort. \n6 : \nlen original: 29\nSummary len: 8\nThey just learned that that delivery was canceled due to electrical problems with the aircraft, she said to Ellen Lord, the Pentagons top acquisition official. In a statement, Boeing said a minor electrical issue on a single KC-46 was found by the company during acceptance tests. Boeing expects to conclude this activity within the next several days and is working with the Air Force on a new delivery schedule. But Shaheen, speaking at the hearing, expressed frustration with Boeing over its repeated difficulties designing and building the new tankers, with challenges over the life span of the program that have included wiring issues and problems with the vision system that allows boom operators to safely refuel other planes. And yet weve got another aircraft thats not being delivered because of another problem. 
Lord responded that KC-46 problems have included design and engineering flaws as well as issues occurring during the manufacturing of the jet. However, she said the root cause of the problems is the fixed-price firm contract used for the KC-46 program, which makes Boeing financially responsible for any costs beyond the $4.9 billion ceiling. \n7 : \nlen original: 17\nSummary len: 5\n LONDON British efforts to introduce a new family of long-endurance, medium-altitude drones has moved a step closer with an announcement by the Ministry of Defence Sept. 28 that the first General Atomics Protector RG Mk1 off the production had made its maiden flight. The flight comes just over two months after the British announced they had inked a 65 million (U.S. $83 million) deal with General Atomics Aeronautical Systems to supply the first three of an expected fleet of at least 16 drones. A commitment for the additional drones could come in April next year. Crucially, the machine is also in line to be approved to fly in non-segregated airspace in places like the U.K. British Defence Minister Jeremy Quin said the inaugural flight of the production drone was a welcome step in development. For the moment the first Protector will stay in the United States to support systems testing as part of an MoD, U.S. Air Force and General Atomics team. \n8 : \nlen original: 16\nSummary len: 4\n WASHINGTON The U.S. State Department has preemptively cleared Switzerland to purchase the F-35A joint strike fighter and F/A-18E/F Super Hornet, just days after a public vote narrowly okd the Swiss government to move forward with a planned procurement of new fighter aircraft. Rather, the announcement is a bureaucratic move by State and DSCA to make sure that, should the jets be selected, there will not be delays in getting the stealth fighter cleared. A national referendum on Sept. 
27 approved the plan to go ahead with the procurement, along with $2 billion for a complementary ground-based air defense system, was narrowly approved by 50.1 percent of voters, a margin of just 8,670 votes. The government will then evaluate the bids throughout the first half of 2021 and make a decision on the aircraft type and missile defense hardware by June. \n9 : \nlen original: 17\nSummary len: 5\n COLOGNE, Germany Swiss voters have approved a government plan to spend $6.5 billion on new fighter aircraft by a margin of 8,670 votes, with the two U.S. vendors in the race feeling the backlash of anti-Trump sentiments. The Swiss legislature last week approved the budget for the Air 2030 modernization program, which includes $6.5 billion for 30-40 new aircraft and $2 billion for a complementary ground-based, air defense system.The Swiss have kicked off flying season for the five types of combat aircraft under consideration to replace the country's aging fleet, with several demonstrations scheduled between now and early July. Amherd stressed that the aircraft budget is to be seen as a ceiling. The government will then evaluate the bids throughout the first half of 2021 and make a decision on the aircraft type and missile defense hardware by June. Opponents of the plan could still derail it by seeking another referendum, a step that would require 100,000 signatures and could take years to unfold. \n"
],
[
"for e, i in enumerate(samples):\n print(e)\n sample = \"\"\n temp = remove_empty_string(i['content'])\n for t in temp:\n sample += t\n sample += \" \"\n print(sample)",
"0\n Lt. Gen. David Allvin was confirmed by the Senate to be the Air Forces next vice chief of staff in a late-night vote Wednesday. Allvins nomination to become vice chief and receive his fourth star was approved unanimously. In a Thursday release, Chief of Staff Gen. Charles CQ Brown applauded Allvins confirmation. I could not be more pleased with the confirmation of Gen. Allvin as the new vice chief of staff, Brown said. When it comes to leading at the highest levels of joint strategy and policy, and as someone who sets the standard for critical collaboration with our allies and partners, there is no one more qualified for the role of vice chief. Allvin will succeed the current vice chief, Gen. Seve Wilson, who is expected to retire after 39 years in uniform. Wilson has served in his role since July 2016 and is the longest-serving vice chief in Air Force history. Allvin now serves director of strategy, plans and policy for the Joint Staff at the Pentagon, and is a senior member of the United States delegation to the United Nations Military Staff Committee.Lt. Gen. David Allvin is an experienced pilot who has commanded a n umber of wings, including the 438th Air Expeditionary Wing in Afghanistan. He is a command pilot with more than 4,600 flight hours in more than 30 aircraft, including the F-15, F-16, KC-135, C-17 and C-130. He has 800 flight hours as a test pilot. His past commands include the 97th Air Mobility Wing at Altus Air Force Base in Oklahoma from 2007 to 2009. Allvin also commanded the 438th Air Expeditionary Wing in Kabul, Afghanistan, in 2010 and 2011, during which time he also served as the commanding general of NATO Air Training Command. Allvin also commanded the 618th Air and Space Operations Center at Scott Air Force Base, Illinois, and served as director of strategy, plans and policy at U.S. European Commands headquarters at Stuttgart-Vaihingen, Germany. 
He graduated from the Air Force Academy in 1986.Stephen Losey covers leadership and personnel issues as the senior reporter for Air Force Times. He comes from an Air Force family, and his investigative reports have won awards from the Society of Professional Journalists. He has traveled to the Middle East to cover Air Force operations against the Islamic State. \n1\nThe Japan Air Self-Defense Force (JASDF) has inducted its first Kawasaki RC-2 ELINT platform (left) into service at Iruma AB in western Tokyo. The type will replace the older NAMC YS-11EB (right) in the role. pic.twitter.com/0cdb213Phq MELBOURNE, Australia The Japan Air Self-Defense Force has inducted the first of a new intelligence-gathering aircraft into service, following a two-year flight test program. The JASDF announced the induction of the RC-2 electronic intelligence, or elint, gathering aircraft at a ceremony held at Iruma Air Base in the western suburbs of the Japanese capital Tokyo on Oct. 1. The RC-2 is based on Kawasaki Heavy Industries' C-2 airlifter and has been heavily modified with multiple aircraft fairings that contain antennas for detecting, receiving and classifying electronic emissions. The aircraft made its maiden flight in early 2018, though the variant had been in development since at least 2015. Since that time, it underwent a series of flight tests conducted by the JASDF Air Development and Test Wing at Iruma, where the forces elint squadron is based. The RC-2 will replace the four NAMC YS-11EBs currently serving with the squadron, although its unknown if the new platform will replace the YS-11EBs on a one-for-one basis. The Defense Ministrys latest budget request, released on the same day of the RC-2s induction, seeks $67.2 million to acquire more of the specialized elint systems by purchasing an unspecified number of RC-2s. Japan is also seeking to recapitalize its standoff jamming capability. 
The latest budget request sought $144.9 million to develop a new standoff jammer aircraft, with the accompanying graphic released by the ministry suggesting that it will also be based on the C-2. This aircraft will likely replace the two YS-11EAs and possibly the sole Kawasaki EC-1 in service with the JASDFs Electronic Warfare Squadron, which is also based at Iruma. The EC-1 is based on the older Kawasaki C-1 that Japan is slowly replacing with the C-2. The latest budget request is seeking $487.5 million to acquire two more of the airlifters in the coming fiscal year. Japan has acquired the C-2 at a relatively slow rate, with seven aircraft funded for fiscal 2014 through fiscal 2018. Fiscal 2019 received no funding for the effort. In recent years, the country has also flirted with the idea of buying Lockheed Martin C-130J Super Hercules airlifters from the United States as a cheaper option. The U.S. ally is also seeking a further $47.4 million to develop a new elint collection system that will eventually go on a new platform to replace the four Lockheed EP-3C Orion aircraft currently operated by the Japan Maritime Self-Defense Force. \n2\n WASHINGTON AT&T will deliver network tools and 5G to three U.S. Air Force bases, the telecommunications giant announced Wednesday. The company will provide the bases with its networking-as-a-service capabilities to 24,000 personnel across Buckley Air Force Base, Colo., Joint Base Elmendorf-Richardson, Alaska, and Offutt Air Force Base, Neb. The company has completed 5G system design across the installations and expects to complete delivery of the services by the end of 2021. Were proud and honored to bring AT&T 5G and other highly innovative commercial networking-as-a-service capabilities to the Air Force, said Anne Chow, chief executive officer of AT&T Business. 
We are helping the Air Force optimize the value of our 5G and other networking capabilities at these 3 bases and stand ready to work with them to extend these services across the entirety of the Air Force if they so choose. The Air Force made the award using so-called Other Transaction Agreements under its Enterprise IT as a Service (EITaaS) program, which aims to increase network speeds and modernize IT infrastructure at the services bases to support multi-domain operations. According to the news release, AT&T 5G and networking-as-a-service capabilities will be able to support the Air Forces efforts on Internet of Things devices and power everything from augmented and virtual reality, robotics, drones and network edge storage and computing. We think it is vital to test commercially provided services like 5G and software-based networking-as-a-service capabilities as we explore ways to help us innovate and improve our global air, space and cyber readiness, said Col. Justin K. Collins, deputy of the Air Forces enterprise IT & cyber infrastructure division. We expect 5G service will help us improve the user experience and support a broad array of use cases that can enhance mission effectiveness. AT&T is also providing Base Area Network, Wide Area Network, telephony, internet access and highly secure interoperability with legacy systems at the three bases. Meanwhile, the Defense Department is also expected to award 5G contracts to service providers later this year at military bases across the United States. At least one of the bases, Nellis Air Force Base in Nevada, has decided on a vendor but has not announced the winner publicly.Andrew Eversden is a federal IT and cybersecurity reporter for the Federal Times and Fifth Domain. He previously worked as a congressional reporting fellow for the Texas Tribune and Washington intern for the Durango Herald. Andrew is a graduate of American University. \n3\n WASHINGTON The U.S. 
Air Force has awarded a nearly $40 million contract toParsons to produce ground vehicles that can clear mines or unexploded ordnance from airfields using a laser. The package covers the procurement of 13 Recovery of Airbase Denied by Ordnance (RADBO) vehicles, as well as three spares. The system is made up of a Cougar MRAP, Parsons' three-kilowatt ZEUS laser weapon, and an arm assembly that can move debris or other objects out of the way. The idea behind the RADBO is to allow airmen to clear threats from current or future airfields hardly the laser warfare capability sought by Pentagon planners for decades, but still a potentially important step, as it represents the first DoD ground-based laser system to be ordered into full production. The service awarded Parsons the sole-source contract on Sept. 23. Work will be performed in Huntsville, Ala., with a completion date of Sept. 2023. According to a 2018 video from the Air Forces Installation and Mission Support Center, the majority of development work on the RADBO design was done at the Armys Redstone Arsenal near Huntsville. Parsons claims the ZEUS design can hit targets more than 300 meters away from the vehicle and is powerful enough to detonate small submunitions from cluster bombs, land mines, general purposed bombs and thick-cased artillery rounds, per a company announcement. This is Parsons innovation: delivering a game changing warfighting product, Hector Cuevas, Parsons executive vice president of missile defense and C5ISR, said in a statement. Were proud to partner with the Air Force in deploying this critical force protection and mission enabling technology that will greatly increase safe and effective explosive ordnance disposal operations.Aaron Mehta is Deputy Editor and Senior Pentagon Correspondent for Defense News, covering policy, strategy and acquisition at the highest levels of the Department of Defense and its international partners. 
\n4\n A developmental robot pilot that transforms manned aircraft into unmanned systems is flying again after the Air Force Research Laboratory took its ROBOpilot out for a test flight at Dugway Proving Ground, Utah, Sept. 24. ROBOpilots name belies the simplicity of the program. In order to turn a manned aircraft into an unmanned one, AFRL simply replaces the human pilot with a robot who interacts with the aircraft controls the same way a human would: it can pull the yoke, press pedals to control rudders and brakes, adjust the throttle and flip switches. In addition to the robots own internal GPS and inertial measurement unit, the system scans the gauges on the dashboard for information about the aircraft and its position, processing that information with a computer to independently fly the plane. Importantly, ROBOpilot requires no permanent modifications. All operators need to do is remove the pilots' seats and replace them with ROBOpilot. And if users determine that they want to return the aircraft to a manned mission, ROBOpilot is simply removed and the pilots' seats are reinstalled. The robotic system is the result of a Small Business Innovative Research (SBIR) award granted to DZYNE Technologies by the AFRLs Center for Rapid Innovation (CRI). Despite a successful first flight in August 2019, the system was later grounded after it maintained damage during a landing mishap. The CRI and DZYNE team analyzed the findings and incorporated the recommendations to ensure the success of this latest test, said Marc Owens, CRIs program manager for ROBOpilot. We determined the cause of the mishap, identified the best course of corrective action and were very pleased to be flight testing again. Since then, ROBOpilot has been cleared to fly again and installed in a new Cessna 206. On Sept. 24, the system returned to the skies for a 2.2 hour test flight over Utah. 
Since this is a completely new build with a different Cessna 206, we re-accomplished the flight test points completed on our first flight last year, Owen explained. ROBOpilot is too good an idea to let the mishap derail the development of this technology.Get a bi-weekly update on the challenges and opportunities surrounding the use of drones.For more newsletters, click here \n5\nOut of index\nCORRECTION - Blackwing is a reconnaissance system. The dash speed of the Switchblade 600 is 115 mph. WASHINGTON Building on its Switchblade 300 loitering missile legacy with the U.S. Army, AeroVironment is releasing a family of capabilities to include its new Switchblade 600, a larger version suited to go up against armored targets at greater ranges in denied and degraded environments. AeroVironment has provided the tube-launched, rucksack-portable Switchblade to the Army for roughly a decade, delivering thousands of them into theater, but the company sealed the largest loitering munitions deal to date with the service in May a $146 million contract, funded at $76 million for the first year, to supply the 300 version of the system for the Lethal Miniature Aerial Missile Systems program. Our family of loitering missile systems is redefining and disrupting a multibillion-dollar missiles market, AeroVironment CEO Wahid Nawabi told reporters during a Sept. 30 media event. The family also includes Blackwing, a loitering reconnaissance system that can be deployed from a submarine while submerged and used in an underwater air-delivery canister. The ability to identify a threat on the battlefield, assess it, neutralize the threat with an extremely high degree of precision, with low to no collateral damage, while always having the option of waving off the mission and reengaging the same or different target, is at the core of our solution sets and capabilities, Nawabi said, and were going beyond that. 
Department of Defense customers wanted the same features of the 300, but with greater effects, Todd Hanning, AeroVironments product line manager for tactical missile systems, said during the same event. The 600 delivers with enhanced effects, greater standoff range and extended endurance, Hanning said. This all-in-one, man-portable solution includes everything required to successfully launch, fly, track and engage non-line-of-sight targets with lethal effects. The 50-pound system can be set up and operational in less than 10 minutes and is designed to be capable of launching from ground, air or mobile platforms, providing superior force overmatch while minimizing exposure to enemy direct and indirect fires, Hanning said. The new version can fly for 40 minutes with a range of more than 40 kilometers. The missile exceeds a 115 mph dash speed and carries an anti-armor warhead designed to neutralize armored vehicles without the need for external intelligence, surveillance and reconnaissance or fires assets. The new system comes with a touchscreen tablet-based fire control system with an option to pilot the vehicle manually or autonomously. The missile is secured through onboard encrypted data links and Selective Availability Anti-Spoofing Module GPS. The Switchblade 600 is also equipped with a patented wave-off capability where operators can abort missions at any time and recommit. From [artificial intelligence] to autonomy, were not stopping there. Were investing in future technologies like edge computing and artificial intelligence engines, latest-gen processing with massive computing power, Hanning said. We believe itll be the smartest loitering missile in the market.Battlefield payload delivery, including for lifesaving medical supply, is likely going to be an option commanders regularly seek from drones. 
AeroVironment began developing the 600 as a new class of loitering missiles to meet a set of requirements in an Army development program called Single Multi-Mission Attack Missile. But according to Brett Hush, the companys senior general manager of product line management for tactical missile systems, weve evolved beyond that. Other customers, including the U.S. Marine Corps and a number of DoD customers, have since adopted similar requirements, he said. Weve been developing very closely with a number of DoD customers, Hush said, The only one that we can talk about publicly at this point in time is the U.S. Marine Corps program, of which we are one of the competitors in the phase one development demonstration. He added there would be a fly-off in January followed by a downselect to a single supplier. The company has had a rigorous test schedule over the past several years for the Switchblade 600, according to Hanning. Most of that testing was ground-launched against both fixed and moving targets. I think we are up to about over 60 flights in our test program, he added, \"and well continue to do that through this next year. Then the 600 will progress into both maritime and aerial environments, Hanning said. AeroVironment is also continuing to find ways to integrate Switchblade into air and ground platforms. The company continues to team up with General Dynamics Land Systems to offer an integrated solution as part of its offering to the Armys Optionally Manned Fighting Vehicle competition. AeroVironment is also teaming with Kratos Defense and Security Systems to demonstrate a high-speed, long-range unmanned combat air vehicle that serves as a mothership to deliver large quantities of Switchblade 300s that can provide a mesh network of information back to a ground control station to tactically execute multiple attack scenarios cooperatively and to overwhelm and disable enemy systems, Hanning said. 
Initial air-launch testing will begin at the start of next year, Hush said. While AeroVironment is not one of the initial companies developing capabilities within the Armys Future Vertical Lift Air-Launched Effects, or FVL ALE, portfolio, we definitely see a way for AeroVironment to participate in that and really be a player in that market knowing that Switchblade 600 is definitely designed for air-launched effects, air-launched capability, Hush said, and thats something that well continue to work on and look at the opportunity to be a part of that effort. We definitely see its capabilities are directly aligned with that fight and with those platforms. When asked if the company submitted an offering to the ALE development competition, Nawabi said: Im not in a position to be able to comment on the specific details due to the competitive nature of the deal, but we believe that we have a lot to offer for the ALE program and initiative as a whole. I will keep you updated in the future.Jen Judson is the land warfare reporter for Defense News. She has covered defense in the Washington area for nearly a decade. She was previously a reporter at Politico and Inside Defense. She won the National Press Club's best analytical reporting award in 2014 and was named the Defense Media Awards' best young defense journalist in 2018. \n6\n WASHINGTON The U.S. Air Force halted a delivery of the KC-46 yet again after problems with the electrical system were found on one new tanker slated to make its way to the service. The issue was first disclosed during an Oct. 1 hearing of the Senate Armed Services Committee, when Sen. Jeanne Shaheen, D-N.H., said that a KC-46 that was supposed to have been delivered Sept. 25 by Boeing to Pease Air National Guard Base had been delayed. They just learned that that delivery was canceled due to electrical problems with the aircraft, she said to Ellen Lord, the Pentagons top acquisition official. 
In a statement, Boeing said a minor electrical issue on a single KC-46 was found by the company during acceptance tests. In flight, one of the radar warning receivers is indicating a fault through the planes fault management system, Boeing spokesman Larry Chambers said. We think it may be a poor electrical connection that needs to be re-seated. We are currently evaluating a fix. Resolving this has caused a minor delay to delivery of this single airplane. Boeing expects to conclude this activity within the next several days and is working with the Air Force on a new delivery schedule. The issue is not a design or safety-of-flight issue that would pose risk to the aircrew, he added. But Shaheen, speaking at the hearing, expressed frustration with Boeing over its repeated difficulties designing and building the new tankers, with challenges over the life span of the program that have included wiring issues and problems with the vision system that allows boom operators to safely refuel other planes. Ive spoken to a whole number of officials from Boeing from our military leadership as recently as last week with Gen. [Jacqueline] Van Ovost, who is the head of Air Mobility Command, all of whom have assured me that weve had good conversations between the [Department of Defense] and Boeing, and that the problems are being worked out. Were not going to continue to see these challenges, Shaheen said. And yet weve got another aircraft thats not being delivered because of another problem. So how do we fix this? Because it is an ongoing challenge thats affecting our ability to our national security, long term if we dont get these refueling tankers up and running. Lord responded that KC-46 problems have included design and engineering flaws as well as issues occurring during the manufacturing of the jet. The KC 46 has been an extremely problematical program. I speak with Leanne Caret, the CEO of the defense side of Boeing, on a regular basis about it, Lord said. 
One issue is frankly the technical solution. That was the original design [and] is now being redesigned, but also we have had a myriad of manufacturing issues with [foreign object debris] and other issues. However, she said the root cause of the problems is the fixed-price firm contract used for the KC-46 program, which makes Boeing financially responsible for any costs beyond the $4.9 billion ceiling. So far, Boeing has spent more than $4.7 billion in company funds on the KC-46 program almost equivalent to the Air Forces own investment in the program. The Air Force plans to buy 179 tankers, 38 of which have already been delivered to the service. Seven KC-46s have gone to Pease ANGB. This story is developing. Stay tuned for updates.Valerie Insinna is Defense News' air warfare reporter. She previously worked the Navy/congressional beats for Defense Daily, which followed almost three years as a staff writer for National Defense Magazine. Prior to that, she worked as an editorial assistant for the Tokyo Shimbuns Washington bureau. \n7\n LONDON British efforts to introduce a new family of long-endurance, medium-altitude drones has moved a step closer with an announcement by the Ministry of Defence Sept. 28 that the first General Atomics Protector RG Mk1 off the production had made its maiden flight. The MoD said the first production version of the drone flew in California on Sept. 25. The flight comes just over two months after the British announced they had inked a 65 million (U.S. $83 million) deal with General Atomics Aeronautical Systems to supply the first three of an expected fleet of at least 16 drones. Three ground control stations and other associated support equipment were also included in the deal. The contract contains options for a further 13 air vehicles and supporting equipment valued at around 180 million. A commitment for the additional drones could come in April next year. 
Progress with the Protector test schedule follows a two-year delay imposed on the program by the MoD in 2017 after the British ran into wider defense budget problems. The delay was primarily responsible for a 40 percent hike in Protector program costs, top MoD official Stephen Lovegrove said in a letter to the Parliamentary Public Accounts Committee published earlier this year. The Protector vehicles will replace General Atomics Reaper drones widely used by the Royal Air Force in operations in Afghanistan and the Middle East, most recently providing reconnaissance, surveillance and strike capabilities in the fight against the Islamic State group in Syria and Iraq.Defence Minister Viola Amherd considers the result, however close, a mandate to continue ongoing evaluations of the Eurofighter, the Rafale, the F-18 Super Hornet and the F-35A. Protector is the British version of General Atomics latest Predator variant, the MQ-9B Sky Guardian. The RAF drone will fly longer and, armed with Brimstone and Paveway IV precision weapons, hit harder than the Reaper. Crucially, the machine is also in line to be approved to fly in non-segregated airspace in places like the U.K. British Defence Minister Jeremy Quin said the inaugural flight of the production drone was a welcome step in development. With increased range and endurance, greater ISR and weapons capacity and improved weather resilience, Protector will play a vital intelligence and deterrent role in countering future threats, he was quoted as saying in a statement. For the moment the first Protector will stay in the United States to support systems testing as part of an MoD, U.S. Air Force and General Atomics team. Following completion of the work the drone will be delivered to the MoD in the summer of 2021. The platform will continue to be based in the United States to allow the RAF to complete its test and evaluation program. 
Operating from its base at RAF Waddington, eastern England, Protector is scheduled to enter service by mid-2024. \n8\n WASHINGTON The U.S. State Department has preemptively cleared Switzerland to purchase the F-35A joint strike fighter and F/A-18E/F Super Hornet, just days after a public vote narrowly okd the Swiss government to move forward with a planned procurement of new fighter aircraft. The two packages were posted on the Defense Security Cooperation Agencys website Wednesday. DSCA posts formal notifications to Congress that State deems the sales are worth moving forward. However, the potential packages are not a sign that Switzerland has decided the Lockheed Martin F-35 or Boeing produced F/A-18 are their fighter of the future. Rather, the announcement is a bureaucratic move by State and DSCA to make sure that, should the jets be selected, there will not be delays in getting the stealth fighter cleared. The DSCA has previously done so with F-35 requests from Belgium and Canada. The F-35 package comes with an estimated price tag of $6.58 billion, while the F/A-18 package with a price tag of $7.452 billion. Both those totals, if they represent final figures and DSCA notifications often do not would exceed the approved $6.5 billion budget for the program. In addition, State pre-cleared Switzerland to purchase the Patriot air defense system, a contender for a complimentary ground-based capability. The five Patriot batteries come with an estimated $2.2 billion price tag. A national referendum on Sept. 27 approved the plan to go ahead with the procurement, along with $2 billion for a complementary ground-based air defense system, was narrowly approved by 50.1 percent of voters, a margin of just 8,670 votes. Switzerlands Air 2030 program, which includes an estimated $6.5 billion to buy 30-40 new aircraft for policing the countrys airspace, has the F-35A and Super Hornet facing off against the Eurofighter Typhoon, the Dassault Rafale. 
The Saab Gripen had been in the running as well, but dropped out last summer based on the criteria from the Swiss government. All vendors must meet a deadline of Nov. 18 to deliver final proposals. The government will then evaluate the bids throughout the first half of 2021 and make a decision on the aircraft type and missile defense hardware by June. Sebastian Sprenger in Cologne, Germany contributed to this report.Aaron Mehta is Deputy Editor and Senior Pentagon Correspondent for Defense News, covering policy, strategy and acquisition at the highest levels of the Department of Defense and its international partners. \n9\n COLOGNE, Germany Swiss voters have approved a government plan to spend $6.5 billion on new fighter aircraft by a margin of 8,670 votes, with the two U.S. vendors in the race feeling the backlash of anti-Trump sentiments. Sundays vote translates into a razor-thin majority of 50.1 percent, or 1,605,700 votes, in favor of the acquisition. There was 49.9 percent, or 1,597,030 votes, against. The voter turnout was 59.4 percent, according to figures published online Sunday evening by the Federal Chancellery. Defence Minister Viola Amherd told reporters she considers the result, however close, a mandate to continue ongoing evaluations of the Eurofighter, the Rafale, the F-18 Super Hornet and the F-35A. The vote represents a long-term investment in the security of the Swiss population and infrastructure, she said. Prodded by reporters about the the narrowness of the vote, she said: In a democracy its a given that we respect the majority decision. The Swiss legislature last week approved the budget for the Air 2030 modernization program, which includes $6.5 billion for 30-40 new aircraft and $2 billion for a complementary ground-based, air defense system.The Swiss have kicked off flying season for the five types of combat aircraft under consideration to replace the country's aging fleet, with several demonstrations scheduled between now and early July. 
Amherd stressed that the aircraft budget is to be seen as a ceiling. If we can get suitable aircraft for less, we will certainly look at that, she said. All vendors must meet a deadline of Nov. 18 to deliver final proposals. The government will then evaluate the bids throughout the first half of 2021 and make a decision on the aircraft type and missile defense hardware by June. Opponents of the plan could still derail it by seeking another referendum, a step that would require 100,000 signatures and could take years to unfold. The Swiss opposition was energized in part by voters' views about the government of U.S. President Donald Trump, according to local media reports. During the pre-referendum campaign, the two U.S. vendors in the running, Boeing and Lockheed Martin, saw themselves lumped in with his foreign policy approach, considered reckless by many in the wealthy European countries such as Switzerland.Sebastian Sprenger is associate editor for Europe at Defense News, reporting on the state of the defense market in the region, and on U.S.-Europe cooperation and multi-national investments in defense and global security. Previously he served as managing editor for Defense News. He is based in Cologne, Germany. \n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5a698b67f4691b2fca1174fad45d9335b7ac8a
| 3,066 |
ipynb
|
Jupyter Notebook
|
test/Models/pheno_pkg/test/r/Ismomentregistredzc_39.ipynb
|
cyrillemidingoyi/PyCropML
|
b866cc17374424379142d9162af985c1f87c74b6
|
[
"MIT"
] | 5 |
2020-06-21T18:58:04.000Z
|
2022-01-29T21:32:28.000Z
|
test/Models/pheno_pkg/test/r/Ismomentregistredzc_39.ipynb
|
cyrillemidingoyi/PyCropML
|
b866cc17374424379142d9162af985c1f87c74b6
|
[
"MIT"
] | 27 |
2018-12-04T15:35:44.000Z
|
2022-03-11T08:25:03.000Z
|
test/Models/pheno_pkg/test/r/Ismomentregistredzc_39.ipynb
|
cyrillemidingoyi/PyCropML
|
b866cc17374424379142d9162af985c1f87c74b6
|
[
"MIT"
] | 7 |
2019-04-20T02:25:22.000Z
|
2021-11-04T07:52:35.000Z
| 38.325 | 141 | 0.489563 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a5aa5a9e1d8227968d97196244f70685b4fd656
| 943,656 |
ipynb
|
Jupyter Notebook
|
analysis/analyze_aparent_conv_layers_alien1_legacy.ipynb
|
876lkj/APARENT
|
5c8b9c038a46b129b5e0e5ce1453c4725b62322e
|
[
"MIT"
] | 20 |
2019-04-23T20:35:23.000Z
|
2022-02-02T02:07:06.000Z
|
analysis/analyze_aparent_conv_layers_alien1_legacy.ipynb
|
lafleur1/aparentGenomeTesting
|
e945d36e63d207d23b1508a4d1b7e5f63d66e304
|
[
"MIT"
] | 6 |
2019-10-14T16:35:00.000Z
|
2021-03-24T17:55:07.000Z
|
analysis/analyze_aparent_conv_layers_alien1_legacy.ipynb
|
lafleur1/aparentGenomeTesting
|
e945d36e63d207d23b1508a4d1b7e5f63d66e304
|
[
"MIT"
] | 11 |
2019-06-10T08:53:57.000Z
|
2021-01-25T00:54:59.000Z
| 1,167.891089 | 30,664 | 0.95579 |
[
[
[
"from __future__ import print_function\nimport keras\nfrom keras.models import Sequential, Model, load_model\nimport keras.backend as K\n\nimport tensorflow as tf\n\nimport pandas as pd\n\nimport os\nimport pickle\nimport numpy as np\n\nimport scipy.sparse as sp\nimport scipy.io as spio\n\nimport isolearn.io as isoio\n\nfrom scipy.stats import pearsonr\n\nimport matplotlib.pyplot as plt\n\nimport matplotlib.cm as cm\nimport matplotlib.colors as colors\n\nimport matplotlib as mpl\nfrom matplotlib.text import TextPath\nfrom matplotlib.patches import PathPatch, Rectangle\nfrom matplotlib.font_manager import FontProperties\nfrom matplotlib import gridspec\nfrom matplotlib.ticker import FormatStrFormatter\n\nfrom aparent.data.aparent_data_plasmid_legacy import load_data\n\nfrom analyze_aparent_conv_layers_helpers import *\n",
"Using TensorFlow backend.\n"
],
[
"#Load random MPRA data\n\nfile_path = '../data/random_mpra_legacy/combined_library/processed_data_lifted/'\nplasmid_gens = load_data(batch_size=32, valid_set_size=1000, test_set_size=40000, kept_libraries=[22], canonical_pas=True, no_dse_canonical_pas=True, file_path=file_path)\n",
"/home/johli/anaconda3/envs/aparent/lib/python3.6/site-packages/numpy/core/fromnumeric.py:56: FutureWarning: Series.nonzero() is deprecated and will be removed in a future version.Use Series.to_numpy().nonzero() instead\n return getattr(obj, method)(*args, **kwds)\n"
],
[
"#Load legacy APARENT model (lifted from theano)\n\nmodel_name = 'aparent_theano_legacy_30_31_34'#_pasaligned\n\nsave_dir = os.path.join(os.getcwd(), '../saved_models/legacy_models')\nmodel_path = os.path.join(save_dir, model_name + '.h5')\n\naparent_model = load_model(model_path)",
"WARNING:tensorflow:From /home/johli/anaconda3/envs/aparent/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From /home/johli/anaconda3/envs/aparent/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n"
],
[
"#Create a new model that outputs the conv layer activation maps together with the isoform proportion\nconv_layer_iso_model = Model(\n inputs = aparent_model.inputs,\n outputs = [\n aparent_model.get_layer('iso_conv_layer_1').output,\n aparent_model.get_layer('iso_out_layer_1').output\n ]\n)",
"_____no_output_____"
],
[
"#Predict from test data generator\niso_conv_1_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)\n\niso_conv_1_out = np.reshape(iso_conv_1_out, (iso_conv_1_out.shape[0], iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))\n\niso_pred = np.ravel(iso_pred[:, 1])\nlogodds_pred = np.log(iso_pred / (1. - iso_pred))\n\n#Retrieve one-hot input sequences\nonehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)\n",
"_____no_output_____"
],
[
"#Mask for simple library (Alien1)\nmask_seq = ('X' * 4) + ('N' * (45 + 6 + 45 + 6 + 45)) + ('X' * 27)\n\nfor j in range(len(mask_seq)) :\n if mask_seq[j] == 'X' :\n iso_conv_1_out[:, :, j] = 0\n",
"_____no_output_____"
],
[
"#Layer 1: Compute Max Activation Correlation maps and PWMs\nfilter_width = 8\nn_samples = 5000\n\npwms = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))\npwms_top = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))\n\nfor k in range(iso_conv_1_out.shape[1]) :\n \n for i in range(iso_conv_1_out.shape[0]) :\n max_j = np.argmax(iso_conv_1_out[i, k, :])\n if iso_conv_1_out[i, k, max_j] > 0 :\n pwms[k, :, :] += onehot_seqs[i, max_j: max_j+filter_width, :]\n \n sort_index = np.argsort(np.max(iso_conv_1_out[:, k, :], axis=-1))[::-1]\n for i in range(n_samples) :\n max_j = np.argmax(iso_conv_1_out[sort_index[i], k, :])\n if iso_conv_1_out[sort_index[i], k, max_j] > 0 :\n pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j: max_j+filter_width, :]\n \n \n pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)\n pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)\n\nr_vals = np.zeros((iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))\n\nfor k in range(iso_conv_1_out.shape[1]) :\n for j in range(iso_conv_1_out.shape[2]) :\n if np.any(iso_conv_1_out[:, k, j] > 0.) :\n r_val, _ = pearsonr(iso_conv_1_out[:, k, j], logodds_pred)\n r_vals[k, j] = r_val if not np.isnan(r_val) else 0\n",
"_____no_output_____"
],
[
"#Plot Max Activation PWMs and Correlation maps\n\nn_filters_per_row = 5\n\nn_rows = int(pwms.shape[0] / n_filters_per_row)\nk = 0\nfor row_i in range(n_rows) :\n \n f, ax = plt.subplots(2, n_filters_per_row, figsize=(2.5 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1]})\n\n for kk in range(n_filters_per_row) :\n plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=24, seq_end=95, cse_start=49)\n k += 1\n\n plt.tight_layout()\n plt.show()\n\n",
"_____no_output_____"
],
[
"#Create a new model that outputs the conv layer activation maps together with the isoform proportion\nconv_layer_iso_model = Model(\n inputs = aparent_model.inputs,\n outputs = [\n aparent_model.get_layer('iso_conv_layer_2').output,\n aparent_model.get_layer('iso_out_layer_1').output\n ]\n)",
"_____no_output_____"
],
[
"#Predict from test data generator\niso_conv_2_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)\n\niso_conv_2_out = np.reshape(iso_conv_2_out, (iso_conv_2_out.shape[0], iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))\n\niso_pred = np.ravel(iso_pred[:, 1])\nlogodds_pred = np.log(iso_pred / (1. - iso_pred))\n\n#Retrieve one-hot input sequences\nonehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)\n",
"_____no_output_____"
],
[
"#Layer 2: Compute Max Activation Correlation maps and PWMs\nfilter_width = 19\nn_samples = 200\n\npwms = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))\npwms_top = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))\n\nfor k in range(iso_conv_2_out.shape[1]) :\n \n for i in range(iso_conv_2_out.shape[0]) :\n max_j = np.argmax(iso_conv_2_out[i, k, :])\n if iso_conv_2_out[i, k, max_j] > 0 :\n pwms[k, :, :] += onehot_seqs[i, max_j * 2: max_j * 2 + filter_width, :]\n \n sort_index = np.argsort(np.max(iso_conv_2_out[:, k, :], axis=-1))[::-1]\n for i in range(n_samples) :\n max_j = np.argmax(iso_conv_2_out[sort_index[i], k, :])\n if iso_conv_2_out[sort_index[i], k, max_j] > 0 :\n pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j * 2: max_j * 2 + filter_width, :]\n \n \n pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)\n pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)\n\nr_vals = np.zeros((iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))\n\nfor k in range(iso_conv_2_out.shape[1]) :\n for j in range(iso_conv_2_out.shape[2]) :\n if np.any(iso_conv_2_out[:, k, j] > 0.) :\n r_val, _ = pearsonr(iso_conv_2_out[:, k, j], logodds_pred)\n r_vals[k, j] = r_val if not np.isnan(r_val) else 0\n",
"/home/johli/anaconda3/envs/aparent/lib/python3.6/site-packages/scipy/stats/stats.py:3038: RuntimeWarning: invalid value encountered in float_scalars\n r = r_num / r_den\n"
],
[
"#Plot Max Activation PWMs and Correlation maps\n\nn_filters_per_row = 5\n\nn_rows = int(pwms.shape[0] / n_filters_per_row)\nk = 0\nfor row_i in range(n_rows) :\n \n f, ax = plt.subplots(2, n_filters_per_row, figsize=(3 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1.5]})\n\n for kk in range(n_filters_per_row) :\n plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=12, seq_end=44)\n k += 1\n\n plt.tight_layout()\n plt.show()\n\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5abb6ef28c3aa4fed9ad67cc59ff4159cfbb79
| 164,537 |
ipynb
|
Jupyter Notebook
|
week_1/Intro.ipynb
|
fedorkleber/UVA_AML20
|
3ff323b888eea561bbbaad924e4a19454e912003
|
[
"MIT"
] | null | null | null |
week_1/Intro.ipynb
|
fedorkleber/UVA_AML20
|
3ff323b888eea561bbbaad924e4a19454e912003
|
[
"MIT"
] | null | null | null |
week_1/Intro.ipynb
|
fedorkleber/UVA_AML20
|
3ff323b888eea561bbbaad924e4a19454e912003
|
[
"MIT"
] | null | null | null | 64.222092 | 37,824 | 0.792332 |
[
[
[
"# Applied Machine Learning",
"_____no_output_____"
],
[
"## Table of contents\n* [1. Notebook General Info](#1.-Notebook-General-Info)\n* [2. Python Basics](#2.-Python-Basics)\n * [2.1 Basic Types](#2.1-Basic-Types)\n * [2.2 Lists and Tuples](#2.2-Lists-and-Tuples)\n * [2.3 Dictionaries](#2.3-Dictionaries)\n * [2.4 Conditions](#2.4-Conditions)\n * [2.5 Loops](#2.5-Loops)\n * [2.6 Functions](#2.6-Functions)\n* [3. NumPy Basics](#3.-NumPy-Basics)\n * [3.1 Arrays](#3.1-Arrays)\n * [3.2 Functions and Operations](#3.2-Functions-and-Operations)\n * [3.3 Miscellaneous](#3.3-Miscellaneous)\n* [4. Visualization with Matplotlib](#4.-Visualization-with-Matplotlib)\n* [5. Nearest Neighbor Classification](#5.-Nearest-Neighbor-Classification)\n * [5.1 Digits Dataset](#5.1-Digits-Dataset)\n * [5.2 Distances](#5.2-Distances)\n * [5.3 Performance Experiments](#5.3-Performance-Experiments)\n * [5.4 Classification](#5.4-Classification)\n* [6. Linear Algebra Basics](#6.-Linear-Algebra-Basics)",
"_____no_output_____"
],
[
"## 1. Notebook General Info",
"_____no_output_____"
],
[
"### Structure\n- Notebooks consist of **cells**\n- During this course we will use **Code** and **Markdown** cells\n- Code in the cells is executed by pressing **Shift + Enter**. It also renders Markdown\n- To edit a cell, double-click on it.",
"_____no_output_____"
],
[
"### Markdown\n\n* Markdown is a lightweight markup language.\n* You can emphasize the words: *word*, ~~word~~, **word**\n* You can make lists\n\n - item 1\n - item 2\n - subitem 2.1\n - subitem 2.2\n\n* And tables, as well\n\n| Language |Filename extension| First appeared |\n|---------:|:----------------:|:--------------:|\n|C | `.h`, `.c` | 1972 |\n|C++ | `.h`, `.cpp` | 1983 |\n|Swift | `.swift` | 2014 |\n|Python | `.py` | 1991 |\n\n\n* Markdown allows you to add a code listing.\n\n```\ndef sum(a, b):\n return a + b\n```\n\n* You can even add math expressions. Both inline $e^{i \\phi} = \\sin(\\phi) + i \\cos(\\phi)$ and centered:\n$$\n\\int\\limits_{-\\infty}^{\\infty} e^{-x^2}dx = \\sqrt{\\pi}\n$$\n\n* You can also add images, even from the remote resources:\n\n\n\n* Markdown allows one to add hyperlinks. There is a good [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).",
"_____no_output_____"
],
[
"### Code\n* We will use Python.\n* It is an interpreted language.\n* When you execute the cell by pressing **Shift + Enter**, the code is interpreted line-by-line.",
"_____no_output_____"
],
[
"## 2. Python Basics",
"_____no_output_____"
],
[
"Useful links:\n\n* CodeAcademy https://www.codecademy.com/en/tracks/python (recommended if you are new to Python!)\n* The Hitchhiker’s Guide to Python http://docs.python-guide.org/en/latest/\n* Video tutorials by *sentdex*: [Python 3 Basic Tutorial Series](https://www.youtube.com/watch?v=oVp1vrfL_w4&list=PLQVvvaa0QuDe8XSftW-RAxdo6OmaeL85M), [Intermediate Python Programming](https://www.youtube.com/watch?v=YSe9Tu_iNQQ&list=PLQVvvaa0QuDfju7ADVp5W1GF9jVhjbX-_)\n\nSome interesting talks from conferences:\n* David Beazley: [Built in Super Heroes](https://youtu.be/lyDLAutA88s), [Modules and Packages](https://youtu.be/0oTh1CXRaQ0)\n* Raymond Hettinger: [Transforming Code into Beautiful](https://youtu.be/OSGv2VnC0go), [ Beyond PEP 8](https://youtu.be/wf-BqAjZb8M)",
"_____no_output_____"
],
[
"### 2.1 Basic Types",
"_____no_output_____"
],
[
"* Python is dynamically typed: you do not specify the type of a variable. Just `my_var = 1`\n* Python is strongly typed: you cannot add an integer to a string or None to an integer",
"_____no_output_____"
]
],
[
[
"# For now, this is just a magic\nfrom __future__ import print_function, division",
"_____no_output_____"
],
[
"# Integer\na = 2\nprint(a)\n\n# Float\na += 4.0\nprint(a)\n\n# String\nb = \"Hello World\"\nprint(b)\nprint(b + ' ' + str(42))\n\n# Boolean\nfirst_bool_here = False\nprint(first_bool_here)\n\n# This is how formatting works\nprint('My first program is:\"%s\"' % b) # old style\nprint('My first program is:\"{}\"'.format(b)) # new style\nprint(f'My first program is:\"{b}\"') # even newer style",
"2\n6.0\nHello World\nHello World 42\nFalse\nMy first program is:\"Hello World\"\nMy first program is:\"Hello World\"\nMy first program is:\"Hello World\"\n"
],
[
"num = 42\nprint(42 / 5) # a regular division\nprint(42 // 5) # an integer division\nprint(42 % 5) # a remainder",
"8.4\n8\n2\n"
]
],
[
[
"### 2.2 Lists and Tuples",
"_____no_output_____"
],
[
"* `list` and `tuple` are the array-like types in Python\n* `list` is mutable. `tuple` is immutable\n* `list` is represented as `[...]`, `tuple` as `(...)`\n* They both can store different types at the same time\n* The index of the first element is `0`, it is called 'zero-indexed'",
"_____no_output_____"
]
],
[
[
"# Lists\nempty_list = [] # creates an empty list\nlist1 = [1, 2, 3] # creates a list with elements\nlist2 = ['1st', '2nd', '3rd']\nprint(list1) # prints the list\nprint(list2)\n\nprint(len(list2)) # prints the length of the list\n\nlist2.append(2) # appends the item at the end\nprint(list2) # prints the appended list\n\nlist2.insert(2, 0) # inserts 0 at index 2 (zero-indexed)\nprint(list2)\n\nlist2[1] = 'new' # changes the second element of the list (lists are mutable)\nprint(list2)",
"[1, 2, 3]\n['1st', '2nd', '3rd']\n3\n['1st', '2nd', '3rd', 2]\n['1st', '2nd', 0, '3rd', 2]\n['1st', 'new', 0, '3rd', 2]\n"
],
[
"# You can create a list of lists:\nlist_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\nprint(list_of_lists[1][2]) # second list, third element",
"6\n"
],
[
"# Tuples\n# An empty tuple is created with ()\n# Tuples are immutable, so an empty tuple stays empty forever\ntuple1 = (1,) # Comma is necessary. Otherwise it is a number in parentheses\ntuple2 = ('orange',)\ntuple3 = ('fly', 32, None)\n\nsuper_tuple = tuple1 + tuple2 + tuple3\nprint(super_tuple)\n\nsuper_tuple[1] = 'new' # trying to change an element of a tuple raises an error (tuples are immutable)",
"(1, 'orange', 'fly', 32, None)\n"
]
],
[
[
"* Above we showed how to create and print lists.\n* How to find the length of the list and how to append or insert the items in an already created list.\n* There are several other operations which we can perform with lists:\n * removing elements from the list\n * joining two lists\n * sorting\n * etc\n\nThere is an interesting [cheat sheet](http://www.pythonforbeginners.com/lists/python-lists-cheat-sheet/) you may find useful.\n\nAnother very useful operation on lists is **Slicing**. It is a distinctive feature of Python.\n* Slicing allows you to access sublists\n* Slicing a list creates a new (shallow) copy of the selected elements (NumPy arrays, in contrast, return views)\n* Slicing makes Python so useful for matrix manipulation",
"_____no_output_____"
]
],
[
[
"# This is the worst way of creating a list of consecutive integers.\n# But now we use it just for demonstration\nnumbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\nprint(numbers[1:]) # You can slice it from the given index\nprint(numbers[:-1]) # You can slice it till the given index\nprint(numbers[1:-2]) # You can combine them\nprint(numbers[::2]) # You can take every second element\nprint(numbers[2:-2][::2]) # You can chain slicing",
"[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n[2, 3, 4, 5, 6, 7, 8, 9, 10]\n[1, 3, 5, 7, 9, 11]\n[3, 5, 7, 9]\n"
]
],
[
[
"### 2.3 Dictionaries",
"_____no_output_____"
],
[
"* Dictionary is a **Key-Value** storage\n* Dictionaries are mutable\n* Dictionaries are useful for linking items\n* Since Python 3.7, dictionaries preserve insertion order; older versions do not guarantee any order.",
"_____no_output_____"
]
],
[
[
"emptydict = {} # creates empty dict\nuser = {'id': '0x123456', 'age': 28, 'authorized': True}\nprint(user)\n\ndays = {\n 1: \"Mon\",\n 2: \"Tues\",\n 3: \"Wed\",\n 4: \"Thu\",\n 5: \"Fri\",\n 6: \"Sat\",\n 7: \"Sun\"\n} # A dict with items\n\nprint(days.keys()) # prints keys\nprint(days) # prints whole dict\nage = user['age'] # accesses the element of the dictionary with key 'age'\nprint(age)",
"{'id': '0x123456', 'age': 28, 'authorized': True}\ndict_keys([1, 2, 3, 4, 5, 6, 7])\n{1: 'Mon', 2: 'Tues', 3: 'Wed', 4: 'Thu', 5: 'Fri', 6: 'Sat', 7: 'Sun'}\n28\n"
],
[
"my_dict = {\n 1: '1',\n '1': 1\n}\n# Keys are not casted. '1' and 1 are not the same key\nprint(my_dict[1] == my_dict['1'])\n\nmy_dict['one'] = False\nmy_dict[123] = 321\nprint(my_dict)",
"False\n{1: '1', '1': 1, 'one': False, 123: 321}\n"
]
],
[
[
"For the further study of dictionary manipulation in Python refer to this [tutorial](http://www.pythonforbeginners.com/dictionary/dictionary-manipulation-in-python ).",
"_____no_output_____"
],
[
"### 2.4 Conditions",
"_____no_output_____"
]
],
[
[
"is_visible = False\nif is_visible:\n print(\"I am visible\")\nelse:\n print(\"You can not see me\")",
"You can not see me\n"
]
],
[
[
"As this is the first appearance of the nested structure, we must clarify the following:\n* In Python all nested code structures are defined by indentation.\n* Standard indentation is 4 spaces (or 1 tab)",
"_____no_output_____"
]
],
[
[
"animals = ['cat', 'dog', 'monkey', 'elephant']\n\nif 'cat' in animals:\n print('Cat is here')\n\nif len(animals) > 2 and 'fish' not in animals:\n print('There are many animals but fish is not here')\n\nif 'whale' in animals or 'dog' in animals:\n print('At least one of my favorite animals is in the list')",
"Cat is here\nThere are many animals but fish is not here\nAt least one of my favorite animals is in the list\n"
],
[
"code = 345\n\nif code == 200:\n print('success')\nelif code == 404:\n print('page not found')\nelif 300 <= code < 400:\n print('redirected')\nelse:\n print('unknown error')",
"redirected\n"
]
],
[
[
"### 2.5 Loops",
"_____no_output_____"
],
[
"* There are 2 types of loops in Python: `while` and `for`\n* `while` loop checks the condition before executing the loop body\n* `for` iterates over the sequence of elements",
"_____no_output_____"
]
],
[
[
"# while\ni = 0\nwhile i < 3:\n print(i)\n i += 1",
"0\n1\n2\n"
],
[
"# for loop\nfor animal in animals:\n print(animal)\n\n# In order to make a C-like loop,\n# you have to create a list of consecutive numbers\nprint('\\nBad way:')\nnumbers = [0, 1, 2, 3, 4]\nfor number in numbers:\n print(number)\n\n# As we already stated, it is not the best way of creating such lists\n# Here is the best way:\nprint('\\nGood way:')\nfor number in range(5):\n print(number)\n\nprint('\\nAdvanced example:')\nfor number in reversed(range(10, 22, 2)):\n print(number)",
"cat\ndog\nmonkey\nelephant\n\nBad way:\n0\n1\n2\n3\n4\n\nGood way:\n0\n1\n2\n3\n4\n\nAdvanced example:\n20\n18\n16\n14\n12\n10\n"
]
],
[
[
"### 2.6 Functions",
"_____no_output_____"
],
[
"* functions are declared with `def` statement\n* function is an object, like float, string, etc.",
"_____no_output_____"
]
],
[
[
"def function_name():\n print ('Hello AML students')\n\nfunction_name()",
"Hello AML students\n"
],
[
"# Create a function that multiplies a number by 5 if it is above a given threshold,\n# otherwise square the input.\ndef manipulate_number(number, threshold):\n # Check whether the number is higher than the threshold.\n if number > threshold:\n return number * 5\n else:\n return number ** 2\n\nprint(manipulate_number(4, 6))\nprint(manipulate_number(8, 7))",
"16\n40\n"
],
[
"def linear(x, k, b=0): # b=0 if b is not specified in function call\n return k * x + b\n\nprint(linear(1, 3.0)) # we don't pass any keys of the arguments\nprint(linear(k=1, x=3.0)) # we pass the keys, sometimes to reorder arguments.\nprint(linear(1, k=3.0, b=3.0)) # we pass b=3. and specify it because b=3.0 is not the default value",
"3.0\n3.0\n6.0\n"
],
[
"def are_close(a, b):\n return (a - b) ** 2 < 1e-6\n\n# Functions could be passed as arguments\ndef evaluate(func, arg_1 ,arg_2):\n return func(arg_1, arg_2)\n\nprint(evaluate(are_close, 0.333, 1.0 / 3))",
"True\n"
]
],
[
[
"* If you are still very new to Python:\n * Implement some simple functions and print the results\n * Please ask questions if pieces of code do not do what you want them to do\n* You can always get the information about the function just by calling **help**:\n\n```Python\nhelp(any_function)\n```\n* In Jupyter Notebook, you can also get the info by pressing **Shift + Tab** inside a function call",
"_____no_output_____"
]
],
[
[
"# Create your own functions here, if you want\n# Create a new cell below by pressing Esc and then b (command mode)",
"_____no_output_____"
]
],
[
[
"## 3. NumPy Basics",
"_____no_output_____"
],
[
"* A very nice part of Python is that there are a lot of 3rd party libraries.\n* The most popular library for matrix manipulations / linear algebra is [**NumPy**](http://www.numpy.org/).\n* The official website says:\n> NumPy is the fundamental package for scientific computing with Python.\n\n* NumPy core functions are written in **C/C++** and **Fortran**.\n* NumPy functions work faster than pure Python functions (or at least with the same speed).",
"_____no_output_____"
]
],
[
[
"# The first import\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"* Easy enough!\n* There are several ways of importing libraries:\n * `import library` - import the full library. You can access its functions: `library.utils.somefunc(x)`\n * `import library as lib` - the same as above, but more convenient: `lib.utils.other_func(x, y)`\n * `from library.utils import somefunc` - only one function is imported: `somefunc(x)`\n* `import numpy as np` is a standard convention of importing NumPy.",
"_____no_output_____"
],
[
"### 3.1 Arrays",
"_____no_output_____"
],
[
"* The feature of **NumPy** is **Array**.\n* An array is close to the list data type, but it is extended with several useful methods.",
"_____no_output_____"
]
],
[
[
"# you can create an array of zeros\na = np.zeros(5)\nprint(a)\n\n# or an array of consecutive numbers\nb = np.arange(7)\nprint('0...6:')\nprint(b)\n\n# or even an array from a list\nc = np.array([1, 3, 5, 7, 12, 19])\n\nprint('An element of c:')\nprint(c[4])\nprint('Length:', len(c))",
"[0. 0. 0. 0. 0.]\n0...6:\n[0 1 2 3 4 5 6]\nAn element of c:\n12\nLength: 6\n"
]
],
[
[
"* You can also create n-dimensional arrays:\n * an array of arrays\n * an array of arrays of arrays\n * ...\n* They have additional properties which are insignificant for now, but will be exploited later during this course\n* You can transform an n-dimensional array to a flat (1-dimensional) array and vice versa just by reshaping",
"_____no_output_____"
]
],
[
[
"# A 2-dimensional array\na = np.array([[1, 2], [3, 4]])\nprint(a)\n\n# you can change its shape to make it a 1-dimensional array\nprint(a.ravel())\nprint(a.reshape(4))\n\n# and vice versa\nb = a.ravel()\nprint(b.reshape((2, 2)))\n\n# you can access a row or a column\nprint('2nd column:', a[:, 1])\nprint('1st row:', a[0, :])",
"[[1 2]\n [3 4]]\n[1 2 3 4]\n[1 2 3 4]\n[[1 2]\n [3 4]]\n2nd column: [2 4]\n1st row: [1 2]\n"
]
],
[
[
"### 3.2 Functions and Operations\n\n* NumPy supports basic operations on an array and a number",
"_____no_output_____"
]
],
[
[
"newarray = np.zeros(8)\n# instead of adding a number in a loop,\n# you can do it in one line\nnewarray += 8\nprint(newarray)\n\n# the same for other basic operations\nnewarray *= 3\nprint(newarray)\n\n# and even with slicing\nnewarray[::2] /= 8\nprint(newarray)",
"[8. 8. 8. 8. 8. 8. 8. 8.]\n[24. 24. 24. 24. 24. 24. 24. 24.]\n[ 3. 24. 3. 24. 3. 24. 3. 24.]\n"
]
],
[
[
"* NumPy also supports operations on several arrays of the same length\n* These operations are elementwise",
"_____no_output_____"
]
],
[
[
"arr_1 = np.array([1, 9, 3, 4])\narr_2 = np.arange(4)\nprint('Arrays:')\nprint(arr_1)\nprint(arr_2)\n\nprint('Addition:')\nprint(arr_1 + arr_2)\nprint(np.add(arr_1, arr_2)) # the same\n\nprint('Multiplication:')\nprint(arr_1 * arr_2)\nprint(np.multiply(arr_1, arr_2)) # the same\n\nprint('Division:')\nprint(arr_2 / arr_1)\nprint(np.divide(1.0 * arr_2, arr_1)) # the same",
"Arrays:\n[1 9 3 4]\n[0 1 2 3]\nAddition:\n[ 1 10 5 7]\n[ 1 10 5 7]\nMultiplication:\n[ 0 9 6 12]\n[ 0 9 6 12]\nDivision:\n[0. 0.11111111 0.66666667 0.75 ]\n[0. 0.11111111 0.66666667 0.75 ]\n"
]
],
[
[
"* NumPy provides one with a rich variety of mathematical functions\n* Atomic functions ($\\sin(x)$, $\\cos(x)$, $\\ln(x)$, $x^p$, $e^x, \\dots$) are elementwise\n* There are several functions which allow one to compute statistics:\n * mean of the elements of an array\n * standard deviation\n * ...",
"_____no_output_____"
]
],
[
[
"x = np.linspace(0, 1, 6)\nprint('x:')\nprint(x)\n\nprint('Mean x:')\nprint(np.mean(x))\n\nprint('Std x:')\nprint(x.std())\n\nprint('x^2:')\nprint(x*x) # as elementwise product\nprint(np.square(x)) # with a special function\nprint(np.power(x, 2)) # as a power function with power=2\nprint(x**2) # as you are expected to do it with a number\n\nprint('sin(x):')\nprint(np.sin(x))\n\nprint('Mean e^x:')\nprint(np.mean(np.exp(x)))",
"x:\n[0. 0.2 0.4 0.6 0.8 1. ]\nMean x:\n0.5\nStd x:\n0.3415650255319866\nx^2:\n[0. 0.04 0.16 0.36 0.64 1. ]\n[0. 0.04 0.16 0.36 0.64 1. ]\n[0. 0.04 0.16 0.36 0.64 1. ]\n[0. 0.04 0.16 0.36 0.64 1. ]\nsin(x):\n[0. 0.19866933 0.38941834 0.56464247 0.71735609 0.84147098]\nMean e^x:\n1.7465281688572436\n"
]
],
[
[
"### 3.3 Miscellaneous",
"_____no_output_____"
]
],
[
[
"# Indexing\nx = np.linspace(0, np.pi, 10)\ny = np.cos(x) - np.sin(2 * x)\nprint('x =', x, '\\n')\nprint('y =', y, '\\n')\n# we can create the boolean mask of elements and pass it as indices\nmask = y > 0\nprint('mask =', mask, '\\n')\nprint('positive y =', y[mask], '\\n')",
"x = [0. 0.34906585 0.6981317 1.04719755 1.3962634 1.74532925\n 2.0943951 2.44346095 2.7925268 3.14159265] \n\ny = [ 1. 0.29690501 -0.21876331 -0.3660254 -0.16837197 0.16837197\n 0.3660254 0.21876331 -0.29690501 -1. ] \n\nmask = [ True True False False False True True True False False] \n\npositive y = [1. 0.29690501 0.16837197 0.3660254 0.21876331] \n\n"
],
[
"# NumPy has `random` package\nx = np.random.random()\nprint(x)\n\n# uniform [-2, 8)\nrand_arr = np.random.uniform(-2, 8, size=3)\nprint('Array of random variables')\nprint(rand_arr)\n\n# here is the normal distribution\nprint('N(x|m=0, s=0.1):')\nprint(np.random.normal(scale=0.1, size=4))",
"0.7612019051634209\nArray of random variables\n[7.39323599 1.08087948 2.99388187]\nN(x|m=0, s=0.1):\n[ 0.05146504 -0.01399847 -0.10959651 -0.19605715]\n"
],
[
"# fast search\nx = np.array([1, 2, 5, -1])\nprint(np.where(x < 0))\n\n# retrieve the index of the max element\nprint(np.argmax(x))\n\n# sort the array\nprint(np.sort(x))",
"(array([3]),)\n2\n[-1 1 2 5]\n"
]
],
[
[
"* There is a lot you can do with NumPy.\n* For further study and practice of NumPy, we refer you to this [tutorial](http://scipy.github.io/old-wiki/pages/Tentative_NumPy_Tutorial)\n* Here is a good [list](https://github.com/rougier/numpy-100) of NumPy tasks.\n* You can also check other packages from the **[SciPy](https://www.scipy.org)** ecosystem.\n* You may also be interested in [**scikit-learn**](http://scikit-learn.org/stable/) - tools for machine learning in Python",
"_____no_output_____"
],
[
"## 4. Visualization with Matplotlib",
"_____no_output_____"
],
[
"* We use **Matplotlib** for plots and data visualization\n* There is a [tutorial](http://matplotlib.org/users/pyplot_tutorial.html).\n* Here are some examples from Matplotlib gallery\n\n<link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css\" integrity=\"sha384-PsH8R72JQ3SOdhVi3uxftmaW6Vc51MKb0q5P2rRUpPvrszuE4W1povHYgTpBfshb\" crossorigin=\"anonymous\">\n\n<div class=\"container\" style=\"max-width:100%\">\n <div class=\"row\">\n <div class=\"col-sm-6\" style=\"display: flex; height: 300px;\">\n <img src=\"http://matplotlib.org/_images/fill_demo1.png\"\n style=\"max-width: 100%; max-height: 100%; margin: auto;\">\n </div>\n <div class=\"col-sm-6\" style=\"display: flex; height: 300px;\">\n <img src=\"http://matplotlib.org/_images/errorbar_limits.png\"\n style=\"max-width: 100%; max-height: 100%; margin: auto;\">\n </div>\n </div>\n <div class=\"row\">\n <div class=\"col-sm-6\" style=\"display: flex; height: 300px;\">\n <img src=\"http://matplotlib.org/_images/subplot_demo.png\"\n style=\"max-width: 100%; max-height: 100%; margin: auto;\">\n </div>\n <div class=\"col-sm-6\" style=\"display: flex; height: 300px;\">\n <img src=\"http://matplotlib.org/_images/histogram_demo_features2.png\"\n style=\"max-width: 100%; max-height: 100%; margin: auto;\">\n </div>\n </div>\n</div>",
"_____no_output_____"
]
],
[
[
"# We import `pyplot` from `matplotlib` as `plt`\nimport matplotlib.pyplot as plt\n\n# We add %matplotlib flag to specify how the figures should be shown\n# inline - static pictures in notebook\n# notebook - interactive graphics\n%matplotlib inline",
"_____no_output_____"
],
[
"# let's plot a simple example\nx = np.arange(100)\ny = x ** 2 - x\n\nplt.plot(y)\nplt.show() # that's it",
"_____no_output_____"
],
[
"# A more complex example\nn_samples = 100\nx = np.linspace(0.0, 1.0, n_samples)\ny = x**3 / (np.exp(10 * x + 1e-8) - 1)\ny /= y.max()\ny_samples = np.abs(y + 0.1 * y * np.random.normal(size=n_samples))\n\n\nplt.figure(figsize=(8, 5))\nplt.plot(x, y_samples, 'o', c='orange', label='experiment')\nplt.plot(x, y, lw=3, label='theory')\nplt.grid()\nplt.title(\"Planck's law\", fontsize=18)\nplt.legend(loc='best', fontsize=14)\nplt.ylabel('Relative spectral radiance', fontsize=14)\nplt.xlabel('Relative frequency', fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 5. Nearest Neighbor Classification",
"_____no_output_____"
],
[
"* We have a dataset of objects of several classes\n* We expect two objects from the same class to be close\n* Two objects from different classes are supposed to be distant\n* The query object is supposed to have the same class as its nearest neighbor",
"_____no_output_____"
],
[
"### 5.1 Digits Dataset\n\n* It contains handwritten digits 0 through 9\n* Each object is an $8 \\times 8$ grayscale image\n* We consider each pixel of the image as a separate feature of the object",
"_____no_output_____"
]
],
[
[
"import sklearn.datasets\n\n# We load the dataset\ndigits = sklearn.datasets.load_digits()\n\n# Here we load up the images and labels and print some examples\nimages_and_labels = list(zip(digits.images, digits.target))\nfor index, (image, label) in enumerate(images_and_labels[:10]):\n plt.subplot(2, 5, index + 1)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Training: {}'.format(label), y=1.1)\nplt.show()",
"_____no_output_____"
],
[
"images_1 = digits.images[digits.target == 1]\nimages_5 = digits.images[digits.target == 5]\n\nfor i in range(5):\n plt.subplot(2, 5, i + 1)\n plt.axis('off')\n plt.imshow(images_1[i], cmap=plt.cm.gray_r, interpolation='nearest')\n\n plt.subplot(2, 5, i + 6)\n plt.axis('off')\n plt.imshow(images_5[i], cmap=plt.cm.gray_r, interpolation='nearest')\nplt.show()",
"_____no_output_____"
]
],
[
[
"* Ones look similar. Fives also look similar\n* Fives and Ones look different",
"_____no_output_____"
],
[
"### 5.2 Distances\n\n* In order to talk about close and distant objects, we have to define the **distance (metric)**\n* Distance is a function $F(\\cdot, \\cdot)$ of 2 elements which returns a number\n* Here are the properties of distance:\n 1. $F(x, y) \\geq 0$\n 2. $F(x, y) = 0 \\Leftrightarrow x = y$\n 3. $F(x, y) = F(y, x)$\n 4. $F(x, z) \\leq F(x, y) + F(y, z)$\n\n* Let's look at the **Euclidean distance** as it is the most intuitive for us:\n$$\nF(x, y) = \\sqrt{\\sum_{i=1}^{d} (x_{i} - y_{i})^{2}}.\n$$\n\nNow it is time to implement it.",
"_____no_output_____"
]
],
[
[
"# First of all, let's implement it in the most trivial way\n# without using numpy arrays, just to understand what is going on\ndef euclidean_distance_simple(x, y):\n # First, make sure x and y are of equal length.\n assert(len(x) == len(y))\n d = 0.0\n for i in range(len(x)):\n d += (x[i]-y[i])**2\n return np.sqrt(d)",
"_____no_output_____"
],
[
"x1 = np.array([0.,0.])\ny1 = np.array([5.,2.])\n\nx2 = np.array([0.,1.,3.])\ny2 = np.array([9.,1.,4.5])",
"_____no_output_____"
]
],
[
[
"Now you can test your functions. The expected values are **5.385...** and **9.124...**",
"_____no_output_____"
]
],
[
[
"print(euclidean_distance_simple(x1, y1))\nprint(euclidean_distance_simple(x2, y2))",
"5.385164807134504\n9.12414379544733\n"
],
[
"# Let's implement it in a more effective way,\n# using NumPy arrays and vectorized operations\ndef euclidean_distance_numpy(x, y):\n # x, y - numpy arrays of equal length\n assert(len(x) == len(y))\n # elementwise difference, square, sum, then square root\n return np.sqrt(np.sum((x - y) ** 2))",
"_____no_output_____"
],
[
"print(euclidean_distance_numpy(x1, y1))\nprint(euclidean_distance_numpy(x2, y2))",
"5.385164807134504\n9.12414379544733\n"
]
],
[
[
"### 5.3 Performance Experiments\n\n* We implemented the Euclidean distance in 2 ways. Now we are able to compare their performance\n* We measure the time consumption of the functions\n* We test their performance on random vectors of various sizes",
"_____no_output_____"
]
],
[
[
"import time\n\nsizes = range(1, 1000, 10)\n\nres_simple = []\nres_numpy = []\n\n\nfor size in sizes:\n\n x = np.random.random(size=size)\n y = np.random.random(size=size)\n\n time_0 = time.time()\n _ = euclidean_distance_simple(x, y)\n res_simple.append(time.time() - time_0)\n\n time_0 = time.time()\n _ = euclidean_distance_numpy(x, y)\n res_numpy.append(time.time() - time_0)\n\nres_simple = np.array(res_simple)\nres_numpy = np.array(res_numpy)",
"_____no_output_____"
],
[
"plt.figure(figsize=(9, 5))\n\nplt.plot(sizes, 10**6 * res_simple, lw=3, label='simple')\nplt.plot(sizes, 10**6 * res_numpy, lw=3, label='numpy')\n\nplt.legend(loc='best', fontsize=14)\nplt.xlabel('size', fontsize=15)\nplt.ylabel('times, mks', fontsize=15)\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"* Pure Python works slower than NumPy\n* Always use NumPy when it is possible",
"_____no_output_____"
],
[
"### 5.4 Classification\n\n* We should divide our dataset into a training set and a test set.\n* In order to predict the class of an object, we will iterate over the objects in the training set\n* The predicted class is the class of the closest object",
"_____no_output_____"
]
],
[
[
"n_objects = digits.images.shape[0]\ntrain_test_split = 0.7\ntrain_size = int(n_objects * train_test_split)\nindices = np.arange(n_objects)\nnp.random.shuffle(indices)\n\ntrain_indices, test_indices = indices[:train_size], indices[train_size:]\ntrain_images, train_targets = digits.images[train_indices], digits.target[train_indices]\ntest_images, test_targets = digits.images[test_indices], digits.target[test_indices]",
"_____no_output_____"
],
[
"train_images = train_images.reshape((-1, 64))\ntest_images = test_images.reshape((-1, 64))",
"_____no_output_____"
],
[
"def predict_object_class(vec, x_train, y_train):\n # vec.shape: [64]\n # x_train.shape: [N_objects, 64]\n # y_train.shape: [N_objects]\n\n best = np.inf\n best_i = 0\n for i, sample in enumerate(x_train):\n candidate = euclidean_distance_numpy(sample, vec)\n if candidate < best:\n best = candidate\n best_i = i\n return y_train[best_i]\n",
"_____no_output_____"
],
[
"def predict(x, x_train, y_train):\n # it is not the best way, but it is easy to understand\n classes = []\n for vec in x:\n predicted_cls = predict_object_class(vec, x_train, y_train)\n classes.append(predicted_cls)\n return np.array(classes)",
"_____no_output_____"
],
[
"predicted_targets = predict(test_images, train_images, train_targets)\naccuracy = np.mean(predicted_targets == test_targets)\nprint(\"Accuracy {:.1f}%\".format(accuracy * 100))",
"Accuracy 98.1%\n"
],
[
"correct = predicted_targets == test_targets\nincorrect = ~correct\n\n\nf, axes = plt.subplots(2, 5, figsize=(8, 3))\n\n\nfor ax, image, y_pred, y_test in zip(axes[0],\n test_images[correct],\n predicted_targets[correct],\n test_targets[correct]):\n\n ax.imshow(image.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title('Pred: {}, Real: {}'.format(y_pred, y_test))\n ax.set_axis_off()\n\nfor ax, image, y_pred, y_test in zip(axes[1],\n test_images[incorrect],\n predicted_targets[incorrect],\n test_targets[incorrect]):\n\n ax.imshow(image.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title('Pred: {}, True: {}'.format(y_pred, y_test))\n ax.set_axis_off()\n\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"* You can try to use other <a href=\"https://en.wikipedia.org/wiki/Metric_(mathematics)#Examples\">metrics</a>\n* You can experiment with other datasets:\n * **MNIST**:\n 1. [Download](http://yann.lecun.com/exdb/mnist/)\n 2. `from dataset_utils import load_mnist`\n 3. `train = list(load_mnist('training', path='<PATH TO A FOLDER>'))`\n * **CIFAR-10** & **CIFAR-100**:\n 1. [Download](https://www.cs.toronto.edu/~kriz/cifar.html)\n 2. `from dataset_utils import load_cifar`\n 3. `data = load_cifar('<PATH TO A FILE>')`\n",
"_____no_output_____"
],
[
"## 6. Linear Algebra Basics\n* This introduction is devoted to Python and NumPy basics.\n* We used 1-dimensional NumPy arrays for data manipulation.\n* During the coming assignments, n-dimensional (2, 3 and even 4-dimensional) arrays will be exploited\n* In order to make it easier, we provide you with several useful links\n * [Linear Algebra Review and Reference](http://cs229.stanford.edu/section/cs229-linalg.pdf). Chapters **1.1-3.2, 3.5** cover almost all the linear algebra needed for deep learning\n * [The Matrix Cookbook](https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf) could be used as a cheat sheet\n * [Deep Learning](http://www.deeplearningbook.org) is the definitive book: an explanation of almost any aspect of deep learning can be found there.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4a5ac8a3856c7a68d4f3784c6a2e7ecaf9aaa5ae
| 37,341 |
ipynb
|
Jupyter Notebook
|
code/08.05-gradient_descent.ipynb
|
computational-class/cjc
|
1569ce7a7a85571bd2e399ab20fb950d7f8963b2
|
[
"MIT"
] | 65 |
2017-04-06T01:00:19.000Z
|
2020-11-16T15:30:30.000Z
|
code/08.05-gradient_descent.ipynb
|
AnxietyVendor/cjc
|
4bfd22ea4f360a803093a95bd9b1a2d497b7200a
|
[
"MIT"
] | 90 |
2017-05-12T10:09:06.000Z
|
2019-09-17T13:13:22.000Z
|
code/08.05-gradient_descent.ipynb
|
AnxietyVendor/cjc
|
4bfd22ea4f360a803093a95bd9b1a2d497b7200a
|
[
"MIT"
] | 48 |
2017-03-22T02:58:34.000Z
|
2020-11-16T03:08:47.000Z
| 31.942686 | 6,332 | 0.605501 |
[
[
[
"***\n***\n\n\n# Introduction to Gradient Descent\n\nThe Idea Behind Gradient Descent\n\n***\n***\n\n",
"_____no_output_____"
],
[
"<img src='./img/stats/gradient_descent.gif' align = \"middle\" width = '400px'>",
"_____no_output_____"
],
[
"<img align=\"left\" style=\"padding-right:10px;\" width =\"400px\" src=\"./img/stats/gradient2.png\">\n\n\n**How do you find the fastest way down the mountain?**\n- Suppose the mountain is covered in thick fog, so the path down cannot be seen;\n- Suppose a fall cannot hurt you!\n\n - You can only use information from your immediate surroundings to find a path down.\n - From your current position, find the steepest direction and take a step downhill in that direction.\n",
"_____no_output_____"
],
[
"<img style=\"padding-right:10px;\" width =\"500px\" src=\"./img/stats/gradient.png\" align = 'right'>\n**The gradient is the vector of partial derivatives**\n\nOne approach to maximizing a function is to\n- pick a random starting point, \n- compute the gradient, \n- take a small step in the direction of the gradient, and \n- repeat with a new starting point.\n\n",
"_____no_output_____"
],
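The four-step loop above can be sketched in a few lines of Python. This is a hypothetical sketch (helper names `gradient_step` and `minimize_by_gradient` are not from the notebook); it minimizes f(v) = Σvᵢ² rather than maximizing, so it steps against the gradient:

```python
import random

def gradient_step(v, gradient, step_size):
    """Take a small step against the gradient (for minimization)."""
    return [v_i - step_size * g_i for v_i, g_i in zip(v, gradient)]

def minimize_by_gradient(grad_fn, v, step_size=0.1, steps=200):
    """Repeat: compute the gradient, take a small step, use the new point."""
    for _ in range(steps):
        v = gradient_step(v, grad_fn(v), step_size)
    return v

# pick a random starting point and minimize f(v) = sum(v_i^2),
# whose gradient is 2*v
v0 = [random.uniform(-10, 10) for _ in range(3)]
v_min = minimize_by_gradient(lambda v: [2 * v_i for v_i in v], v0)
```

With this objective, each iteration shrinks every coordinate by a factor of 0.8, so `v_min` lands very close to the origin regardless of the random start.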
[
"\n\n<img src='./img/stats/gd.webp' width = '700' align = 'middle'>\nLet's represent parameters as $\\Theta$, learning rate as $\\alpha$, and gradient as $\\bigtriangledown J(\\Theta)$, ",
"_____no_output_____"
],
[
"Finding the best model is an optimization problem. We either\n- “minimize the error of the model” or\n- “maximize the likelihood of the data.” ",
"_____no_output_____"
],
[
"We’ll frequently need to maximize (or minimize) functions. \n- to find the input vector v that produces the largest (or smallest) possible value.\n",
"_____no_output_____"
],
[
"# Mathematics behind Gradient Descent\n\nA simple mathematical intuition behind one of the commonly used optimisation algorithms in Machine Learning.\n\nhttps://www.douban.com/note/713353797/",
"_____no_output_____"
],
[
"The cost or loss function:\n\n$$Cost = \\frac{1}{N} \\sum_{i = 1}^N (Y' -Y)^2$$",
"_____no_output_____"
],
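Computed directly in NumPy, the cost for a handful of made-up predictions Y' against observed Y (numbers invented purely for illustration) looks like:

```python
import numpy as np

Y_pred = np.array([2.9, 4.1, 5.2])  # Y': hypothetical model predictions
Y = np.array([3.0, 4.0, 5.0])       # observed values

# Cost = (1/N) * sum((Y' - Y)^2)
cost = np.mean((Y_pred - Y) ** 2)
```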
[
"\n\n<img src='./img/stats/x2.webp' width = '700' align = 'center'>",
"_____no_output_____"
],
[
"Parameters with small changes:\n$$ m_1 = m_0 - \\delta m, b_1 = b_0 - \\delta b$$\n\nThe cost function J is a function of m and b:\n\n$$J_{m, b} = \\frac{1}{N} \\sum_{i = 1}^N (Y' -Y)^2 = \\frac{1}{N} \\sum_{i = 1}^N Error_i^2$$",
"_____no_output_____"
],
[
"$$\\frac{\\partial J}{\\partial m} = 2 Error \\frac{\\partial}{\\partial m}Error$$\n\n$$\\frac{\\partial J}{\\partial b} = 2 Error \\frac{\\partial}{\\partial b}Error$$",
"_____no_output_____"
],
[
"Let's fit the data with linear regression:\n\n$$\\frac{\\partial}{\\partial m}Error = \\frac{\\partial}{\\partial m}(Y' - Y) = \\frac{\\partial}{\\partial m}(mX + b - Y)$$\n\nSince $X, b, Y$ are constant:\n\n$$\\frac{\\partial}{\\partial m}Error = X$$",
"_____no_output_____"
],
[
"$$\\frac{\\partial}{\\partial b}Error = \\frac{\\partial}{\\partial b}(Y' - Y) = \\frac{\\partial}{\\partial b}(mX + b - Y)$$\n\nSince $X, m, Y$ are constant:\n\n$$\\frac{\\partial}{\\partial b}Error = 1$$",
"_____no_output_____"
],
[
"Thus:\n \n$$\\frac{\\partial J}{\\partial m} = 2 * Error * X$$\n$$\\frac{\\partial J}{\\partial b} = 2 * Error$$",
"_____no_output_____"
],
[
"Let's drop the constant 2 and multiply by the learning rate $\\alpha$, which determines how large a step to take:\n\n$$\\frac{\\partial J}{\\partial m} = Error * X * \\alpha$$\n$$\\frac{\\partial J}{\\partial b} = Error * \\alpha$$\n",
"_____no_output_____"
],
[
"Since $ m_1 = m_0 - \\delta m, b_1 = b_0 - \\delta b$:\n\n$$ m_1 = m_0 - Error * X * \\alpha$$\n\n$$b_1 = b_0 - Error * \\alpha$$\n\n**Notice** that the intercept b can be viewed as the coefficient for a constant feature X = 1. Thus, the above two equations are in essence the same.\n\nRepresenting the parameters as $\\Theta$, the learning rate as $\\alpha$, and the gradient as $\\bigtriangledown J(\\Theta)$, we have:\n\n\n$$\\Theta_1 = \\Theta_0 - \\alpha \\bigtriangledown J(\\Theta)$$\n",
"_____no_output_____"
],
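A single update of m and b under these rules, with made-up numbers (not the notebook's data):

```python
alpha = 0.01          # learning rate
m, b = 1.0, 1.0       # current parameters
x, y = 2.0, 7.0       # one data point (invented for illustration)

error = (m * x + b) - y      # Y' - Y = 3 - 7 = -4
m = m - error * x * alpha    # m_1 = m_0 - Error * X * alpha
b = b - error * alpha        # b_1 = b_0 - Error * alpha
```

A negative error (prediction too low) pushes both parameters up, as expected.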
[
"\n\n<img src='./img/stats/gd.webp' width = '800' align = 'center'>",
"_____no_output_____"
],
[
"Hence, to solve for the gradient, we iterate through our data points using our new $m$ and $b$ values and compute the partial derivatives. \n\nThis new gradient tells us \n- the slope of our cost function at our current position, and \n- the direction we should move to update our parameters. \n\nThe size of our update is controlled by the learning rate.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# Size of the points dataset.\nm = 20\n# Points x-coordinate and dummy value (x0, x1).\nX0 = np.ones((m, 1))\nX1 = np.arange(1, m+1).reshape(m, 1)\nX = np.hstack((X0, X1))\n# Points y-coordinate\ny = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,\n 11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1)\n\n# The Learning Rate alpha.\nalpha = 0.01",
"_____no_output_____"
],
[
"def error_function(theta, X, y):\n    '''Error function J definition: J = (1/(2m)) * sum of squared errors.'''\n    diff = np.dot(X, theta) - y\n    return (1./(2*m)) * np.dot(np.transpose(diff), diff)\n\ndef gradient_function(theta, X, y):\n    '''Gradient of the function J definition.'''\n    diff = np.dot(X, theta) - y\n    return (1./m) * np.dot(np.transpose(X), diff)\n\ndef gradient_descent(X, y, alpha):\n    '''Perform gradient descent.'''\n    theta = np.array([1, 1]).reshape(2, 1)\n    gradient = gradient_function(theta, X, y)\n    while not np.all(np.absolute(gradient) <= 1e-5):\n        theta = theta - alpha * gradient\n        gradient = gradient_function(theta, X, y)\n    return theta\n\n# source:https://www.jianshu.com/p/c7e642877b0e",
"_____no_output_____"
],
[
"optimal = gradient_descent(X, y, alpha)\nprint('Optimal parameters Theta:', optimal[0][0], optimal[1][0])\nprint('Error function:', error_function(optimal, X, y)[0,0])\n",
"Optimal parameters Theta: 0.5158328581734093 0.9699216324486175\nError function: 405.98496249324046\n"
]
],
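As a sanity check on the fitted θ above, ordinary least squares also has the closed-form solution θ = (XᵀX)⁻¹ Xᵀ y (the normal equation). The sketch below rebuilds the same X and y and should land near θ ≈ (0.516, 0.970), matching the gradient-descent result:

```python
import numpy as np

m = 20
X = np.hstack((np.ones((m, 1)), np.arange(1, m + 1).reshape(m, 1)))
y = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,
              11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1)

# normal equation: solve (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
```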
[
[
"# This is the End!",
"_____no_output_____"
],
[
"# Estimating the Gradient",
"_____no_output_____"
],
[
"If f is a function of one variable, its derivative at a point x measures how f(x) changes when we make a very small change to x. \n\n> It is defined as the limit of the difference quotients:\n\n\nThe difference quotient is the change in the dependent variable divided by the change in the independent variable.",
"_____no_output_____"
]
],
[
[
"def difference_quotient(f, x, h):\n return (f(x + h) - f(x)) / h",
"_____no_output_____"
]
],
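For f(x) = x², the quotient visibly approaches the true derivative 2x as h shrinks (the definition is repeated so the sketch stands alone):

```python
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

square = lambda x: x * x

# at x = 3 the true derivative is 6
approx_coarse = difference_quotient(square, 3.0, 1.0)   # (16 - 9) / 1 = 7.0
approx_fine = difference_quotient(square, 3.0, 1e-6)    # much closer to 6
```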
[
[
"For many functions it’s easy to exactly calculate derivatives. \n\nFor example, the square function:\n\n def square(x): \n return x * x\n\nhas the derivative:\n \n def derivative(x): \n return 2 * x",
"_____no_output_____"
]
],
[
[
"def square(x):\n return x * x\n\ndef derivative(x):\n return 2 * x\n\nderivative_estimate = lambda x: difference_quotient(square, x, h=0.00001)",
"_____no_output_____"
],
[
"def sum_of_squares(v):\n \"\"\"computes the sum of squared elements in v\"\"\"\n return sum(v_i ** 2 for v_i in v)",
"_____no_output_____"
],
[
"# plot to show they're basically the same\nimport matplotlib.pyplot as plt\nx = range(-10,10)\nplt.plot(x, list(map(derivative, x)), 'rx') # red x\nplt.plot(x, list(map(derivative_estimate, x)), 'b+') # blue +\nplt.show()",
"_____no_output_____"
]
],
[
[
"When f is a function of many variables, it has multiple partial derivatives.",
"_____no_output_____"
]
],
[
[
"def partial_difference_quotient(f, v, i, h):\n # add h to just the i-th element of v\n w = [v_j + (h if j == i else 0)\n for j, v_j in enumerate(v)]\n return (f(w) - f(v)) / h\n\ndef estimate_gradient(f, v, h=0.00001):\n return [partial_difference_quotient(f, v, i, h)\n for i, _ in enumerate(v)]",
"_____no_output_____"
]
],
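For instance, estimating the gradient of sum-of-squares at v = [1, 2] should give roughly its analytic gradient [2, 4] (definitions repeated so the snippet is self-contained):

```python
def partial_difference_quotient(f, v, i, h):
    # add h to just the i-th element of v
    w = [v_j + (h if j == i else 0) for j, v_j in enumerate(v)]
    return (f(w) - f(v)) / h

def estimate_gradient(f, v, h=0.00001):
    return [partial_difference_quotient(f, v, i, h) for i, _ in enumerate(v)]

def sum_of_squares(v):
    return sum(v_i ** 2 for v_i in v)

grad = estimate_gradient(sum_of_squares, [1.0, 2.0])  # analytic gradient is [2, 4]
```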
[
[
"# Using the Gradient",
"_____no_output_____"
]
],
[
[
"def step(v, direction, step_size):\n \"\"\"move step_size in the direction from v\"\"\"\n return [v_i + step_size * direction_i\n for v_i, direction_i in zip(v, direction)]\n\ndef sum_of_squares_gradient(v):\n return [2 * v_i for v_i in v]",
"_____no_output_____"
],
[
"from collections import Counter\nfrom linear_algebra import distance, vector_subtract, scalar_multiply\nfrom functools import reduce\nimport math, random",
"_____no_output_____"
],
[
"print(\"using the gradient\")\n\n# generate 3 numbers \nv = [random.randint(-10,10) for i in range(3)]\nprint(v)\ntolerance = 0.0000001\n\nn = 0\nwhile True:\n gradient = sum_of_squares_gradient(v) # compute the gradient at v\n if n%50 ==0:\n print(v, sum_of_squares(v))\n next_v = step(v, gradient, -0.01) # take a negative gradient step\n if distance(next_v, v) < tolerance: # stop if we're converging\n break\n v = next_v # continue if we're not\n n += 1\n\nprint(\"minimum v\", v)\nprint(\"minimum value\", sum_of_squares(v))",
"using the gradient\n[-4, 10, 6]\n[-4, 10, 6] 152\n[-1.4566787203484681, 3.641696800871171, 2.1850180805227026] 20.15817249600249\n[-0.5304782235790126, 1.3261955589475318, 0.7957173353685193] 2.6733678840696777\n[-0.19318408497395115, 0.482960212434878, 0.28977612746092685] 0.35454086152861664\n[-0.07035178642288623, 0.17587946605721566, 0.10552767963432938] 0.04701905160246833\n[-0.025619987555179666, 0.0640499688879492, 0.03842998133276951] 0.006235645742111834\n[-0.0093300226718057, 0.023325056679514268, 0.013995034007708552] 0.0008269685690358806\n[-0.0033977113715970304, 0.008494278428992584, 0.005096567057395551] 0.00010967220436445803\n[-0.0012373434632228497, 0.003093358658057131, 0.0018560151948342769] 1.4544679036813049e-05\n[-0.0004506029731597509, 0.0011265074328993788, 0.0006759044597396269] 1.9289088744938724e-06\n[-0.00016409594058189027, 0.00041023985145472635, 0.00024614391087283554] 2.558110382968256e-07\n[-5.975876618530154e-05, 0.00014939691546325416, 8.963814927795238e-05] 3.3925546291900725e-08\n[-2.1762330764102098e-05, 5.440582691025532e-05, 3.264349614615315e-05] 4.499190882718763e-09\n[-7.925181032313087e-06, 1.9812952580782747e-05, 1.1887771548469638e-05] 5.96680696751885e-10\n[-2.8861106411699463e-06, 7.215276602924873e-06, 4.32916596175492e-06] 7.913152901420692e-11\nminimum v [-1.6064572436336709e-06, 4.0161431090841815e-06, 2.409685865450507e-06]\nminimum value 2.4516696318419405e-11\n"
]
],
[
[
"# Choosing the Right Step Size",
"_____no_output_____"
],
[
"Although the rationale for moving against the gradient is clear, \n- how far to move is not. \n - Indeed, choosing the right step size is more of an art than a science.",
"_____no_output_____"
],
[
"Methods:\n1. Using a fixed step size\n1. Gradually shrinking the step size over time\n1. At each step, choosing the step size that minimizes the value of the objective function",
"_____no_output_____"
]
],
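The third method — trying several candidate sizes at each step and keeping the one that minimizes the objective — can be sketched as follows (`best_step` is a hypothetical helper, not the notebook's code):

```python
def best_step(f, v, gradient, step_sizes=(100, 10, 1, 0.1, 0.01, 0.001)):
    """Try every candidate step size and keep the point that minimizes f."""
    candidates = [[v_i - s * g_i for v_i, g_i in zip(v, gradient)]
                  for s in step_sizes]
    return min(candidates, key=f)

f = lambda v: sum(x * x for x in v)   # objective: sum of squares
v = [4.0, -2.0]
grad = [2 * x for x in v]             # its gradient
v_next = best_step(f, v, grad)        # here the winning step size is 0.1
```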
[
[
"step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]",
"_____no_output_____"
]
],
[
[
"It is possible that certain step sizes will result in invalid inputs for our function. \n\nSo we’ll need to create a “safe apply” function that\n- returns infinity for invalid inputs\n - (which should never be the minimum of anything)",
"_____no_output_____"
]
],
[
[
"def safe(f):\n \"\"\"define a new function that wraps f and return it\"\"\"\n def safe_f(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except:\n return float('inf') # this means \"infinity\" in Python\n return safe_f",
"_____no_output_____"
]
],
[
[
"# Putting It All Together",
"_____no_output_____"
],
[
"- **target_fn** that we want to minimize\n- its **gradient_fn**\n\nFor example, target_fn could represent the errors in a model as a function of its parameters, and we need to choose a starting value for those parameters, `theta_0`. ",
"_____no_output_____"
]
],
[
[
"def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):\n \"\"\"use gradient descent to find theta that minimizes target function\"\"\"\n\n step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]\n\n theta = theta_0 # set theta to initial value\n target_fn = safe(target_fn) # safe version of target_fn\n value = target_fn(theta) # value we're minimizing\n\n while True:\n gradient = gradient_fn(theta)\n next_thetas = [step(theta, gradient, -step_size)\n for step_size in step_sizes]\n\n # choose the one that minimizes the error function\n next_theta = min(next_thetas, key=target_fn)\n next_value = target_fn(next_theta)\n\n # stop if we're \"converging\"\n if abs(value - next_value) < tolerance:\n return theta\n else:\n theta, value = next_theta, next_value",
"_____no_output_____"
],
[
"# minimize_batch example\nv = [random.randint(-10,10) for i in range(3)]\nv = minimize_batch(sum_of_squares, sum_of_squares_gradient, v)\nprint(\"minimum v\", v)\nprint(\"minimum value\", sum_of_squares(v))",
"minimum v [0.0009304595970494407, -0.001196305196206424, -0.00026584559915698326]\nminimum value 2.367575066803034e-06\n"
]
],
[
[
"Sometimes we’ll instead want to maximize a function, which we can do by minimizing its negative",
"_____no_output_____"
]
],
[
[
"def negate(f):\n \"\"\"return a function that for any input x returns -f(x)\"\"\"\n return lambda *args, **kwargs: -f(*args, **kwargs)\n\ndef negate_all(f):\n \"\"\"the same when f returns a list of numbers\"\"\"\n return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)]\n\ndef maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):\n return minimize_batch(negate(target_fn),\n negate_all(gradient_fn),\n theta_0,\n tolerance)",
"_____no_output_____"
]
],
[
[
"Using the batch approach, each gradient step requires us to make a prediction and compute the gradient for the whole data set, which makes each step take a long time.",
"_____no_output_____"
],
[
"Error functions are additive\n- The predictive error on the whole data set is simply the sum of the predictive errors for each data point.\n\nWhen this is the case, we can instead apply a technique called **stochastic gradient descent** \n- which computes the gradient (and takes a step) for only one point at a time. \n- It cycles over our data repeatedly until it reaches a stopping point.",
"_____no_output_____"
],
[
"# Stochastic Gradient Descent",
"_____no_output_____"
],
[
"During each cycle, we’ll want to iterate through our data in a random order:",
"_____no_output_____"
]
],
[
[
"def in_random_order(data):\n \"\"\"generator that returns the elements of data in random order\"\"\"\n indexes = [i for i, _ in enumerate(data)] # create a list of indexes\n random.shuffle(indexes) # shuffle them\n for i in indexes: # return the data in that order\n yield data[i]",
"_____no_output_____"
]
],
[
[
"This approach avoids circling around near a minimum forever\n- whenever we stop getting improvements we’ll decrease the step size and eventually quit.",
"_____no_output_____"
]
],
[
[
"def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):\n data = list(zip(x, y))\n theta = theta_0 # initial guess\n alpha = alpha_0 # initial step size\n min_theta, min_value = None, float(\"inf\") # the minimum so far\n iterations_with_no_improvement = 0\n\n # if we ever go 100 iterations with no improvement, stop\n while iterations_with_no_improvement < 100:\n value = sum( target_fn(x_i, y_i, theta) for x_i, y_i in data )\n\n if value < min_value:\n # if we've found a new minimum, remember it\n # and go back to the original step size\n min_theta, min_value = theta, value\n iterations_with_no_improvement = 0\n alpha = alpha_0\n else:\n # otherwise we're not improving, so try shrinking the step size\n iterations_with_no_improvement += 1\n alpha *= 0.9\n\n # and take a gradient step for each of the data points\n for x_i, y_i in in_random_order(data):\n gradient_i = gradient_fn(x_i, y_i, theta)\n theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))\n\n return min_theta",
"_____no_output_____"
],
[
"def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):\n return minimize_stochastic(negate(target_fn),\n negate_all(gradient_fn),\n x, y, theta_0, alpha_0)\n",
"_____no_output_____"
],
[
"print(\"using minimize_stochastic\")\n\nx = list(range(101))\ny = [3*x_i + random.randint(-10, 20) for x_i in x]\n\n# theta must be a vector for vector_subtract/scalar_multiply, so use a 1-element list\ntheta_0 = [random.randint(-10, 10)]\n\n# per-point squared error and its gradient for the model y ~ theta * x\ndef squared_error(x_i, y_i, theta):\n    return (theta[0] * x_i - y_i) ** 2\n\ndef squared_error_gradient(x_i, y_i, theta):\n    return [2 * (theta[0] * x_i - y_i) * x_i]\n\nv = minimize_stochastic(squared_error, squared_error_gradient, x, y, theta_0, alpha_0=0.0001)\n\nprint(\"minimum theta\", v)\n ",
"_____no_output_____"
]
],
[
[
"Scikit-learn has a Stochastic Gradient Descent module http://scikit-learn.org/stable/modules/sgd.html",
"_____no_output_____"
]
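A hedged sketch of that scikit-learn module on synthetic data (assuming scikit-learn is installed; every number below is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 0.1, size=200)  # y ~ 3x + 2 plus noise

# SGDRegressor fits a linear model by stochastic gradient descent
model = SGDRegressor(max_iter=1000, tol=1e-6, random_state=0)
model.fit(X, y)
```

After fitting, `model.coef_` and `model.intercept_` should sit close to the true slope 3 and intercept 2.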
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a5acd53ce95cee11b6047a40e4708629e0f638e
| 471,319 |
ipynb
|
Jupyter Notebook
|
Sentimental_Analysis_pre_model.ipynb
|
Build-Week-Saltiest-Hack-News-Trolls-2/datascience
|
f7a65345ac583a5918b51bc10a28a5b5a4d55f61
|
[
"MIT"
] | null | null | null |
Sentimental_Analysis_pre_model.ipynb
|
Build-Week-Saltiest-Hack-News-Trolls-2/datascience
|
f7a65345ac583a5918b51bc10a28a5b5a4d55f61
|
[
"MIT"
] | null | null | null |
Sentimental_Analysis_pre_model.ipynb
|
Build-Week-Saltiest-Hack-News-Trolls-2/datascience
|
f7a65345ac583a5918b51bc10a28a5b5a4d55f61
|
[
"MIT"
] | 3 |
2020-04-28T15:06:21.000Z
|
2020-04-30T02:38:19.000Z
| 107.828643 | 287,294 | 0.774231 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Build-Week-Saltiest-Hack-News-Trolls-2/datascience/blob/Moly-malibu-patch-1/Sentimental_Analysis_pre_model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"\n# Sentiment Analysis Project:",
"_____no_output_____"
]
],
[
[
"!pip install vaderSentiment",
"Requirement already satisfied: vaderSentiment in /usr/local/lib/python3.6/dist-packages (3.3.1)\n"
],
[
"#import Library\nimport re\nimport string\n\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nimport pandas as pd\nimport numpy as np\nimport spacy\nfrom collections import Counter\nfrom bs4 import BeautifulSoup",
"_____no_output_____"
],
[
"#create dataset\nimport pandas as pd\ndf = pd.read_csv('saltyhacker.csv')",
"_____no_output_____"
],
[
"#Str the data\ndf['Text'] = df['Text'].astype(str)",
"_____no_output_____"
]
],
[
[
"#CLEAN DATA",
"_____no_output_____"
]
],
[
[
"#clean DF\ndef clean_description(desc):\n    soup = BeautifulSoup(desc, 'html.parser')\n    return soup.get_text()\ndf['rating'] = df['Text'].apply(clean_description)\ndf['words_length'] = df['rating'].str.len()",
"_____no_output_____"
],
[
"#clean HTML\nimport lxml.html.clean \n\nlxml.html.clean.clean_html('<html><head></head><body onload=\"loadfunc()\">my text</body></html>')\nprint (BeautifulSoup('<').string) \nprint (BeautifulSoup('&').string) ",
"None\nNone\n"
],
[
"#CLEAN DATA\n#remove whitespace\ndf['rating'] = df['rating'].str.strip().str.lower()\ndf['Text'] = df['Text'].str.strip().str.lower()\n\n#Start with date\ndf['rating'].str.match('\\d?\\d/\\d?\\d/\\d{4}').all()\n\n#\\s indicates a white space. So [^\\s] is any non-white space and includes letters, numbers, special characters\ndf['rating'] = df['rating'].str.replace('[^a-zA-Z\\s]', '').str.replace('\\s+', ' ')\n\n#Replace occurrences of pattern/regex in the Series/Index with some other string\ndf['Text'] = df['Text'].str.replace('[^a-zA-Z\\s]', '').str.replace('\\s+', ' ')",
"_____no_output_____"
],
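The effect of those two regex substitutions on a sample string (a standalone sketch, not tied to the dataframe; the input is invented):

```python
import re

def clean_text(s):
    # same two-step cleanup as above: strip/lowercase, drop non-letter
    # characters, then collapse runs of whitespace into one space
    s = re.sub(r'[^a-zA-Z\s]', '', s.strip().lower())
    return re.sub(r'\s+', ' ', s)

cleaned = clean_text("  Hello,   World!!")
```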
[
"df.head()",
"_____no_output_____"
],
[
"df['rating'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"# Sentiment Analysis Using the VADER Model\n\"VADER (Valence Aware Dictionary and sEntiment Reasoner) is a sentiment intensity tool added to NLTK in 2014. \n\nUnlike other techniques that require training on related text before use, VADER is ready to go for analysis without any special setup. VADER is unique in that it makes fine-tuned distinctions between varying degrees of positivity and negativity. \n\nFor example, VADER scores “comfort” moderately positively and “euphoria” extremely positively. It also attempts to capture and score textual features common in informal online text such as capitalizations, exclamation points, and emoticons.\"\n\nhttps://programminghistorian.org/en/lessons/sentiment-analysis\n\nhttp://www.nltk.org/_modules/nltk/sentiment/vader.html",
"_____no_output_____"
]
],
[
[
"# VADER model\nfrom vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer\nanalyzer = SentimentIntensityAnalyzer()\n\ndef vaderize(sentence):\n    return analyzer.polarity_scores(sentence)",
"_____no_output_____"
],
[
"#creared columns score with numbers\ndf['Scores'] = df['rating'].apply(vaderize)",
"_____no_output_____"
],
[
"#Create score by differente classification position into the text\ndf[['negative', 'neutral', 'positive', 'compound']] = df.Scores.apply(pd.Series)",
"_____no_output_____"
],
[
"for text in df.sort_values(by='neutral', ascending=False)['rating'].head(5):\n print(f\"------ Topic ------\")\n print(text, end=\"\\n\\n\")",
"------ Topic ------\ncustom graphics and web design projects just had to start the timer keep track of what we were doing and submit the final time with teams it all adds up and you can see everyones time it took on a project and you can all add to the same time record\n\n------ Topic ------\ni just ordered a brand new apple macbook air with gigs for my daughter i cant even get apple to charge my card and ship the thing their estimate is midmay but i havent seen any traction on it at all its as if i didnt even order it\n\n------ Topic ------\nthat makes sense though i suppose for a road warrior setup the source ip might change every so often right\n\n------ Topic ------\nfalse the se came out after the s\n\n------ Topic ------\nnan\n\n"
],
[
"#To See the count in the column\ndf['positive'].value_counts() ",
"_____no_output_____"
],
[
"#To See the count in the column\ndf['neutral'].value_counts() ",
"_____no_output_____"
],
[
"#To See the count in the column\ndf['negative'].value_counts() ",
"_____no_output_____"
],
[
"# Word cloud of comments whose negative score is zero (i.e., non-negative text)\nfrom wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\nnon_negative_words = ' '.join([text for text in df['rating'][df['negative'] == 0]])\n\nwordcloud = WordCloud(width=800, height=500,\nrandom_state=21, max_font_size=110).generate(non_negative_words)\nplt.figure(figsize=(10, 7))\nplt.imshow(wordcloud, interpolation=\"bilinear\")\nplt.axis('off')\nplt.show()",
"_____no_output_____"
],
[
"# Calculate the VADER sentiment score, using the compound score from the analyzer defined above\ndf['final_score'] = df['Text'].apply(lambda x: analyzer.polarity_scores(x)['compound'])\n\n# Discretize the score into 5 equal-width bins labeled 1 (most negative) to 5 (most positive)\ndf['final_pred'] = pd.cut(df['final_score'], bins=5, labels=[1, 2, 3, 4, 5])\ndf = df.drop('final_score', axis=1)\ndf.head(7)",
"_____no_output_____"
],
[
"#percentage value in a column by category \ndf['final_pred'].value_counts(normalize=True) * 100",
"_____no_output_____"
]
],
[
[
"#SIMPLE MODEL USING TEXTBLOB LIBRARY",
"_____no_output_____"
],
[
"TextBlob is a Python (2 and 3) library for processing textual data. It provides a consistent API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, and sentiment analysis through a simple interface.",
"_____no_output_____"
],
[
"# Note:\n\nThis small model generally shows whether the text is neutral, positive, or negative, which is essentially what the whole project is looking for. It corroborates the VADER model: in general, most comments are neutral, according to the percentages in each column.",
"_____no_output_____"
]
],
[
[
"#Modelo use Textblob\nimport csv\nfrom textblob import TextBlob\n\narticle = 'saltyhacker.csv'\n\nwith open(article, 'r') as csvfile:\n rows = csv.reader(csvfile)\n for row in rows:\n sentence = row[0]\n print (sentence)\n blob = TextBlob(sentence)\n print (blob.sentiment)",
"User\nSentiment(polarity=0.0, subjectivity=0.0)\nthu2111\nSentiment(polarity=0.0, subjectivity=0.0)\nnoisy_boy\nSentiment(polarity=0.0, subjectivity=0.0)\nlopis\nSentiment(polarity=0.0, subjectivity=0.0)\naexol\nSentiment(polarity=0.0, subjectivity=0.0)\nfrockington1\nSentiment(polarity=0.0, subjectivity=0.0)\nalpaca128\nSentiment(polarity=0.0, subjectivity=0.0)\nalkonaut\nSentiment(polarity=0.0, subjectivity=0.0)\ngiantg2\nSentiment(polarity=0.0, subjectivity=0.0)\n9wzYQbTYsAIc\nSentiment(polarity=0.0, subjectivity=0.0)\nJonnax\nSentiment(polarity=0.0, subjectivity=0.0)\nChris2048\nSentiment(polarity=0.0, subjectivity=0.0)\nthu2111\nSentiment(polarity=0.0, subjectivity=0.0)\nwinkeltripel\nSentiment(polarity=0.0, subjectivity=0.0)\npjmlp\nSentiment(polarity=0.0, subjectivity=0.0)\nWaterluvian\nSentiment(polarity=0.0, subjectivity=0.0)\nleejo\nSentiment(polarity=0.0, subjectivity=0.0)\nwaynenilsen\nSentiment(polarity=0.0, subjectivity=0.0)\ncameronbrown\nSentiment(polarity=0.0, subjectivity=0.0)\nasdkjh345fd\nSentiment(polarity=0.0, subjectivity=0.0)\nretSava\nSentiment(polarity=0.0, subjectivity=0.0)\nsseth\nSentiment(polarity=0.0, subjectivity=0.0)\nWaterluvian\nSentiment(polarity=0.0, subjectivity=0.0)\nasdkjh345fd\nSentiment(polarity=0.0, subjectivity=0.0)\nchrisbennet\nSentiment(polarity=0.0, subjectivity=0.0)\nDeradon\nSentiment(polarity=0.0, subjectivity=0.0)\natilaneves\nSentiment(polarity=0.0, subjectivity=0.0)\nTsiCClawOfLight\nSentiment(polarity=0.0, subjectivity=0.0)\ntimbaboon\nSentiment(polarity=0.0, subjectivity=0.0)\nzonefuenf\nSentiment(polarity=0.0, subjectivity=0.0)\ndanieltillett\nSentiment(polarity=0.0, subjectivity=0.0)\nasdkjh345fd\nSentiment(polarity=0.0, subjectivity=0.0)\ndx034\nSentiment(polarity=0.0, subjectivity=0.0)\nshakna\nSentiment(polarity=0.0, subjectivity=0.0)\nimtringued\nSentiment(polarity=0.0, subjectivity=0.0)\nnthnclrk\nSentiment(polarity=0.0, subjectivity=0.0)\narunc\nSentiment(polarity=0.0, 
subjectivity=0.0)\npjc50\nSentiment(polarity=0.0, subjectivity=0.0)\nmstade\nSentiment(polarity=0.0, subjectivity=0.0)\nsaalweachter\nSentiment(polarity=0.0, subjectivity=0.0)\npjc50\nSentiment(polarity=0.0, subjectivity=0.0)\nzozbot234\nSentiment(polarity=0.0, subjectivity=0.0)\nzynkb0a\nSentiment(polarity=0.0, subjectivity=0.0)\nclaudiug\nSentiment(polarity=0.0, subjectivity=0.0)\njaclaz\nSentiment(polarity=0.0, subjectivity=0.0)\nGoblinSlayer\nSentiment(polarity=0.0, subjectivity=0.0)\nmrtksn\nSentiment(polarity=0.0, subjectivity=0.0)\nKiro\nSentiment(polarity=0.0, subjectivity=0.0)\nbeyondcompute\nSentiment(polarity=0.0, subjectivity=0.0)\ntripzilch\nSentiment(polarity=0.0, subjectivity=0.0)\nmercora\nSentiment(polarity=0.0, subjectivity=0.0)\nupofadown\nSentiment(polarity=0.0, subjectivity=0.0)\nBrandoElFollito\nSentiment(polarity=0.0, subjectivity=0.0)\npestaa\nSentiment(polarity=0.0, subjectivity=0.0)\nsimion314\nSentiment(polarity=0.0, subjectivity=0.0)\nbryogenic\nSentiment(polarity=0.0, subjectivity=0.0)\nasdkjh345fd\nSentiment(polarity=0.0, subjectivity=0.0)\ntekni5\nSentiment(polarity=0.0, subjectivity=0.0)\ntaneq\nSentiment(polarity=0.0, subjectivity=0.0)\njka\nSentiment(polarity=0.0, subjectivity=0.0)\nglobular-toast\nSentiment(polarity=0.0, subjectivity=0.0)\nTraster\nSentiment(polarity=0.0, subjectivity=0.0)\ndef8cefe\nSentiment(polarity=0.0, subjectivity=0.0)\nspiritplumber\nSentiment(polarity=0.0, subjectivity=0.0)\nForHackernews\nSentiment(polarity=0.0, subjectivity=0.0)\npjmlp\nSentiment(polarity=0.0, subjectivity=0.0)\nacqq\nSentiment(polarity=0.0, subjectivity=0.0)\ngovg\nSentiment(polarity=0.0, subjectivity=0.0)\naequitas\nSentiment(polarity=0.0, subjectivity=0.0)\nthu2111\nSentiment(polarity=0.0, subjectivity=0.0)\narunc\nSentiment(polarity=0.0, subjectivity=0.0)\npjmlp\nSentiment(polarity=0.0, subjectivity=0.0)\nsoraminazuki\nSentiment(polarity=0.0, subjectivity=0.0)\ndsfyu404ed\nSentiment(polarity=0.0, 
subjectivity=0.0)\nmFixman\nSentiment(polarity=0.0, subjectivity=0.0)\ndetaro\nSentiment(polarity=0.0, subjectivity=0.0)\ntzatzikaki\nSentiment(polarity=0.0, subjectivity=0.0)\ndanielbarla\nSentiment(polarity=0.0, subjectivity=0.0)\nicebraining\nSentiment(polarity=0.0, subjectivity=0.0)\nJoeAltmaier\nSentiment(polarity=0.0, subjectivity=0.0)\nignoramous\nSentiment(polarity=0.0, subjectivity=0.0)\nundecisive\nSentiment(polarity=0.0, subjectivity=0.0)\nchpmrc\nSentiment(polarity=0.0, subjectivity=0.0)\nKoshkin\nSentiment(polarity=0.0, subjectivity=0.0)\nbtbuildem\nSentiment(polarity=0.0, subjectivity=0.0)\nsershe\nSentiment(polarity=0.0, subjectivity=0.0)\nasicsp\nSentiment(polarity=0.0, subjectivity=0.0)\nrecursive\nSentiment(polarity=0.0, subjectivity=0.0)\nMilnerRoute\nSentiment(polarity=0.0, subjectivity=0.0)\nneovive\nSentiment(polarity=0.0, subjectivity=0.0)\nmikepurvis\nSentiment(polarity=0.0, subjectivity=0.0)\nscrimps\nSentiment(polarity=0.0, subjectivity=0.0)\nrsynnott\nSentiment(polarity=0.0, subjectivity=0.0)\ntabs_masterrace\nSentiment(polarity=0.0, subjectivity=0.0)\nacehreli\nSentiment(polarity=0.0, subjectivity=0.0)\nMilnerRoute\nSentiment(polarity=0.0, subjectivity=0.0)\nCogito\nSentiment(polarity=0.0, subjectivity=0.0)\ncochne\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\ntylermw\nSentiment(polarity=0.0, subjectivity=0.0)\nJCharante\nSentiment(polarity=0.0, subjectivity=0.0)\nohyeahlaws\nSentiment(polarity=0.0, subjectivity=0.0)\npjmlp\nSentiment(polarity=0.0, subjectivity=0.0)\nBenoitEssiambre\nSentiment(polarity=0.0, subjectivity=0.0)\ndontbenebby\nSentiment(polarity=0.0, subjectivity=0.0)\nrchaud\nSentiment(polarity=0.0, subjectivity=0.0)\nReactiveJelly\nSentiment(polarity=0.0, subjectivity=0.0)\nslaythemgods\nSentiment(polarity=0.0, subjectivity=0.0)\nfredmonroe\nSentiment(polarity=0.0, subjectivity=0.0)\nchoward\nSentiment(polarity=0.0, subjectivity=0.0)\nJoeAltmaier\nSentiment(polarity=0.0, 
subjectivity=0.0)\nhnarn\nSentiment(polarity=0.0, subjectivity=0.0)\nCryptoPunk\nSentiment(polarity=0.0, subjectivity=0.0)\nserverQuestion\nSentiment(polarity=0.0, subjectivity=0.0)\nmarcofatica\nSentiment(polarity=0.0, subjectivity=0.0)\nants_a\nSentiment(polarity=0.0, subjectivity=0.0)\nmonocasa\nSentiment(polarity=0.0, subjectivity=0.0)\nmesaframe\nSentiment(polarity=0.0, subjectivity=0.0)\nkranner\nSentiment(polarity=0.0, subjectivity=0.0)\ndhimes\nSentiment(polarity=0.0, subjectivity=0.0)\nnonamenoslogan\nSentiment(polarity=0.0, subjectivity=0.0)\nonion2k\nSentiment(polarity=0.0, subjectivity=0.0)\ntomp\nSentiment(polarity=0.0, subjectivity=0.0)\nneap24\nSentiment(polarity=0.0, subjectivity=0.0)\nignoramous\nSentiment(polarity=0.0, subjectivity=0.0)\njjice\nSentiment(polarity=0.0, subjectivity=0.0)\nCryptoPunk\nSentiment(polarity=0.0, subjectivity=0.0)\nas300\nSentiment(polarity=0.0, subjectivity=0.0)\ngbrown\nSentiment(polarity=0.0, subjectivity=0.0)\nsclevine\nSentiment(polarity=0.0, subjectivity=0.0)\ntraderjane\nSentiment(polarity=0.0, subjectivity=0.0)\nbookofjoe\nSentiment(polarity=0.0, subjectivity=0.0)\nReactiveJelly\nSentiment(polarity=0.0, subjectivity=0.0)\nneutronicus\nSentiment(polarity=0.0, subjectivity=0.0)\nartify\nSentiment(polarity=0.0, subjectivity=0.0)\nupofadown\nSentiment(polarity=0.0, subjectivity=0.0)\nflohofwoe\nSentiment(polarity=0.0, subjectivity=0.0)\nTade0\nSentiment(polarity=0.0, subjectivity=0.0)\namelius\nSentiment(polarity=0.0, subjectivity=0.0)\nmaxharris\nSentiment(polarity=0.0, subjectivity=0.0)\ntucaz\nSentiment(polarity=0.0, subjectivity=0.0)\nwpietri\nSentiment(polarity=0.0, subjectivity=0.0)\nvoldacar\nSentiment(polarity=0.0, subjectivity=0.0)\nnotahacker\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\ngridlockd\nSentiment(polarity=0.0, subjectivity=0.0)\nconmigo\nSentiment(polarity=0.0, subjectivity=0.0)\nreagular\nSentiment(polarity=0.0, 
subjectivity=0.0)\nhuhnmonster\nSentiment(polarity=0.0, subjectivity=0.0)\nkranner\nSentiment(polarity=0.0, subjectivity=0.0)\nmaccard\nSentiment(polarity=0.0, subjectivity=0.0)\nprophesi\nSentiment(polarity=0.0, subjectivity=0.0)\ndennisong\nSentiment(polarity=0.0, subjectivity=0.0)\nG2H\nSentiment(polarity=0.0, subjectivity=0.0)\nasicsp\nSentiment(polarity=0.0, subjectivity=0.0)\nnerdponx\nSentiment(polarity=0.0, subjectivity=0.0)\naequitas\nSentiment(polarity=0.0, subjectivity=0.0)\ncrazygringo\nSentiment(polarity=0.0, subjectivity=0.0)\nanticensor\nSentiment(polarity=0.0, subjectivity=0.0)\ngoda90\nSentiment(polarity=0.0, subjectivity=0.0)\ntom-thistime\nSentiment(polarity=0.0, subjectivity=0.0)\nphoenixdblack\nSentiment(polarity=0.0, subjectivity=0.0)\nignoramous\nSentiment(polarity=0.0, subjectivity=0.0)\nabecedarius\nSentiment(polarity=0.0, subjectivity=0.0)\nornornor\nSentiment(polarity=0.0, subjectivity=0.0)\nMarsymars\nSentiment(polarity=0.0, subjectivity=0.0)\njmiserez\nSentiment(polarity=0.0, subjectivity=0.0)\nmingabunga\nSentiment(polarity=0.0, subjectivity=0.0)\nshaneapen\nSentiment(polarity=0.0, subjectivity=0.0)\nqppo\nSentiment(polarity=0.0, subjectivity=0.0)\ndmitriid\nSentiment(polarity=0.0, subjectivity=0.0)\nscreye\nSentiment(polarity=0.0, subjectivity=0.0)\ntylermw\nSentiment(polarity=0.0, subjectivity=0.0)\nthecureforzits\nSentiment(polarity=0.0, subjectivity=0.0)\nFeepingCreature\nSentiment(polarity=0.0, subjectivity=0.0)\ntopaz0\nSentiment(polarity=0.0, subjectivity=0.0)\nGoblinSlayer\nSentiment(polarity=0.0, subjectivity=0.0)\ncouchand\nSentiment(polarity=0.0, subjectivity=0.0)\nCogitoCogito\nSentiment(polarity=0.0, subjectivity=0.0)\njtvjan\nSentiment(polarity=0.0, subjectivity=0.0)\nmonocasa\nSentiment(polarity=0.0, subjectivity=0.0)\ngrawprog\nSentiment(polarity=0.0, subjectivity=0.0)\nmudita\nSentiment(polarity=0.0, subjectivity=0.0)\nSpicyLemonZest\nSentiment(polarity=0.0, subjectivity=0.0)\nPhlogistique\nSentiment(polarity=0.0, 
subjectivity=0.0)\njonahbenton\nSentiment(polarity=0.0, subjectivity=0.0)\npetercooper\nSentiment(polarity=0.0, subjectivity=0.0)\nWilliamEdward\nSentiment(polarity=0.0, subjectivity=0.0)\nlukeschlather\nSentiment(polarity=0.0, subjectivity=0.0)\nver_ture\nSentiment(polarity=0.0, subjectivity=0.0)\nFeepingCreature\nSentiment(polarity=0.0, subjectivity=0.0)\nnradov\nSentiment(polarity=0.0, subjectivity=0.0)\njmull\nSentiment(polarity=0.0, subjectivity=0.0)\nals0\nSentiment(polarity=0.0, subjectivity=0.0)\nignoramous\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\nCogito\nSentiment(polarity=0.0, subjectivity=0.0)\npetercooper\nSentiment(polarity=0.0, subjectivity=0.0)\nguhsnamih\nSentiment(polarity=0.0, subjectivity=0.0)\nbetterunix2\nSentiment(polarity=0.0, subjectivity=0.0)\nprawn\nSentiment(polarity=0.0, subjectivity=0.0)\nCogitoCogito\nSentiment(polarity=0.0, subjectivity=0.0)\nsherlock_h\nSentiment(polarity=0.0, subjectivity=0.0)\nMaxBarraclough\nSentiment(polarity=0.0, subjectivity=0.0)\nrichardwhiuk\nSentiment(polarity=0.0, subjectivity=0.0)\nkalium-xyz\nSentiment(polarity=0.0, subjectivity=0.0)\namelius\nSentiment(polarity=0.0, subjectivity=0.0)\ncptskippy\nSentiment(polarity=0.0, subjectivity=0.0)\ngilbetron\nSentiment(polarity=0.0, subjectivity=0.0)\nScarblac\nSentiment(polarity=0.0, subjectivity=0.0)\nalistairSH\nSentiment(polarity=0.0, subjectivity=0.0)\nigouy\nSentiment(polarity=0.0, subjectivity=0.0)\n9nGQluzmnq3M\nSentiment(polarity=0.0, subjectivity=0.0)\nScarblac\nSentiment(polarity=0.0, subjectivity=0.0)\nthecureforzits\nSentiment(polarity=0.0, subjectivity=0.0)\nfennecfoxen\nSentiment(polarity=0.0, subjectivity=0.0)\ngindely\nSentiment(polarity=0.0, subjectivity=0.0)\nloup-vaillant\nSentiment(polarity=0.0, subjectivity=0.0)\nmarcofatica\nSentiment(polarity=0.0, subjectivity=0.0)\nsharemywin\nSentiment(polarity=0.0, subjectivity=0.0)\nKlathmon\nSentiment(polarity=0.0, 
subjectivity=0.0)\nCogitoCogito\nSentiment(polarity=0.0, subjectivity=0.0)\ndec0dedab0de\nSentiment(polarity=0.0, subjectivity=0.0)\npharke\nSentiment(polarity=0.0, subjectivity=0.0)\nadrianN\nSentiment(polarity=0.0, subjectivity=0.0)\nsdiq\nSentiment(polarity=0.0, subjectivity=0.0)\ncat199\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\nhenryfjordan\nSentiment(polarity=0.0, subjectivity=0.0)\nOskarS\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\nkaratinversion\nSentiment(polarity=0.0, subjectivity=0.0)\ninterestica\nSentiment(polarity=0.0, subjectivity=0.0)\nlmkg\nSentiment(polarity=0.0, subjectivity=0.0)\nsclevine\nSentiment(polarity=0.0, subjectivity=0.0)\nkaetemi\nSentiment(polarity=0.0, subjectivity=0.0)\nbofadeez\nSentiment(polarity=0.0, subjectivity=0.0)\nbaryphonic\nSentiment(polarity=0.0, subjectivity=0.0)\nkolleykibber\nSentiment(polarity=0.0, subjectivity=0.0)\ncoreyp_1\nSentiment(polarity=0.0, subjectivity=0.0)\nmichaelmior\nSentiment(polarity=0.0, subjectivity=0.0)\ntom-thistime\nSentiment(polarity=0.0, subjectivity=0.0)\nfrogpelt\nSentiment(polarity=0.0, subjectivity=0.0)\ndreamer7\nSentiment(polarity=0.0, subjectivity=0.0)\ndeadalus\nSentiment(polarity=0.0, subjectivity=0.0)\naequitas\nSentiment(polarity=0.0, subjectivity=0.0)\nsdiq\nSentiment(polarity=0.0, subjectivity=0.0)\nsershe\nSentiment(polarity=0.0, subjectivity=0.0)\nHurdy\nSentiment(polarity=0.0, subjectivity=0.0)\nOnuRC\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\nmikorym\nSentiment(polarity=0.0, subjectivity=0.0)\nlern_too_spel\nSentiment(polarity=0.0, subjectivity=0.0)\nkps\nSentiment(polarity=0.0, subjectivity=0.0)\nbluGill\nSentiment(polarity=0.0, subjectivity=0.0)\nrodiger\nSentiment(polarity=0.0, subjectivity=0.0)\nmonkpit\nSentiment(polarity=0.0, subjectivity=0.0)\nkevin_thibedeau\nSentiment(polarity=0.0, 
subjectivity=0.0)\nmywittyname\nSentiment(polarity=0.0, subjectivity=0.0)\npetilon\nSentiment(polarity=0.0, subjectivity=0.0)\nkaratestomp\nSentiment(polarity=0.0, subjectivity=0.0)\nBlaiz0r\nSentiment(polarity=0.0, subjectivity=0.0)\np0nce\nSentiment(polarity=0.0, subjectivity=0.0)\nJoeAltmaier\nSentiment(polarity=0.0, subjectivity=0.0)\nScarblac\nSentiment(polarity=0.0, subjectivity=0.0)\ngentleman11\nSentiment(polarity=0.0, subjectivity=0.0)\nragebol\nSentiment(polarity=0.0, subjectivity=0.0)\nnkurz\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\nsaalweachter\nSentiment(polarity=0.0, subjectivity=0.0)\nadrianN\nSentiment(polarity=0.0, subjectivity=0.0)\nbofadeez\nSentiment(polarity=0.0, subjectivity=0.0)\nrobertakarobin\nSentiment(polarity=0.0, subjectivity=0.0)\np0nce\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\nmikorym\nSentiment(polarity=0.0, subjectivity=0.0)\nSpicyLemonZest\nSentiment(polarity=0.0, subjectivity=0.0)\ntinco\nSentiment(polarity=0.0, subjectivity=0.0)\nnew2628\nSentiment(polarity=0.0, subjectivity=0.0)\nshrikant\nSentiment(polarity=0.0, subjectivity=0.0)\nTraster\nSentiment(polarity=0.0, subjectivity=0.0)\ndirtydroog\nSentiment(polarity=0.0, subjectivity=0.0)\npetercooper\nSentiment(polarity=0.0, subjectivity=0.0)\nrtkwe\nSentiment(polarity=0.0, subjectivity=0.0)\nrpiguy\nSentiment(polarity=0.0, subjectivity=0.0)\nstronglikedan\nSentiment(polarity=0.0, subjectivity=0.0)\nloup-vaillant\nSentiment(polarity=0.0, subjectivity=0.0)\nWalterBAmaQ\nSentiment(polarity=0.0, subjectivity=0.0)\nScarblac\nSentiment(polarity=0.0, subjectivity=0.0)\nnicklafferty\nSentiment(polarity=0.0, subjectivity=0.0)\ndeogeo\nSentiment(polarity=0.0, subjectivity=0.0)\nsmlckz\nSentiment(polarity=0.0, subjectivity=0.0)\neliseumds\nSentiment(polarity=0.0, subjectivity=0.0)\nghshephard\nSentiment(polarity=0.0, subjectivity=0.0)\ncredit_guy\nSentiment(polarity=0.0, 
subjectivity=0.0)\nderefr\nSentiment(polarity=0.0, subjectivity=0.0)\nyodon\nSentiment(polarity=0.0, subjectivity=0.0)\nbanannaise\nSentiment(polarity=0.0, subjectivity=0.0)\nartemonster\nSentiment(polarity=0.0, subjectivity=0.0)\nlern_too_spel\nSentiment(polarity=0.0, subjectivity=0.0)\naequitas\nSentiment(polarity=0.0, subjectivity=0.0)\nTraubenfuchs\nSentiment(polarity=0.0, subjectivity=0.0)\nBlaiz0r\nSentiment(polarity=0.0, subjectivity=0.0)\nrozab\nSentiment(polarity=0.0, subjectivity=0.0)\nbagacrap\nSentiment(polarity=0.0, subjectivity=0.0)\nmonkpit\nSentiment(polarity=0.0, subjectivity=0.0)\nqqssccfftt\nSentiment(polarity=0.0, subjectivity=0.0)\nneilv\nSentiment(polarity=0.0, subjectivity=0.0)\nderefr\nSentiment(polarity=0.0, subjectivity=0.0)\nginko\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\nehnto\nSentiment(polarity=0.0, subjectivity=0.0)\nbagacrap\nSentiment(polarity=0.0, subjectivity=0.0)\nwar1025\nSentiment(polarity=0.0, subjectivity=0.0)\njodrellblank\nSentiment(polarity=0.0, subjectivity=0.0)\nkelw22\nSentiment(polarity=0.0, subjectivity=0.0)\namelius\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\nnieksand\nSentiment(polarity=0.0, subjectivity=0.0)\nadreamingsoul\nSentiment(polarity=0.0, subjectivity=0.0)\nsethammons\nSentiment(polarity=0.0, subjectivity=0.0)\nbarbegal\nSentiment(polarity=0.0, subjectivity=0.0)\njefftk\nSentiment(polarity=0.0, subjectivity=0.0)\ndetaro\nSentiment(polarity=0.0, subjectivity=0.0)\nnew2628\nSentiment(polarity=0.0, subjectivity=0.0)\n1-more\nSentiment(polarity=0.0, subjectivity=0.0)\nPxtl\nSentiment(polarity=0.0, subjectivity=0.0)\ndfox\nSentiment(polarity=0.0, subjectivity=0.0)\nBlaiz0r\nSentiment(polarity=0.0, subjectivity=0.0)\notoburb\nSentiment(polarity=0.0, subjectivity=0.0)\nHellMood\nSentiment(polarity=0.0, subjectivity=0.0)\nkamarg\nSentiment(polarity=0.0, subjectivity=0.0)\nwolfgke\nSentiment(polarity=0.0, 
subjectivity=0.0)\nasveikau\nSentiment(polarity=0.0, subjectivity=0.0)\nsnazz\nSentiment(polarity=0.0, subjectivity=0.0)\nWildGreenLeave\nSentiment(polarity=0.0, subjectivity=0.0)\ncrazygringo\nSentiment(polarity=0.0, subjectivity=0.0)\ncommandlinefan\nSentiment(polarity=0.0, subjectivity=0.0)\np0nce\nSentiment(polarity=0.0, subjectivity=0.0)\nadwi\nSentiment(polarity=0.0, subjectivity=0.0)\nyibg\nSentiment(polarity=0.0, subjectivity=0.0)\njasonjayr\nSentiment(polarity=0.0, subjectivity=0.0)\nseyz\nSentiment(polarity=0.0, subjectivity=0.0)\npaulcole\nSentiment(polarity=0.0, subjectivity=0.0)\nwar1025\nSentiment(polarity=0.0, subjectivity=0.0)\nHellMood\nSentiment(polarity=0.0, subjectivity=0.0)\np0nce\nSentiment(polarity=0.0, subjectivity=0.0)\nzozbot234\nSentiment(polarity=0.0, subjectivity=0.0)\nlou1306\nSentiment(polarity=0.0, subjectivity=0.0)\ndwheeler\nSentiment(polarity=0.0, subjectivity=0.0)\nkazinator\nSentiment(polarity=0.0, subjectivity=0.0)\nnxc18\nSentiment(polarity=0.0, subjectivity=0.0)\nsaalweachter\nSentiment(polarity=0.0, subjectivity=0.0)\nak217\nSentiment(polarity=0.0, subjectivity=0.0)\ngregoriol\nSentiment(polarity=0.0, subjectivity=0.0)\nDuskStar\nSentiment(polarity=0.0, subjectivity=0.0)\napi\nSentiment(polarity=0.0, subjectivity=0.0)\njoelbluminator\nSentiment(polarity=0.0, subjectivity=0.0)\nOskarS\nSentiment(polarity=0.0, subjectivity=0.0)\nwhitten\nSentiment(polarity=0.0, subjectivity=0.0)\naryx\nSentiment(polarity=0.0, subjectivity=0.0)\ngregoriol\nSentiment(polarity=0.0, subjectivity=0.0)\nHellMood\nSentiment(polarity=0.0, subjectivity=0.0)\ngdubs\nSentiment(polarity=0.0, subjectivity=0.0)\nmikestew\nSentiment(polarity=0.0, subjectivity=0.0)\ngspr\nSentiment(polarity=0.0, subjectivity=0.0)\nuntog\nSentiment(polarity=0.0, subjectivity=0.0)\ndetaro\nSentiment(polarity=0.0, subjectivity=0.0)\ndom96\nSentiment(polarity=0.0, subjectivity=0.0)\nroosterdawn\nSentiment(polarity=0.0, subjectivity=0.0)\ngregoriol\nSentiment(polarity=0.0, 
subjectivity=0.0)\ngazoakley\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\natilaneves\nSentiment(polarity=0.0, subjectivity=0.0)\nutdiscant\nSentiment(polarity=0.0, subjectivity=0.0)\nArlenBales\nSentiment(polarity=0.0, subjectivity=0.0)\ndownrightmike\nSentiment(polarity=0.0, subjectivity=0.0)\nwhitten\nSentiment(polarity=0.0, subjectivity=0.0)\nHellMood\nSentiment(polarity=0.0, subjectivity=0.0)\ntom-thistime\nSentiment(polarity=0.0, subjectivity=0.0)\nvorpalhex\nSentiment(polarity=0.0, subjectivity=0.0)\ndahart\nSentiment(polarity=0.0, subjectivity=0.0)\nSketchySeaBeast\nSentiment(polarity=0.0, subjectivity=0.0)\ngspr\nSentiment(polarity=0.0, subjectivity=0.0)\nsershe\nSentiment(polarity=0.0, subjectivity=0.0)\nqqssccfftt\nSentiment(polarity=0.0, subjectivity=0.0)\nSpicyLemonZest\nSentiment(polarity=0.0, subjectivity=0.0)\nken\nSentiment(polarity=0.0, subjectivity=0.0)\nHongwei\nSentiment(polarity=0.0, subjectivity=0.0)\nqqssccfftt\nSentiment(polarity=0.0, subjectivity=0.0)\ndirtydroog\nSentiment(polarity=0.0, subjectivity=0.0)\necmascript\nSentiment(polarity=0.0, subjectivity=0.0)\nTsiklon\nSentiment(polarity=0.0, subjectivity=0.0)\natilaneves\nSentiment(polarity=0.0, subjectivity=0.0)\nPretzelFisch\nSentiment(polarity=0.0, subjectivity=0.0)\ninterestica\nSentiment(polarity=0.0, subjectivity=0.0)\ngdubs\nSentiment(polarity=0.0, subjectivity=0.0)\nHellMood\nSentiment(polarity=0.0, subjectivity=0.0)\ng00s3_caLL_x2\nSentiment(polarity=0.0, subjectivity=0.0)\nz-cam\nSentiment(polarity=0.0, subjectivity=0.0)\nkeiferski\nSentiment(polarity=0.0, subjectivity=0.0)\nbofadeez\nSentiment(polarity=0.0, subjectivity=0.0)\njacobush\nSentiment(polarity=0.0, subjectivity=0.0)\noutime\nSentiment(polarity=0.0, subjectivity=0.0)\nchrstphrhrt\nSentiment(polarity=0.0, subjectivity=0.0)\nrunjake\nSentiment(polarity=0.0, subjectivity=0.0)\nOskarS\nSentiment(polarity=0.0, subjectivity=0.0)\npatrickmcmanus\nSentiment(polarity=0.0, 
subjectivity=0.0)\nbeckingz\nSentiment(polarity=0.0, subjectivity=0.0)\nping_pong\nSentiment(polarity=0.0, subjectivity=0.0)\nttonkytonk\nSentiment(polarity=0.0, subjectivity=0.0)\nyjftsjthsd-h\nSentiment(polarity=0.0, subjectivity=0.0)\ndarkhorn\nSentiment(polarity=0.0, subjectivity=0.0)\nenitihas\nSentiment(polarity=0.0, subjectivity=0.0)\ntylermw\nSentiment(polarity=0.0, subjectivity=0.0)\nbediger4000\nSentiment(polarity=0.0, subjectivity=0.0)\nashtonkem\nSentiment(polarity=0.0, subjectivity=0.0)\nderefr\nSentiment(polarity=0.0, subjectivity=0.0)\ngspr\nSentiment(polarity=0.0, subjectivity=0.0)\nhyperpallium\nSentiment(polarity=0.0, subjectivity=0.0)\nefdee\nSentiment(polarity=0.0, subjectivity=0.0)\noysteroyster\nSentiment(polarity=0.0, subjectivity=0.0)\njacquesm\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\nreacweb\nSentiment(polarity=0.0, subjectivity=0.0)\nwowsig\nSentiment(polarity=0.0, subjectivity=0.0)\nchrstphrhrt\nSentiment(polarity=0.0, subjectivity=0.0)\n4444\nSentiment(polarity=0.0, subjectivity=0.0)\nArunNair\nSentiment(polarity=0.0, subjectivity=0.0)\namelius\nSentiment(polarity=0.0, subjectivity=0.0)\njamil7\nSentiment(polarity=0.0, subjectivity=0.0)\nIgelau\nSentiment(polarity=0.0, subjectivity=0.0)\nsigacts\nSentiment(polarity=0.0, subjectivity=0.0)\nPick-A-Hill2019\nSentiment(polarity=0.0, subjectivity=0.0)\n4444\nSentiment(polarity=0.0, subjectivity=0.0)\nhckr_news\nSentiment(polarity=0.0, subjectivity=0.0)\nCogitoCogito\nSentiment(polarity=0.0, subjectivity=0.0)\nthebean11\nSentiment(polarity=0.0, subjectivity=0.0)\nashtonkem\nSentiment(polarity=0.0, subjectivity=0.0)\ninterestica\nSentiment(polarity=0.0, subjectivity=0.0)\nmarcosdumay\nSentiment(polarity=0.0, subjectivity=0.0)\nddevault\nSentiment(polarity=0.0, subjectivity=0.0)\nTomGullen\nSentiment(polarity=0.0, subjectivity=0.0)\nRandomBacon\nSentiment(polarity=0.0, subjectivity=0.0)\nmikestew\nSentiment(polarity=0.0, 
subjectivity=0.0)\nbogomipz\nSentiment(polarity=0.0, subjectivity=0.0)\nvidarh\nSentiment(polarity=0.0, subjectivity=0.0)\nRandomBacon\nSentiment(polarity=0.0, subjectivity=0.0)\nXCSme\nSentiment(polarity=0.0, subjectivity=0.0)\nCivBase\nSentiment(polarity=0.0, subjectivity=0.0)\nstandardUser\nSentiment(polarity=0.0, subjectivity=0.0)\njriot\nSentiment(polarity=0.0, subjectivity=0.0)\nbognition\nSentiment(polarity=0.0, subjectivity=0.0)\nSketchySeaBeast\nSentiment(polarity=0.0, subjectivity=0.0)\nknown\nSentiment(polarity=0.0, subjectivity=0.0)\nbediger4000\nSentiment(polarity=0.0, subjectivity=0.0)\nanarchop\nSentiment(polarity=0.0, subjectivity=0.0)\nentropicdrifter\nSentiment(polarity=0.0, subjectivity=0.0)\ndidgeoridoo\nSentiment(polarity=0.0, subjectivity=0.0)\npacala\nSentiment(polarity=0.0, subjectivity=0.0)\nGrue3\nSentiment(polarity=0.0, subjectivity=0.0)\nRandomBacon\nSentiment(polarity=0.0, subjectivity=0.0)\nself_awareness\nSentiment(polarity=0.0, subjectivity=0.0)\nkyuudou\nSentiment(polarity=0.0, subjectivity=0.0)\nwernercd\nSentiment(polarity=0.0, subjectivity=0.0)\nceejayoz\nSentiment(polarity=0.0, subjectivity=0.0)\nsatya71\nSentiment(polarity=0.0, subjectivity=0.0)\ntom-thistime\nSentiment(polarity=0.0, subjectivity=0.0)\nrudolph9\nSentiment(polarity=0.0, subjectivity=0.0)\nmoftz\nSentiment(polarity=0.0, subjectivity=0.0)\npaulgb\nSentiment(polarity=0.0, subjectivity=0.0)\nderefr\nSentiment(polarity=0.0, subjectivity=0.0)\npiva00\nSentiment(polarity=0.0, subjectivity=0.0)\nalyandon\nSentiment(polarity=0.0, subjectivity=0.0)\npstuart\nSentiment(polarity=0.0, subjectivity=0.0)\nidasofiea\nSentiment(polarity=0.0, subjectivity=0.0)\nmanishsharan\nSentiment(polarity=0.0, subjectivity=0.0)\ndanharaj\nSentiment(polarity=0.0, subjectivity=0.0)\ndahart\nSentiment(polarity=0.0, subjectivity=0.0)\nwhoisjuan\nSentiment(polarity=0.0, subjectivity=0.0)\nratww\nSentiment(polarity=0.0, subjectivity=0.0)\nHugoDaniel\nSentiment(polarity=0.0, 
subjectivity=0.0)\nmarcosdumay\nSentiment(polarity=0.0, subjectivity=0.0)\nutdiscant\nSentiment(polarity=0.0, subjectivity=0.0)\nbsg75\nSentiment(polarity=0.0, subjectivity=0.0)\naaron695\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\njohnhattan\nSentiment(polarity=0.0, subjectivity=0.0)\nken\nSentiment(polarity=0.0, subjectivity=0.0)\ngrowlist\nSentiment(polarity=0.0, subjectivity=0.0)\nRandomBacon\nSentiment(polarity=0.0, subjectivity=0.0)\ntomtomtom1\nSentiment(polarity=0.0, subjectivity=0.0)\napocalypstyx\nSentiment(polarity=0.0, subjectivity=0.0)\nbogomipz\nSentiment(polarity=0.0, subjectivity=0.0)\npiva00\nSentiment(polarity=0.0, subjectivity=0.0)\ngspr\nSentiment(polarity=0.0, subjectivity=0.0)\nhilbertseries\nSentiment(polarity=0.0, subjectivity=0.0)\nentropicdrifter\nSentiment(polarity=0.0, subjectivity=0.0)\nhwh\nSentiment(polarity=0.0, subjectivity=0.0)\nAuryGlenz\nSentiment(polarity=0.0, subjectivity=0.0)\npatrickmcmanus\nSentiment(polarity=0.0, subjectivity=0.0)\njudge2020\nSentiment(polarity=0.0, subjectivity=0.0)\n120bits\nSentiment(polarity=0.0, subjectivity=0.0)\nkyuudou\nSentiment(polarity=0.0, subjectivity=0.0)\nFrojoS\nSentiment(polarity=0.0, subjectivity=0.0)\nColinWright\nSentiment(polarity=0.0, subjectivity=0.0)\nideal0227\nSentiment(polarity=0.0, subjectivity=0.0)\nwhalesalad\nSentiment(polarity=0.0, subjectivity=0.0)\npiva00\nSentiment(polarity=0.0, subjectivity=0.0)\ngrowlist\nSentiment(polarity=0.0, subjectivity=0.0)\nTagbert\nSentiment(polarity=0.0, subjectivity=0.0)\ndogma1138\nSentiment(polarity=0.0, subjectivity=0.0)\ndanharaj\nSentiment(polarity=0.0, subjectivity=0.0)\ntomtomtom1\nSentiment(polarity=0.0, subjectivity=0.0)\nBtM909\nSentiment(polarity=0.0, subjectivity=0.0)\nmattbgates\nSentiment(polarity=0.0, subjectivity=0.0)\nshriek\nSentiment(polarity=0.0, subjectivity=0.0)\ndetaro\nSentiment(polarity=0.0, subjectivity=0.0)\nandrewla\nSentiment(polarity=0.0, 
subjectivity=0.0)\nmoftz\nSentiment(polarity=0.0, subjectivity=0.0)\nsli\nSentiment(polarity=0.0, subjectivity=0.0)\nOkx\nSentiment(polarity=0.0, subjectivity=0.0)\n0xCMP\nSentiment(polarity=0.0, subjectivity=0.0)\nhedora\nSentiment(polarity=0.0, subjectivity=0.0)\nsmlckz\nSentiment(polarity=0.0, subjectivity=0.0)\nfoobarian\nSentiment(polarity=0.0, subjectivity=0.0)\nj0hnM1st\nSentiment(polarity=0.0, subjectivity=0.0)\nOskarS\nSentiment(polarity=0.0, subjectivity=0.0)\ncultofmetatron\nSentiment(polarity=0.0, subjectivity=0.0)\nkawzeg\nSentiment(polarity=0.0, subjectivity=0.0)\na3n\nSentiment(polarity=0.0, subjectivity=0.0)\nphonypc\nSentiment(polarity=0.0, subjectivity=0.0)\nColinWright\nSentiment(polarity=0.0, subjectivity=0.0)\njustinmeiners\nSentiment(polarity=0.0, subjectivity=0.0)\nchoward\nSentiment(polarity=0.0, subjectivity=0.0)\nKerryJones\nSentiment(polarity=0.0, subjectivity=0.0)\nCogitoCogito\nSentiment(polarity=0.0, subjectivity=0.0)\nSharlin\nSentiment(polarity=0.0, subjectivity=0.0)\nFalling3\nSentiment(polarity=0.0, subjectivity=0.0)\nashtonkem\nSentiment(polarity=0.0, subjectivity=0.0)\ndiogenescynic\nSentiment(polarity=0.0, subjectivity=0.0)\nAuryGlenz\nSentiment(polarity=0.0, subjectivity=0.0)\nj0hnM1st\nSentiment(polarity=0.0, subjectivity=0.0)\nsorokod\nSentiment(polarity=0.0, subjectivity=0.0)\nkevstev\nSentiment(polarity=0.0, subjectivity=0.0)\nfxleach\nSentiment(polarity=0.0, subjectivity=0.0)\ntom-thistime\nSentiment(polarity=0.0, subjectivity=0.0)\nwodenokoto\nSentiment(polarity=0.0, subjectivity=0.0)\nmindslight\nSentiment(polarity=0.0, subjectivity=0.0)\nsolotronics\nSentiment(polarity=0.0, subjectivity=0.0)\nstronglikedan\nSentiment(polarity=0.0, subjectivity=0.0)\nfnord123\nSentiment(polarity=0.0, subjectivity=0.0)\nnotechback\nSentiment(polarity=0.0, subjectivity=0.0)\n\nSentiment(polarity=0.0, subjectivity=0.0)\nnaravara\nSentiment(polarity=0.0, subjectivity=0.0)\noaiey\nSentiment(polarity=0.0, 
subjectivity=0.0)\n[... several hundred additional username / Sentiment(polarity=0.0, subjectivity=0.0) pairs, all neutral, truncated ...]\n"
],
[
"#create a textblob object\nobj = TextBlob(article)\n\n#Returns a value between -1 and 1\nsentiment = obj.sentiment.polarity\nprint(sentiment)",
"0.0\n"
],
[
"#TextBlob\nif sentiment == 0:\n print('The text is neutral')\nelif sentiment > 0:\n print('The text is positive')\nelse:\n print('The Text is negative')",
"The text is neutral\n"
]
],
[
[
"# CONCLUSION",
"_____no_output_____"
]
],
[
[
"# Percentage of values in a column, by category\ndf['final_pred'].value_counts(normalize=True) * 100",
"_____no_output_____"
]
],
[
[
"# COMPARE TEXTBLOB AND VADER SENTIMENT ANALYSIS\n\nWe can conclude that both models can show whether the text or article is, in general, neutral, positive, or negative.\n\nIn this case, it is confirmed that the majority of the words are neutral under both models.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5ae24736b7379bd3e73b852068aa28c80f8279
| 143,756 |
ipynb
|
Jupyter Notebook
|
bayesian-stats-modelling-tutorial/notebooks/01a-student-probability-simulation.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | 2 |
2021-01-09T15:57:26.000Z
|
2021-11-29T01:44:21.000Z
|
bayesian-stats-modelling-tutorial/notebooks/01a-student-probability-simulation.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | 5 |
2019-11-15T02:00:26.000Z
|
2021-01-06T04:26:40.000Z
|
bayesian-stats-modelling-tutorial/notebooks/01a-student-probability-simulation.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | null | null | null | 118.220395 | 21,572 | 0.865946 |
[
[
[
"# What is probability? A simulated introduction",
"_____no_output_____"
]
],
[
[
"#Import packages\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\nsns.set()",
"_____no_output_____"
]
],
[
[
"## Learning Objectives of Part 1",
"_____no_output_____"
],
[
"- To have an understanding of what \"probability\" means, in both Bayesian and Frequentist terms;\n- To be able to simulate probability distributions that model real-world phenomena;\n- To understand how probability distributions relate to data-generating **stories**.",
"_____no_output_____"
],
[
"## Probability",
"_____no_output_____"
],
[
"> To the pioneers such as Bernoulli, Bayes and Laplace, a probability represented a _degree-of-belief_ or plausibility; how much they thought that something was true, based on the evidence at hand. To the 19th century scholars, however, this seemed too vague and subjective an idea to be the basis of a rigorous mathematical theory. So they redefined probability as the _long-run relative frequency_ with which an event occurred, given (infinitely) many repeated (experimental) trials. Since frequencies can be measured, probability was now seen as an objective tool for dealing with _random_ phenomena.\n\n-- _Data Analysis, A Bayesian Tutorial_, Sivia & Skilling (p. 9)",
"_____no_output_____"
],
[
"What type of random phenomena are we talking about here? One example is:\n\n- Knowing that a website has a click-through rate (CTR) of 10%, we can calculate the probability of having 10 people, 9 people, 8 people ... and so on click through, upon drawing 10 people randomly from the population;\n- But given the data of how many people click through, how can we calculate the CTR? And how certain can we be of this CTR? Or how likely is a particular CTR?\n\nScience mostly asks questions of the second form above & Bayesian thinking provides a wonderful framework for answering such questions. Essentially Bayes' Theorem gives us a way of moving from the probability of the data given the model (written as $P(data|model)$) to the probability of the model given the data ($P(model|data)$).\n\nWe'll first explore questions of the 1st type using simulation: knowing the model, what is the probability of seeing certain data?",
"_____no_output_____"
],
[
"## Simulating probabilities",
"_____no_output_____"
],
[
"* Let's say that a website has a CTR of 50%, i.e. that 50% of people click through. If we picked 1000 people at random from the population, how likely would it be to find that a certain number of people click?\n\nWe can simulate this using `numpy`'s random number generator.\n\nTo do so, first note that we can use `np.random.rand()` to randomly select floats between 0 and 1 (known as the _uniform distribution_). Below, we do so and plot a histogram:",
"_____no_output_____"
]
],
[
[
"# Draw 1,000 samples from uniform & plot results\nx = np.random.rand(1000)\nplt.hist(x, bins=20);",
"_____no_output_____"
]
],
[
[
"To then simulate the sampling from the population, we check whether each float was greater or less than 0.5. If less than or equal to 0.5, we say the person clicked.",
"_____no_output_____"
]
],
[
[
"# Compute how many people clicked\nclicks = x <= 0.5\nn_clicks = clicks.sum()\nf\"Number of clicks = {n_clicks}\"",
"_____no_output_____"
]
],
[
[
"The proportion of people who clicked can be calculated as the total number of clicks over the number of people:",
"_____no_output_____"
]
],
[
[
"# Compute proportion of people who clicked\nf\"Proportion who clicked = {n_clicks/len(clicks)}\"",
"_____no_output_____"
]
],
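The first kind of question from the introduction (knowing the model, how likely is each possible outcome?) can be sketched with the same uniform-sampling trick. A minimal sketch for the 10% CTR example mentioned earlier; the repetition count and seed here are illustrative choices, not part of the original exercise:

```python
import numpy as np

np.random.seed(42)

# 100,000 repetitions of drawing 10 people at random with CTR = 0.1:
# a person clicks if their uniform draw is <= 0.1
n_people, ctr, n_reps = 10, 0.1, 100_000
n_clicks = (np.random.rand(n_reps, n_people) <= ctr).sum(axis=1)

# Estimated probability of each possible number of clicks
for k in range(n_people + 1):
    print(f"P({k} clicks) ~ {(n_clicks == k).mean():.4f}")
```

With a CTR of 0.1, zero or one clicks among ten people dominate, which matches the Binomial intuition developed later in the notebook.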
[
[
"**Discussion**: Did you get the same answer as your neighbour? If you did, why? If not, why not?",
"_____no_output_____"
],
[
"**Up for discussion:** Let's say that all you had was this data and you wanted to figure out the CTR (probability of clicking). \n\n* What would your estimate be?\n* Bonus points: how confident would you be of your estimate?",
"_____no_output_____"
],
[
"**Note:** Although, in the above, we have described _probability_ in two ways, we have not described it mathematically. We're not going to do so rigorously here, but we will say that _probability_ defines a function from the space of possibilities (in the above, the interval $[0,1]$) that describes how likely it is to get a particular point or region in that space. Mike Betancourt has an elegant [Introduction to Probability Theory (For Scientists and Engineers)](https://betanalpha.github.io/assets/case_studies/probability_theory.html) that I can recommend.",
"_____no_output_____"
],
[
"### Hands-on: clicking",
"_____no_output_____"
],
[
"Use random sampling to simulate how many people click when the CTR is 0.7. How many click? What proportion?",
"_____no_output_____"
]
],
[
[
"# Solution\nclicks = x <= 0.7\nn_clicks = clicks.sum()\nprint(f\"Number of clicks = {n_clicks}\")\nprint(f\"Proportion who clicked = {n_clicks/len(clicks)}\")",
"Number of clicks = 700\nProportion who clicked = 0.7\n"
]
],
[
[
"_Discussion point_: This model is known as the biased coin flip. \n- Can you see why?\n- Can it be used to model other phenomena?",
"_____no_output_____"
],
[
"### Galapagos finch beaks",
"_____no_output_____"
],
[
"You can also calculate such proportions with real-world data. Here we import a dataset of Finch beak measurements from the Galápagos islands. You can find the data [here](https://datadryad.org/resource/doi:10.5061/dryad.9gh90).",
"_____no_output_____"
]
],
[
[
"# Import and view head of data\ndf_12 = pd.read_csv('../data/finch_beaks_2012.csv')\ndf_12.head()",
"_____no_output_____"
],
[
"# Store lengths in a pandas series\nlengths = df_12['blength']",
"_____no_output_____"
]
],
[
[
"* What proportion of birds have a beak length > 10?",
"_____no_output_____"
]
],
[
[
"p = sum(lengths > 10) / len(lengths)\np",
"_____no_output_____"
]
],
[
[
"**Note:** This is the proportion of birds that have beak length $>10$ in your empirical data, not the probability that any bird drawn from the population will have beak length $>10$.",
"_____no_output_____"
],
[
"### Proportion: A proxy for probability\n\nAs stated above, we have calculated a proportion, not a probability. As a proxy for the probability, we can simulate drawing random samples (with replacement) from the data seeing how many lengths are > 10 and calculating the proportion (commonly referred to as [hacker statistics](https://speakerdeck.com/jakevdp/statistics-for-hackers)):",
"_____no_output_____"
]
],
[
[
"n_samples = 10000\nsum(np.random.choice(lengths, n_samples, replace=True) > 10) / n_samples",
"_____no_output_____"
]
],
[
[
"### Another way to simulate coin-flips",
"_____no_output_____"
],
[
"In the above, you have used the uniform distribution to sample from a series of biased coin flips. I want to introduce you to another distribution that you can also use to do so: the **binomial distribution**.\n\nThe **binomial distribution** with parameters $n$ and $p$ is defined as the probability distribution of\n\n> the number of heads seen when flipping a coin $n$ times when with $p(heads)=p$.",
"_____no_output_____"
],
[
"**Note** that this distribution essentially tells the **story** of a general model in the following sense: if we believe that the underlying process generating the observed data has a binary outcome (affected by disease or not, head or not, 0 or 1, clicked through or not), and that one of the two outcomes occurs with probability $p$, then the probability of seeing a particular outcome is given by the **binomial distribution** with parameters $n$ and $p$.",
"_____no_output_____"
],
[
"Any process that matches the coin flip story is a Binomial process (note that you'll see such coin flips also referred to as Bernoulli trials in the literature). So we can also formulate the story of the Binomial distribution as\n\n> the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success, is Binomially distributed. ",
"_____no_output_____"
],
[
"We'll now use the binomial distribution to answer the same question as above:\n* If P(heads) = 0.7 and you flip the coin ten times, how many heads will come up?\n\nWe'll also set the seed to ensure reproducible results.",
"_____no_output_____"
]
],
[
[
"# Set seed\nnp.random.seed(42)",
"_____no_output_____"
],
[
"# Simulate one run of flipping the biased coin 10 times\nnp.random.binomial(10,0.7)",
"_____no_output_____"
]
],
[
[
"### Simulating many times to get the distribution\n\nIn the above, we have simulated the scenario once. But this only tells us one potential outcome. To see how likely it is to get $n$ heads, for example, we need to simulate it a lot of times and check what proportion ended up with $n$ heads.",
"_____no_output_____"
]
],
[
[
"# Simulate 10,000 runs of flipping the biased coin 10 times\nx = np.random.binomial(10, 0.7, size=10_000)\n\n# Plot normalized histogram of results\nplt.hist(x, density=True, bins=10);",
"_____no_output_____"
]
],
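As a sanity check on the simulation above, the exact Binomial probabilities can be computed from the formula $P(r) = \binom{n}{r}p^r(1-p)^{n-r}$ using only the standard library (`math.comb`, Python 3.8+). This side-by-side comparison is a sketch added here, not part of the original exercise flow:

```python
from math import comb

import numpy as np

np.random.seed(42)

# Simulate as before, then compare against the exact PMF
n, p = 10, 0.7
x = np.random.binomial(n, p, size=10_000)

# Exact Binomial PMF: P(r) = C(n, r) * p^r * (1 - p)^(n - r)
for r in range(n + 1):
    exact = comb(n, r) * p**r * (1 - p) ** (n - r)
    simulated = (x == r).mean()
    print(f"r = {r:2d}   exact = {exact:.4f}   simulated = {simulated:.4f}")
```

The simulated proportions should track the exact values closely, with the agreement improving as the number of runs grows.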
[
[
"* Group chat: what do you see in the above?",
"_____no_output_____"
],
[
"### Hands-on: Probabilities",
"_____no_output_____"
],
[
"- If I flip a biased coin ($P(H)=0.3$) 20 times, what is the probability of 5 or more heads?",
"_____no_output_____"
]
],
[
[
"# Calculate the probability of 5 or more heads for p=0.3\nsum(np.random.binomial(20, 0.3, 10_000) >= 5) / 10_000",
"_____no_output_____"
]
],
[
[
"- If I flip a fair coin 20 times, what is the probability of 5 or more heads?",
"_____no_output_____"
]
],
[
[
"# Calculate the probability of 5 or more heads for p=0.5\nsum(np.random.binomial(20, 0.5, 10_000) >= 5) / 10_000",
"_____no_output_____"
]
],
[
[
"- Plot the normalized histogram of number of heads of the following experiment: flipping a fair coin 10 times.",
"_____no_output_____"
]
],
[
[
"# Plot normalized histogram\nx = np.random.binomial(10, 0.5, 10_000)\nplt.hist(x, density=True);",
"_____no_output_____"
]
],
[
[
"**Note:** you may have noticed that the _binomial distribution_ can take on only a finite number of values, whereas the _uniform distribution_ above can take on any number between $0$ and $1$. These are different enough cases to warrant special mention of this & two different names: the former is called a _probability mass function_ (PMF) and the latter a _probability density function_ (PDF). Time permitting, we may discuss some of the subtleties here. If not, all good texts will cover this. I like (Sivia & Skilling, 2006), among many others.\n",
"_____no_output_____"
],
[
"**Question:** \n* Looking at the histogram, can you tell me the probability of seeing 4 or more heads?",
"_____no_output_____"
],
[
"Enter the ECDF.",
"_____no_output_____"
],
[
"## Empirical cumulative distribution functions (ECDFs)",
"_____no_output_____"
],
[
"An ECDF is, as an alternative to a histogram, a way to visualize univariate data that is rich in information. It allows you to visualize all of your data and, by doing so, avoids the very real problem of binning.\n- can plot control plus experiment\n- data plus model!\n- many populations\n- can see multimodality (though less pronounced) -- a mode becomes a point of inflexion!\n- can read off so much: e.g. percentiles.\n\nSee Eric Ma's great post on ECDFs [here](https://ericmjl.github.io/blog/2018/7/14/ecdfs/) and [this twitter thread](https://twitter.com/allendowney/status/1019171696572583936) (thanks, Allen Downey!).\n\nSo what is this ECDF? \n\n**Definition:** In an ECDF, the x-axis is the range of possible values for the data & for any given x-value, the corresponding y-value is the proportion of data points less than or equal to that x-value.",
"_____no_output_____"
],
[
"Let's define a handy ECDF function that takes in data and outputs $x$ and $y$ data for the ECDF.",
"_____no_output_____"
]
],
[
[
"def ecdf(data):\n \"\"\"Compute ECDF for a one-dimensional array of measurements.\"\"\"\n # Number of data points\n n = len(data)\n\n # x-data for the ECDF\n x = np.sort(data)\n\n # y-data for the ECDF\n y = np.arange(1, n+1) / n\n\n return x, y",
"_____no_output_____"
]
],
[
[
"### Hands-on: Plotting ECDFs",
"_____no_output_____"
],
[
"Plot the ECDF for the previous hands-on exercise. Read the answer to the following question off the ECDF: what is the probability of seeing 4 or more heads?",
"_____no_output_____"
]
],
[
[
"# Generate x- and y-data for the ECDF\nx_flips, y_flips = ecdf(x)\n\n# Plot the ECDF\nplt.plot(x_flips, y_flips, marker=\".\")",
"_____no_output_____"
]
],
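Rather than reading the answer off the plot by eye, it can also be pulled from the ECDF arrays directly. A standalone sketch (the helper and the fair-coin data are reconstructed here so the cell runs on its own):

```python
import numpy as np

def ecdf(data):
    """Compute ECDF for a one-dimensional array of measurements."""
    x = np.sort(data)
    y = np.arange(1, len(data) + 1) / len(data)
    return x, y

# 10,000 runs of flipping a fair coin 10 times
heads = np.random.binomial(10, 0.5, size=10_000)
x, y = ecdf(heads)

# P(heads <= 3) is the ECDF value at the last position where x <= 3
p_at_most_3 = y[np.searchsorted(x, 3, side="right") - 1]
print(f"P(at most 3 heads) ~ {p_at_most_3:.3f}")
print(f"P(4 or more heads) ~ {1 - p_at_most_3:.3f}")
```

This is exactly the visual read-off: find the x-value of interest, take the ECDF height there, and complement it for "4 or more".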
[
[
"## Probability distributions and their stories",
"_____no_output_____"
],
[
"**Credit:** Thank you to [Justin Bois](http://bois.caltech.edu/) for countless hours of discussion, work and collaboration on thinking about probability distributions and their stories. All of the following is inspired by Justin & his work, if not explicitly drawn from.",
"_____no_output_____"
],
[
"___\n\nIn the above, we saw that we could match data-generating processes with binary outcomes to the story of the binomial distribution.\n\n> The Binomial distribution's story is as follows: the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success, is Binomially distributed. \n\nThere are many other distributions with stories also!",
"_____no_output_____"
],
[
"### Poisson processes and the Poisson distribution",
"_____no_output_____"
],
[
"In the book [Information Theory, Inference and Learning Algorithms](https://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981) David MacKay tells the tale of a town called Poissonville, in which the buses have an odd schedule. Standing at a bus stop in Poissonville, the amount of time you have to wait for a bus is totally independent of when the previous bus arrived. This means you could watch a bus drive off and another arrive almost instantaneously, or you could be waiting for hours.\n\nArrival of buses in Poissonville is what we call a Poisson process. The timing of the next event is completely independent of when the previous event happened. Many real-life processes behave in this way. \n\n* natural births in a given hospital (there is a well-defined average number of natural births per year, and the timing of one birth is independent of the timing of the previous one);\n* Landings on a website;\n* Meteor strikes;\n* Molecular collisions in a gas;\n* Aviation incidents.\n\nAny process that matches the buses in Poissonville **story** is a Poisson process.\n\n ",
"_____no_output_____"
],
[
"The number of arrivals of a Poisson process in a given amount of time is Poisson distributed. The Poisson distribution has one parameter, the average number of arrivals in a given length of time. So, to match the story, we could consider the number of hits on a website in an hour with an average of six hits per hour. This is Poisson distributed.",
"_____no_output_____"
]
],
[
[
"# Generate Poisson-distributed data\nsamples = np.random.poisson(6, 10**6)\n\n# Plot histogram\nplt.hist(samples, bins=21);",
"_____no_output_____"
]
],
[
[
"**Question:** Does this look like anything to you?",
"_____no_output_____"
],
[
"In fact, the Poisson distribution is the limit of the Binomial distribution for low probability of success and large number of trials, that is, for rare events. ",
"_____no_output_____"
],
[
"To see this, think about the stories. Picture this: you're doing a Bernoulli trial once a minute for an hour, each with a success probability of 0.05. We would do 60 trials, and the number of successes is Binomially distributed, and we would expect to get about 3 successes. This is just like the Poisson story of seeing 3 buses on average arrive in a given interval of time. Thus the Poisson distribution with arrival rate equal to np approximates a Binomial distribution for n Bernoulli trials with probability p of success (with n large and p small). This is useful because the Poisson distribution can be simpler to work with as it has only one parameter instead of two for the Binomial distribution.",
"_____no_output_____"
],
[
"#### Hands-on: Poisson",
"_____no_output_____"
],
[
"Plot the ECDF of the Poisson-distributed data that you generated above.",
"_____no_output_____"
]
],
[
[
"# Generate x- and y-data for the ECDF\nx_p, y_p = ecdf(samples)\n\n# Plot the ECDF\nplt.plot(x_p, y_p, marker=\".\");",
"_____no_output_____"
]
],
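The `ecdf` helper called throughout this notebook is defined earlier in the original material and is not shown in this excerpt. For reference, a minimal version consistent with how it is called here (returning sorted sample values and the cumulative fraction at each value) would look like this; treat it as a sketch, not necessarily the author's exact definition:

```python
import numpy as np

def ecdf(data):
    """Return x, y arrays for the empirical CDF of a 1-D sample."""
    x = np.sort(data)                      # sorted sample values
    y = np.arange(1, len(x) + 1) / len(x)  # cumulative fraction up to each value
    return x, y
```

Plotting `y` against `x` with dots (as done above) gives the staircase-like ECDF.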
[
[
"#### Example Poisson distribution: field goals attempted per game",
"_____no_output_____"
],
[
"This section is explicitly taken from the great work of Justin Bois. You can find more [here](https://github.com/justinbois/dataframed-plot-examples/blob/master/lebron_field_goals.ipynb).",
"_____no_output_____"
],
[
"Let's first remind ourselves of the story behind the Poisson distribution.\n> The number of arrivals of a Poisson processes in a given set time interval is Poisson distributed.\n\nTo quote Justin Bois:\n\n> We could model field goal attempts in a basketball game using a Poisson distribution. When a player takes a shot is a largely stochastic process, being influenced by the myriad ebbs and flows of a basketball game. Some players shoot more than others, though, so there is a well-defined rate of shooting. Let's consider LeBron James's field goal attempts for the 2017-2018 NBA season.",
"_____no_output_____"
],
[
"First thing's first, the data ([from here](https://www.basketball-reference.com/players/j/jamesle01/gamelog/2018)):",
"_____no_output_____"
]
],
[
[
"fga = [19, 16, 15, 20, 20, 11, 15, 22, 34, 17, 20, 24, 14, 14, \n 24, 26, 14, 17, 20, 23, 16, 11, 22, 15, 18, 22, 23, 13, \n 18, 15, 23, 22, 23, 18, 17, 22, 17, 15, 23, 8, 16, 25, \n 18, 16, 17, 23, 17, 15, 20, 21, 10, 17, 22, 20, 20, 23, \n 17, 18, 16, 25, 25, 24, 19, 17, 25, 20, 20, 14, 25, 26, \n 29, 19, 16, 19, 18, 26, 24, 21, 14, 20, 29, 16, 9]",
"_____no_output_____"
]
],
[
[
"To show that this LeBron's attempts are ~ Poisson distributed, you're now going to plot the ECDF and compare it with the the ECDF of the Poisson distribution that has the mean of the data (technically, this is the maximum likelihood estimate).",
"_____no_output_____"
],
[
"#### Hands-on: Simulating Data Generating Stories",
"_____no_output_____"
],
[
"Generate the x and y values for the ECDF of LeBron's field attempt goals.",
"_____no_output_____"
]
],
[
[
"# Generate x & y data for ECDF\nx_ecdf, y_ecdf = ecdf(fga)",
"_____no_output_____"
]
],
[
[
"Now we'll draw samples out of a Poisson distribution to get the theoretical ECDF, plot it with the ECDF of the data and see how they look.",
"_____no_output_____"
]
],
[
[
"# Number of times we simulate the model\nn_reps = 1000\n\n# Plot ECDF of data\nplt.plot(x_ecdf, y_ecdf, '.', color='black');\n\n# Plot ECDF of model\nfor _ in range(n_reps):\n samples = np.random.poisson(np.mean(fga), size=len(fga))\n x_theor, y_theor = ecdf(samples)\n plt.plot(x_theor, y_theor, '.', alpha=0.01, color='lightgray');\n\n\n# Label your axes\nplt.xlabel('field goal attempts')\nplt.ylabel('ECDF');",
"_____no_output_____"
]
],
[
[
"You can see from the ECDF that LeBron's field goal attempts per game are Poisson distributed.",
"_____no_output_____"
],
[
"### Exponential distribution",
"_____no_output_____"
],
[
"We've encountered a variety of named _discrete distributions_. There are also named _continuous distributions_, such as the Exponential distribution and the Normal (or Gaussian) distribution. To see what the story of the Exponential distribution is, let's return to Poissonville, in which the number of buses that will arrive per hour are Poisson distributed.\nHowever, the waiting time between arrivals of a Poisson process are exponentially distributed.\n\nSo: the exponential distribution has the following story: the waiting time between arrivals of a Poisson process are exponentially distributed. It has a single parameter, the mean waiting time. This distribution is not peaked, as we can see from its PDF.\n\nFor an illustrative example, lets check out the time between all incidents involving nuclear power since 1974. It's a reasonable first approximation to expect incidents to be well-modeled by a Poisson process, which means the timing of one incident is independent of all others. If this is the case, the time between incidents should be Exponentially distributed.\n\n\nTo see if this story is credible, we can plot the ECDF of the data with the CDF that we'd get from an exponential distribution with the sole parameter, the mean, given by the mean inter-incident time of the data.\n",
"_____no_output_____"
]
],
[
[
"# Load nuclear power accidents data & create array of inter-incident times\ndf = pd.read_csv('../data/nuclear_power_accidents.csv')\ndf.Date = pd.to_datetime(df.Date)\ndf = df[df.Date >= pd.to_datetime('1974-01-01')]\ninter_times = np.diff(np.sort(df.Date)).astype(float) / 1e9 / 3600 / 24",
"_____no_output_____"
],
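The model CDF used in this comparison has a simple closed form, F(t) = 1 − exp(−t/τ), where τ is the mean waiting time. A small standard-library sanity check (using an illustrative τ, not the actual inter-incident mean) confirms the familiar fact that about 63% of Exponential waits fall below the mean:

```python
import math

def expon_cdf(t, mean):
    # CDF of the Exponential distribution with the given mean waiting time
    return 1.0 - math.exp(-t / mean)

tau = 100.0                      # illustrative mean inter-incident time (days)
print(expon_cdf(tau, tau))       # P(wait <= mean), about 0.632
print(expon_cdf(3 * tau, tau))   # P(wait <= 3 * mean), about 0.950
```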
[
"# Compute mean and sample from exponential\nmean = ___\nsamples = ___\n\n# Compute ECDFs for sample & model\nx, y = ___\nx_theor, y_theor = ___",
"_____no_output_____"
],
[
"# Plot sample & model ECDFs\n___;\nplt.plot(x, y, marker='.', linestyle='none');",
"_____no_output_____"
]
],
[
[
"We see that the data is close to being Exponentially distributed, which means that we can model the nuclear incidents as a Poisson process.",
"_____no_output_____"
],
[
"### Normal distribution",
"_____no_output_____"
],
[
"The Normal distribution, also known as the Gaussian or Bell Curve, appears everywhere. There are many reasons for this. One is the following:\n\n> When doing repeated measurements, we expect them to be Normally distributed, owing to the many subprocesses that contribute to a measurement. This is because (a formulation of the Central Limit Theorem) **any quantity that emerges as the sum of a large number of subprocesses tends to be Normally distributed** provided none of the subprocesses is very broadly distributed.\n\nNow it's time to see if this holds for the measurements of the speed of light in the famous Michelson–Morley experiment:",
"_____no_output_____"
],
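The Central Limit Theorem claim above can be checked directly. As one illustration (the choice of 50 Uniform(0,1) draws is an arbitrary example, not from the original notebook): summing many independent uniforms, a decidedly non-Normal ingredient, produces samples whose mean and spread match the Normal predicted by the CLT, namely mean 50 × 0.5 = 25 and standard deviation sqrt(50 × 1/12) ≈ 2.04.

```python
import numpy as np

rng = np.random.default_rng(42)

# 100,000 samples, each the sum of 50 independent Uniform(0, 1) draws
sums = rng.uniform(0, 1, size=(100_000, 50)).sum(axis=1)

print(sums.mean())  # close to 50 * 0.5 = 25
print(sums.std())   # close to sqrt(50 / 12), about 2.04
```

Plotting `plt.hist(sums, bins=50)` would show the familiar bell shape.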
[
"Below, I'll plot the histogram with a Gaussian curve fitted to it. Even if that looks good, though, that could be due to binning bias. SO then you'll plot the ECDF of the data and the CDF of the model!",
"_____no_output_____"
]
],
[
[
"# Load data, plot histogram \nimport scipy.stats as st\ndf = pd.read_csv('../data/michelson_speed_of_light.csv')\ndf = df.rename(columns={'velocity of light in air (km/s)': 'c'})\nc = df.c.values\nx_s = np.linspace(299.6, 300.1, 400) * 1000\nplt.plot(x_s, st.norm.pdf(x_s, c.mean(), c.std(ddof=1)))\nplt.hist(c, bins=9, density=True)\nplt.xlabel('speed of light (km/s)')\nplt.ylabel('PDF');",
"_____no_output_____"
]
],
[
[
"#### Hands-on: Simulating Normal",
"_____no_output_____"
]
],
[
[
"# Get speed of light measurement + mean & standard deviation\nmichelson_speed_of_light = df.c.values\nmean = np.mean(michelson_speed_of_light)\nstd = np.std(michelson_speed_of_light, ddof=1)\n\n# Generate normal samples w/ mean, std of data\nsamples = np.random.normal(mean, std, size=10000)\n\n# Generate data ECDF & model CDF\nx, y =ecdf(michelson_speed_of_light)\nx_theor, y_theor = ecdf(samples)\n\n# Plot data & model (E)CDFs\nplt.plot(x_theor, y_theor)\nplt.plot(x, y, marker=\".\")\nplt.xlabel('speed of light (km/s)')\nplt.ylabel('CDF');",
"_____no_output_____"
]
],
[
[
"Some of you may ask but is the data really normal? I urge you to check out Allen Downey's post [_Are your data normal? Hint: no._ ](http://allendowney.blogspot.com/2013/08/are-my-data-normal.html)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5ae31b750461752b9ee867c45d1cb523b079f3
| 49,149 |
ipynb
|
Jupyter Notebook
|
examples/ux-docs/template/examples-creating-a-sequencevariant-from-scratch-ux-no-bueno.ipynb
|
aaronslaff/hgvs
|
fa585c18d60e086b94e1b2c80ba4fa1a6d2b626b
|
[
"Apache-2.0"
] | null | null | null |
examples/ux-docs/template/examples-creating-a-sequencevariant-from-scratch-ux-no-bueno.ipynb
|
aaronslaff/hgvs
|
fa585c18d60e086b94e1b2c80ba4fa1a6d2b626b
|
[
"Apache-2.0"
] | null | null | null |
examples/ux-docs/template/examples-creating-a-sequencevariant-from-scratch-ux-no-bueno.ipynb
|
aaronslaff/hgvs
|
fa585c18d60e086b94e1b2c80ba4fa1a6d2b626b
|
[
"Apache-2.0"
] | null | null | null | 38.457746 | 1,756 | 0.574844 |
[
[
[
"# hgvs Documention: Examples\n\nThis notebook is being drafted to run and review the code presented in the hgvs documentation that is in the \"Creating a SequenceVariant from scratch\" section (https://hgvs.readthedocs.io/en/stable/examples/creating-a-variant.html#overview). \n\n## User Troubleshooting\nUsers proposed state. User stories and user troubleshooting methods are included. \n\n### User Credentials\nThis section is for people to provide their background\n\n Occupation: Molecular Biologist\n\n Experience in Biology: 8 years\n\n Experience in Python: ~1 year",
"_____no_output_____"
],
[
"## Step 1: Import hgvs modules\n\n### User Story:\nAs a novice python developer I review the entire page for code snipets. I collect all of the modules that need to be imported and run them in the first cell.",
"_____no_output_____"
]
],
[
[
"import hgvs.location \nimport hgvs.posedit \nimport hgvs.edit \nimport hgvs.variant \nimport copy",
"_____no_output_____"
]
],
[
[
"## Troubleshooting: \"ImportError: No module named variant\"\n### User Story:\nI am confused why one of the modules caused an error.\n### Approach:\nUse `dir()` and `help()` on hgvs and its modules.\n### Resolution: \nExecute example from `help(hgvs)`. Import `hgvs.variantmapper`. \n### Comments:\n`hgvs.variant` and `hgvs.variant.SequenceVariant` are now `hgvs.sequencevariant.SequenceVariant`.",
"_____no_output_____"
]
],
[
[
"dir(hgvs), help(hgvs)",
"Help on package hgvs:\n\nNAME\n hgvs\n\nFILE\n /home/aaron/hgvs/hgvs/__init__.py\n\nDESCRIPTION\n hgvs is a package to parse, format, and manipulate biological sequence\n variants. See https://github.com/biocommons/hgvs/ for details.\n \n Example use:\n \n >>> import hgvs.dataproviders.uta\n >>> import hgvs.parser\n >>> import hgvs.variantmapper\n \n # start with these variants as strings\n >>> hgvs_g, hgvs_c = \"NC_000007.13:g.36561662C>T\", \"NM_001637.3:c.1582G>A\"\n \n # parse the genomic variant into a Python structure\n >>> hp = hgvs.parser.Parser()\n >>> var_g = hp.parse_hgvs_variant(hgvs_g)\n >>> var_g\n SequenceVariant(ac=NC_000007.13, type=g, posedit=36561662C>T)\n \n # SequenceVariants are composed of structured objects, e.g.,\n >>> var_g.posedit.pos.start\n SimplePosition(base=36561662, uncertain=False)\n \n # format by stringification \n >>> str(var_g)\n 'NC_000007.13:g.36561662C>T'\n \n # initialize the mapper for GRCh37 with splign-based alignments\n >>> hdp = hgvs.dataproviders.uta.connect()\n >>> am = hgvs.assemblymapper.AssemblyMapper(hdp,\n ... assembly_name=\"GRCh37\", alt_aln_method=\"splign\",\n ... 
replace_reference=True)\n \n # identify transcripts that overlap this genomic variant\n >>> transcripts = am.relevant_transcripts(var_g)\n >>> sorted(transcripts)\n ['NM_001177506.1', 'NM_001177507.1', 'NM_001637.3']\n \n # map genomic variant to one of these transcripts\n >>> var_c = am.g_to_c(var_g, \"NM_001637.3\")\n >>> var_c\n SequenceVariant(ac=NM_001637.3, type=c, posedit=1582G>A)\n >>> str(var_c)\n 'NM_001637.3:c.1582G>A'\n \n # CDS coordinates use BaseOffsetPosition to support intronic offsets\n >>> var_c.posedit.pos.start\n BaseOffsetPosition(base=1582, offset=0, datum=Datum.CDS_START, uncertain=False)\n\nPACKAGE CONTENTS\n alignmentmapper\n assemblymapper\n config\n dataproviders (package)\n decorators (package)\n easy\n edit\n enums\n exceptions\n hgvsposition\n intervalmapper\n location\n normalizer\n parser\n posedit\n projector\n sequencevariant\n shell\n transcriptmapper\n utils (package)\n validator\n variantmapper\n\nDATA\n __version__ = '1.1.4.dev79+g907f995.d20181204'\n absolute_import = _Feature((2, 5, 0, 'alpha', 1), (3, 0, 0, 'alpha', 0...\n division = _Feature((2, 2, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 8192...\n global_config = <hgvs.config.Config object>\n logger = <logging.Logger object>\n print_function = _Feature((2, 6, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0)...\n unicode_literals = _Feature((2, 6, 0, 'alpha', 2), (3, 0, 0, 'alpha', ...\n\nVERSION\n 1.1.4.dev79+g907f995.d20181204\n\n\n"
],
[
"# follow example in Description\nimport hgvs.dataproviders.uta\nimport hgvs.parser\nimport hgvs.variantmapper",
"_____no_output_____"
],
[
"# chose variant, https://www.ncbi.nlm.nih.gov/snp/rs6025\nrs6025 = 'NC_000001.10:g.169519049T>C'",
"_____no_output_____"
],
[
"# parse variant\nhp = hgvs.parser.Parser()\nrs6025P = hp.parse_hgvs_variant(rs6025)\nrs6025P",
"_____no_output_____"
],
[
"# SequenceVariant can be pulled apart\nrs6025P.ac, rs6025P.fill_ref, rs6025P.format, rs6025P.posedit, rs6025P.type, rs6025P.validate",
"_____no_output_____"
],
[
"# Exploring .fill_ref, .format, .validate\ndir(rs6025P.fill_ref), dir(rs6025P.format), dir(rs6025P.validate)",
"_____no_output_____"
],
[
"# create dataprovider variable -- what does this do?\nhdp = hgvs.dataproviders.uta.connect()",
"_____no_output_____"
],
[
"# create assemblymapper variable\nam = hgvs.assemblymapper",
"_____no_output_____"
]
],
[
[
"## Troubleshooting: \"AttributeError: 'module' object has no attribute 'assemblymapper'\"\n\n### Resolution: \nImport `hgvs.assemblymapper`.\n\n### Comments:\n",
"_____no_output_____"
]
],
[
[
"# import module\nimport hgvs.assemblymapper",
"_____no_output_____"
]
],
[
[
"End of troubleshooting for **\"AttributeError: 'module' object has no attribute 'assemblymapper'\"**",
"_____no_output_____"
]
],
[
[
"# create assemblymapper variable, determine transcripts effected\nam = hgvs.assemblymapper.AssemblyMapper(hdp, alt_aln_method='splign', assembly_name='GRCh37', replace_reference=True)\ntranscripts = am.relevant_transcripts(rs6025P)\nsorted(transcripts)",
"_____no_output_____"
],
[
"# map variant to coding sequence\nrs6025c = am.g_to_c(rs6025P,transcripts[0])\nrs6025c",
"_____no_output_____"
],
[
"# pull apart the SequenceVariant\nrs6025c.ac, rs6025c.posedit.edit, rs6025c.posedit.pos.start, rs6025c.type",
"_____no_output_____"
]
],
[
[
"End of troubleshooting for **\"ImportError: No module named variant\"** \n\n## Step 2: Make an Interval to define a position of the edit",
"_____no_output_____"
]
],
[
[
"start = hgvs.location.BaseOffsetPosition(base=200,offset=-6,datum=hgvs.location.CDS_START)\nstart, str(start)",
"_____no_output_____"
]
],
[
[
"## Troubleshooting: \"AttributeError: 'module' object has no attribute 'CDS_START'\"\n\n### Resolution: \nUse `hgvs.location.Datum.` prefix.\n\n\n### Comments:\n",
"_____no_output_____"
]
],
[
[
"# Check dir() on hgvs.location and hgvs.posedit\ndir(hgvs.location)",
"_____no_output_____"
],
[
"# read doc on 'Datum' and check class list\nhelp(hgvs.location.Datum), dir(hgvs.location.Datum)",
"Help on class Datum in module hgvs.enums:\n\nDatum = <enum 'Datum'>\n"
],
[
"hgvs.location.Datum is hgvs.enums.Datum",
"_____no_output_____"
]
],
[
[
"End of troubleshooting for **\"AttributeError: 'module' object has no attribute 'CDS_START'\"**\n## Step 2 cont.",
"_____no_output_____"
]
],
[
[
"start = hgvs.location.BaseOffsetPosition(base=200,offset=-6,datum=hgvs.location.Datum.CDS_START)\nstart, str(start)",
"_____no_output_____"
],
[
"end = hgvs.location.BaseOffsetPosition(base=22,datum=hgvs.location.Datum.CDS_END)\nend, str(end)",
"_____no_output_____"
],
[
"iv = hgvs.location.Interval(start=start,end=end)\niv, str(iv)",
"_____no_output_____"
]
],
[
[
"## Step 3: Make an edit object",
"_____no_output_____"
]
],
[
[
"edit = hgvs.edit.NARefAlt(ref='A',alt='T')\nedit, str(edit)",
"_____no_output_____"
],
[
"posedit = hgvs.posedit.PosEdit(pos=iv,edit=edit)\nposedit, str(posedit)",
"_____no_output_____"
],
[
"var = hgvs.variant.SequenceVariant(ac=transcripts[0], type='g', posedit=posedit)\nvar, str(var)",
"_____no_output_____"
],
[
"# see AttributeError: 'module' object has no attribute 'variant' troubleshooting\ndir(hgvs), dir(hgvs.sequencevariant)",
"_____no_output_____"
],
[
"# hgvs.sequencevariant is an accepted class with SequenceVariant as a class\nvar = hgvs.sequencevariant.SequenceVariant(ac=transcripts[0], type='g', posedit=posedit)\nvar, str(var)",
"_____no_output_____"
]
],
[
[
"## Step 4: Validate the variant\nSee hgvs.validator.Validator for validation options.",
"_____no_output_____"
]
],
[
[
"dir(hgvs.validator.Validator), help(hgvs.validator.Validator)",
"Help on class Validator in module hgvs.validator:\n\nclass Validator(__builtin__.object)\n | invoke intrinsic and extrinsic validation\n | \n | Methods defined here:\n | \n | __init__(self, hdp, strict=True)\n | \n | validate(self, var, strict=None)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n\n"
],
[
"hgvs.validator.Validator.validate(var)",
"_____no_output_____"
],
[
"hgvs.validator.Validator.validate(var.validate)",
"_____no_output_____"
]
],
[
[
"## Troubleshooting: \"TypeError: unbound method validate() must be called with Validator instance as first argument\"\n\n### Resolution: \nUse `hgvs.sequencevariant.validate_type_ac_pair(ac= , type= )`.\n\n\n### Comments:\n",
"_____no_output_____"
]
],
[
[
"# hgvs.sequencevariant has validate_type_ac_pair\nval = hgvs.sequencevariant.validate_type_ac_pair(ac=var.ac, type=var.type)\nval",
"_____no_output_____"
]
],
[
[
"End of troubleshooting for **\"TypeError: unbound method validate() must be called with Validator instance as first argument\"**",
"_____no_output_____"
]
],
[
[
"var.type = 'c'",
"_____no_output_____"
],
[
"val = hgvs.sequencevariant.validate_type_ac_pair(ac=var.ac, type=var.type)\nval",
"_____no_output_____"
]
],
[
[
"## Step 5: Update variant using copy.deepcopy",
"_____no_output_____"
]
],
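`copy.deepcopy` matters in this step because a `SequenceVariant` is a nest of objects (positions inside intervals inside posedits). A plain assignment or shallow copy would alias the inner objects, so editing the "copy" would silently mutate the original variant. A generic illustration of the difference, using plain dicts as stand-ins for the hgvs classes:

```python
import copy

original = {"pos": {"start": 200, "end": 22}, "edit": {"ref": "A", "alt": "T"}}

shallow = copy.copy(original)    # top level copied, inner dicts still shared
deep = copy.deepcopy(original)   # everything copied recursively

shallow["pos"]["start"] = 456    # mutates original["pos"] too!
deep["edit"]["alt"] = "CT"       # original is untouched

print(original["pos"]["start"])  # 456: the shallow copy aliased the inner dict
print(original["edit"]["alt"])   # T: the deep copy did not
```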
[
[
"import copy",
"_____no_output_____"
],
[
"var2 = copy.deepcopy(var)\nvar2",
"_____no_output_____"
],
[
"var2.posedit.pos.start.base = 456",
"_____no_output_____"
],
[
"str(var2)",
"_____no_output_____"
],
[
"var2.posedit.edit.alt = 'CT'",
"_____no_output_____"
],
[
"str(var2)",
"_____no_output_____"
],
[
"var2.posedit.pos.end.uncertain = True",
"_____no_output_____"
],
[
"str(var2)",
"_____no_output_____"
],
[
"var2 = copy.deepcopy(var)\nvar2.posedit.pos.end.uncertain = True",
"_____no_output_____"
],
[
"str(var2)",
"_____no_output_____"
]
],
[
[
"## Troubleshooting: \"HGVSUnsupportedOperationError: Cannot compare coordinates of uncertain positions\"\n\n### Resolution: \nNone at this time. \n\n\n### Comments:\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a5ae483aed85bdd76d3ebf0e3800bb39380b601
| 607,811 |
ipynb
|
Jupyter Notebook
|
notebooks/generative_adversarial_network.ipynb
|
gkbharathy/DeepLearning_Illustrated
|
b941694d83e100263a650f4d98be6ce717e24191
|
[
"MIT"
] | 8 |
2019-02-11T20:13:20.000Z
|
2021-05-29T17:09:45.000Z
|
notebooks/generative_adversarial_network.ipynb
|
gkbharathy/DeepLearning_Illustrated
|
b941694d83e100263a650f4d98be6ce717e24191
|
[
"MIT"
] | 2 |
2019-03-14T16:32:46.000Z
|
2019-03-23T18:42:14.000Z
|
notebooks/generative_adversarial_network.ipynb
|
gkbharathy/DeepLearning_Illustrated
|
b941694d83e100263a650f4d98be6ce717e24191
|
[
"MIT"
] | 10 |
2019-03-13T19:38:25.000Z
|
2020-03-19T03:14:56.000Z
| 309.791539 | 35,366 | 0.887485 |
[
[
[
"# *Quick, Draw!* GAN",
"_____no_output_____"
],
[
"In this notebook, we use Generative Adversarial Network code (adapted from [Rowel Atienza's](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py) under [MIT License](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/LICENSE)) to create sketches in the style of humans who have played the [*Quick, Draw!* game](https://quickdraw.withgoogle.com) (data available [here](https://github.com/googlecreativelab/quickdraw-dataset) under [Creative Commons Attribution 4.0 license](https://creativecommons.org/licenses/by/4.0/)).",
"_____no_output_____"
],
[
"#### Load dependencies",
"_____no_output_____"
]
],
[
[
"# for data input and output:\nimport numpy as np\nimport os\n\n# for deep learning: \nimport keras\nfrom keras.models import Model\nfrom keras.layers import Input, Dense, Conv2D, Dropout\nfrom keras.layers import BatchNormalization, Flatten\nfrom keras.layers import Activation\nfrom keras.layers import Reshape # new! \nfrom keras.layers import Conv2DTranspose, UpSampling2D # new! \nfrom keras.optimizers import RMSprop # new! \n\n# for plotting: \nimport pandas as pd\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"Using TensorFlow backend.\n"
]
],
[
[
"#### Load data\nNumPy bitmap files are [here](https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/numpy_bitmap) -- pick your own drawing category -- you don't have to pick *apples* :)",
"_____no_output_____"
]
],
[
[
"input_images = \"../quickdraw_data/apple.npy\"",
"_____no_output_____"
],
[
"data = np.load(input_images) # 28x28 (sound familiar?) grayscale bitmap in numpy .npy format; images are centered",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"data[4242]",
"_____no_output_____"
],
[
"data = data/255\ndata = np.reshape(data,(data.shape[0],28,28,1)) # fourth dimension is color\nimg_w,img_h = data.shape[1:3]\ndata.shape",
"_____no_output_____"
],
[
"data[4242]",
"_____no_output_____"
],
[
"plt.imshow(data[4242,:,:,0], cmap='Greys')",
"_____no_output_____"
]
],
[
[
"#### Create discriminator network",
"_____no_output_____"
]
],
[
[
"def build_discriminator(depth=64, p=0.4):\n\n # Define inputs\n image = Input((img_w,img_h,1))\n \n # Convolutional layers\n conv1 = Conv2D(depth*1, 5, strides=2, \n padding='same', activation='relu')(image)\n conv1 = Dropout(p)(conv1)\n \n conv2 = Conv2D(depth*2, 5, strides=2, \n padding='same', activation='relu')(conv1)\n conv2 = Dropout(p)(conv2)\n \n conv3 = Conv2D(depth*4, 5, strides=2, \n padding='same', activation='relu')(conv2)\n conv3 = Dropout(p)(conv3)\n \n conv4 = Conv2D(depth*8, 5, strides=1, \n padding='same', activation='relu')(conv3)\n conv4 = Flatten()(Dropout(p)(conv4))\n \n # Output layer\n prediction = Dense(1, activation='sigmoid')(conv4)\n \n # Model definition\n model = Model(inputs=image, outputs=prediction)\n \n return model",
"_____no_output_____"
],
[
"discriminator = build_discriminator()",
"_____no_output_____"
],
[
"discriminator.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 28, 28, 1) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 14, 14, 64) 1664 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 14, 14, 64) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 7, 7, 128) 204928 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 4, 4, 256) 819456 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 4, 4, 256) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 4, 4, 512) 3277312 \n_________________________________________________________________\ndropout_4 (Dropout) (None, 4, 4, 512) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 8192) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 8193 \n=================================================================\nTotal params: 4,311,553\nTrainable params: 4,311,553\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"discriminator.compile(loss='binary_crossentropy', \n optimizer=RMSprop(lr=0.0008, \n decay=6e-8, \n clipvalue=1.0), \n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"#### Create generator network",
"_____no_output_____"
]
],
[
[
"z_dimensions = 32",
"_____no_output_____"
],
[
"def build_generator(latent_dim=z_dimensions, \n depth=64, p=0.4):\n \n # Define inputs\n noise = Input((latent_dim,))\n \n # First dense layer\n dense1 = Dense(7*7*depth)(noise)\n dense1 = BatchNormalization(momentum=0.9)(dense1) # default momentum for moving average is 0.99\n dense1 = Activation(activation='relu')(dense1)\n dense1 = Reshape((7,7,depth))(dense1)\n dense1 = Dropout(p)(dense1)\n \n # De-Convolutional layers\n conv1 = UpSampling2D()(dense1)\n conv1 = Conv2DTranspose(int(depth/2), \n kernel_size=5, padding='same', \n activation=None,)(conv1)\n conv1 = BatchNormalization(momentum=0.9)(conv1)\n conv1 = Activation(activation='relu')(conv1)\n \n conv2 = UpSampling2D()(conv1)\n conv2 = Conv2DTranspose(int(depth/4), \n kernel_size=5, padding='same', \n activation=None,)(conv2)\n conv2 = BatchNormalization(momentum=0.9)(conv2)\n conv2 = Activation(activation='relu')(conv2)\n \n conv3 = Conv2DTranspose(int(depth/8), \n kernel_size=5, padding='same', \n activation=None,)(conv2)\n conv3 = BatchNormalization(momentum=0.9)(conv3)\n conv3 = Activation(activation='relu')(conv3)\n\n # Output layer\n image = Conv2D(1, kernel_size=5, padding='same', \n activation='sigmoid')(conv3)\n\n # Model definition \n model = Model(inputs=noise, outputs=image)\n \n return model",
"_____no_output_____"
],
[
"generator = build_generator()",
"_____no_output_____"
],
[
"generator.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_2 (InputLayer) (None, 32) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 3136) 103488 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 3136) 12544 \n_________________________________________________________________\nactivation_1 (Activation) (None, 3136) 0 \n_________________________________________________________________\nreshape_1 (Reshape) (None, 7, 7, 64) 0 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 7, 7, 64) 0 \n_________________________________________________________________\nup_sampling2d_1 (UpSampling2 (None, 14, 14, 64) 0 \n_________________________________________________________________\nconv2d_transpose_1 (Conv2DTr (None, 14, 14, 32) 51232 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 14, 14, 32) 128 \n_________________________________________________________________\nactivation_2 (Activation) (None, 14, 14, 32) 0 \n_________________________________________________________________\nup_sampling2d_2 (UpSampling2 (None, 28, 28, 32) 0 \n_________________________________________________________________\nconv2d_transpose_2 (Conv2DTr (None, 28, 28, 16) 12816 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 28, 28, 16) 64 \n_________________________________________________________________\nactivation_3 (Activation) (None, 28, 28, 16) 0 \n_________________________________________________________________\nconv2d_transpose_3 (Conv2DTr (None, 28, 28, 8) 3208 \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, 28, 28, 8) 32 
\n_________________________________________________________________\nactivation_4 (Activation) (None, 28, 28, 8) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 28, 28, 1) 201 \n=================================================================\nTotal params: 183,713\nTrainable params: 177,329\nNon-trainable params: 6,384\n_________________________________________________________________\n"
]
],
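The generator's spatial arithmetic is worth tracing: the dense layer is reshaped to 7×7, the first `UpSampling2D` takes it to 14×14, and the second to 28×28, which is how a 32-dimensional noise vector ends up the size of a *Quick, Draw!* bitmap. `UpSampling2D` with its default size simply repeats each pixel 2×2; a NumPy sketch of that operation (an illustration, not the Keras implementation itself):

```python
import numpy as np

def upsample2d(x):
    # repeat every pixel twice along both spatial axes,
    # mimicking keras UpSampling2D(size=(2, 2)) on a single-channel map
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

fmap = np.arange(49).reshape(7, 7)  # a 7x7 feature map
once = upsample2d(fmap)             # 14x14
twice = upsample2d(once)            # 28x28
print(once.shape, twice.shape)
```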
[
[
"#### Create adversarial network",
"_____no_output_____"
]
],
[
[
"z = Input(shape=(z_dimensions,))\nimg = generator(z)",
"_____no_output_____"
],
[
"discriminator.trainable = False",
"_____no_output_____"
],
[
"pred = discriminator(img)",
"_____no_output_____"
],
[
"adversarial_model = Model(z, pred)",
"_____no_output_____"
],
[
"adversarial_model.compile(loss='binary_crossentropy', \n optimizer=RMSprop(lr=0.0004, \n decay=3e-8, \n clipvalue=1.0), \n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"#### Train!",
"_____no_output_____"
]
],
[
[
"def train(epochs=2000, batch=128, z_dim=z_dimensions):\n \n d_metrics = []\n a_metrics = []\n \n running_d_loss = 0\n running_d_acc = 0\n running_a_loss = 0\n running_a_acc = 0\n \n for i in range(epochs):\n \n # sample real images: \n real_imgs = np.reshape(\n data[np.random.choice(data.shape[0],\n batch,\n replace=False)],\n (batch,28,28,1))\n \n # generate fake images: \n fake_imgs = generator.predict(\n np.random.uniform(-1.0, 1.0, \n size=[batch, z_dim]))\n \n # concatenate images as discriminator inputs:\n x = np.concatenate((real_imgs,fake_imgs))\n \n # assign y labels for discriminator: \n y = np.ones([2*batch,1])\n y[batch:,:] = 0\n \n # train discriminator: \n d_metrics.append(\n discriminator.train_on_batch(x,y)\n )\n running_d_loss += d_metrics[-1][0]\n running_d_acc += d_metrics[-1][1]\n \n # adversarial net's noise input and \"real\" y: \n noise = np.random.uniform(-1.0, 1.0, \n size=[batch, z_dim])\n y = np.ones([batch,1])\n \n # train adversarial net: \n a_metrics.append(\n adversarial_model.train_on_batch(noise,y)\n ) \n running_a_loss += a_metrics[-1][0]\n running_a_acc += a_metrics[-1][1]\n \n # periodically print progress & fake images: \n if (i+1)%100 == 0:\n\n print('Epoch #{}'.format(i))\n log_mesg = \"%d: [D loss: %f, acc: %f]\" % \\\n (i, running_d_loss/i, running_d_acc/i)\n log_mesg = \"%s [A loss: %f, acc: %f]\" % \\\n (log_mesg, running_a_loss/i, running_a_acc/i)\n print(log_mesg)\n\n noise = np.random.uniform(-1.0, 1.0, \n size=[16, z_dim])\n gen_imgs = generator.predict(noise)\n\n plt.figure(figsize=(5,5))\n\n for k in range(gen_imgs.shape[0]):\n plt.subplot(4, 4, k+1)\n plt.imshow(gen_imgs[k, :, :, 0], \n cmap='gray')\n plt.axis('off')\n \n plt.tight_layout()\n plt.show()\n \n return a_metrics, d_metrics",
"_____no_output_____"
],
[
"a_metrics_complete, d_metrics_complete = train()",
"Epoch #99\n99: [D loss: 0.308268, acc: 0.938605] [A loss: 3.351842, acc: 0.289694]\n"
],
[
"ax = pd.DataFrame(\n {\n 'Adversarial': [metric[0] for metric in a_metrics_complete],\n 'Discriminator': [metric[0] for metric in d_metrics_complete],\n }\n).plot(title='Training Loss', logy=True)\nax.set_xlabel(\"Epochs\")\nax.set_ylabel(\"Loss\")",
"_____no_output_____"
],
[
"ax = pd.DataFrame(\n {\n 'Adversarial': [metric[1] for metric in a_metrics_complete],\n 'Discriminator': [metric[1] for metric in d_metrics_complete],\n }\n).plot(title='Training Accuracy')\nax.set_xlabel(\"Epochs\")\nax.set_ylabel(\"Accuracy\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a5af6741240e0f893799c51f678755e2aa6b510
| 47,962 |
ipynb
|
Jupyter Notebook
|
dendograma python.ipynb
|
vfamim/clusters-ia
|
c8bdcfcd7b29160fd768475d30d0c5cd3d49035d
|
[
"MIT"
] | null | null | null |
dendograma python.ipynb
|
vfamim/clusters-ia
|
c8bdcfcd7b29160fd768475d30d0c5cd3d49035d
|
[
"MIT"
] | null | null | null |
dendograma python.ipynb
|
vfamim/clusters-ia
|
c8bdcfcd7b29160fd768475d30d0c5cd3d49035d
|
[
"MIT"
] | null | null | null | 39.736537 | 12,574 | 0.523498 |
[
[
[
"import pandas as pd\nimport openpyxl\nimport matplotlib.pyplot as plt # matplotlib para criar o gráfico dendoframa\nfrom scipy.cluster.hierarchy import dendrogram, linkage # scipy.cluster.hierarchy import funções do tipo linkage = método de ligação\nfrom sklearn.cluster import AgglomerativeClustering # sklearn.cluster import AgglomerativeClustering - executa um agrupamento hierarquico\nfrom sklearn.preprocessing import StandardScaler # from sklearn.preprocessing pacote/ modulo que fornece funções do tipo: StandardScaler - Padroniza os dados.",
"_____no_output_____"
],
[
"dendograma_df = pd.read_excel(\"/home/vfamim/Documentos/DATA SCIENCE/Inteligencia_Analitica/comportamento_consumidores.xlsx\")\ndendograma_df.head()",
"_____no_output_____"
],
[
"dendograma_df.shape",
"_____no_output_____"
],
[
"df = dendograma_df.loc[:,\"Preco\":\"Local\"]\ndf.head()",
"_____no_output_____"
],
[
"# StandardScaler - Padroniza as variáveis para média 0 e DP 1.\n# scaler.fit_transform - ajusta e depois padroniza\nscaler = StandardScaler()\nbase = scaler.fit_transform(df)\nbase",
"_____no_output_____"
],
[
"# Função dendograma é da biblioteca scipy.cluster.hierarchy.dendrogram.\n# Linkage = ligação(dados/ base, method = )\ndendrograma = dendrogram(linkage(base, method = 'average'))\nplt.title('Dendrograma')\nplt.xlabel('registros')\nplt.ylabel('Distância Euclidiana')",
"_____no_output_____"
],
[
"# AgglomerativeClustering - classe do metodo aglomerativo\n# affinity parametro para calcular a distancia euclidiana\n# fit_predict - Ajuste o clustering hierárquico de recursos ou matriz de distância e retorne rótulos de cluster.\nhc = AgglomerativeClustering(n_clusters = 4, affinity = 'euclidean', linkage = 'average')\ngrupo = hc.fit_predict(base)\ngrupo",
"_____no_output_____"
],
[
"df['grupo']=grupo\ndf.head()",
"_____no_output_____"
],
[
"df['grupo'].value_counts()",
"_____no_output_____"
],
[
"# Dados padronizados\nbase_df = pd.DataFrame(base,columns=['Preco','Internacional','Interurbano','Local'] )\nbase_df['grupo']=grupo\nbase_df",
"_____no_output_____"
],
[
"# Criar grafico para entender os cluster\ntabela = base_df[['grupo', 'Preco','Internacional','Interurbano','Local']].groupby(['grupo']).mean()\ntabela",
"_____no_output_____"
],
[
"### Salvando excel\n### index = False não salva o índice do arquivo.\nescrever = pd.ExcelWriter('/home/vfamim/Documentos/DATA SCIENCE/Inteligencia_Analitica/tabela_01.xlsx')\ntabela.to_excel(escrever, index=False)\nescrever.save()",
"_____no_output_____"
],
[
"dendograma_df.shape",
"_____no_output_____"
],
[
"dendograma_df['grupo']=grupo\ndendograma_df.head()",
"_____no_output_____"
],
[
"dendograma_df['grupo']",
"_____no_output_____"
],
[
"#import numpy as np\ndendograma_df['grupo']= dendograma_df['grupo'].map ({0:'Diamante', 1:'Ouro', 2:'Prata', 3:'Bronze'})\ndendograma_df.head()",
"_____no_output_____"
],
[
"dendograma_df['grupo'].value_counts()",
"_____no_output_____"
],
[
"pd.crosstab(dendograma_df.Segmento, dendograma_df.grupo)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5afcf7ba30af8593a87ef0cc76b13bd1282a63
| 6,593 |
ipynb
|
Jupyter Notebook
|
DataGenerator.ipynb
|
AaronGe88inTHU/dreye-thu
|
e0800daa07b37a56e1c8b6bb053689392a9e2211
|
[
"MIT"
] | 1 |
2021-04-07T07:37:26.000Z
|
2021-04-07T07:37:26.000Z
|
DataGenerator.ipynb
|
AaronGe88inTHU/dreye-thu
|
e0800daa07b37a56e1c8b6bb053689392a9e2211
|
[
"MIT"
] | null | null | null |
DataGenerator.ipynb
|
AaronGe88inTHU/dreye-thu
|
e0800daa07b37a56e1c8b6bb053689392a9e2211
|
[
"MIT"
] | 1 |
2019-10-19T08:01:52.000Z
|
2019-10-19T08:01:52.000Z
| 34.338542 | 236 | 0.490217 |
[
[
[
"<a href=\"https://colab.research.google.com/github/AaronGe88inTHU/dreye-thu/blob/master/DataGenerator.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"import numpy as np\nimport cv2\nfrom PIL import Image\nimport tensorflow as tf\nimport os, glob\nfrom matplotlib import pyplot as plt\nimport tarfile\nimport shutil",
"_____no_output_____"
],
[
"def target_generate(file_path):\n cmd_str = os.path.join(file_path,\"*.png\")\n \n file_list = glob.glob(\"/content/drive/My Drive/dreye/ImagePreprocess/*.png\")\n file_list = sorted(file_list, key=lambda name: int(name.split(\"/\")[-1][:-4]))#int(name[:-4]))\n print(\"{0} images need to be processed! in {1}\".format(len(file_list), cmd_str))\n \n batch_size = 20\n image_arrays = []\n crop_arrays = []\n batches = int(np.floor(len(file_list) / batch_size))\n mod = len(file_list) % batch_size\n for ii in range(batches):\n image_batch = []\n crop_batch = []\n for jj in range(batch_size):\n im = np.array(Image.open(file_list[ii * batch_size + jj]))\n im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC)\n cp = im[167:279, 167:279]\n image_batch.append(im)\n crop_batch.append(cp)\n \n \n print(\"{} images resized!\".format(batch_size))\n print(\"{} images cropped!\".format(batch_size))\n \n image_batch = np.array(image_batch)\n crop_batch = np.array(crop_batch)\n \n image_arrays.extend(image_batch)\n crop_arrays.extend(crop_batch)\n #print(len(image_arrays), len(crop_arrays))#plt.imshow(np.array(images[0], dtype=np.int32))\n\n image_batch = []\n crop_batch = []\n for jj in range (int(batches * batch_size + mod)):\n im = np.array(Image.open(file_list[jj]))\n im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC)\n cp = im[167:279, 167:279]\n image_batch.append(im)\n crop_batch.append(cp)\n print(\"{} images resized!\".format(mod))\n print(\"{} images cropped!\".format(mod))\n\n\n image_batch = np.array(image_batch)\n image_arrays.extend(image_batch\n )\n crop_batch = np.array(crop_batch)\n crop_arrays.extend(crop_batch)\n\n image_arrays = np.array(image_arrays)\n crop_arrays = np.array(crop_arrays)\n #print(image_arrays.shape, crop_arrays.shape)\n \n resize_path = os.path.join(file_path,\"resize\")\n crop_path = os.path.join(file_path,\"crop\")\n os.mkdir(resize_path)\n os.mkdir(crop_path)\n for ii in range(image_arrays.shape[0]):\n im = 
Image.fromarray(image_arrays[ii])\n im.save(os.path.join(resize_path,\"{}.png\".format(ii)))\n im = Image.fromarray(crop_arrays[ii])\n im.save(os.path.join(crop_path,\"{}.png\".format(ii)))\n \n print(\"Saved successfully!\")\n\n ",
"_____no_output_____"
],
[
"#target_generate('/content/drive/My Drive/dreye/ImagePreprocess')\n\npath_name = os.path.join(\"/content/drive/My Drive/dreye/ImagePreprocess/\", \"resize\")\ntar = tarfile.open(path_name+\".tar.gz\", \"w:gz\")\ntar.add(path_name, arcname=\"resize\")\ntar.close()\nshutil.rmtree(path_name)\n\npath_name = os.path.join(\"/content/drive/My Drive/dreye/ImagePreprocess/\", \"crop\")\ntar2 = tarfile.open(path_name+\".tar.gz\", \"w:gz\")\ntar2.add(path_name, arcname=\"crop\")\ntar2.close()\nshutil.rmtree(path_name)\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a5b114f96b63d5daa5648379abd352daa9059e0
| 248,015 |
ipynb
|
Jupyter Notebook
|
module1-regression-1/LS_DS_211.ipynb
|
aidanvu1992/DS-Unit-2-Linear-Models
|
87b9c5f3cd802a605ace0a620cffe28fbf49e47c
|
[
"MIT"
] | null | null | null |
module1-regression-1/LS_DS_211.ipynb
|
aidanvu1992/DS-Unit-2-Linear-Models
|
87b9c5f3cd802a605ace0a620cffe28fbf49e47c
|
[
"MIT"
] | null | null | null |
module1-regression-1/LS_DS_211.ipynb
|
aidanvu1992/DS-Unit-2-Linear-Models
|
87b9c5f3cd802a605ace0a620cffe28fbf49e47c
|
[
"MIT"
] | null | null | null | 157.670057 | 167,157 | 0.853557 |
[
[
[
"<a href=\"https://colab.research.google.com/github/bruno-janota/DS-Unit-2-Linear-Models/blob/master/module1-regression-1/LS_DS_211.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 1, Module 1*\n\n---",
"_____no_output_____"
],
[
"# Regression 1\n\n- Begin with baselines for regression\n- Use scikit-learn to fit a linear regression\n- Explain the coefficients from a linear regression",
"_____no_output_____"
],
[
"Brandon Rohrer wrote a good blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)\n\nWe’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”\n\n- “How Much / How Many?” (Regression)\n- “Is this A or B?” (Classification)\n\nThis unit, you’ll build supervised learning models with “tabular data” (data in tables, like spreadsheets). Including, but not limited to:\n\n- Predict New York City real estate prices <-- **Today, we'll start this!**\n- Predict which water pumps in Tanzania need repairs\n- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model!",
"_____no_output_____"
],
[
"### Setup\n\nRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.\n\nLibraries:\n\n- ipywidgets\n- pandas\n- plotly\n- scikit-learn",
"_____no_output_____"
]
],
[
[
"import sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'\n \n# Ignore this Numpy warning when using Plotly Express:\n# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.\nimport warnings\nwarnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')",
"_____no_output_____"
]
],
[
[
"# Begin with baselines for regression",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"### Predict how much a NYC condo costs 🏠💸\n\nRegression models output continuous numbers, so we can use regression to answer questions like \"How much?\" or \"How many?\" \n\nOften, the question is \"How much will this cost? How many dollars?\"",
"_____no_output_____"
],
[
"For example, here's a fun YouTube video, which we'll use as our scenario for this lesson:\n\n[Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)\n\n> Real Estate Agent Leonard Steinberg just sold a pre-war condo in New York City's Tribeca neighborhood. We challenged three people - an apartment renter, an apartment owner and a real estate expert - to try to guess how much the apartment sold for. Leonard reveals more and more details to them as they refine their guesses.",
"_____no_output_____"
],
[
"The condo from the video is **1,497 square feet**, built in 1852, and is in a desirable neighborhood. According to the real estate agent, _\"Tribeca is known to be one of the most expensive ZIP codes in all of the United States of America.\"_\n\nHow can we guess what this condo sold for? Let's look at 3 methods:\n\n1. Heuristics\n2. Descriptive Statistics\n3. Predictive Model ",
"_____no_output_____"
],
[
"## Follow Along",
"_____no_output_____"
],
[
"### 1. Heuristics\n\nHeuristics are \"rules of thumb\" that people use to make decisions and judgments. The video participants discussed their heuristics:\n\n\n",
"_____no_output_____"
],
[
"**Participant 1**, Chinwe, is a real estate amateur. She rents her apartment in New York City. Her first guess was 8 million, and her final guess was 15 million.\n\n[She said](https://youtu.be/JQCctBOgH9I?t=465), _\"People just go crazy for numbers like 1852. You say **'pre-war'** to anyone in New York City, they will literally sell a kidney. They will just give you their children.\"_ ",
"_____no_output_____"
],
[
"**Participant 3**, Pam, is an expert. She runs a real estate blog. Her first guess was 1.55 million, and her final guess was 2.2 million.\n\n[She explained](https://youtu.be/JQCctBOgH9I?t=280) her first guess: _\"I went with a number that I think is kind of the going rate in the location, and that's **a thousand bucks a square foot.**\"_",
"_____no_output_____"
],
[
"**Participant 2**, Mubeen, is between the others in his expertise level. He owns his apartment in New York City. His first guess was 1.7 million, and his final guess was also 2.2 million.",
"_____no_output_____"
],
[
"### 2. Descriptive Statistics",
"_____no_output_____"
],
[
"We can use data to try to do better than these heuristics. How much have other Tribeca condos sold for?\n\nLet's answer this question with a relevant dataset, containing most of the single residential unit, elevator apartment condos sold in Tribeca, from January through April 2019.\n\nWe can get descriptive statistics for the dataset's `SALE_PRICE` column.\n\nHow many condo sales are in this dataset? What was the average sale price? The median? Minimum? Maximum?",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv(DATA_PATH+'condos/tribeca.csv')\npd.options.display.float_format = '{:,.0f}'.format\ndf['SALE_PRICE'].describe()",
"_____no_output_____"
]
],
[
[
"On average, condos in Tribeca have sold for \\$3.9 million. So that could be a reasonable first guess.\n\nIn fact, here's the interesting thing: **we could use this one number as a \"prediction\", if we didn't have any data except for sales price...** \n\nImagine we didn't have any any other information about condos, then what would you tell somebody? If you had some sales prices like this but you didn't have any of these other columns. If somebody asked you, \"How much do you think a condo in Tribeca costs?\"\n\nYou could say, \"Well, I've got 90 sales prices here, and I see that on average they cost \\$3.9 million.\"\n\nSo we do this all the time in the real world. We use descriptive statistics for prediction. And that's not wrong or bad, in fact **that's where you should start. This is called the _mean baseline_.**",
"_____no_output_____"
],
[
"**Baseline** is an overloaded term, with multiple meanings:\n\n1. [**The score you'd get by guessing**](https://twitter.com/koehrsen_will/status/1088863527778111488)\n2. [**Fast, first models that beat guessing**](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) \n3. **Complete, tuned \"simpler\" model** (Simpler mathematically, computationally. Or less work for you, the data scientist.)\n4. **Minimum performance that \"matters\"** to go to production and benefit your employer and the people you serve.\n5. **Human-level performance** \n\nBaseline type #1 is what we're doing now.\n\nLinear models can be great for #2, 3, 4, and [sometimes even #5 too!](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825)",
"_____no_output_____"
],
[
"---\n\nLet's go back to our mean baseline for Tribeca condos. \n\nIf we just guessed that every Tribeca condo sold for \\$3.9 million, how far off would we be, on average?",
"_____no_output_____"
]
],
[
[
"guess = df['SALE_PRICE'].mean()\nerrors = guess - df['SALE_PRICE']\nmean_absolute_error = errors.abs().mean()\nprint(f'If we just guessed every Tribeca condo sold for ${guess:,.0f},')\nprint(f'we would be off by ${mean_absolute_error:,.0f} on average.')",
"If we just guessed every Tribeca condo sold for $3,928,736,\nwe would be off by $2,783,380 on average.\n"
]
],
[
[
"That sounds like a lot of error! \n\nBut fortunately, we can do better than this first baseline — we can use more data. For example, the condo's size.\n\nCould sale price be **dependent** on square feet? To explore this relationship, let's make a scatterplot, using [Plotly Express](https://plot.ly/python/plotly-express/):",
"_____no_output_____"
]
],
[
[
"import plotly.express as px\npx.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE')",
"_____no_output_____"
]
],
[
[
"### 3. Predictive Model\n\nTo go from a _descriptive_ [scatterplot](https://www.plotly.express/plotly_express/#plotly_express.scatter) to a _predictive_ regression, just add a _line of best fit:_",
"_____no_output_____"
]
],
[
[
"px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE', trendline='ols')",
"_____no_output_____"
],
[
"df.SALE_PRICE.mean()",
"_____no_output_____"
],
[
"df.SALE_PRICE.std()",
"_____no_output_____"
],
[
"df.SALE_PRICE.describe()",
"_____no_output_____"
],
[
"import seaborn as sns\n\nsns.boxplot(df.SALE_PRICE)",
"_____no_output_____"
]
],
[
[
"Roll over the Plotly regression line to see its equation and predictions for sale price, dependent on gross square feet.\n\nLinear Regression helps us **interpolate.** For example, in this dataset, there's a gap between 4016 sq ft and 4663 sq ft. There were no 4300 sq ft condos sold, but what price would you predict, using this line of best fit?\n\nLinear Regression also helps us **extrapolate.** For example, in this dataset, there were no 6000 sq ft condos sold, but what price would you predict?",
"_____no_output_____"
],
[
"The line of best fit tries to summarize the relationship between our x variable and y variable in a way that enables us to use the equation for that line to make predictions.\n\n\n\n",
"_____no_output_____"
],
[
"**Synonyms for \"y variable\"**\n\n- **Dependent Variable**\n- Response Variable\n- Outcome Variable \n- Predicted Variable\n- Measured Variable\n- Explained Variable\n- **Label**\n- **Target**",
"_____no_output_____"
],
[
"**Synonyms for \"x variable\"**\n\n- **Independent Variable**\n- Explanatory Variable\n- Regressor\n- Covariate\n- Correlate\n- **Feature**\n",
"_____no_output_____"
],
[
"The bolded terminology will be used most often by your instructors this unit.",
"_____no_output_____"
],
[
"## Challenge\n\nIn your assignment, you will practice how to begin with baselines for regression, using a new dataset!",
"_____no_output_____"
],
[
"# Use scikit-learn to fit a linear regression",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"We can use visualization libraries to do simple linear regression (\"simple\" means there's only one independent variable). \n\nBut during this unit, we'll usually use the scikit-learn library for predictive models, and we'll usually have multiple independent variables.",
"_____no_output_____"
],
[
"In [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API), Jake VanderPlas explains **how to structure your data** for scikit-learn:\n\n> The best way to think about data within Scikit-Learn is in terms of tables of data. \n>\n> \n>\n>The features matrix is often stored in a variable named `X`. The features matrix is assumed to be two-dimensional, with shape `[n_samples, n_features]`, and is most often contained in a NumPy array or a Pandas `DataFrame`.\n>\n>We also generally work with a label or target array, which by convention we will usually call `y`. The target array is usually one dimensional, with length `n_samples`, and is generally contained in a NumPy array or Pandas `Series`. The target array may have continuous numerical values, or discrete classes/labels. \n>\n>The target array is the quantity we want to _predict from the data:_ in statistical terms, it is the dependent variable. ",
"_____no_output_____"
],
[
"VanderPlas also lists a **5 step process** for scikit-learn's \"Estimator API\":\n\n> Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.\n>\n> Most commonly, the steps in using the Scikit-Learn estimator API are as follows:\n>\n> 1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn.\n> 2. Choose model hyperparameters by instantiating this class with desired values.\n> 3. Arrange data into a features matrix and target vector following the discussion above.\n> 4. Fit the model to your data by calling the `fit()` method of the model instance.\n> 5. Apply the Model to new data: For supervised learning, often we predict labels for unknown data using the `predict()` method.\n\nLet's try it!",
"_____no_output_____"
],
[
"## Follow Along\n\nFollow the 5 step process, and refer to [Scikit-Learn LinearRegression documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).",
"_____no_output_____"
]
],
[
[
"# 1. Import the appropriate estimator class from Scikit-Learn\nfrom sklearn.linear_model import LinearRegression\n",
"_____no_output_____"
],
[
"# 2. Instantiate this class\nmodel = LinearRegression()\n",
"_____no_output_____"
],
[
"# 3. Arrange X features matrix & y target vector\nfeatures = ['GROSS_SQUARE_FEET']\ntarget = 'SALE_PRICE'\nX = df[features]\ny = df[target]\n\nprint(X.shape, y.shape)",
"(90, 1) (90,)\n"
],
[
"# 4. Fit the model\nmodel.fit(X, y)\n",
"_____no_output_____"
],
[
"# 5. Apply the model to new data\nsq_feet = 1497\nX_test = [[sq_feet]]\ny_pred = model.predict(X_test)\n\nprint(f'Predicted price for {sq_feet} sq ft Tribeca condo: {y_pred[0]}')",
"Predicted price for 1497 sq ft Tribeca condo: 3100078.099303695\n"
]
],
[
[
"So, we used scikit-learn to fit a linear regression, and predicted the sales price for a 1,497 square foot Tribeca condo, like the one from the video.\n\nNow, what did that condo actually sell for? ___The final answer is revealed in [the video at 12:28](https://youtu.be/JQCctBOgH9I?t=748)!___",
"_____no_output_____"
]
],
[
[
"y_test = [2800000]",
"_____no_output_____"
]
],
[
[
"What was the error for our prediction, versus the video participants?\n\nLet's use [scikit-learn's mean absolute error function](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).",
"_____no_output_____"
]
],
[
[
"chinwe_final_guess = [15000000]\nmubeen_final_guess = [2200000]\npam_final_guess = [2200000]",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_absolute_error\n\nmae = mean_absolute_error(y_test, y_pred)\nprint(f'Out models error: {mae}')",
"Out models error: 300078.0993036949\n"
],
[
"mae = mean_absolute_error(y_test, chinwe_final_guess)\nprint(f'Chinwe models error: {mae}')",
"Chinwe models error: 12200000.0\n"
],
[
"mae = mean_absolute_error(y_test, mubeen_final_guess)\nprint(f'Mubeen and Pam models error: {mae}')",
"Mubeen and Pam models error: 600000.0\n"
],
[
"# Make predictions on full dataset and report the mae of the model\npreds = model.predict(X)\nmae = mean_absolute_error(y, preds)\nprint(f'Our models MAE: ${mae}')",
"Our models MAE: $1176817.9930150746\n"
]
],
[
[
"This [diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y) shows what we just did! Don't worry about understanding it all now. But can you start to match some of these boxes/arrows to the corresponding lines of code from above?\n\n<img src=\"https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/_images/plot_ML_flow_chart_12.png\" width=\"75%\">",
"_____no_output_____"
],
[
"Here's [another diagram](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/), which shows how machine learning is a \"new programming paradigm\":\n\n<img src=\"https://pbs.twimg.com/media/ECQDlFOWkAEJzlY.jpg\" width=\"70%\">\n\n> A machine learning system is \"trained\" rather than explicitly programmed. It is presented with many \"examples\" relevant to a task, and it finds statistical structure in these examples which eventually allows the system to come up with rules for automating the task. —[Francois Chollet](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/)",
"_____no_output_____"
],
[
"Wait, are we saying that *linear regression* could be considered a *machine learning algorithm*? Maybe it depends? What do you think? We'll discuss throughout this unit.",
"_____no_output_____"
],
[
"## Challenge\n\nIn your assignment, you will use scikit-learn for linear regression with one feature. For a stretch goal, you can do linear regression with two or more features.",
"_____no_output_____"
],
[
"# Explain the coefficients from a linear regression",
"_____no_output_____"
],
[
"## Overview\n\nWhat pattern did the model \"learn\", about the relationship between square feet & price?",
"_____no_output_____"
],
[
"## Follow Along",
"_____no_output_____"
],
[
"To help answer this question, we'll look at the `coef_` and `intercept_` attributes of the `LinearRegression` object. (Again, [here's the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).)\n",
"_____no_output_____"
]
],
[
[
"model.coef_",
"_____no_output_____"
],
[
"model.intercept_",
"_____no_output_____"
]
],
[
[
"We can repeatedly apply the model to new/unknown data, and explain the coefficient:",
"_____no_output_____"
]
],
[
[
"def predict(square_feet):\n y_pred = model.predict([[square_feet]])\n estimate = y_pred[0]\n coefficient = model.coef_[0]\n result = f'${estimate:,.0f} estimated price for {square_feet:,.0f} square foot condo in Tribeca. '\n explanation = f'In this linear regression, each additional square foot adds ${coefficient:,.0f}.'\n return result + explanation\n\npredict(1497)",
"_____no_output_____"
],
[
"# What does the model predict for low square footage?\npredict(500)",
"_____no_output_____"
],
[
"# For high square footage?\npredict(10000)",
"_____no_output_____"
],
[
"# Re-run the prediction functon interactively\n# Ipywidgets usually works on Colab, but not always\nfrom ipywidgets import interact\ninteract(predict, square_feet=(600, 5000));",
"_____no_output_____"
]
],
[
[
"## Challenge\n\nIn your assignment, you will define a function to make new predictions and explain the model coefficient.",
"_____no_output_____"
],
[
"# Review",
"_____no_output_____"
],
[
"You'll practice these objectives when you do your assignment:\n\n- Begin with baselines for regression\n- Use scikit-learn to fit a linear regression\n- Make new predictions and explain coefficients",
"_____no_output_____"
],
[
"You'll use another New York City real estate dataset. You'll predict how much it costs to rent an apartment, instead of how much it costs to buy a condo.\n\nYou've been provided with a separate notebook for your assignment, which has all the instructions and stretch goals. Good luck and have fun!",
"_____no_output_____"
],
[
"# Sources\n\n#### NYC Real Estate\n- Video: [Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)\n- Data: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)\n- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)\n\n#### Baselines\n- Will Koehrsen, [\"One of the most important steps in a machine learning project is establishing a common sense baseline...\"](https://twitter.com/koehrsen_will/status/1088863527778111488)\n- Emmanuel Ameisen, [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)\n- Robyn M. Dawes, [The robust beauty of improper linear models in decision making](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825)\n\n#### Plotly Express\n- [Plotly Express](https://plot.ly/python/plotly-express/) examples\n- [plotly_express.scatter](https://www.plotly.express/plotly_express/#plotly_express.scatter) docs\n\n#### Scikit-Learn\n- Jake VanderPlas, [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API)\n- Olvier Grisel, [Diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y)\n- [sklearn.linear_model.LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)\n- [sklearn.metrics.mean_absolute_error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a5b164b6f44f45dc360512d7a0c18c6e794ff93
| 27,619 |
ipynb
|
Jupyter Notebook
|
azure_speech/Azure_demo.ipynb
|
TheSodacan/linebotTeacher
|
208798bcfd6470a0fd47b24dfe3b4007e9fad98f
|
[
"Apache-2.0"
] | 2 |
2021-06-08T01:47:19.000Z
|
2021-06-14T16:52:01.000Z
|
azure_speech/Azure_demo.ipynb
|
TheSodacan/linebotTeacher
|
208798bcfd6470a0fd47b24dfe3b4007e9fad98f
|
[
"Apache-2.0"
] | null | null | null |
azure_speech/Azure_demo.ipynb
|
TheSodacan/linebotTeacher
|
208798bcfd6470a0fd47b24dfe3b4007e9fad98f
|
[
"Apache-2.0"
] | 1 |
2021-06-18T13:11:00.000Z
|
2021-06-18T13:11:00.000Z
| 41.222388 | 142 | 0.619465 |
[
[
[
"# Perform the speech-to-text service operation",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\n# Creates an instance of a speech config with specified subscription key and service region.\n# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\nspeech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)\n\n# Creates an audio configuration that points to an audio file.\n# Replace with your own audio filename.\naudio_filename = \"narration.wav\"\naudio_input = speechsdk.audio.AudioConfig(filename=audio_filename)\n\n# Creates a recognizer with the given settings\nspeech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)\n\nprint(\"Recognizing first result...\")\n\n# Starts speech recognition, and returns after a single utterance is recognized. The end of a\n# single utterance is determined by listening for silence at the end or until a maximum of 15\n# seconds of audio is processed. The task returns the recognition text as result. \n# Note: Since recognize_once() returns only a single utterance, it is suitable only for single\n# shot recognition like command or query. \n# For long-running multi-utterance recognition, use start_continuous_recognition() instead.\nresult = speech_recognizer.recognize_once()\n\n# Checks result.\nif result.reason == speechsdk.ResultReason.RecognizedSpeech:\n print(\"Recognized: {}\".format(result.text))\nelif result.reason == speechsdk.ResultReason.NoMatch:\n print(\"No speech could be recognized: {}\".format(result.no_match_details))\nelif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n print(\"Speech Recognition canceled: {}\".format(cancellation_details.reason))\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n print(\"Error details: {}\".format(cancellation_details.error_details))",
"_____no_output_____"
]
],
[
[
"# Running the text-to-speech service",
"_____no_output_____"
],
[
"## Converting text to synthesized speech",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\n# Creates an instance of a speech config with specified subscription key and service region.\n# Replace with your own subscription key and service region (e.g., \"westus\").\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\nspeech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)\n\n# Creates a speech synthesizer using the default speaker as audio output.\nspeech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)\n\n# Receives a text from console input.\nprint(\"Type some text that you want to speak...\")\ntext = input()\n\n# Synthesizes the received text to speech.\n# The synthesized speech is expected to be heard on the speaker with this line executed.\nresult = speech_synthesizer.speak_text_async(text).get()\n\n# Checks result.\nif result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n print(\"Speech synthesized to speaker for text [{}]\".format(text))\nelif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n print(\"Speech synthesis canceled: {}\".format(cancellation_details.reason))\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n if cancellation_details.error_details:\n print(\"Error details: {}\".format(cancellation_details.error_details))\n print(\"Did you update the subscription info?\")",
"_____no_output_____"
]
],
[
[
"## Converting text to an audio file",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\n# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\nspeech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)\n\n# Creates an audio configuration that points to an audio file.\n# Replace with your own audio filename.\naudio_filename = \"helloworld.wav\"\naudio_output = speechsdk.audio.AudioOutputConfig(filename=audio_filename)\n\n# Creates a synthesizer with the given settings\nspeech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_output)\n\n# Synthesizes the text to speech.\n# Replace with your own text.\ntext = \"Hello world!\"\nresult = speech_synthesizer.speak_text_async(text).get()\n\n# Checks result.\nif result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n print(\"Speech synthesized to [{}] for text [{}]\".format(audio_filename, text))\nelif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n print(\"Speech synthesis canceled: {}\".format(cancellation_details.reason))\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n if cancellation_details.error_details:\n print(\"Error details: {}\".format(cancellation_details.error_details))\n print(\"Did you update the subscription info?\")",
"_____no_output_____"
]
],
[
[
"# Running the speech-to-translated-text service",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\n\ndef translate_speech_to_text():\n\n    # Creates an instance of a speech translation config with specified subscription key and service region.\n    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion\n    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)\n\n    # Sets source and target languages.\n    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages\n    fromLanguage = 'en-US'\n    toLanguage = 'de' # look up the text language code; line 33 below also needs to be changed\n    translation_config.speech_recognition_language = fromLanguage\n    translation_config.add_target_language(toLanguage)\n\n    # Creates a translation recognizer using the default microphone as input.\n    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)\n\n    # Starts translation, and returns after a single utterance is recognized. The end of a\n    # single utterance is determined by listening for silence at the end or until a maximum of 15\n    # seconds of audio is processed. It returns the recognized text as well as the translation.\n    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single\n    # shot recognition like command or query.\n    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.\n    print(\"Say something...\")\n    result = recognizer.recognize_once()\n\n    # Check the result\n    if result.reason == speechsdk.ResultReason.TranslatedSpeech:\n        print(\"RECOGNIZED '{}': {}\".format(fromLanguage, result.text))\n        print(\"TRANSLATED into {}: {}\".format(toLanguage, result.translations['de']))\n    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:\n        print(\"RECOGNIZED: {} (text could not be translated)\".format(result.text))\n    elif result.reason == speechsdk.ResultReason.NoMatch:\n        print(\"NOMATCH: Speech could not be recognized: {}\".format(result.no_match_details))\n    elif result.reason == speechsdk.ResultReason.Canceled:\n        print(\"CANCELED: Reason={}\".format(result.cancellation_details.reason))\n        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:\n            print(\"CANCELED: ErrorDetails={}\".format(result.cancellation_details.error_details))\n\ntranslate_speech_to_text()",
"_____no_output_____"
]
],
[
[
"# Running the speech-to-multilingual-text translation service",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\n\ndef translate_speech_to_text():\n\n    # Creates an instance of a speech translation config with specified subscription key and service region.\n    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion\n    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)\n\n    # Sets source and target languages.\n    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages\n    fromLanguage = 'en-US'\n    translation_config.speech_recognition_language = fromLanguage\n    translation_config.add_target_language('de')\n    translation_config.add_target_language('fr')\n\n    # Creates a translation recognizer using the default microphone as input.\n    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)\n\n    # Starts translation, and returns after a single utterance is recognized. The end of a\n    # single utterance is determined by listening for silence at the end or until a maximum of 15\n    # seconds of audio is processed. It returns the recognized text as well as the translation.\n    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single\n    # shot recognition like command or query.\n    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.\n    print(\"Say something...\")\n    result = recognizer.recognize_once()\n\n    # Check the result\n    if result.reason == speechsdk.ResultReason.TranslatedSpeech:\n        print(\"RECOGNIZED '{}': {}\".format(fromLanguage, result.text))\n        print(\"TRANSLATED into {}: {}\".format('de', result.translations['de']))\n        print(\"TRANSLATED into {}: {}\".format('fr', result.translations['fr']))\n    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:\n        print(\"RECOGNIZED: {} (text could not be translated)\".format(result.text))\n    elif result.reason == speechsdk.ResultReason.NoMatch:\n        print(\"NOMATCH: Speech could not be recognized: {}\".format(result.no_match_details))\n    elif result.reason == speechsdk.ResultReason.Canceled:\n        print(\"CANCELED: Reason={}\".format(result.cancellation_details.reason))\n        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:\n            print(\"CANCELED: ErrorDetails={}\".format(result.cancellation_details.error_details))\n\ntranslate_speech_to_text()",
"_____no_output_____"
]
],
[
[
"# Running the speech-to-multilingual-speech translation service",
"_____no_output_____"
]
],
[
[
"import azure.cognitiveservices.speech as speechsdk\n\nspeech_key, service_region = \"196f2f318dc744049eafb9cf89631e42\", \"southcentralus\"\n\ndef translate_speech_to_speech():\n\n    # Creates an instance of a speech translation config with specified subscription key and service region.\n    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion\n    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)\n\n    # Sets source and target languages.\n    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages\n    fromLanguage = 'en-US'\n    toLanguage = 'de'\n    translation_config.speech_recognition_language = fromLanguage\n    translation_config.add_target_language(toLanguage)\n\n    # Sets the synthesis output voice name.\n    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/tts-languages\n    translation_config.voice_name = \"de-DE-Hedda\"\n\n    # Creates a translation recognizer using the default microphone as input.\n    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)\n\n    # Prepare to handle the synthesized audio data.\n    def synthesis_callback(evt):\n        size = len(evt.result.audio)\n        print('AUDIO SYNTHESIZED: {} byte(s) {}'.format(size, '(COMPLETED)' if size == 0 else ''))\n\n    recognizer.synthesizing.connect(synthesis_callback)\n\n    # Starts translation, and returns after a single utterance is recognized. The end of a\n    # single utterance is determined by listening for silence at the end or until a maximum of 15\n    # seconds of audio is processed. It returns the recognized text as well as the translation.\n    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single\n    # shot recognition like command or query.\n    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.\n    print(\"Say something...\")\n    result = recognizer.recognize_once()\n\n    # Check the result\n    if result.reason == speechsdk.ResultReason.TranslatedSpeech:\n        print(\"RECOGNIZED '{}': {}\".format(fromLanguage, result.text))\n        print(\"TRANSLATED into {}: {}\".format(toLanguage, result.translations['de']))\n    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:\n        print(\"RECOGNIZED: {} (text could not be translated)\".format(result.text))\n    elif result.reason == speechsdk.ResultReason.NoMatch:\n        print(\"NOMATCH: Speech could not be recognized: {}\".format(result.no_match_details))\n    elif result.reason == speechsdk.ResultReason.Canceled:\n        print(\"CANCELED: Reason={}\".format(result.cancellation_details.reason))\n        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:\n            print(\"CANCELED: ErrorDetails={}\".format(result.cancellation_details.error_details))\n\ntranslate_speech_to_speech()",
"_____no_output_____"
]
],
[
[
"# Running the text language detection service",
"_____no_output_____"
]
],
[
[
"from azure.core.credentials import AzureKeyCredential\nfrom azure.ai.textanalytics import TextAnalyticsClient\n\n\nkey = \"bdb7d45b308f4851bd1b8cae9a1d3453\"\nendpoint = \"https://test0524.cognitiveservices.azure.com/\"\n\n\ntext_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\ndocuments = [\n \"This document is written in English.\",\n \"Este es un document escrito en Español.\",\n \"这是一个用中文写的文件\",\n \"Dies ist ein Dokument in deutsche Sprache.\",\n \"Detta är ett dokument skrivet på engelska.\"\n]\n\nresult = text_analytics_client.detect_language(documents)\n\nfor idx, doc in enumerate(result):\n if not doc.is_error:\n print(\"Document text: {}\".format(documents[idx]))\n print(\"Language detected: {}\".format(doc.primary_language.name))\n print(\"ISO6391 name: {}\".format(doc.primary_language.iso6391_name))\n print(\"Confidence score: {}\\n\".format(doc.primary_language.confidence_score))\n if doc.is_error:\n print(doc.id, doc.error)",
"_____no_output_____"
]
],
[
[
"# Running the key phrase extraction service",
"_____no_output_____"
]
],
[
[
"from azure.core.credentials import AzureKeyCredential\nfrom azure.ai.textanalytics import TextAnalyticsClient\n\nkey = \"bdb7d45b308f4851bd1b8cae9a1d3453\"\nendpoint = \"https://test0524.cognitiveservices.azure.com/\"\n\ntext_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\ndocuments = [\n \"Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.\",\n \"I need to take my cat to the veterinarian.\",\n \"I will travel to South America in the summer.\",\n]\n\nresult = text_analytics_client.extract_key_phrases(documents)\nfor doc in result:\n if not doc.is_error:\n print(doc.key_phrases)\n if doc.is_error:\n print(doc.id, doc.error)",
"_____no_output_____"
]
],
[
[
"# Running the entity recognition service",
"_____no_output_____"
],
[
"## Entity recognition",
"_____no_output_____"
]
],
[
[
"from azure.core.credentials import AzureKeyCredential\nfrom azure.ai.textanalytics import TextAnalyticsClient\n\nkey = \"bdb7d45b308f4851bd1b8cae9a1d3453\"\nendpoint = \"https://test0524.cognitiveservices.azure.com/\"\n\n\ntext_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\ndocuments = [\n \"Microsoft was founded by Bill Gates and Paul Allen.\",\n \"I had a wonderful trip to Seattle last week.\",\n \"I visited the Space Needle 2 times.\",\n]\n\nresult = text_analytics_client.recognize_entities(documents)\ndocs = [doc for doc in result if not doc.is_error]\n\nfor idx, doc in enumerate(docs):\n print(\"\\nDocument text: {}\".format(documents[idx]))\n for entity in doc.entities:\n print(\"Entity: \\t\", entity.text, \"\\tCategory: \\t\", entity.category,\n \"\\tConfidence Score: \\t\", entity.confidence_score)",
"_____no_output_____"
]
],
[
[
"## Entity linking",
"_____no_output_____"
]
],
[
[
"from azure.core.credentials import AzureKeyCredential\nfrom azure.ai.textanalytics import TextAnalyticsClient\n\nkey = \"bdb7d45b308f4851bd1b8cae9a1d3453\"\nendpoint = \"https://test0524.cognitiveservices.azure.com/\"\n\ntext_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\ndocuments = [\n \"Microsoft moved its headquarters to Bellevue, Washington in January 1979.\",\n \"Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.\",\n \"Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.\",\n]\n\nresult = text_analytics_client.recognize_linked_entities(documents)\ndocs = [doc for doc in result if not doc.is_error]\n\nfor idx, doc in enumerate(docs):\n print(\"Document text: {}\\n\".format(documents[idx]))\n for entity in doc.entities:\n print(\"Entity: {}\".format(entity.name))\n print(\"Url: {}\".format(entity.url))\n print(\"Data Source: {}\".format(entity.data_source))\n for match in entity.matches:\n print(\"Confidence Score: {}\".format(match.confidence_score))\n print(\"Entity as appears in request: {}\".format(match.text))\n print(\"------------------------------------------\")",
"_____no_output_____"
]
],
[
[
"# Running the text translation service",
"_____no_output_____"
]
],
[
[
"# -*- coding: utf-8 -*-\nimport os, requests, uuid, json\n\nsubscription_key = 'ab93f8c61e174973818ac06706a5a5d5' # your key\nendpoint = 'https://api.cognitive.microsofttranslator.com/'\n\n# key_var_name = 'TRANSLATOR_TEXT_SUBSCRIPTION_KEY'\n# if not key_var_name in os.environ:\n# raise Exception('Please set/export the environment variable: {}'.format(key_var_name))\n# subscription_key = os.environ[key_var_name]\n\n# endpoint_var_name = 'TRANSLATOR_TEXT_ENDPOINT'\n# if not endpoint_var_name in os.environ:\n# raise Exception('Please set/export the environment variable: {}'.format(endpoint_var_name))\n# endpoint = os.environ[endpoint_var_name]\n\npath = '/translate?api-version=3.0'\n\n# Output language setting\nparams = '&to=de&to=it'\nconstructed_url = endpoint + path + params\n\nheaders = {\n 'Ocp-Apim-Subscription-Key': subscription_key,\n 'Content-type': 'application/json',\n 'X-ClientTraceId': str(uuid.uuid4())\n}\n\nbody = [{\n 'text': 'Hello World!'\n}]\n\nrequest = requests.post(constructed_url, headers=headers, json=body)\nresponse = request.json()\n\nprint(json.dumps(response, sort_keys=True, indent=4,\n ensure_ascii=False, separators=(',', ': ')))\n\n",
"_____no_output_____"
]
],
[
[
"# Running the LUIS intent recognition service",
"_____no_output_____"
],
[
"## REST API",
"_____no_output_____"
]
],
[
[
"import requests\n\ntry:\n\n key = '8286e59fe6f54ab9826222300bbdcb11' # your Runtime key\n endpoint = 'westus.api.cognitive.microsoft.com' # such as 'your-resource-name.api.cognitive.microsoft.com'\n appId = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'\n utterance = 'turn on all lights'\n\n headers = {\n }\n\n params ={\n 'query': utterance,\n 'timezoneOffset': '0',\n 'verbose': 'true',\n 'show-all-intents': 'true',\n 'spellCheck': 'false',\n 'staging': 'false',\n 'subscription-key': key\n }\n\n r = requests.get(f'https://{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/production/predict',headers=headers, params=params)\n print(r.json())\n\nexcept Exception as e:\n print(f'{e}')",
"_____no_output_____"
]
],
[
[
"## SDK",
"_____no_output_____"
]
],
[
[
"from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient\nfrom msrest.authentication import CognitiveServicesCredentials\n\nimport datetime, json, os, time\n\n# Use public app ID or replace with your own trained and published app's ID\n# to query your own app\n# public appID = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'\nluisAppID = 'dcb2cb33-dee6-46c1-a3a6-28e266d159e0'\nruntime_key = '8286e59fe6f54ab9826222300bbdcb11'\nruntime_endpoint = 'https://westus.api.cognitive.microsoft.com/'\n\n# production or staging\nluisSlotName = 'production'\n\n# Instantiate a LUIS runtime client\nclientRuntime = LUISRuntimeClient(runtime_endpoint, CognitiveServicesCredentials(runtime_key))\n\ndef predict(app_id, slot_name):\n\n request = { \"query\" : \"hi, show me lovely baby pictures\" }\n\n # Note be sure to specify, using the slot_name parameter, whether your application is in staging or \\\n # production.\n response = clientRuntime.prediction.get_slot_prediction(app_id=app_id, slot_name=slot_name, \\\n prediction_request=request)\n\n print(\"Top intent: {}\".format(response.prediction.top_intent))\n print(\"Sentiment: {}\".format (response.prediction.sentiment))\n print(\"Intents: \")\n\n for intent in response.prediction.intents:\n print(\"\\t{}\".format (json.dumps (intent)))\n print(\"Entities: {}\".format (response.prediction.entities))\n \npredict(luisAppID, luisSlotName)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5b1afa6dec3e7402f3760d59df7e7ae599564e
| 9,762 |
ipynb
|
Jupyter Notebook
|
K3D_Animations/CartPole-v0-K3D.ipynb
|
K3D-tools/experiments
|
e1a92a8ff4a16c80ece6fe2f13ccf20af41d16b0
|
[
"MIT"
] | 3 |
2019-02-09T02:58:30.000Z
|
2020-02-16T12:23:44.000Z
|
K3D_Animations/CartPole-v0-K3D.ipynb
|
K3D-tools/experiments
|
e1a92a8ff4a16c80ece6fe2f13ccf20af41d16b0
|
[
"MIT"
] | null | null | null |
K3D_Animations/CartPole-v0-K3D.ipynb
|
K3D-tools/experiments
|
e1a92a8ff4a16c80ece6fe2f13ccf20af41d16b0
|
[
"MIT"
] | 3 |
2018-09-14T10:55:16.000Z
|
2021-09-13T04:29:53.000Z
| 29.492447 | 135 | 0.497132 |
[
[
[
"import numpy as np\nimport gym\nimport k3d\nfrom ratelimiter import RateLimiter\nfrom k3d.platonic import Cube\nfrom time import time\n\nrate_limiter = RateLimiter(max_calls=4, period=1)\n\nenv = gym.make('CartPole-v0')\nobservation = env.reset()\n\nplot = k3d.plot(grid_auto_fit=False, camera_auto_fit=False, grid=(-1,-1,-1,1,1,1))\n\njoint_positions = np.array([observation[0], 0, 0], dtype=np.float32)\npole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)\n\ncart = Cube(origin=joint_positions, size=0.1).mesh\ncart.scaling = [1, 0.5, 1]\n\njoint = k3d.points(np.mean(cart.vertices[[0,2,4,6]], axis=0), point_size=0.03, color=0xff00, shader='mesh')\npole = k3d.line(vertices=np.array([joint.positions, pole_positions]), shader='mesh', color=0xff0000)\nbox = cart.vertices\nmass = k3d.points(pole_positions, point_size=0.03, color=0xff0000, shader='mesh')\n\nplot += pole + cart + joint + mass\n\nplot.display()",
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n"
],
[
"for i_episode in range(20):\n observation = env.reset()\n for t in range(100):\n with rate_limiter:\n joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)\n pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)\n\n cart.vertices = box + joint_positions\n joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)\n pole.vertices = [joint.positions, pole_positions]\n mass.positions = pole_positions\n \n action = env.action_space.sample()\n observation, reward, done, info = env.step(action)\n if done:\n break",
"_____no_output_____"
],
[
"plot.display()",
"_____no_output_____"
],
[
"for i_episode in range(20):\n observation = env.reset()\n for t in range(100):\n \n joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)\n pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)\n \n with rate_limiter:\n cart.vertices = box + joint_positions\n joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)\n pole.vertices = [joint.positions, pole_positions]\n mass.positions = pole_positions\n\n action = env.action_space.sample()\n observation, reward, done, info = env.step(action)\n \n if done:\n break",
"_____no_output_____"
],
[
"max_calls, period = 3, 1\ncall_time = period/max_calls\n\nfor i_episode in range(20):\n observation = env.reset()\n for t in range(100):\n \n joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)\n pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)\n time_stamp2 = time()\n \n if t>0:\n d = time_stamp2 - time_stamp1\n if d < call_time:\n cart.vertices = box + joint_positions\n joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)\n pole.vertices = [joint.positions, pole_positions]\n mass.positions = pole_positions\n \n if t==0:\n cart.vertices = box + joint_positions\n joint.positions = np.mean(cart.vertices[[0,2,4,6]], axis=0)\n pole.vertices = [joint.positions, pole_positions]\n mass.positions = pole_positions\n \n time_stamp1 = time()\n action = env.action_space.sample()\n observation, reward, done, info = env.step(action)\n \n if done:\n break",
"_____no_output_____"
],
[
"max_calls, period = 3, 1\ncall_time = period/max_calls\ni = 1\nall_it_time = 0\ncache = []\niterator = []\n\n\nfor i_episode in range(20):\n cache.append([])\n observation = env.reset()\n for t in range(100):\n ts1 = time()\n joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)\n pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)\n\n # [cart.vertices, joint.positions, pole.vertices, mass.positions]\n cache[i_episode].append([box + joint_positions, np.mean((box + joint_positions)[[0,2,4,6]], axis=0),\n [np.mean((box + joint_positions)[[0,2,4,6]], axis=0), pole_positions],\n pole_positions])\n \n if all_it_time > call_time*i:\n i += 1\n iterator = iter(iterator)\n element = next(iterator)\n cart.vertices = element[0]\n joint.positions = element[1]\n pole.vertices = element[2]\n mass.positions = element[3]\n\n action = env.action_space.sample()\n observation, reward, done, info = env.step(action)\n ts2 = time()\n\n it_time = ts2 - ts1\n all_it_time += it_time\n\n if done:\n break\n\n temp_list = []\n to_pull = t//max_calls\n if max_calls > t:\n to_pull = 1\n\n for j in range(max_calls):\n temp_list.append(cache[i_episode][to_pull*i])\n\n iterator = list(iterator) + temp_list\n\ndel cache\nfor element in iterator:\n with RateLimiter(max_calls=max_calls):\n\n i += 1\n iterator = iter(iterator)\n element = next(iterator)\n cart.vertices = element[0]\n joint.positions = element[1]\n pole.vertices = element[2]\n mass.positions = element[3]",
"_____no_output_____"
],
[
"plot.display()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5b1ea2fcc6392569df18daea2a5249fb4badf3
| 6,938 |
ipynb
|
Jupyter Notebook
|
Python/5 Loops and List Comprehensions/exercise-loops-and-list-comprehensions.ipynb
|
mattborghi/kaggle-courses
|
b56b9e67210a409e5a0d633a7a0a8fbcf090c10f
|
[
"MIT"
] | null | null | null |
Python/5 Loops and List Comprehensions/exercise-loops-and-list-comprehensions.ipynb
|
mattborghi/kaggle-courses
|
b56b9e67210a409e5a0d633a7a0a8fbcf090c10f
|
[
"MIT"
] | null | null | null |
Python/5 Loops and List Comprehensions/exercise-loops-and-list-comprehensions.ipynb
|
mattborghi/kaggle-courses
|
b56b9e67210a409e5a0d633a7a0a8fbcf090c10f
|
[
"MIT"
] | null | null | null | 6,938 | 6,938 | 0.699481 |
[
[
[
"**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/loops-and-list-comprehensions).**\n\n---\n",
"_____no_output_____"
],
[
"# Try It Yourself\n\nWith all you've learned, you can start writing much more interesting programs. See if you can solve the problems below.\n\nAs always, run the setup code below before working on the questions.",
"_____no_output_____"
]
],
[
[
"from learntools.core import binder; binder.bind(globals())\nfrom learntools.python.ex5 import *\nprint('Setup complete.')",
"_____no_output_____"
]
],
[
[
"# Exercises",
"_____no_output_____"
],
[
"## 1.\n\nHave you ever felt debugging involved a bit of luck? The following program has a bug. Try to identify the bug and fix it.",
"_____no_output_____"
]
],
[
[
"def has_lucky_number(nums):\n \"\"\"Return whether the given list of numbers is lucky. A lucky list contains\n at least one number divisible by 7.\n \"\"\"\n for num in nums:\n if num % 7 == 0:\n return True\n else:\n return False",
"_____no_output_____"
]
],
[
[
"Try to identify the bug and fix it in the cell below:",
"_____no_output_____"
]
],
[
[
"def has_lucky_number(nums):\n \"\"\"Return whether the given list of numbers is lucky. A lucky list contains\n at least one number divisible by 7.\n \"\"\"\n for num in nums:\n if num % 7 == 0:\n return True\n return False\n\n# Check your answer\nq1.check()",
"_____no_output_____"
],
[
"#q1.hint()\n#q1.solution()",
"_____no_output_____"
]
],
[
[
"## 2.\n\n### a.\nLook at the Python expression below. What do you think we'll get when we run it? When you've made your prediction, uncomment the code and run the cell to see if you were right.",
"_____no_output_____"
]
],
[
[
"[1, 2, 3, 4] > 2",
"_____no_output_____"
]
],
[
[
"### b.\nR and Python have some libraries (like numpy and pandas) that compare each element of the list to 2 (i.e. do an 'element-wise' comparison) and give us a list of booleans like `[False, False, True, True]`. \n\nImplement a function that reproduces this behaviour, returning a list of booleans corresponding to whether the corresponding element is greater than n.\n",
"_____no_output_____"
]
],
[
[
"def elementwise_greater_than(L, thresh):\n \"\"\"Return a list with the same length as L, where the value at index i is \n True if L[i] is greater than thresh, and False otherwise.\n \n >>> elementwise_greater_than([1, 2, 3, 4], 2)\n [False, False, True, True]\n \"\"\"\n return [l > thresh for l in L] \n\n# Check your answer\nq2.check()",
"_____no_output_____"
],
[
"#q2.solution()",
"_____no_output_____"
]
],
[
[
"## 3.\n\nComplete the body of the function below according to its docstring.",
"_____no_output_____"
]
],
[
[
"def menu_is_boring(meals):\n \"\"\"Given a list of meals served over some period of time, return True if the\n same meal has ever been served two days in a row, and False otherwise.\n \"\"\"\n for i in range(len(meals)-1):\n if meals[i] == meals[i+1]:\n return True\n return False\n\n# Check your answer\nq3.check()",
"_____no_output_____"
],
[
"q3.hint()\nq3.solution()",
"_____no_output_____"
]
],
[
[
"## 4. <span title=\"A bit spicy\" style=\"color: darkgreen \">🌶️</span>\n\nNext to the Blackjack table, the Python Challenge Casino has a slot machine. You can get a result from the slot machine by calling `play_slot_machine()`. The number it returns is your winnings in dollars. Usually it returns 0. But sometimes you'll get lucky and get a big payday. Try running it below:",
"_____no_output_____"
]
],
[
[
"play_slot_machine()",
"_____no_output_____"
]
],
[
[
"By the way, did we mention that each play costs $1? Don't worry, we'll send you the bill later.\n\nOn average, how much money can you expect to gain (or lose) every time you play the machine? The casino keeps it a secret, but you can estimate the average value of each pull using a technique called the **Monte Carlo method**. To estimate the average outcome, we simulate the scenario many times, and return the average result.\n\nComplete the following function to calculate the average value per play of the slot machine.",
"_____no_output_____"
]
],
[
[
"def estimate_average_slot_payout(n_runs):\n \"\"\"Run the slot machine n_runs times and return the average net profit per run.\n Example calls (note that return value is nondeterministic!):\n >>> estimate_average_slot_payout(1)\n -1\n >>> estimate_average_slot_payout(1)\n 0.5\n \"\"\"\n return sum([play_slot_machine() - 1 for _ in range(n_runs)])/n_runs",
"_____no_output_____"
],
[
"estimate_average_slot_payout(10000000)",
"_____no_output_____"
]
],
[
[
"When you think you know the expected value per spin, run the code cell below to view the solution and get credit for answering the question.",
"_____no_output_____"
]
],
[
[
"# Check your answer (Run this code cell to receive credit!)\nq4.solution()",
"_____no_output_____"
]
],
[
[
"# Keep Going\n\nMany programmers report that dictionaries are their favorite data structure. You'll get to **[learn about them](https://www.kaggle.com/colinmorris/strings-and-dictionaries)** (as well as strings) in the next lesson.",
"_____no_output_____"
],
[
"---\n\n\n\n\n*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161283) to chat with other Learners.*",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a5b2353658b036f74b9285dc8eb92b7c3582a7b
| 43,243 |
ipynb
|
Jupyter Notebook
|
SVM test.ipynb
|
sahilgandhi94/predictive-lead-scoring
|
62d0dce4a7ef2fe17815870a71c61d56b39c44cf
|
[
"MIT"
] | null | null | null |
SVM test.ipynb
|
sahilgandhi94/predictive-lead-scoring
|
62d0dce4a7ef2fe17815870a71c61d56b39c44cf
|
[
"MIT"
] | null | null | null |
SVM test.ipynb
|
sahilgandhi94/predictive-lead-scoring
|
62d0dce4a7ef2fe17815870a71c61d56b39c44cf
|
[
"MIT"
] | 3 |
2018-06-19T11:36:38.000Z
|
2021-01-08T08:22:30.000Z
| 32.464715 | 290 | 0.44909 |
[
[
[
"import numpy as np\nimport pandas as pd\n\nfrom sklearn.svm import SVC\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import train_test_split\n\nDATA = 'dataset/loan_one_hot_encoded.csv'",
"_____no_output_____"
],
[
"drop_cols = ['loan_created', 'application_id',\n# 'firm_type_Proprietorship',\n 'average_business_inflow'\n ]\ndf = pd.read_csv(DATA)\nY = df['loan_created']\nog_X = df.drop(drop_cols, axis=1)",
"_____no_output_____"
]
],
[
[
"Things to-do:\n\n- [ ] The data is missing values; use Imputer to fill mean/median of the column\n - [ ] Create another column to denote whether the data was imputed or not; I've read that it seems to have better results\n- [ ] Set class_weights in SVM\n- [ ] Tune hyperparameters\n- [ ] Specific kernel? \n\nWhat to do about skewed data:\n- See as an anamoly detection problem?\n- class weights (for SVM)\n- Remove training data (less data anyway..:( )\n",
"_____no_output_____"
]
],
[
[
"imp = Imputer()\nimputed_X = imp.fit_transform(og_X)\n\n# X = imputed_X\nscl = StandardScaler()\nX = scl.fit_transform(imputed_X)",
"_____no_output_____"
],
[
"# pd.value_counts(og_X['firm_type_Proprietorship'])",
"_____no_output_____"
],
[
"X.shape",
"_____no_output_____"
],
[
"reverse_Y = Y.apply(lambda x: 0 if x == 1 else 1)",
"_____no_output_____"
],
[
"np.unique(Y, return_counts=True)",
"_____no_output_____"
]
],
[
[
"### ensembles",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import BaggingClassifier\nfrom sklearn.ensemble import AdaBoostClassifier",
"_____no_output_____"
],
[
"clf = SVC(**{'C': 10, 'class_weight': 'balanced', 'degree': 3, 'gamma': 'auto', 'kernel': 'sigmoid', 'probability': True})\nens_clf = BaggingClassifier(clf)\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)\nens_clf.fit(X_train, y_train)\nens_clf.predict(X_test)\n",
"_____no_output_____"
],
[
"clf = SVC(**{'C': 10, 'class_weight': 'balanced', 'degree': 3, 'gamma': 'auto', 'kernel': 'sigmoid', 'probability': True})\nens_clf = AdaBoostClassifier(clf)\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)\nens_clf.fit(X_train, y_train)\nens_clf.predict(X_test)",
"_____no_output_____"
]
],
[
[
"### anomaly detection",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import OneClassSVM\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import make_scorer",
"_____no_output_____"
],
[
"ad_clf = OneClassSVM(kernel=\"rbf\")\nscores = cross_val_score(ad_clf, X, [_ if _ == 1 else -1 for _ in Y], cv=k_fold, scoring=make_scorer(accuracy_score))\nprint(scores)\nprint(np.average(scores))",
"[0.51282051 0.43589744 0.71052632 0.39473684 0.55263158 0.34210526]\n0.4914529914529915\n"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)\nad_clf = OneClassSVM(nu=0.7)\nad_clf.fit(X_train, y_train)\ny_predict = ad_clf.predict(X_test)\nprint(accuracy_score([x if x == 1 else -1 for x in y_test], y_predict))\nprint(y_predict)",
"0.6578947368421053\n[ 1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1\n -1 -1 1 -1 -1 -1 -1 -1 1 1 -1 -1 -1 1 1 -1 1 -1 1 -1 -1 1 -1 -1\n -1 -1 -1 -1 1 -1 1 1 1 -1 1 -1 1 -1 -1 1 -1 -1 1 -1 -1 1 -1 -1\n 1 -1 -1 1]\n"
],
[
"from sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"param_grid = [\n {'nu': np.arange(.1, 1.0, 0.1), 'gamma': ['auto'], 'kernel': ['rbf']},\n ]\ngs_cv = GridSearchCV(OneClassSVM(), param_grid=param_grid, scoring=make_scorer(accuracy_score), cv=5, refit=True)\ngs_cv.fit(X, [_ if _ == 1 else -1 for _ in Y])",
"_____no_output_____"
],
[
"import pandas as pd\npd.DataFrame(gs_cv.cv_results_)",
"/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('mean_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split0_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split1_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split2_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split3_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split4_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n/Users/sahil/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('std_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n  warnings.warn(*warn_args, **warn_kwargs)\n"
],
[
"gs_cv.best_score_",
"_____no_output_____"
],
[
"gs_cv.best_params_",
"_____no_output_____"
]
],
[
[
"### normal svm and kernel selection",
"_____no_output_____"
]
],
[
[
"k_fold = 6",
"_____no_output_____"
],
[
"kernel = 'poly'\nprint('Kernel: ', kernel)\nclf = SVC(kernel=kernel, class_weight='balanced')\nnp.average(cross_val_score(clf, X, Y, cv=k_fold))",
"Kernel: poly\n"
],
[
"kernel = 'rbf'\nprint('Kernel: ', kernel)\nclf = SVC(kernel=kernel, class_weight='balanced')\nnp.average(cross_val_score(clf, X, Y, cv=k_fold))",
"Kernel: rbf\n"
],
[
"kernel = 'sigmoid'\nprint('Kernel: ', kernel)\nclf = SVC(kernel=kernel, class_weight='balanced')\nnp.average(cross_val_score(clf, X, Y, cv=k_fold))",
"Kernel: sigmoid\n"
],
[
"# kernel = 'precomputed'\n# print('Kernel: ', kernel)\n# clf = SVC(kernel=kernel, class_weight='balanced')\n# cross_val_score(clf, X, Y, cv=k_fold)",
"_____no_output_____"
],
[
"# n_samples / (n_classes * np.bincount(y))\nX.shape[0] / (2*np.bincount(Y))",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)\nclf = SVC(kernel='poly', class_weight='balanced', probability=True)\nclf.fit(X_train, y_train)\nprint(clf.predict(X_test))\nclf.predict_proba(X_test)[:,1]",
"[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0]\n"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)\nclf = SVC(kernel='poly', class_weight='balanced')\nclf.fit(X_train, y_train)\nclf.predict(X_test)\n# clf.score(X_test, y_test)",
"_____no_output_____"
],
[
"p = PCA(n_components=160)\np.fit(X)\n\npercents = np.round((p.explained_variance_ratio_), 3)\n_sum = 0\n# print(np.sum(percents))\nj = 0\nfor i in range(21):\n s = np.sum(percents[j: j+5])*100\n _sum += s\n print(j, s, _sum)\n j += 5\n",
"0 14.0 14.0\n5 9.4 23.4\n10 7.9 31.3\n15 6.3 37.6\n20 5.4 43.0\n25 4.8 47.8\n30 4.3 52.1\n35 4.0 56.1\n40 3.6 59.7\n45 3.5 63.2\n50 3.1 66.3\n55 3.0 69.3\n60 2.8 72.1\n65 2.5 74.6\n70 2.5 77.1\n75 2.5 79.6\n80 2.1 81.7\n85 2.0 83.7\n90 2.0 85.7\n95 2.0 87.7\n100 2.0 89.7\n"
],
[
"pca = PCA(n_components=60)\npca.fit(X)\nprint(np.round((pca.explained_variance_ratio_), 3))\n\n# pca.components_\n# ['PC-1','PC-2','PC-3','PC-4','PC-5','PC-6']\n\ncoef = pca.transform(np.eye(X.shape[1]))\nprint(np.linalg.norm(coef, axis=0))\n\n_p = pd.DataFrame(coef, columns=range(1, 61), index=og_X.columns)\n\nabs(_p).idxmax()\n\n# pd.value_counts(abs(_p).idxmax(axis=1))\n\n# clf = SVC(kernel='poly', class_weight='balanced')\n# X_train, X_test, y_train, y_test = train_test_split(pca.transform(X), Y, test_size=0.33, random_state=42)\n\n# clf.fit(X_train, y_train)\n\n# clf.predict(X_test), clf.score(X_test, y_test)\n\n\n# # cross_val_score(clf, pca.transform(X), Y, cv=k_fold)",
"[ 0.046 0.029 0.023 0.021 0.021 0.02 0.019 0.019 0.018 0.018\n 0.017 0.016 0.016 0.015 0.015 0.014 0.013 0.012 0.012 0.012\n 0.011 0.011 0.011 0.011 0.01 0.01 0.01 0.01 0.009 0.009\n 0.009 0.009 0.009 0.008 0.008 0.008 0.008 0.008 0.008 0.008\n 0.008 0.007 0.007 0.007 0.007 0.007 0.007 0.007 0.007 0.007\n 0.007 0.006 0.006 0.006 0.006 0.006 0.006 0.006 0.006 0.006]\n[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n 1. 1. 1. 1. 1. 1.]\n"
],
[
"pca",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5b23adb5231ff7323f73952b13ba30f870dac4
| 35,048 |
ipynb
|
Jupyter Notebook
|
module1-statistics-probability-and-inference/LS_DS_141_Statistics_Probability_Assignment.ipynb
|
dwightchurchill/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
|
521be02e2e8baed9fc44593f5cb5a095946cb82d
|
[
"MIT"
] | null | null | null |
module1-statistics-probability-and-inference/LS_DS_141_Statistics_Probability_Assignment.ipynb
|
dwightchurchill/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
|
521be02e2e8baed9fc44593f5cb5a095946cb82d
|
[
"MIT"
] | null | null | null |
module1-statistics-probability-and-inference/LS_DS_141_Statistics_Probability_Assignment.ipynb
|
dwightchurchill/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments
|
521be02e2e8baed9fc44593f5cb5a095946cb82d
|
[
"MIT"
] | null | null | null | 32.183655 | 455 | 0.336082 |
[
[
[
"<img align=\"left\" src=\"https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png\" width=200>\n<br></br>\n<br></br>\n\n## *Data Science Unit 1 Sprint 3 Assignment 1*\n\n# Apply the t-test to real data\n\nYour assignment is to determine which issues have \"statistically significant\" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!\n\nYour goals:\n\n1. Load and clean the data (or determine the best method to drop observations when running tests)\n2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01\n3. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.01\n4. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)\n\nNote that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.\n\nStretch goals:\n\n1. Refactor your code into functions so it's easy to rerun with arbitrary variables\n2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom matplotlib import style\nfrom scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel",
"_____no_output_____"
],
[
"df = pd.read_csv('house-votes-84.data',header=None, na_values='?')\ndf.head()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df = df[df != '?']",
"_____no_output_____"
],
[
"df.isna().sum()",
"_____no_output_____"
],
[
"df.dropna(inplace=True)",
"_____no_output_____"
],
[
"df.isna().sum()",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.columns.tolist()",
"_____no_output_____"
],
[
"df = df.rename(columns={\n 0: 'Party',\n 1: 'handicapped-infants',\n 2: 'water-project-cost-sharing',\n 3: 'adoption-of-the-budget-resolution',\n 4: 'physician-fee-freeze',\n 5: 'el-salvador-aid',\n 6: 'religious-groups-in-schools',\n 7: 'anti-satellite-test-ban',\n 8: 'aid-to-nicaraguan-contras',\n 9: 'mx-missile',\n 10: 'immigration',\n 11: 'synfuels-corporation-cutback',\n 12: 'education-spending',\n 13: 'superfund-right-to-sue',\n 14: 'crime',\n 15: 'duty-free-exports',\n 16: 'export-administration-act-south-africa',\n})\ndf.head()",
"_____no_output_____"
],
[
"df = df.replace(['y','n'], [1,0])\ndf.head()",
"_____no_output_____"
],
[
"democrats = df[df['Party']=='democrat']\nrepublicans = df[df['Party']=='republican']",
"_____no_output_____"
],
[
"republicans['handicapped-infants'].describe()",
"_____no_output_____"
],
[
"stat, pvalue = ttest_ind(democrats['handicapped-infants'], republicans['handicapped-infants'])\nprint('{}, {}'.format(stat, pvalue))\n#2.0722024876891192e-09",
"6.240907554031057, 2.0722024876891192e-09\n"
],
[
"def ttest(title, sample1, sample2, alpha):\n stat, pvalue = ttest_ind(sample1, sample2)\n title = title.replace('-',' ').title()\n result = {'title':title,'stat':stat, 'pvalue':pvalue,'alpha':alpha}\n return result\n\ncolumns = df.columns.tolist()\ncolumns = columns[1:]\n\nfor col in columns: \n result = ttest(col,democrats[col],republicans[col],0.01)\n if result['pvalue'] < result['alpha']: \n if republicans[col].mean() > democrats[col].mean():\n print('The Republicans support the {} issue more than the Democrats'.format(result['title']))\n else: \n print('The Democrats support the {} issue more than the Republican'.format(result['title']))\n else: \n print('The difference between parties on the {} issue is not statistically significant.'.format(result['title']))\n",
"The Democrats support the Handicapped Infants issue more than the Republican\nThe difference between parties on the Water Project Cost Sharing issue is not statistically significant.\nThe Democrats support the Adoption Of The Budget Resolution issue more than the Republican\nThe Republicans support the Physician Fee Freeze issue more than the Democrats\nThe Republicans support the El Salvador Aid issue more than the Democrats\nThe Republicans support the Religious Groups In Schools issue more than the Democrats\nThe Democrats support the Anti Satellite Test Ban issue more than the Republican\nThe Democrats support the Aid To Nicaraguan Contras issue more than the Republican\nThe Democrats support the Mx Missile issue more than the Republican\nThe difference between parties on the Immigration issue is not statistically significant.\nThe Democrats support the Synfuels Corporation Cutback issue more than the Republican\nThe Republicans support the Education Spending issue more than the Democrats\nThe Republicans support the Superfund Right To Sue issue more than the Democrats\nThe Republicans support the Crime issue more than the Democrats\nThe Democrats support the Duty Free Exports issue more than the Republican\nThe Democrats support the Export Administration Act South Africa issue more than the Republican\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5b26ce0566c097a6343fd2e23f79efb151913e
| 2,132 |
ipynb
|
Jupyter Notebook
|
coursera/ml_yandex/course2/course2week5/Bayes.ipynb
|
VadimKirilchuk/education
|
ebddb2fb971ff1f3991e71fcb17ce83b95c4a397
|
[
"Apache-2.0"
] | null | null | null |
coursera/ml_yandex/course2/course2week5/Bayes.ipynb
|
VadimKirilchuk/education
|
ebddb2fb971ff1f3991e71fcb17ce83b95c4a397
|
[
"Apache-2.0"
] | null | null | null |
coursera/ml_yandex/course2/course2week5/Bayes.ipynb
|
VadimKirilchuk/education
|
ebddb2fb971ff1f3991e71fcb17ce83b95c4a397
|
[
"Apache-2.0"
] | null | null | null | 27.333333 | 107 | 0.582083 |
[
[
[
"from sklearn import datasets, model_selection, cross_validation\nfrom sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB\n\ndigits = datasets.load_digits()\ncancer = datasets.load_breast_cancer()\n\n#print(digits.DESCR)\n#print(cancer.DESCR)\n\ndigits_X = digits.data\ndigits_y = digits.target\n\ncancer_X = cancer.data\ncancer_y = cancer.target\n\nfor descr, X, y in [('digits', digits_X, digits_y), ('cancer', cancer_X, cancer_y)]:\n print('\\n')\n for model in [BernoulliNB(), GaussianNB(), MultinomialNB()]:\n val = cross_validation.cross_val_score(model, X, y).mean()\n print(descr, model, val)",
"\n\ndigits BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True) 0.8258236507780582\ndigits GaussianNB(priors=None) 0.8186003803550138\ndigits MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True) 0.8708771489735053\n\n\ncancer BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True) 0.6274204028589994\ncancer GaussianNB(priors=None) 0.9367492806089297\ncancer MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True) 0.8945790401930752\n"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a5b41008f08f0d212337d7353faca14b4230b58
| 48,321 |
ipynb
|
Jupyter Notebook
|
HeroesOfPymoli/HeroesOfPymoli_starter.ipynb
|
bandipara/Pandas-Challenge
|
3dde1c00b7913d36f339fde69f2e6d41f685d8a1
|
[
"ADSL"
] | null | null | null |
HeroesOfPymoli/HeroesOfPymoli_starter.ipynb
|
bandipara/Pandas-Challenge
|
3dde1c00b7913d36f339fde69f2e6d41f685d8a1
|
[
"ADSL"
] | null | null | null |
HeroesOfPymoli/HeroesOfPymoli_starter.ipynb
|
bandipara/Pandas-Challenge
|
3dde1c00b7913d36f339fde69f2e6d41f685d8a1
|
[
"ADSL"
] | null | null | null | 31.479479 | 156 | 0.372716 |
[
[
[
"### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport pandas as pd\n\n# File to Load\npurchanse_file = \"Resources/purchase_data.csv\"\n\n# Read Purchasing File and store into Pandas data frame\npurchase_data = pd.read_csv(purchanse_file)\npurchase_data",
"_____no_output_____"
]
],
[
[
"## Player Count",
"_____no_output_____"
],
[
"* Display the total number of players\n",
"_____no_output_____"
]
],
[
[
"total_players = purchase_data['SN'].nunique()\ntotal_players_df = pd.DataFrame({\"total_players\":[total_players]})\ntotal_players_df",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Total)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain number of unique items, average price, etc.\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame\n",
"_____no_output_____"
]
],
[
[
"no_uniq_itms = purchase_data['Item Name'].nunique()\navg_price = purchase_data['Price'].mean()\ntotal_purchase = purchase_data['Item ID'].count()\ntotal_revenue = purchase_data['Price'].sum()\n\nanalysis_pur = [{'no_uniq_itms': no_uniq_itms, 'avg_price': avg_price, 'total_purchase':total_purchase, 'total_revenue':total_revenue}]\npur_anlys_df = pd.DataFrame(analysis_pur) \n\npur_anlys_df",
"_____no_output_____"
]
],
[
[
"## Gender Demographics",
"_____no_output_____"
],
[
"* Percentage and Count of Male Players\n\n\n* Percentage and Count of Female Players\n\n\n* Percentage and Count of Other / Non-Disclosed\n\n\n",
"_____no_output_____"
]
],
[
[
"gender_count = purchase_data.groupby('Gender')['SN'].nunique()\n#pct_gender_count = gender_count/576\n\npct_gen_players = gender_count/total_players*100\ngen_demo_df = pd.DataFrame({'pct_gen_players':pct_gen_players, 'gender_count': gender_count}) \n \n# Create DataFrame \n\ngen_demo_df.index.name = None\n# Print the output. \n\ngen_demo_df",
"_____no_output_____"
]
],
[
[
"\n## Purchasing Analysis (Gender)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender\n\n\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"gen_purchase_count = purchase_data.groupby('Gender')['Purchase ID'].nunique()\ngen_purchase_avg = purchase_data.groupby('Gender')['Price'].mean()\ngen_purchase_total = purchase_data.groupby('Gender')['Price'].sum()\ngen_avg_per_person = gen_purchase_total/gender_count\n\ngen_pur_analysis_df = pd.DataFrame({'gen_purchase_count':gen_purchase_count, 'gen_purchase_avg': gen_purchase_avg,\n 'gen_purchase_total':gen_purchase_total,'gen_avg_per_person':gen_avg_per_person }) \n \n# Create DataFrame \n\ngen_pur_analysis_df.index.name = None\n\ngen_pur_analysis_df",
"_____no_output_____"
]
],
[
[
"## Age Demographics",
"_____no_output_____"
],
[
"* Establish bins for ages\n\n\n* Categorize the existing players using the age bins. Hint: use pd.cut()\n\n\n* Calculate the numbers and percentages by age group\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: round the percentage column to two decimal points\n\n\n* Display Age Demographics Table\n",
"_____no_output_____"
]
],
[
[
"age_bins = [0, 9, 14, 19,24, 29, 34, 39, 100]\nage_group =['<10', '10-14', '15-19', '20-24','25-29','30-34','35-39','40+'] \npurchase_data[\"age_groups\"] = pd.cut(purchase_data[\"Age\"], age_bins, labels=age_group)\n\npurchase_data",
"_____no_output_____"
],
[
"age_grp_total = purchase_data.groupby('age_groups')['SN'].nunique()\npct_age_grp = (age_grp_total/total_players) *100\n\nage_demo_summary_df = pd.DataFrame({'age_grp_total':age_grp_total,'pct_age_grp':pct_age_grp })\nage_demo_summary_df.index.name=None\nage_demo_summary_df",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Age)",
"_____no_output_____"
],
[
"* Bin the purchase_data data frame by age\n\n\n* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"age_pur_total = purchase_data.groupby('age_groups')['Purchase ID'].nunique()\nage_grp_pp_sum = purchase_data.groupby('age_groups')['Price'].sum()\nage_grp_avg_pp = age_grp_pp_sum/age_pur_total \nage_grp_avg_indiv_pur = age_grp_pp_sum/age_grp_total\n\nanalysis_pur_age_df = pd.DataFrame({'age_pur_total': age_pur_total, 'age_grp_pp_sum': age_grp_pp_sum,\n 'age_grp_avg_pp':age_grp_avg_pp, 'age_grp_avg_indiv_pur':age_grp_avg_indiv_pur})\n\nanalysis_pur_age_df.index.name = None\nanalysis_pur_age_df",
"_____no_output_____"
]
],
[
[
"## Top Spenders",
"_____no_output_____"
],
[
"* Run basic calculations to obtain the results in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the total purchase value column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"spenders_count = purchase_data.groupby('SN')\npurchase_count = spenders_count['Purchase ID'].count()\navg_pur_price = spenders_count['Price'].mean()\ntotal_purchase_value = spenders_count['Price'].sum()\n\ntop_spenders_df = pd.DataFrame({'purchase_count':purchase_count,'avg_pur_price':avg_pur_price,\n 'total_purchase_value':total_purchase_value})\ntop_spenders_df_formatted = top_spenders_df.sort_values(['total_purchase_value'], ascending=False).head()\n\ntop_spenders_df_formatted",
"_____no_output_____"
]
],
[
[
"## Most Popular Items",
"_____no_output_____"
],
[
"* Retrieve the Item ID, Item Name, and Item Price columns\n\n\n* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the purchase count column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"item_count = purchase_data.groupby(['Item ID','Item Name'])\nitem_purchase_count = item_count['Purchase ID'].count()\nitem_total_purchase_value = item_count['Price'].sum()\nitem_price = item_total_purchase_value/item_purchase_count\nmost_popular_df = pd.DataFrame({'item_purchase_count':item_purchase_count,'item_price':item_price,\n 'item_total_purchase_value':item_total_purchase_value})\nmost_popular_df_formatted = most_popular_df.sort_values(['item_purchase_count'], ascending=False).head()\n\nmost_popular_df_formatted",
"_____no_output_____"
]
],
[
[
"## Most Profitable Items",
"_____no_output_____"
],
[
"* Sort the above table by total purchase value in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the data frame\n\n",
"_____no_output_____"
]
],
[
[
"most_profitable_df = most_popular_df_formatted.sort_values(['item_total_purchase_value'], \n ascending = False)\nmost_profitable_df",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a5b4f4110f145ec1c91d6af4a41f1bc769532c5
| 28,323 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/Lanedetection-checkpoint.ipynb
|
apreddyy/ADLanedetectionPython
|
fb444b4680fc262a11050b837501bf42de9609d8
|
[
"MIT"
] | 1 |
2018-08-16T16:13:16.000Z
|
2018-08-16T16:13:16.000Z
|
.ipynb_checkpoints/Lanedetection-checkpoint.ipynb
|
apreddyy/ADLanedetectionPython
|
fb444b4680fc262a11050b837501bf42de9609d8
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/Lanedetection-checkpoint.ipynb
|
apreddyy/ADLanedetectionPython
|
fb444b4680fc262a11050b837501bf42de9609d8
|
[
"MIT"
] | null | null | null | 40.929191 | 134 | 0.562229 |
[
[
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport pickle\nimport numpy as np\nimport cv2\nfrom moviepy.editor import VideoFileClip\nimport math\nimport glob",
"_____no_output_____"
],
[
"class Left_Right:\n last_L_points = []\n last_R_points = []\n \n def __init__(self, last_L_points, last_R_points):\n self.last_L_points = last_L_points\n self.last_R_points = last_R_points",
"_____no_output_____"
],
[
"calib_image = mpimg.imread(r'C:\\Users\\pramo\\Documents\\Project4\\camera_cal\\calibration1.jpg')\nplt.imshow(calib_image)",
"_____no_output_____"
],
[
"objp = np.zeros((6*9,3), np.float32)\nobjp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n# Arrays to store object points and image points from all the images.\nobjpoints = [] # 3d points in real world space\nimgpoints = [] # 2d points in image plane.\n# Make a list of calibration images\nimages = glob.glob(r'C:\\Users\\pramo\\Documents\\Project4\\camera_cal\\calibration*.jpg')\nshow_images = []\n# Step through the list and search for chessboard corners\nfor fname in images:\n img = cv2.imread(fname)\n gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n # Find the chessboard corners\n ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp)\n imgpoints.append(corners)\n # Draw and display the corners\n img = cv2.drawChessboardCorners(img, (9,6), corners, ret) \n show_images.append (img)\n \nret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)\n\ndist_pickle = {}\ndist_pickle[\"mtx\"] = mtx\ndist_pickle[\"dist\"] = dist\npickle.dump( dist_pickle, open( \"wide_dist_pickle.p\", \"wb\" ) )\n\n \nfig=plt.figure(figsize=(20, 20))\ncolumns = 2\nrows = 10\nfor i in range(len(show_images)):\n j= i+1\n img = show_images[i].squeeze()\n fig.add_subplot(rows, columns, j)\n plt.imshow(img, cmap=\"gray\")\nplt.show()",
"_____no_output_____"
],
[
"dist_pickle = pickle.load( open( \"wide_dist_pickle.p\", \"rb\" ) )\nmtx = dist_pickle[\"mtx\"]\ndist = dist_pickle[\"dist\"]",
"_____no_output_____"
],
[
"def cal_undistort(img, mtx, dist):\n return cv2.undistort(img, mtx, dist, None, mtx)\n\ndef gray_image(img):\n thresh = (200, 220)\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n binary = np.zeros_like(gray)\n binary[(gray > thresh[0]) & (gray <= thresh[1])] = 1\n return binary\n\ndef abs_sobel_img(img, orient='x', thresh_min=0, thresh_max=255):\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n if orient == 'x':\n abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 1, 0))\n if orient == 'y':\n abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 0, 1))\n scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))\n abs_sobel_output = np.zeros_like(scaled_sobel)\n abs_sobel_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1\n return abs_sobel_output\n\ndef hls_select(img, thresh_min=0, thresh_max=255):\n hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)\n s_channel = hls[:,:,2]\n binary_output = np.zeros_like(s_channel)\n binary_output[(s_channel > thresh_min) & (s_channel <= thresh_max)] = 1\n return binary_output\n\n#hls_binary = hls_select(image, thresh=(90, 255))\ndef wrap_transform(img, inverse ='TRUE'):\n img_size = (img.shape[1], img.shape[0])\n src = np.float32(\n [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],\n [((img_size[0] / 6) - 10), img_size[1]],\n [(img_size[0] * 5 / 6) + 60, img_size[1]],\n [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])\n dst = np.float32(\n [[(img_size[0] / 4), 0],\n [(img_size[0] / 4), img_size[1]],\n [(img_size[0] * 3 / 4), img_size[1]],\n [(img_size[0] * 3 / 4), 0]])\n \n if inverse == 'FALSE':\n M = cv2.getPerspectiveTransform(src, dst)\n if inverse == 'TRUE':\n M = cv2.getPerspectiveTransform(dst, src)\n \n return cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)",
"_____no_output_____"
],
[
"def combined_image(img): \n undisort_image = cal_undistort(img, mtx, dist)\n W_image = wrap_transform(undisort_image, inverse ='FALSE')\n grayimage = gray_image(W_image ) \n sobelx = abs_sobel_img(W_image,'x', 20, 100)\n s_binary = hls_select(W_image, 150, 255)\n color_binary = np.dstack(( np.zeros_like(sobelx), sobelx, s_binary)) * 255\n combined_binary = np.zeros_like(sobelx)\n combined_binary[(s_binary == 1) | (sobelx == 1) | (grayimage == 1)] = 1\n return undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image",
"_____no_output_____"
],
[
"img = cv2.imread(r'C:\\Users\\pramo\\Documents\\Project4\\camera_cal\\calibration2.jpg')\nundisort_image = cal_undistort(img, mtx, dist)\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))\nf.tight_layout()\nax1.imshow(img)\nax1.set_title('Original Image', fontsize=10)\nax2.imshow(undisort_image, cmap=\"gray\")\nax2.set_title('undisort_image', fontsize=10)\nplt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)",
"_____no_output_____"
],
[
"img = cv2.imread(r'C:\\Users\\pramo\\Documents\\Project4\\test_images\\test6.jpg')\nundisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))\nf.tight_layout()\nax1.imshow(img)\nax1.set_title('Original Image', fontsize=10)\nax2.imshow(undisort_image, cmap=\"gray\")\nax2.set_title('undisort_image', fontsize=10)\nplt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))\nf.tight_layout()\nax1.imshow(sobelx)\nax1.set_title('sobelx', fontsize=10)\nax2.imshow(s_binary, cmap=\"gray\")\nax2.set_title('s_binary', fontsize=10)\nplt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))\nf.tight_layout()\nax1.imshow(color_binary)\nax1.set_title('color_binary', fontsize=10)\nax2.imshow(combined_binary, cmap=\"gray\")\nax2.set_title('combined_binary', fontsize=10)\nplt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))\nf.tight_layout()\nax1.imshow(W_image)\nax1.set_title('W_image', fontsize=10)\nax2.imshow(img)\nax2.set_title('Original Image', fontsize=10)\nplt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)",
"_____no_output_____"
],
[
"def hist(img):\n return np.sum(img[img.shape[0]//2:,:], axis=0)",
"_____no_output_____"
],
[
"def find_lane_pixels(binary_warped, image_show = True):\n # Take a histogram of the bottom half of the image\n histogram = hist(binary_warped)\n # Create an output image to draw on and visualize the result\n out_img = np.dstack((binary_warped, binary_warped, binary_warped))\n # Find the peak of the left and right halves of the histogram\n # These will be the starting point for the left and right lines\n midpoint = np.int(histogram.shape[0]//2)\n leftx_base = np.argmax(histogram[:midpoint])\n rightx_base = np.argmax(histogram[midpoint:]) + midpoint\n\n # HYPERPARAMETERS\n # Choose the number of sliding windows\n nwindows = 8\n # Set the width of the windows +/- margin\n margin = 150\n # Set minimum number of pixels found to recenter window\n minpix = 50\n\n # Set height of windows - based on nwindows above and image shape\n window_height = np.int(binary_warped.shape[0]//nwindows)\n # Identify the x and y positions of all nonzero pixels in the image\n nonzero = binary_warped.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n # Current positions to be updated later for each window in nwindows\n leftx_current = leftx_base\n rightx_current = rightx_base\n\n # Create empty lists to receive left and right lane pixel indices\n left_lane_inds = []\n right_lane_inds = []\n\n # Step through the windows one by one\n for window in range(nwindows):\n # Identify window boundaries in x and y (and right and left)\n win_y_low = binary_warped.shape[0] - (window+1)*window_height\n win_y_high = binary_warped.shape[0] - window*window_height\n win_xleft_low = leftx_current - margin\n win_xleft_high = leftx_current + margin\n win_xright_low = rightx_current - margin\n win_xright_high = rightx_current + margin\n \n # Draw the windows on the visualization image\n cv2.rectangle(out_img,(win_xleft_low,win_y_low),\n (win_xleft_high,win_y_high),(0,255,0), 2) \n cv2.rectangle(out_img,(win_xright_low,win_y_low),\n (win_xright_high,win_y_high),(0,255,0), 2) \n \n # Identify the nonzero pixels in x and y within the window #\n good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]\n good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]\n \n # Append these indices to the lists\n left_lane_inds.append(good_left_inds)\n right_lane_inds.append(good_right_inds)\n \n # If you found > minpix pixels, recenter next window on their mean position\n if len(good_left_inds) > minpix:\n leftx_current = np.int(np.mean(nonzerox[good_left_inds]))\n if len(good_right_inds) > minpix: \n rightx_current = np.int(np.mean(nonzerox[good_right_inds]))\n\n # Concatenate the arrays of indices (previously was a list of lists of pixels)\n try:\n left_lane_inds = np.concatenate(left_lane_inds)\n right_lane_inds = np.concatenate(right_lane_inds)\n except ValueError:\n # Avoids an error if the above is not implemented fully\n pass\n\n # Extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds] \n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n \n left_fit = np.polyfit(lefty, leftx, 2)\n right_fit = np.polyfit(righty, rightx, 2)\n \n if image_show == True:\n \n # Generate x and y values for plotting\n ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )\n try:\n left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]\n right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]\n except TypeError:\n # Avoids an error if `left` and `right_fit` are still none or incorrect\n print('The function failed to fit a line!')\n left_fitx = 1*ploty**2 + 1*ploty\n right_fitx = 1*ploty**2 + 1*ploty\n\n ## Visualization ##\n # Colors in the left and right lane regions\n out_img[lefty, leftx] = [255, 0, 0]\n out_img[righty, rightx] = [0, 0, 255]\n if image_show == True: \n return out_img, left_fit, right_fit\n else:\n return left_fit, right_fit\n ",
"_____no_output_____"
],
[
"images = glob.glob(r'C:\\Users\\pramo\\Documents\\Project4\\test_images\\test*.jpg')\nshow_images = []\n# Step through the list and find lane pixels in each test image\nfor fname in images:\n img = cv2.imread(fname)\n undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)\n outImage, left_fit, right_fit = find_lane_pixels(combined_binary, image_show = True)\n show_images.append (outImage)\n \nfig=plt.figure(figsize=(20, 20))\ncolumns = 2\nrows = 4\nfor i in range(len(show_images)):\n j= i+1\n img = show_images[i].squeeze()\n fig.add_subplot(rows, columns, j)\n plt.imshow(img)\nplt.show()",
"_____no_output_____"
],
[
"def fit_poly(img_shape, left_fitn, right_fitn):\n # Generate x and y values for plotting\n ploty = np.linspace(0, img_shape[0]-1, img_shape[0])\n ### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###\n left_fitx = left_fitn[0]*ploty**2 + left_fitn[1]*ploty + left_fitn[2]\n right_fitx = right_fitn[0]*ploty**2 + right_fitn[1]*ploty + right_fitn[2]\n return left_fitx, right_fitx, ploty",
"_____no_output_____"
],
[
"def fit_polynomial(binary_warped, left_fit, right_fit, image_show = True):\n # Find our lane pixels first\n margin = 10\n nonzero = binary_warped.nonzero()\n \n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n \n left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + \n left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + \n left_fit[1]*nonzeroy + left_fit[2] + margin)))\n right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + \n right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + \n right_fit[1]*nonzeroy + right_fit[2] + margin)))\n \n # Again, extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds] \n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n # Color in left and right line pixels\n \n left_fitn = np.polyfit(lefty, leftx, 2)\n right_fitn = np.polyfit(righty, rightx, 2)\n \n if image_show == True:\n \n left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fitn, right_fitn)\n out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255\n window_img = np.zeros_like(out_img)\n\n out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]\n out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]\n\n # Generate a polygon to illustrate the search window area\n # And recast the x and y points into usable format for cv2.fillPoly()\n left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])\n left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, \n ploty])))])\n left_line_pts = np.hstack((left_line_window1, left_line_window2))\n right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])\n right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, \n ploty])))])\n right_line_pts = np.hstack((right_line_window1, right_line_window2))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))\n cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))\n out_img = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)\n ## End visualization steps ##\n if image_show == True: \n return out_img\n else:\n return left_fitn, right_fitn",
"_____no_output_____"
],
[
"img = cv2.imread(r'C:\\Users\\pramo\\Documents\\Project4\\test_images\\test8.jpg')\nundisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)\noutImage = fit_polynomial(combined_binary, left_fit, right_fit, image_show = True)\nplt.imshow(outImage)\n",
"_____no_output_____"
],
[
"def center(X_pointL, X_pointR):\n mid_pointx = (X_pointL + X_pointR)/2\n image_mid_pointx = 640\n dist = distance(mid_pointx, image_mid_pointx)\n dist = dist*(3.7/700)\n return dist, mid_pointx ",
"_____no_output_____"
],
[
"def distance(pointL, pointR):\n return math.sqrt((pointL - pointR)**2)",
"_____no_output_____"
],
[
"def measure_curvature_pixels(img_shape, left_fit, right_fit): \n ym_per_pix = 30/720 # meters per pixel in y dimension\n xm_per_pix = 3.7/700 # meters per pixel in x dimension\n # Start by generating our fake example data\n # Make sure to feed in your real data instead in your project!\n leftx, rightx, ploty = fit_poly(img_shape, left_fit, right_fit)\n \n leftx = leftx[::-1] # Reverse to match top-to-bottom in y\n rightx = rightx[::-1] # Reverse to match top-to-bottom in y\n \n first_element_L = leftx[-720] \n first_element_R = rightx[-720]\n \n center_dist, mid_pointx = center(first_element_L, first_element_R)\n \n # Fit a second order polynomial to pixel positions in each fake lane line\n # Fit new polynomials to x,y in world space\n left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)\n right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)\n \n # Define y-value where we want radius of curvature\n # We'll choose the maximum y-value, corresponding to the bottom of the image\n y_eval = np.max(ploty)\n \n # Calculation of R_curve (radius of curvature)\n left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])\n right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])\n \n return left_curverad, right_curverad, center_dist, mid_pointx",
"_____no_output_____"
],
[
"def Sanity_Check(img_shape, left_fit, right_fit): \n \n xm_per_pix = 3.7/700 # meters per pixel in x dimension \n ploty = np.linspace(0, 719, num=720)\n left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]\n right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]\n \n left_fitx, right_fitx, ploty = fit_poly(img_shape, left_fit, right_fit)\n \n left_fitx = left_fitx[::-1] # Reverse to match top-to-bottom in y\n right_fitx = right_fitx[::-1] # Reverse to match top-to-bottom in y\n \n last_element_L = left_fitx[-1] \n last_element_R = right_fitx [-1]\n #print(last_element_L)\n mid_element_L = left_fitx[-360] \n mid_element_R = right_fitx [-360] \n first_element_L = left_fitx[-720] \n first_element_R = right_fitx [-720]\n \n b_dist = (distance(last_element_L, last_element_R)*xm_per_pix)\n m_dist = (distance(mid_element_L, mid_element_R)*xm_per_pix)\n t_dist = (distance(first_element_L, first_element_R)*xm_per_pix) \n return b_dist, m_dist, t_dist",
"_____no_output_____"
],
[
"def draw_poly(u_imag, binary_warped, left_fit, right_fit): \n warp_zero = np.zeros_like(binary_warped).astype(np.uint8) \n color_warp = np.dstack((warp_zero, warp_zero, warp_zero))\n # Recast the x and y points into usable format for cv2.fillPoly()\n left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fit, right_fit) \n pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])\n pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])\n pts = np.hstack((pts_left, pts_right))\n pts_left = np.array([pts_left], np.int32)\n pts_right = np.array([pts_right], np.int32)\n # Draw the lane onto the warped blank image\n cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))\n cv2.polylines(color_warp, pts_left, 0, (255,0,0), 40)\n cv2.polylines(color_warp, pts_right, 0, (255,0,0), 40)\n # Warp the blank back to original image space using inverse perspective matrix (Minv)\n un_warped = wrap_transform(color_warp, inverse = 'TRUE') \n # Combine the result with the original image\n out_img = cv2.addWeighted(u_imag, 1, un_warped, 0.3, 0)\n return out_img",
"_____no_output_____"
],
[
"image_show = False\n\ndef process_image(image): \n left_fit = [] \n right_fit = []\n undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(image)\n \n if len(Left_Right.last_L_points) == 0 or len(Left_Right.last_R_points) == 0:\n left_fit, right_fit = find_lane_pixels(combined_binary, image_show = False) \n else:\n left_fit = Left_Right.last_L_points\n right_fit = Left_Right.last_R_points \n left_fit, right_fit = fit_polynomial(combined_binary, left_fit, right_fit, image_show = False) \n \n b_dist, m_dist, t_dist = Sanity_Check(combined_binary.shape, left_fit, right_fit) \n mean = (b_dist + m_dist + t_dist)/3\n #print (t_dist) \n if (3.8 > mean > 3.1) and (3.5 > t_dist > 3.1):\n Left_Right.last_L_points = left_fit\n Left_Right.last_R_points = right_fit \n else: \n left_fit = Left_Right.last_L_points\n right_fit = Left_Right.last_R_points\n \n L_curvature, R_Curvature, center_dist, mid_pointx = measure_curvature_pixels(combined_binary.shape, left_fit, right_fit)\n curvature = (L_curvature + R_Curvature)/2 \n result = draw_poly(undisort_image, combined_binary, left_fit, right_fit)\n TEXT = 'Center Curvature = %f(m)' %curvature\n font = cv2.FONT_HERSHEY_SIMPLEX\n cv2.putText(result, TEXT, (50,50), font, 1, (0, 255, 0), 2)\n if (mid_pointx > 640):\n TEXT = 'Away from center = %f(m - To Right)' %center_dist\n font = cv2.FONT_HERSHEY_SIMPLEX\n cv2.putText(result, TEXT, (50,100), font, 1, (0, 255, 0), 2)\n else:\n TEXT = 'Away from center = %f(m - To Left)' %center_dist\n font = cv2.FONT_HERSHEY_SIMPLEX\n cv2.putText(result, TEXT, (50,100), font, 1, (0, 255, 0), 2) \n return result",
"_____no_output_____"
],
[
"img = cv2.imread(r'C:\\Users\\pramo\\Documents\\Project4\\test_images\\test8.jpg')\noutImage = process_image(img)\nplt.imshow(outImage)",
"_____no_output_____"
],
[
"output = 'project_video_out.mp4'\nclip = VideoFileClip('project_video.mp4')\nyellow_clip = clip.fl_image(process_image)\nyellow_clip.write_videofile(output, audio=False)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5b5b3b74de1550e83d510c9ea5d1ff7a486a21
| 4,730 |
ipynb
|
Jupyter Notebook
|
2018-2019/project/saliency_maps/crop_images_from_dir_and_save_all_for_sal_unseen_categories.ipynb
|
Tudor67/Neural-Networks-Assignments
|
7376e9d3b0059df2f2b21d56787c47d3c1ba6746
|
[
"MIT"
] | 1 |
2019-04-07T03:50:57.000Z
|
2019-04-07T03:50:57.000Z
|
2018-2019/project/saliency_maps/crop_images_from_dir_and_save_all_for_sal_unseen_categories.ipynb
|
Tudor67/Neural-Networks-Assignments
|
7376e9d3b0059df2f2b21d56787c47d3c1ba6746
|
[
"MIT"
] | 5 |
2018-10-16T22:46:33.000Z
|
2019-02-04T20:11:41.000Z
|
2018-2019/project/saliency_maps/crop_images_from_dir_and_save_all_for_sal_unseen_categories.ipynb
|
Tudor67/Neural-Networks-Assignments
|
7376e9d3b0059df2f2b21d56787c47d3c1ba6746
|
[
"MIT"
] | 1 |
2019-04-07T03:50:42.000Z
|
2019-04-07T03:50:42.000Z
| 33.309859 | 872 | 0.412685 |
[
[
[
"## 1. Setup",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('..')",
"_____no_output_____"
],
[
"import config\nimport numpy as np\nimport warnings\n\nfrom utils.preprocessing import crop_images_from_dir_and_save_all",
"_____no_output_____"
],
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"DATASET_PATH = f'../datasets/unseen_categories'",
"_____no_output_____"
]
],
[
[
"## 2. Crop images from dir and save all",
"_____no_output_____"
]
],
[
[
"'''\nfor split_name in ['test']:\n for backprop_modifier in ['None', 'deconv', 'guided']:\n crop_images_from_dir_and_save_all(images_path=f'{DATASET_PATH}/{split_name}'\n f'/{split_name}'\n f'_img_from_patches',\n save_path=f'{DATASET_PATH}/{split_name}'\n f'/{split_name}'\n f'_img_patches_for_sal',\n patch_h=config.SALIENCY_INPUT_SHAPE[0],\n patch_w=config.SALIENCY_INPUT_SHAPE[1],\n img_format='png',\n append_h_w=False)\n'''",
"_____no_output_____"
],
[
"for split_name in ['test']:\n for backprop_modifier in ['None', 'deconv', 'guided']:\n crop_images_from_dir_and_save_all(images_path=f'{DATASET_PATH}/{split_name}'\n f'/{split_name}_{backprop_modifier}'\n f'_sal_from_patches',\n save_path=f'{DATASET_PATH}/{split_name}'\n f'/{split_name}_{backprop_modifier}'\n f'_sal_patches',\n patch_h=config.INPUT_SHAPE[0],\n patch_w=config.INPUT_SHAPE[1],\n img_format='png',\n append_h_w=False)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5b7e4ffdb5b5d1ef4bb24a724f74c32f818b44
| 5,221 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/Keras example for classifitcation in TF2-checkpoint.ipynb
|
jskDr/tictactoe
|
4e76a1a5c22ec9ac287fdbd93604b480f8cc7d23
|
[
"MIT"
] | 1 |
2020-04-03T23:12:37.000Z
|
2020-04-03T23:12:37.000Z
|
.ipynb_checkpoints/Keras example for classifitcation in TF2-checkpoint.ipynb
|
jskDr/tictactoe
|
4e76a1a5c22ec9ac287fdbd93604b480f8cc7d23
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/Keras example for classifitcation in TF2-checkpoint.ipynb
|
jskDr/tictactoe
|
4e76a1a5c22ec9ac287fdbd93604b480f8cc7d23
|
[
"MIT"
] | null | null | null | 28.686813 | 177 | 0.557556 |
[
[
[
"# Keras example for classification\n- Refer to: \n - https://colab.research.google.com/drive/1p4RhSj1FEuscyZP81ocn8IeGD_2r46fS?fbclid=IwAR2c5N-T-b1arVit3jJIDrTuZQzNz_3pzSR2A9AXGWO-5QrJr8NhjgttB9k#scrollTo=zoDjozMFREDU\n - https://colab.research.google.com/drive/1UCJt8EYjlzCs1H1d1X0iDGYJsHKwu-NO?fbclid=IwAR269Y-3J1DuZL01L6GBCC4dg6RSAmJXHnRfztL454dZ5SqKLRxCAZcxzgY",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow.keras import layers\n\n# Prepare the dataset\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train = x_train[:].reshape(60000, 784).astype('float32') / 255\ndataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ndataset = dataset.shuffle(buffer_size=1024).batch(64)\n\n# Instantiate a simple classification model\nmodel = tf.keras.Sequential([\n layers.Dense(256, activation=tf.nn.relu),\n layers.Dense(256, activation=tf.nn.relu),\n layers.Dense(10)\n])\n\n# Instantiate a logistic loss that accepts integer labels\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Instantiate an accuracy metric\naccuracy = tf.keras.metrics.SparseCategoricalAccuracy()\n\n# Instantiate an optimizer\noptimizer = tf.keras.optimizers.Adam()\n\n# Iterate over the batches of the dataset\nfor step, (x, y) in enumerate(dataset):\n \n # Open a GradientTape\n with tf.GradientTape() as tape:\n\n # Run the forward pass\n logits = model(x)\n\n # Compute the loss for the current batch\n loss_value = loss(y, logits)\n \n # Compute the gradients of the weights with respect to the loss\n gradients = tape.gradient(loss_value, model.trainable_weights)\n \n # Update the model's weights\n optimizer.apply_gradients(zip(gradients, model.trainable_weights))\n\n # Update the running accuracy\n accuracy.update_state(y, logits)\n \n # Log progress\n if step % 100 == 0:\n print('Step:', step)\n print('Loss from last step:', float(loss_value))\n print('Total running accuracy so far:', float(accuracy.result()))",
"Step: 0\nLoss from last step: 2.2830166816711426\nTotal running accuracy so far: 0.15625\nStep: 100\nLoss from last step: 0.3240736722946167\nTotal running accuracy so far: 0.8389542102813721\nStep: 200\nLoss from last step: 0.2415887713432312\nTotal running accuracy so far: 0.8763992786407471\nStep: 300\nLoss from last step: 0.17907004058361053\nTotal running accuracy so far: 0.8964389562606812\nStep: 400\nLoss from last step: 0.20433291792869568\nTotal running accuracy so far: 0.9079644680023193\nStep: 500\nLoss from last step: 0.20842380821704865\nTotal running accuracy so far: 0.9157622456550598\nStep: 600\nLoss from last step: 0.37517252564430237\nTotal running accuracy so far: 0.9226809740066528\nStep: 700\nLoss from last step: 0.08569125831127167\nTotal running accuracy so far: 0.9273805022239685\nStep: 800\nLoss from last step: 0.14712117612361908\nTotal running accuracy so far: 0.9302629232406616\nStep: 900\nLoss from last step: 0.11208811402320862\nTotal running accuracy so far: 0.9339102506637573\n"
],
[
"x_train.shape",
"_____no_output_____"
],
[
"model(x_train[0:1, :])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5b8883efa49590e73821ae357760d4b9c0b1e5
| 29,271 |
ipynb
|
Jupyter Notebook
|
Python-Epiphanies/Python-Epiphanies-3-More-Namespace-Operations.ipynb
|
TheCulliganMan/PyCon
|
6387902d5ea731d26a8abb6fa599919f3f541dec
|
[
"BSD-2-Clause"
] | null | null | null |
Python-Epiphanies/Python-Epiphanies-3-More-Namespace-Operations.ipynb
|
TheCulliganMan/PyCon
|
6387902d5ea731d26a8abb6fa599919f3f541dec
|
[
"BSD-2-Clause"
] | null | null | null |
Python-Epiphanies/Python-Epiphanies-3-More-Namespace-Operations.ipynb
|
TheCulliganMan/PyCon
|
6387902d5ea731d26a8abb6fa599919f3f541dec
|
[
"BSD-2-Clause"
] | null | null | null | 20.103709 | 435 | 0.497796 |
[
[
[
"# 3 More Namespace Operations ",
"_____no_output_____"
],
[
"### 3.1 `locals()` and `globals()`",
"_____no_output_____"
],
[
"Name binding operations covered so far:\n\n - *name* `=` (assignment)\n - `del` *name* (unbinds the name)\n - `def` *name* function definition (including lambdas)\n - `def name(`*names*`):` (function execution)\n - *name*`.`*attribute_name* `=`, `__setattr__`, `__delattr__`\n - `global`, `nonlocal` (changes scope rules)\n - `except Exception as` *name*:",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
]
],
[
[
"locals()",
"_____no_output_____"
],
[
"len(locals())",
"_____no_output_____"
]
],
[
[
" In the REPL these are the same:",
"_____no_output_____"
]
],
[
[
"locals() == globals()",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"x = 0",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
" The following code is not recommended.",
"_____no_output_____"
]
],
[
[
"locals()['x']",
"_____no_output_____"
],
[
"locals()['x'] = 1",
"_____no_output_____"
],
[
"locals()['x']",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
" If you're tempted to use it, try this code which due to \"fast\nlocals\" doesn't do what you might expect:",
"_____no_output_____"
]
],
[
[
"def f():\n locals()['x'] = 5\n print(x)\nf()",
"_____no_output_____"
]
],
[
[
"### 3.2 The `import` Statement",
"_____no_output_____"
]
],
[
[
"def _dir(obj='__secret', _CLUTTER=dir()):\n \"\"\"\n A version of dir that excludes clutter and private names.\n \"\"\"\n if obj == '__secret':\n names = globals().keys()\n else:\n names = dir(obj)\n return [n for n in names if n not in _CLUTTER and not n.startswith('_')]",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"_dir()",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"import csv\n_dir()",
"_____no_output_____"
],
[
"csv",
"_____no_output_____"
],
[
"_dir(csv)",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"csv.reader",
"_____no_output_____"
],
[
"csv.writer",
"_____no_output_____"
],
[
"csv.spam",
"_____no_output_____"
],
[
"csv.spam = 'Python is dangerous'\ncsv.spam",
"_____no_output_____"
],
[
"csv.reader = csv.writer\ncsv.reader",
"_____no_output_____"
],
[
"from csv import reader as csv_reader\n_dir()",
"_____no_output_____"
],
[
"csv.reader is csv_reader",
"_____no_output_____"
],
[
"csv",
"_____no_output_____"
],
[
"csv.reader",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"del csv\nimport csv as csv_module\n_dir()",
"_____no_output_____"
],
[
"csv_module.reader is csv_reader",
"_____no_output_____"
],
[
"csv_module.reader",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"math",
"_____no_output_____"
],
[
"math + 3",
"_____no_output_____"
],
[
"del math",
"_____no_output_____"
],
[
"print(math)",
"_____no_output_____"
]
],
[
[
" Will the next statement give a `NameError` like the previous statement? Why not?",
"_____no_output_____"
]
],
[
[
"import math",
"_____no_output_____"
],
[
"math",
"_____no_output_____"
],
[
"del math",
"_____no_output_____"
]
],
[
[
" What if we don't know the name of the module until run-time?",
"_____no_output_____"
]
],
[
[
"import importlib",
"_____no_output_____"
],
[
"importlib.import_module('math')",
"_____no_output_____"
],
[
"math.pi",
"_____no_output_____"
],
[
"math_module = importlib.import_module('math')",
"_____no_output_____"
],
[
"math.pi",
"_____no_output_____"
],
[
"math_module.pi",
"_____no_output_____"
],
[
"module_name = 'math'",
"_____no_output_____"
],
[
"import module_name",
"_____no_output_____"
],
[
"import 'math'",
"_____no_output_____"
],
[
"import math",
"_____no_output_____"
]
],
[
[
"### 3.3 Exercises: The `import` Statement",
"_____no_output_____"
],
[
" Explore reloading a module. This is rarely needed and usually only when exploring.",
"_____no_output_____"
],
[
" Several statements below will throw errors - try to figure out which ones before you run them.",
"_____no_output_____"
]
],
[
[
"import csv",
"_____no_output_____"
],
[
"import importlib",
"_____no_output_____"
],
[
"importlib.reload?",
"_____no_output_____"
],
[
"del csv",
"_____no_output_____"
],
[
"importlib.reload(csv)",
"_____no_output_____"
],
[
"importlib.reload('csv')",
"_____no_output_____"
],
[
"import csv",
"_____no_output_____"
],
[
"importlib.reload('csv')",
"_____no_output_____"
],
[
"importlib.reload(csv)",
"_____no_output_____"
]
],
[
[
"### 3.4 Augmented Assignment Statements",
"_____no_output_____"
],
[
"Bind two names to the `str` object `'abc'`, then from it create `'abcd'`\nand rebind (reassign) one of the names:",
"_____no_output_____"
]
],
[
[
"string_1 = string_2 = 'abc'\nstring_1 is string_2",
"_____no_output_____"
],
[
"string_2 = string_2 + 'd'\nstring_1 is string_2, string_1, string_2",
"_____no_output_____"
]
],
[
[
" This reassigns the second name so it is bound to a new\nobject. This works similarly if we start with two names for one\n`list` object and then reassign one of the names.",
"_____no_output_____"
]
],
[
[
"list_1 = list_2 = ['a', 'b', 'c']\nlist_1 is list_2",
"_____no_output_____"
],
[
"list_2 = list_2 + ['d']\nlist_1 is list_2, list_1, list_2",
"_____no_output_____"
]
],
[
[
" If for the `str` objects we instead use an *augmented assignment\nstatement*, specifically *in-place add* `+=`, we get the same\nbehaviour as earlier.",
"_____no_output_____"
]
],
[
[
"string_1 = string_2 = 'abc'",
"_____no_output_____"
],
[
"string_2 += 'd'\nstring_1 is string_2, string_1, string_2",
"_____no_output_____"
]
],
[
[
" However, for the `list` objects the behaviour changes.",
"_____no_output_____"
]
],
[
[
"list_1 = list_2 = ['a', 'b', 'c']",
"_____no_output_____"
],
[
"list_2 += ['d']\nlist_1 is list_2, list_1, list_2",
"_____no_output_____"
]
],
[
[
" The `+=` in `foo += 1` is not just syntactic sugar for `foo = foo +\n1`. The `+=` and other augmented assignment statements have their\nown bytecodes and methods.",
"_____no_output_____"
],
[
" Notice BINARY_ADD vs. INPLACE_ADD. The run-time types of the\nobjects to which `name_1` and `name_2` are bound are irrelevant to the\nbytecode that gets produced.",
"_____no_output_____"
]
],
[
[
"import codeop, dis",
"_____no_output_____"
],
[
"dis.dis(codeop.compile_command(\"name_1 = name_1 + name_2\"))",
"_____no_output_____"
],
[
"dis.dis(codeop.compile_command(\"name_1 += name_2\"))",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"list_2 = ['a', 'b', 'c']",
"_____no_output_____"
],
[
"list_2",
"_____no_output_____"
]
],
[
[
" Notice that `__iadd__` returns a value",
"_____no_output_____"
]
],
[
[
"list_2.__iadd__(['d'])",
"_____no_output_____"
]
],
[
[
" and it also changes the list",
"_____no_output_____"
]
],
[
[
"list_2",
"_____no_output_____"
],
[
"string_2.__iadd__('4')",
"_____no_output_____"
]
],
[
[
"\nSo what happens when `INPLACE_ADD` operates on the `str` object?\n\nIf `INPLACE_ADD` doesn't find `__iadd__` it instead calls `__add__` and\nreassigns `string_2`, i.e. it falls back to `__add__`.\n\nhttps://docs.python.org/3/reference/datamodel.html#object.__iadd__:\n\n> These methods are called to implement the augmented arithmetic\n> assignments (+=, etc.). These methods should attempt to do the\n> operation in-place (modifying self) and return the result (which\n> could be, but does not have to be, self). If a specific method is\n> not defined, the augmented assignment falls back to the normal\n> methods.\n",
"_____no_output_____"
],
[
" Here's similar behaviour with a tuple:",
"_____no_output_____"
]
],
[
[
"tuple_1 = (7,)\ntuple_1",
"_____no_output_____"
],
[
"tuple_1[0].__iadd__(1)",
"_____no_output_____"
],
[
"tuple_1[0] += 1",
"_____no_output_____"
],
[
"tuple_1[0] = tuple_1[0] + 1",
"_____no_output_____"
],
[
"tuple_1",
"_____no_output_____"
]
],
[
[
" Here's surprising behaviour with a tuple:",
"_____no_output_____"
]
],
[
[
"tuple_2 = ([12, 13],)\ntuple_2",
"_____no_output_____"
],
[
"tuple_2[0] += [14]",
"_____no_output_____"
]
],
[
[
" What value do we expect `tuple_2` to have?",
"_____no_output_____"
]
],
[
[
"tuple_2",
"_____no_output_____"
]
],
[
[
" Let's simulate the steps to see why this behaviour makes sense.",
"_____no_output_____"
]
],
[
[
"list_1 = [12, 13]",
"_____no_output_____"
],
[
"tuple_2 = (list_1,)",
"_____no_output_____"
],
[
"tuple_2",
"_____no_output_____"
],
[
"temp = list_1.__iadd__([14])",
"_____no_output_____"
],
[
"temp",
"_____no_output_____"
],
[
"temp == list_1",
"_____no_output_____"
],
[
"temp is list_1",
"_____no_output_____"
],
[
"tuple_2",
"_____no_output_____"
],
[
"tuple_2[0] = temp",
"_____no_output_____"
]
],
[
[
" For later study:",
"_____no_output_____"
]
],
[
[
"dis.dis(codeop.compile_command(\"tuple_2 = ([12, 13],); tuple_2[0] += [14]\"))",
"_____no_output_____"
],
[
"dis.dis(codeop.compile_command(\"tuple_2 = ([12, 13],); temp = tuple_2[0].__iadd__([14]); tuple_2[0] = temp\"))",
"_____no_output_____"
]
],
[
[
" For a similar explanation see \nhttps://docs.python.org/3/faq/programming.html#faq-augmented-assignment-tuple-error",
"_____no_output_____"
],
[
"### 3.5 Function Arguments are Passed by Name Binding",
"_____no_output_____"
],
[
" Can functions modify the arguments passed to them?\n\n When a caller passes an argument to a function, the function starts\n execution with a local name, the parameter from its signature, bound\n to the argument object passed in.",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
]
],
[
[
"def function_1(string_2):\n print('A -->', string_2)\n string_2 += ' blue'\n print('B -->', string_2)",
"_____no_output_____"
],
[
"string_1 = 'red'\nstring_1",
"_____no_output_____"
],
[
"function_1(string_1)",
"_____no_output_____"
],
[
"string_1",
"_____no_output_____"
]
],
[
[
" To see more clearly why `string_1` is still a name bound to `'red'`, consider\nthis version which is functionally equivalent but has two changes\nhighlighted in the comments:",
"_____no_output_____"
]
],
[
[
"def function_2(string_2):\n print('A -->', string_2)\n string_2 = string_2 + ' blue' # Changed from +=\n print('B -->', string_2)",
"_____no_output_____"
],
[
"function_2('red') # Changed from string_1 to 'red'",
"_____no_output_____"
],
[
"'red'",
"_____no_output_____"
]
],
[
[
" In both cases the name `string_2` at the beginning of `function_1` and\n`function_2` was a name that was bound to the `str` object `'red'`,\nand in both the function-local name `string_2` was re-bound to\nthe new `str` object `'red blue'`.",
"_____no_output_____"
],
[
" Let's try this with a `list`.",
"_____no_output_____"
]
],
[
[
"def function_3(list_2):\n print('A -->', list_2)\n list_2 += ['blue'] # += with lists is shorthand for list.extend()\n print('B -->', list_2)",
"_____no_output_____"
],
[
"list_1 = ['red']\nlist_1",
"_____no_output_____"
],
[
"function_3(list_1)",
"_____no_output_____"
],
[
"list_1",
"_____no_output_____"
]
],
[
[
" In both cases parameter names are bound to arguments, and whether or\nnot the function can or does change the object passed in depends on\nthe object, not how it's passed to the function.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
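The `__iadd__` fallback and the tuple surprise walked through in the notebook cells above can be condensed into one self-contained script. This is a sketch of the same semantics; names like `lst` are illustrative, not from the notebook:

```python
# str defines no __iadd__, so += falls back to __add__ and rebinds
# the name to a brand-new string object.
s = "red"
old_id = id(s)
s += " blue"
assert id(s) != old_id and s == "red blue"

# list.__iadd__ mutates in place, so the name keeps pointing at the
# same object after +=.
lst = ["red"]
old_id = id(lst)
lst += ["blue"]
assert id(lst) == old_id and lst == ["red", "blue"]

# The tuple surprise: the contained list is extended in place first,
# then the item assignment back into the tuple raises TypeError.
t = ([12, 13],)
try:
    t[0] += [14]
except TypeError:
    pass
assert t == ([12, 13, 14],)  # mutation happened despite the error
print("ok")
```

Running it prints `ok`, confirming all three behaviours at once.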
4a5b89194a83355f764806972ac443e7de1843fe
| 17,691 |
ipynb
|
Jupyter Notebook
|
notebooks/0_HelloWorld.ipynb
|
uvacreate/2021-coding-the-humanities
|
21b7148774c501d61852fff3565f1494c0f8e177
|
[
"MIT"
] | 2 |
2021-02-14T00:03:15.000Z
|
2021-11-30T17:14:02.000Z
|
notebooks/0_HelloWorld.ipynb
|
uvacreate/2021-coding-the-humanities
|
21b7148774c501d61852fff3565f1494c0f8e177
|
[
"MIT"
] | null | null | null |
notebooks/0_HelloWorld.ipynb
|
uvacreate/2021-coding-the-humanities
|
21b7148774c501d61852fff3565f1494c0f8e177
|
[
"MIT"
] | 6 |
2021-01-18T08:10:39.000Z
|
2022-01-21T17:11:07.000Z
| 29.193069 | 410 | 0.597422 |
[
[
[
"# Getting started",
"_____no_output_____"
],
[
"## Installing Python",
"_____no_output_____"
],
[
"It is recommended that you install the full Anaconda Python 3.8, as it sets up your Python environment, together with a bunch of often-used packages that you'll use during this course. A guide on installing Anaconda can be found here: https://docs.anaconda.com/anaconda/install/. NB: You don't have to install the optional stuff, such as the PyCharm editor. \n\nFor more instructions, take a look at: https://github.com/uvacreate/2021-coding-the-humanities/blob/master/setup.md. \n",
"_____no_output_____"
],
[
"If you completed all the steps and you have Python and Jupyter notebooks installed, open this file again as a notebook and continue with the content below. Good luck and have fun! 🎉",
"_____no_output_____"
],
[
"# Hello World\n\nThis notebook contains some code to allow you to check if everything runs as intended.\n\n[Jupyter notebooks](https://jupyter.org) contain cells of Python code, or text written in [markdown](https://www.markdownguide.org/getting-started/). This cell for instance contains text written in markdown syntax. You can edit it by double clicking on it. You can create new cells using the \"+\" (top right bar), and you can run cells to 'execute' the markdown syntax they contain and see what happens.",
"_____no_output_____"
],
[
"The other type of cells contain Python code and need to be executed. You can either do this by clicking on the cell and then on the play button in the top of the window. Or by pressing `shift + ENTER`. Try this with the next cell, and you'll see the result of this first line of Python. ",
"_____no_output_____"
],
[
"**For a more extended revision of these materials, see http://www.karsdorp.io/python-course (Chapter 1).**",
"_____no_output_____"
]
],
[
[
"# It is customary for your first program to print Hello World! This is how you do it in Python.\n\nprint(\"Hello World!\")",
"_____no_output_____"
],
[
"# You can comment your code using '#'. What you write afterwards won't be interpreted as code.\n# This comes in handy if you want to comment on smaller bits of your code. Or if you want to\n# add a TODO for yourself to remind you that some code needs to be added or revised.",
"_____no_output_____"
]
],
[
[
"The code you write is executed from a certain *working directory* (we will see more when doing input/output). \n\nYou can access your working directory by using a *package* (bundle of Python code which does something for you) part of the so-called Python standard library: `os` (a package to interact with the operating system).",
"_____no_output_____"
]
],
[
[
"import os # we first import the package",
"_____no_output_____"
],
[
"os.getcwd() # we then can use some of its functionalities. In this case, we get the current working directory (cwd)",
"_____no_output_____"
]
],
[
[
"## Python versions\n\n\n\nIt is important that you at least run a version of Python that is being supported with security updates. Currently (Spring 2021), this means Python 3.6 or higher. You can see all current versions and their support dates on the [Python website](https://www.python.org/downloads/)\n\nFor this course it is recommended to have Python 3.8 installed, since every Python version adds, but sometimes also changes functionality. If you recently installed Python through [Anaconda](https://www.anaconda.com/products/individual#), you're most likely running version 3.8!",
"_____no_output_____"
],
[
"Let's check the Python version you are using by importing the `sys` package. Try running the next cell and see it's output.",
"_____no_output_____"
]
],
[
[
"import sys\n\nprint(sys.executable) # the path where the Python executable is located\nprint(sys.version) # its version\nprint(sys.version_info)",
"_____no_output_____"
]
],
[
[
"You now printed the version of Python you have installed. \n\nYou can also check the version of a package via its property `__version__`. A common package for working with tabular data is `pandas` (more on this package later). You can import the package and make it referencable by another name (a shorthand) by doing:",
"_____no_output_____"
]
],
[
[
"import pandas as pd # now 'pd' is the shorthand for the 'pandas' package",
"_____no_output_____"
]
],
[
[
"NB: Is this raising an error? Look further down for a (possible) explanation!\n\nNow the `pandas` package can be called by typing `pd`. The version number of packages is usually stored in a _magic attribute_ or a _dunder_ (=double underscore) called `__version__`. ",
"_____no_output_____"
]
],
[
[
"pd.__version__",
"_____no_output_____"
]
],
[
[
"The code above printed something without using the `print()` statement. Let's do the same, but this time by using a `print()` statement. ",
"_____no_output_____"
]
],
[
[
"print(pd.__version__)",
"_____no_output_____"
]
],
[
[
"Can you spot the difference? Why do you think this is? What kind of datatype do you think the version number is? And what kind of datatype can be printed on your screen? We'll go over these differences and the involved datatypes during the first lecture and seminar. \n\nIf you want to know more about a (built-in) function of Python, you can check its manual online. The information on the `print()` function can be found in the manual for [built-in functions](https://docs.python.org/3.8/library/functions.html#print)\n\nMore on datatypes later on. ",
"_____no_output_____"
],
[
"### Exercise\nTry printing your own name using the `print()` function. ",
"_____no_output_____"
]
],
[
[
"# TODO: print your own name\n",
"_____no_output_____"
],
[
"# TODO: print your own name and your age on one line\n",
"_____no_output_____"
]
],
[
[
"If all of the above cells were executed without any errors, you're clear to go! \n\nHowever, if you did get an error, you should start debugging. Most of the time, the errors returned by Python are quite meaningful. Perhaps you got this message when trying to import the `pandas` package:\n\n```python\n---------------------------------------------------------------------------\nModuleNotFoundError Traceback (most recent call last)\n<ipython-input-26-981caee58ba7> in <module>\n----> 1 import pandas as pd\n\nModuleNotFoundError: No module named 'pandas'\n``` \n\nIf you go over this error message, you can see:\n\n1. The type of error, in this example `ModuleNotFoundError` with some extra explanation\n2. The location in your code where the error occurred or was _raised_, indicated with the ----> arrow\n\nIn this case, you do not have this (external) package installed in your Python installation. Have you installed the full Anaconda package? You can resolve this error by installing the package from Python's package index ([PyPI](https://pypi.org/)), which is like a store for Python packages you can use in your code. \n\nTo install the `pandas` package (if missing), run in a cell:\n\n```python\npip install pandas\n```\n\nOr to update the `pandas` package you already have installed:\n\n```python\npip install pandas -U\n```\n\nTry this in the cell below!\n\n",
"_____no_output_____"
]
],
[
[
"# Try either installing or updating (if there is an update) your pandas package\n# your code here\n",
"_____no_output_____"
]
],
[
[
"If you face other errors, then Google (or DuckDuckGo etc.) is your friend. You'll see tons of questions on Python related problems on websites such as Stack Overflow. It's tempting to simply copy paste a coding pattern from there into your own code. But if you do, make sure you fully understand what is going on. Also, in assignments in this course, we ask you to:\n1. Specify a URL or source of the website/book you got your copied code from\n2. Explain in a _short_ text or through comments by line what the copied code is doing\n\nThis will be repeated during the lectures.\n\nHowever, if you're still stuck, you can open a discussion in our [Canvas course](https://canvas.uva.nl/courses/22381/discussion_topics). You're also very much invited to engage in threads on the discussion board of others and help them out. Debugging, solving, and explaining these coding puzzles for sure makes you a better programmer!",
"_____no_output_____"
],
[
"# Basic stuff\nThe code below does some basic things using Python. Please check if you know what it does and, if not, you can still figure it out. Just traverse through the rest of this notebook by executing each cell if this is all new to you and try to understand what happens.\n\n\n\nThe [first notebook](https://github.com/uvacreate/2021-coding-the-humanities/blob/master/notebooks/1_Basics.ipynb) that we're discussing in class is paced more slowly. You can already take a look at it if you want to work ahead. We'll be repeating the concepts below, and more.\n\nIf you think you already master these 'Python basics' and the material from the first notebook, then get into contact with us for some more challenging exercises!",
"_____no_output_____"
],
[
"## Variables and operations",
"_____no_output_____"
]
],
[
[
"a = 2\nb = a",
"_____no_output_____"
],
[
"# Or, assign two variables at the same time\nc, d = 10, 20",
"_____no_output_____"
],
[
"c",
"_____no_output_____"
],
[
"b += c",
"_____no_output_____"
],
[
"# Just typing a variable name in the Python interpreter (= terminal/shell/cell) also returns/prints its value\na",
"_____no_output_____"
],
[
"# Now, what's the value of b?\nb",
"_____no_output_____"
],
[
"# Why the double equals sign? How is this different from the above a = b ? \na == b",
"_____no_output_____"
],
[
"# Because the ≠ sign is hard to find on your keyboard\na != b",
"_____no_output_____"
],
[
"s = \"Hello World!\"\n\nprint(s)",
"_____no_output_____"
],
[
"s[-1]",
"_____no_output_____"
],
[
"s[:5]",
"_____no_output_____"
],
[
"s[6:]",
"_____no_output_____"
],
[
"s[6:-1]",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"words = [\"A\", \"list\", \"of\", \"strings\"]\nwords",
"_____no_output_____"
],
[
"letters = list(s) # Names in green are reserved by Python: avoid using them as variable names\nletters",
"_____no_output_____"
]
],
[
[
"If you do have bound a value to a built-in function of Python by accident, you can undo this by restarting your 'kernel' in Jupyter Notebook. Click `Kernel` and then `Restart` in the bar in the top of the screen. You'll make Python loose it's memory of previously declared variables. This also means that you must re-run all cells again if you need the executions and their outcomes.",
"_____no_output_____"
]
],
[
[
"# Sets are unordered collections of unique elements\nunique_letters = set(letters)\nunique_letters",
"_____no_output_____"
],
[
"# Variables have a certain data type. \n# Python is very flexible with allowing you to assign variables to data as you like\n# If you need a certain data type, you need to check it explicitly\n\ntype(s)",
"_____no_output_____"
],
[
"print(\"If you forgot the value of variable 'a':\", a)\ntype(a)",
"_____no_output_____"
],
[
"type(2.3)",
"_____no_output_____"
],
[
"type(\"Hello\")",
"_____no_output_____"
],
[
"type(letters)",
"_____no_output_____"
],
[
"type(unique_letters)",
"_____no_output_____"
]
],
[
[
"#### Exercise\n\n1. Create variables of each type: integer, float, text, list, and set. \n2. Try using mathematical operators such as `+ - * / **` on the numerical datatypes (integer and float)\n3. Print their value as a string",
"_____no_output_____"
]
],
[
[
"# Your code here",
"_____no_output_____"
]
],
[
[
"Hint: You can insert more cells by going to `Insert` and then `Insert Cell Above/Below` in this Jupyter Notebook.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
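The slicing, rebinding, and set results shown in the "Basic stuff" cells above can be re-derived in a single runnable sketch. This is a minimal recap assuming nothing beyond the standard library:

```python
import sys

# The notebook recommends at least Python 3.6; fail fast otherwise.
assert sys.version_info >= (3, 6), "Python 3.6+ required"

a = 2
b = a
c, d = 10, 20
b += c
assert (a, b) == (2, 12)  # b was rebound to a new int; a is unchanged
assert a != b

s = "Hello World!"
assert s[-1] == "!"
assert s[:5] == "Hello"
assert s[6:] == "World!"
assert s[6:-1] == "World"

letters = list(s)
unique_letters = set(letters)  # unordered, duplicates collapsed
assert len(letters) == 12 and len(unique_letters) == 9
print("ok")
```

The last assertion makes the point of `set` concrete: three `l`s and two `o`s collapse, so 12 characters become 9 unique ones.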
4a5b8aa26a4adc9af2d72af53460c5b457bbbf1f
| 20,898 |
ipynb
|
Jupyter Notebook
|
code/Path-Planning-Algorithm/A-Star.ipynb
|
saurabdixit/RoboND-Rover-Project
|
9c97213e7c0ea0caa8e8d8ba4ca592a610476d72
|
[
"MIT"
] | null | null | null |
code/Path-Planning-Algorithm/A-Star.ipynb
|
saurabdixit/RoboND-Rover-Project
|
9c97213e7c0ea0caa8e8d8ba4ca592a610476d72
|
[
"MIT"
] | null | null | null |
code/Path-Planning-Algorithm/A-Star.ipynb
|
saurabdixit/RoboND-Rover-Project
|
9c97213e7c0ea0caa8e8d8ba4ca592a610476d72
|
[
"MIT"
] | null | null | null | 72.062069 | 5,908 | 0.761556 |
[
[
[
"import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\nimport random\nimport os\n\n\nimg = cv2.imread(\"./map_bw.png\")\nMap = np.array(~(img[:,:,0]==0)).astype(int)\nNavigable_terrain = np.array(Map.nonzero()).T\nSidx = random.sample(range(0,Navigable_terrain.shape[0]),1)\nEidx = Sidx\nwhile (Sidx==Eidx):\n Eidx = random.sample(range(0,Navigable_terrain.shape[0]),1)\n\nStart = Navigable_terrain[Sidx,:][0]\nGoal = Navigable_terrain[Eidx,:][0]\nMap = np.array(~(img[:,:,0]==0)).astype(int)\nobstacles = np.where(Map == 0)\n\n#img_size = 120\n#img = np.ones([img_size,img_size,3],np.int)\n#img = img * 255;\n#obst_size = 4; #DO NOT CHANGE\n#obstacle_end_points = np.array(\n# [[int(img.shape[0]/obst_size),int(img.shape[1]/obst_size)]\n# ,[int(img.shape[0] - img.shape[0]/obst_size),int(img.shape[1]/obst_size)]\n# ,[int(img.shape[0] - img.shape[0]/obst_size),int(img.shape[1] - img.shape[1]/obst_size)]\n# ,[int(img.shape[0]/obst_size),int(img.shape[1] - img.shape[1]/obst_size)]\n# ])\n#cv2.polylines(img,pts=[obstacle_end_points], isClosed=False, color=(0,0,0), thickness = int(img_size/20))\n#Map = np.array(img[:,:,2]).astype(int)\n#obstacles = np.where(Map == 0)\n#Start = np.array([int(img_size/2),int(img_size/2)])\n#Goal = np.array([int(img_size/2),int(img_size - img_size/6)])\n\n\n#print(Start)\n#print(Goal)\nStart = np.array([83,73])\nGoal = np.array([156,111])\nimg[Start[0],Start[1],:] = 0\nimg[Start[0],Start[1],0] = 255\nimg[Goal[0],Goal[1],:] = 0\nimg[Goal[0],Goal[1],1] = 255\nplt.imshow(img)\nplt.show()",
"_____no_output_____"
],
[
"Traversal_array = np.array([[-1, -1]\n ,[0, -1]\n ,[1, -1]\n ,[-1, 0]\n ,[1, 0]\n ,[-1, 1]\n ,[0, 1]\n ,[1, 1]])\n\ndef GetNavigableNeighbors(pos,Traversal_array,obstacles):\n neighbors = np.ones_like(Traversal_array) * pos\n neighbors = neighbors + Traversal_array\n NavigableNeighbors = []\n for neighbor in neighbors.tolist():\n if not(neighbor in np.append([obstacles[0]],[obstacles[1]],axis=0).T.tolist()):\n NavigableNeighbors.append(neighbor)\n return np.array(NavigableNeighbors)\n\ndef euclidean_dist(pt1,pt2):\n return np.sqrt(np.square(pt1[0]-pt2[0]) + np.square(pt1[1]-pt2[1]))\n\nclass Node():\n def __init__(self,pos,goal,parent=None):\n self.pos = np.array(pos)\n if parent != None:\n self.gcost = euclidean_dist(pos,parent.pos) + parent.gcost\n else:\n self.gcost = 0\n self.parent = parent\n self.hcost = euclidean_dist(pos,goal)\n \n def GetFCost(self):\n return self.gcost + self.hcost\n def Print(self):\n print(\"Position: \",self.pos)\n print(\"Parent: \",self.parent)\n print(\"Gcost: \",self.gcost)\n print(\"Hcost: \",self.hcost)\n print(\"Fcost: \",self.gcost+self.hcost)\n\nStartNode = Node(Start,Goal)\nparent_id = -1\nExploring_nodes = np.array([[parent_id,StartNode.pos[0],StartNode.pos[1],StartNode.GetFCost(),StartNode.hcost,0]])\nExecutingNode = StartNode\nExploredNodes = {'0':ExecutingNode}\n#type(ExecutingNode.pos)",
"_____no_output_____"
],
[
"while ExecutingNode.pos.astype(int).tolist() != Goal.astype(int).tolist():\n parent_id += 1\n for neighbor in GetNavigableNeighbors(ExecutingNode.pos,Traversal_array,obstacles).tolist():\n if not(neighbor in np.array(Exploring_nodes[:,1:3]).tolist()):\n CurrentNode = Node(neighbor,Goal,ExecutingNode)\n Exploring_nodes = np.append(Exploring_nodes\n ,[[parent_id, \n CurrentNode.pos[0],\n CurrentNode.pos[1],\n CurrentNode.GetFCost(),\n CurrentNode.hcost,0]]\n ,axis = 0)\n dict_index = Exploring_nodes.shape[0] - 1\n ExploredNodes.update({str(dict_index) : CurrentNode})\n #CurrentNode.Print()\n\n\n idx = Exploring_nodes[:,1:3].tolist().index(ExecutingNode.pos.tolist())\n Exploring_nodes[idx,5] = 1\n non_visited = np.where(Exploring_nodes[:,5] != 1)[0]\n lowest_fcost = non_visited[np.where(Exploring_nodes[non_visited,3] \n == np.min(Exploring_nodes[non_visited,3]))[0]]\n\n #print(ExecutingNode.pos)\n if len(lowest_fcost) == 1:\n ExecutingNode = ExploredNodes[str(lowest_fcost[0])]\n #print(\"Fcost: \",lowest_fcost)\n else:\n lowest_hcost = non_visited[np.where(Exploring_nodes[non_visited,4] \n == np.min(Exploring_nodes[non_visited,4]))[0]]\n #print(\"Hcost: \",lowest_hcost)\n ExecutingNode = ExploredNodes[str(lowest_hcost[0])]\n\n #img[int(ExecutingNode.pos[0]),int(ExecutingNode.pos[1]),:] = 0\n #img[int(ExecutingNode.pos[0]),int(ExecutingNode.pos[1]),2] = 255\nplt.imshow(img)\nplt.show()",
"_____no_output_____"
],
[
"\n\nprint(Goal)",
"[156 111]\n"
],
[
"def GetPath(Start,FinalExecutingNode):\n Path = []\n current_node = FinalExecutingNode\n while Start.astype(int).tolist() != current_node.pos.astype(int).tolist():\n Path.append(current_node.pos.astype(int).tolist())\n current_node = current_node.parent\n img[int(current_node.pos[0]),int(current_node.pos[1]),:] = 0\n img[int(current_node.pos[0]),int(current_node.pos[1]),2] = 255\n return Path\n\n#print(GetPath(Start,ExecutingNode))\n#img = cv2.imread(\"./map_bw.png\")\n\nPath = GetPath(Start,ExecutingNode)\nplt.imshow(img)\nplt.show()",
"_____no_output_____"
],
[
"parent_node = ExecutingNode.parent\nparent_node.Print()\n\n",
"Position: [155 111]\nParent: <__main__.Node object at 0x000001234C967EB8>\nGcost: 116.2253967444161\nHcost: 1.0\nFcost: 117.2253967444161\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
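The A* notebook above tracks open and closed nodes through a NumPy array plus a dict of `Node` objects; the same search can be sketched far more compactly with a priority queue. The version below is a generic 4-connected grid with a Manhattan heuristic (the notebook itself uses 8-connected moves and a Euclidean heuristic), so it illustrates the algorithm rather than reproducing the notebook's exact behaviour:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """Minimal A* on a 0/1 grid (1 = navigable); returns the path from
    start to goal as (row, col) tuples, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance, admissible for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares positions/parents
    open_heap = [(h(start), next(tie), start, None)]
    came_from = {}           # node -> parent; doubles as the closed set
    g_cost = {start: 0}
    while open_heap:
        _, _, pos, parent = heapq.heappop(open_heap)
        if pos in came_from:  # already expanded via a cheaper route
            continue
        came_from[pos] = parent
        if pos == goal:       # walk parents back to reconstruct the path
            path = []
            while pos is not None:
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]
        r, c = pos
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                ng = g_cost[pos] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    heapq.heappush(open_heap, (ng + h(nb), next(tie), nb, pos))
    return None

maze = [[1, 1, 1],
        [0, 0, 1],
        [1, 1, 1]]
print(a_star(maze, (0, 0), (2, 0)))
```

On this maze the only route goes around the wall, so the returned path is `[(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]`.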
4a5b94c0aac24a6ad22337b66b1dc57b22416cbc
| 8,939 |
ipynb
|
Jupyter Notebook
|
Python Programming basics 13.ipynb
|
RitheshMacom/Python-basic-programming
|
a28f0929a343f2af12a2e1873221b0b14e6db435
|
[
"CNRI-Python"
] | null | null | null |
Python Programming basics 13.ipynb
|
RitheshMacom/Python-basic-programming
|
a28f0929a343f2af12a2e1873221b0b14e6db435
|
[
"CNRI-Python"
] | null | null | null |
Python Programming basics 13.ipynb
|
RitheshMacom/Python-basic-programming
|
a28f0929a343f2af12a2e1873221b0b14e6db435
|
[
"CNRI-Python"
] | 1 |
2022-03-10T19:23:28.000Z
|
2022-03-10T19:23:28.000Z
| 26.137427 | 212 | 0.502965 |
[
[
[
"### Basic Programming 13",
"_____no_output_____"
],
[
"### 1. Write a program that calculates and prints the value according to the given formula:\n\n### Q = Square root of [(2 C D)/H]\n\n### Following are the fixed values of C and H:\n\n### C is 50. H is 30.\n\n### D is the variable whose values should be input to your program in a comma-separated sequence.\n\n### Example\n\n### Let us assume the following comma separated input sequence is given to the program:\n\n### 100,150,180\n\n### The output of the program should be:\n\n### 18,22,24",
"_____no_output_____"
]
],
[
[
"import math\n\nnumbers = input(\"Provide D values, comma separated: \")\nnumbers = numbers.split(',')\n\nresult_list = []\nfor D in numbers:\n    Q = round(math.sqrt(2 * 50 * int(D) / 30))\n    result_list.append(str(Q))\n    \nprint(','.join(result_list))",
"Provide D values, comma separated: 120,435,109\n20,38,19\n"
]
],
[
[
"### 2. Write a program which takes 2 digits, X,Y as input and generates a 2-dimensional array. \n### The element value in the i-th row and j-th column of the array should be i*j.\n### Note: i=0,1.., X-1; j=0,1,¡Y-1.\n\n### Example\n### Suppose the following inputs are given to the program:\n### 3,5\n### Then, the output of the program should be:\n### [[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]",
"_____no_output_____"
]
],
[
[
"x=int(input('Enter the value of X: '))\ny=int(input('Enter the value of Y: '))\n\nl1=[]\nfor i in range(x):\n l2=[]\n for j in range(y):\n l2.append(i*j)\n l1.append(l2)\nl1",
"Enter the value of X: 7\nEnter the value of Y: 9\n"
]
],
[
[
"### 3. Write a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.\n\n\n### Suppose the following input is supplied to the program:\n### without,hello,bag,world\n### Then, the output should be:\n### bag,hello,without,world",
"_____no_output_____"
]
],
[
[
"items=[x for x in input('Enter comma separated words ').split(',')]\nitems.sort() \nprint(','.join(items))",
"Enter comma separated words water,has,bubble,circle\nbubble,circle,has,water\n"
]
],
[
[
"### 4.Write a program that accepts a sequence of whitespace separated words as input and prints the words after removing all duplicate words and sorting them alphanumerically.\n\n### Suppose the following input is supplied to the program:\n\n### hello world and practice makes perfect and hello world again\n\n### Then, the output should be:\n\n### again and hello makes perfect practice world",
"_____no_output_____"
]
],
[
[
"s=input('Enter the sequence of whitespace separated words: ').split(' ')\nprint(' '.join(sorted(set(s))))",
"Enter the sequence of whitespace separated words: hello world and practice makes perfect and hello world again\n again and hello makes perfect practice world\n"
]
],
[
[
"### 5. Write a program that accepts a sentence and calculates the number of letters and digits.\n\n### Suppose the following input is supplied to the program:\n### hello world! 123\n### Then, the output should be:\n### LETTERS 10\n### DIGITS 3",
"_____no_output_____"
]
],
[
[
"s = input(\"Input a string : \")\ndigits=letters=0\nfor c in s:\n if c.isdigit():\n digits += 1\n elif c.isalpha():\n letters += 1\n else:\n pass\nprint(\"Letters\", letters)\nprint(\"Digits\", digits)",
"Input a string : hello world! 123\nLetters 10\nDigits 3\n"
]
],
[
[
"### 6. A website requires the users to input username and password to register. Write a program to check the validity of password input by users.\n### Following are the criteria for checking the password:\n#### 1. At least 1 letter between [a-z]\n#### 2. At least 1 number between [0-9]\n#### 3. At least 1 letter between [A-Z]\n#### 4. At least 1 character from [$#@]\n#### 5. Minimum length of transaction password: 6\n#### 6. Maximum length of transaction password: 12\n### Your program should accept a sequence of comma separated passwords and will check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.\n### Example\n### If the following passwords are given as input to the program:\n### ABd1234@1,a F1#,2w3E*,2We3345\n### Then, the output of the program should be:\n### ABd1234@1",
"_____no_output_____"
]
],
[
[
"import re\n\npswd = input(\"Type the passwords in comma separated form: \").split(\",\")\n\nvalid = []\nfor i in pswd:\n\n    if len(i) < 6 or len(i) > 12:\n        continue\n\n    elif not re.search(\"[a-z]\", i):\n        continue\n\n    elif not re.search(\"[A-Z]\", i):\n        continue\n\n    elif not re.search(\"[0-9]\", i):\n        continue\n\n    elif not re.search(\"[$#@]\", i):\n        continue\n\n    else:\n        valid.append(i)\n\nif valid:\n    print(\",\".join(valid))\nelse:\n    print('Invalid password')",
"Type the passwords in comma separated form: ABd1234@1,a F1#,2w3E*,2We3345\nABd1234@1\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
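Exercise 6 in the notebook above maps naturally onto a list of regex rules checked with `all()`. Here is a compact alternative sketch of the same validation; the function name `valid_passwords` is mine, not from the notebook:

```python
import re

# One pattern per criterion from the exercise text.
RULES = (
    r"[a-z]",   # at least one lowercase letter
    r"[A-Z]",   # at least one uppercase letter
    r"[0-9]",   # at least one digit
    r"[$#@]",   # at least one special character from the stated set
)

def valid_passwords(line):
    """Return the comma-separated passwords in `line` that are 6-12
    characters long and match every pattern in RULES."""
    return [
        p for p in line.split(",")
        if 6 <= len(p) <= 12 and all(re.search(r, p) for r in RULES)
    ]

print(",".join(valid_passwords("ABd1234@1,a F1#,2w3E*,2We3345")))  # ABd1234@1
```

For the exercise's sample input only `ABd1234@1` survives: the other three candidates are either too short or lack a character from `[$#@]`.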
4a5ba891173fe0d5dcf491ee41ad81018f887a45
| 204,422 |
ipynb
|
Jupyter Notebook
|
Model 1.ipynb
|
molenathyhoangxuannguyen/Kaggle-Four-Shapes-Classification-Challenge-
|
26d78abb358936760f90014f29569d8e204a2d3d
|
[
"Apache-2.0"
] | 1 |
2020-12-27T17:17:50.000Z
|
2020-12-27T17:17:50.000Z
|
Model 1.ipynb
|
molenathyhoangxuannguyen/Kaggle-Four-Shapes-Classification-Challenge
|
26d78abb358936760f90014f29569d8e204a2d3d
|
[
"Apache-2.0"
] | null | null | null |
Model 1.ipynb
|
molenathyhoangxuannguyen/Kaggle-Four-Shapes-Classification-Challenge
|
26d78abb358936760f90014f29569d8e204a2d3d
|
[
"Apache-2.0"
] | null | null | null | 417.187755 | 118,568 | 0.933789 |
[
[
[
"!pip install efficientnet",
"Requirement already satisfied: efficientnet in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (1.1.1)\nRequirement already satisfied: keras-applications<=1.0.8,>=1.0.7 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from efficientnet) (1.0.8)\nRequirement already satisfied: scikit-image in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from efficientnet) (0.17.2)\nRequirement already satisfied: numpy>=1.9.1 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.19.2)\nRequirement already satisfied: h5py in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (2.10.0)\nRequirement already satisfied: six in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from h5py->keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.15.0)\nRequirement already satisfied: scipy>=1.0.1 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (1.5.2)\nRequirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (3.3.2)\nRequirement already satisfied: networkx>=2.0 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (2.5)\nRequirement already satisfied: pillow!=7.1.0,!=7.1.1,>=4.3.0 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (8.0.1)\nRequirement already satisfied: imageio>=2.3.0 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (2.9.0)\nRequirement already satisfied: tifffile>=2019.7.26 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (2020.10.1)\nRequirement already satisfied: PyWavelets>=1.1.1 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from scikit-image->efficientnet) (1.1.1)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (0.10.0)\nRequirement already satisfied: certifi>=2020.06.20 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (2020.6.20)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (1.3.0)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (2.8.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (2.4.7)\nRequirement already satisfied: decorator>=4.3.0 in c:\\users\\nguyent2\\anaconda3\\lib\\site-packages (from networkx>=2.0->scikit-image->efficientnet) (4.4.2)\n"
],
[
"#import the libraries needed \n\nimport pandas as pd\nimport numpy as np\n\nimport os\nimport cv2\n\nfrom tqdm import tqdm_notebook as tqdm\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import train_test_split\nfrom keras_preprocessing.image import ImageDataGenerator\n\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Dense\nimport efficientnet.tfkeras as efn\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
],
[
"current_path = r'C:\\Users\\nguyent2\\Desktop\\Kaggle-Four-Shapes-Classification-Challenge\\Kaggle Dataset\\shapes'\ncircle_paths = os.listdir(os.path.join(current_path, 'circle'))\nsquare_paths = os.listdir(os.path.join(current_path, 'square'))\nstar_paths = os.listdir(os.path.join(current_path, 'star'))\ntriangle_paths = os.listdir(os.path.join(current_path, 'triangle'))",
"_____no_output_____"
],
[
"print(f'We got {len(circle_paths)} circles, {len(square_paths)} squares, {len(star_paths)} stars, and {len(triangle_paths)} triangles' )",
"We got 3720 circles, 3765 squares, 3765 stars, and 3720 triangles\n"
],
[
"circles = pd.DataFrame()\nsquares = pd.DataFrame()\nstars = pd.DataFrame()\ntriangles = pd.DataFrame()\n\n# one-hot labels: 1 for the row's own class, 0 for every other class\nfor n,i in enumerate(tqdm(range(len(circle_paths)))):\n    circle_path = os.path.join(current_path, 'circle', circle_paths[i])\n    circles.loc[n,'path'] = circle_path\n    circles.loc[n, 'circle'] = 1\n    circles.loc[n, 'square'] = 0\n    circles.loc[n, 'star'] = 0\n    circles.loc[n, 'triangle'] = 0\n\nfor n,i in enumerate(tqdm(range(len(square_paths)))):\n    square_path = os.path.join(current_path, 'square', square_paths[i])\n    squares.loc[n,'path'] = square_path\n    squares.loc[n, 'circle'] = 0\n    squares.loc[n, 'square'] = 1\n    squares.loc[n, 'star'] = 0\n    squares.loc[n, 'triangle'] = 0\n    \nfor n,i in enumerate(tqdm(range(len(star_paths)))):\n    star_path = os.path.join(current_path, 'star', star_paths[i])\n    stars.loc[n,'path'] = star_path\n    stars.loc[n, 'circle'] = 0\n    stars.loc[n, 'square'] = 0\n    stars.loc[n, 'star'] = 1\n    stars.loc[n, 'triangle'] = 0\n    \nfor n,i in enumerate(tqdm(range(len(triangle_paths)))):\n    triangle_path = os.path.join(current_path, 'triangle', triangle_paths[i])\n    triangles.loc[n,'path'] = triangle_path\n    triangles.loc[n, 'circle'] = 0\n    triangles.loc[n, 'square'] = 0\n    triangles.loc[n, 'star'] = 0\n    triangles.loc[n, 'triangle'] = 1\n    \ndata = pd.concat([circles, squares, stars, triangles], axis=0).sample(frac=1.0, random_state=42).reset_index(drop=True)",
"_____no_output_____"
],
[
"plt.figure(figsize=(16,16))\n\nfor i in range(36):\n    plt.subplot(6,6,i+1)\n    img = cv2.imread(data.path[i])\n    plt.imshow(img)\n    plt.title(data.iloc[i,1:].idxmax())\n    plt.axis('off')",
"_____no_output_____"
],
[
"train, test = train_test_split(data, test_size=.3, random_state=42)\n\ntrain.shape, test.shape",
"_____no_output_____"
],
[
"example = train.sample(n=1).reset_index(drop=True)\nexample_data_gen = ImageDataGenerator(\n rescale=1./255,\n horizontal_flip=True,\n vertical_flip=True,\n)\n\nexample_gen = example_data_gen.flow_from_dataframe(example,\n target_size=(200,200),\n x_col=\"path\",\n y_col=['circle', 'square', 'star','triangle'],\n class_mode='raw',\n shuffle=False,\n batch_size=32)\n\nplt.figure(figsize=(20, 20))\nfor i in range(0, 9):\n plt.subplot(3, 3, i+1)\n for X_batch, _ in example_gen:\n image = X_batch[0]\n plt.imshow(image)\n plt.axis('off')\n break",
"Found 1 validated image filenames.\n"
],
[
"test_data_gen= ImageDataGenerator(rescale=1./255)\n\ntrain_data_gen= ImageDataGenerator(\n rescale=1./255,\n horizontal_flip=True,\n vertical_flip=True,\n)",
"_____no_output_____"
],
[
"train_generator=train_data_gen.flow_from_dataframe(train,\n target_size=(200,200),\n x_col=\"path\",\n y_col=['circle','square', 'star','triangle'],\n class_mode='raw',\n shuffle=False,\n batch_size=32)",
"Found 10479 validated image filenames.\n"
],
[
"test_generator=test_data_gen.flow_from_dataframe(test,\n target_size=(200,200),\n x_col=\"path\",\n y_col=['circle', 'square','star','triangle'],\n class_mode='raw',\n shuffle=False,\n batch_size=1)",
"Found 4491 validated image filenames.\n"
],
[
"def get_model():\n base_model = efn.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg', input_shape=(200, 200, 3))\n x = base_model.output\n predictions = Dense(4, activation='softmax')(x)\n model = Model(inputs=base_model.input, outputs=predictions)\n model.compile(optimizer='adam', loss='categorical_crossentropy',metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"model = get_model()\nmodel.fit(train_generator,\n          epochs=1,\n          steps_per_epoch=train_generator.n // 32,\n          )",
" 33/327 [==>...........................] - ETA: 31:25 - loss: 12.6281 - accuracy: 0.8330"
],
[
"model.evaluate(test_generator)",
"_____no_output_____"
],
[
"pred_test = np.argmax(model.predict(test_generator, verbose=1), axis=1)",
"_____no_output_____"
],
[
"plt.figure(figsize=(24,24))\n\nfor i in range(100):\n plt.subplot(10,10,i+1)\n img = cv2.imread(test.reset_index(drop=True).path[i])\n plt.imshow(img)\n plt.title(test.reset_index(drop=True).iloc[0,1:].index[pred_test[i]])\n plt.axis('off')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
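The shapes notebook above builds its one-hot label columns row by row with `DataFrame.loc`; a minimal pandas sketch of that labeling step (the `one_hot_frame` helper and the toy file names are made up for illustration) could look like:

```python
import pandas as pd

# Class list mirroring the notebook's four shape columns
CLASSES = ["circle", "square", "star", "triangle"]

def one_hot_frame(paths, label):
    """Build a frame with a path column plus one-hot columns:
    1 for the row's own class, 0 for every other class."""
    df = pd.DataFrame({"path": paths})
    for c in CLASSES:
        df[c] = 1 if c == label else 0
    return df

circles = one_hot_frame(["c1.png", "c2.png"], "circle")
stars = one_hot_frame(["s1.png"], "star")

# Concatenate per-class frames into one labeled table
data = pd.concat([circles, stars], ignore_index=True)
```

One-hot targets of this shape (a single 1 per row) are what a softmax output trained with `categorical_crossentropy` expects when the label columns are fed through `class_mode='raw'`.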
4a5ba9b8d8f459344f22d33ca3ce5dacfc576fc8
| 64,564 |
ipynb
|
Jupyter Notebook
|
train/Randomforest.ipynb
|
ehjhihlo/Phylo-GCN
|
75787adc829e2897c8083adb1c82044da2edb938
|
[
"MIT"
] | 1 |
2022-03-25T02:42:19.000Z
|
2022-03-25T02:42:19.000Z
|
train/Randomforest.ipynb
|
ehjhihlo/Phylo-GCN
|
75787adc829e2897c8083adb1c82044da2edb938
|
[
"MIT"
] | null | null | null |
train/Randomforest.ipynb
|
ehjhihlo/Phylo-GCN
|
75787adc829e2897c8083adb1c82044da2edb938
|
[
"MIT"
] | null | null | null | 46.819434 | 5,578 | 0.392154 |
[
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport math\nimport os\nimport seaborn as sns\nfrom sklearn import datasets\nfrom sklearn import metrics\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import confusion_matrix, roc_auc_score, f1_score, precision_score, recall_score\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'",
"_____no_output_____"
],
[
"# !gdown --id '1S9iwczSf6KL5jMSmU20SXKCSD3BUx4o_' --output level-6.csv #GMPR_genus\n!gdown --id '1q0yp1iM66BKvqee46bOuSZYwl_SJCTp0' --output level-6.csv #GMPR_species",
"Downloading...\nFrom: https://drive.google.com/uc?id=1q0yp1iM66BKvqee46bOuSZYwl_SJCTp0\nTo: /content/level-6.csv\n\r 0% 0.00/1.67M [00:00<?, ?B/s]\r100% 1.67M/1.67M [00:00<00:00, 61.1MB/s]\n"
],
[
"train = pd.read_csv(\"level-6.csv\")\ntrain.head()\ntrain.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 443 entries, 0 to 442\nColumns: 1139 entries, index to Diagnosis\ndtypes: float64(785), int64(352), object(2)\nmemory usage: 3.8+ MB\n"
],
[
"from sklearn.preprocessing import LabelEncoder\nlabelencoder = LabelEncoder()\ntrain[\"Diagnosis\"] = labelencoder.fit_transform(train[\"Diagnosis\"])\n# test[\"Diagnosis\"] = labelencoder.fit_transform(test[\"Diagnosis\"])\n# for i in range(len(train)):\n# if train[\"Diagnosis\"][i] == 'Cancer':\n# train[\"Diagnosis\"][i] = str(1)\n# else:\n# train[\"Diagnosis\"][i] = str(0)\ntrain",
"_____no_output_____"
],
[
"not_select = [\"index\", \"Diagnosis\"]\ntrain_select = train.drop(not_select,axis=1)\ndf_final_select = train_select",
"_____no_output_____"
]
],
[
[
"#Random Forest Classifier",
"_____no_output_____"
]
],
[
[
"#Use RandomForestClassifier to predict Cancer\nx = df_final_select\ny = train[\"Diagnosis\"]\n# y = np.array(y,dtype=int)\nX_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=0)\n\n#RandomForest\nrfc = RandomForestClassifier(n_estimators=1000)\nrfc.fit(X_train,y_train)\ny_predict = rfc.predict(X_test)\nscore_rfc = rfc.score(X_test,y_test)\nscore_rfc_train = rfc.score(X_train,y_train)\nprint(\"train_accuracy = \",score_rfc_train*100,\" %\")\nprint(\"val_accuracy = \",score_rfc*100,\" %\")",
"train_accuracy = 100.0 %\nval_accuracy = 82.02247191011236 %\n"
],
[
"mat = confusion_matrix(y_test, y_predict)\nsns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)\nplt.xlabel('true label')\nplt.ylabel('predicted label')\nscore_recall = recall_score(y_test, y_predict, average=None)\nf1score = f1_score(y_test, y_predict, average=\"macro\")\nprecisionscore = precision_score(y_test, y_predict, average=None)\nauc_roc = roc_auc_score(y_test, y_predict)\nprint(\"precision = \",precisionscore)\nprint(\"recall = \",score_recall)\nprint(\"auc_roc = \",auc_roc)\nprint(\"f1_score = \",f1score)\n\nwith open('RF_result.csv','w') as f:\n f.write('Precision_Normal,Precision_Cancer,Recall_Normal,Recall_Cancer,Auc_Score,F1_Score,')\n f.write('\\n')\n f.write(str(precisionscore[0])+','+str(precisionscore[1])+','+str(score_recall[0])+','+str(score_recall[1])+','+str(auc_roc)+','+str(f1score))",
"precision = [0.84210526 0.78125 ]\nrecall = [0.87272727 0.73529412]\nauc_roc = 0.8040106951871658\nf1_score = 0.8073593073593073\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
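The random-forest record above reports per-class precision and recall, ROC AUC, and macro F1 from scikit-learn; a self-contained sketch of those metric calls on made-up binary labels (the toy arrays below are purely illustrative):

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Made-up ground truth and hard predictions, just to exercise the calls
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])

mat = confusion_matrix(y_true, y_pred)                 # rows = true, cols = predicted
prec = precision_score(y_true, y_pred, average=None)   # one precision per class
rec = recall_score(y_true, y_pred, average=None)       # one recall per class
auc = roc_auc_score(y_true, y_pred)                    # hard 0/1 labels as "scores"
f1 = f1_score(y_true, y_pred, average="macro")         # unweighted mean of class F1s
```

Note that passing hard 0/1 predictions to `roc_auc_score`, as the notebook does, collapses the ROC curve to a single operating point; feeding `predict_proba` scores instead would give the usual threshold-free AUC.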
4a5bc3a3c5684e3b23ef4906e6e924afeefb0309
| 56,124 |
ipynb
|
Jupyter Notebook
|
assignments/Transfer_4.2_ImageNet.ipynb
|
8p631G9/8p361-project-imaging
|
32713052118a83836b485f0f14c16113da49a69b
|
[
"MIT"
] | null | null | null |
assignments/Transfer_4.2_ImageNet.ipynb
|
8p631G9/8p361-project-imaging
|
32713052118a83836b485f0f14c16113da49a69b
|
[
"MIT"
] | null | null | null |
assignments/Transfer_4.2_ImageNet.ipynb
|
8p631G9/8p361-project-imaging
|
32713052118a83836b485f0f14c16113da49a69b
|
[
"MIT"
] | null | null | null | 89.942308 | 1,835 | 0.674863 |
[
[
[
"# disable overly verbose tensorflow logging\nimport os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'} \nimport tensorflow as tf\n\n\nimport numpy as np\n\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.models import Sequential, Model\nfrom tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, Dropout, Flatten, Conv2D, MaxPool2D, Reshape\nfrom tensorflow.keras.optimizers import SGD\nfrom tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard\n\nfrom tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input\n\n# unused for now, to be used for ROC analysis\nfrom sklearn.metrics import roc_curve, auc\n\nallow_growth = True\n\n# the size of the images in the PCAM dataset\nIMAGE_SIZE = 96\n\ndatagen = ImageDataGenerator(preprocessing_function=preprocess_input)",
"_____no_output_____"
]
],
[
[
"# Initialize the MobileNetV2 model for fine-tuning on the dataset",
"_____no_output_____"
]
],
[
[
"input_shape = (IMAGE_SIZE, IMAGE_SIZE, 3)\n\n\ninputs = Input(input_shape)\n\n# get the pretrained model, cut out the top layer\npretrained = MobileNetV2(input_shape=input_shape, include_top=False, weights='imagenet')\npretrained.summary()\n# if the pretrained model is to be used as a feature extractor, and not for\n# fine-tuning, the weights of the model can be frozen in the following way\n# for layer in pretrained.layers:\n#     layer.trainable = False\n\noutput = pretrained(inputs)\noutput = GlobalAveragePooling2D()(output)\noutput = Dropout(0.5)(output)\noutput = Dense(1, activation='sigmoid')(output)\n\nmodel = Model(inputs, output)\n\n# note the lower learning rate compared to the cnn example\nmodel.compile(SGD(learning_rate=0.001, momentum=0.95), loss='binary_crossentropy', metrics=['accuracy'])\n\n# print a summary of the model on screen\nmodel.summary()",
"Model: \"mobilenetv2_1.00_96\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) [(None, 96, 96, 3)] 0 \n__________________________________________________________________________________________________\nConv1 (Conv2D) (None, 48, 48, 32) 864 input_2[0][0] \n__________________________________________________________________________________________________\nbn_Conv1 (BatchNormalization) (None, 48, 48, 32) 128 Conv1[0][0] \n__________________________________________________________________________________________________\nConv1_relu (ReLU) (None, 48, 48, 32) 0 bn_Conv1[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_depthwise (Depthw (None, 48, 48, 32) 288 Conv1_relu[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_depthwise_BN (Bat (None, 48, 48, 32) 128 expanded_conv_depthwise[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_depthwise_relu (R (None, 48, 48, 32) 0 expanded_conv_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_project (Conv2D) (None, 48, 48, 16) 512 expanded_conv_depthwise_relu[0][0\n__________________________________________________________________________________________________\nexpanded_conv_project_BN (Batch (None, 48, 48, 16) 64 expanded_conv_project[0][0] \n__________________________________________________________________________________________________\nblock_1_expand (Conv2D) (None, 48, 48, 96) 1536 expanded_conv_project_BN[0][0] 
\n__________________________________________________________________________________________________\nblock_1_expand_BN (BatchNormali (None, 48, 48, 96) 384 block_1_expand[0][0] \n__________________________________________________________________________________________________\nblock_1_expand_relu (ReLU) (None, 48, 48, 96) 0 block_1_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_1_pad (ZeroPadding2D) (None, 49, 49, 96) 0 block_1_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise (DepthwiseCon (None, 24, 24, 96) 864 block_1_pad[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise_BN (BatchNorm (None, 24, 24, 96) 384 block_1_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise_relu (ReLU) (None, 24, 24, 96) 0 block_1_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_1_project (Conv2D) (None, 24, 24, 24) 2304 block_1_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_1_project_BN (BatchNormal (None, 24, 24, 24) 96 block_1_project[0][0] \n__________________________________________________________________________________________________\nblock_2_expand (Conv2D) (None, 24, 24, 144) 3456 block_1_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_2_expand_BN (BatchNormali (None, 24, 24, 144) 576 block_2_expand[0][0] \n__________________________________________________________________________________________________\nblock_2_expand_relu (ReLU) (None, 24, 24, 144) 0 block_2_expand_BN[0][0] 
\n__________________________________________________________________________________________________\nblock_2_depthwise (DepthwiseCon (None, 24, 24, 144) 1296 block_2_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_2_depthwise_BN (BatchNorm (None, 24, 24, 144) 576 block_2_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_2_depthwise_relu (ReLU) (None, 24, 24, 144) 0 block_2_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_2_project (Conv2D) (None, 24, 24, 24) 3456 block_2_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_2_project_BN (BatchNormal (None, 24, 24, 24) 96 block_2_project[0][0] \n__________________________________________________________________________________________________\nblock_2_add (Add) (None, 24, 24, 24) 0 block_1_project_BN[0][0] \n block_2_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_expand (Conv2D) (None, 24, 24, 144) 3456 block_2_add[0][0] \n__________________________________________________________________________________________________\nblock_3_expand_BN (BatchNormali (None, 24, 24, 144) 576 block_3_expand[0][0] \n__________________________________________________________________________________________________\nblock_3_expand_relu (ReLU) (None, 24, 24, 144) 0 block_3_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_pad (ZeroPadding2D) (None, 25, 25, 144) 0 block_3_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_3_depthwise (DepthwiseCon (None, 12, 12, 144) 1296 block_3_pad[0][0] 
\n__________________________________________________________________________________________________\nblock_3_depthwise_BN (BatchNorm (None, 12, 12, 144) 576 block_3_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_3_depthwise_relu (ReLU) (None, 12, 12, 144) 0 block_3_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_project (Conv2D) (None, 12, 12, 32) 4608 block_3_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_3_project_BN (BatchNormal (None, 12, 12, 32) 128 block_3_project[0][0] \n__________________________________________________________________________________________________\nblock_4_expand (Conv2D) (None, 12, 12, 192) 6144 block_3_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_expand_BN (BatchNormali (None, 12, 12, 192) 768 block_4_expand[0][0] \n__________________________________________________________________________________________________\nblock_4_expand_relu (ReLU) (None, 12, 12, 192) 0 block_4_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise (DepthwiseCon (None, 12, 12, 192) 1728 block_4_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise_BN (BatchNorm (None, 12, 12, 192) 768 block_4_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise_relu (ReLU) (None, 12, 12, 192) 0 block_4_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_project (Conv2D) (None, 12, 12, 32) 6144 block_4_depthwise_relu[0][0] 
\n__________________________________________________________________________________________________\nblock_4_project_BN (BatchNormal (None, 12, 12, 32) 128 block_4_project[0][0] \n__________________________________________________________________________________________________\nblock_4_add (Add) (None, 12, 12, 32) 0 block_3_project_BN[0][0] \n block_4_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_expand (Conv2D) (None, 12, 12, 192) 6144 block_4_add[0][0] \n__________________________________________________________________________________________________\nblock_5_expand_BN (BatchNormali (None, 12, 12, 192) 768 block_5_expand[0][0] \n__________________________________________________________________________________________________\nblock_5_expand_relu (ReLU) (None, 12, 12, 192) 0 block_5_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise (DepthwiseCon (None, 12, 12, 192) 1728 block_5_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise_BN (BatchNorm (None, 12, 12, 192) 768 block_5_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise_relu (ReLU) (None, 12, 12, 192) 0 block_5_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_project (Conv2D) (None, 12, 12, 32) 6144 block_5_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_5_project_BN (BatchNormal (None, 12, 12, 32) 128 block_5_project[0][0] \n__________________________________________________________________________________________________\nblock_5_add (Add) (None, 12, 12, 32) 0 block_4_add[0][0] \n block_5_project_BN[0][0] 
\n__________________________________________________________________________________________________\nblock_6_expand (Conv2D) (None, 12, 12, 192) 6144 block_5_add[0][0] \n__________________________________________________________________________________________________\nblock_6_expand_BN (BatchNormali (None, 12, 12, 192) 768 block_6_expand[0][0] \n__________________________________________________________________________________________________\nblock_6_expand_relu (ReLU) (None, 12, 12, 192) 0 block_6_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_6_pad (ZeroPadding2D) (None, 13, 13, 192) 0 block_6_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise (DepthwiseCon (None, 6, 6, 192) 1728 block_6_pad[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise_BN (BatchNorm (None, 6, 6, 192) 768 block_6_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise_relu (ReLU) (None, 6, 6, 192) 0 block_6_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_6_project (Conv2D) (None, 6, 6, 64) 12288 block_6_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_6_project_BN (BatchNormal (None, 6, 6, 64) 256 block_6_project[0][0] \n__________________________________________________________________________________________________\nblock_7_expand (Conv2D) (None, 6, 6, 384) 24576 block_6_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_expand_BN (BatchNormali (None, 6, 6, 384) 1536 block_7_expand[0][0] 
\n__________________________________________________________________________________________________\nblock_7_expand_relu (ReLU) (None, 6, 6, 384) 0 block_7_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise (DepthwiseCon (None, 6, 6, 384) 3456 block_7_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise_BN (BatchNorm (None, 6, 6, 384) 1536 block_7_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise_relu (ReLU) (None, 6, 6, 384) 0 block_7_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_project (Conv2D) (None, 6, 6, 64) 24576 block_7_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_7_project_BN (BatchNormal (None, 6, 6, 64) 256 block_7_project[0][0] \n__________________________________________________________________________________________________\nblock_7_add (Add) (None, 6, 6, 64) 0 block_6_project_BN[0][0] \n block_7_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_expand (Conv2D) (None, 6, 6, 384) 24576 block_7_add[0][0] \n__________________________________________________________________________________________________\nblock_8_expand_BN (BatchNormali (None, 6, 6, 384) 1536 block_8_expand[0][0] \n__________________________________________________________________________________________________\nblock_8_expand_relu (ReLU) (None, 6, 6, 384) 0 block_8_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_depthwise (DepthwiseCon (None, 6, 6, 384) 3456 block_8_expand_relu[0][0] 
\n__________________________________________________________________________________________________\nblock_8_depthwise_BN (BatchNorm (None, 6, 6, 384) 1536 block_8_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_8_depthwise_relu (ReLU) (None, 6, 6, 384) 0 block_8_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_project (Conv2D) (None, 6, 6, 64) 24576 block_8_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_8_project_BN (BatchNormal (None, 6, 6, 64) 256 block_8_project[0][0] \n__________________________________________________________________________________________________\nblock_8_add (Add) (None, 6, 6, 64) 0 block_7_add[0][0] \n block_8_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_9_expand (Conv2D) (None, 6, 6, 384) 24576 block_8_add[0][0] \n__________________________________________________________________________________________________\nblock_9_expand_BN (BatchNormali (None, 6, 6, 384) 1536 block_9_expand[0][0] \n__________________________________________________________________________________________________\nblock_9_expand_relu (ReLU) (None, 6, 6, 384) 0 block_9_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise (DepthwiseCon (None, 6, 6, 384) 3456 block_9_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise_BN (BatchNorm (None, 6, 6, 384) 1536 block_9_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise_relu (ReLU) (None, 6, 6, 384) 0 block_9_depthwise_BN[0][0] 
\n__________________________________________________________________________________________________\nblock_9_project (Conv2D) (None, 6, 6, 64) 24576 block_9_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_9_project_BN (BatchNormal (None, 6, 6, 64) 256 block_9_project[0][0] \n__________________________________________________________________________________________________\nblock_9_add (Add) (None, 6, 6, 64) 0 block_8_add[0][0] \n block_9_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_expand (Conv2D) (None, 6, 6, 384) 24576 block_9_add[0][0] \n__________________________________________________________________________________________________\nblock_10_expand_BN (BatchNormal (None, 6, 6, 384) 1536 block_10_expand[0][0] \n__________________________________________________________________________________________________\nblock_10_expand_relu (ReLU) (None, 6, 6, 384) 0 block_10_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise (DepthwiseCo (None, 6, 6, 384) 3456 block_10_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise_BN (BatchNor (None, 6, 6, 384) 1536 block_10_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise_relu (ReLU) (None, 6, 6, 384) 0 block_10_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_project (Conv2D) (None, 6, 6, 96) 36864 block_10_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_10_project_BN (BatchNorma (None, 6, 6, 96) 384 block_10_project[0][0] 
\n__________________________________________________________________________________________________\nblock_11_expand (Conv2D) (None, 6, 6, 576) 55296 block_10_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_expand_BN (BatchNormal (None, 6, 6, 576) 2304 block_11_expand[0][0] \n__________________________________________________________________________________________________\nblock_11_expand_relu (ReLU) (None, 6, 6, 576) 0 block_11_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise (DepthwiseCo (None, 6, 6, 576) 5184 block_11_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise_BN (BatchNor (None, 6, 6, 576) 2304 block_11_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise_relu (ReLU) (None, 6, 6, 576) 0 block_11_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_project (Conv2D) (None, 6, 6, 96) 55296 block_11_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_11_project_BN (BatchNorma (None, 6, 6, 96) 384 block_11_project[0][0] \n__________________________________________________________________________________________________\nblock_11_add (Add) (None, 6, 6, 96) 0 block_10_project_BN[0][0] \n block_11_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_expand (Conv2D) (None, 6, 6, 576) 55296 block_11_add[0][0] \n__________________________________________________________________________________________________\nblock_12_expand_BN (BatchNormal (None, 6, 6, 576) 2304 block_12_expand[0][0] 
\n__________________________________________________________________________________________________\nblock_12_expand_relu (ReLU) (None, 6, 6, 576) 0 block_12_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise (DepthwiseCo (None, 6, 6, 576) 5184 block_12_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise_BN (BatchNor (None, 6, 6, 576) 2304 block_12_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise_relu (ReLU) (None, 6, 6, 576) 0 block_12_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_project (Conv2D) (None, 6, 6, 96) 55296 block_12_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_12_project_BN (BatchNorma (None, 6, 6, 96) 384 block_12_project[0][0] \n__________________________________________________________________________________________________\nblock_12_add (Add) (None, 6, 6, 96) 0 block_11_add[0][0] \n block_12_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_expand (Conv2D) (None, 6, 6, 576) 55296 block_12_add[0][0] \n__________________________________________________________________________________________________\nblock_13_expand_BN (BatchNormal (None, 6, 6, 576) 2304 block_13_expand[0][0] \n__________________________________________________________________________________________________\nblock_13_expand_relu (ReLU) (None, 6, 6, 576) 0 block_13_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_pad (ZeroPadding2D) (None, 7, 7, 576) 0 block_13_expand_relu[0][0] 
\n__________________________________________________________________________________________________\nblock_13_depthwise (DepthwiseCo (None, 3, 3, 576) 5184 block_13_pad[0][0] \n__________________________________________________________________________________________________\nblock_13_depthwise_BN (BatchNor (None, 3, 3, 576) 2304 block_13_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_13_depthwise_relu (ReLU) (None, 3, 3, 576) 0 block_13_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_project (Conv2D) (None, 3, 3, 160) 92160 block_13_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_13_project_BN (BatchNorma (None, 3, 3, 160) 640 block_13_project[0][0] \n__________________________________________________________________________________________________\nblock_14_expand (Conv2D) (None, 3, 3, 960) 153600 block_13_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_14_expand_BN (BatchNormal (None, 3, 3, 960) 3840 block_14_expand[0][0] \n__________________________________________________________________________________________________\nblock_14_expand_relu (ReLU) (None, 3, 3, 960) 0 block_14_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise (DepthwiseCo (None, 3, 3, 960) 8640 block_14_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise_BN (BatchNor (None, 3, 3, 960) 3840 block_14_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise_relu (ReLU) (None, 3, 3, 960) 0 block_14_depthwise_BN[0][0] 
\n__________________________________________________________________________________________________\nblock_14_project (Conv2D) (None, 3, 3, 160) 153600 block_14_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_14_project_BN (BatchNorma (None, 3, 3, 160) 640 block_14_project[0][0] \n__________________________________________________________________________________________________\nblock_14_add (Add) (None, 3, 3, 160) 0 block_13_project_BN[0][0] \n block_14_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_expand (Conv2D) (None, 3, 3, 960) 153600 block_14_add[0][0] \n__________________________________________________________________________________________________\nblock_15_expand_BN (BatchNormal (None, 3, 3, 960) 3840 block_15_expand[0][0] \n__________________________________________________________________________________________________\nblock_15_expand_relu (ReLU) (None, 3, 3, 960) 0 block_15_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise (DepthwiseCo (None, 3, 3, 960) 8640 block_15_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise_BN (BatchNor (None, 3, 3, 960) 3840 block_15_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise_relu (ReLU) (None, 3, 3, 960) 0 block_15_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_project (Conv2D) (None, 3, 3, 160) 153600 block_15_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_15_project_BN (BatchNorma (None, 3, 3, 160) 640 block_15_project[0][0] 
\n__________________________________________________________________________________________________\nblock_15_add (Add) (None, 3, 3, 160) 0 block_14_add[0][0] \n block_15_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_expand (Conv2D) (None, 3, 3, 960) 153600 block_15_add[0][0] \n__________________________________________________________________________________________________\nblock_16_expand_BN (BatchNormal (None, 3, 3, 960) 3840 block_16_expand[0][0] \n__________________________________________________________________________________________________\nblock_16_expand_relu (ReLU) (None, 3, 3, 960) 0 block_16_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise (DepthwiseCo (None, 3, 3, 960) 8640 block_16_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise_BN (BatchNor (None, 3, 3, 960) 3840 block_16_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise_relu (ReLU) (None, 3, 3, 960) 0 block_16_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_project (Conv2D) (None, 3, 3, 320) 307200 block_16_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_16_project_BN (BatchNorma (None, 3, 3, 320) 1280 block_16_project[0][0] \n__________________________________________________________________________________________________\nConv_1 (Conv2D) (None, 3, 3, 1280) 409600 block_16_project_BN[0][0] \n__________________________________________________________________________________________________\nConv_1_bn (BatchNormalization) (None, 3, 3, 1280) 5120 Conv_1[0][0] 
\n__________________________________________________________________________________________________\nout_relu (ReLU) (None, 3, 3, 1280) 0 Conv_1_bn[0][0] \n==================================================================================================\nTotal params: 2,257,984\nTrainable params: 2,223,872\nNon-trainable params: 34,112\n__________________________________________________________________________________________________\n"
]
],
[
[
"# Get the data generators",
"_____no_output_____"
]
],
[
[
"def get_pcam_generators(base_dir, train_batch_size=32, val_batch_size=32):\n\n # dataset parameters\n train_path = os.path.join(base_dir, 'train+val', 'train')\n valid_path = os.path.join(base_dir, 'train+val', 'valid')\n\t \n # instantiate data generators\n datagen = ImageDataGenerator(preprocessing_function=preprocess_input)\n\n train_gen = datagen.flow_from_directory(train_path,\n target_size=(IMAGE_SIZE, IMAGE_SIZE),\n batch_size=train_batch_size,\n class_mode='binary')\n\n val_gen = datagen.flow_from_directory(valid_path,\n target_size=(IMAGE_SIZE, IMAGE_SIZE),\n batch_size=val_batch_size,\n class_mode='binary')\n\n return train_gen, val_gen",
"_____no_output_____"
],
[
"# get the data generators\ntrain_gen, val_gen = get_pcam_generators(r'C:\\Users\\20173884\\Documents\\8P361')",
"Found 144000 images belonging to 2 classes.\nFound 16000 images belonging to 2 classes.\n"
]
],
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"# save the model and weights\nmodel_name = 'transfer_4.2_ImageNet_model'\nmodel_filepath = model_name + '.json'\nweights_filepath = model_name + '_weights.hdf5'\n\nmodel_json = model.to_json() # serialize model to JSON\nwith open(model_filepath, 'w') as json_file:\n json_file.write(model_json)\n\n\n# define the model checkpoint and Tensorboard callbacks\ncheckpoint = ModelCheckpoint(weights_filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')\ntensorboard = TensorBoard(os.path.join('logs', model_name))\ncallbacks_list = [checkpoint, tensorboard]\n\n\n# train the model, note that we define \"mini-epochs\"\ntrain_steps = train_gen.n//train_gen.batch_size//20\nval_steps = val_gen.n//val_gen.batch_size//20\n\n# since the model is trained for only 10 \"mini-epochs\", i.e. half of the data is\n# not used during training\nhistory = model.fit_generator(train_gen, steps_per_epoch=train_steps,\n validation_data=val_gen,\n validation_steps=val_steps,\n epochs=10,\n callbacks=callbacks_list)",
"C:\\Users\\20173884\\AppData\\Local\\Continuum\\anaconda3\\envs\\8p361\\lib\\site-packages\\tensorflow\\python\\keras\\engine\\training.py:1844: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.\n warnings.warn('`Model.fit_generator` is deprecated and '\n"
]
],
[
[
"### ",
"_____no_output_____"
],
[
"### View loss graph\n````bash\nactivate 8p361\ncd 'path/where/logs/are'\ntensorboard --logdir logs\n````",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a5bcd3ca4ecd07c0904dfe579ba9c9681cfc357
| 77,301 |
ipynb
|
Jupyter Notebook
|
session/tatabahasa/transformertag-base-tatabahasa.ipynb
|
AetherPrior/malaya
|
45d37b171dff9e92c5d30bd7260b282cd0912a7d
|
[
"MIT"
] | 88 |
2021-01-06T10:01:31.000Z
|
2022-03-30T17:34:09.000Z
|
session/tatabahasa/transformertag-base-tatabahasa.ipynb
|
AetherPrior/malaya
|
45d37b171dff9e92c5d30bd7260b282cd0912a7d
|
[
"MIT"
] | 43 |
2021-01-14T02:44:41.000Z
|
2022-03-31T19:47:42.000Z
|
session/tatabahasa/transformertag-base-tatabahasa.ipynb
|
AetherPrior/malaya
|
45d37b171dff9e92c5d30bd7260b282cd0912a7d
|
[
"MIT"
] | 38 |
2021-01-06T07:15:03.000Z
|
2022-03-19T05:07:50.000Z
| 38.305748 | 316 | 0.605594 |
[
[
[
"import os\n\nos.environ['CUDA_VISIBLE_DEVICES'] = '3'",
"_____no_output_____"
],
[
"from tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.data_generators import translate\nfrom tensor2tensor.layers import common_attention\nfrom tensor2tensor.utils import registry\nfrom tensor2tensor import problems\nimport tensorflow as tf\nimport os\nimport logging\nimport sentencepiece as spm\nimport transformer_tag\nfrom tensor2tensor.layers import modalities",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_gan/python/estimator/tpu_gan_estimator.py:42: The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead.\n\n"
],
[
"vocab = 'sp10m.cased.t5.model'\nsp = spm.SentencePieceProcessor()\nsp.Load(vocab)\n\nclass Encoder:\n def __init__(self, sp):\n self.sp = sp\n self.vocab_size = sp.GetPieceSize() + 100\n\n def encode(self, s):\n return self.sp.EncodeAsIds(s)\n\n def decode(self, ids, strip_extraneous = False):\n return self.sp.DecodeIds(list(ids))",
"_____no_output_____"
],
[
"d = [\n {'class': 0, 'Description': 'PAD', 'salah': '', 'betul': ''},\n {\n 'class': 1,\n 'Description': 'kesambungan subwords',\n 'salah': '',\n 'betul': '',\n },\n {\n 'class': 2,\n 'Description': 'tiada kesalahan',\n 'salah': '',\n 'betul': '',\n },\n {\n 'class': 3,\n 'Description': 'kesalahan frasa nama, Perkara yang diterangkan mesti mendahului \"penerang\"',\n 'salah': 'Cili sos',\n 'betul': 'sos cili',\n },\n {\n 'class': 4,\n 'Description': 'kesalahan kata jamak',\n 'salah': 'mereka-mereka',\n 'betul': 'mereka',\n },\n {\n 'class': 5,\n 'Description': 'kesalahan kata penguat',\n 'salah': 'sangat tinggi sekali',\n 'betul': 'sangat tinggi',\n },\n {\n 'class': 6,\n 'Description': 'kata adjektif dan imbuhan \"ter\" tanpa penguat.',\n 'salah': 'Sani mendapat markah yang tertinggi sekali.',\n 'betul': 'Sani mendapat markah yang tertinggi.',\n },\n {\n 'class': 7,\n 'Description': 'kesalahan kata hubung',\n 'salah': 'Sally sedang membaca bila saya tiba di rumahnya.',\n 'betul': 'Sally sedang membaca apabila saya tiba di rumahnya.',\n },\n {\n 'class': 8,\n 'Description': 'kesalahan kata bilangan',\n 'salah': 'Beribu peniaga tidak membayar cukai pendapatan.',\n 'betul': 'Beribu-ribu peniaga tidak membayar cukai pendapatan',\n },\n {\n 'class': 9,\n 'Description': 'kesalahan kata sendi',\n 'salah': 'Umar telah berpindah daripada sekolah ini bulan lalu.',\n 'betul': 'Umar telah berpindah dari sekolah ini bulan lalu.',\n },\n {\n 'class': 10,\n 'Description': 'kesalahan penjodoh bilangan',\n 'salah': 'Setiap orang pelajar',\n 'betul': 'Setiap pelajar.',\n },\n {\n 'class': 11,\n 'Description': 'kesalahan kata ganti diri',\n 'salah': 'Pencuri itu telah ditangkap. Beliau dibawa ke balai polis.',\n 'betul': 'Pencuri itu telah ditangkap. 
Dia dibawa ke balai polis.',\n },\n {\n 'class': 12,\n 'Description': 'kesalahan ayat pasif',\n 'salah': 'Cerpen itu telah dikarang oleh saya.',\n 'betul': 'Cerpen itu telah saya karang.',\n },\n {\n 'class': 13,\n 'Description': 'kesalahan kata tanya',\n 'salah': 'Kamu berasal dari manakah ?',\n 'betul': 'Kamu berasal dari mana ?',\n },\n {\n 'class': 14,\n 'Description': 'kesalahan tanda baca',\n 'salah': 'Kamu berasal dari manakah .',\n 'betul': 'Kamu berasal dari mana ?',\n },\n {\n 'class': 15,\n 'Description': 'kesalahan kata kerja tak transitif',\n 'salah': 'Dia kata kepada saya',\n 'betul': 'Dia berkata kepada saya',\n },\n {\n 'class': 16,\n 'Description': 'kesalahan kata kerja transitif',\n 'salah': 'Dia suka baca buku',\n 'betul': 'Dia suka membaca buku',\n },\n {\n 'class': 17,\n 'Description': 'penggunaan kata yang tidak tepat',\n 'salah': 'Tembuk Besar negeri Cina dibina oleh Shih Huang Ti.',\n 'betul': 'Tembok Besar negeri Cina dibina oleh Shih Huang Ti',\n },\n]\n\n\nclass Tatabahasa:\n def __init__(self, d):\n self.d = d\n self.kesalahan = {i['Description']: no for no, i in enumerate(self.d)}\n self.reverse_kesalahan = {v: k for k, v in self.kesalahan.items()}\n self.vocab_size = len(self.d)\n\n def encode(self, s):\n return [self.kesalahan[i] for i in s]\n\n def decode(self, ids, strip_extraneous = False):\n return [self.reverse_kesalahan[i] for i in ids]",
"_____no_output_____"
],
[
"@registry.register_problem\nclass Grammar(text_problems.Text2TextProblem):\n \"\"\"grammatical error correction.\"\"\"\n\n def feature_encoders(self, data_dir):\n encoder = Encoder(sp)\n t = Tatabahasa(d)\n return {'inputs': encoder, 'targets': encoder, 'targets_error_tag': t}\n\n def hparams(self, defaults, model_hparams):\n super(Grammar, self).hparams(defaults, model_hparams)\n if 'use_error_tags' not in model_hparams:\n model_hparams.add_hparam('use_error_tags', True)\n if 'middle_prediction' not in model_hparams:\n model_hparams.add_hparam('middle_prediction', False)\n if 'middle_prediction_layer_factor' not in model_hparams:\n model_hparams.add_hparam('middle_prediction_layer_factor', 2)\n if 'ffn_in_prediction_cascade' not in model_hparams:\n model_hparams.add_hparam('ffn_in_prediction_cascade', 1)\n if 'error_tag_embed_size' not in model_hparams:\n model_hparams.add_hparam('error_tag_embed_size', 12)\n if model_hparams.use_error_tags:\n defaults.modality[\n 'targets_error_tag'\n ] = modalities.ModalityType.SYMBOL\n error_tag_vocab_size = self._encoders[\n 'targets_error_tag'\n ].vocab_size\n defaults.vocab_size['targets_error_tag'] = error_tag_vocab_size\n\n def example_reading_spec(self):\n data_fields, _ = super(Grammar, self).example_reading_spec()\n data_fields['targets_error_tag'] = tf.VarLenFeature(tf.int64)\n return data_fields, None\n\n @property\n def approx_vocab_size(self):\n return 32100\n\n @property\n def is_generate_per_split(self):\n return False\n\n @property\n def dataset_splits(self):\n return [\n {'split': problem.DatasetSplit.TRAIN, 'shards': 200},\n {'split': problem.DatasetSplit.EVAL, 'shards': 1},\n ]",
"_____no_output_____"
],
[
"DATA_DIR = os.path.expanduser('t2t-tatabahasa/data')\nTMP_DIR = os.path.expanduser('t2t-tatabahasa/tmp')\nTRAIN_DIR = os.path.expanduser('t2t-tatabahasa/train-base')",
"_____no_output_____"
],
[
"PROBLEM = 'grammar'\nt2t_problem = problems.problem(PROBLEM)",
"_____no_output_____"
],
[
"MODEL = 'transformer_tag'\nHPARAMS = 'transformer_base'",
"_____no_output_____"
],
[
"from tensor2tensor.utils.trainer_lib import create_run_config, create_experiment\nfrom tensor2tensor.utils.trainer_lib import create_hparams\nfrom tensor2tensor.utils import registry\nfrom tensor2tensor import models\nfrom tensor2tensor import problems\nfrom tensor2tensor.utils import trainer_lib",
"_____no_output_____"
],
[
"X = tf.placeholder(tf.int32, [None, None], name = 'x_placeholder')\nY = tf.placeholder(tf.int32, [None, None], name = 'y_placeholder')\ntargets_error_tag = tf.placeholder(tf.int32, [None, None], 'error_placeholder')\nX_seq_len = tf.count_nonzero(X, 1, dtype=tf.int32)\nmaxlen_decode = tf.reduce_max(X_seq_len)\n\nx = tf.expand_dims(tf.expand_dims(X, -1), -1)\ny = tf.expand_dims(tf.expand_dims(Y, -1), -1)\ntargets_error_tag_ = tf.expand_dims(tf.expand_dims(targets_error_tag, -1), -1)\n\nfeatures = {\n \"inputs\": x,\n \"targets\": y,\n \"target_space_id\": tf.constant(1, dtype=tf.int32),\n 'targets_error_tag': targets_error_tag,\n}\nModes = tf.estimator.ModeKeys\nhparams = trainer_lib.create_hparams(HPARAMS, data_dir=DATA_DIR, problem_name=PROBLEM)",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py:507: calling count_nonzero (from tensorflow.python.ops.math_ops) with axis is deprecated and will be removed in a future version.\nInstructions for updating:\nreduction_indices is deprecated, use axis instead\n"
],
[
"hparams.filter_size = 3072\nhparams.hidden_size = 768\nhparams.num_heads = 12\nhparams.num_hidden_layers = 8\nhparams.vocab_divisor = 128\nhparams.dropout = 0.1\nhparams.max_length = 256\n\n# LM\nhparams.label_smoothing = 0.0\nhparams.shared_embedding_and_softmax_weights = False\nhparams.eval_drop_long_sequences = True\nhparams.max_length = 256\nhparams.multiproblem_mixing_schedule = 'pretrain'\n\n# tpu\nhparams.symbol_modality_num_shards = 1\nhparams.attention_dropout_broadcast_dims = '0,1'\nhparams.relu_dropout_broadcast_dims = '1'\nhparams.layer_prepostprocess_dropout_broadcast_dims = '1'",
"_____no_output_____"
],
[
"model = registry.model(MODEL)(hparams, Modes.PREDICT)",
"INFO:tensorflow:Setting T2TModel mode to 'infer'\n"
],
[
"# logits = model(features)\n# logits\n\n# sess = tf.InteractiveSession()\n# sess.run(tf.global_variables_initializer())\n# l = sess.run(logits, feed_dict = {X: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]],\n# Y: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]],\n# targets_error_tag: [[10,10, 10, 10,10,1],\n# [10,10, 10, 10,10,1]]})",
"_____no_output_____"
],
[
"features = {\n \"inputs\": x,\n \"target_space_id\": tf.constant(1, dtype=tf.int32),\n}\n\nwith tf.variable_scope(tf.get_variable_scope(), reuse = False):\n fast_result = model._greedy_infer(features, maxlen_decode)",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensor2tensor-1.15.7-py3.6.egg/tensor2tensor/layers/common_attention.py:931: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n"
],
[
"result_seq = tf.identity(fast_result['outputs'], name = 'greedy')\nresult_tag = tf.identity(fast_result['outputs_tag'], name = 'tag_greedy')",
"_____no_output_____"
],
[
"from tensor2tensor.layers import common_layers\n\ndef accuracy_per_sequence(predictions, targets, weights_fn = common_layers.weights_nonzero):\n padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)\n weights = weights_fn(padded_labels)\n padded_labels = tf.to_int32(padded_labels)\n padded_predictions = tf.to_int32(padded_predictions)\n not_correct = tf.to_float(tf.not_equal(padded_predictions, padded_labels)) * weights\n axis = list(range(1, len(padded_predictions.get_shape())))\n correct_seq = 1.0 - tf.minimum(1.0, tf.reduce_sum(not_correct, axis=axis))\n return tf.reduce_mean(correct_seq)\n\ndef padded_accuracy(predictions, targets, weights_fn = common_layers.weights_nonzero):\n padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)\n weights = weights_fn(padded_labels)\n padded_labels = tf.to_int32(padded_labels)\n padded_predictions = tf.to_int32(padded_predictions)\n n = tf.to_float(tf.equal(padded_predictions, padded_labels)) * weights\n d = tf.reduce_sum(weights)\n return tf.reduce_sum(n) / d",
"_____no_output_____"
],
[
"acc_seq = padded_accuracy(result_seq, Y)\nacc_tag = padded_accuracy(result_tag, targets_error_tag)",
"_____no_output_____"
],
[
"ckpt_path = tf.train.latest_checkpoint(os.path.join(TRAIN_DIR))\nckpt_path",
"_____no_output_____"
],
[
"sess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())",
"_____no_output_____"
],
[
"var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\nsaver = tf.train.Saver(var_list = var_lists)\nsaver.restore(sess, ckpt_path)",
"INFO:tensorflow:Restoring parameters from t2t-tatabahasa/train-base/model.ckpt-140000\n"
],
[
"import pickle\n\nwith open('../pure-text/dataset-tatabahasa.pkl', 'rb') as fopen:\n data = pickle.load(fopen)\n\nencoder = Encoder(sp)",
"_____no_output_____"
],
[
"def get_xy(row, encoder):\n x, y, tag = [], [], []\n\n for i in range(len(row[0])):\n t = encoder.encode(row[0][i][0])\n y.extend(t)\n t = encoder.encode(row[1][i][0])\n x.extend(t)\n tag.extend([row[1][i][1]] * len(t))\n\n # EOS\n x.append(1)\n y.append(1)\n tag.append(0)\n\n return x, y, tag",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"x, y, tag = get_xy(data[10], encoder)",
"_____no_output_____"
],
[
"e = encoder.encode('Pilih mana jurusan yang sesuai dengan kebolehan anda dalam peperiksaan Sijil Pelajaran Malaysia semasa memohon kemasukan ke institusi pengajian tinggi.') + [1]",
"_____no_output_____"
],
[
"r = sess.run(fast_result, \n feed_dict = {X: [e]})",
"_____no_output_____"
],
[
"r['outputs_tag']",
"_____no_output_____"
],
[
"encoder.decode(r['outputs'][0].tolist())",
"_____no_output_____"
],
[
"encoder.decode(x)",
"_____no_output_____"
],
[
"encoder.decode(y)",
"_____no_output_____"
],
[
"hparams.problem.example_reading_spec()[0]",
"_____no_output_____"
],
[
"def parse(serialized_example):\n\n data_fields = hparams.problem.example_reading_spec()[0]\n features = tf.parse_single_example(\n serialized_example, features = data_fields\n )\n for k in features.keys():\n features[k] = features[k].values\n\n return features",
"_____no_output_____"
],
[
"dataset = tf.data.TFRecordDataset('t2t-tatabahasa/data/grammar-dev-00000-of-00001')\ndataset = dataset.map(parse, num_parallel_calls=32)\ndataset = dataset.padded_batch(32, \n padded_shapes = {\n 'inputs': tf.TensorShape([None]),\n 'targets': tf.TensorShape([None]),\n 'targets_error_tag': tf.TensorShape([None])\n },\n padding_values = {\n 'inputs': tf.constant(0, dtype = tf.int64),\n 'targets': tf.constant(0, dtype = tf.int64),\n 'targets_error_tag': tf.constant(0, dtype = tf.int64),\n })\ndataset = dataset.make_one_shot_iterator().get_next()\ndataset",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.parse_single_example is deprecated. Please use tf.io.parse_single_example instead.\n\n"
],
[
"seqs, tags = [], []\nindex = 0\nwhile True:\n    try:\n        d = sess.run(dataset)\n        s, t = sess.run([acc_seq, acc_tag], feed_dict = {X:d['inputs'], \n                                                        Y: d['targets'], \n                                                        targets_error_tag: d['targets_error_tag']})\n        seqs.append(s)\n        tags.append(t)\n        print(f'done {index}')\n        index += 1\n    except tf.errors.OutOfRangeError:\n        # dataset iterator exhausted\n        break",
"done 0\ndone 1\ndone 2\ndone 3\ndone 4\ndone 5\ndone 6\ndone 7\ndone 8\ndone 9\ndone 10\ndone 11\ndone 12\ndone 13\ndone 14\ndone 15\ndone 16\ndone 17\ndone 18\ndone 19\ndone 20\ndone 21\ndone 22\ndone 23\ndone 24\ndone 25\ndone 26\ndone 27\ndone 28\ndone 29\ndone 30\ndone 31\ndone 32\ndone 33\ndone 34\ndone 35\ndone 36\ndone 37\ndone 38\ndone 39\ndone 40\ndone 41\ndone 42\ndone 43\ndone 44\ndone 45\ndone 46\ndone 47\ndone 48\ndone 49\ndone 50\ndone 51\ndone 52\ndone 53\ndone 54\ndone 55\ndone 56\ndone 57\ndone 58\ndone 59\ndone 60\ndone 61\ndone 62\ndone 63\ndone 64\ndone 65\ndone 66\ndone 67\ndone 68\ndone 69\ndone 70\ndone 71\ndone 72\ndone 73\ndone 74\ndone 75\ndone 76\ndone 77\ndone 78\ndone 79\ndone 80\ndone 81\ndone 82\ndone 83\ndone 84\ndone 85\ndone 86\ndone 87\ndone 88\ndone 89\ndone 90\ndone 91\ndone 92\ndone 93\ndone 94\ndone 95\ndone 96\ndone 97\ndone 98\ndone 99\ndone 100\ndone 101\ndone 102\ndone 103\ndone 104\ndone 105\ndone 106\ndone 107\ndone 108\ndone 109\ndone 110\ndone 111\ndone 112\ndone 113\ndone 114\ndone 115\ndone 116\ndone 117\ndone 118\ndone 119\ndone 120\ndone 121\ndone 122\ndone 123\ndone 124\ndone 125\ndone 126\ndone 127\ndone 128\ndone 129\ndone 130\ndone 131\ndone 132\ndone 133\ndone 134\ndone 135\ndone 136\ndone 137\ndone 138\ndone 139\ndone 140\ndone 141\ndone 142\ndone 143\ndone 144\ndone 145\ndone 146\ndone 147\ndone 148\ndone 149\ndone 150\ndone 151\ndone 152\ndone 153\ndone 154\ndone 155\ndone 156\ndone 157\ndone 158\ndone 159\ndone 160\ndone 161\ndone 162\ndone 163\ndone 164\ndone 165\ndone 166\ndone 167\ndone 168\ndone 169\ndone 170\ndone 171\ndone 172\ndone 173\ndone 174\ndone 175\ndone 176\ndone 177\ndone 178\ndone 179\ndone 180\ndone 181\ndone 182\ndone 183\ndone 184\ndone 185\ndone 186\ndone 187\ndone 188\ndone 189\ndone 190\ndone 191\ndone 192\ndone 193\ndone 194\ndone 195\ndone 196\ndone 197\ndone 198\ndone 199\ndone 200\ndone 201\ndone 202\ndone 203\ndone 204\ndone 205\ndone 206\ndone 207\ndone 208\ndone 209\ndone 
210\ndone 211\ndone 212\ndone 213\ndone 214\ndone 215\ndone 216\ndone 217\ndone 218\ndone 219\ndone 220\ndone 221\ndone 222\ndone 223\ndone 224\ndone 225\ndone 226\ndone 227\ndone 228\ndone 229\ndone 230\ndone 231\ndone 232\ndone 233\ndone 234\ndone 235\ndone 236\ndone 237\ndone 238\ndone 239\ndone 240\ndone 241\ndone 242\ndone 243\ndone 244\ndone 245\ndone 246\ndone 247\ndone 248\ndone 249\ndone 250\ndone 251\ndone 252\ndone 253\ndone 254\ndone 255\ndone 256\ndone 257\ndone 258\ndone 259\ndone 260\ndone 261\ndone 262\ndone 263\ndone 264\ndone 265\ndone 266\ndone 267\ndone 268\ndone 269\ndone 270\ndone 271\ndone 272\ndone 273\ndone 274\ndone 275\ndone 276\ndone 277\ndone 278\ndone 279\ndone 280\ndone 281\ndone 282\ndone 283\ndone 284\ndone 285\ndone 286\ndone 287\ndone 288\ndone 289\ndone 290\ndone 291\ndone 292\ndone 293\ndone 294\ndone 295\ndone 296\ndone 297\ndone 298\ndone 299\ndone 300\ndone 301\ndone 302\ndone 303\ndone 304\ndone 305\ndone 306\ndone 307\ndone 308\ndone 309\ndone 310\ndone 311\ndone 312\ndone 313\ndone 314\ndone 315\ndone 316\n"
],
[
"np.mean(seqs), np.mean(tags)",
"_____no_output_____"
],
[
"saver = tf.train.Saver(tf.trainable_variables())\nsaver.save(sess, 'transformertag-base/model.ckpt')",
"_____no_output_____"
],
[
"strings = ','.join(\n [\n n.name\n for n in tf.get_default_graph().as_graph_def().node\n if ('Variable' in n.op\n or 'Placeholder' in n.name\n or 'greedy' in n.name\n or 'tag_greedy' in n.name\n or 'x_placeholder' in n.name\n or 'self/Softmax' in n.name)\n and 'adam' not in n.name\n and 'beta' not in n.name\n and 'global_step' not in n.name\n and 'modality' not in n.name\n and 'Assign' not in n.name\n ]\n)\nstrings.split(',')",
"_____no_output_____"
],
[
"def freeze_graph(model_dir, output_node_names):\n\n if not tf.gfile.Exists(model_dir):\n raise AssertionError(\n \"Export directory doesn't exists. Please specify an export \"\n 'directory: %s' % model_dir\n )\n\n checkpoint = tf.train.get_checkpoint_state(model_dir)\n input_checkpoint = checkpoint.model_checkpoint_path\n\n absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])\n output_graph = absolute_model_dir + '/frozen_model.pb'\n clear_devices = True\n with tf.Session(graph = tf.Graph()) as sess:\n saver = tf.train.import_meta_graph(\n input_checkpoint + '.meta', clear_devices = clear_devices\n )\n saver.restore(sess, input_checkpoint)\n output_graph_def = tf.graph_util.convert_variables_to_constants(\n sess,\n tf.get_default_graph().as_graph_def(),\n output_node_names.split(','),\n )\n with tf.gfile.GFile(output_graph, 'wb') as f:\n f.write(output_graph_def.SerializeToString())\n print('%d ops in the final graph.' % len(output_graph_def.node))",
"_____no_output_____"
],
[
"freeze_graph('transformertag-base', strings)",
"INFO:tensorflow:Restoring parameters from transformertag-base/model.ckpt\n"
],
[
"def load_graph(frozen_graph_filename):\n with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\n with tf.Graph().as_default() as graph:\n tf.import_graph_def(graph_def)\n return graph",
"_____no_output_____"
],
[
"g = load_graph('transformertag-base/frozen_model.pb')\nx = g.get_tensor_by_name('import/x_placeholder:0')\ngreedy = g.get_tensor_by_name('import/greedy:0')\ntag_greedy = g.get_tensor_by_name('import/tag_greedy:0')\ntest_sess = tf.InteractiveSession(graph = g)",
"/home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py:1750: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n warnings.warn('An interactive session is already active. This can '\n"
],
[
"test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']})",
"_____no_output_____"
],
[
"import tensorflow as tf\nfrom tensorflow.tools.graph_transforms import TransformGraph\nfrom glob import glob\ntf.set_random_seed(0)",
"_____no_output_____"
],
[
"import tensorflow_text\nimport tf_sentencepiece",
"_____no_output_____"
],
[
"transforms = ['add_default_attributes',\n 'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',\n 'fold_constants(ignore_errors=true)',\n 'fold_batch_norms',\n 'fold_old_batch_norms',\n 'quantize_weights(fallback_min=-10, fallback_max=10)',\n 'strip_unused_nodes',\n 'sort_by_execution_order']\n\npb = 'transformertag-base/frozen_model.pb'\ninput_graph_def = tf.GraphDef()\nwith tf.gfile.FastGFile(pb, 'rb') as f:\n input_graph_def.ParseFromString(f.read())\n \ntransformed_graph_def = TransformGraph(input_graph_def, \n ['x_placeholder'],\n ['greedy', 'tag_greedy'], transforms)\n\nwith tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:\n f.write(transformed_graph_def.SerializeToString())",
"WARNING:tensorflow:From <ipython-input-45-4ca23320d2af>:12: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.gfile.GFile.\n"
],
[
"g = load_graph('transformertag-base/frozen_model.pb.quantized')\nx = g.get_tensor_by_name('import/x_placeholder:0')\ngreedy = g.get_tensor_by_name('import/greedy:0')\ntag_greedy = g.get_tensor_by_name('import/tag_greedy:0')\ntest_sess = tf.InteractiveSession(graph = g)",
"_____no_output_____"
],
[
"test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']})",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5bf0bd79c3897ea8b67f83d456a3190315c482
| 722 |
ipynb
|
Jupyter Notebook
|
00_core.ipynb
|
arampacha/standard_transformer
|
4420c01cd7f07385aa914a3312c8243f81413b5e
|
[
"Apache-2.0"
] | null | null | null |
00_core.ipynb
|
arampacha/standard_transformer
|
4420c01cd7f07385aa914a3312c8243f81413b5e
|
[
"Apache-2.0"
] | 2 |
2021-09-28T05:39:30.000Z
|
2022-02-26T10:20:17.000Z
|
00_core.ipynb
|
arampacha/standard_transformer
|
4420c01cd7f07385aa914a3312c8243f81413b5e
|
[
"Apache-2.0"
] | null | null | null | 14.734694 | 33 | 0.469529 |
[
[
[
"# default_exp core",
"_____no_output_____"
]
],
[
[
"# The Core\n\n> is empty.",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5bfc434bc17fe9eb081b7762e867184f20cf22
| 46,473 |
ipynb
|
Jupyter Notebook
|
examples/notif_bot/notif_bot_POC.ipynb
|
sgoley/fastquant
|
4fe93779d13c5318cd960835a16b49376d77c7d6
|
[
"MIT"
] | 6 |
2020-01-12T01:59:46.000Z
|
2020-01-20T05:01:42.000Z
|
examples/notif_bot/notif_bot_POC.ipynb
|
sgoley/fastquant
|
4fe93779d13c5318cd960835a16b49376d77c7d6
|
[
"MIT"
] | 3 |
2020-01-14T12:49:21.000Z
|
2020-01-15T10:48:04.000Z
|
examples/notif_bot/notif_bot_POC.ipynb
|
sgoley/fastquant
|
4fe93779d13c5318cd960835a16b49376d77c7d6
|
[
"MIT"
] | 1 |
2021-07-23T10:02:46.000Z
|
2021-07-23T10:02:46.000Z
| 39.686593 | 595 | 0.468466 |
[
[
[
"# Overview\nThis notebook demonstrates the proof of concept for the proposed daily notification scheme.\n\nAssumptions:\n- The strategy has been thoroughly backtested and ready for actual trading.\n- Today is Nov 5, 2018, which is trading Day 0.",
"_____no_output_____"
],
[
"## Step 1: Initialize data (Day 0)\n- Save the data to disk. This is so the script will only fetch the latest stock data.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nfrom fastquant import backtest, get_stock_data",
"_____no_output_____"
],
[
"# Setup our variables\nFILE_DIR = \"jfc.csv\"\nSYMBOL = \"JFC\"\ntoday = \"2018-11-06\" # Fake today date for demo purposes\n\ndf = get_stock_data(SYMBOL, \"2018-01-01\", \"2018-11-05\")\ndf.to_csv(FILE_DIR)",
"_____no_output_____"
]
],
[
[
"## Step 2: Daily script calls (Day 1-2)\n1. Fetch today's data\n2. Load data from disk and append today's data\n3. Run backtest with args `live=True` and `today=today`",
"_____no_output_____"
],
[
"### Demo Day 1: Nov 6, 2018 (buy)",
"_____no_output_____"
]
],
[
[
"def daily_fetch(file_dir, symbol, today):\n today_df = get_stock_data(symbol, today, today)\n\n # Retrieve saved historical data on disk and append new data\n # TODO: add checks if daily updates were broken\n df = pd.read_csv(file_dir, parse_dates=[\"dt\"]).set_index(\"dt\")\n df = df.append(today_df)\n df.to_csv(file_dir)\n\n return df",
"_____no_output_____"
],
[
"today = \"2018-11-06\" # Fake today date for demo purposes\ndf = daily_fetch(FILE_DIR, SYMBOL, today)\n\nbacktest(\n \"smac\", df, fast_period=15, slow_period=40, verbose=False, live=True, today=today\n)",
"===Global level arguments===\ninit_cash : 100000\nbuy_prop : 1\nsell_prop : 1\ncommission : 0.0075\n===Strategy level arguments===\nfast_period : 15\nslow_period : 40\n>>> Notif bot: BUY! <<<\nFinal Portfolio Value: 97652.8975\nFinal PnL: -2347.1\nTime used (seconds): 0.05349397659301758\nOptimal parameters: {'init_cash': 100000, 'buy_prop': 1, 'sell_prop': 1, 'commission': 0.0075, 'execution_type': 'close', 'live': True, 'today': '2018-11-06', 'notif_script_dir': False, 'symbol': '', 'fast_period': 15, 'slow_period': 40}\nOptimal metrics: {'rtot': -0.023750856806590226, 'ravg': -0.00011418681157014532, 'rnorm': -0.028365016583377686, 'rnorm100': -2.8365016583377685, 'sharperatio': None, 'pnl': -2347.1, 'final_value': 97652.8975}\n"
]
],
[
[
"- Notice the line that prints `>>> Notif bot: BUY! <<<`\n- For this POC, it's just a print function, but it could be any script call.",
"_____no_output_____"
],
[
"### Demo Day 2: Nov 7, 2018 (hold)",
"_____no_output_____"
]
],
[
[
"today = \"2018-11-07\" # Fake today date for demo purposes\ndf = daily_fetch(FILE_DIR, SYMBOL, today)\n\nbacktest(\n \"smac\", df, fast_period=15, slow_period=40, verbose=False, live=True, today=today\n)",
"===Global level arguments===\ninit_cash : 100000\nbuy_prop : 1\nsell_prop : 1\ncommission : 0.0075\n===Strategy level arguments===\nfast_period : 15\nslow_period : 40\nFinal Portfolio Value: 98142.89749999999\nFinal PnL: -1857.1\nTime used (seconds): 0.055661678314208984\nOptimal parameters: {'init_cash': 100000, 'buy_prop': 1, 'sell_prop': 1, 'commission': 0.0075, 'execution_type': 'close', 'live': True, 'today': '2018-11-07', 'notif_script_dir': False, 'symbol': '', 'fast_period': 15, 'slow_period': 40}\nOptimal metrics: {'rtot': -0.018745631612988572, 'ravg': -8.969201728702666e-05, 'rnorm': -0.02234886802384841, 'rnorm100': -2.234886802384841, 'sharperatio': None, 'pnl': -1857.1, 'final_value': 98142.89749999999}\n"
]
],
[
[
"- Notice that there is no BUY or SELL notification\n- TODO: Find a way to insert a HOLD notification. It's a good indicator that the cronjob is working as expected.",
"_____no_output_____"
],
[
"# Notes",
"_____no_output_____"
],
[
"When using in live trading, use:\n```\nfrom datetime import date\n\ntoday = date.today().strftime(\"%Y-%m-%d\")\n```",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a5c0997d678dc5156088e4167afe324a675c77d
| 67,985 |
ipynb
|
Jupyter Notebook
|
digits_LR.ipynb
|
the-cryptozoologist/machine-learning
|
681ffe02947af2c446b6013f5cd9569a19d92ea1
|
[
"MIT"
] | null | null | null |
digits_LR.ipynb
|
the-cryptozoologist/machine-learning
|
681ffe02947af2c446b6013f5cd9569a19d92ea1
|
[
"MIT"
] | null | null | null |
digits_LR.ipynb
|
the-cryptozoologist/machine-learning
|
681ffe02947af2c446b6013f5cd9569a19d92ea1
|
[
"MIT"
] | null | null | null | 85.7314 | 27,380 | 0.755431 |
[
[
[
"# Digits Dataset - Logistic Regression",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n\nimport sklearn\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_digits",
"_____no_output_____"
],
[
"digits = load_digits()",
"_____no_output_____"
],
[
"df = pd.DataFrame(data=np.c_[digits[\"data\"], digits[\"target\"]],columns = digits[\"feature_names\"]+[\"target\"])",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df[\"target\"].nunique()",
"_____no_output_____"
],
[
"print(df[\"target\"].unique())",
"[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]\n"
]
],
[
[
"## Visualization",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=[30,5])\nfor i, (image,label) in enumerate(zip(digits.data[0:10],digits.target[0:10])):\n plt.subplot(1, 10, i + 1)\n plt.imshow(np.reshape(image, (8,8)), cmap=\"gray\")\n plt.title(f\"Digit {i}\")",
"_____no_output_____"
]
],
[
[
"## Train Test Split",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X = digits.data\ny = digits.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.42, random_state=101)",
"_____no_output_____"
],
[
"print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)",
"(1042, 64) (755, 64) (1042,) (755,)\n"
]
],
[
[
"## Logistic Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"log_reg = LogisticRegression()",
"_____no_output_____"
],
[
"log_reg.fit(X_train, y_train)",
"_____no_output_____"
],
[
"pred = log_reg.predict(X_test)",
"_____no_output_____"
]
],
[
[
"## Model accuracy",
"_____no_output_____"
]
],
[
[
"pred.shape",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix, classification_report",
"_____no_output_____"
],
[
"print(confusion_matrix(y_test, pred))",
"[[73 0 0 0 0 0 0 0 0 0]\n [ 0 70 0 0 0 0 0 0 2 0]\n [ 0 1 68 0 0 0 0 0 0 0]\n [ 0 0 0 79 0 1 0 0 0 1]\n [ 0 0 0 0 81 0 0 0 2 0]\n [ 0 0 0 0 0 78 0 0 0 2]\n [ 0 1 0 0 1 0 64 0 1 0]\n [ 0 0 0 0 3 0 0 70 0 1]\n [ 0 4 1 1 0 1 0 0 73 0]\n [ 0 0 0 0 0 1 0 0 1 74]]\n"
],
[
"print(classification_report(y_test, pred))",
" precision recall f1-score support\n\n 0 1.00 1.00 1.00 73\n 1 0.92 0.97 0.95 72\n 2 0.99 0.99 0.99 69\n 3 0.99 0.98 0.98 81\n 4 0.95 0.98 0.96 83\n 5 0.96 0.97 0.97 80\n 6 1.00 0.96 0.98 67\n 7 1.00 0.95 0.97 74\n 8 0.92 0.91 0.92 80\n 9 0.95 0.97 0.96 76\n\n accuracy 0.97 755\n macro avg 0.97 0.97 0.97 755\nweighted avg 0.97 0.97 0.97 755\n\n"
]
],
[
[
"## Innacurate Predictions",
"_____no_output_____"
]
],
[
[
"i = 0\nwrong = []\nfor prediction, true in zip(pred, y_test):\n if prediction != true:\n wrong.append(i)\n i += 1",
"_____no_output_____"
],
[
"plt.figure(figsize=[40,20])\nfor plot, false in enumerate(wrong[:10]):\n plt.subplot(1,10, plot + 1)\n plt.imshow(np.reshape(X_test[false], (8,8)), cmap = \"gray\")\n plt.title(f\"Predicted: {pred[false]}, Data: {y_test[false]}\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5c162f7ac5c04d5940a036663ddc0d63dd0109
| 567,138 |
ipynb
|
Jupyter Notebook
|
RandomForests_on_Amazon_Reviews.ipynb
|
tanmay-kulkarni/amazon_food_reviews
|
4742a68eddff297c98abc4ffa4b21fe1168f6d4b
|
[
"MIT"
] | null | null | null |
RandomForests_on_Amazon_Reviews.ipynb
|
tanmay-kulkarni/amazon_food_reviews
|
4742a68eddff297c98abc4ffa4b21fe1168f6d4b
|
[
"MIT"
] | null | null | null |
RandomForests_on_Amazon_Reviews.ipynb
|
tanmay-kulkarni/amazon_food_reviews
|
4742a68eddff297c98abc4ffa4b21fe1168f6d4b
|
[
"MIT"
] | null | null | null | 277.058134 | 155,913 | 0.912603 |
[
[
[
"# Objective:\n\nClassify Amazon food reviews using Random Forest Classifier.\n\nWe'll do the following exercises in this notebook\n\n* Load the data stored in the format\n 1. BoW\n 2. Tfidf\n 3. Avg. W2V\n 4. Tfidf weighted W2V\n* Divide the data in cross validation sets and find the optimal parameters n_estimators and max_depth using GridSearchCV\n* Observe the Cross Validation score for **each** combination of *n_estimator* and *max_depth* in Cross Validation\n* Plot confusion matrix and calculate Precision, Recall, FPR, TNR, FNR. ",
"_____no_output_____"
],
[
"**Brief Summary of Random Forest Classifier**",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(r'C:\\Users\\ucanr\\Dropbox\\AAIC\\assignments mandatory\\9. RF and GBDT\\RF_summary.jpg')",
"_____no_output_____"
],
[
"# To suprress the warnings as they make the notebook less presentable.\n\nimport sys\nimport warnings\n\nif not sys.warnoptions:\n warnings.simplefilter(\"ignore\")",
"_____no_output_____"
]
],
[
[
"Import the necessary libraries.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sbn\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import recall_score, precision_score, confusion_matrix, accuracy_score\nfrom sklearn.model_selection import TimeSeriesSplit, GridSearchCV, cross_val_score, RandomizedSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\nimport pickle\nimport numpy as np\nfrom sklearn.svm import SVC",
"_____no_output_____"
]
],
[
[
"This Jupyter notebook extension notifies you when a cell finishes its execution!",
"_____no_output_____"
]
],
[
[
"%load_ext jupyternotify",
"The jupyternotify extension is already loaded. To reload it, use:\n %reload_ext jupyternotify\n"
]
],
[
[
"## Important parameters of Random Forest\n\n* *n_estimators* : This parameter specifies how many base learners to use. Generally, more the number of estimators, better the results due to the aggregation.\n* *max_depth* : This determines how deep the decision trees will be built. For RF, training deep trees is preferred since the overfitting is nullified in the aggregation phase.",
"_____no_output_____"
],
[
"Load the target variable y of train and test sets. Note that the entire dataset is being used. All 350k reviews. The dataset is divided into train and test with ratio 80:20 respectively.",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\y_train_full80_20.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/y_train_full80_20.pkl', 'rb')\ny_train = pickle.load(f)\nf.close()\nprint('The datatype of y_train is : {}'.format(type(y_train)))\nprint('The shape of y_train is : {}'.format(y_train.shape))",
"The datatype of y_train is : <class 'numpy.ndarray'>\nThe shape of y_train is : (291336,)\n"
],
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\y_test_full80_20.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/y_test_full80_20.pkl', 'rb')\ny_test = pickle.load(f)\nf.close()\nprint('The datatype of y_test is : {}'.format(type(y_test)))\nprint('The shape of y_test is : {}'.format(y_test.shape))",
"The datatype of y_test is : <class 'numpy.ndarray'>\nThe shape of y_test is : (72835,)\n"
]
],
[
[
"## Bag of Words\n\nI had saved the trained BoW model and the transformed data on disk. Let's load it. ",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_train_transformed_bow_full_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_train_transformed_bow_full_nparray.pkl', 'rb')\nX_train_transformed_bow = pickle.load(f)\nf.close()\nprint('The datatype of X_train_transformed_bow is : {}'.format(type(X_train_transformed_bow)))\nprint('The shape of X_train_transformed_bow is : {}'.format(X_train_transformed_bow.shape))",
"The datatype of X_train_transformed_bow is : <class 'scipy.sparse.csr.csr_matrix'>\nThe shape of X_train_transformed_bow is : (291336, 64221)\n"
]
],
[
[
"There are 64221 features in the bow representation. Load test data too.",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_test_transformed_bow_full_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_test_transformed_bow_full_nparray.pkl', 'rb')\nX_test_transformed_bow = pickle.load(f)\nf.close()\nprint('The datatype of X_test_transformed_bow is : {}'.format(type(X_test_transformed_bow)))\nprint('The shape of X_test_transformed_bow is : {}'.format(X_test_transformed_bow.shape))",
"The datatype of X_test_transformed_bow is : <class 'scipy.sparse.csr.csr_matrix'>\nThe shape of X_test_transformed_bow is : (72835, 64221)\n"
]
],
[
[
"Count the number of non-zero elements in the array.",
"_____no_output_____"
]
],
[
[
"X_train_transformed_bow.count_nonzero",
"_____no_output_____"
]
],
[
[
"## Feature scaling\n\nSince the base learner of RF is decision trees, it doesn't really need data to be standardized. But for the sake of consistency in the workflow, let's do it.",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler(with_mean = False)\nX_train_transformed_bow_std = scaler.fit_transform(X_train_transformed_bow)",
"_____no_output_____"
],
[
"X_test_transformed_bow_std = scaler.transform(X_test_transformed_bow)",
"_____no_output_____"
],
[
"X_train_transformed_bow_std.shape",
"_____no_output_____"
],
[
"X_test_transformed_bow_std.shape",
"_____no_output_____"
],
[
"y_train.shape",
"_____no_output_____"
]
],
[
[
"## Some Functions\n\nLet's define some functions that we'll call repeatedly in this notebook. \n\n1. **n_depth_score** : Returns a dataframe containing the n_estimator, max_depth and accuracy score tried by GridSearch \n\n2. **give_me_ratios** : To plot ratios such as Precision, Recall, TNR, FPR, FNR.\n3. **plot_confusion_matrix** : As the name says.\n4. **GridSearch** : Create Time based cross validation splits using TimeSeriesSplit() and create a gridsearch object for a Random Forest classifier.\n5. **headmap** : For each pair of (n_estimator, max_depth) value in GridSearch, it will plot the score for train and test data during cross validation.",
"_____no_output_____"
]
],
[
[
"def n_depth_score(cv_results_):\n\n D={'n':[], 'depth':[], 'score':[]}\n\n for n in [ 10 , 30, 50 , 100 , 150 , 175]:\n\n for depth in [3, 5, 9, 13, 17, 23, 31]:\n\n d={'n_estimators': n, 'max_depth': depth}\n \n flag=True\n \n try:\n \n ind=cv_results_['params'].index(d)\n \n except:\n \n flag = False\n\n D['n'].append(n)\n D['depth'].append(depth)\n \n if flag == False:\n \n D['score'].append(-1)\n \n else:\n D['score'].append(cv_results_['mean_train_score'][ind])\n \n return(pd.DataFrame.from_dict(D)) ",
"_____no_output_____"
],
[
"def give_me_ratios(X_train, y_train, X_test, y_test, vector_type, table, clf, best_n, best_depth): \n \n cm_train = confusion_matrix(y_train, clf.predict(X_train))\n tn, fp, fn, tp = cm_train.ravel()\n\n recall_train = round(tp/(tp+fn),2)\n precision_train = round(tp/(tp+fp),2)\n tnr_train = round(tn/(tn+fp),2)\n fpr_train = round(fp/(fp+tn),2)\n fnr_train = round(fn/(fn+tp),2)\n accuracy_train = round((tp+tn)/(tp+tn+fp+fn))\n accuracy_train = (tp+tn)/(tp+tn+fp+fn)\n\n cm_test = confusion_matrix(y_test, clf.predict(X_test))\n tn, fp, fn, tp = cm_test.ravel()\n recall_test = round(tp/(tp+fn),2)\n precision_test = round(tp/(tp+fp),2)\n tnr_test = round(tn/(tn+fp),2)\n fpr_test = round(fp/(fp+tn),2)\n fnr_test = round(fn/(fn+tp),2)\n accuracy_test = round(tp+tn)/(tp+tn+fp+fn)\n\n table.field_names = ['Vector Type','Data Set','Best n_estimators','Best max_depth', 'Precision', 'Recall', 'TNR', 'FPR', 'FNR', 'Accuracy']\n table.add_row([vector_type,'Train',best_n,best_depth, precision_train, recall_train, tnr_train, fpr_train, fnr_train, accuracy_train])\n table.add_row([vector_type,'Test',best_n,best_depth, precision_test, recall_test, tnr_test, fpr_test, fnr_test, accuracy_test])\n\n print(table)\n \n return (cm_train, cm_test)",
"_____no_output_____"
],
[
"def plot_confusion_matrix(cm_train, cm_test, title):\n \n import pandas as pd\n plt.style.use('fivethirtyeight')\n plt.figure(figsize=(15,6)).suptitle(title, fontsize=15)\n \n plt.subplot(1,2,1)\n df_cm = pd.DataFrame(cm_train, range(2), range(2))\n sbn.heatmap(df_cm, annot=True, annot_kws={\"size\":16}, cbar=False, fmt = 'd', xticklabels=['Negative', 'Positive'], yticklabels=['Negative', 'Positive'], cmap=\"YlGnBu\")\n plt.xticks(fontsize=13)\n plt.yticks(fontsize=13)\n plt.xlabel('Predicted Class', fontsize=15)\n plt.ylabel('Actual Class', fontsize=15)\n plt.title('Train Data', fontsize = 14)\n\n plt.subplot(1,2,2) \n df_cm = pd.DataFrame(cm_test, range(2), range(2))\n sbn.heatmap(df_cm, annot=True, annot_kws={\"size\":16}, cbar=False, fmt = 'd', xticklabels=['Negative', 'Positive'], yticklabels=['Negative', 'Positive'], cmap=\"YlGnBu\")\n plt.xticks(fontsize=13)\n plt.yticks(fontsize=13)\n plt.xlabel('Predicted Class', fontsize=15)\n # plt.ylabel('Actual Class', fontsize=15)\n\n plt.title('Test Data', fontsize = 14)\n\n plt.tight_layout()",
"_____no_output_____"
],
[
"def GridSearch(X_train):\n\n tscv = TimeSeriesSplit(n_splits=7)\n my_cv = tscv.split(X_train)\n\n rfc = RandomForestClassifier(class_weight='balanced')\n \n hyp_par = {\n 'max_depth' : [ 3, 5, 9, 13, 17, 23, 31 ],\n 'n_estimators' : [ 10 , 30, 50 , 100 , 150 , 175 ] \n }\n\n clf = GridSearchCV(estimator=rfc, cv=my_cv, param_grid=hyp_par, n_jobs=6, return_train_score=True)\n \n return clf",
"_____no_output_____"
],
[
"def heatmap(df, vector_type, style):\n\n plt.figure(figsize=(15,6))\n plt.style.use(style)\n plt.subplot(1,1,1)\n sbn.heatmap(data=df.pivot('n','depth','score'), annot=True, linewidth = 0.5, cmap=\"YlGnBu\")\n plt.title('{} | Training/CV Accuracy'.format(vector_type), fontsize = 15)\n plt.xlabel('Max Depth', fontsize = 14)\n plt.ylabel('# estimators', fontsize = 14)\n plt.xticks(fontsize=13)\n plt.yticks(fontsize=13)\n\n plt.tight_layout()\n plt.show()",
"_____no_output_____"
]
],
[
[
"**BoW | GridSearchCV**",
"_____no_output_____"
],
[
"Get the classifier by calling the GridSearch funtion.",
"_____no_output_____"
]
],
[
[
"clf = GridSearch(X_train_transformed_bow_std)",
"_____no_output_____"
]
],
[
[
"Train the model",
"_____no_output_____"
]
],
[
[
"%%notify\n%%time\n\nclf.fit(X_train_transformed_bow_std, y_train)",
"CPU times: user 4min 25s, sys: 3.95 s, total: 4min 29s\nWall time: 36min 8s\n"
]
],
[
[
"**Heatmap of scores for each pair of (n_estimators,max_depth) found during GridSearch.**",
"_____no_output_____"
]
],
[
[
"heatmap(n_depth_score(clf.cv_results_), vector_type='BoW', style = 'bmh')",
"_____no_output_____"
]
],
[
[
"Import prettytable to summarize the results in a table",
"_____no_output_____"
]
],
[
[
"from prettytable import PrettyTable\ntable = PrettyTable()",
"_____no_output_____"
]
],
[
[
"**Ratios | BoW**",
"_____no_output_____"
]
],
[
[
"cm_bow_train, cm_bow_test = give_me_ratios(X_train_transformed_bow_std, y_train, X_test_transformed_bow_std, y_test, 'Bag of Words', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth'])",
"+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Vector Type | Data Set | Best n_estimators | Best max_depth | Precision | Recall | TNR | FPR | FNR | Accuracy |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Bag of Words | Train | 175 | 31 | 0.97 | 0.94 | 0.85 | 0.15 | 0.06 | 0.9268851086031249 |\n| Bag of Words | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8892702684149104 |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n"
]
],
[
[
"**confusion matrix | BoW**",
"_____no_output_____"
]
],
[
[
"plot_confusion_matrix(cm_bow_train, cm_bow_test, title=\"BOW | Grid Search\")",
"_____no_output_____"
]
],
[
[
"## Tfidf\n\nIn this section, we'll apply Random Forests on reviews represented in the Tfidf format. Load the transformed train and test sets.",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_train_transformed_tfidf_full_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_train_transformed_tfidf_full_nparray.pkl', 'rb')\nX_train_transformed_tfidf = pickle.load(f)\nf.close()\nprint('The datatype of X_train_transformed_tfidf is : {}'.format(type(X_train_transformed_tfidf)))\nprint('The shape of X_train_transformed_tfidf is : {}'.format(X_train_transformed_tfidf.shape))",
"The datatype of X_train_transformed_tfidf is : <class 'scipy.sparse.csr.csr_matrix'>\nThe shape of X_train_transformed_tfidf is : (291336, 64221)\n"
],
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_test_transformed_tfidf_full_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_test_transformed_tfidf_full_nparray.pkl', 'rb')\nX_test_transformed_tfidf = pickle.load(f)\nf.close()\nprint('The datatype of X_test_transformed_tfidf is : {}'.format(type(X_test_transformed_tfidf)))\nprint('The shape of X_test_transformed_tfidf is : {}'.format(X_test_transformed_tfidf.shape))",
"The datatype of X_test_transformed_tfidf is : <class 'scipy.sparse.csr.csr_matrix'>\nThe shape of X_test_transformed_tfidf is : (72835, 64221)\n"
]
],
[
[
"Standardize data",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler(with_mean = False)\nX_train_transformed_tfidf_std = scaler.fit_transform(X_train_transformed_tfidf)",
"_____no_output_____"
],
[
"X_test_transformed_tfidf_std = scaler.transform(X_test_transformed_tfidf)",
"_____no_output_____"
]
],
[
[
"**GridSearch | TFIDF **",
"_____no_output_____"
]
],
[
[
"clf = GridSearch(X_train_transformed_tfidf_std)",
"_____no_output_____"
]
],
[
[
"Train the model",
"_____no_output_____"
]
],
[
[
"%%notify\n%%time\n\nclf.fit(X_train_transformed_tfidf_std, y_train)",
"CPU times: user 4min 23s, sys: 5.06 s, total: 4min 28s\nWall time: 35min 41s\n"
]
],
[
[
"**Score Heatmap | Tfidf**",
"_____no_output_____"
]
],
[
[
"heatmap(n_depth_score(clf.cv_results_), vector_type='Tfidf', style = 'ggplot')",
"_____no_output_____"
]
],
[
[
"**Ratios | Tfidf**",
"_____no_output_____"
]
],
[
[
"%%notify\n\ncm_tfidf_train, cm_tfidf_test = give_me_ratios(X_train_transformed_tfidf_std, y_train, X_test_transformed_tfidf_std, y_test, 'Tfidf', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth'])",
"+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Vector Type | Data Set | Best n_estimators | Best max_depth | Precision | Recall | TNR | FPR | FNR | Accuracy |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Bag of Words | Train | 175 | 31 | 0.97 | 0.94 | 0.85 | 0.15 | 0.06 | 0.9268851086031249 |\n| Bag of Words | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8892702684149104 |\n| Tfidf | Train | 175 | 31 | 0.97 | 0.95 | 0.85 | 0.15 | 0.05 | 0.9325143476947579 |\n| Tfidf | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8890780531337956 |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n"
]
],
[
[
"**confusion matrix | Tfidf**",
"_____no_output_____"
]
],
[
[
"plot_confusion_matrix(cm_tfidf_train, cm_tfidf_test, title=\"Tfidf | Grid Search\")",
"_____no_output_____"
]
],
[
[
"## Avg W2V\n\nIn this section, we'll apply Random Forests on data represented in the avg. W2V format.\nload the train and test data stored on disk.",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_train_transformed_avgW2V_full80_20_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_train_transformed_avgW2V_full80_20_nparray.pkl', 'rb')\nX_train_transformed_avgW2V = pickle.load(f)\nf.close()\nprint('The datatype of X_train_transformed_avgW2V is : {}'.format(type(X_train_transformed_avgW2V)))\nprint('The shape of X_train_transformed_avgW2V is : {}'.format(X_train_transformed_avgW2V.shape))",
"The datatype of X_train_transformed_avgW2V is : <class 'numpy.ndarray'>\nThe shape of X_train_transformed_avgW2V is : (291336, 50)\n"
],
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_test_transformed_avgW2V_full80_20_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_test_transformed_avgW2V_full80_20_nparray.pkl', 'rb')\nX_test_transformed_avgW2V = pickle.load(f)\nf.close()\nprint('The datatype of X_test_transformed_avgW2V is : {}'.format(type(X_test_transformed_avgW2V)))\nprint('The shape of X_test_transformed_avgW2V is : {}'.format(X_test_transformed_avgW2V.shape))",
"The datatype of X_test_transformed_avgW2V is : <class 'numpy.ndarray'>\nThe shape of X_test_transformed_avgW2V is : (72835, 50)\n"
]
],
[
[
"Standardize the data",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler(with_mean = True)\nX_train_transformed_avgW2V_std = scaler.fit_transform(X_train_transformed_avgW2V)",
"_____no_output_____"
],
[
"X_test_transformed_avgW2V_std = scaler.transform(X_test_transformed_avgW2V)",
"_____no_output_____"
]
],
[
[
"**GridSearch | avg. W2V **\n\n",
"_____no_output_____"
]
],
[
[
"clf = GridSearch(X_train_transformed_avgW2V_std)",
"_____no_output_____"
],
[
"%%notify\n%%time\n\nclf.fit(X_train_transformed_avgW2V_std, y_train)",
"CPU times: user 10min 7s, sys: 6.24 s, total: 10min 13s\nWall time: 1h 31min 22s\n"
]
],
[
[
"**Score Heatmap | Avg. W2V**",
"_____no_output_____"
]
],
[
[
"heatmap(n_depth_score(clf.cv_results_), vector_type='Avg. W2V', style = 'ggplot')",
"_____no_output_____"
]
],
[
[
"**Ratios | Avg. W2V**",
"_____no_output_____"
]
],
[
[
"%%notify\n\ncm_w2v_train, cm_w2v_test = give_me_ratios(X_train_transformed_avgW2V_std, y_train, X_test_transformed_avgW2V_std, y_test, 'Avg. W2V', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth'])",
"+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Vector Type | Data Set | Best n_estimators | Best max_depth | Precision | Recall | TNR | FPR | FNR | Accuracy |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Bag of Words | Train | 175 | 31 | 0.97 | 0.94 | 0.85 | 0.15 | 0.06 | 0.9268851086031249 |\n| Bag of Words | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8892702684149104 |\n| Tfidf | Train | 175 | 31 | 0.97 | 0.95 | 0.85 | 0.15 | 0.05 | 0.9325143476947579 |\n| Tfidf | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8890780531337956 |\n| Avg. W2V | Train | 175 | 17 | 1.0 | 0.98 | 0.99 | 0.01 | 0.02 | 0.9805825575967269 |\n| Avg. W2V | Test | 175 | 17 | 0.91 | 0.95 | 0.54 | 0.46 | 0.05 | 0.8797968009885357 |\n+--------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n"
]
],
[
[
"**Confusion Matrix | Avg. W2V**",
"_____no_output_____"
]
],
[
[
"plot_confusion_matrix(cm_w2v_train, cm_w2v_test, title=\"Avg. W2V | Grid Search\")",
"_____no_output_____"
]
],
[
[
"## Tfidf weighted W2V\n\nIn this last section, we apply Random Forests on vectors represented in the form of Tfidf weighted W2V.",
"_____no_output_____"
]
],
[
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_train_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_train_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb')\nX_train_transformed_TfidfW2V = pickle.load(f)\nf.close()\nprint('The datatype of X_train_transformed_TfidfW2V is : {}'.format(type(X_train_transformed_TfidfW2V)))\nprint('The shape of X_train_transformed_TfidfW2V is : {}'.format(X_train_transformed_TfidfW2V.shape))",
"The datatype of X_train_transformed_TfidfW2V is : <class 'numpy.ndarray'>\nThe shape of X_train_transformed_TfidfW2V is : (291336, 50)\n"
],
[
"# f = open(r'D:\\data_science\\datasets\\amazon2\\X_test_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb')\nf = open('/home/ucanreachtvk/data/X_test_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb')\nX_test_transformed_TfidfW2V = pickle.load(f)\nf.close()\nprint('The datatype of X_test_transformed_TfidfW2V is : {}'.format(type(X_test_transformed_TfidfW2V)))\nprint('The shape of X_train_transformed_TfidfW2V is : {}'.format(X_test_transformed_TfidfW2V.shape))",
"The datatype of X_test_transformed_TfidfW2V is : <class 'numpy.ndarray'>\nThe shape of X_train_transformed_TfidfW2V is : (72835, 50)\n"
]
],
[
[
"Standardize data",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler(with_mean = True)\nX_train_transformed_TfidfW2V_std = scaler.fit_transform(X_train_transformed_TfidfW2V)",
"_____no_output_____"
],
[
"X_test_transformed_TfidfW2V_std = scaler.transform(X_test_transformed_TfidfW2V)",
"_____no_output_____"
]
],
[
[
"**GridSearch | Tfidf Weighted W2V**",
"_____no_output_____"
]
],
[
[
"clf = GridSearch(X_train_transformed_TfidfW2V_std)",
"_____no_output_____"
],
[
"%%notify\n%%time\n\nclf.fit(X_train_transformed_TfidfW2V_std, y_train)",
"CPU times: user 8min 27s, sys: 7.89 s, total: 8min 35s\nWall time: 1h 35min 20s\n"
]
],
[
[
"**Score Heatmap | Tfidf wt. W2V**",
"_____no_output_____"
]
],
[
[
"heatmap(n_depth_score(clf.cv_results_), vector_type='Tfidf wt. W2V', style = 'ggplot')",
"_____no_output_____"
]
],
[
[
"**Ratios | Tfidf wt. W2V**",
"_____no_output_____"
]
],
[
[
"cm_tfidfw2v_train, cm_tfidfw2v_test = give_me_ratios(X_train_transformed_TfidfW2V_std, y_train, X_test_transformed_TfidfW2V_std, y_test, 'Tfidf wt. W2V', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth'])",
"+---------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Vector Type | Data Set | Best n_estimators | Best max_depth | Precision | Recall | TNR | FPR | FNR | Accuracy |\n+---------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n| Bag of Words | Train | 175 | 31 | 0.97 | 0.94 | 0.85 | 0.15 | 0.06 | 0.9268851086031249 |\n| Bag of Words | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8892702684149104 |\n| Tfidf | Train | 175 | 31 | 0.97 | 0.95 | 0.85 | 0.15 | 0.05 | 0.9325143476947579 |\n| Tfidf | Test | 175 | 31 | 0.95 | 0.92 | 0.75 | 0.25 | 0.08 | 0.8890780531337956 |\n| Avg. W2V | Train | 175 | 17 | 1.0 | 0.98 | 0.99 | 0.01 | 0.02 | 0.9805825575967269 |\n| Avg. W2V | Test | 175 | 17 | 0.91 | 0.95 | 0.54 | 0.46 | 0.05 | 0.8797968009885357 |\n| Tfidf wt. W2V | Train | 150 | 17 | 1.0 | 0.98 | 0.99 | 0.01 | 0.02 | 0.9795493862756405 |\n| Tfidf wt. W2V | Test | 150 | 17 | 0.83 | 0.99 | 0.02 | 0.98 | 0.01 | 0.8258529553099472 |\n+---------------+----------+-------------------+----------------+-----------+--------+------+------+------+--------------------+\n"
]
],
[
[
"**Confusion Matrix | Tfidf wt. W2V**",
"_____no_output_____"
]
],
[
[
"plot_confusion_matrix(cm_tfidfw2v_train, cm_tfidfw2v_test, title=\"Tfidf wt. W2V | Grid Search\")",
"_____no_output_____"
]
],
[
[
"### Conclusion:\n\n* We applied Random Forests to the Amazon food reviews for various vector representations.\n* Found that, as expected, the higher the number of estimators and max_depth, the better the model performs.\n* Plotted the confusion matrix for the train and test data, and calculated several important ratios from it, such as Precision, Recall, and FNR.\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5c19ed1c4e306013f1a287034553d67ad2fee2
| 10,967 |
ipynb
|
Jupyter Notebook
|
ROCStories_preprocess.ipynb
|
mil-tokyo/missing-position-prediction
|
7f6a868e4b25b52311e43df39d124109c1adc75c
|
[
"Apache-2.0"
] | 2 |
2020-11-04T03:25:08.000Z
|
2021-08-14T13:29:10.000Z
|
ROCStories_preprocess.ipynb
|
mil-tokyo/missing-position-prediction
|
7f6a868e4b25b52311e43df39d124109c1adc75c
|
[
"Apache-2.0"
] | null | null | null |
ROCStories_preprocess.ipynb
|
mil-tokyo/missing-position-prediction
|
7f6a868e4b25b52311e43df39d124109c1adc75c
|
[
"Apache-2.0"
] | 1 |
2021-08-13T21:30:16.000Z
|
2021-08-13T21:30:16.000Z
| 21.253876 | 122 | 0.48573 |
[
[
[
"# Preprocess \"ROC Stories\" for Story Completion",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
],
[
"import os\nimport glob\nimport pandas as pd\n\nDATAPATH = '/path/to/ROCStories'",
"_____no_output_____"
],
[
"ROCstory_spring2016 = pd.read_csv(os.path.join(DATAPATH, \"ROCStories__spring2016 - ROCStories_spring2016.csv\"))\nROCstory_winter2017 = pd.read_csv(os.path.join(DATAPATH, \"ROCStories_winter2017 - ROCStories_winter2017.csv\"))",
"_____no_output_____"
],
[
"ROCstory_train = pd.concat([ROCstory_spring2016, ROCstory_winter2017])",
"_____no_output_____"
],
[
"len(ROCstory_train[\"storyid\"].unique())",
"_____no_output_____"
],
[
"stories = ROCstory_train.loc[:, \"sentence1\":\"sentence5\"].values",
"_____no_output_____"
]
],
[
[
"## Train, Dev, Test",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"train_and_dev, test_stories = train_test_split(stories, test_size=0.1)",
"_____no_output_____"
],
[
"train_stories, dev_stories = train_test_split(train_and_dev, test_size=1/9)",
"_____no_output_____"
],
[
"len(train_stories), len(dev_stories), len(test_stories)",
"_____no_output_____"
]
],
[
[
"### dev",
"_____no_output_____"
]
],
[
[
"import numpy as np\nnp.random.seed(1234)",
"_____no_output_____"
],
[
"dev_missing_indexes = np.random.randint(low=0, high=5, size=len(dev_stories))",
"_____no_output_____"
],
[
"dev_stories_with_missing = []\n\nfor st, mi in zip(dev_stories, dev_missing_indexes):\n missing_sentence = st[mi]\n remain_sentences = np.delete(st, mi)\n \n dev_stories_with_missing.append([remain_sentences[0], \n remain_sentences[1],\n remain_sentences[2],\n remain_sentences[3], \n mi, missing_sentence])",
"_____no_output_____"
],
[
"dev_df = pd.DataFrame(dev_stories_with_missing,\n columns=['stories_with_missing_sentence1',\n 'stories_with_missing_sentence2',\n 'stories_with_missing_sentence3',\n 'stories_with_missing_sentence4',\n 'missing_id', 'missing_sentence'])",
"_____no_output_____"
],
[
"dev_df.to_csv(\"./data/rocstories_completion_dev.csv\", index=False)",
"_____no_output_____"
]
],
[
[
"### test",
"_____no_output_____"
]
],
[
[
"test_missing_indexes = np.random.randint(low=0, high=5, size=len(test_stories))",
"_____no_output_____"
],
[
"test_stories_with_missing = []\n\nfor st, mi in zip(test_stories, test_missing_indexes):\n missing_sentence = st[mi]\n remain_sentences = np.delete(st, mi)\n \n test_stories_with_missing.append([remain_sentences[0], \n remain_sentences[1],\n remain_sentences[2],\n remain_sentences[3], \n mi, missing_sentence])",
"_____no_output_____"
],
[
"test_df = pd.DataFrame(test_stories_with_missing,\n columns=['stories_with_missing_sentence1',\n 'stories_with_missing_sentence2',\n 'stories_with_missing_sentence3',\n 'stories_with_missing_sentence4',\n 'missing_id', 'missing_sentence'])",
"_____no_output_____"
],
[
"test_df.to_csv(\"./data/rocstories_completion_test.csv\", index=False)",
"_____no_output_____"
]
],
[
[
"### train",
"_____no_output_____"
]
],
[
[
"train_df = pd.DataFrame(train_stories,\n columns=['sentence1',\n 'sentence2',\n 'sentence3',\n 'sentence4',\n 'sentence5'])",
"_____no_output_____"
],
[
"train_df.to_csv(\"./data/rocstories_completion_train.csv\", index=False)",
"_____no_output_____"
]
],
[
[
"## load saved data",
"_____no_output_____"
]
],
[
[
"train_df2 = pd.read_csv(\"./data/rocstories_completion_train.csv\")",
"_____no_output_____"
],
[
"# train_df2.head()",
"_____no_output_____"
],
[
"dev_df2 = pd.read_csv(\"./data/rocstories_completion_dev.csv\")",
"_____no_output_____"
],
[
"# dev_df2.head()",
"_____no_output_____"
],
[
"test_df2 = pd.read_csv(\"./data/rocstories_completion_test.csv\")",
"_____no_output_____"
],
[
"# test_df2.head()",
"_____no_output_____"
],
[
"dev_df2.missing_id.value_counts()",
"_____no_output_____"
],
[
"test_df2.missing_id.value_counts()",
"_____no_output_____"
]
],
[
[
"### mini size dataset",
"_____no_output_____"
]
],
[
[
"train_mini, train_else = train_test_split(train_df, test_size=0.9)",
"_____no_output_____"
],
[
"len(train_mini)",
"_____no_output_____"
],
[
"train_mini.to_csv(\"./data/rocstories_completion_train_mini.csv\", index=False)",
"_____no_output_____"
],
[
"dev_mini, dev_else = train_test_split(dev_df, test_size=0.9)",
"_____no_output_____"
],
[
"len(dev_mini)",
"_____no_output_____"
],
[
"dev_mini.to_csv(\"./data/rocstories_completion_dev_mini.csv\", index=False)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5c24df2f16dd8d317985211569ede9787ea241
| 7,088 |
ipynb
|
Jupyter Notebook
|
example.ipynb
|
patarapolw/hanzilvlib
|
2874dcd994053ac5617df44d222dc8086caeaae9
|
[
"MIT"
] | 1 |
2022-02-14T09:58:50.000Z
|
2022-02-14T09:58:50.000Z
|
example.ipynb
|
patarapolw/hanzilvlib
|
2874dcd994053ac5617df44d222dc8086caeaae9
|
[
"MIT"
] | null | null | null |
example.ipynb
|
patarapolw/hanzilvlib
|
2874dcd994053ac5617df44d222dc8086caeaae9
|
[
"MIT"
] | null | null | null | 28.01581 | 145 | 0.423956 |
[
[
[
"from hanzilvlib.level import HanziLevel\nhlp = HanziLevel()\nhlp.get_hanzi_list(level=1)",
"_____no_output_____"
],
[
"from hanzilvlib.dictionary import HanziDict, VocabDict, SentenceDict\nhanzi_dict = HanziDict()\nvocab_dict = VocabDict()\nsentence_dict = SentenceDict()",
"_____no_output_____"
],
[
"hanzi_dict.search_hanzi('你')",
"Building prefix dict from /Users/patarapolw/PycharmProjects/hanzilvlib/venv/lib/python3.7/site-packages/wordfreq/data/jieba_zh.txt ...\nDumping model to file cache /var/folders/rg/1rs2m55j3l59r6k84tt2fc500000gn/T/jieba.u6d2f1d045931490b957a78c6ed4c1434.cache\nLoading model cost 0.123 seconds.\nPrefix dict has been built succesfully.\n"
],
[
"vocab_dict.search_vocab(\"你我\")",
"_____no_output_____"
],
[
"sentence_dict.search_sentence(\"谢谢\")",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a5c25f10168dfc84acab4c5e1b6a279cff63c9f
| 1,494 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/Untitled-checkpoint.ipynb
|
SubhadityaMukherjee/DataLoader.jl
|
d943b88681fa8e3a52c80e3406cfc2d35c7d2b9f
|
[
"MIT"
] | 1 |
2020-06-19T14:35:06.000Z
|
2020-06-19T14:35:06.000Z
|
.ipynb_checkpoints/Untitled-checkpoint.ipynb
|
SubhadityaMukherjee/DataLoader.jl
|
d943b88681fa8e3a52c80e3406cfc2d35c7d2b9f
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/Untitled-checkpoint.ipynb
|
SubhadityaMukherjee/DataLoader.jl
|
d943b88681fa8e3a52c80e3406cfc2d35c7d2b9f
|
[
"MIT"
] | null | null | null | 18.444444 | 133 | 0.513387 |
[
[
[
"using PyCall",
"┌ Info: Precompiling PyCall [438e738f-606a-5dbb-bf0a-cddfbfd45ab0]\n└ @ Base loading.jl:1260\n"
],
[
"pyautogui = pyimport(\"pyautogui\")",
"_____no_output_____"
],
[
"pyautogui.moveTo(100, 150)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4a5c300a64cd239f55cd106204a7c95a67234f40
| 40,982 |
ipynb
|
Jupyter Notebook
|
prototype/xgboost.ipynb
|
raresraf/AlgoRAF
|
a5ecf6a8a2811e9b0348392b2fa09ff095b13e38
|
[
"MIT"
] | null | null | null |
prototype/xgboost.ipynb
|
raresraf/AlgoRAF
|
a5ecf6a8a2811e9b0348392b2fa09ff095b13e38
|
[
"MIT"
] | null | null | null |
prototype/xgboost.ipynb
|
raresraf/AlgoRAF
|
a5ecf6a8a2811e9b0348392b2fa09ff095b13e38
|
[
"MIT"
] | null | null | null | 40.376355 | 266 | 0.278195 |
[
[
[
"# Configs\n\nembedding_type = \"perf\" # time or perf",
"_____no_output_____"
],
[
"import pandas as pd\nimport matplotlib\nimport numpy as np\nfrom sklearn import tree\n\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split\n\nnp.set_printoptions(precision=3, suppress=True)",
"_____no_output_____"
],
[
"dataset = pd.read_csv(f\"../dataset/{embedding_type}/dataset.csv\")\ndataset = pd.get_dummies(dataset)\n\ndataset.head()\n\nlabels = [\n \"label_strings\",\n \"label_implementation\",\n \"label_greedy\",\n \"label_brute_force\",\n \"label_dp\",\n \"label_divide_and_conquer\",\n \"label_graphs\",\n \"label_binary_search\",\n \"label_math\",\n \"label_sortings\",\n \"label_shortest_paths\",\n]\nprint_labels = list(map(lambda l: (l.split('_', 1)[1].replace('_', ' ')), labels))\n\ntrain, test = train_test_split(dataset, test_size=0.33, random_state=42, shuffle=True)\n\ntrain_dataset_features = train.copy().drop(labels, axis=1)\ntrain_dataset_labels = pd.concat([train.copy().pop(x) for x in labels], axis=1)\n\ntest_dataset_features = test.copy().drop(labels, axis=1)\ntest_dataset_labels = pd.concat([test.copy().pop(x) for x in labels], axis=1)\n\n",
"_____no_output_____"
],
[
"test_dataset_features.sort_index()",
"_____no_output_____"
],
[
"test_dataset_labels.sort_index()",
"_____no_output_____"
],
[
"import xgboost as xgb\nfrom sklearn.multioutput import MultiOutputClassifier\n\nmodel = MultiOutputClassifier(xgb.XGBClassifier(objective='binary:logistic'))\nmodel.fit(train_dataset_features, train_dataset_labels)",
"_____no_output_____"
],
[
"print(classification_report(train_dataset_labels, model.predict(train_dataset_features), \n target_names = print_labels))",
" precision recall f1-score support\n\n strings 1.00 1.00 1.00 1522\n implementation 1.00 1.00 1.00 2788\n greedy 1.00 1.00 1.00 1060\n brute force 1.00 1.00 1.00 645\n dp 1.00 1.00 1.00 72\ndivide and conquer 1.00 1.00 1.00 67\n graphs 1.00 1.00 1.00 159\n binary search 1.00 1.00 1.00 67\n math 1.00 1.00 1.00 711\n sortings 1.00 1.00 1.00 334\n shortest paths 1.00 1.00 1.00 159\n\n micro avg 1.00 1.00 1.00 7584\n macro avg 1.00 1.00 1.00 7584\n weighted avg 1.00 1.00 1.00 7584\n samples avg 0.99 0.99 0.99 7584\n\n"
],
[
"print(classification_report(test_dataset_labels, model.predict(test_dataset_features), \n target_names = print_labels))",
" precision recall f1-score support\n\n strings 0.94 0.90 0.92 756\n implementation 0.94 0.98 0.96 1387\n greedy 0.92 0.77 0.84 523\n brute force 0.98 0.77 0.86 311\n dp 0.87 0.74 0.80 35\ndivide and conquer 1.00 0.68 0.81 31\n graphs 0.91 0.88 0.90 83\n binary search 1.00 0.68 0.81 31\n math 0.97 0.91 0.94 301\n sortings 0.95 0.61 0.74 176\n shortest paths 0.91 0.88 0.90 83\n\n micro avg 0.94 0.88 0.91 3717\n macro avg 0.94 0.80 0.86 3717\n weighted avg 0.94 0.88 0.91 3717\n samples avg 0.94 0.91 0.91 3717\n\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5c34f98568fb706afdb4b1e7445206607ec244
| 442,778 |
ipynb
|
Jupyter Notebook
|
2_multilayer_model_performance_with_squared_images.ipynb
|
yantraguru/deeplearn
|
71eb374d053593ac006a1d8305e20759a6d6accb
|
[
"Apache-2.0"
] | null | null | null |
2_multilayer_model_performance_with_squared_images.ipynb
|
yantraguru/deeplearn
|
71eb374d053593ac006a1d8305e20759a6d6accb
|
[
"Apache-2.0"
] | null | null | null |
2_multilayer_model_performance_with_squared_images.ipynb
|
yantraguru/deeplearn
|
71eb374d053593ac006a1d8305e20759a6d6accb
|
[
"Apache-2.0"
] | null | null | null | 163.025773 | 97,040 | 0.763319 |
[
[
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:85% !important; }</style>\"))",
"_____no_output_____"
],
[
"import os\nimport time\nimport numpy as np\nimport pandas as pd\n\nfrom os import listdir\nfrom io import BytesIO\nimport requests\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers,models,utils\nfrom tensorflow.keras.layers import Dense,Flatten\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.callbacks import EarlyStopping\n\nfrom scipy import stats\nfrom sklearn import preprocessing\n\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import roc_curve, auc\n\nimport PIL\nfrom PIL import Image\n\nimport seaborn as sns\nfrom matplotlib.pyplot import imshow\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"if tf.test.gpu_device_name():\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))\nelse:\n print(\"Please install GPU version of TF\")",
"Default GPU Device: /device:GPU:0\n"
],
[
"DATA_DIR = 'data/caps_and_shoes_squared/'\nIMAGE_SIZE = (28,28)\nFEATURE_SIZE = IMAGE_SIZE[0]*IMAGE_SIZE[1]",
"_____no_output_____"
],
[
"def convert_img_to_data(image):\n data = np.asarray(image)\n gs_image = image.convert(mode='L')\n gs_data = np.asarray(gs_image)\n gs_image.thumbnail(IMAGE_SIZE, Image.ANTIALIAS)\n gs_resized = gs_image.resize(IMAGE_SIZE,Image.ANTIALIAS)\n gs_resized_data = np.asarray(gs_resized)\n reshaped_gs_data = gs_resized_data.reshape(IMAGE_SIZE[0]*IMAGE_SIZE[1])\n return reshaped_gs_data\n\ndef convert_images_from_dir(dir_path):\n image_data = []\n \n for filename in listdir(dir_path):\n image = Image.open(dir_path +os.sep + filename)\n reshaped_gs_data = convert_img_to_data(image)\n image_data.append(reshaped_gs_data)\n \n return image_data\n\ndef load_from_dir(dir_path, labels):\n label_data = []\n image_data = []\n for label in labels:\n data_from_dir = convert_images_from_dir(dir_path + label)\n labels_for_data = [label for i in range(len(data_from_dir))]\n image_data += data_from_dir\n label_data += labels_for_data\n \n print('Found %d images belonging to %d classes' % (len(image_data), len(labels)))\n return (np.array(image_data),np.array(label_data))\n\ndef load_img_data(data_dir):\n train_dir = DATA_DIR + 'train/'\n validation_dir = DATA_DIR + 'val/'\n test_dir = DATA_DIR + 'test/'\n \n if (os.path.isdir(train_dir) and os.path.isdir(validation_dir) and os.path.isdir(test_dir)) :\n labels = [subdirname.name for subdirname in os.scandir(train_dir) if subdirname.is_dir()] \n \n train_data = load_from_dir(train_dir,labels)\n validation_data = load_from_dir(validation_dir,labels)\n test_data = load_from_dir(test_dir,labels)\n \n return train_data, validation_data, test_data",
"_____no_output_____"
],
[
"train_data, validation_data, test_data = load_img_data(DATA_DIR)\nX_train, y_train = train_data\nX_val, y_val = validation_data\nX_test, y_test = test_data",
"Found 2299 images belonging to 2 classes\nFound 678 images belonging to 2 classes\nFound 327 images belonging to 2 classes\n"
],
[
"X_train = X_train.astype('float32') / 255\nX_val = X_val.astype('float32') / 255\nX_test = X_test.astype('float32') / 255",
"_____no_output_____"
],
[
"le = preprocessing.LabelEncoder()\nle.fit(y_train)\ny_train = le.transform(y_train)\ny_val = le.transform(y_val)\ny_test = le.transform(y_test)\ny_train = utils.to_categorical(y_train)\ny_val = utils.to_categorical(y_val)\ny_test = utils.to_categorical(y_test)",
"_____no_output_____"
],
[
"def define_multilayer_model_architecture_64_32_16():\n model = models.Sequential()\n model.add(Dense(64, activation='relu', input_shape=(FEATURE_SIZE,)))\n model.add(Dense(32, activation='relu'))\n model.add(Dense(16, activation='relu'))\n model.add(Dense(2, activation='softmax'))\n \n model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"model = define_multilayer_model_architecture_64_32_16()\n%time history = model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=500, batch_size=32, shuffle=True, verbose = 1)",
"Train on 2299 samples, validate on 678 samples\nEpoch 1/500\n2299/2299 [==============================] - 0s 121us/sample - loss: 0.6709 - accuracy: 0.5920 - val_loss: 0.6401 - val_accuracy: 0.6578\nEpoch 2/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.6251 - accuracy: 0.6725 - val_loss: 0.6068 - val_accuracy: 0.6593\nEpoch 3/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.5930 - accuracy: 0.6812 - val_loss: 0.5685 - val_accuracy: 0.7065\nEpoch 4/500\n2299/2299 [==============================] - 0s 56us/sample - loss: 0.5692 - accuracy: 0.7134 - val_loss: 0.5747 - val_accuracy: 0.7021\nEpoch 5/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.5595 - accuracy: 0.7129 - val_loss: 0.5293 - val_accuracy: 0.7330\nEpoch 6/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.5452 - accuracy: 0.7216 - val_loss: 0.5280 - val_accuracy: 0.7493\nEpoch 7/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.5408 - accuracy: 0.7334 - val_loss: 0.5457 - val_accuracy: 0.7404\nEpoch 8/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.5296 - accuracy: 0.7425 - val_loss: 0.4988 - val_accuracy: 0.7729\nEpoch 9/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.5146 - accuracy: 0.7521 - val_loss: 0.4928 - val_accuracy: 0.7729\nEpoch 10/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.5111 - accuracy: 0.7590 - val_loss: 0.4791 - val_accuracy: 0.7802\nEpoch 11/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4992 - accuracy: 0.7642 - val_loss: 0.4848 - val_accuracy: 0.7950\nEpoch 12/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.5045 - accuracy: 0.7564 - val_loss: 0.4815 - val_accuracy: 0.7640\nEpoch 13/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4869 - accuracy: 0.7708 - val_loss: 0.4664 - val_accuracy: 
0.7994\nEpoch 14/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4739 - accuracy: 0.7847 - val_loss: 0.4994 - val_accuracy: 0.7522\nEpoch 15/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4690 - accuracy: 0.7808 - val_loss: 0.5042 - val_accuracy: 0.7552\nEpoch 16/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4530 - accuracy: 0.7969 - val_loss: 0.4573 - val_accuracy: 0.7965\nEpoch 17/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4544 - accuracy: 0.7956 - val_loss: 0.4283 - val_accuracy: 0.8171\nEpoch 18/500\n2299/2299 [==============================] - 0s 60us/sample - loss: 0.4556 - accuracy: 0.7947 - val_loss: 0.4323 - val_accuracy: 0.8142\nEpoch 19/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4423 - accuracy: 0.7986 - val_loss: 0.4177 - val_accuracy: 0.8068\nEpoch 20/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4400 - accuracy: 0.7999 - val_loss: 0.4116 - val_accuracy: 0.8201\nEpoch 21/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4292 - accuracy: 0.8073 - val_loss: 0.4101 - val_accuracy: 0.8171\nEpoch 22/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.4259 - accuracy: 0.8147 - val_loss: 0.4439 - val_accuracy: 0.7743\nEpoch 23/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.4260 - accuracy: 0.8069 - val_loss: 0.4008 - val_accuracy: 0.8333\nEpoch 24/500\n2299/2299 [==============================] - 0s 57us/sample - loss: 0.4163 - accuracy: 0.8086 - val_loss: 0.4221 - val_accuracy: 0.7965\nEpoch 25/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.3996 - accuracy: 0.8256 - val_loss: 0.4366 - val_accuracy: 0.7935\nEpoch 26/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4082 - accuracy: 0.8134 - val_loss: 0.4064 - val_accuracy: 0.7994\nEpoch 27/500\n2299/2299 
[==============================] - 0s 52us/sample - loss: 0.4060 - accuracy: 0.8151 - val_loss: 0.4375 - val_accuracy: 0.7802\nEpoch 28/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.3964 - accuracy: 0.8247 - val_loss: 0.4007 - val_accuracy: 0.8053\nEpoch 29/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.3951 - accuracy: 0.8199 - val_loss: 0.4529 - val_accuracy: 0.7729\nEpoch 30/500\n2299/2299 [==============================] - 0s 57us/sample - loss: 0.4134 - accuracy: 0.8125 - val_loss: 0.3987 - val_accuracy: 0.8230\nEpoch 31/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.3910 - accuracy: 0.8191 - val_loss: 0.3799 - val_accuracy: 0.8215\nEpoch 32/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.3931 - accuracy: 0.8264 - val_loss: 0.3854 - val_accuracy: 0.8142\nEpoch 33/500\n2299/2299 [==============================] - 0s 56us/sample - loss: 0.3808 - accuracy: 0.8312 - val_loss: 0.3750 - val_accuracy: 0.8201\nEpoch 34/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.3727 - accuracy: 0.8421 - val_loss: 0.4040 - val_accuracy: 0.8274\nEpoch 35/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.3620 - accuracy: 0.8460 - val_loss: 0.4798 - val_accuracy: 0.7655\nEpoch 36/500\n2299/2299 [==============================] - 0s 60us/sample - loss: 0.3769 - accuracy: 0.8234 - val_loss: 0.5704 - val_accuracy: 0.7109\nEpoch 37/500\n2299/2299 [==============================] - 0s 57us/sample - loss: 0.3609 - accuracy: 0.8386 - val_loss: 0.3629 - val_accuracy: 0.8496\nEpoch 38/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.3883 - accuracy: 0.8191 - val_loss: 0.3802 - val_accuracy: 0.8260\nEpoch 39/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.3647 - accuracy: 0.8325 - val_loss: 0.4214 - val_accuracy: 0.7994\nEpoch 40/500\n2299/2299 [==============================] - 
0s 61us/sample - loss: 0.3446 - accuracy: 0.8491 - val_loss: 0.4071 - val_accuracy: 0.8112\nEpoch 41/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.3523 - accuracy: 0.8391 - val_loss: 0.3967 - val_accuracy: 0.8348\nEpoch 42/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.3430 - accuracy: 0.8530 - val_loss: 0.3787 - val_accuracy: 0.8215\nEpoch 43/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.3271 - accuracy: 0.8565 - val_loss: 0.3852 - val_accuracy: 0.8319\nEpoch 44/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.3400 - accuracy: 0.8565 - val_loss: 0.5229 - val_accuracy: 0.7463\nEpoch 45/500\n2299/2299 [==============================] - 0s 59us/sample - loss: 0.3329 - accuracy: 0.8591 - val_loss: 0.3518 - val_accuracy: 0.8496\nEpoch 46/500\n2299/2299 [==============================] - 0s 56us/sample - loss: 0.3329 - accuracy: 0.8538 - val_loss: 0.3734 - val_accuracy: 0.8392\nEpoch 47/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.3370 - accuracy: 0.8565 - val_loss: 0.4302 - val_accuracy: 0.7891\nEpoch 48/500\n2299/2299 [==============================] - 0s 57us/sample - loss: 0.3274 - accuracy: 0.8578 - val_loss: 0.4690 - val_accuracy: 0.7788\nEpoch 49/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.3165 - accuracy: 0.8678 - val_loss: 0.3501 - val_accuracy: 0.8614\nEpoch 50/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.3230 - accuracy: 0.8595 - val_loss: 0.4073 - val_accuracy: 0.8097\nEpoch 51/500\n2299/2299 [==============================] - 0s 60us/sample - loss: 0.3294 - accuracy: 0.8521 - val_loss: 0.3796 - val_accuracy: 0.8437\nEpoch 52/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.3111 - accuracy: 0.8656 - val_loss: 0.3555 - val_accuracy: 0.8599\nEpoch 53/500\n2299/2299 [==============================] - 0s 60us/sample - loss: 0.3218 - 
accuracy: 0.8582 - val_loss: 0.3739 - val_accuracy: 0.8407\nEpoch 54/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.3119 - accuracy: 0.8595 - val_loss: 0.3581 - val_accuracy: 0.8319\nEpoch 55/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.3171 - accuracy: 0.8656 - val_loss: 0.3974 - val_accuracy: 0.8024\n"
],
[
"plt.figure(num=None, figsize=(16, 6))\nplt.plot(history.history['accuracy'], label='train')\nplt.plot(history.history['val_accuracy'], label='validation')\nplt.legend()\nplt.xlim(0, 500)\nplt.show()",
"_____no_output_____"
],
[
"ITER = 10\ntraining_time_list = []\ntest_accuracy_list = []\nfor iter_count in range(ITER):\n model = define_multilayer_model_architecture_64_32_16()\n start_time = time.time()\n model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=250, batch_size=32, verbose=0, shuffle=True)\n training_time = time.time() - start_time\n training_time_list.append(training_time)\n test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0)\n test_accuracy_list.append(test_accuracy)\n\nprint('Accuracies over 10 runs : %s' % test_accuracy_list)\nprint('Avg training time : %.3f s' % np.mean(training_time_list))\nprint('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), np.std(test_accuracy_list)))\nprint('Total parameters : %d' % model.count_params())",
"Accuracies over 10 runs : [0.82263, 0.87461776, 0.8287462, 0.8440367, 0.8287462, 0.8470948, 0.8562691, 0.8165138, 0.8501529, 0.853211]\nAvg training time : 30.120 s\nAvg test accuracy : 0.8422 +- 0.02\nTotal parameters : 52882\n"
],
[
"def define_multilayer_model_architecture_32_8():\n model = models.Sequential()\n model.add(Dense(32, activation='relu', input_shape=(FEATURE_SIZE,)))\n model.add(Dense(8, activation='relu'))\n model.add(Dense(2, activation='softmax'))\n \n model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"model = define_multilayer_model_architecture_32_8()\n%time history = model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=500, batch_size=32, shuffle=True, verbose = 1)",
"Train on 2299 samples, validate on 678 samples\nEpoch 1/500\n2299/2299 [==============================] - 0s 113us/sample - loss: 0.6594 - accuracy: 0.6055 - val_loss: 0.6080 - val_accuracy: 0.6829\nEpoch 2/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.5966 - accuracy: 0.6794 - val_loss: 0.5671 - val_accuracy: 0.7153\nEpoch 3/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.5757 - accuracy: 0.7038 - val_loss: 0.5512 - val_accuracy: 0.7271\nEpoch 4/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.5614 - accuracy: 0.7081 - val_loss: 0.5416 - val_accuracy: 0.7153\nEpoch 5/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.5480 - accuracy: 0.7199 - val_loss: 0.5266 - val_accuracy: 0.7271\nEpoch 6/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.5310 - accuracy: 0.7338 - val_loss: 0.5219 - val_accuracy: 0.7330\nEpoch 7/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.5299 - accuracy: 0.7451 - val_loss: 0.5076 - val_accuracy: 0.7375\nEpoch 8/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.5165 - accuracy: 0.7512 - val_loss: 0.5251 - val_accuracy: 0.7522\nEpoch 9/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.5103 - accuracy: 0.7464 - val_loss: 0.5277 - val_accuracy: 0.7507\nEpoch 10/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.5126 - accuracy: 0.7573 - val_loss: 0.5193 - val_accuracy: 0.7493\nEpoch 11/500\n2299/2299 [==============================] - 0s 47us/sample - loss: 0.5036 - accuracy: 0.7555 - val_loss: 0.5089 - val_accuracy: 0.7655\nEpoch 12/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.4893 - accuracy: 0.7642 - val_loss: 0.4890 - val_accuracy: 0.7655\nEpoch 13/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.4907 - accuracy: 0.7656 - val_loss: 0.4934 - val_accuracy: 
0.7596\nEpoch 14/500\n2299/2299 [==============================] - 0s 48us/sample - loss: 0.4799 - accuracy: 0.7747 - val_loss: 0.4906 - val_accuracy: 0.7640\nEpoch 15/500\n2299/2299 [==============================] - 0s 55us/sample - loss: 0.4803 - accuracy: 0.7695 - val_loss: 0.7742 - val_accuracy: 0.5590\nEpoch 16/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.4772 - accuracy: 0.7682 - val_loss: 0.4648 - val_accuracy: 0.7847\nEpoch 17/500\n2299/2299 [==============================] - 0s 49us/sample - loss: 0.4723 - accuracy: 0.7856 - val_loss: 0.5549 - val_accuracy: 0.7080\nEpoch 18/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.4707 - accuracy: 0.7812 - val_loss: 0.4657 - val_accuracy: 0.7802\nEpoch 19/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4650 - accuracy: 0.7825 - val_loss: 0.4821 - val_accuracy: 0.7729\nEpoch 20/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4621 - accuracy: 0.7882 - val_loss: 0.4699 - val_accuracy: 0.7832\nEpoch 21/500\n2299/2299 [==============================] - 0s 50us/sample - loss: 0.4567 - accuracy: 0.7930 - val_loss: 0.4774 - val_accuracy: 0.7950\nEpoch 22/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.4514 - accuracy: 0.7986 - val_loss: 0.4663 - val_accuracy: 0.7876\nEpoch 23/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.4438 - accuracy: 0.8034 - val_loss: 0.5074 - val_accuracy: 0.7448\nEpoch 24/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.4492 - accuracy: 0.7973 - val_loss: 0.4522 - val_accuracy: 0.8009\nEpoch 25/500\n2299/2299 [==============================] - 0s 53us/sample - loss: 0.4445 - accuracy: 0.7969 - val_loss: 0.4479 - val_accuracy: 0.7891\nEpoch 26/500\n2299/2299 [==============================] - 0s 56us/sample - loss: 0.4342 - accuracy: 0.7973 - val_loss: 0.4681 - val_accuracy: 0.7729\nEpoch 27/500\n2299/2299 
[==============================] - 0s 53us/sample - loss: 0.4265 - accuracy: 0.8095 - val_loss: 0.5821 - val_accuracy: 0.6947\nEpoch 28/500\n2299/2299 [==============================] - 0s 59us/sample - loss: 0.4270 - accuracy: 0.7999 - val_loss: 0.4411 - val_accuracy: 0.8038\nEpoch 29/500\n2299/2299 [==============================] - 0s 63us/sample - loss: 0.4262 - accuracy: 0.8077 - val_loss: 0.4383 - val_accuracy: 0.8009\nEpoch 30/500\n2299/2299 [==============================] - 0s 63us/sample - loss: 0.4243 - accuracy: 0.8047 - val_loss: 0.4377 - val_accuracy: 0.7965\nEpoch 31/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.4272 - accuracy: 0.7986 - val_loss: 0.5045 - val_accuracy: 0.7507\nEpoch 32/500\n2299/2299 [==============================] - 0s 51us/sample - loss: 0.4254 - accuracy: 0.7999 - val_loss: 0.4236 - val_accuracy: 0.8201\nEpoch 33/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.4000 - accuracy: 0.8217 - val_loss: 0.6011 - val_accuracy: 0.6844\nEpoch 34/500\n2299/2299 [==============================] - 0s 54us/sample - loss: 0.4092 - accuracy: 0.8095 - val_loss: 0.4235 - val_accuracy: 0.8186\nEpoch 35/500\n2299/2299 [==============================] - 0s 52us/sample - loss: 0.3946 - accuracy: 0.8204 - val_loss: 0.4724 - val_accuracy: 0.7758\nEpoch 36/500\n2299/2299 [==============================] - 0s 58us/sample - loss: 0.4056 - accuracy: 0.8151 - val_loss: 0.4150 - val_accuracy: 0.8112\nEpoch 37/500\n2299/2299 [==============================] - 0s 57us/sample - loss: 0.3950 - accuracy: 0.8221 - val_loss: 0.4147 - val_accuracy: 0.8156\nEpoch 38/500\n2299/2299 [==============================] - 0s 56us/sample - loss: 0.3985 - accuracy: 0.8273 - val_loss: 0.4549 - val_accuracy: 0.7773\nEpoch 39/500\n2299/2299 [==============================] - 0s 72us/sample - loss: 0.3916 - accuracy: 0.8234 - val_loss: 0.5244 - val_accuracy: 0.7316\nEpoch 40/500\n2299/2299 [==============================] - 
0s 63us/sample - loss: 0.3978 - accuracy: 0.8212 - val_loss: 0.4265 - val_accuracy: 0.8245\nEpoch 41/500\n2299/2299 [==============================] - 0s 61us/sample - loss: 0.3795 - accuracy: 0.8291 - val_loss: 0.4178 - val_accuracy: 0.8053\nEpoch 42/500\n2299/2299 [==============================] - 0s 59us/sample - loss: 0.3791 - accuracy: 0.8295 - val_loss: 0.4039 - val_accuracy: 0.8451\nEpoch 43/500\n2299/2299 [==============================] - 0s 61us/sample - loss: 0.3860 - accuracy: 0.8356 - val_loss: 0.5008 - val_accuracy: 0.7478\nEpoch 44/500\n2299/2299 [==============================] - 0s 65us/sample - loss: 0.3802 - accuracy: 0.8195 - val_loss: 0.5061 - val_accuracy: 0.7611\nEpoch 45/500\n2299/2299 [==============================] - 0s 66us/sample - loss: 0.3847 - accuracy: 0.8256 - val_loss: 0.4023 - val_accuracy: 0.8201\nEpoch 46/500\n2299/2299 [==============================] - 0s 62us/sample - loss: 0.3672 - accuracy: 0.8382 - val_loss: 0.4853 - val_accuracy: 0.7714\nEpoch 47/500\n2299/2299 [==============================] - 0s 66us/sample - loss: 0.3734 - accuracy: 0.8312 - val_loss: 0.4625 - val_accuracy: 0.7847\nEpoch 48/500\n2299/2299 [==============================] - 0s 75us/sample - loss: 0.3732 - accuracy: 0.8351 - val_loss: 0.4348 - val_accuracy: 0.7965\nEpoch 49/500\n2299/2299 [==============================] - 0s 65us/sample - loss: 0.3715 - accuracy: 0.8365 - val_loss: 0.3938 - val_accuracy: 0.8378\nEpoch 50/500\n2299/2299 [==============================] - 0s 61us/sample - loss: 0.3663 - accuracy: 0.8291 - val_loss: 0.4135 - val_accuracy: 0.8068\nEpoch 51/500\n2299/2299 [==============================] - 0s 76us/sample - loss: 0.3654 - accuracy: 0.8382 - val_loss: 0.5560 - val_accuracy: 0.7493\nEpoch 52/500\n2299/2299 [==============================] - 0s 83us/sample - loss: 0.3633 - accuracy: 0.8308 - val_loss: 0.4609 - val_accuracy: 0.7699\nEpoch 53/500\n2299/2299 [==============================] - 0s 68us/sample - loss: 0.3496 - 
accuracy: 0.8469 - val_loss: 0.4285 - val_accuracy: 0.8068\nEpoch 54/500\n2299/2299 [==============================] - 0s 63us/sample - loss: 0.3552 - accuracy: 0.8460 - val_loss: 0.4103 - val_accuracy: 0.8142\nEpoch 55/500\n2299/2299 [==============================] - 0s 59us/sample - loss: 0.3416 - accuracy: 0.8517 - val_loss: 0.3883 - val_accuracy: 0.8392\n"
],
[
"plt.figure(num=None, figsize=(16, 6))\nplt.plot(history.history['accuracy'], label='train')\nplt.plot(history.history['val_accuracy'], label='validation')\nplt.legend()\nplt.xlim(0, 500)\nplt.show()",
"_____no_output_____"
],
[
"ITER = 10\ntraining_time_list = []\ntest_accuracy_list = []\nfor iter_count in range(ITER):\n model = define_multilayer_model_architecture_32_8()\n start_time = time.time()\n model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=250, batch_size=32, verbose=0, shuffle=True)\n training_time = time.time() - start_time\n training_time_list.append(training_time)\n test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0)\n test_accuracy_list.append(test_accuracy)\n\nprint('Accuracies over 10 runs : %s' % test_accuracy_list)\nprint('Avg training time : %.3f s' % np.mean(training_time_list))\nprint('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), np.std(test_accuracy_list)))\nprint('Total parameters : %d' % model.count_params())",
"Accuracies over 10 runs : [0.8409786, 0.8318043, 0.82568806, 0.82263, 0.7461774, 0.82568806, 0.8318043, 0.8379205, 0.8409786, 0.8348624]\nAvg training time : 26.210 s\nAvg test accuracy : 0.8239 +- 0.03\nTotal parameters : 25402\n"
],
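The repeated-run benchmark pattern used in the cell above (time several training runs, then report the mean and spread) can be sketched independently of Keras. `train_once` below is a hypothetical stand-in for the `model.fit`/`model.evaluate` pair, so the sketch is runnable anywhere:

```python
import time
import statistics

def train_once():
    # Hypothetical stand-in for a model.fit(...) call: any workload
    # whose wall-clock time we want to average over several runs.
    return sum(i * i for i in range(100_000))

ITER = 5
times = []
for _ in range(ITER):
    start = time.time()
    train_once()
    times.append(time.time() - start)

print('Avg time : %.4f s' % statistics.mean(times))
print('Std dev  : %.4f s' % statistics.stdev(times))
```

Averaging over several runs, as the notebook does, matters because single-run timings and accuracies of a randomly initialized network vary noticeably.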
[
"model = define_multilayer_model_architecture_64_32_16()\n%time history = model.fit(X_train, y_train, validation_split = 0.2, epochs=225, batch_size=32, shuffle=True, verbose = 0)",
"CPU times: user 25.6 s, sys: 992 ms, total: 26.6 s\nWall time: 18.9 s\n"
],
[
"plt.figure(num=None, figsize=(16, 6))\nplt.plot(history.history['accuracy'], label='train')\nplt.plot(history.history['val_accuracy'], label='validation')\nplt.legend()\nplt.xlim(0, 500)\nplt.show()",
"_____no_output_____"
],
[
"model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=50, batch_size=32, shuffle=True, verbose = 2)",
"Train on 2299 samples, validate on 678 samples\nEpoch 1/50\n2299/2299 - 0s - loss: 0.3427 - accuracy: 0.8756 - val_loss: 0.4275 - val_accuracy: 0.8304\nEpoch 2/50\n2299/2299 - 0s - loss: 0.2265 - accuracy: 0.9126 - val_loss: 0.5323 - val_accuracy: 0.8053\nEpoch 3/50\n2299/2299 - 0s - loss: 0.2190 - accuracy: 0.9187 - val_loss: 0.4795 - val_accuracy: 0.8569\nEpoch 4/50\n2299/2299 - 0s - loss: 0.2434 - accuracy: 0.9182 - val_loss: 0.4293 - val_accuracy: 0.8451\nEpoch 5/50\n2299/2299 - 0s - loss: 0.2115 - accuracy: 0.9156 - val_loss: 0.4339 - val_accuracy: 0.8289\nEpoch 6/50\n2299/2299 - 0s - loss: 0.1836 - accuracy: 0.9274 - val_loss: 0.3957 - val_accuracy: 0.8643\nEpoch 7/50\n2299/2299 - 0s - loss: 0.2023 - accuracy: 0.9239 - val_loss: 0.4330 - val_accuracy: 0.8304\nEpoch 8/50\n2299/2299 - 0s - loss: 0.1566 - accuracy: 0.9395 - val_loss: 0.4713 - val_accuracy: 0.8304\nEpoch 9/50\n2299/2299 - 0s - loss: 0.1550 - accuracy: 0.9321 - val_loss: 0.6474 - val_accuracy: 0.7670\nEpoch 10/50\n2299/2299 - 0s - loss: 0.1844 - accuracy: 0.9234 - val_loss: 0.3736 - val_accuracy: 0.8628\nEpoch 11/50\n2299/2299 - 0s - loss: 0.1405 - accuracy: 0.9435 - val_loss: 0.4256 - val_accuracy: 0.8510\nEpoch 12/50\n2299/2299 - 0s - loss: 0.1491 - accuracy: 0.9374 - val_loss: 0.3729 - val_accuracy: 0.8628\nEpoch 13/50\n2299/2299 - 0s - loss: 0.1634 - accuracy: 0.9356 - val_loss: 0.3782 - val_accuracy: 0.8643\nEpoch 14/50\n2299/2299 - 0s - loss: 0.1452 - accuracy: 0.9421 - val_loss: 0.3822 - val_accuracy: 0.8658\nEpoch 15/50\n2299/2299 - 0s - loss: 0.2170 - accuracy: 0.9126 - val_loss: 0.4074 - val_accuracy: 0.8333\nEpoch 16/50\n2299/2299 - 0s - loss: 0.1539 - accuracy: 0.9339 - val_loss: 0.3751 - val_accuracy: 0.8805\nEpoch 17/50\n2299/2299 - 0s - loss: 0.1268 - accuracy: 0.9474 - val_loss: 0.4762 - val_accuracy: 0.8304\nEpoch 18/50\n2299/2299 - 0s - loss: 0.1504 - accuracy: 0.9343 - val_loss: 0.3772 - val_accuracy: 0.8687\nEpoch 19/50\n2299/2299 - 0s - loss: 0.1251 - accuracy: 0.9487 - 
val_loss: 0.4743 - val_accuracy: 0.8274\nEpoch 20/50\n2299/2299 - 0s - loss: 0.1542 - accuracy: 0.9395 - val_loss: 0.4141 - val_accuracy: 0.8510\nEpoch 21/50\n2299/2299 - 0s - loss: 0.1393 - accuracy: 0.9474 - val_loss: 0.4233 - val_accuracy: 0.8466\nEpoch 22/50\n2299/2299 - 0s - loss: 0.1142 - accuracy: 0.9474 - val_loss: 0.3630 - val_accuracy: 0.8746\nEpoch 23/50\n2299/2299 - 0s - loss: 0.1108 - accuracy: 0.9587 - val_loss: 0.4092 - val_accuracy: 0.8658\nEpoch 24/50\n2299/2299 - 0s - loss: 0.1095 - accuracy: 0.9526 - val_loss: 0.5189 - val_accuracy: 0.8363\nEpoch 25/50\n2299/2299 - 0s - loss: 0.1274 - accuracy: 0.9426 - val_loss: 0.4137 - val_accuracy: 0.8525\nEpoch 26/50\n2299/2299 - 0s - loss: 0.1079 - accuracy: 0.9539 - val_loss: 0.3952 - val_accuracy: 0.8658\nEpoch 27/50\n2299/2299 - 0s - loss: 0.0937 - accuracy: 0.9604 - val_loss: 0.4149 - val_accuracy: 0.8776\nEpoch 28/50\n2299/2299 - 0s - loss: 0.2613 - accuracy: 0.9387 - val_loss: 0.3595 - val_accuracy: 0.8614\nEpoch 29/50\n2299/2299 - 0s - loss: 0.1136 - accuracy: 0.9565 - val_loss: 0.4345 - val_accuracy: 0.8702\nEpoch 30/50\n2299/2299 - 0s - loss: 0.1191 - accuracy: 0.9513 - val_loss: 0.6303 - val_accuracy: 0.7847\nEpoch 31/50\n2299/2299 - 0s - loss: 0.1081 - accuracy: 0.9565 - val_loss: 0.3999 - val_accuracy: 0.8643\nEpoch 32/50\n2299/2299 - 0s - loss: 0.0994 - accuracy: 0.9526 - val_loss: 0.5881 - val_accuracy: 0.8112\nEpoch 33/50\n2299/2299 - 0s - loss: 0.1008 - accuracy: 0.9587 - val_loss: 0.4560 - val_accuracy: 0.8510\nEpoch 34/50\n2299/2299 - 0s - loss: 0.2341 - accuracy: 0.9339 - val_loss: 0.6631 - val_accuracy: 0.6991\nEpoch 35/50\n2299/2299 - 0s - loss: 0.1753 - accuracy: 0.9247 - val_loss: 0.4364 - val_accuracy: 0.8333\nEpoch 36/50\n2299/2299 - 0s - loss: 0.0997 - accuracy: 0.9648 - val_loss: 0.4383 - val_accuracy: 0.8658\nEpoch 37/50\n2299/2299 - 0s - loss: 0.1055 - accuracy: 0.9587 - val_loss: 0.4337 - val_accuracy: 0.8643\nEpoch 38/50\n2299/2299 - 0s - loss: 0.1308 - accuracy: 0.9491 - 
val_loss: 0.5207 - val_accuracy: 0.8127\nEpoch 39/50\n2299/2299 - 0s - loss: 0.0913 - accuracy: 0.9639 - val_loss: 0.4987 - val_accuracy: 0.8378\nEpoch 40/50\n2299/2299 - 0s - loss: 0.0835 - accuracy: 0.9709 - val_loss: 0.4781 - val_accuracy: 0.8392\nEpoch 41/50\n2299/2299 - 0s - loss: 0.0918 - accuracy: 0.9622 - val_loss: 0.4371 - val_accuracy: 0.8599\nEpoch 42/50\n2299/2299 - 0s - loss: 0.0988 - accuracy: 0.9626 - val_loss: 0.4337 - val_accuracy: 0.8717\nEpoch 43/50\n2299/2299 - 0s - loss: 0.1223 - accuracy: 0.9508 - val_loss: 0.4215 - val_accuracy: 0.8746\nEpoch 44/50\n2299/2299 - 0s - loss: 0.0906 - accuracy: 0.9661 - val_loss: 0.4549 - val_accuracy: 0.8481\nEpoch 45/50\n2299/2299 - 0s - loss: 0.0884 - accuracy: 0.9661 - val_loss: 0.4482 - val_accuracy: 0.8510\nEpoch 46/50\n2299/2299 - 0s - loss: 0.0876 - accuracy: 0.9635 - val_loss: 0.4418 - val_accuracy: 0.8643\nEpoch 47/50\n2299/2299 - 0s - loss: 0.1787 - accuracy: 0.9321 - val_loss: 0.4343 - val_accuracy: 0.8614\nEpoch 48/50\n2299/2299 - 0s - loss: 0.1109 - accuracy: 0.9552 - val_loss: 0.4735 - val_accuracy: 0.8525\nEpoch 49/50\n2299/2299 - 0s - loss: 0.0896 - accuracy: 0.9626 - val_loss: 0.4600 - val_accuracy: 0.8687\nEpoch 50/50\n2299/2299 - 0s - loss: 0.1203 - accuracy: 0.9500 - val_loss: 0.4461 - val_accuracy: 0.8643\n"
],
[
"test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32) \nprint('Test loss: %.4f accuracy: %.4f' % (test_loss, test_accuracy))",
"327/327 [==============================] - 0s 98us/sample - loss: 0.4706 - accuracy: 0.8532\nTest loss: 0.4706 accuracy: 0.8532\n"
],
[
"ITER = 10\ntraining_time_list = []\ntest_accuracy_list = []\nfor iter_count in range(ITER):\n model = define_multilayer_model_architecture_64_32_16()\n start_time = time.time()\n model.fit(X_train, y_train, validation_split = 0.2, epochs=200, batch_size=32, shuffle=True, verbose = 0)\n model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=100, batch_size=32, verbose=0, shuffle=True)\n training_time = time.time() - start_time\n training_time_list.append(training_time)\n test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0)\n test_accuracy_list.append(test_accuracy)\n print('iter # %d : %.3f'%(iter_count+1,test_accuracy))\n\nprint('Accuracies over 10 runs : %s' % test_accuracy_list)\nprint('Avg training time : %.3f s' % np.mean(training_time_list))\nprint('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), np.std(test_accuracy_list)))\nprint('Total parameters : %d' % model.count_params())",
"iter # 1 : 0.841\niter # 2 : 0.859\niter # 3 : 0.859\niter # 4 : 0.844\niter # 5 : 0.841\niter # 6 : 0.847\niter # 7 : 0.832\niter # 8 : 0.850\niter # 9 : 0.850\niter # 10 : 0.853\nAccuracies over 10 runs : [0.8409786, 0.8593272, 0.8593272, 0.8440367, 0.8409786, 0.8470948, 0.8318043, 0.8501529, 0.8501529, 0.853211]\nAvg training time : 32.623 s\nAvg test accuracy : 0.8477 +- 0.01\nTotal parameters : 52882\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5c3bac01bb4e9fec2038f5e1375cca8953f5b5
| 6,695 |
ipynb
|
Jupyter Notebook
|
notebooks/test.ipynb
|
uwon0625/prudential
|
2466495bd07e0774b74d27ec0ca856f22d23b5e6
|
[
"MIT"
] | null | null | null |
notebooks/test.ipynb
|
uwon0625/prudential
|
2466495bd07e0774b74d27ec0ca856f22d23b5e6
|
[
"MIT"
] | null | null | null |
notebooks/test.ipynb
|
uwon0625/prudential
|
2466495bd07e0774b74d27ec0ca856f22d23b5e6
|
[
"MIT"
] | null | null | null | 26.050584 | 134 | 0.519642 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport json\nimport xgboost as xgb\nfrom xgboost.sklearn import XGBClassifier\nfrom sklearn import model_selection, metrics\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.model_selection import GridSearchCV\nimport time\n\n#load data\nconfig = json.load(open('settings.json'))\ntrain = pd.read_csv(config['train_modified'])\ntest = pd.read_csv(config['test_modified'])\nnum_classes = 8\n\nfrom sklearn.model_selection import train_test_split\ntrain_part, test_part = train_test_split(train, test_size=0.2)\n\ntarget='Response'\nIDcol = 'Id'\n\npredictors = [x for x in train_part.columns if x not in [target, IDcol]]\n\nX=train_part[predictors]\ny=train_part[target]",
"_____no_output_____"
],
[
"X1=train_part\nX1['Response'] -=1 #reduce to fit [0,classes)\ny1=test_part\ny1['Response'] -= 1",
"C:\\Users\\dli\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \nC:\\Users\\dli\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n after removing the cwd from sys.path.\n"
],
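The `SettingWithCopyWarning` in the output above comes from mutating a slice produced by `train_test_split`; taking an explicit `.copy()` before the in-place subtraction avoids it. A minimal sketch on a toy DataFrame (the values are made up, not the competition data):

```python
import pandas as pd

# Toy frame standing in for the competition data (values are made up).
df = pd.DataFrame({'Response': [1, 2, 3, 4]})

part = df.iloc[:2].copy()   # explicit copy -> no SettingWithCopyWarning
part['Response'] -= 1       # shift labels into [0, num_classes)
print(part['Response'].tolist())  # -> [0, 1]
```

The shift itself is needed because XGBoost's multiclass objectives expect labels in `[0, num_class)`.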
[
"y1[\"Response\"].value_counts()",
"_____no_output_____"
],
[
"param = {'max_depth' : 4, \n 'eta' : 0.01, \n 'silent' : 1, \n 'min_child_weight' : 10, \n 'subsample' : 0.5,\n 'early_stopping_rounds' : 100,\n 'objective' : 'multi:softprob',\n 'num_class' : 8,\n 'colsample_bytree' : 0.3,\n 'seed' : 0}\nnum_rounds=7000\ndtrain=xgb.DMatrix(X1[predictors],X1[target],missing=float('nan'))",
"_____no_output_____"
],
[
"%time bst = xgb.train(param, dtrain, num_rounds)",
"Wall time: 50min 59s\n"
],
[
"dtest=xgb.DMatrix(y1[predictors],y1[target],missing=float('nan'))\n%time prob = bst.predict(dtest)",
"Wall time: 16 s\n"
],
[
"y=np.argmax(prob, axis=1)\nprint ('Accuracy:%.4g'%metrics.accuracy_score(y1[target],y))",
"Accuracy:0.5893\n"
],
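The `np.argmax(prob, axis=1)` step in the cell above turns the `(num_samples, num_classes)` probability matrix produced by the `multi:softprob` objective into one predicted class per row. A small sketch with made-up probabilities:

```python
import numpy as np

# Toy (num_samples, num_classes) probability matrix, shaped like the
# output of XGBoost's multi:softprob objective (values are made up).
prob = np.array([[0.1, 0.7, 0.2],
                 [0.5, 0.3, 0.2],
                 [0.2, 0.2, 0.6]])

pred = np.argmax(prob, axis=1)  # most probable class per row
print(pred)  # -> [1 0 2]
```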
[
"prob",
"_____no_output_____"
],
[
"y0 = test\ny0[target] -= 1\ndtest=xgb.DMatrix(y0[predictors],y0[target],missing=float('nan'))\n%time prob2 = bst.predict(dtest)\ny=np.argmax(prob2, axis=1)\ny += 1\nresult = pd.DataFrame({\"Id\": y0['Id'].values, \"Response\": y})\nresult.to_csv('../src/submissions/submission_xg2.csv', index=False)",
"Wall time: 28.7 s\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5c54c34963f5decba34b4ac276cf4b192b2078
| 675,362 |
ipynb
|
Jupyter Notebook
|
workshop/prerequisites/intro_jupyter.ipynb
|
djarecka/DGPA_workshop_2022
|
6e7703881bdffc317ac9551f99ccfe656f8b9f42
|
[
"BSD-3-Clause"
] | null | null | null |
workshop/prerequisites/intro_jupyter.ipynb
|
djarecka/DGPA_workshop_2022
|
6e7703881bdffc317ac9551f99ccfe656f8b9f42
|
[
"BSD-3-Clause"
] | 3 |
2022-03-17T11:59:09.000Z
|
2022-03-23T13:54:09.000Z
|
workshop/prerequisites/intro_jupyter.ipynb
|
djarecka/DGPA_workshop_2022
|
6e7703881bdffc317ac9551f99ccfe656f8b9f42
|
[
"BSD-3-Clause"
] | 4 |
2022-03-17T08:48:19.000Z
|
2022-03-23T09:19:41.000Z
| 51.990916 | 205,052 | 0.679237 |
[
[
[
"# Introduction to the jupyter ecosystem & notebooks\n\n\n",
"_____no_output_____"
],
[
"## Before we get started ...\n<br>\n\n- most of what you’ll see within this lecture was prepared by Ross Markello, Michael Notter and Peer Herholz and further adapted for this course by Peer Herholz \n- based on Tal Yarkoni's [\"Introduction to Python\" lecture at Neurohackademy 2019](https://neurohackademy.org/course/introduction-to-python-2/)\n- based on [IPython notebooks from J. R. Johansson](http://github.com/jrjohansson/scientific-python-lectures)\n- based on http://www.stavros.io/tutorials/python/ & http://www.swaroopch.com/notes/python\n- based on https://github.com/oesteban/biss2016 & https://github.com/jvns/pandas-cookbook\n",
"_____no_output_____"
],
[
"## Objectives 📍\n\n* learn basic and efficient usage of the `jupyter ecosystem` & `notebooks`\n * what is `Jupyter` & how to utilize `jupyter notebooks`",
"_____no_output_____"
],
[
"## To Jupyter & beyond\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_ecosystem.png\" alt=\"logo\" title=\"jupyter\" width=\"500\" height=\"200\" /> \n\n- a community of people\n \n- an ecosystem of open tools and standards for interactive computing\n\n- language-agnostic and modular\n \n- empower people to use other open tools\n",
"_____no_output_____"
],
[
"## To Jupyter & beyond\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_example.png\" alt=\"logo\" title=\"jupyter\" width=\"900\" height=\"400\" /> ",
"_____no_output_____"
],
[
"## Before we get started 2...\n \nWe're going to be working in [Jupyter notebooks]() for most of this presentation!\n\nTo load yours, do the following:",
"_____no_output_____"
],
[
"1. Open a terminal/shell & navigate to the folder where you stored the course material (`cd`)",
"_____no_output_____"
],
[
"2. Type `jupyter notebook`",
"_____no_output_____"
],
[
"3. If you're not automatically directed to a webpage copy the URL (`https://....`) printed in the `terminal` and paste it in your `browser`",
"_____no_output_____"
],
[
"## Files Tab\n\nThe `files tab` provides an interactive view of the portion of the `filesystem` which is accessible by the `user`. This is typically rooted by the directory in which the notebook server was started.\n\nThe top of the `files list` displays `clickable` breadcrumbs of the `current directory`. It is possible to navigate the `filesystem` by clicking on these `breadcrumbs` or on the `directories` displayed in the `notebook list`.\n\nA new `notebook` can be created by clicking on the `New dropdown button` at the top of the list, and selecting the desired `language kernel`.\n\n`Notebooks` can also be `uploaded` to the `current directory` by dragging a `notebook` file onto the list or by clicking the `Upload button` at the top of the list.",
"_____no_output_____"
],
[
"### The Notebook\n\nWhen a `notebook` is opened, a new `browser tab` will be created which presents the `notebook user interface (UI)`. This `UI` allows for `interactively editing` and `running` the `notebook document`.\n\nA new `notebook` can be created from the `dashboard` by clicking on the `Files tab`, followed by the `New dropdown button`, and then selecting the `language` of choice for the `notebook`.\n\nAn `interactive tour` of the `notebook UI` can be started by selecting `Help` -> `User Interface Tour` from the `notebook menu bar`.",
"_____no_output_____"
],
[
"### Header\n\nAt the top of the `notebook document` is a `header` which contains the `notebook title`, a `menubar`, and `toolbar`. This `header` remains `fixed` at the top of the screen, even as the `body` of the `notebook` is `scrolled`. The `title` can be edited `in-place` (which renames the `notebook file`), and the `menubar` and `toolbar` contain a variety of actions which control `notebook navigation` and `document structure`.\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_header_4_0.png\" alt=\"logo\" title=\"jupyter\" width=\"600\" height=\"100\" /> ",
"_____no_output_____"
],
[
"### Body\n\nThe `body` of a `notebook` is composed of `cells`. Each `cell` contains either `markdown`, `code input`, `code output`, or `raw text`. `Cells` can be included in any order and edited at-will, allowing for a large amount of flexibility for constructing a narrative.\n\n- `Markdown cells` - These are used to build a `nicely formatted narrative` around the `code` in the document. The majority of this lesson is composed of `markdown cells`.\n- to get a `markdown cell` you can either select the `cell` and use `esc` + `m` or via `Cell -> cell type -> markdown`\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"200\" />",
"_____no_output_____"
],
[
"- `Code cells` - These are used to define the `computational code` in the `document`. They come in `two forms`: \n - the `input cell` where the `user` types the `code` to be `executed`, \n - and the `output cell` which is the `representation` of the `executed code`. Depending on the `code`, this `representation` may be a `simple scalar value`, or something more complex like a `plot` or an `interactive widget`.\n- to get a `code cell` you can either select the `cell` and use `esc` + `y` or via `Cell -> cell type -> code`\n\n \n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"200\" />\n ",
"_____no_output_____"
],
[
"- `Raw cells` - These are used when `text` needs to be included in `raw form`, without `execution` or `transformation`.\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"200\" />\n ",
"_____no_output_____"
],
[
"### Modality\n\nThe `notebook user interface` is `modal`. This means that the `keyboard` behaves `differently` depending upon the `current mode` of the `notebook`. A `notebook` has `two modes`: `edit` and `command`.\n\n`Edit mode` is indicated by a `green cell border` and a `prompt` showing in the `editor area`. When a `cell` is in `edit mode`, you can type into the `cell`, like a `normal text editor`.\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/edit_mode.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"100\" /> ",
"_____no_output_____"
],
[
"`Command mode` is indicated by a `grey cell border`. When in `command mode`, the structure of the `notebook` can be modified as a whole, but the `text` in `individual cells` cannot be changed. Most importantly, the `keyboard` is `mapped` to a set of `shortcuts` for efficiently performing `notebook and cell actions`. For example, pressing `c` when in `command` mode, will `copy` the `current cell`; no modifier is needed.\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/command_mode.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"100\" /> ",
"_____no_output_____"
],
[
"### Mouse navigation\n\nThe `first concept` to understand in `mouse-based navigation` is that `cells` can be `selected by clicking on them`. The `currently selected cell` is indicated with a `grey` or `green border depending` on whether the `notebook` is in `edit or command mode`. Clicking inside a `cell`'s `editor area` will enter `edit mode`. Clicking on the `prompt` or the `output area` of a `cell` will enter `command mode`.\n\nThe `second concept` to understand in `mouse-based navigation` is that `cell actions` usually apply to the `currently selected cell`. For example, to `run` the `code in a cell`, select it and then click the `Run button` in the `toolbar` or the `Cell` -> `Run` menu item. Similarly, to `copy` a `cell`, select it and then click the `copy selected cells button` in the `toolbar` or the `Edit` -> `Copy` menu item. With this simple pattern, it should be possible to perform nearly every `action` with the `mouse`.\n\n`Markdown cells` have one other `state` which can be `modified` with the `mouse`. These `cells` can either be `rendered` or `unrendered`. When they are `rendered`, a nice `formatted representation` of the `cell`'s `contents` will be presented. When they are `unrendered`, the `raw text source` of the `cell` will be presented. To `render` the `selected cell` with the `mouse`, click the `button` in the `toolbar` or the `Cell` -> `Run` menu item. To `unrender` the `selected cell`, `double click` on the `cell`.",
"_____no_output_____"
],
[
"### Keyboard Navigation\n\nThe `modal user interface` of the `IPython Notebook` has been optimized for efficient `keyboard` usage. This is made possible by having `two different sets` of `keyboard shortcuts`: one set that is `active in edit mode` and another in `command mode`.\n\nThe most important `keyboard shortcuts` are `Enter`, which enters `edit mode`, and `Esc`, which enters `command mode`.\n\nIn `edit mode`, most of the `keyboard` is dedicated to `typing` into the `cell's editor`. Thus, in `edit mode` there are relatively `few shortcuts`. In `command mode`, the entire `keyboard` is available for `shortcuts`, so there are many more possibilities.\n\nThe following images give an overview of the available `keyboard shortcuts`. These can viewed in the `notebook` at any time via the `Help` -> `Keyboard Shortcuts` menu item.",
"_____no_output_____"
],
[
"<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_shortcuts_4_0.png\" alt=\"logo\" title=\"jupyter\" width=\"500\" height=\"500\" /> ",
"_____no_output_____"
],
[
"The following shortcuts have been found to be the most useful in day-to-day tasks:\n\n- Basic navigation: `enter`, `shift-enter`, `up/k`, `down/j`\n- Saving the `notebook`: `s`\n- `Cell types`: `y`, `m`, `1-6`, `r`\n- `Cell creation`: `a`, `b`\n- `Cell editing`: `x`, `c`, `v`, `d`, `z`, `ctrl+shift+-`\n- `Kernel operations`: `i`, `.`",
"_____no_output_____"
],
[
"### Markdown Cells\n\n`Text` can be added to `IPython Notebooks` using `Markdown cells`. `Markdown` is a popular `markup language` that is a `superset of HTML`. Its specification can be found here:\n\nhttp://daringfireball.net/projects/markdown/\n\nYou can view the `source` of a `cell` by `double clicking` on it, or while the `cell` is selected in `command mode`, press `Enter` to edit it. Once a `cell` has been `edited`, use `Shift-Enter` to `re-render` it.",
"_____no_output_____"
],
[
"### Markdown basics\n\nYou can make text _italic_ or **bold**.",
"_____no_output_____"
],
[
"You can build nested itemized or enumerated lists:\n\n* One\n - Sublist\n - This\n - Sublist\n - That\n - The other thing\n* Two\n - Sublist\n* Three\n - Sublist\n\nNow another list:\n\n1. Here we go\n 1. Sublist\n 2. Sublist\n2. There we go\n3. Now this",
"_____no_output_____"
],
[
"You can add horizontal rules:\n\n---",
"_____no_output_____"
],
[
"Here is a blockquote:\n\n> Beautiful is better than ugly.\n> Explicit is better than implicit.\n> Simple is better than complex.\n> Complex is better than complicated.\n> Flat is better than nested.\n> Sparse is better than dense.\n> Readability counts.\n> Special cases aren't special enough to break the rules.\n> Although practicality beats purity.\n> Errors should never pass silently.\n> Unless explicitly silenced.\n> In the face of ambiguity, refuse the temptation to guess.\n> There should be one-- and preferably only one --obvious way to do it.\n> Although that way may not be obvious at first unless you're Dutch.\n> Now is better than never.\n> Although never is often better than *right* now.\n> If the implementation is hard to explain, it's a bad idea.\n> If the implementation is easy to explain, it may be a good idea.\n> Namespaces are one honking great idea -- let's do more of those!",
"_____no_output_____"
],
[
"You can add headings using Markdown's syntax:\n\n<pre>\n# Heading 1\n\n# Heading 2\n\n## Heading 2.1\n\n## Heading 2.2\n</pre>",
"_____no_output_____"
],
[
"### Embedded code\n\nYou can embed code meant for illustration instead of execution in Python:\n\n def f(x):\n \"\"\"a docstring\"\"\"\n return x**2\n\nor other languages:\n\n if (i=0; i<n; i++) {\n printf(\"hello %d\\n\", i);\n x += 4;\n }",
"_____no_output_____"
],
[
"### Github flavored markdown (GFM)\n\nThe `Notebook webapp` supports `Github flavored markdown` meaning that you can use `triple backticks` for `code blocks` \n<pre>\n```python\nprint \"Hello World\"\n```\n\n```javascript\nconsole.log(\"Hello World\")\n```\n</pre>\n\nGives \n```python\nprint \"Hello World\"\n```\n\n```javascript\nconsole.log(\"Hello World\")\n```\n\nAnd a table like this : \n\n<pre>\n| This | is |\n|------|------|\n| a | table| \n</pre>\n\nA nice HTML Table\n\n| This | is |\n|------|------|\n| a | table| ",
"_____no_output_____"
],
[
"### General HTML\n\nBecause `Markdown` is a `superset of HTML` you can even add things like `HTML tables`:\n\n<table>\n<tr>\n<th>Header 1</th>\n<th>Header 2</th>\n</tr>\n<tr>\n<td>row 1, cell 1</td>\n<td>row 1, cell 2</td>\n</tr>\n<tr>\n<td>row 2, cell 1</td>\n<td>row 2, cell 2</td>\n</tr>\n</table>",
"_____no_output_____"
],
[
"### Local files\n\nIf you have `local files` in your `Notebook directory`, you can refer to these `files` in `Markdown cells` directly:\n\n [subdirectory/]<filename>\n\n\n\nThese do not `embed` the data into the `notebook file`, and require that the `files` exist when you are viewing the `notebook`.",
"_____no_output_____"
],
[
"### Security of local files\n\nNote that this means that the `IPython notebook server` also acts as a `generic file server` for `files` inside the same `tree` as your `notebooks`. Access is not granted outside the `notebook` folder so you have strict control over what `files` are `visible`, but for this reason **it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory)**.\n\nWhen you run the `notebook` in a `password-protected` manner, `local file` access is `restricted` to `authenticated users` unless `read-only views` are active.",
"_____no_output_____"
],
[
"### Markdown attachments\n\nSince `Jupyter notebook version 5.0`, in addition to `referencing external files` you can `attach a file` to a `markdown cell`. To do so `drag` the `file` from e.g. the `browser` or local `storage` in a `markdown cell` while `editing` it.",
"_____no_output_____"
],
[
"`Files` are stored in `cell metadata` and will be `automatically scrubbed` at `save-time` if not `referenced`. You can recognize `attached images` from other `files` by their `url` that starts with `attachment`.\n\nKeep in mind that `attached files` will `increase the size` of your `notebook`.\n\nYou can manually edit the `attachment` by using the `View` > `Cell Toolbar` > `Attachment` menu, but you should not need to.",
"_____no_output_____"
],
[
"### Code cells\n\nWhen executing code in `IPython`, all valid `Python syntax` works as-is, but `IPython` provides a number of `features` designed to make the `interactive experience` more `fluid` and `efficient`. First, we need to explain how to run `cells`. Try to run the `cell` below!",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nprint(\"Hi! This is a cell. Click on it and press the ▶ button above to run it\")",
"Hi! This is a cell. Click on it and press the ▶ button above to run it\n"
]
],
[
[
"You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that.",
"_____no_output_____"
],
[
"### Tab Completion\n\nOne of the most useful things about `Jupyter Notebook` is its tab completion.\n\nTry this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using `JupyterLab`, the additional help box option is not available.",
"_____no_output_____"
]
],
[
[
"pd.read_csv(",
"_____no_output_____"
]
],
[
[
"After the first time, you should see this:\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-once.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"200\" /> ",
"_____no_output_____"
],
[
"After the second time:\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-twice.png\" alt=\"logo\" title=\"jupyter\" width=\"500\" height=\"200\" /> ",
"_____no_output_____"
],
[
"After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function:\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-4-times.png\" alt=\"logo\" title=\"jupyter\" width=\"700\" height=\"300\" /> \n\nThis is amazingly useful. You can think of this as \"the more confused I am, the more times I should press `Shift+Tab`\".",
"_____no_output_____"
],
[
"Okay, let's try `tab completion` for `function names`!",
"_____no_output_____"
]
],
[
[
"pd.r",
"_____no_output_____"
]
],
[
[
"You should see this:\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_function-completion.png\" alt=\"logo\" title=\"jupyter\" width=\"300\" height=\"200\" /> ",
"_____no_output_____"
],
[
"## Get Help\n\nThere's an additional way to reach the help box shown above after the fourth `Shift+Tab` press: you can also use `obj?` or `obj??` to get help or more detailed help for an object.",
"_____no_output_____"
]
],
[
[
"pd.read_csv?",
"\u001b[0;31mSignature:\u001b[0m\n\u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mfilepath_or_buffer\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;34m'FilePathOrBuffer'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0msep\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m<\u001b[0m\u001b[0mno_default\u001b[0m\u001b[0;34m>\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdelimiter\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mheader\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'infer'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mnames\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m<\u001b[0m\u001b[0mno_default\u001b[0m\u001b[0;34m>\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mindex_col\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0musecols\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0msqueeze\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mprefix\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m<\u001b[0m\u001b[0mno_default\u001b[0m\u001b[0;34m>\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mmangle_dupe_cols\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;34m'DtypeArg | None'\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m 
\u001b[0mengine\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mconverters\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mtrue_values\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mfalse_values\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mskipinitialspace\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mskiprows\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mskipfooter\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mnrows\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mna_values\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mkeep_default_na\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mna_filter\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mskip_blank_lines\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mparse_dates\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m 
\u001b[0minfer_datetime_format\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mkeep_date_col\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdate_parser\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdayfirst\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mcache_dates\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0miterator\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mchunksize\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mcompression\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'infer'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mthousands\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdecimal\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;34m'str'\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'.'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mlineterminator\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mquotechar\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'\"'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mquoting\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m 
\u001b[0mdoublequote\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mescapechar\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mcomment\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mencoding\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mencoding_errors\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;34m'str | None'\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'strict'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdialect\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0merror_bad_lines\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mwarn_bad_lines\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mon_bad_lines\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mdelim_whitespace\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mlow_memory\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mmemory_map\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mfloat_precision\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m \u001b[0mstorage_options\u001b[0m\u001b[0;34m:\u001b[0m 
\u001b[0;34m'StorageOptions'\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n\u001b[0;34m\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;31mDocstring:\u001b[0m\nRead a comma-separated values (csv) file into DataFrame.\n\nAlso supports optionally iterating or breaking of the file\ninto chunks.\n\nAdditional help can be found in the online docs for\n`IO Tools <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.\n\nParameters\n----------\nfilepath_or_buffer : str, path object or file-like object\n Any valid string path is acceptable. The string could be a URL. Valid\n URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is\n expected. A local file could be: file://localhost/path/to/table.csv.\n\n If you want to pass in a path object, pandas accepts any ``os.PathLike``.\n\n By file-like object, we refer to objects with a ``read()`` method, such as\n a file handle (e.g. via builtin ``open`` function) or ``StringIO``.\nsep : str, default ','\n Delimiter to use. If sep is None, the C engine cannot automatically detect\n the separator, but the Python parsing engine can, meaning the latter will\n be used and automatically detect the separator by Python's builtin sniffer\n tool, ``csv.Sniffer``. In addition, separators longer than 1 character and\n different from ``'\\s+'`` will be interpreted as regular expressions and\n will also force the use of the Python parsing engine. Note that regex\n delimiters are prone to ignoring quoted data. Regex example: ``'\\r\\t'``.\ndelimiter : str, default ``None``\n Alias for sep.\nheader : int, list of int, default 'infer'\n Row number(s) to use as the column names, and the start of the\n data. 
Default behavior is to infer the column names: if no names\n are passed the behavior is identical to ``header=0`` and column\n names are inferred from the first line of the file, if column\n names are passed explicitly then the behavior is identical to\n ``header=None``. Explicitly pass ``header=0`` to be able to\n replace existing names. The header can be a list of integers that\n specify row locations for a multi-index on the columns\n e.g. [0,1,3]. Intervening rows that are not specified will be\n skipped (e.g. 2 in this example is skipped). Note that this\n parameter ignores commented lines and empty lines if\n ``skip_blank_lines=True``, so ``header=0`` denotes the first line of\n data rather than the first line of the file.\nnames : array-like, optional\n List of column names to use. If the file contains a header row,\n then you should explicitly pass ``header=0`` to override the column names.\n Duplicates in this list are not allowed.\nindex_col : int, str, sequence of int / str, or False, default ``None``\n Column(s) to use as the row labels of the ``DataFrame``, either given as\n string name or column index. If a sequence of int / str is given, a\n MultiIndex is used.\n\n Note: ``index_col=False`` can be used to force pandas to *not* use the first\n column as the index, e.g. when you have a malformed file with delimiters at\n the end of each line.\nusecols : list-like or callable, optional\n Return a subset of the columns. If list-like, all elements must either\n be positional (i.e. integer indices into the document columns) or strings\n that correspond to column names provided either by the user in `names` or\n inferred from the document header row(s). 
For example, a valid list-like\n `usecols` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``.\n Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.\n To instantiate a DataFrame from ``data`` with element order preserved use\n ``pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']]`` for columns\n in ``['foo', 'bar']`` order or\n ``pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]``\n for ``['bar', 'foo']`` order.\n\n If callable, the callable function will be evaluated against the column\n names, returning names where the callable function evaluates to True. An\n example of a valid callable argument would be ``lambda x: x.upper() in\n ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster\n parsing time and lower memory usage.\nsqueeze : bool, default False\n If the parsed data only contains one column then return a Series.\nprefix : str, optional\n Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...\nmangle_dupe_cols : bool, default True\n Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than\n 'X'...'X'. Passing in False will cause data to be overwritten if there\n are duplicate names in the columns.\ndtype : Type name or dict of column -> type, optional\n Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32,\n 'c': 'Int64'}\n Use `str` or `object` together with suitable `na_values` settings\n to preserve and not interpret dtype.\n If converters are specified, they will be applied INSTEAD\n of dtype conversion.\nengine : {'c', 'python'}, optional\n Parser engine to use. The C engine is faster while the python engine is\n currently more feature-complete.\nconverters : dict, optional\n Dict of functions for converting values in certain columns. 
Keys can either\n be integers or column labels.\ntrue_values : list, optional\n Values to consider as True.\nfalse_values : list, optional\n Values to consider as False.\nskipinitialspace : bool, default False\n Skip spaces after delimiter.\nskiprows : list-like, int or callable, optional\n Line numbers to skip (0-indexed) or number of lines to skip (int)\n at the start of the file.\n\n If callable, the callable function will be evaluated against the row\n indices, returning True if the row should be skipped and False otherwise.\n An example of a valid callable argument would be ``lambda x: x in [0, 2]``.\nskipfooter : int, default 0\n Number of lines at bottom of file to skip (Unsupported with engine='c').\nnrows : int, optional\n Number of rows of file to read. Useful for reading pieces of large files.\nna_values : scalar, str, list-like, or dict, optional\n Additional strings to recognize as NA/NaN. If dict passed, specific\n per-column NA values. By default the following values are interpreted as\n NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',\n '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a',\n 'nan', 'null'.\nkeep_default_na : bool, default True\n Whether or not to include the default NaN values when parsing the data.\n Depending on whether `na_values` is passed in, the behavior is as follows:\n\n * If `keep_default_na` is True, and `na_values` are specified, `na_values`\n is appended to the default NaN values used for parsing.\n * If `keep_default_na` is True, and `na_values` are not specified, only\n the default NaN values are used for parsing.\n * If `keep_default_na` is False, and `na_values` are specified, only\n the NaN values specified `na_values` are used for parsing.\n * If `keep_default_na` is False, and `na_values` are not specified, no\n strings will be parsed as NaN.\n\n Note that if `na_filter` is passed in as False, the `keep_default_na` and\n `na_values` parameters will be ignored.\nna_filter : bool, 
default True\n Detect missing value markers (empty strings and the value of na_values). In\n data without any NAs, passing na_filter=False can improve the performance\n of reading a large file.\nverbose : bool, default False\n Indicate number of NA values placed in non-numeric columns.\nskip_blank_lines : bool, default True\n If True, skip over blank lines rather than interpreting as NaN values.\nparse_dates : bool or list of int or names or list of lists or dict, default False\n The behavior is as follows:\n\n * boolean. If True -> try parsing the index.\n * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3\n each as a separate date column.\n * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as\n a single date column.\n * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call\n result 'foo'\n\n If a column or index cannot be represented as an array of datetimes,\n say because of an unparsable value or a mixture of timezones, the column\n or index will be returned unaltered as an object data type. For\n non-standard datetime parsing, use ``pd.to_datetime`` after\n ``pd.read_csv``. To parse an index or column with a mixture of timezones,\n specify ``date_parser`` to be a partially-applied\n :func:`pandas.to_datetime` with ``utc=True``. See\n :ref:`io.csv.mixed_timezones` for more.\n\n Note: A fast-path exists for iso8601-formatted dates.\ninfer_datetime_format : bool, default False\n If True and `parse_dates` is enabled, pandas will attempt to infer the\n format of the datetime strings in the columns, and if it can be inferred,\n switch to a faster method of parsing them. In some cases this can increase\n the parsing speed by 5-10x.\nkeep_date_col : bool, default False\n If True and `parse_dates` specifies combining multiple columns then\n keep the original columns.\ndate_parser : function, optional\n Function to use for converting a sequence of string columns to an array of\n datetime instances. 
The default uses ``dateutil.parser.parser`` to do the\n conversion. Pandas will try to call `date_parser` in three different ways,\n advancing to the next if an exception occurs: 1) Pass one or more arrays\n (as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the\n string values from the columns defined by `parse_dates` into a single array\n and pass that; and 3) call `date_parser` once for each row using one or\n more strings (corresponding to the columns defined by `parse_dates`) as\n arguments.\ndayfirst : bool, default False\n DD/MM format dates, international and European format.\ncache_dates : bool, default True\n If True, use a cache of unique, converted dates to apply the datetime\n conversion. May produce significant speed-up when parsing duplicate\n date strings, especially ones with timezone offsets.\n\n .. versionadded:: 0.25.0\niterator : bool, default False\n Return TextFileReader object for iteration or getting chunks with\n ``get_chunk()``.\n\n .. versionchanged:: 1.2\n\n ``TextFileReader`` is a context manager.\nchunksize : int, optional\n Return TextFileReader object for iteration.\n See the `IO Tools docs\n <https://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_\n for more information on ``iterator`` and ``chunksize``.\n\n .. versionchanged:: 1.2\n\n ``TextFileReader`` is a context manager.\ncompression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'\n For on-the-fly decompression of on-disk data. If 'infer' and\n `filepath_or_buffer` is path-like, then detect compression from the\n following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no\n decompression). If using 'zip', the ZIP file must contain only one data\n file to be read in. Set to None for no decompression.\nthousands : str, optional\n Thousands separator.\ndecimal : str, default '.'\n Character to recognize as decimal point (e.g. 
use ',' for European data).\nlineterminator : str (length 1), optional\n Character to break file into lines. Only valid with C parser.\nquotechar : str (length 1), optional\n The character used to denote the start and end of a quoted item. Quoted\n items can include the delimiter and it will be ignored.\nquoting : int or csv.QUOTE_* instance, default 0\n Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of\n QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).\ndoublequote : bool, default ``True``\n When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate\n whether or not to interpret two consecutive quotechar elements INSIDE a\n field as a single ``quotechar`` element.\nescapechar : str (length 1), optional\n One-character string used to escape other characters.\ncomment : str, optional\n Indicates remainder of line should not be parsed. If found at the beginning\n of a line, the line will be ignored altogether. This parameter must be a\n single character. Like empty lines (as long as ``skip_blank_lines=True``),\n fully commented lines are ignored by the parameter `header` but not by\n `skiprows`. For example, if ``comment='#'``, parsing\n ``#empty\\na,b,c\\n1,2,3`` with ``header=0`` will result in 'a,b,c' being\n treated as the header.\nencoding : str, optional\n Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python\n standard encodings\n <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ .\n\n .. versionchanged:: 1.2\n\n When ``encoding`` is ``None``, ``errors=\"replace\"`` is passed to\n ``open()``. Otherwise, ``errors=\"strict\"`` is passed to ``open()``.\n This behavior was previously only the case for ``engine=\"python\"``.\n\n .. versionchanged:: 1.3.0\n\n ``encoding_errors`` is a new argument. ``encoding`` has no longer an\n influence on how encoding errors are handled.\n\nencoding_errors : str, optional, default \"strict\"\n How encoding errors are treated. 
`List of possible values\n <https://docs.python.org/3/library/codecs.html#error-handlers>`_ .\n\n .. versionadded:: 1.3.0\n\ndialect : str or csv.Dialect, optional\n If provided, this parameter will override values (default or not) for the\n following parameters: `delimiter`, `doublequote`, `escapechar`,\n `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to\n override values, a ParserWarning will be issued. See csv.Dialect\n documentation for more details.\nerror_bad_lines : bool, default ``None``\n Lines with too many fields (e.g. a csv line with too many commas) will by\n default cause an exception to be raised, and no DataFrame will be returned.\n If False, then these \"bad lines\" will be dropped from the DataFrame that is\n returned.\n\n .. deprecated:: 1.3.0\n The ``on_bad_lines`` parameter should be used instead to specify behavior upon\n encountering a bad line instead.\nwarn_bad_lines : bool, default ``None``\n If error_bad_lines is False, and warn_bad_lines is True, a warning for each\n \"bad line\" will be output.\n\n .. deprecated:: 1.3.0\n The ``on_bad_lines`` parameter should be used instead to specify behavior upon\n encountering a bad line instead.\non_bad_lines : {'error', 'warn', 'skip'}, default 'error'\n Specifies what to do upon encountering a bad line (a line with too many fields).\n Allowed values are :\n\n - 'error', raise an Exception when a bad line is encountered.\n - 'warn', raise a warning when a bad line is encountered and skip that line.\n - 'skip', skip bad lines without raising or warning when they are encountered.\n\n .. versionadded:: 1.3.0\n\ndelim_whitespace : bool, default False\n Specifies whether or not whitespace (e.g. ``' '`` or ``' '``) will be\n used as the sep. Equivalent to setting ``sep='\\s+'``. 
If this option\n is set to True, nothing should be passed in for the ``delimiter``\n parameter.\nlow_memory : bool, default True\n Internally process the file in chunks, resulting in lower memory use\n while parsing, but possibly mixed type inference. To ensure no mixed\n types either set False, or specify the type with the `dtype` parameter.\n Note that the entire file is read into a single DataFrame regardless,\n use the `chunksize` or `iterator` parameter to return the data in chunks.\n (Only valid with C parser).\nmemory_map : bool, default False\n If a filepath is provided for `filepath_or_buffer`, map the file object\n directly onto memory and access the data directly from there. Using this\n option can improve performance because there is no longer any I/O overhead.\nfloat_precision : str, optional\n Specifies which converter the C engine should use for floating-point\n values. The options are ``None`` or 'high' for the ordinary converter,\n 'legacy' for the original lower precision pandas converter, and\n 'round_trip' for the round-trip converter.\n\n .. versionchanged:: 1.2\n\nstorage_options : dict, optional\n Extra options that make sense for a particular storage connection, e.g.\n host, port, username, password, etc. For HTTP(S) URLs the key-value pairs\n are forwarded to ``urllib`` as header options. For other URLs (e.g.\n starting with \"s3://\", and \"gcs://\") the key-value pairs are forwarded to\n ``fsspec``. Please see ``fsspec`` and ``urllib`` for more details.\n\n .. 
versionadded:: 1.2\n\nReturns\n-------\nDataFrame or TextParser\n A comma-separated values (csv) file is returned as two-dimensional\n data structure with labeled axes.\n\nSee Also\n--------\nDataFrame.to_csv : Write DataFrame to a comma-separated values (csv) file.\nread_csv : Read a comma-separated values (csv) file into DataFrame.\nread_fwf : Read a table of fixed-width formatted lines into DataFrame.\n\nExamples\n--------\n>>> pd.read_csv('data.csv') # doctest: +SKIP\n\u001b[0;31mFile:\u001b[0m ~/anaconda3/envs/pfp_2021/lib/python3.9/site-packages/pandas/io/parsers/readers.py\n\u001b[0;31mType:\u001b[0m function\n"
]
],
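[
[
"As a quick, hedged example: `??` additionally shows the function's source code where it is available. Try running the cell below yourself — the pager output is not reproduced here.",
"_____no_output_____"
]
],
[
[
"pd.read_csv??",
"_____no_output_____"
]
],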
[
[
"## Writing code\n\nWriting code in a `notebook` is pretty normal.",
"_____no_output_____"
]
],
[
[
"def print_10_nums():\n for i in range(10):\n print(i)",
"_____no_output_____"
],
[
"print_10_nums()",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n"
]
],
[
[
"If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo and `Ctrl+Y` to redo.\n\nFor a full list of all keyboard shortcuts, click on the small `keyboard icon` in the `notebook header` or click on `Help` > `Keyboard Shortcuts`.",
"_____no_output_____"
],
[
"### The interactive workflow: input, output, history\n\n`Notebooks` provide various options for `inputs` and `outputs`, while also allowing you to access the `history` of `run commands`.",
"_____no_output_____"
]
],
[
[
"2+10",
"_____no_output_____"
],
[
"_+10\n",
"_____no_output_____"
]
],
[
[
"You can suppress the `storage` and `rendering` of `output` if you append `;` to the last line of a `cell` (this comes in handy when plotting with `matplotlib`, for example):",
"_____no_output_____"
]
],
[
[
"10+20;",
"_____no_output_____"
],
[
"_\n",
"_____no_output_____"
]
],
[
[
"The `output` is stored in `_N` and `Out[N]` variables:",
"_____no_output_____"
]
],
[
[
"_8 == Out[8]",
"_____no_output_____"
]
],
[
[
"Previous inputs are available, too:",
"_____no_output_____"
]
],
[
[
"In[9]",
"_____no_output_____"
],
[
"_i",
"_____no_output_____"
],
[
"%history -n 1-5",
" 1:\nimport pandas as pd\n\nprint(\"Hi! This is a cell. Click on it and press the ▶ button above to run it\")\n 2: pd.read_csv(\n 3:\nimport pandas as pd\n\nprint(\"Hi! This is a cell. Click on it and press the ▶ button above to run it\")\n 4: pd.read_csv?\n 5:\ndef print_10_nums():\n for i in range(10):\n print(i)\n"
]
],
[
[
"### Accessing the underlying operating system\n\nThrough `notebooks` you can also access the underlying `operating system` and `communicate` with it as you would in a `terminal`, e.g. via `bash`:",
"_____no_output_____"
]
],
[
[
"!pwd",
"/Users/peerherholz/google_drive/GitHub/DGPA_workshop_2022/workshop/prerequisites\n"
],
[
"files = !ls\nprint(\"My current directory's files:\")\nprint(files)",
"My current directory's files:\n['Predicting_age_with_machine_learning.ipynb', '__pycache__', 'demograhics_new.txt', 'demographics.csv', 'demographics.txt', 'demographics_new.csv', 'diffusion_imaging.ipynb', 'functional_connectivity.ipynb', 'image_manipulation_nibabel.ipynb', 'image_manipulation_nilearn.ipynb', 'intro_jupyter.ipynb', 'intro_python.ipynb', 'intro_to_shell.ipynb', 'machine_learning_keras.ipynb', 'machine_learning_nilearn.ipynb', 'machine_learning_preparation.ipynb', 'mymodule.py', 'python_numpy.ipynb', 'python_scikit.ipynb', 'python_scipy.ipynb', 'python_visualization_for_data.ipynb', 'statistical_analyses_MRI.ipynb']\n"
],
[
"!echo $files",
"[Predicting_age_with_machine_learning.ipynb, __pycache__, demograhics_new.txt, demographics.csv, demographics.txt, demographics_new.csv, diffusion_imaging.ipynb, functional_connectivity.ipynb, image_manipulation_nibabel.ipynb, image_manipulation_nilearn.ipynb, intro_jupyter.ipynb, intro_python.ipynb, intro_to_shell.ipynb, machine_learning_keras.ipynb, machine_learning_nilearn.ipynb, machine_learning_preparation.ipynb, mymodule.py, python_numpy.ipynb, python_scikit.ipynb, python_scipy.ipynb, python_visualization_for_data.ipynb, statistical_analyses_MRI.ipynb]\n"
],
[
"!echo {files[0].upper()}",
"PREDICTING_AGE_WITH_MACHINE_LEARNING.IPYNB\n"
]
],
[
[
"### Magic functions\n\n`IPython` has all kinds of `magic functions`. `Magic functions` are prefixed by `%` or `%%`, and typically take their `arguments` without `parentheses`, `quotes` or even `commas` for convenience. `Line magics` take a single `%` and `cell magics` are prefixed with two `%%`.",
"_____no_output_____"
],
[
"Some useful magic functions are:\n\nMagic Name | Effect\n---------- | -------------------------------------------------------------\n%env | Get, set, or list environment variables\n%pdb | Control the automatic calling of the pdb interactive debugger\n%pylab | Load numpy and matplotlib to work interactively\n%%debug | Activate the interactive debugger in the cell\n%%html | Render the cell as a block of HTML\n%%latex | Render the cell as a block of LaTeX\n%%sh | Run the cell body as a sh shell script\n%%time | Time execution of a Python statement or expression\n\nYou can run `%magic` to get a list of `magic functions` or `%quickref` for a reference sheet.",
"_____no_output_____"
]
],
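As a concrete example, IPython's `%timeit` line magic is built on the standard-library `timeit` module, so the same measurement can be sketched in plain Python. This is a rough equivalent, not what `%timeit` does verbatim (it additionally auto-selects the loop count and repeats the measurement):

```python
# What %timeit measures, done with the standard-library timeit module:
# run the statement many times and report the average time per loop.
import timeit

n_loops = 10_000
total = timeit.timeit("sum(range(1000))", number=n_loops)
per_loop_us = total / n_loops * 1e6
print(f"{per_loop_us:.2f} µs per loop")
```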
[
[
"%magic",
"\nIPython's 'magic' functions\n===========================\n\nThe magic function system provides a series of functions which allow you to\ncontrol the behavior of IPython itself, plus a lot of system-type\nfeatures. There are two kinds of magics, line-oriented and cell-oriented.\n\nLine magics are prefixed with the % character and work much like OS\ncommand-line calls: they get as an argument the rest of the line, where\narguments are passed without parentheses or quotes. For example, this will\ntime the given statement::\n\n %timeit range(1000)\n\nCell magics are prefixed with a double %%, and they are functions that get as\nan argument not only the rest of the line, but also the lines below it in a\nseparate argument. These magics are called with two arguments: the rest of the\ncall line and the body of the cell, consisting of the lines below the first.\nFor example::\n\n %%timeit x = numpy.random.randn((100, 100))\n numpy.linalg.svd(x)\n\nwill time the execution of the numpy svd routine, running the assignment of x\nas part of the setup phase, which is not timed.\n\nIn a line-oriented client (the terminal or Qt console IPython), starting a new\ninput with %% will automatically enter cell mode, and IPython will continue\nreading input until a blank line is given. In the notebook, simply type the\nwhole cell as one entity, but keep in mind that the %% escape can only be at\nthe very start of the cell.\n\nNOTE: If you have 'automagic' enabled (via the command line option or with the\n%automagic function), you don't need to type in the % explicitly for line\nmagics; cell magics always require an explicit '%%' escape. By default,\nIPython ships with automagic on, so you should only rarely need the % escape.\n\nExample: typing '%cd mydir' (without the quotes) changes your working directory\nto 'mydir', if it exists.\n\nFor a list of the available magic functions, use %lsmagic. For a description\nof any of them, type %magic_name?, e.g. 
'%cd?'.\n\nCurrently the magic system has the following functions:\n%alias:\n Define an alias for a system command.\n \n '%alias alias_name cmd' defines 'alias_name' as an alias for 'cmd'\n \n Then, typing 'alias_name params' will execute the system command 'cmd\n params' (from your underlying operating system).\n \n Aliases have lower precedence than magic functions and Python normal\n variables, so if 'foo' is both a Python variable and an alias, the\n alias can not be executed until 'del foo' removes the Python variable.\n \n You can use the %l specifier in an alias definition to represent the\n whole line when the alias is called. For example::\n \n In [2]: alias bracket echo \"Input in brackets: <%l>\"\n In [3]: bracket hello world\n Input in brackets: <hello world>\n \n You can also define aliases with parameters using %s specifiers (one\n per parameter)::\n \n In [1]: alias parts echo first %s second %s\n In [2]: %parts A B\n first A second B\n In [3]: %parts A\n Incorrect number of arguments: 2 expected.\n parts is an alias to: 'echo first %s second %s'\n \n Note that %l and %s are mutually exclusive. You can only use one or\n the other in your aliases.\n \n Aliases expand Python variables just like system calls using ! or !!\n do: all expressions prefixed with '$' get expanded. For details of\n the semantic rules, see PEP-215:\n http://www.python.org/peps/pep-0215.html. This is the library used by\n IPython for variable expansion. If you want to access a true shell\n variable, an extra $ is necessary to prevent its expansion by\n IPython::\n \n In [6]: alias show echo\n In [7]: PATH='A Python string'\n In [8]: show $PATH\n A Python string\n In [9]: show $$PATH\n /usr/local/lf9560/bin:/usr/local/intel/compiler70/ia32/bin:...\n \n You can use the alias facility to access all of $PATH. 
See the %rehashx\n function, which automatically creates aliases for the contents of your\n $PATH.\n \n If called with no parameters, %alias prints the current alias table\n for your system. For posix systems, the default aliases are 'cat',\n 'cp', 'mv', 'rm', 'rmdir', and 'mkdir', and other platform-specific\n aliases are added. For windows-based systems, the default aliases are\n 'copy', 'ddir', 'echo', 'ls', 'ldir', 'mkdir', 'ren', and 'rmdir'.\n \n You can see the definition of alias by adding a question mark in the\n end::\n \n In [1]: cat?\n Repr: <alias cat for 'cat'>\n%alias_magic:\n ::\n \n %alias_magic [-l] [-c] [-p PARAMS] name target\n \n Create an alias for an existing line or cell magic.\n \n Examples\n --------\n ::\n \n In [1]: %alias_magic t timeit\n Created `%t` as an alias for `%timeit`.\n Created `%%t` as an alias for `%%timeit`.\n \n In [2]: %t -n1 pass\n 1 loops, best of 3: 954 ns per loop\n \n In [3]: %%t -n1\n ...: pass\n ...:\n 1 loops, best of 3: 954 ns per loop\n \n In [4]: %alias_magic --cell whereami pwd\n UsageError: Cell magic function `%%pwd` not found.\n In [5]: %alias_magic --line whereami pwd\n Created `%whereami` as an alias for `%pwd`.\n \n In [6]: %whereami\n Out[6]: u'/home/testuser'\n \n In [7]: %alias_magic h history \"-p -l 30\" --line\n Created `%h` as an alias for `%history -l 30`.\n \n positional arguments:\n name Name of the magic to be created.\n target Name of the existing line or cell magic.\n \n optional arguments:\n -l, --line Create a line magic alias.\n -c, --cell Create a cell magic alias.\n -p PARAMS, --params PARAMS\n Parameters passed to the magic function.\n%autoawait:\n \n Allow to change the status of the autoawait option.\n \n This allow you to set a specific asynchronous code runner.\n \n If no value is passed, print the currently used asynchronous integration\n and whether it is activated.\n \n It can take a number of value evaluated in the following order:\n \n - False/false/off deactivate autoawait 
integration\n - True/true/on activate autoawait integration using configured default\n loop\n - asyncio/curio/trio activate autoawait integration and use integration\n with said library.\n \n - `sync` turn on the pseudo-sync integration (mostly used for\n `IPython.embed()` which does not run IPython with a real eventloop and\n deactivate running asynchronous code. Turning on Asynchronous code with\n the pseudo sync loop is undefined behavior and may lead IPython to crash.\n \n If the passed parameter does not match any of the above and is a python\n identifier, get said object from user namespace and set it as the\n runner, and activate autoawait. \n \n If the object is a fully qualified object name, attempt to import it and\n set it as the runner, and activate autoawait.\n \n \n The exact behavior of autoawait is experimental and subject to change\n across version of IPython and Python.\n%autocall:\n Make functions callable without having to type parentheses.\n \n Usage:\n \n %autocall [mode]\n \n The mode can be one of: 0->Off, 1->Smart, 2->Full. If not given, the\n value is toggled on and off (remembering the previous state).\n \n In more detail, these values mean:\n \n 0 -> fully disabled\n \n 1 -> active, but do not apply if there are no arguments on the line.\n \n In this mode, you get::\n \n In [1]: callable\n Out[1]: <built-in function callable>\n \n In [2]: callable 'hello'\n ------> callable('hello')\n Out[2]: False\n \n 2 -> Active always. 
Even if no arguments are present, the callable\n object is called::\n \n In [2]: float\n ------> float()\n Out[2]: 0.0\n \n Note that even with autocall off, you can still use '/' at the start of\n a line to treat the first argument on the command line as a function\n and add parentheses to it::\n \n In [8]: /str 43\n ------> str(43)\n Out[8]: '43'\n \n # all-random (note for auto-testing)\n%automagic:\n Make magic functions callable without having to type the initial %.\n \n Without arguments toggles on/off (when off, you must call it as\n %automagic, of course). With arguments it sets the value, and you can\n use any of (case insensitive):\n \n - on, 1, True: to activate\n \n - off, 0, False: to deactivate.\n \n Note that magic functions have lowest priority, so if there's a\n variable whose name collides with that of a magic fn, automagic won't\n work for that function (you get the variable instead). However, if you\n delete the variable (del var), the previously shadowed magic function\n becomes visible to automagic again.\n%autosave:\n Set the autosave interval in the notebook (in seconds).\n \n The default value is 120, or two minutes.\n ``%autosave 0`` will disable autosave.\n \n This magic only has an effect when called from the notebook interface.\n It has no effect when called in a startup file.\n%bookmark:\n Manage IPython's bookmark system.\n \n %bookmark <name> - set bookmark to current dir\n %bookmark <name> <dir> - set bookmark to <dir>\n %bookmark -l - list all bookmarks\n %bookmark -d <name> - remove bookmark\n %bookmark -r - remove all bookmarks\n \n You can later on access a bookmarked folder with::\n \n %cd -b <name>\n \n or simply '%cd <name>' if there is no directory called <name> AND\n there is such a bookmark defined.\n \n Your bookmarks persist through IPython sessions, but they are\n associated with each profile.\n%cat:\n Alias for `!cat`\n%cd:\n Change the current working directory.\n \n This command automatically maintains an internal 
list of directories\n you visit during your IPython session, in the variable _dh. The\n command %dhist shows this history nicely formatted. You can also\n do 'cd -<tab>' to see directory history conveniently.\n \n Usage:\n \n cd 'dir': changes to directory 'dir'.\n \n cd -: changes to the last visited directory.\n \n cd -<n>: changes to the n-th directory in the directory history.\n \n cd --foo: change to directory that matches 'foo' in history\n \n cd -b <bookmark_name>: jump to a bookmark set by %bookmark\n (note: cd <bookmark_name> is enough if there is no\n directory <bookmark_name>, but a bookmark with the name exists.)\n 'cd -b <tab>' allows you to tab-complete bookmark names.\n \n Options:\n \n -q: quiet. Do not print the working directory after the cd command is\n executed. By default IPython's cd command does print this directory,\n since the default prompts do not display path information.\n \n Note that !cd doesn't work for this purpose because the shell where\n !command runs is immediately discarded after executing 'command'.\n \n Examples\n --------\n ::\n \n In [10]: cd parent/child\n /home/tsuser/parent/child\n%clear:\n Clear the terminal.\n%colors:\n Switch color scheme for prompts, info system and exception handlers.\n \n Currently implemented schemes: NoColor, Linux, LightBG.\n \n Color scheme names are not case-sensitive.\n \n Examples\n --------\n To get a plain black and white terminal::\n \n %colors nocolor\n%conda:\n Run the conda package manager within the current kernel.\n \n Usage:\n %conda install [pkgs]\n%config:\n configure IPython\n \n %config Class[.trait=value]\n \n This magic exposes most of the IPython config system. 
Any\n Configurable class should be able to be configured with the simple\n line::\n \n %config Class.trait=value\n \n Where `value` will be resolved in the user's namespace, if it is an\n expression or variable name.\n \n Examples\n --------\n \n To see what classes are available for config, pass no arguments::\n \n In [1]: %config\n Available objects for config:\n TerminalInteractiveShell\n HistoryManager\n PrefilterManager\n AliasManager\n IPCompleter\n DisplayFormatter\n \n To view what is configurable on a given class, just pass the class\n name::\n \n In [2]: %config IPCompleter\n IPCompleter options\n -----------------\n IPCompleter.omit__names=<Enum>\n Current: 2\n Choices: (0, 1, 2)\n Instruct the completer to omit private method names\n Specifically, when completing on ``object.<tab>``.\n When 2 [default]: all names that start with '_' will be excluded.\n When 1: all 'magic' names (``__foo__``) will be excluded.\n When 0: nothing will be excluded.\n IPCompleter.merge_completions=<CBool>\n Current: True\n Whether to merge completion results into a single list\n If False, only the completion results from the first non-empty\n completer will be returned.\n IPCompleter.limit_to__all__=<CBool>\n Current: False\n Instruct the completer to use __all__ for the completion\n Specifically, when completing on ``object.<tab>``.\n When True: only those names in obj.__all__ will be included.\n When False [default]: the __all__ attribute is ignored\n IPCompleter.greedy=<CBool>\n Current: False\n Activate greedy completion\n This will enable completion on elements of lists, results of\n function calls, etc., but can be unsafe because the code is\n actually evaluated on TAB.\n \n but the real use is in setting values::\n \n In [3]: %config IPCompleter.greedy = True\n \n and these values are read from the user_ns if they are variables::\n \n In [4]: feeling_greedy=False\n \n In [5]: %config IPCompleter.greedy = feeling_greedy\n%connect_info:\n Print information for 
connecting other clients to this kernel\n \n It will print the contents of this session's connection file, as well as\n shortcuts for local clients.\n \n In the simplest case, when called from the most recently launched kernel,\n secondary clients can be connected, simply with:\n \n $> jupyter <app> --existing\n%cp:\n Alias for `!cp`\n%debug:\n ::\n \n %debug [--breakpoint FILE:LINE] [statement ...]\n \n Activate the interactive debugger.\n \n This magic command support two ways of activating debugger.\n One is to activate debugger before executing code. This way, you\n can set a break point, to step through the code from the point.\n You can use this mode by giving statements to execute and optionally\n a breakpoint.\n \n The other one is to activate debugger in post-mortem mode. You can\n activate this mode simply running %debug without any argument.\n If an exception has just occurred, this lets you inspect its stack\n frames interactively. Note that this will always work only on the last\n traceback that occurred, so you must call this quickly after an\n exception that you wish to inspect has fired, because if another one\n occurs, it clobbers the previous one.\n \n If you want IPython to automatically do this on every exception, see\n the %pdb magic for more details.\n \n .. versionchanged:: 7.3\n When running code, user variables are no longer expanded,\n the magic line is always left unmodified.\n \n positional arguments:\n statement Code to run in debugger. You can omit this in cell\n magic mode.\n \n optional arguments:\n --breakpoint <FILE:LINE>, -b <FILE:LINE>\n Set break point at LINE in FILE.\n%dhist:\n Print your history of visited directories.\n \n %dhist -> print full history\n %dhist n -> print last n entries only\n %dhist n1 n2 -> print entries between n1 and n2 (n2 not included)\n \n This history is automatically maintained by the %cd command, and\n always available as the global list variable _dh. 
You can use %cd -<n>\n to go to directory number <n>.\n \n Note that most of time, you should view directory history by entering\n cd -<TAB>.\n%dirs:\n Return the current directory stack.\n%doctest_mode:\n Toggle doctest mode on and off.\n \n This mode is intended to make IPython behave as much as possible like a\n plain Python shell, from the perspective of how its prompts, exceptions\n and output look. This makes it easy to copy and paste parts of a\n session into doctests. It does so by:\n \n - Changing the prompts to the classic ``>>>`` ones.\n - Changing the exception reporting mode to 'Plain'.\n - Disabling pretty-printing of output.\n \n Note that IPython also supports the pasting of code snippets that have\n leading '>>>' and '...' prompts in them. This means that you can paste\n doctests from files or docstrings (even if they have leading\n whitespace), and the code will execute correctly. You can then use\n '%history -t' to see the translated history; this will give you the\n input after removal of all the leading prompts and whitespace, which\n can be pasted back into an editor.\n \n With these features, you can switch into this mode easily whenever you\n need to do testing and changes to doctests, without having to leave\n your existing IPython session.\n%ed:\n Alias for `%edit`.\n%edit:\n Bring up an editor and execute the resulting code.\n \n Usage:\n %edit [options] [args]\n \n %edit runs an external text editor. You will need to set the command for\n this editor via the ``TerminalInteractiveShell.editor`` option in your\n configuration file before it will work.\n \n This command allows you to conveniently edit multi-line code right in\n your IPython session.\n \n If called without arguments, %edit opens up an empty editor with a\n temporary file and will execute the contents of this file when you\n close it (don't forget to save it!).\n \n Options:\n \n -n <number>\n Open the editor at a specified line number. 
By default, the IPython\n editor hook uses the unix syntax 'editor +N filename', but you can\n configure this by providing your own modified hook if your favorite\n editor supports line-number specifications with a different syntax.\n \n -p\n Call the editor with the same data as the previous time it was used,\n regardless of how long ago (in your current session) it was.\n \n -r\n Use 'raw' input. This option only applies to input taken from the\n user's history. By default, the 'processed' history is used, so that\n magics are loaded in their transformed version to valid Python. If\n this option is given, the raw input as typed as the command line is\n used instead. When you exit the editor, it will be executed by\n IPython's own processor.\n \n Arguments:\n \n If arguments are given, the following possibilities exist:\n \n - The arguments are numbers or pairs of colon-separated numbers (like\n 1 4:8 9). These are interpreted as lines of previous input to be\n loaded into the editor. The syntax is the same of the %macro command.\n \n - If the argument doesn't start with a number, it is evaluated as a\n variable and its contents loaded into the editor. You can thus edit\n any string which contains python code (including the result of\n previous edits).\n \n - If the argument is the name of an object (other than a string),\n IPython will try to locate the file where it was defined and open the\n editor at the point where it is defined. 
You can use ``%edit function``\n to load an editor exactly at the point where 'function' is defined,\n edit it and have the file be executed automatically.\n \n If the object is a macro (see %macro for details), this opens up your\n specified editor with a temporary file containing the macro's data.\n Upon exit, the macro is reloaded with the contents of the file.\n \n Note: opening at an exact line is only supported under Unix, and some\n editors (like kedit and gedit up to Gnome 2.8) do not understand the\n '+NUMBER' parameter necessary for this feature. Good editors like\n (X)Emacs, vi, jed, pico and joe all do.\n \n - If the argument is not found as a variable, IPython will look for a\n file with that name (adding .py if necessary) and load it into the\n editor. It will execute its contents with execfile() when you exit,\n loading any code in the file into your interactive namespace.\n \n Unlike in the terminal, this is designed to use a GUI editor, and we do\n not know when it has closed. So the file you edit will not be\n automatically executed or printed.\n \n Note that %edit is also available through the alias %ed.\n%env:\n Get, set, or list environment variables.\n \n Usage:\n \n %env: lists all environment variables/values\n %env var: get value for var\n %env var val: set value for var\n %env var=val: set value for var\n %env var=$val: set value for var, using python expansion if possible\n%gui:\n Enable or disable IPython GUI event loop integration.\n \n %gui [GUINAME]\n \n This magic replaces IPython's threaded shells that were activated\n using the (pylab/wthread/etc.) command line flags. GUI toolkits\n can now be enabled at runtime and keyboard\n interrupts should work without any problems. 
The following toolkits\n are supported: wxPython, PyQt4, PyGTK, Tk and Cocoa (OSX)::\n \n %gui wx # enable wxPython event loop integration\n %gui qt4|qt # enable PyQt4 event loop integration\n %gui qt5 # enable PyQt5 event loop integration\n %gui gtk # enable PyGTK event loop integration\n %gui gtk3 # enable Gtk3 event loop integration\n %gui gtk4 # enable Gtk4 event loop integration\n %gui tk # enable Tk event loop integration\n %gui osx # enable Cocoa event loop integration\n # (requires %matplotlib 1.1)\n %gui # disable all event loop integration\n \n WARNING: after any of these has been called you can simply create\n an application object, but DO NOT start the event loop yourself, as\n we have already handled that.\n%hist:\n Alias for `%history`.\n%history:\n ::\n \n %history [-n] [-o] [-p] [-t] [-f FILENAME] [-g [PATTERN ...]]\n [-l [LIMIT]] [-u]\n [range ...]\n \n Print input history (_i<n> variables), with most recent last.\n \n By default, input history is printed without line numbers so it can be\n directly pasted into an editor. Use -n to show them.\n \n By default, all input history from the current session is displayed.\n Ranges of history can be indicated using the syntax:\n \n ``4``\n Line 4, current session\n ``4-6``\n Lines 4-6, current session\n ``243/1-5``\n Lines 1-5, session 243\n ``~2/7``\n Line 7, session 2 before current\n ``~8/1-~6/5``\n From the first line of 8 sessions ago, to the fifth line of 6\n sessions ago.\n \n Multiple ranges can be entered, separated by spaces\n \n The same syntax is used by %macro, %save, %edit, %rerun\n \n Examples\n --------\n ::\n \n In [6]: %history -n 4-6\n 4:a = 12\n 5:print a**2\n 6:%history -n 4-6\n \n positional arguments:\n range\n \n optional arguments:\n -n print line numbers for each input. 
This feature is only\n available if numbered prompts are in use.\n -o also print outputs for each input.\n -p print classic '>>>' python prompts before each input.\n This is useful for making documentation, and in\n conjunction with -o, for producing doctest-ready output.\n -t print the 'translated' history, as IPython understands\n it. IPython filters your input and converts it all into\n valid Python source before executing it (things like\n magics or aliases are turned into function calls, for\n example). With this option, you'll see the native\n history instead of the user-entered version: '%cd /'\n will be seen as 'get_ipython().run_line_magic(\"cd\",\n \"/\")' instead of '%cd /'.\n -f FILENAME FILENAME: instead of printing the output to the screen,\n redirect it to the given file. The file is always\n overwritten, though *when it can*, IPython asks for\n confirmation first. In particular, running the command\n 'history -f FILENAME' from the IPython Notebook\n interface will replace FILENAME even if it already\n exists *without* confirmation.\n -g <[PATTERN ...]> treat the arg as a glob pattern to search for in (full)\n history. This includes the saved history (almost all\n commands ever written). The pattern may contain '?' to\n match one unknown character and '*' to match any number\n of unknown characters. Use '%hist -g' to show full saved\n history (may be very long).\n -l <[LIMIT]> get the last n lines from all sessions. 
Specify n as a\n single arg, or the default is the last 10 lines.\n -u when searching history using `-g`, show only unique\n history.\n%killbgscripts:\n Kill all BG processes started by %%script and its family.\n%ldir:\n Alias for `!ls -F -G -l %l | grep /$`\n%less:\n Show a file through the pager.\n \n Files ending in .py are syntax-highlighted.\n%lf:\n Alias for `!ls -F -l -G %l | grep ^-`\n%lk:\n Alias for `!ls -F -l -G %l | grep ^l`\n%ll:\n Alias for `!ls -F -l -G`\n%load:\n Load code into the current frontend.\n \n Usage:\n %load [options] source\n \n where source can be a filename, URL, input history range, macro, or\n element in the user namespace\n \n Options:\n \n -r <lines>: Specify lines or ranges of lines to load from the source.\n Ranges could be specified as x-y (x..y) or in python-style x:y \n (x..(y-1)). Both limits x and y can be left blank (meaning the \n beginning and end of the file, respectively).\n \n -s <symbols>: Specify function or classes to load from python source. \n \n -y : Don't ask confirmation for loading source above 200 000 characters.\n \n -n : Include the user's namespace when searching for source code.\n \n This magic command can either take a local filename, a URL, an history\n range (see %history) or a macro as argument, it will prompt for\n confirmation before loading source with more than 200 000 characters, unless\n -y flag is passed or if the frontend does not support raw_input::\n \n %load myscript.py\n %load 7-27\n %load myMacro\n %load http://www.example.com/myscript.py\n %load -r 5-10 myscript.py\n %load -r 10-20,30,40: foo.py\n %load -s MyClass,wonder_function myscript.py\n %load -n MyClass\n %load -n my_module.wonder_function\n%load_ext:\n Load an IPython extension by its module name.\n%loadpy:\n Alias of `%load`\n \n `%loadpy` has gained some flexibility and dropped the requirement of a `.py`\n extension. So it has been renamed simply into %load. 
You can look at\n `%load`'s docstring for more info.\n%logoff:\n Temporarily stop logging.\n \n You must have previously started logging.\n%logon:\n Restart logging.\n \n This function is for restarting logging which you've temporarily\n stopped with %logoff. For starting logging for the first time, you\n must use the %logstart function, which allows you to specify an\n optional log filename.\n%logstart:\n Start logging anywhere in a session.\n \n %logstart [-o|-r|-t|-q] [log_name [log_mode]]\n \n If no name is given, it defaults to a file named 'ipython_log.py' in your\n current directory, in 'rotate' mode (see below).\n \n '%logstart name' saves to file 'name' in 'backup' mode. It saves your\n history up to that point and then continues logging.\n \n %logstart takes a second optional parameter: logging mode. This can be one\n of (note that the modes are given unquoted):\n \n append\n Keep logging at the end of any existing file.\n \n backup\n Rename any existing file to name~ and start name.\n \n global\n Append to a single logfile in your home directory.\n \n over\n Overwrite any existing log.\n \n rotate\n Create rotating logs: name.1~, name.2~, etc.\n \n Options:\n \n -o\n log also IPython's output. In this mode, all commands which\n generate an Out[NN] prompt are recorded to the logfile, right after\n their corresponding input line. The output lines are always\n prepended with a '#[Out]# ' marker, so that the log remains valid\n Python code.\n \n Since this marker is always the same, filtering only the output from\n a log is very easy, using for example a simple awk call::\n \n awk -F'#\\[Out\\]# ' '{if($2) {print $2}}' ipython_log.py\n \n -r\n log 'raw' input. Normally, IPython's logs contain the processed\n input, so that user lines are logged in their final form, converted\n into valid Python. For example, %Exit is logged as\n _ip.magic(\"Exit\"). 
If the -r flag is given, all input is logged\n exactly as typed, with no transformations applied.\n \n -t\n put timestamps before each input line logged (these are put in\n comments).\n \n -q \n suppress output of logstate message when logging is invoked\n%logstate:\n Print the status of the logging system.\n%logstop:\n Fully stop logging and close log file.\n \n In order to start logging again, a new %logstart call needs to be made,\n possibly (though not necessarily) with a new filename, mode and other\n options.\n%ls:\n Alias for `!ls -F -G`\n%lsmagic:\n List currently available magic functions.\n%lx:\n Alias for `!ls -F -l -G %l | grep ^-..x`\n%macro:\n Define a macro for future re-execution. It accepts ranges of history,\n filenames or string objects.\n \n Usage:\n %macro [options] name n1-n2 n3-n4 ... n5 .. n6 ...\n \n Options:\n \n -r: use 'raw' input. By default, the 'processed' history is used,\n so that magics are loaded in their transformed version to valid\n Python. If this option is given, the raw input as typed at the\n command line is used instead.\n \n -q: quiet macro definition. By default, a tag line is printed \n to indicate the macro has been created, and then the contents of \n the macro are printed. If this option is given, then no printout\n is produced once the macro is created.\n \n This will define a global variable called `name` which is a string\n made of joining the slices and lines you specify (n1,n2,... numbers\n above) from your input history into a single string. This variable\n acts like an automatic function which re-executes those lines as if\n you had typed them. 
[Output: IPython `%magic` help listing, truncated at both ends — built-in magic documentation for `%macro`, `%magic`, `%matplotlib`, `%notebook`, `%pdb`, `%prun`, `%psearch`, `%pylab`, `%recall`, `%reset`, `%run`, `%save`, `%sc`, `%store`, `%sx`, `%time`, and `%timeit`, among others.]
Generally, the bias\n does not matter as long as results from timeit.py are not mixed with\n those from %timeit.\n%unalias:\n Remove an alias\n%unload_ext:\n Unload an IPython extension by its module name.\n \n Not all extensions can be unloaded, only those which define an\n ``unload_ipython_extension`` function.\n%who:\n Print all interactive variables, with some minimal formatting.\n \n If any arguments are given, only variables whose type matches one of\n these are printed. For example::\n \n %who function str\n \n will only list functions and strings, excluding all other types of\n variables. To find the proper type names, simply use type(var) at a\n command line to see how python prints type names. For example:\n \n ::\n \n In [1]: type('hello')\n Out[1]: <type 'str'>\n \n indicates that the type name for strings is 'str'.\n \n ``%who`` always excludes executed names loaded through your configuration\n file and things which are internal to IPython.\n \n This is deliberate, as typically you may load many modules and the\n purpose of %who is to show you only what you've manually defined.\n \n Examples\n --------\n \n Define two variables and list them with who::\n \n In [1]: alpha = 123\n \n In [2]: beta = 'test'\n \n In [3]: %who\n alpha beta\n \n In [4]: %who int\n alpha\n \n In [5]: %who str\n beta\n%who_ls:\n Return a sorted list of all interactive variables.\n \n If arguments are given, only variables of types matching these\n arguments are returned.\n \n Examples\n --------\n \n Define two variables and list them with who_ls::\n \n In [1]: alpha = 123\n \n In [2]: beta = 'test'\n \n In [3]: %who_ls\n Out[3]: ['alpha', 'beta']\n \n In [4]: %who_ls int\n Out[4]: ['alpha']\n \n In [5]: %who_ls str\n Out[5]: ['beta']\n%whos:\n Like %who, but gives some extra information about each variable.\n \n The same type filtering of %who can be applied here.\n \n For all variables, the type is printed. 
Additionally it prints:\n \n - For {},[],(): their length.\n \n - For numpy arrays, a summary with shape, number of\n elements, typecode and size in memory.\n \n - Everything else: a string representation, snipping their middle if\n too long.\n \n Examples\n --------\n \n Define two variables and list them with whos::\n \n In [1]: alpha = 123\n \n In [2]: beta = 'test'\n \n In [3]: %whos\n Variable Type Data/Info\n --------------------------------\n alpha int 123\n beta str test\n%xdel:\n Delete a variable, trying to clear it from anywhere that\n IPython's machinery has references to it. By default, this uses\n the identity of the named object in the user namespace to remove\n references held under other names. The object is also removed\n from the output history.\n \n Options\n -n : Delete the specified name from all namespaces, without\n checking their identity.\n%xmode:\n Switch modes for the exception handlers.\n \n Valid modes: Plain, Context, Verbose, and Minimal.\n \n If called without arguments, acts as a toggle.\n \n When in verbose mode the value --show (and --hide) \n will respectively show (or hide) frames with ``__tracebackhide__ =\n True`` value set.\n%%!:\n Shell execute - run shell command and capture output (!! is short-hand).\n \n %sx command\n \n IPython will run the given command using commands.getoutput(), and\n return the result formatted as a list (split on '\\n'). Since the\n output is _returned_, it will be stored in ipython's regular output\n cache Out[N] and in the '_N' automatic variables.\n \n Notes:\n \n 1) If an input line begins with '!!', then %sx is automatically\n invoked. That is, while::\n \n !ls\n \n causes ipython to simply issue system('ls'), typing::\n \n !!ls\n \n is a shorthand equivalent to::\n \n %sx ls\n \n 2) %sx differs from %sc in that %sx automatically splits into a list,\n like '%sc -l'. 
The reason for this is to make it as easy as possible\n to process line-oriented shell output via further python commands.\n %sc is meant to provide much finer control, but requires more\n typing.\n \n 3) Just like %sc -l, this is a list with special attributes:\n ::\n \n .l (or .list) : value as list.\n .n (or .nlstr): value as newline-separated string.\n .s (or .spstr): value as whitespace-separated string.\n \n This is very useful when trying to use such lists as arguments to\n system commands.\n%%HTML:\n Alias for `%%html`.\n%%SVG:\n Alias for `%%svg`.\n%%bash:\n %%bash script magic\n \n Run cells with bash in a subprocess.\n \n This is a shortcut for `%%script bash`\n%%capture:\n ::\n \n %capture [--no-stderr] [--no-stdout] [--no-display] [output]\n \n run the cell, capturing stdout, stderr, and IPython's rich display() calls.\n \n positional arguments:\n output The name of the variable in which to store output. This is a\n utils.io.CapturedIO object with stdout/err attributes for the\n text of the captured output. CapturedOutput also has a show()\n method for displaying the output, and __call__ as well, so you\n can use that to quickly display the output. If unspecified,\n captured output is discarded.\n \n optional arguments:\n --no-stderr Don't capture stderr.\n --no-stdout Don't capture stdout.\n --no-display Don't capture IPython's rich display.\n%%debug:\n ::\n \n %debug [--breakpoint FILE:LINE] [statement ...]\n \n Activate the interactive debugger.\n \n This magic command support two ways of activating debugger.\n One is to activate debugger before executing code. This way, you\n can set a break point, to step through the code from the point.\n You can use this mode by giving statements to execute and optionally\n a breakpoint.\n \n The other one is to activate debugger in post-mortem mode. 
You can\n activate this mode simply running %debug without any argument.\n If an exception has just occurred, this lets you inspect its stack\n frames interactively. Note that this will always work only on the last\n traceback that occurred, so you must call this quickly after an\n exception that you wish to inspect has fired, because if another one\n occurs, it clobbers the previous one.\n \n If you want IPython to automatically do this on every exception, see\n the %pdb magic for more details.\n \n .. versionchanged:: 7.3\n When running code, user variables are no longer expanded,\n the magic line is always left unmodified.\n \n positional arguments:\n statement Code to run in debugger. You can omit this in cell\n magic mode.\n \n optional arguments:\n --breakpoint <FILE:LINE>, -b <FILE:LINE>\n Set break point at LINE in FILE.\n%%file:\n Alias for `%%writefile`.\n%%html:\n ::\n \n %html [--isolated]\n \n Render the cell as a block of HTML\n \n optional arguments:\n --isolated Annotate the cell as 'isolated'. Isolated cells are rendered\n inside their own <iframe> tag\n%%javascript:\n Run the cell block of Javascript code\n%%js:\n Run the cell block of Javascript code\n \n Alias of `%%javascript`\n%%latex:\n Render the cell as a block of latex\n \n The subset of latex which is support depends on the implementation in\n the client. In the Jupyter Notebook, this magic only renders the subset\n of latex defined by MathJax\n [here](https://docs.mathjax.org/en/v2.5-latest/tex.html).\n%%markdown:\n Render the cell as Markdown text block\n%%perl:\n %%perl script magic\n \n Run cells with perl in a subprocess.\n \n This is a shortcut for `%%script perl`\n%%prun:\n Run a statement through the python code profiler.\n \n Usage, in line mode:\n %prun [options] statement\n \n Usage, in cell mode:\n %%prun [options] [statement]\n code...\n code...\n \n In cell mode, the additional code lines are appended to the (possibly\n empty) statement in the first line. 
Cell mode allows you to easily\n profile multiline blocks without having to put them in a separate\n function.\n \n The given statement (which doesn't require quote marks) is run via the\n python profiler in a manner similar to the profile.run() function.\n Namespaces are internally managed to work correctly; profile.run\n cannot be used in IPython because it makes certain assumptions about\n namespaces which do not hold under IPython.\n \n Options:\n \n -l <limit>\n you can place restrictions on what or how much of the\n profile gets printed. The limit value can be:\n \n * A string: only information for function names containing this string\n is printed.\n \n * An integer: only these many lines are printed.\n \n * A float (between 0 and 1): this fraction of the report is printed\n (for example, use a limit of 0.4 to see the topmost 40% only).\n \n You can combine several limits with repeated use of the option. For\n example, ``-l __init__ -l 5`` will print only the topmost 5 lines of\n information about class constructors.\n \n -r\n return the pstats.Stats object generated by the profiling. This\n object has all the information about the profile in it, and you can\n later use it for further analysis or in other functions.\n \n -s <key>\n sort profile by given key. You can provide more than one key\n by using the option several times: '-s key1 -s key2 -s key3...'. The\n default sorting key is 'time'.\n \n The following is copied verbatim from the profile documentation\n referenced below:\n \n When more than one key is provided, additional keys are used as\n secondary criteria when the there is equality in all keys selected\n before them.\n \n Abbreviations can be used for any key names, as long as the\n abbreviation is unambiguous. 
The following are the keys currently\n defined:\n \n ============ =====================\n Valid Arg Meaning\n ============ =====================\n \"calls\" call count\n \"cumulative\" cumulative time\n \"file\" file name\n \"module\" file name\n \"pcalls\" primitive call count\n \"line\" line number\n \"name\" function name\n \"nfl\" name/file/line\n \"stdname\" standard name\n \"time\" internal time\n ============ =====================\n \n Note that all sorts on statistics are in descending order (placing\n most time consuming items first), where as name, file, and line number\n searches are in ascending order (i.e., alphabetical). The subtle\n distinction between \"nfl\" and \"stdname\" is that the standard name is a\n sort of the name as printed, which means that the embedded line\n numbers get compared in an odd way. For example, lines 3, 20, and 40\n would (if the file names were the same) appear in the string order\n \"20\" \"3\" and \"40\". In contrast, \"nfl\" does a numeric compare of the\n line numbers. In fact, sort_stats(\"nfl\") is the same as\n sort_stats(\"name\", \"file\", \"line\").\n \n -T <filename>\n save profile results as shown on screen to a text\n file. The profile is still shown on screen.\n \n -D <filename>\n save (via dump_stats) profile statistics to given\n filename. This data is in a format understood by the pstats module, and\n is generated by a call to the dump_stats() method of profile\n objects. The profile is still shown on screen.\n \n -q\n suppress output to the pager. Best used with -T and/or -D above.\n \n If you want to run complete programs under the profiler's control, use\n ``%run -p [prof_opts] filename.py [args to program]`` where prof_opts\n contains profiler specific options as described here.\n \n You can read the complete documentation for the profile module with::\n \n In [1]: import profile; profile.help()\n \n .. 
versionchanged:: 7.3\n User variables are no longer expanded,\n the magic line is always left unmodified.\n%%pypy:\n %%pypy script magic\n \n Run cells with pypy in a subprocess.\n \n This is a shortcut for `%%script pypy`\n%%python:\n %%python script magic\n \n Run cells with python in a subprocess.\n \n This is a shortcut for `%%script python`\n%%python2:\n %%python2 script magic\n \n Run cells with python2 in a subprocess.\n \n This is a shortcut for `%%script python2`\n%%python3:\n %%python3 script magic\n \n Run cells with python3 in a subprocess.\n \n This is a shortcut for `%%script python3`\n%%ruby:\n %%ruby script magic\n \n Run cells with ruby in a subprocess.\n \n This is a shortcut for `%%script ruby`\n%%script:\n ::\n \n %shebang [--no-raise-error] [--proc PROC] [--bg] [--err ERR] [--out OUT]\n \n Run a cell via a shell command\n \n The `%%script` line is like the #! line of script,\n specifying a program (bash, perl, ruby, etc.) with which to run.\n \n The rest of the cell is run by that program.\n \n Examples\n --------\n ::\n \n In [1]: %%script bash\n ...: for i in 1 2 3; do\n ...: echo $i\n ...: done\n 1\n 2\n 3\n \n optional arguments:\n --no-raise-error Whether you should raise an error message in addition to a\n stream on stderr if you get a nonzero exit code.\n --proc PROC The variable in which to store Popen instance. This is\n used only when --bg option is given.\n --bg Whether to run the script in the background. If given, the\n only way to see the output of the command is with\n --out/err.\n --err ERR The variable in which to store stderr from the script. If\n the script is backgrounded, this will be the stderr\n *pipe*, instead of the stderr text itself and will not be\n autoclosed.\n --out OUT The variable in which to store stdout from the script. 
If\n the script is backgrounded, this will be the stdout\n *pipe*, instead of the stderr text itself and will not be\n auto closed.\n%%sh:\n %%sh script magic\n \n Run cells with sh in a subprocess.\n \n This is a shortcut for `%%script sh`\n%%svg:\n Render the cell as an SVG literal\n%%sx:\n Shell execute - run shell command and capture output (!! is short-hand).\n \n %sx command\n \n IPython will run the given command using commands.getoutput(), and\n return the result formatted as a list (split on '\\n'). Since the\n output is _returned_, it will be stored in ipython's regular output\n cache Out[N] and in the '_N' automatic variables.\n \n Notes:\n \n 1) If an input line begins with '!!', then %sx is automatically\n invoked. That is, while::\n \n !ls\n \n causes ipython to simply issue system('ls'), typing::\n \n !!ls\n \n is a shorthand equivalent to::\n \n %sx ls\n \n 2) %sx differs from %sc in that %sx automatically splits into a list,\n like '%sc -l'. The reason for this is to make it as easy as possible\n to process line-oriented shell output via further python commands.\n %sc is meant to provide much finer control, but requires more\n typing.\n \n 3) Just like %sc -l, this is a list with special attributes:\n ::\n \n .l (or .list) : value as list.\n .n (or .nlstr): value as newline-separated string.\n .s (or .spstr): value as whitespace-separated string.\n \n This is very useful when trying to use such lists as arguments to\n system commands.\n%%system:\n Shell execute - run shell command and capture output (!! is short-hand).\n \n %sx command\n \n IPython will run the given command using commands.getoutput(), and\n return the result formatted as a list (split on '\\n'). Since the\n output is _returned_, it will be stored in ipython's regular output\n cache Out[N] and in the '_N' automatic variables.\n \n Notes:\n \n 1) If an input line begins with '!!', then %sx is automatically\n invoked. 
That is, while::\n \n !ls\n \n causes ipython to simply issue system('ls'), typing::\n \n !!ls\n \n is a shorthand equivalent to::\n \n %sx ls\n \n 2) %sx differs from %sc in that %sx automatically splits into a list,\n like '%sc -l'. The reason for this is to make it as easy as possible\n to process line-oriented shell output via further python commands.\n %sc is meant to provide much finer control, but requires more\n typing.\n \n 3) Just like %sc -l, this is a list with special attributes:\n ::\n \n .l (or .list) : value as list.\n .n (or .nlstr): value as newline-separated string.\n .s (or .spstr): value as whitespace-separated string.\n \n This is very useful when trying to use such lists as arguments to\n system commands.\n%%time:\n Time execution of a Python statement or expression.\n \n The CPU and wall clock times are printed, and the value of the\n expression (if any) is returned. Note that under Win32, system time\n is always reported as 0, since it can not be measured.\n \n This function can be used both as a line and cell magic:\n \n - In line mode you can time a single-line statement (though multiple\n ones can be chained with using semicolons).\n \n - In cell mode, you can time the cell body (a directly\n following statement raises an error).\n \n This function provides very basic timing functionality. Use the timeit\n magic for more control over the measurement.\n \n .. versionchanged:: 7.3\n User variables are no longer expanded,\n the magic line is always left unmodified.\n \n Examples\n --------\n ::\n \n In [1]: %time 2**128\n CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s\n Wall time: 0.00\n Out[1]: 340282366920938463463374607431768211456L\n \n In [2]: n = 1000000\n \n In [3]: %time sum(range(n))\n CPU times: user 1.20 s, sys: 0.05 s, total: 1.25 s\n Wall time: 1.37\n Out[3]: 499999500000L\n \n In [4]: %time print 'hello world'\n hello world\n CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s\n Wall time: 0.00\n \n \n .. 
note::\n The time needed by Python to compile the given expression will be\n reported if it is more than 0.1s.\n \n In the example below, the actual exponentiation is done by Python\n at compilation time, so while the expression can take a noticeable\n amount of time to compute, that time is purely due to the\n compilation::\n \n In [5]: %time 3**9999;\n CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s\n Wall time: 0.00 s\n \n In [6]: %time 3**999999;\n CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s\n Wall time: 0.00 s\n Compiler : 0.78 s\n%%timeit:\n Time execution of a Python statement or expression\n \n Usage, in line mode:\n %timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] statement\n or in cell mode:\n %%timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] setup_code\n code\n code...\n \n Time execution of a Python statement or expression using the timeit\n module. This function can be used both as a line and cell magic:\n \n - In line mode you can time a single-line statement (though multiple\n ones can be chained with using semicolons).\n \n - In cell mode, the statement in the first line is used as setup code\n (executed but not timed) and the body of the cell is timed. The cell\n body has access to any variables created in the setup code.\n \n Options:\n -n<N>: execute the given statement <N> times in a loop. If <N> is not\n provided, <N> is determined so as to get sufficient accuracy.\n \n -r<R>: number of repeats <R>, each consisting of <N> loops, and take the\n best result.\n Default: 7\n \n -t: use time.time to measure the time, which is the default on Unix.\n This function measures wall time.\n \n -c: use time.clock to measure the time, which is the default on\n Windows and measures wall time. 
On Unix, resource.getrusage is used\n instead and returns the CPU user time.\n \n -p<P>: use a precision of <P> digits to display the timing result.\n Default: 3\n \n -q: Quiet, do not print result.\n \n -o: return a TimeitResult that can be stored in a variable to inspect\n the result in more details.\n \n .. versionchanged:: 7.3\n User variables are no longer expanded,\n the magic line is always left unmodified.\n \n Examples\n --------\n ::\n \n In [1]: %timeit pass\n 8.26 ns ± 0.12 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)\n \n In [2]: u = None\n \n In [3]: %timeit u is None\n 29.9 ns ± 0.643 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)\n \n In [4]: %timeit -r 4 u == None\n \n In [5]: import time\n \n In [6]: %timeit -n1 time.sleep(2)\n \n \n The times reported by %timeit will be slightly higher than those\n reported by the timeit.py script when variables are accessed. This is\n due to the fact that %timeit executes the statement in the namespace\n of the shell, compared with timeit.py, which uses a single setup\n statement to import function or create variables. Generally, the bias\n does not matter as long as results from timeit.py are not mixed with\n those from %timeit.\n%%writefile:\n ::\n \n %writefile [-a] filename\n \n Write the contents of the cell to a file.\n \n The file will be overwritten unless the -a (--append) flag is specified.\n \n positional arguments:\n filename file to write\n \n optional arguments:\n -a, --append Append contents of the cell to an existing file. 
The file will\n be created if it does not exist.\n\nSummary of magic functions (from %lsmagic):\nAvailable line magics:\n%alias %alias_magic %autoawait %autocall %automagic %autosave %bookmark %cat %cd %clear %colors %conda %config %connect_info %cp %debug %dhist %dirs %doctest_mode %ed %edit %env %gui %hist %history %killbgscripts %ldir %less %lf %lk %ll %load %load_ext %loadpy %logoff %logon %logstart %logstate %logstop %ls %lsmagic %lx %macro %magic %man %matplotlib %mkdir %more %mv %notebook %page %pastebin %pdb %pdef %pdoc %pfile %pinfo %pinfo2 %pip %popd %pprint %precision %prun %psearch %psource %pushd %pwd %pycat %pylab %qtconsole %quickref %recall %rehashx %reload_ext %rep %rerun %reset %reset_selective %rm %rmdir %run %save %sc %set_env %store %sx %system %tb %time %timeit %unalias %unload_ext %who %who_ls %whos %xdel %xmode\n\nAvailable cell magics:\n%%! %%HTML %%SVG %%bash %%capture %%debug %%file %%html %%javascript %%js %%latex %%markdown %%perl %%prun %%pypy %%python %%python2 %%python3 %%ruby %%script %%sh %%svg %%sx %%system %%time %%timeit %%writefile\n\nAutomagic is ON, % prefix IS NOT needed for line magics."
]
],
[
[
"`Line` vs `cell magics`:",
"_____no_output_____"
]
],
[
[
"%timeit list(range(1000))",
"13.6 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n"
],
[
"%%timeit\nlist(range(10))\nlist(range(100))",
"1.28 µs ± 189 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n"
]
],
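The `%timeit` magic is built on Python's standard-library `timeit` module, so the same measurement can be reproduced outside IPython. A minimal sketch (the loop count is chosen by hand here, whereas `%timeit` picks it automatically):

```python
import timeit

# Time list(range(1000)) the way %timeit does, but via the stdlib module.
number = 100_000
elapsed = timeit.timeit("list(range(1000))", number=number)
print(f"{elapsed / number * 1e6:.2f} µs per loop")
```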
[
[
"`Line magics` can be used even inside `code blocks`:",
"_____no_output_____"
]
],
[
[
"for i in range(1, 5):\n size = i*100\n print('size:', size, end=' ')\n %timeit list(range(size))",
"size: 100 904 ns ± 57.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\nsize: 200 1.33 µs ± 76.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\nsize: 300 2.21 µs ± 60.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\nsize: 400 3.59 µs ± 50.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n"
]
],
[
[
"`Magics` can do anything they want with their input, so it doesn't have to be valid `Python`:",
"_____no_output_____"
]
],
[
[
"%%bash\necho \"My shell is:\" $SHELL\necho \"My disk usage is:\"\ndf -h",
"My shell is: /bin/bash\nMy disk usage is:\nFilesystem Size Used Avail Capacity iused ifree %iused Mounted on\n/dev/disk1s1 466Gi 10Gi 31Gi 26% 488411 4881964469 0% /\ndevfs 228Ki 228Ki 0Bi 100% 790 0 100% /dev\n/dev/disk1s2 466Gi 406Gi 31Gi 93% 5249330 4877203550 0% /System/Volumes/Data\n/dev/disk1s5 466Gi 18Gi 31Gi 38% 18 4882452862 0% /private/var/vm\nmap auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/home\ndrivefs 200Gi 171Gi 29Gi 86% 18446744069414585309 4294967295 1867079359252488448% /Volumes/GoogleDrive\n/dev/disk1s4 466Gi 504Mi 31Gi 2% 57 4882452823 0% /Volumes/Recovery\n"
]
],
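Outside the notebook, the same shell commands can be run from plain Python with the standard-library `subprocess` module. A minimal sketch of what the `%%bash` cell above does (only the `echo` line is reproduced here):

```python
import subprocess

# Run a shell command and capture its output as text --
# roughly what the %%bash cell magic does for the whole cell body.
result = subprocess.run('echo "My shell is:" $SHELL',
                        shell=True, capture_output=True, text=True)
print(result.stdout, end="")
```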
[
[
"Another interesting `cell magic`: create any `file` you want `locally` from the `notebook`:",
"_____no_output_____"
]
],
[
[
"%%writefile test.txt\nThis is a test file!\nIt can contain anything I want...\n\nAnd more...",
"Writing test.txt\n"
],
[
"!cat test.txt",
"This is a test file!\nIt can contain anything I want...\n\nAnd more...\n"
]
],
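The same write-then-read round trip can be done in plain Python with `pathlib`. A minimal sketch of the `%%writefile` + `!cat` pair above, written to a temporary directory so no existing file is clobbered:

```python
import tempfile
from pathlib import Path

# Plain-Python equivalent of %%writefile followed by !cat:
# write some text to a file, then read it back.
path = Path(tempfile.mkdtemp()) / "test.txt"
path.write_text("This is a test file!\nIt can contain anything I want...\n")
print(path.read_text())
```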
[
[
"Let's see what other `magics` are currently defined in the `system`:",
"_____no_output_____"
]
],
[
[
"%lsmagic",
"_____no_output_____"
]
],
[
[
"## Writing LaTeX\n\nLet's use `%%latex` to render a block of `LaTeX`:",
"_____no_output_____"
]
],
[
[
"%%latex\n$$F(k) = \\int_{-\\infty}^{\\infty} f(x) e^{2\\pi i k} \\mathrm{d} x$$",
"_____no_output_____"
]
],
[
[
"### Running normal Python code: execution and errors\n\nNot only can you input normal `Python code`, you can even paste straight from a `Python` or `IPython shell session`:",
"_____no_output_____"
]
],
[
[
">>> # Fibonacci series:\n... # the sum of two elements defines the next\n... a, b = 0, 1\n>>> while b < 10:\n... print(b)\n... a, b = b, a+b",
"1\n1\n2\n3\n5\n8\n"
],
[
"In [1]: for i in range(10):\n ...: print(i, end=' ')\n ...: ",
"0 1 2 3 4 5 6 7 8 9 "
]
],
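The pasted session above can also be written as an ordinary generator function; a minimal sketch producing the same series:

```python
def fib(limit):
    """Yield Fibonacci numbers below limit, as in the pasted session."""
    a, b = 0, 1
    while b < limit:
        yield b
        a, b = b, a + b

print(list(fib(10)))  # -> [1, 1, 2, 3, 5, 8]
```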
[
[
"And when your code produces errors, you can control how they are displayed with the `%xmode` magic:",
"_____no_output_____"
]
],
[
[
"%%writefile mod.py\n\ndef f(x):\n return 1.0/(x-1)\n\ndef g(y):\n return f(y+1)",
"Writing mod.py\n"
]
],
[
[
"Now let's call the function `g` with an argument that would produce an error:",
"_____no_output_____"
]
],
[
[
"import mod\nmod.g(0)",
"_____no_output_____"
],
[
"%xmode plain\nmod.g(0)",
"Exception reporting mode: Plain\n"
],
[
"%xmode verbose\nmod.g(0)",
"Exception reporting mode: Verbose\n"
]
],
[
[
"The default `%xmode` is \"context\", which shows additional context but not all local variables. Let's restore that one for the rest of our session.",
"_____no_output_____"
]
],
[
[
"%xmode context",
"Exception reporting mode: Context\n"
]
],
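Outside IPython there is no `%xmode`, but the standard-library `traceback` module prints the plain report that the magic lets IPython reformat. A minimal sketch, with the `mod.py` functions redefined inline:

```python
import traceback

def f(x):
    return 1.0 / (x - 1)

def g(y):
    return f(y + 1)

try:
    g(0)
except ZeroDivisionError:
    # Print the traceback without re-raising -- the plain-Python
    # counterpart of the reports shown above.
    traceback.print_exc()
```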
[
[
"## Running code in other languages with special `%%` magics",
"_____no_output_____"
]
],
[
[
"%%perl\n@months = (\"July\", \"August\", \"September\");\nprint $months[0];",
"July"
],
[
"%%ruby\nname = \"world\"\nputs \"Hello #{name.capitalize}!\"",
"Hello World!\n"
]
],
[
[
"### Raw Input in the notebook\n\nSince version `1.0`, the `IPython notebook web application` supports `raw_input`, which, for example, allows us to invoke the `%debug` magic in the `notebook`:",
"_____no_output_____"
]
],
[
[
"mod.g(0)",
"_____no_output_____"
],
[
"%debug",
"> \u001b[0;32m/Users/peerherholz/google_drive/GitHub/DGPA_workshop_2022/workshop/prerequisites/mod.py\u001b[0m(3)\u001b[0;36mf\u001b[0;34m()\u001b[0m\n\u001b[0;32m 1 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2 \u001b[0;31m\u001b[0;32mdef\u001b[0m \u001b[0mf\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m----> 3 \u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0;36m1.0\u001b[0m\u001b[0;34m/\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m-\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 4 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 5 \u001b[0;31m\u001b[0;32mdef\u001b[0m \u001b[0mg\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\n--KeyboardInterrupt--\n\nKeyboardInterrupt: Interrupted by user\n"
]
],
[
[
"Don't forget to exit your `debugging session`. `Raw input` can of course be used to ask for `user input`:",
"_____no_output_____"
]
],
[
[
"enjoy = input('Are you enjoying this tutorial? ')\nprint('enjoy is:', enjoy)",
"enjoy is: yes\n"
]
],
[
[
"### Plotting in the notebook\n\n`Notebooks` support a variety of fantastic `plotting options`, including `static` and `interactive` graphics. This `magic` configures `matplotlib` to `render` its `figures` `inline`:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"x = np.linspace(0, 2*np.pi, 300)\ny = np.sin(x**2)\nplt.plot(x, y)\nplt.title(\"A little chirp\")\nfig = plt.gcf() # let's keep the figure object around for later...",
"_____no_output_____"
],
[
"import plotly.figure_factory as ff\n\n# Add histogram data\nx1 = np.random.randn(200) - 2\nx2 = np.random.randn(200)\nx3 = np.random.randn(200) + 2\nx4 = np.random.randn(200) + 4\n\n# Group data together\nhist_data = [x1, x2, x3, x4]\n\ngroup_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']\n\n# Create distplot with custom bin_size\nfig = ff.create_distplot(hist_data, group_labels, bin_size=.2)\nfig.show()",
"_____no_output_____"
]
],
[
[
"## The IPython kernel/client model",
"_____no_output_____"
]
],
[
[
"%connect_info",
"{\n \"shell_port\": 9007,\n \"iopub_port\": 9009,\n \"stdin_port\": 9008,\n \"control_port\": 9006,\n \"hb_port\": 9005,\n \"ip\": \"127.0.0.1\",\n \"key\": \"06de83f0-3e03-43c3-abec-5aa8e443170c\",\n \"transport\": \"tcp\",\n \"signature_scheme\": \"hmac-sha256\",\n \"kernel_name\": \"\"\n}\n\nPaste the above JSON into a file, and connect with:\n $> jupyter <app> --existing <file>\nor, if you are local, you can connect with just:\n $> jupyter <app> --existing /var/folders/61/0lj9r7px3k52gv9yfyx6ky300000gn/T/tmp-571231AoTuwMSUHcM.json\nor even just:\n $> jupyter <app> --existing\nif this is the most recent Jupyter kernel you have started.\n"
]
],
[
[
"We can automatically connect a Qt Console to the currently running kernel with the `%qtconsole` magic, or by typing `ipython console --existing <kernel-UUID>` in any terminal:",
"_____no_output_____"
]
],
[
[
"%qtconsole",
"_____no_output_____"
]
],
[
[
"## Saving a Notebook\n\nJupyter notebooks autosave, so you don't have to worry too much about losing code. At the top of the page you can usually see the current save status:\n\n`Last Checkpoint: 2 minutes ago (unsaved changes)`\n`Last Checkpoint: a few seconds ago (autosaved)`\n\nIf you want to save a notebook on purpose, either click on `File` > `Save and Checkpoint` or press `Ctrl+S`.",
"_____no_output_____"
],
[
"## To Jupyter & beyond\n\n<img align=\"center\" src=\"https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_example.png\" alt=\"logo\" title=\"jupyter\" width=\"800\" height=\"400\" /> ",
"_____no_output_____"
],
[
"1. Open a terminal",
"_____no_output_____"
],
[
"2. Type `jupyter lab`",
"_____no_output_____"
],
[
"3. If you're not automatically directed to a webpage copy the URL printed in the terminal and paste it in your browser",
"_____no_output_____"
],
[
"4. Click \"New\" in the top-right corner and select \"Python 3\"",
"_____no_output_____"
],
[
"5. You have a `Jupyter notebook` within `Jupyter lab`!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a5c62081665c37e20dc3ad1507f79dbd1c32f8f
| 222,248 |
ipynb
|
Jupyter Notebook
|
Hands on/CS185C02_Sp21_ho13.ipynb
|
shahadeshubhu/CS-185C
|
7c210fc4198d2dc6009d1c470ae292be05c62bf6
|
[
"MIT"
] | null | null | null |
Hands on/CS185C02_Sp21_ho13.ipynb
|
shahadeshubhu/CS-185C
|
7c210fc4198d2dc6009d1c470ae292be05c62bf6
|
[
"MIT"
] | null | null | null |
Hands on/CS185C02_Sp21_ho13.ipynb
|
shahadeshubhu/CS-185C
|
7c210fc4198d2dc6009d1c470ae292be05c62bf6
|
[
"MIT"
] | null | null | null | 222,248 | 222,248 | 0.929255 |
[
[
[
"## Use bar charts and heatmaps to visualize patterns in your data\nIGN Game Reviews provide scores from experts for the most recent game releases, ranging from 0 (Disaster) to 10 (Masterpiece).\n<img src=\"https://i.imgur.com/Oh06Fu1.png\">\n\n\n",
"_____no_output_____"
],
[
"## Load the data\n1. Read the IGN data file into a dataframe named `ign_scores`. \n2. Use the `\"Platform\"` column to label the rows.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nIGN=\"https://raw.githubusercontent.com/csbfx/advpy122-data/master/ign_scores.csv\"\n## Your code here . . .\nign_scores = pd.read_csv(IGN)\nign_scores = ign_scores.set_index('Platform')\nign_scores",
"_____no_output_____"
]
],
[
[
"## Problem 1\nUse the dataframe `ign_scores` to determine the highest score received by PC games, for any genre.",
"_____no_output_____"
]
],
[
[
"pc_games = ign_scores.loc['PC'].max()\npc_games",
"_____no_output_____"
]
],
[
[
"## Problem 2\nUse the dataframe `ign_scores` to determine which genre has the lowest score for the `PlayStation Vita` platform.",
"_____no_output_____"
]
],
[
[
"psv_games = ign_scores.loc['PlayStation Vita'].idxmin()\npsv_games",
"_____no_output_____"
]
],
[
[
"## Problem 3\nYour instructor's favorite video game has been Mario Kart Wii, a racing game released for the Wii platform in 2008. And, IGN agrees with her that it is a great game -- their rating for this game is a whopping 8.9! Inspired by the success of this game, your instructor is considering creating her very own racing game for the Wii platform. Perform the following analyses to help her determine which platform she should focus on.\n\n1. Create a bar chart that shows the score for *Racing* games, for each platform. Your chart should have one bar for each platform. Provide a meaningful title to the plot.\n\n2. Based on the bar chart, do you expect a racing game for the **Wii** platform to receive a high rating? If not, use pandas to find out from the dataframe `ign_scores` which gaming platform is best for a racing game.",
"_____no_output_____"
]
],
[
[
"## Use ign_scores to determine which gaming platform is the best\n## for racing game.\n\n## Your code here . . . \nign_scores.plot.bar(y='Racing', title=\"Bar chart for ratings of racing games for different platforms\")",
"_____no_output_____"
]
],
[
[
"As shown in the bar plot, Wii has the lowest rating for the 'Racing' genre. Xbox One has the highest rating for racing games and hence would be the best platform.",
"_____no_output_____"
],
[
"## Problem 4\nSince your instructor's gaming interests are pretty broad, you can use the IGN scores to help her decide on a genre and platform. \n\n1. Create a heatmap using the IGN scores by genre and platform and include the scores in the cells of the heatmap.\n2. Based on the heatmap, which combination of genre and platform receives the highest average ratings? Which combination receives the lowest average ratings? Write the answers in a markdown cell.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,7))\nsns.heatmap(data= ign_scores, annot=True)\nplt.xlabel(\"Genre\");",
"_____no_output_____"
]
],
[
[
"Simulation games on the PlayStation 4 receive the highest average ratings at 9.2. The lowest average ratings are scored on the Game Boy Color for fighting and shooter games.\n",
"_____no_output_____"
],
[
"## Problem 5\nUse the Pokemon dataset to create a clustermap with color. First, filter the dataframe to only keep data with `Type 1` equal to one of the following values: `Water`, `Normal`, `Grass`, `Bug` and `Psychic`. Annotate the dendrogram using different colors for these five different `Type 1` values. Use `Name` as the index.\n\npokemon_data is in https://raw.githubusercontent.com/csbfx/advpy122-data/master/Pokemon.csv",
"_____no_output_____"
]
],
[
[
"pokemon_data = pd.read_csv(\"https://raw.githubusercontent.com/csbfx/advpy122-data/master/Pokemon.csv\")\ntypes = ['Water', 'Normal', 'Grass', 'Bug', 'Psychic']\npokemon_data = pokemon_data[pokemon_data['Type 1'].isin(types)]\npokemon_data['Legendary'] = pokemon_data['Legendary'].astype('int')\ng = sns.clustermap(pokemon_data.set_index('Name').drop(columns=['Type 1', 'Type 2', '#', 'Legendary', 'Generation', 'Total']), cmap=\"BuPu\",\n figsize=(12,8),\n row_colors=pokemon_data.set_index('Name')['Type 1'].replace(\n {\"Normal\":\"red\",\n \"Psychic\":\"purple\",\n \"Water\":\"lightblue\",\n \"Grass\": \"green\",\n \"Bug\": \"black\"\n }))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a5c8c97634a7ac30a022bbb72dcdabee60d3c08
| 8,200 |
ipynb
|
Jupyter Notebook
|
code_signal_IV_solutions.ipynb
|
harmishpatel21/codesignal-IV-solutions
|
03d1649e60532ced66655dc64c705e8e6a3d3ae3
|
[
"MIT"
] | null | null | null |
code_signal_IV_solutions.ipynb
|
harmishpatel21/codesignal-IV-solutions
|
03d1649e60532ced66655dc64c705e8e6a3d3ae3
|
[
"MIT"
] | null | null | null |
code_signal_IV_solutions.ipynb
|
harmishpatel21/codesignal-IV-solutions
|
03d1649e60532ced66655dc64c705e8e6a3d3ae3
|
[
"MIT"
] | null | null | null | 28.873239 | 405 | 0.447073 |
[
[
[
"<a href=\"https://colab.research.google.com/github/harmishpatel21/codesignal-IV-solutions/blob/main/code_signal_IV_solutions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"**Code Signal Solution for Interview Challenges**",
"_____no_output_____"
],
[
"**Problem Statement 1:**\n\n'''Given an array a that contains only numbers in the range from 1 to a.length, find the first duplicate number for which the second occurrence has the minimal index. In other words, if there are more than 1 duplicated numbers, return the number for which the second occurrence has a smaller index than the second occurrence of the other number does. If there are no such elements, return -1.\n\nExample\n\nFor a = [2, 1, 3, 5, 3, 2], the output should be firstDuplicate(a) = 3.\n\nThere are 2 duplicates: numbers 2 and 3. The second occurrence of 3 has a smaller index than the second occurrence of 2 does, so the answer is 3.\n\nFor a = [2, 2], the output should be firstDuplicate(a) = 2;\n\nFor a = [2, 4, 3, 5, 1], the output should be firstDuplicate(a) = -1.\n\nInput/Output\n\n[execution time limit] 4 seconds (py3)\n\n[input] array.integer a\n\nGuaranteed constraints:\n1 ≤ a.length ≤ 105,\n1 ≤ a[i] ≤ a.length.\n\n[output] integer\n\nThe element in a that occurs in the array more than once and has the minimal index for its second occurrence. If there are no such elements, return -1.'''\n",
"_____no_output_____"
]
],
[
[
"def firstDuplicate(a):\n seen = set()\n for i in a:\n if i in seen:\n return i\n seen.add(i)\n return -1\n\na = [2, 1, 3, 5, 3, 2]\nprint(firstDuplicate(a))",
"{2}\n{1, 2}\n{1, 2, 3}\n{1, 2, 3, 5}\n3\n"
]
],
[
[
"**Problem Statement 2:**\n\nGiven a string s consisting of small English letters, find and return the first instance of a non-repeating character in it. If there is no such character, return '_'.\n\nExample\n\nFor s = \"abacabad\", the output should be\nfirstNotRepeatingCharacter(s) = 'c'.\n\nThere are 2 non-repeating characters in the string: 'c' and 'd'. Return c since it appears in the string first.\n\nFor s = \"abacabaabacaba\", the output should be\nfirstNotRepeatingCharacter(s) = '_'.\n\nThere are no characters in this string that do not repeat.\n\nInput/Output\n\n[execution time limit] 4 seconds (py3)\n\n[input] string s\n\nA string that contains only lowercase English letters.\n\nGuaranteed constraints:\n1 ≤ s.length ≤ 105.\n\n[output] char\n\nThe first non-repeating character in s, or '_' if there are no characters that do not repeat.",
"_____no_output_____"
]
],
[
[
"def firstNotRepeatingCharacter(s):\n for i in s:\n if s.index(i) == s.rindex(i):\n return i\n return '_'\n\n\ns = \"abacabad\"\n# s = \"abacabaabacaba\"\nprint(firstNotRepeatingCharacter(s))",
"c\n"
]
],
[
[
"**Problem Statement 3:**\n\nNote: Try to solve this task in-place (with O(1) additional memory), since this is what you'll be asked to do during an interview.\n\nYou are given an n x n 2D matrix that represents an image. Rotate the image by 90 degrees (clockwise).\n\nExample\n\nFor\n\na = [[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]]\nthe output should be\n\nrotateImage(a) =\n [[7, 4, 1],\n [8, 5, 2],\n [9, 6, 3]]\nInput/Output\n\n[execution time limit] 4 seconds (py3)\n\n[input] array.array.integer a\n\nGuaranteed constraints:\n1 ≤ a.length ≤ 100,\na[i].length = a.length,\n1 ≤ a[i][j] ≤ 104.\n\n[output] array.array.integer",
"_____no_output_____"
]
],
[
[
"def rotateImage(a):\n return list(zip(*a[::-1]))\n\n\na = [[1, 2, 3], \n [4, 5, 6], \n [7, 8, 9]] \n# print(len(a))\n# print(a[-2][-1])\nrotateImage(a)",
"_____no_output_____"
],
[
"a[::-1]",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5c91e9e1e7c4caa4a068f6d362394dd58c65a3
| 9,766 |
ipynb
|
Jupyter Notebook
|
Exploratory data analysis.ipynb
|
msunil10052/Image-Classification-1
|
5feddb50ba75a0237edaf356de483217baea720f
|
[
"MIT"
] | null | null | null |
Exploratory data analysis.ipynb
|
msunil10052/Image-Classification-1
|
5feddb50ba75a0237edaf356de483217baea720f
|
[
"MIT"
] | null | null | null |
Exploratory data analysis.ipynb
|
msunil10052/Image-Classification-1
|
5feddb50ba75a0237edaf356de483217baea720f
|
[
"MIT"
] | null | null | null | 9,766 | 9,766 | 0.667827 |
[
[
[
"Fashion MNIST dataset",
"_____no_output_____"
]
],
[
[
"#!pip install --upgrade tensorflow",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\n# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Helper libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"2.0.0\n"
]
],
[
[
"# Import the Fashion MNIST dataset",
"_____no_output_____"
],
[
"This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:\n\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/> \n </td></tr>\n</table>\n\nFashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.\n\nThis guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.\n\nHere, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:",
"_____no_output_____"
]
],
[
[
"fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n"
]
],
[
[
"Loading the dataset returns four NumPy arrays:\n\n* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.\n* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.\n\nThe images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:\n\n<table>\n <tr>\n <th>Label</th>\n <th>Class</th>\n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td>\n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td>\n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td>\n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td>\n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td>\n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td>\n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td>\n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td>\n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td>\n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td>\n </tr>\n</table>\n\nEach image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:",
"_____no_output_____"
]
],
[
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"## Explore the data\n\nLet's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:",
"_____no_output_____"
]
],
[
[
"train_images.shape",
"_____no_output_____"
]
],
[
[
"Likewise, there are 60,000 labels in the training set:",
"_____no_output_____"
]
],
[
[
"len(train_labels)",
"_____no_output_____"
]
],
[
[
"Each label is an integer between 0 and 9:",
"_____no_output_____"
]
],
[
[
"train_labels[0:2]",
"_____no_output_____"
]
],
[
[
"There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:",
"_____no_output_____"
]
],
[
[
"test_images.shape",
"_____no_output_____"
]
],
[
[
"And the test set contains 10,000 image labels:",
"_____no_output_____"
]
],
[
[
"len(test_labels)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5cafd7c1c00a0097e49faa91a9fa2215905531
| 10,017 |
ipynb
|
Jupyter Notebook
|
notebooks/IntroductionToResidual.ipynb
|
TerrainBento/umami
|
ab6cf5aabd8ecfe22b3de8e762317ceb53236362
|
[
"MIT"
] | 1 |
2019-11-05T17:49:05.000Z
|
2019-11-05T17:49:05.000Z
|
notebooks/IntroductionToResidual.ipynb
|
TerrainBento/umami
|
ab6cf5aabd8ecfe22b3de8e762317ceb53236362
|
[
"MIT"
] | 9 |
2019-09-18T20:04:40.000Z
|
2020-02-28T19:56:24.000Z
|
notebooks/IntroductionToResidual.ipynb
|
TerrainBento/umami
|
ab6cf5aabd8ecfe22b3de8e762317ceb53236362
|
[
"MIT"
] | 2 |
2019-09-11T17:08:32.000Z
|
2019-10-29T11:57:29.000Z
| 31.901274 | 519 | 0.594589 |
[
[
[
"# Part 2: Introduction to Umami and the `Residual` Class\n\nUmami is a package for calculating metrics for use with for Earth surface dynamics models. This notebook is the second notebook in a three-part introduction to using umami.\n\n## Scope of this tutorial\n\nBefore starting this tutorial, you should have completed [Part 1: Introduction to Umami and the `Metric` Class](IntroductionToMetric.ipynb).\n\nIn this tutorial you will learn the basic principles behind using the `Residual` class to compare models and data using terrain statistics. \n\nIf you have comments or questions about the notebooks, the best place to get help is through [GitHub Issues](https://github.com/TerrainBento/umami/issues).\n\nTo begin this example, we will import the required python packages. ",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\nfrom io import StringIO\nimport numpy as np\nfrom landlab import RasterModelGrid, imshow_grid\nfrom umami import Residual",
"_____no_output_____"
]
],
[
[
"## Step 1: Create grids\n\nUnlike the first notebook, here we need to compare model and data. We will create two grids, the `model_grid` and the `data_grid`, each with a field called `topographic__elevation`. Both are size (10x10). The `data_grid` slopes to the south-west, while the `model_grid` has some additional noise added to it. \n\nFirst, we construct and plot the `data_grid`.",
"_____no_output_____"
]
],
[
[
"data_grid = RasterModelGrid((10, 10))\ndata_z = data_grid.add_zeros(\"node\", \"topographic__elevation\")\ndata_z += data_grid.x_of_node + data_grid.y_of_node\n\nimshow_grid(data_grid, data_z)",
"_____no_output_____"
]
],
[
[
"Next, we construct and plot `model_grid`. It differs only in that it has random noise added to the core nodes. ",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\n\nmodel_grid = RasterModelGrid((10, 10))\nmodel_z = model_grid.add_zeros(\"node\", \"topographic__elevation\")\nmodel_z += model_grid.x_of_node + model_grid.y_of_node\nmodel_z[model_grid.core_nodes] += np.random.randn(model_grid.core_nodes.size)\nimshow_grid(model_grid, model_z)",
"_____no_output_____"
]
],
[
[
"We can difference the two grids to see how they differ. As expected, it looks like normally distributed noise. ",
"_____no_output_____"
]
],
[
[
"imshow_grid(model_grid, data_z - model_z, cmap=\"seismic\")",
"_____no_output_____"
]
],
[
[
"This example shows a difference map with 64 residuals on it. A more realistic application with a much larger domain would have tens of thousands. Methods of model analysis such as calibration and sensitivity analysis need model output, such as the topography shown here, to be distilled into a smaller number of values. This is the task that umami facilitates. \n\n## Step 2: Construct an umami `Residual`\n\nSimilar to constructing a `Metric`, a residual is specified by a dictionary or YAML-style input file. \n\nHere we repeat some of the content of the prior notebook:\n\nEach calculation gets its own unique name (the key in the dictionary), and is associated with a value, a dictionary specifying exactly what should be calculated. The only value of the dictionary required by all umami calculations is `_func`, which indicates which of the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) will be performed. Subsequent elements of this dictionary are the required inputs to the calculation function and are described in their documentation. \n\nNote that some calculations listed in the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) submodule are valid for both the umami `Metric` and `Residual` classes, while others are for `Residual`s only (the `Metric` class was covered in [Part 1](IntroductionToMetric.ipynb) of this notebook series). \n\nThe order that calculations are listed is read in as an [OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) and retained as the \"calculation order\". \n\nIn our example we will use the following dictionary: \n\n```python\nresiduals = {\n \"me\": {\n \"_func\": \"aggregate\",\n \"method\": \"mean\",\n \"field\": \"topographic__elevation\"\n },\n \"ep10\": {\n \"_func\": \"aggregate\",\n \"method\": \"percentile\",\n \"field\": \"topographic__elevation\",\n \"q\": 10\n }\n}\n```\nThis specifies calculation of the mean of `topographic__elevation` (to be called \"me\") and the 10th percentile `topographic__elevation` (called \"ep10\"). The equivalent portion of a YAML input file would look like:\n\n```yaml\nresiduals:\n me:\n _func: aggregate\n method: mean\n field: topographic__elevation\n ep10:\n _func: aggregate\n method: percentile\n field: topographic__elevation\n q: 10\n```\n\nThe following code constructs the `Residual`. Note that the only difference with the prior notebook is that instead of specifying only one grid, here we provide two. Under the hood umami checks that the grids are compatible and will raise errors if they are not.",
"_____no_output_____"
]
],
[
[
"residuals = {\n \"me\": {\n \"_func\": \"aggregate\",\n \"method\": \"mean\",\n \"field\": \"topographic__elevation\"\n },\n \"ep10\": {\n \"_func\": \"aggregate\",\n \"method\": \"percentile\",\n \"field\": \"topographic__elevation\",\n \"q\": 10\n }\n}\nresidual = Residual(model_grid, data_grid, residuals=residuals)",
"_____no_output_____"
]
],
[
[
"To calculate the residuals, run the `calculate` bound method. ",
"_____no_output_____"
]
],
[
[
"residual.calculate()",
"_____no_output_____"
]
],
[
[
"Just like `Metric` classes, the `Residual` has some useful methods and attributes. \n\n`residual.names` gives the names as a list, in calculation order. ",
"_____no_output_____"
]
],
[
[
"residual.names",
"_____no_output_____"
]
],
[
[
"`residual.values` gives the values as a list, in calculation order. ",
"_____no_output_____"
]
],
[
[
"residual.values",
"_____no_output_____"
]
],
[
[
"And a function is available to get the value of a given metric.",
"_____no_output_____"
]
],
[
[
"residual.value(\"me\")",
"_____no_output_____"
]
],
[
[
"## Step 5: Write output\n\nThe methods for writing output available in `Metric` are also provided by `Residual`.",
"_____no_output_____"
]
],
[
[
"out = StringIO()\nresidual.write_residuals_to_file(out, style=\"dakota\")\nfile_contents = out.getvalue().splitlines()\nfor line in file_contents:\n print(line.strip())",
"_____no_output_____"
],
[
"out = StringIO()\nresidual.write_residuals_to_file(out, style=\"yaml\")\nfile_contents = out.getvalue().splitlines()\nfor line in file_contents:\n print(line.strip())",
"_____no_output_____"
]
],
[
[
"# Next steps\n\nNow that you have a sense for how the `Metric` and `Residual` classes are used, try the next notebook: [Part 3: Other IO options (using umami without Landlab or terrainbento)](OtherIO_options.ipynb).",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a5cb1769732029b1b5b75ddf6cbb96e3bcc72c0
| 71,382 |
ipynb
|
Jupyter Notebook
|
Correction essentiality/Untitled.ipynb
|
oligogenic/DIDA_SSL
|
cbf61892bfde999eadf31db918833f6c75a5c9f3
|
[
"MIT"
] | 1 |
2018-07-19T10:34:46.000Z
|
2018-07-19T10:34:46.000Z
|
Correction essentiality/Untitled.ipynb
|
oligogenic/DIDA_SSL
|
cbf61892bfde999eadf31db918833f6c75a5c9f3
|
[
"MIT"
] | null | null | null |
Correction essentiality/Untitled.ipynb
|
oligogenic/DIDA_SSL
|
cbf61892bfde999eadf31db918833f6c75a5c9f3
|
[
"MIT"
] | null | null | null | 34.584302 | 955 | 0.350579 |
[
[
[
"import pickle as pk\nimport pandas as pd\n%pylab inline",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"y_dic = pk.load(open(\"labelDic.cPickle\",\"rb\"))\nX_dic = pk.load(open(\"vectorDicGDIpair.cPickle\",\"rb\"))",
"_____no_output_____"
],
[
"df = pd.read_csv('dida_v2_full.csv', index_col=0).replace('CO', 1).replace('TD', 0).replace('UK', -1)",
"_____no_output_____"
],
[
"rd = np.vectorize(lambda x: round(x * 10)/10)\n\nessA_changed = {}\nessB_changed = {}\nrecA_changed = {}\nrecB_changed = {}\npath_changed = {}\ndeef_changed = {}\n\nfor ddid in X_dic:\n x1 = rd(array(X_dic[ddid])[ [2, 3, 6, 7, 8] ])\n x2 = rd(array(df.loc[ddid])[ [2, 3, 6, 7, 9, 12] ])\n \n if x1[0] != x2[0]: recA_changed[ddid] = (x1[0], x2[0])\n if x1[1] != x2[1]: essA_changed[ddid] = (x1[1], x2[1])\n if x1[2] != x2[2]: recB_changed[ddid] = (x1[2], x2[2])\n if x1[3] != x2[3]: essB_changed[ddid] = (x1[3], x2[3])\n if x1[4] != x2[4]: path_changed[ddid] = (x1[4], x2[4])\n \n if y_dic[ddid] != x2[5]: deef_changed[ddid] = (y_dic[ddid], x2[5])",
"_____no_output_____"
],
[
"print(essA_changed)",
"{'dd024': (0.0, 1.0), 'dd034': (0.0, 1.0), 'dd035': (0.0, 1.0), 'dd036': (0.0, 1.0), 'dd037': (0.0, 1.0), 'dd033': (0.0, 1.0), 'dd134': (0.0, 1.0), 'dd207': (0.4, 0.0), 'dd124': (0.0, 1.0), 'dd123': (0.0, 1.0), 'dd114': (0.0, 1.0), 'dd193': (0.0, 1.0), 'dd192': (0.0, 1.0), 'dd191': (0.0, 1.0), 'dd190': (0.0, 1.0), 'dd197': (0.4, 1.0), 'dd196': (0.4, 1.0), 'dd195': (0.4, 1.0), 'dd194': (0.4, 1.0), 'dd199': (0.4, 1.0), 'dd198': (0.4, 0.0), 'dd188': (0.0, 1.0), 'dd184': (0.0, 1.0), 'dd185': (0.0, 1.0), 'dd187': (0.0, 1.0), 'dd180': (0.0, 1.0), 'dd181': (0.0, 1.0), 'dd182': (0.0, 1.0), 'dd173': (0.0, 1.0), 'dd175': (0.0, 1.0), 'dd174': (0.0, 1.0), 'dd177': (0.0, 1.0), 'dd176': (0.0, 1.0), 'dd179': (0.0, 1.0), 'dd178': (0.0, 1.0), 'dd074': (0.0, 1.0), 'dd075': (0.0, 1.0), 'dd049': (0.0, 1.0), 'dd006': (0.0, 1.0), 'dd086': (0.0, 1.0), 'dd083': (0.0, 1.0), 'dd082': (0.0, 1.0), 'dd157': (0.0, 1.0), 'dd155': (0.0, 1.0), 'dd153': (0.0, 1.0)}\n"
],
[
"print('Essentiality gene A lost: ' + ', '.join(sorted(essA_changed.keys())))\nprint('Essentiality gene B lost: ' + ', '.join(sorted(essB_changed.keys())))\nprint('Recessiveness gene A changed: dd207, 1.00 -> 0.15')",
"Essentiality gene A lost: dd006, dd024, dd033, dd034, dd035, dd036, dd037, dd049, dd074, dd075, dd082, dd083, dd086, dd114, dd123, dd124, dd134, dd153, dd155, dd157, dd173, dd174, dd175, dd176, dd177, dd178, dd179, dd180, dd181, dd182, dd184, dd185, dd187, dd188, dd190, dd191, dd192, dd193, dd194, dd195, dd196, dd197, dd198, dd199, dd207\nEssentiality gene B lost: dd021, dd032, dd038, dd039, dd043, dd117, dd118, dd119, dd120, dd158, dd159, dd160, dd188, dd190, dd191, dd192, dd193, dd199, dd200\nRecessiveness gene A changed: dd207, 1.00 -> 0.15\n"
],
[
"df_sapiens = pd.read_csv('Mus musculus_consolidated.csv').drop(['locus', 'datasets', 'datasetIDs', 'essentiality status'], 1)\ndf_sapiens.head()",
"_____no_output_____"
],
[
"genes = []\nfor k in df['Pair']:\n g1, g2 = k.split('/')\n if g1 not in genes:\n genes.append(g1)\n if g2 not in genes:\n genes.append(g2)\ngenes = sorted(genes)",
"_____no_output_____"
],
[
"lookup_ess = {}\nfor line in array(df_sapiens):\n name, ess = line\n if type(name) is float: continue;\n lookup_ess[name.upper()] = ess",
"_____no_output_____"
],
[
"import pickle\npathway_pickle = open('ess_pickle', 'wb')\npickle.dump(lookup_ess, pathway_pickle)\npathway_pickle.close()",
"_____no_output_____"
],
[
"result_s = {}\nfor g in genes:\n if g in lookup_ess:\n result_s[g] = lookup_ess[g]\n else:\n result_s[g] = 'N/A'\n print(g, 'not found.')",
"ALAD not found.\nATP2B3 not found.\nBAAT not found.\nBBS9 not found.\nC10orf2 not found.\nC2orf71 not found.\nCCDC28B not found.\nCEP41 not found.\nDFNB31 not found.\nEYS not found.\nKAL1 not found.\nMAN1B1 not found.\nMYH7 not found.\nMYH7B not found.\nNEXN not found.\nNSMF not found.\nOTUD4 not found.\nPSMA3 not found.\nPSMB4 not found.\nSCN2A not found.\nTRAPPC9 not found.\nWDR11 not found.\n"
],
[
"for key in result_s:\n x = result_s[key]\n if x == 'Essential':\n result_s[key] = 1\n elif x == 'Nonessential':\n result_s[key] = 0",
"_____no_output_____"
],
[
"new_essA, new_essB = [], []\nfor pair in df['Pair']:\n g1, g2 = pair.split('/')\n new_essA.append(result_s[g1])\n new_essB.append(result_s[g2])\nnew_essA = array(new_essA)\nnew_essB = array(new_essB)",
"_____no_output_____"
],
[
"df2 = pd.read_csv('dida_v2_full.csv', index_col=0)\nnew_essA[new_essA == 'N/A'] = 0.67\nnew_essB[new_essB == 'N/A'] = 0.62\ndf2['EssA'] = new_essA\ndf2['EssB'] = new_essB",
"C:\\Users\\azizf\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n \nC:\\Users\\azizf\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n This is separate from the ipykernel package so we can avoid doing imports until\n"
],
[
"df2.to_csv('dida_v2_full_newess.csv')",
"_____no_output_____"
],
[
"pd.read_csv('dida_v2_full_newess.csv', index_col=0)",
"_____no_output_____"
],
[
"new_essA[new_essA == 'N/A'] = 0\nmean(array(new_essA).astype(int))",
"C:\\Users\\azizf\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"new_essB[new_essB == 'N/A'] = 0\nmean(array(new_essB).astype(int))",
"C:\\Users\\azizf\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"for g in result_s:\n print(g + ',' + result_s[g])",
"ABCC6,Nonessential\nABCC9,Essential\nALAD,N/A\nARL6,Essential\nATP2B2,Essential\nATP2B3,N/A\nBAAT,N/A\nBBS1,Essential\nBBS10,Essential\nBBS12,Nonessential\nBBS2,Essential\nBBS4,Essential\nBBS5,Nonessential\nBBS7,Essential\nBBS9,N/A\nBMP15,Nonessential\nBMPR2,Essential\nC10orf2,N/A\nC2orf71,N/A\nCAV3,Nonessential\nCCDC28B,N/A\nCD2AP,Essential\nCDH23,Nonessential\nCDK5RAP2,Essential\nCEP152,Nonessential\nCEP290,Essential\nCEP41,N/A\nCHD7,Essential\nCOL17A1,Essential\nCOL4A3,Essential\nCOL4A4,Essential\nCOL4A5,Essential\nCPOX,Nonessential\nCPT2,Essential\nCYP1B1,Nonessential\nDFNB31,N/A\nDISP1,Essential\nDSP,Essential\nDUSP6,Essential\nDYNC2H1,Essential\nEDA,Essential\nEDNRB,Essential\nEMD,Nonessential\nEYS,N/A\nF5,Essential\nFGA,Essential\nFGFR1,Essential\nFGG,Essential\nFIGLA,Essential\nFLRT3,Essential\nFOXC1,Essential\nFOXI1,Essential\nFOXL2,Essential\nFZD4,Essential\nGALT,Nonessential\nGDAP1,Nonessential\nGDF9,Essential\nGDNF,Essential\nGGCX,Essential\nGJB2,Essential\nGJB3,Essential\nGNRHR,Essential\nGPR98,Nonessential\nHAMP,Nonessential\nHFE,Nonessential\nHMBS,Essential\nHNF1A,Essential\nHNF4A,Essential\nIL10RA,Nonessential\nIL17RD,Nonessential\nITGA7,Essential\nJUP,Essential\nKAL1,N/A\nKCNA5,Nonessential\nKCNE1,Nonessential\nKCNE2,Nonessential\nKCNH2,Essential\nKCNJ10,Essential\nKCNQ1,Nonessential\nKISS1R,Essential\nKRT14,Essential\nKRT5,Essential\nLAMA1,Essential\nLAMB3,Essential\nLMBRD1,Essential\nLMNA,Essential\nLRP5,Essential\nMAN1B1,N/A\nMEFV,Essential\nMFN2,Essential\nMITF,Essential\nMKKS,Essential\nMTR,Essential\nMYH7,N/A\nMYH7B,N/A\nMYO6,Nonessential\nMYO7A,Essential\nMYOC,Nonessential\nNEK1,Essential\nNEUROD1,Essential\nNEXN,N/A\nNLRP3,Essential\nNOBOX,Essential\nNOD2,Nonessential\nNPHS1,Essential\nNPHS2,Essential\nNSMF,N/A\nOCA2,Essential\nOTUD4,N/A\nPARK2,Essential\nPARK7,Nonessential\nPCDH15,Nonessential\nPDE3A,Essential\nPDX1,Essential\nPDZD7,Nonessential\nPINK1,Nonessential\nPITX2,Essential\nPOLG,Essential\nPRF1,Essential\nPROK2,Nonessential\nPROKR
2,Essential\nPRPH2,Nonessential\nPSMA3,N/A\nPSMB4,N/A\nPSMB8,Nonessential\nPSMB9,Nonessential\nPYGM,Nonessential\nRAB27A,Nonessential\nRBM20,Nonessential\nREC8,Essential\nRET,Essential\nRNF216,Essential\nROM1,Nonessential\nRP1L1,Nonessential\nSCN1A,Essential\nSCN2A,N/A\nSCN5A,Essential\nSEC23A,Essential\nSHH,Essential\nSLC26A4,Nonessential\nSLC3A1,Essential\nSLC45A2,Essential\nSLC7A9,Nonessential\nSMC1B,Essential\nSPRY4,Essential\nSTX11,Nonessential\nSTXBP2,Essential\nTACR3,Nonessential\nTEK,Essential\nTJP2,Essential\nTMPRSS3,Nonessential\nTNFRSF1A,Nonessential\nTRAPPC9,N/A\nTRIM54,Essential\nTRIM63,Nonessential\nTTC8,Essential\nTTN,Essential\nTYR,Essential\nUNC13D,Nonessential\nUROD,Essential\nUSH1C,Nonessential\nUSH2A,Nonessential\nWDR11,N/A\nWNT10A,Nonessential\nWT1,Essential\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5cb179a862af7fa63a3ec79b866a8c52a90d4f
| 33,564 |
ipynb
|
Jupyter Notebook
|
_posts/DS/Deep-Learning/Deep-Learning-Coursera/5. Sequence Models/Operations on word vectors - v2.ipynb
|
wansook0316/wansook.github.io
|
9a5c2cb84d7a9d64f7dfc7e9d4110d4e5af67efb
|
[
"MIT"
] | 136 |
2018-04-02T11:08:06.000Z
|
2022-02-27T21:31:17.000Z
|
_posts/DS/Deep-Learning/Deep-Learning-Coursera/5. Sequence Models/Operations on word vectors - v2.ipynb
|
wansook0316/wansook.github.io
|
9a5c2cb84d7a9d64f7dfc7e9d4110d4e5af67efb
|
[
"MIT"
] | 1 |
2019-01-20T06:47:19.000Z
|
2019-01-20T06:47:19.000Z
|
_posts/DS/Deep-Learning/Deep-Learning-Coursera/5. Sequence Models/Operations on word vectors - v2.ipynb
|
wansook0316/wansook.github.io
|
9a5c2cb84d7a9d64f7dfc7e9d4110d4e5af67efb
|
[
"MIT"
] | 201 |
2018-04-19T22:06:50.000Z
|
2022-03-13T16:21:58.000Z
| 39.909631 | 608 | 0.56212 |
[
[
[
"# Operations on word vectors\n\nWelcome to your first assignment of this week! \n\nBecause word embeddings are very computionally expensive to train, most ML practitioners will load a pre-trained set of embeddings. \n\n**After this assignment you will be able to:**\n\n- Load pre-trained word vectors, and measure similarity using cosine similarity\n- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______. \n- Modify word embeddings to reduce their gender bias \n\nLet's get started! Run the following cell to load the packages you will need.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom w2v_utils import *",
"Using TensorFlow backend.\n"
]
],
[
[
"Next, lets load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`. ",
"_____no_output_____"
]
],
[
[
"words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')",
"_____no_output_____"
]
],
[
[
"You've loaded:\n- `words`: set of words in the vocabulary.\n- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.\n\nYou've seen that one-hot vectors do not do a good job cpaturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Lets now see how you can use GloVe vectors to decide how similar two words are. \n\n",
"_____no_output_____"
],
[
"# 1 - Cosine similarity\n\nTo measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows: \n\n$$\\text{CosineSimilarity(u, v)} = \\frac {u . v} {||u||_2 ||v||_2} = cos(\\theta) \\tag{1}$$\n\nwhere $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value. \n\n<img src=\"images/cosine_sim.png\" style=\"width:800px;height:250px;\">\n<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>\n\n**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.\n\n**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \\sqrt{\\sum_{i=1}^{n} u_i^2}$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: cosine_similarity\n\ndef cosine_similarity(u, v):\n \"\"\"\n Cosine similarity reflects the degree of similariy between u and v\n \n Arguments:\n u -- a word vector of shape (n,) \n v -- a word vector of shape (n,)\n\n Returns:\n cosine_similarity -- the cosine similarity between u and v defined by the formula above.\n \"\"\"\n \n distance = 0.0\n \n ### START CODE HERE ###\n # Compute the dot product between u and v (≈1 line)\n dot = np.dot(u,v)\n # Compute the L2 norm of u (≈1 line)\n norm_u = np.linalg.norm(u)\n \n # Compute the L2 norm of v (≈1 line)\n norm_v = np.linalg.norm(v)\n # Compute the cosine similarity defined by formula (1) (≈1 line)\n cosine_similarity = dot/(norm_u*norm_v)\n ### END CODE HERE ###\n \n return cosine_similarity",
"_____no_output_____"
],
[
"father = word_to_vec_map[\"father\"]\nmother = word_to_vec_map[\"mother\"]\nball = word_to_vec_map[\"ball\"]\ncrocodile = word_to_vec_map[\"crocodile\"]\nfrance = word_to_vec_map[\"france\"]\nitaly = word_to_vec_map[\"italy\"]\nparis = word_to_vec_map[\"paris\"]\nrome = word_to_vec_map[\"rome\"]\n\nprint(\"cosine_similarity(father, mother) = \", cosine_similarity(father, mother))\nprint(\"cosine_similarity(ball, crocodile) = \",cosine_similarity(ball, crocodile))\nprint(\"cosine_similarity(france - paris, rome - italy) = \",cosine_similarity(france - paris, rome - italy))",
"cosine_similarity(father, mother) = 0.890903844289\ncosine_similarity(ball, crocodile) = 0.274392462614\ncosine_similarity(france - paris, rome - italy) = -0.675147930817\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **cosine_similarity(father, mother)** =\n </td>\n <td>\n 0.890903844289\n </td>\n </tr>\n <tr>\n <td>\n **cosine_similarity(ball, crocodile)** =\n </td>\n <td>\n 0.274392462614\n </td>\n </tr>\n <tr>\n <td>\n **cosine_similarity(france - paris, rome - italy)** =\n </td>\n <td>\n -0.675147930817\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around the cosine similarity of other inputs will give you a better sense of how word vectors behave. ",
"_____no_output_____"
],
[
"## 2 - Word analogy task\n\nIn the word analogy task, we complete the sentence <font color='brown'>\"*a* is to *b* as *c* is to **____**\"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \\approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity. \n\n**Exercise**: Complete the code below to be able to perform word analogies!",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: complete_analogy\n\ndef complete_analogy(word_a, word_b, word_c, word_to_vec_map):\n \"\"\"\n Performs the word analogy task as explained above: a is to b as c is to ____. \n \n Arguments:\n word_a -- a word, string\n word_b -- a word, string\n word_c -- a word, string\n word_to_vec_map -- dictionary that maps words to their corresponding vectors. \n \n Returns:\n best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity\n \"\"\"\n \n # convert words to lower case\n word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()\n \n ### START CODE HERE ###\n # Get the word embeddings v_a, v_b and v_c (≈1-3 lines)\n e_a = word_to_vec_map.get(word_a)\n e_b = word_to_vec_map.get(word_b)\n e_c = word_to_vec_map.get(word_c)\n # e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]\n ### END CODE HERE ###\n \n words = word_to_vec_map.keys()\n max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number\n best_word = None # Initialize best_word with None, it will help keep track of the word to output\n\n # loop over the whole word vector set\n for w in words: \n # to avoid best_word being one of the input words, pass on them.\n if w in [word_a, word_b, word_c] :\n continue\n \n ### START CODE HERE ###\n # Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)\n cosine_sim = cosine_similarity(np.subtract(e_b,e_a), np.subtract(word_to_vec_map.get(w),e_c))\n \n # If the cosine_sim is more than the max_cosine_sim seen so far,\n # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)\n if cosine_sim > max_cosine_sim:\n max_cosine_sim = cosine_sim\n best_word = w\n ### END CODE HERE ###\n \n return best_word",
"_____no_output_____"
]
],
[
[
"Run the cell below to test your code, this may take 1-2 minutes.",
"_____no_output_____"
]
],
[
[
"triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]\nfor triad in triads_to_try:\n print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))",
"italy -> italian :: spain -> spanish\nindia -> delhi :: japan -> tokyo\nman -> woman :: boy -> girl\nsmall -> smaller :: large -> larger\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **italy -> italian** ::\n </td>\n <td>\n spain -> spanish\n </td>\n </tr>\n <tr>\n <td>\n **india -> delhi** ::\n </td>\n <td>\n japan -> tokyo\n </td>\n </tr>\n <tr>\n <td>\n **man -> woman ** ::\n </td>\n <td>\n boy -> girl\n </td>\n </tr>\n <tr>\n <td>\n **small -> smaller ** ::\n </td>\n <td>\n large -> larger\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?. ",
"_____no_output_____"
],
[
"### Congratulations!\n\nYou've come to the end of this assignment. Here are the main points you should remember:\n\n- Cosine similarity a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.) \n- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started. \n\nEven though you have finished the graded portions, we recommend you take a look too at the rest of this notebook. \n\nCongratulations on finishing the graded portions of this notebook! \n",
"_____no_output_____"
],
[
"## 3 - Debiasing word vectors (OPTIONAL/UNGRADED) ",
"_____no_output_____"
],
[
"In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded. \n\nLets first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of \"gender\". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.) \n",
"_____no_output_____"
]
],
[
[
"g = word_to_vec_map['woman'] - word_to_vec_map['man']\nprint(g)",
"[-0.087144 0.2182 -0.40986 -0.03922 -0.1032 0.94165\n -0.06042 0.32988 0.46144 -0.35962 0.31102 -0.86824\n 0.96006 0.01073 0.24337 0.08193 -1.02722 -0.21122\n 0.695044 -0.00222 0.29106 0.5053 -0.099454 0.40445\n 0.30181 0.1355 -0.0606 -0.07131 -0.19245 -0.06115\n -0.3204 0.07165 -0.13337 -0.25068714 -0.14293 -0.224957\n -0.149 0.048882 0.12191 -0.27362 -0.165476 -0.20426\n 0.54376 -0.271425 -0.10245 -0.32108 0.2516 -0.33455\n -0.04371 0.01258 ]\n"
]
],
[
[
"Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity. ",
"_____no_output_____"
]
],
[
[
"print ('List of names and their similarities with constructed vector:')\n\n# girls and boys name\nname_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']\n\nfor w in name_list:\n print (w, cosine_similarity(word_to_vec_map[w], g))",
"List of names and their similarities with constructed vector:\njohn -0.23163356146\nmarie 0.315597935396\nsophie 0.318687898594\nronaldo -0.312447968503\npriya 0.17632041839\nrahul -0.169154710392\ndanielle 0.243932992163\nreza -0.079304296722\nkaty 0.283106865957\nyasmin 0.233138577679\n"
]
],
[
[
"As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not suprising, and the result seems acceptable. \n\nBut let's try with some other words.",
"_____no_output_____"
]
],
[
[
"print('Other words and their similarities:')\nword_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist', \n 'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']\nfor w in word_list:\n print (w, cosine_similarity(word_to_vec_map[w], g))",
"Other words and their similarities:\nlipstick 0.276919162564\nguns -0.18884855679\nscience -0.0608290654093\narts 0.00818931238588\nliterature 0.0647250443346\nwarrior -0.209201646411\ndoctor 0.118952894109\ntree -0.0708939917548\nreceptionist 0.330779417506\ntechnology -0.131937324476\nfashion 0.0356389462577\nteacher 0.179209234318\nengineer -0.0803928049452\npilot 0.00107644989919\ncomputer -0.103303588739\nsinger 0.185005181365\n"
]
],
[
[
"Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, \"computer\" is closer to \"man\" while \"literature\" is closer to \"woman\". Ouch! \n\nWe'll see below how to reduce the bias of these vectors, using an algorithm due to [Boliukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as \"actor\"/\"actress\" or \"grandmother\"/\"grandfather\" should remain gender specific, while other words such as \"receptionist\" or \"technology\" should be neutralized, i.e. not be gender-related. You will have to treat these two type of words differently when debiasing.\n\n### 3.1 - Neutralize bias for non-gender specific words \n\nThe figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\\perp}$. In linear algebra, we say that the 49 dimensional $g_{\\perp}$ is perpendicular (or \"othogonal\") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$. \n\nEven though $g_{\\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below. \n\n<img src=\"images/neutral.png\" style=\"width:800px;height:300px;\">\n<caption><center> **Figure 2**: The word vector for \"receptionist\" represented before and after applying the neutralize operation. </center></caption>\n\n**Exercise**: Implement `neutralize()` to remove the bias of words such as \"receptionist\" or \"scientist\". 
Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$: \n\n$$e^{bias\\_component} = \\frac{e \\cdot g}{||g||_2^2} * g\\tag{2}$$\n$$e^{debiased} = e - e^{bias\\_component}\\tag{3}$$\n\nIf you are an expert in linear algebra, you may recognize $e^{bias\\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.\n\n<!-- \n**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:\n$$u = u_B + u_{\\perp}$$\nwhere : $u_B = $ and $ u_{\\perp} = u - u_B $\n!--> ",
"_____no_output_____"
]
],
[
[
"def neutralize(word, g, word_to_vec_map):\n \"\"\"\n Removes the bias of \"word\" by projecting it on the space orthogonal to the bias axis. \n This function ensures that gender neutral words are zero in the gender subspace.\n \n Arguments:\n word -- string indicating the word to debias\n g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)\n word_to_vec_map -- dictionary mapping words to their corresponding vectors.\n \n Returns:\n e_debiased -- neutralized word vector representation of the input \"word\"\n \"\"\"\n \n ### START CODE HERE ###\n # Select word vector representation of \"word\". Use word_to_vec_map. (≈ 1 line)\n e = word_to_vec_map[word]\n \n # Compute e_biascomponent using the formula give above. (≈ 1 line)\n e_biascomponent = (np.dot(e,g)/np.linalg.norm(g)**2)*g\n \n # Neutralize e by substracting e_biascomponent from it \n # e_debiased should be equal to its orthogonal projection. (≈ 1 line)\n e_debiased = e-e_biascomponent\n ### END CODE HERE ###\n \n return e_debiased",
"_____no_output_____"
],
[
"e = \"receptionist\"\nprint(\"cosine similarity between \" + e + \" and g, before neutralizing: \", cosine_similarity(word_to_vec_map[\"receptionist\"], g))\n\ne_debiased = neutralize(\"receptionist\", g, word_to_vec_map)\nprint(\"cosine similarity between \" + e + \" and g, after neutralizing: \", cosine_similarity(e_debiased, g))",
"cosine similarity between receptionist and g, before neutralizing: 0.330779417506\ncosine similarity between receptionist and g, after neutralizing: -3.26732746085e-17\n"
]
],
[
[
"**Expected Output**: The second result is essentially 0, up to numerical roundof (on the order of $10^{-17}$).\n\n\n<table>\n <tr>\n <td>\n **cosine similarity between receptionist and g, before neutralizing:** :\n </td>\n <td>\n 0.330779417506\n </td>\n </tr>\n <tr>\n <td>\n **cosine similarity between receptionist and g, after neutralizing:** :\n </td>\n <td>\n -3.26732746085e-17\n </tr>\n</table>",
"_____no_output_____"
],
[
"### 3.2 - Equalization algorithm for gender-specific words\n\nNext, lets see how debiasing can also be applied to word pairs such as \"actress\" and \"actor.\" Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that \"actress\" is closer to \"babysit\" than \"actor.\" By applying neutralizing to \"babysit\" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that \"actor\" and \"actress\" are equidistant from \"babysit.\" The equalization algorithm takes care of this. \n\nThe key idea behind equalization is to make sure that a particular pair of words are equi-distant from the 49-dimensional $g_\\perp$. The equalization step also ensures that the two equalized steps are now the same distance from $e_{receptionist}^{debiased}$, or from any other work that has been neutralized. In pictures, this is how equalization works: \n\n<img src=\"images/equalize10.png\" style=\"width:800px;height:400px;\">\n\n\nThe derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) 
But the key equations are: \n\n$$ \\mu = \\frac{e_{w1} + e_{w2}}{2}\\tag{4}$$ \n\n$$ \\mu_{B} = \\frac {\\mu \\cdot \\text{bias_axis}}{||\\text{bias_axis}||_2^2} *\\text{bias_axis}\n\\tag{5}$$ \n\n$$\\mu_{\\perp} = \\mu - \\mu_{B} \\tag{6}$$\n\n$$ e_{w1B} = \\frac {e_{w1} \\cdot \\text{bias_axis}}{||\\text{bias_axis}||_2^2} *\\text{bias_axis}\n\\tag{7}$$ \n$$ e_{w2B} = \\frac {e_{w2} \\cdot \\text{bias_axis}}{||\\text{bias_axis}||_2^2} *\\text{bias_axis}\n\\tag{8}$$\n\n\n$$e_{w1B}^{corrected} = \\sqrt{ |{1 - ||\\mu_{\\perp} ||^2_2} |} * \\frac{e_{\\text{w1B}} - \\mu_B} {|(e_{w1} - \\mu_{\\perp}) - \\mu_B)|} \\tag{9}$$\n\n\n$$e_{w2B}^{corrected} = \\sqrt{ |{1 - ||\\mu_{\\perp} ||^2_2} |} * \\frac{e_{\\text{w2B}} - \\mu_B} {|(e_{w2} - \\mu_{\\perp}) - \\mu_B)|} \\tag{10}$$\n\n$$e_1 = e_{w1B}^{corrected} + \\mu_{\\perp} \\tag{11}$$\n$$e_2 = e_{w2B}^{corrected} + \\mu_{\\perp} \\tag{12}$$\n\n\n**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!",
"_____no_output_____"
]
],
[
[
"def equalize(pair, bias_axis, word_to_vec_map):\n \"\"\"\n Debias gender specific words by following the equalize method described in the figure above.\n \n Arguments:\n pair -- pair of strings of gender specific words to debias, e.g. (\"actress\", \"actor\") \n bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender\n word_to_vec_map -- dictionary mapping words to their corresponding vectors\n \n Returns\n e_1 -- word vector corresponding to the first word\n e_2 -- word vector corresponding to the second word\n \"\"\"\n \n ### START CODE HERE ###\n # Step 1: Select word vector representation of \"word\". Use word_to_vec_map. (≈ 2 lines)\n w1, w2 = pair[0],pair[1]\n e_w1, e_w2 = word_to_vec_map[w1],word_to_vec_map[w2]\n \n # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)\n mu = (e_w1 + e_w2)/2\n\n # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)\n mu_B = (np.dot(mu,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis\n mu_orth = mu-mu_B\n\n # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)\n e_w1B = (np.dot(e_w1,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis\n e_w2B = (np.dot(e_w2,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis\n \n # Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)\n corrected_e_w1B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w1B - mu_B)/np.abs((e_w1-mu_orth)-mu_B))\n corrected_e_w2B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w2B - mu_B)/np.abs((e_w2-mu_orth)-mu_B))\n\n # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)\n e1 = corrected_e_w1B + mu_orth\n e2 = corrected_e_w2B + mu_orth\n \n ### END CODE HERE ###\n \n return e1, e2",
"_____no_output_____"
],
[
"print(\"cosine similarities before equalizing:\")\nprint(\"cosine_similarity(word_to_vec_map[\\\"man\\\"], gender) = \", cosine_similarity(word_to_vec_map[\"man\"], g))\nprint(\"cosine_similarity(word_to_vec_map[\\\"woman\\\"], gender) = \", cosine_similarity(word_to_vec_map[\"woman\"], g))\nprint()\ne1, e2 = equalize((\"man\", \"woman\"), g, word_to_vec_map)\nprint(\"cosine similarities after equalizing:\")\nprint(\"cosine_similarity(e1, gender) = \", cosine_similarity(e1, g))\nprint(\"cosine_similarity(e2, gender) = \", cosine_similarity(e2, g))",
"cosine similarities before equalizing:\ncosine_similarity(word_to_vec_map[\"man\"], gender) = -0.117110957653\ncosine_similarity(word_to_vec_map[\"woman\"], gender) = 0.356666188463\n\ncosine similarities after equalizing:\ncosine_similarity(e1, gender) = -0.716572752584\ncosine_similarity(e2, gender) = 0.739659647493\n"
]
],
[
[
"**Expected Output**:\n\ncosine similarities before equalizing:\n<table>\n <tr>\n <td>\n **cosine_similarity(word_to_vec_map[\"man\"], gender)** =\n </td>\n <td>\n -0.117110957653\n </td>\n </tr>\n <tr>\n <td>\n **cosine_similarity(word_to_vec_map[\"woman\"], gender)** =\n </td>\n <td>\n 0.356666188463\n </td>\n </tr>\n</table>\n\ncosine similarities after equalizing:\n<table>\n <tr>\n <td>\n **cosine_similarity(u1, gender)** =\n </td>\n <td>\n -0.700436428931\n </td>\n </tr>\n <tr>\n <td>\n **cosine_similarity(u2, gender)** =\n </td>\n <td>\n 0.700436428931\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words. \n\nThese debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the \"gender\" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well. \n ",
"_____no_output_____"
],
[
"### Congratulations\n\nYou have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified. \n\nCongratulations on finishing this notebook! \n",
"_____no_output_____"
],
[
"**References**:\n- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to\nHomemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)\n- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a5cb1e3c996e94f4dff47a3429159b79a861c00
| 11,961 |
ipynb
|
Jupyter Notebook
|
notebooks/feature_engineering_new/raw/ex5.ipynb
|
qursaan/learntools
|
3df5094cb78ed1a6aaca2d16c782ade523d6a92b
|
[
"Apache-2.0"
] | 359 |
2018-03-23T15:57:52.000Z
|
2022-03-25T21:56:28.000Z
|
notebooks/feature_engineering_new/raw/ex5.ipynb
|
qursaan/learntools
|
3df5094cb78ed1a6aaca2d16c782ade523d6a92b
|
[
"Apache-2.0"
] | 84 |
2018-06-14T00:06:52.000Z
|
2022-02-08T17:25:54.000Z
|
notebooks/feature_engineering_new/raw/ex5.ipynb
|
qursaan/learntools
|
3df5094cb78ed1a6aaca2d16c782ade523d6a92b
|
[
"Apache-2.0"
] | 213 |
2018-05-02T19:06:31.000Z
|
2022-03-20T15:40:34.000Z
| 30.435115 | 588 | 0.551793 |
[
[
[
"# Introduction #\n\nIn this exercise, you'll work through several applications of PCA to the [*Ames*](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) dataset.",
"_____no_output_____"
],
[
"Run this cell to set everything up!",
"_____no_output_____"
]
],
[
[
"# Setup feedback system\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.feature_engineering_new.ex5 import *\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.decomposition import PCA\nfrom sklearn.feature_selection import mutual_info_regression\nfrom sklearn.model_selection import cross_val_score\nfrom xgboost import XGBRegressor\n\n# Set Matplotlib defaults\nplt.style.use(\"seaborn-whitegrid\")\nplt.rc(\"figure\", autolayout=True)\nplt.rc(\n \"axes\",\n labelweight=\"bold\",\n labelsize=\"large\",\n titleweight=\"bold\",\n titlesize=14,\n titlepad=10,\n)\n\n\ndef apply_pca(X, standardize=True):\n # Standardize\n if standardize:\n X = (X - X.mean(axis=0)) / X.std(axis=0)\n # Create principal components\n pca = PCA()\n X_pca = pca.fit_transform(X)\n # Convert to dataframe\n component_names = [f\"PC{i+1}\" for i in range(X_pca.shape[1])]\n X_pca = pd.DataFrame(X_pca, columns=component_names)\n # Create loadings\n loadings = pd.DataFrame(\n pca.components_.T, # transpose the matrix of loadings\n columns=component_names, # so the columns are the principal components\n index=X.columns, # and the rows are the original features\n )\n return pca, X_pca, loadings\n\n\ndef plot_variance(pca, width=8, dpi=100):\n # Create figure\n fig, axs = plt.subplots(1, 2)\n n = pca.n_components_\n grid = np.arange(1, n + 1)\n # Explained variance\n evr = pca.explained_variance_ratio_\n axs[0].bar(grid, evr)\n axs[0].set(\n xlabel=\"Component\", title=\"% Explained Variance\", ylim=(0.0, 1.0)\n )\n # Cumulative Variance\n cv = np.cumsum(evr)\n axs[1].plot(np.r_[0, grid], np.r_[0, cv], \"o-\")\n axs[1].set(\n xlabel=\"Component\", title=\"% Cumulative Variance\", ylim=(0.0, 1.0)\n )\n # Set up figure\n fig.set(figwidth=8, dpi=100)\n return axs\n\n\ndef make_mi_scores(X, y):\n X = X.copy()\n for colname in X.select_dtypes([\"object\", \"category\"]):\n X[colname], _ = X[colname].factorize()\n # All 
discrete features should now have integer dtypes\n discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]\n mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)\n mi_scores = pd.Series(mi_scores, name=\"MI Scores\", index=X.columns)\n mi_scores = mi_scores.sort_values(ascending=False)\n return mi_scores\n\n\ndef score_dataset(X, y, model=XGBRegressor()):\n # Label encoding for categoricals\n for colname in X.select_dtypes([\"category\", \"object\"]):\n X[colname], _ = X[colname].factorize()\n # Metric for Housing competition is RMSLE (Root Mean Squared Log Error)\n score = cross_val_score(\n model, X, y, cv=5, scoring=\"neg_mean_squared_log_error\",\n )\n score = -1 * score.mean()\n score = np.sqrt(score)\n return score\n\n\ndf = pd.read_csv(\"../input/fe-course-data/ames.csv\")",
"_____no_output_____"
]
],
[
[
"Let's choose a few features that are highly correlated with our target, `SalePrice`.\n",
"_____no_output_____"
]
],
[
[
"features = [\n \"GarageArea\",\n \"YearRemodAdd\",\n \"TotalBsmtSF\",\n \"GrLivArea\",\n]\n\nprint(\"Correlation with SalePrice:\\n\")\nprint(df[features].corrwith(df.SalePrice))",
"_____no_output_____"
]
],
[
[
"We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features.\n\nRun this cell to apply PCA and extract the loadings.",
"_____no_output_____"
]
],
[
[
"X = df.copy()\ny = X.pop(\"SalePrice\")\nX = X.loc[:, features]\n\n# `apply_pca`, defined above, reproduces the code from the tutorial\npca, X_pca, loadings = apply_pca(X)\nprint(loadings)",
"_____no_output_____"
]
],
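[
[
"To make the `loadings` table above concrete: a loading is the weight each original feature gets in a component, so a component score is just the standardized row projected onto that component's loading vector. A minimal sketch on hypothetical toy data (the values are made up):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical toy data: 6 rows x 3 features
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))

# Standardize first, as apply_pca does
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA()
scores = pca.fit_transform(Xs)   # columns PC1..PC3 (the X_pca frame)
loadings = pca.components_.T     # rows = original features, columns = components

# A component score is the standardized row projected onto that component's loadings
print(np.allclose(scores, Xs @ loadings))  # True
```

This is why reading the signs and magnitudes of a loading column tells you what contrast that component captures.",
"_____no_output_____"
]
],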
[
[
"# 1) Interpret Component Loadings\n\nLook at the loadings for components `PC1` and `PC3`. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution.",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this cell to receive credit!)\nq_1.check()",
"_____no_output_____"
]
],
[
[
"-------------------------------------------------------------------------------\n\nYour goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of `X_pca` to `X`).\n\n# 2) Create New Features\n\nAdd one or more new features to the dataset `X`. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the `hint` below!)",
"_____no_output_____"
]
],
[
[
"X = df.copy()\ny = X.pop(\"SalePrice\")\n\n# YOUR CODE HERE: Add new features to X.\n# ____\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\n\n# Check your answer\nq_2.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_2.hint()\n#_COMMENT_IF(PROD)_\nq_2.solution()",
"_____no_output_____"
],
[
"#%%RM_IF(PROD)%%\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX[\"Feature1\"] = X.GrLivArea - X.TotalBsmtSF\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\nq_2.assert_check_failed()",
"_____no_output_____"
],
[
"#%%RM_IF(PROD)%%\n# Solution 1: Inspired by loadings\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX[\"Feature1\"] = X.GrLivArea + X.TotalBsmtSF\nX[\"Feature2\"] = X.YearRemodAdd * X.TotalBsmtSF\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\n\n# Solution 2: Uses components\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX = X.join(X_pca)\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\nq_2.assert_check_passed()",
"_____no_output_____"
]
],
[
[
"-------------------------------------------------------------------------------\n\nThe next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. PCA in particular can show you anomalous *variation* which might not be apparent from the original features: neither small houses nor houses with large basements are unusual, but it is unusual for small houses to have large basements. That's the kind of thing a principal component can show you.\n\nRun the next cell to show distribution plots for each of the principal components you created above.",
"_____no_output_____"
]
],
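[
[
"One simple way to turn \"extreme component score\" into an outlier flag is a robust z-score on a component. This is only a sketch with made-up values — the `3` cutoff is a judgment call, not part of the exercise:

```python
import numpy as np

# Hypothetical PC1 scores with one extreme point
pc1 = np.array([0.1, -0.3, 0.2, 0.0, 8.5, -0.2])

# Flag points more than 3 robust z-scores from the median (MAD-based)
med = np.median(pc1)
mad = np.median(np.abs(pc1 - med))
robust_z = 0.6745 * (pc1 - med) / mad
outliers = np.where(np.abs(robust_z) > 3)[0]
print(outliers)  # [4]
```
",
"_____no_output_____"
]
],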
[
[
"sns.catplot(\n y=\"value\",\n col=\"variable\",\n data=X_pca.melt(),\n kind='boxen',\n sharey=False,\n col_wrap=2,\n);",
"_____no_output_____"
]
],
[
[
"As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is.\n\nNow run the next cell to see those houses that sit at the extremes of a component:",
"_____no_output_____"
]
],
[
[
"# You can change PC1 to PC2, PC3, or PC4\ncomponent = \"PC1\"\n\nidx = X_pca[component].sort_values(ascending=False).index\ndf.loc[idx, [\"SalePrice\", \"Neighborhood\", \"SaleCondition\"] + features]",
"_____no_output_____"
]
],
[
[
"# 3) Outlier Detection\n\nDo you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data?\n\nAfter you've thought about your answer, run the next cell for the solution and some discussion.",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this cell to receive credit!)\nq_3.check()",
"_____no_output_____"
]
],
[
[
"# Keep Going #\n\n[**Apply target encoding**](#$NEXT_NOTEBOOK_URL$) to give a boost to categorical features.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5cc871e87f9f6e1365db7e671f7d537ad6973b
| 8,162 |
ipynb
|
Jupyter Notebook
|
LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb
|
krajai/testt
|
3aaf5fd7fe85e712c8c1615852b50f9ccb6737e5
|
[
"BSD-3-Clause"
] | 1 |
2022-03-24T07:46:45.000Z
|
2022-03-24T07:46:45.000Z
|
LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb
|
PZawieja/awesome-notebooks
|
8ae86e5689749716e1315301cecdad6f8843dcf8
|
[
"BSD-3-Clause"
] | null | null | null |
LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb
|
PZawieja/awesome-notebooks
|
8ae86e5689749716e1315301cecdad6f8843dcf8
|
[
"BSD-3-Clause"
] | null | null | null | 24.005882 | 297 | 0.549988 |
[
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# LinkedIn - Send posts feed to gsheet\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #linkedin #profile #post #stats #naas_drivers #automation #content #googlesheets",
"_____no_output_____"
],
[
"**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import libraries",
"_____no_output_____"
]
],
[
[
"from naas_drivers import linkedin, gsheet\nimport naas\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"### Setup LinkedIn\n👉 <a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>How to get your cookies ?</a>",
"_____no_output_____"
]
],
[
[
"# Lindekin cookies\nLI_AT = \"AQEDARCNSioDe6wmAAABfqF-HR4AAAF-xYqhHlYAtSu7EZZEpFer0UZF-GLuz2DNSz4asOOyCRxPGFjenv37irMObYYgxxxxxxx\"\nJSESSIONID = \"ajax:12XXXXXXXXXXXXXXXXX\"\n\n# Linkedin profile url\nPROFILE_URL = \"https://www.linkedin.com/in/xxxxxx/\"\n\n# Number of posts updated in Gsheet (This avoid to requests the entire database)\nLIMIT = 10",
"_____no_output_____"
]
],
[
[
"### Setup your Google Sheet\n👉 Get your spreadsheet URL<br>\n👉 Share your gsheet with our service account to connect : [email protected]<br>\n👉 Create your sheet before sending data into it",
"_____no_output_____"
]
],
[
[
"# Spreadsheet URL\nSPREADSHEET_URL = \"https://docs.google.com/spreadsheets/d/XXXXXXXXXXXXXXXXXXXX\"\n\n# Sheet name\nSHEET_NAME = \"LK_POSTS_FEED\"",
"_____no_output_____"
]
],
[
[
"### Setup Naas",
"_____no_output_____"
]
],
[
[
"naas.scheduler.add(cron=\"0 8 * * *\")\n\n#-> To delete your scheduler, please uncomment the line below and execute this cell\n# naas.scheduler.delete()",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Get data from Google Sheet",
"_____no_output_____"
]
],
[
[
"df_gsheet = gsheet.connect(SPREADSHEET_URL).get(sheet_name=SHEET_NAME)\ndf_gsheet",
"_____no_output_____"
]
],
[
[
"### Get new posts and update last posts stats",
"_____no_output_____"
]
],
[
[
"def get_new_posts(df_gsheet, key, limit=LIMIT, sleep=False):\n # First run: no history in the sheet yet, fetch the whole feed\n if len(df_gsheet) == 0:\n df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=-1, sleep=sleep)\n return df_posts_feed\n \n # Get the latest posts and refresh the stats of the ones already stored\n df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=limit, sleep=sleep)\n df_new = pd.concat([df_posts_feed, df_gsheet]).drop_duplicates(key, keep=\"first\")\n return df_new\n\ndf_new = get_new_posts(df_gsheet, \"POST_URL\", limit=LIMIT)\ndf_new",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Send to Google Sheet",
"_____no_output_____"
]
],
[
[
"gsheet.connect(SPREADSHEET_URL).send(df_new,\n sheet_name=SHEET_NAME,\n append=False)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a5ccfe8b51aa6e4b31936d218383f18a849b226
| 143,975 |
ipynb
|
Jupyter Notebook
|
Analysis.ipynb
|
jasonwvh/streamlit-backorder-prediction
|
f87089d7ba554ec4a173ab041599593fb59a6266
|
[
"MIT"
] | null | null | null |
Analysis.ipynb
|
jasonwvh/streamlit-backorder-prediction
|
f87089d7ba554ec4a173ab041599593fb59a6266
|
[
"MIT"
] | null | null | null |
Analysis.ipynb
|
jasonwvh/streamlit-backorder-prediction
|
f87089d7ba554ec4a173ab041599593fb59a6266
|
[
"MIT"
] | null | null | null | 57.844516 | 43,094 | 0.635201 |
[
[
[
"---\n\n# **Product Backorders**\n\n---\n\n## Introduction\nA **product backorder** is a customer order that has not been fulfilled. Product backorder may be the result of strong sales performance (e.g. the product is in such high demand that production cannot keep up with sales). However, backorders can upset consumers, lead to canceled orders and decreased customer loyalty. Companies want to avoid backorders, but also avoid overstocking every product (leading to higher inventory costs). Hence, this project aims to develop a product that can predict whether a product will go on backorder or not.\n\n---\n\n## Problem Statement\n1. What are the variables that lead to backorder?\n2. What are the relationships between the variables?\n\n---\n\n## Hypothesis\nNational inventory and sales performance are directly correlated with backorder.\n\n---\n\n## Objective\n1. To identify the relationships between the attributes\n2. To identify which attributes correlate most to backorder\n3. To predict backorder by selecting relevant attributes\n\n---\n\n## Dataset\nFrom the [Backorders Wiki Page](https://github.com/AasthaMadan/Product-Backorders/wiki/Product-back-orders-prediction), we can find the information about the dataset:\n\n* sku – Random ID for the product\n* national_inv – Current inventory level for the part\n* lead_time – Transit time for product (if available)\n* in_transit_qty – Amount of product in transit from source\n* forecast_3_month – Forecast sales for the next 3 months\n* forecast_6_month – Forecast sales for the next 6 months\n* forecast_9_month – Forecast sales for the next 9 months\n* sales_1_month – Sales quantity for the prior 1 month time period\n* sales_3_month – Sales quantity for the prior 3 month time period\n* sales_6_month – Sales quantity for the prior 6 month time period\n* sales_9_month – Sales quantity for the prior 9 month time period\n* min_bank – Minimum recommended amount to stock\n* potential_issue – Source issue for part identified\n* pieces_past_due – Parts overdue from source\n* perf_6_month_avg – Source performance for prior 6 month period\n* perf_12_month_avg – Source performance for prior 12 month period\n* local_bo_qty – Amount of stock orders overdue\n* deck_risk – Part risk flag\n* oe_constraint – Part risk flag\n* ppap_risk – Part risk flag\n* stop_auto_buy – Part risk flag\n* rev_stop – Part risk flag\n* went_on_backorder – Product actually went on backorder.\n\n---\n\n\n\n\n",
"_____no_output_____"
],
[
"# **Exploratory Data Analysis**\r\n\r\nExploratory data analysis is the initial investigation of data so as to discover patterns, spot anomalies, and test hypotheses with the help of statistics.\r\n\r\nTo do this, we will first import our datasets and merge them using the pandas library.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\r\n\r\n# Load train and test data\r\ntrain_df = pd.read_csv(\"drive/MyDrive/data_mining_portfolio/Kaggle_Training_Dataset_v2.csv\")\r\ntest_df = pd.read_csv(\"drive/MyDrive/data_mining_portfolio/Kaggle_Test_Dataset_v2.csv\")\r\n\r\n# Merge both the datasets\r\nmerged_df = pd.concat([train_df, test_df])",
"/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2718: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
]
],
[
[
"Next, we can look at some properties of the dataset such as shape, data types, and part of the actual data itself. ",
"_____no_output_____"
]
],
[
[
"# Size of dataset\r\nprint(\"Shape:\\n\", merged_df.shape)\r\n\r\n# Look at the data types\r\nprint(\"\\nDatatypes:\\n\", merged_df.dtypes)\r\n\r\n# An initial look at the 1st 5 rows \r\nprint(\"\\nFirst 5:\\n\", merged_df.head())\r\n\r\n# The last 5 rows\r\nprint(\"\\nLast5:\\n\", merged_df.tail())\r\n\r\n# Count number of null values for each variable\r\nprint(\"\\nNulls:\\n\", merged_df.isnull().sum())",
"Shape:\n (1929937, 23)\n\nDatatypes:\n sku object\nnational_inv float64\nlead_time float64\nin_transit_qty float64\nforecast_3_month float64\nforecast_6_month float64\nforecast_9_month float64\nsales_1_month float64\nsales_3_month float64\nsales_6_month float64\nsales_9_month float64\nmin_bank float64\npotential_issue object\npieces_past_due float64\nperf_6_month_avg float64\nperf_12_month_avg float64\nlocal_bo_qty float64\ndeck_risk object\noe_constraint object\nppap_risk object\nstop_auto_buy object\nrev_stop object\nwent_on_backorder object\ndtype: object\n\nFirst 5:\n sku national_inv lead_time ... stop_auto_buy rev_stop went_on_backorder\n0 1026827 0.0 NaN ... Yes No No\n1 1043384 2.0 9.0 ... Yes No No\n2 1043696 2.0 NaN ... Yes No No\n3 1043852 7.0 8.0 ... Yes No No\n4 1044048 8.0 NaN ... Yes No No\n\n[5 rows x 23 columns]\n\nLast5:\n sku national_inv ... rev_stop went_on_backorder\n242071 3526988 13.0 ... No No\n242072 3526989 13.0 ... No No\n242073 3526990 10.0 ... No No\n242074 3526991 2913.0 ... No No\n242075 (242075 rows) NaN ... NaN NaN\n\n[5 rows x 23 columns]\n\nNulls:\n sku 0\nnational_inv 2\nlead_time 115619\nin_transit_qty 2\nforecast_3_month 2\nforecast_6_month 2\nforecast_9_month 2\nsales_1_month 2\nsales_3_month 2\nsales_6_month 2\nsales_9_month 2\nmin_bank 2\npotential_issue 2\npieces_past_due 2\nperf_6_month_avg 2\nperf_12_month_avg 2\nlocal_bo_qty 2\ndeck_risk 2\noe_constraint 2\nppap_risk 2\nstop_auto_buy 2\nrev_stop 2\nwent_on_backorder 2\ndtype: int64\n"
]
],
[
[
"We find out that:\r\n\r\n* There are almost 2 million records with 23 different attributes\r\n* 15 of these attributes are numerical\r\n* 8 of these attributes are non-numerical\r\n* lead_time has 115619 null values\r\n\r\n---\r\n\r\nWe now aggregate the dataset: first a summary of the overall dataset, then a summary separating the classes.",
"_____no_output_____"
]
],
[
[
"# Select numerical parameters\r\nnum_params = ['national_inv',\r\n 'lead_time',\r\n 'in_transit_qty',\r\n 'forecast_3_month',\r\n 'forecast_6_month',\r\n 'forecast_9_month',\r\n 'sales_1_month',\r\n 'sales_3_month',\r\n 'sales_6_month',\r\n 'sales_9_month',\r\n 'min_bank',\r\n 'pieces_past_due',\r\n 'perf_6_month_avg',\r\n 'perf_12_month_avg',\r\n 'local_bo_qty']\r\n\r\n# Describe data\r\nprint(\"\\nSummary:\\n\", merged_df[num_params].describe().transpose())\r\n\r\n# Pivot backorder\r\nprint(\"\\nBackorder:\\n\", merged_df.pivot_table(values=num_params,index=['went_on_backorder']).transpose())\r\n\r\n# Class proportion for target variable\r\nprint(\"\\nProportion of Backorder before SMOTE:\\n\", merged_df['went_on_backorder'].value_counts(normalize=True))",
"\nSummary:\n count mean ... 75% max\nnational_inv 1929935.0 496.568259 ... 80.00 12334404.0\nlead_time 1814318.0 7.878627 ... 9.00 52.0\nin_transit_qty 1929935.0 43.064397 ... 0.00 489408.0\nforecast_3_month 1929935.0 178.539864 ... 4.00 1510592.0\nforecast_6_month 1929935.0 345.465893 ... 12.00 2461360.0\nforecast_9_month 1929935.0 506.606748 ... 20.00 3777304.0\nsales_1_month 1929935.0 55.368164 ... 4.00 741774.0\nsales_3_month 1929935.0 174.663858 ... 15.00 1105478.0\nsales_6_month 1929935.0 341.565349 ... 31.00 2146625.0\nsales_9_month 1929935.0 523.577094 ... 47.00 3205172.0\nmin_bank 1929935.0 52.776366 ... 3.00 313319.0\npieces_past_due 1929935.0 2.016193 ... 0.00 146496.0\nperf_6_month_avg 1929935.0 -6.899870 ... 0.96 1.0\nperf_12_month_avg 1929935.0 -6.462343 ... 0.95 1.0\nlocal_bo_qty 1929935.0 0.653704 ... 0.00 12530.0\n\n[15 rows x 8 columns]\n\nBackorder:\n went_on_backorder No Yes\nforecast_3_month 178.566740 174.856734\nforecast_6_month 345.974100 275.821257\nforecast_9_month 507.636728 365.458479\nin_transit_qty 43.344159 4.725842\nlead_time 7.890117 6.354233\nlocal_bo_qty 0.626744 4.348258\nmin_bank 52.962026 27.333524\nnational_inv 500.036607 21.266361\nperf_12_month_avg -6.488200 -2.918946\nperf_6_month_avg -6.926540 -3.245012\npieces_past_due 2.003972 3.691009\nsales_1_month 55.556432 29.567914\nsales_3_month 175.322597 84.390315\nsales_6_month 342.979619 147.753952\nsales_9_month 525.812742 217.204134\n\nProportion of Backorder before SMOTE:\n No 0.992756\nYes 0.007244\nName: went_on_backorder, dtype: float64\n"
]
],
[
[
"We find that overall:\r\n* The mean inventory of products is about 500\r\n* The mean quantity of product in transit is 43\r\n* The mean sales per month is 55\r\n\r\nWhen separated by class:\r\n* Products that did not go on backorder have high inventory, but also higher sales and quantity in transit.\r\n* Products that went on backorder have low inventory, but also lower sales and quantity in transit.\r\n* **For products that go on backorder, sales are higher than the inventory, whereas for products that do not go on backorder, sales are lower than the inventory.**\r\n\r\nThis confirms our hypothesis, in that national inventory and sales performance are directly correlated with backorder.\r\n\r\n---\r\n\r\nNow we can construct a correlation matrix to see the correlation between the attributes. ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\n# Correlation Matrix Plot of all variables\r\nvarnames=list(merged_df)[1:] \r\ncorrelations = merged_df[varnames].corr()\r\nfig = plt.figure()\r\nax = fig.add_subplot(111)\r\ncax = ax.matshow(correlations, vmin=-1, vmax=1)\r\nfig.colorbar(cax)\r\nticks = np.arange(0,23,1)\r\nax.set_xticks(ticks)\r\nax.set_yticks(ticks)\r\nax.set_xticklabels(varnames,rotation=90)\r\nax.set_yticklabels(varnames)\r\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see that:\r\n\r\n* Sales and forecast variables are highly correlated.\r\n\r\nThis means that, when doing our prediction, we don't have to use every attribute that correlates highly with the others. Using fewer attributes may speed up training time.\r\n\r\n---",
"_____no_output_____"
],
[
"## Tableau EDA\r\n\r\nWe can now perform more complex data analysis on Tableau. The first chart we will look at is **Real sales vs forecast**:\r\n\r\n\r\n\r\nWe see that the prediction in the original dataset correlates highly with the real sales. We can investigate further by looking at the \"yes\" and \"no\" backorder products separately.\r\n\r\n\r\n\r\nFor the \"no\" backorder products, the forecasted sales and the actual sales are the same. But for the \"yes\" backorder products, there is a disparity between the forecasted sales and the actual sales. \r\n\r\n**The actual sales are higher than the forecasted sales for backorder products.**\r\n\r\n---\r\n",
"_____no_output_____"
],
[
"# **Data Pre-processing**\r\nData pre-processing is a data mining technique that involves transforming raw data into an understandable format. Real-world data is often inconsistent and incomplete, and we need to transform it into a format that our machine learning models can understand.\r\n\r\nFirst of all, we need to get rid of the null values, as there are many in the dataset. We also remove the 'SKU' column as it is the ID of the product and is not meaningful in any way.\r\n\r\nAfter that, we can compare the proportion of \"Yes\" and \"No\" backorder products:",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import normalize\r\nfrom imblearn.over_sampling import SMOTE\r\n\r\n# Replace NaN values in lead_time\r\nmerged_df.lead_time = merged_df.lead_time.fillna(merged_df.lead_time.median())\r\n\r\n# Change the -99 placeholder to NA for perf_6_month_avg and perf_12_month_avg\r\nmerged_df['perf_6_month_avg'] = merged_df['perf_6_month_avg'].replace(-99, np.NaN)\r\nmerged_df['perf_12_month_avg'] = merged_df['perf_12_month_avg'].replace(-99, np.NaN)\r\n\r\n# Drop rows with null values \r\nmerged_df = merged_df.dropna()\r\n\r\n# Remove the sku column\r\nmerged_df = merged_df.drop([\"sku\"], axis=1)\r\n\r\n# Class proportion for target variable\r\nprint(\"\\nProportion of Backorder before SMOTE:\\n\", merged_df['went_on_backorder'].value_counts(normalize=True))",
"/usr/local/lib/python3.6/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).\n \"(https://pypi.org/project/six/).\", FutureWarning)\n/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.neighbors.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.\n warnings.warn(message, FutureWarning)\n"
]
],
[
[
"And find out that 98.13% of products are \"No\" backorder and only 1.87% are \"Yes\" backorder.\r\n\r\n---\r\n\r\nNext, we want to transform the 'Yes' and 'No' values to '1' and '0', as some models are not able to work with non-numerical values. Also, we want to remove the records where forecast and sales are 0, because these products do not contribute to our prediction.",
"_____no_output_____"
]
],
[
[
"# Convert from non-numerical to numerical\r\ncat_params = ['potential_issue', 'deck_risk', 'oe_constraint', 'ppap_risk',\r\n 'stop_auto_buy', 'rev_stop', 'went_on_backorder']\r\n\r\nfor param in cat_params:\r\n merged_df[param] = (merged_df[param] == 'Yes').astype(int)\r\n\r\n# Remove records where forecast and sales are 0 \r\nattributes = ['forecast_3_month', 'forecast_6_month', 'forecast_9_month',\r\n 'sales_1_month', 'sales_3_month', 'sales_6_month', 'sales_9_month']\r\n \r\nfor attr in attributes:\r\n merged_df = merged_df.drop(merged_df[merged_df[attr] == 0].index)",
"_____no_output_____"
]
],
[
[
"As the data is still vastly unbalanced, we need to balance it somehow. We can do this by applying the SMOTE technique. After that, we can save it to a csv file for future use.",
"_____no_output_____"
]
],
[
[
"# SMOTE technique to balance dataset\r\nX = merged_df.drop(['went_on_backorder'], axis = 1)\r\ny = merged_df['went_on_backorder']\r\noversample = SMOTE()\r\nX, y = oversample.fit_resample(X, y)\r\ndf = pd.concat([pd.DataFrame(X), pd.DataFrame(y)], axis=1)\r\n\r\n# Rename labels in final dataset\r\nlabels = merged_df.columns\r\ndf.columns = labels\r\n\r\n# Save to csv\r\ndf.to_csv(r'data.csv')\r\n\r\n# Class proportion before SMOTE\r\nprint(\"\\nProportion of Backorder before SMOTE:\\n\", merged_df['went_on_backorder'].value_counts(normalize=True))\r\n\r\n# Class proportion after SMOTE\r\nprint(\"\\nProportion of Backorder after SMOTE:\\n\", df['went_on_backorder'].value_counts(normalize=True))",
"/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function safe_indexing is deprecated; safe_indexing is deprecated in version 0.22 and will be removed in version 0.24.\n warnings.warn(msg, category=FutureWarning)\n"
]
],
[
[
"---\r\n# **Descriptive Data Mining**\r\n\r\nDescriptive data mining is applying data mining techniques to determine the similarities in the data and to find existing patterns.\r\n\r\nWe will apply 2 descriptive data mining techniques here:\r\n1. A Priori Association Rules\r\n2. K-Means Clustering\r\n\r\n---",
"_____no_output_____"
],
[
"## Association Rules\r\nAssociation rules measure how frequently items appear together.\r\n\r\n---\r\n\r\n### RapidMiner\r\nWe first run this on RapidMiner. The data is first discretized and transformed to Binomial before we process it. It should be noted that we run FP-Growth in RapidMiner because it does not have A Priori, and A Priori on Colab because FP-Growth is unavailable there. However, they are very similar algorithms. The figure below shows the operators used in RapidMiner:\r\n\r\n\r\n\r\nAnd the two figures below show the output:\r\n\r\n\r\n\r\n\r\n\r\nWe can conclude from the results that, when\r\n* oe_constraint\r\n* potential_issue\r\n* deck_risk\r\n\r\nare **False**, then rev_stop will most probably also be **False**.\r\n\r\nNext, we export the data that we have processed here into a CSV that we can use on Colab.\r\n\r\n---\r\n\r\nWe can now use the exported data and run A Priori Association Rules on Colab.",
"_____no_output_____"
]
],
[
[
"!pip install -q mlxtend\r\nfrom mlxtend.frequent_patterns import apriori, association_rules\r\nimport pandas as pd\r\n\r\n# Read data\r\ndisc_df = pd.read_csv(\"drive/MyDrive/data_mining_portfolio/discretized.csv\")\r\n\r\n# Analyze frequent itemsets and write to csv\r\nap = apriori(disc_df, min_support=0.95, use_colnames=True)\r\nprint(\"\\nFrequent Itemsets:\\n\", ap)\r\nap.to_csv('itemsets.csv')\r\n\r\n# Create association rules and write to csv\r\nrules = association_rules(ap, metric=\"confidence\", min_threshold=0.8)\r\nprint(\"\\nAssociation Rules:\\n\", rules)\r\nrules.to_csv('rules.csv')",
"\nFrequent Itemsets:\n support itemsets\n0 0.998698 (potential_issue = range1 [-? - 0.500])\n1 0.950488 (deck_risk = range1 [-? - 0.500])\n2 0.999755 (oe_constraint = range1 [-? - 0.500])\n3 0.960804 (stop_auto_buy = range2 [0 - ?])\n4 1.000000 (rev_stop = range1 [-? - 0])\n5 0.998453 (oe_constraint = range1 [-? - 0.500], potentia...\n6 0.959502 (stop_auto_buy = range2 [0 - ?], potential_iss...\n7 0.998698 (rev_stop = range1 [-? - 0], potential_issue =...\n8 0.950243 (oe_constraint = range1 [-? - 0.500], deck_ris...\n9 0.950488 (rev_stop = range1 [-? - 0], deck_risk = range...\n10 0.960559 (oe_constraint = range1 [-? - 0.500], stop_aut...\n11 0.999755 (oe_constraint = range1 [-? - 0.500], rev_stop...\n12 0.960804 (stop_auto_buy = range2 [0 - ?], rev_stop = ra...\n13 0.959257 (oe_constraint = range1 [-? - 0.500], stop_aut...\n14 0.998453 (oe_constraint = range1 [-? - 0.500], rev_stop...\n15 0.959502 (stop_auto_buy = range2 [0 - ?], rev_stop = ra...\n16 0.950243 (oe_constraint = range1 [-? - 0.500], rev_stop...\n17 0.960559 (stop_auto_buy = range2 [0 - ?], oe_constraint...\n18 0.959257 (stop_auto_buy = range2 [0 - ?], oe_constraint...\n\nAssociation Rules:\n antecedents ... conviction\n0 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n1 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n2 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n3 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n4 (rev_stop = range1 [-? - 0]) ... 1.000000\n5 (potential_issue = range1 [-? - 0.500]) ... inf\n6 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n7 (deck_risk = range1 [-? - 0.500]) ... 0.950488\n8 (rev_stop = range1 [-? - 0]) ... 1.000000\n9 (deck_risk = range1 [-? - 0.500]) ... inf\n10 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n11 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n12 (oe_constraint = range1 [-? - 0.500]) ... inf\n13 (rev_stop = range1 [-? - 0]) ... 1.000000\n14 (stop_auto_buy = range2 [0 - ?]) ... inf\n15 (rev_stop = range1 [-? - 0]) ... 
1.000000\n16 (oe_constraint = range1 [-? - 0.500], stop_aut... ... 0.960559\n17 (oe_constraint = range1 [-? - 0.500], potentia... ... 0.998453\n18 (stop_auto_buy = range2 [0 - ?], potential_iss... ... 0.959502\n19 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n20 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n21 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n22 (oe_constraint = range1 [-? - 0.500], rev_stop... ... 0.999755\n23 (oe_constraint = range1 [-? - 0.500], potentia... ... inf\n24 (rev_stop = range1 [-? - 0], potential_issue =... ... 0.998698\n25 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n26 (rev_stop = range1 [-? - 0]) ... 1.000000\n27 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n28 (stop_auto_buy = range2 [0 - ?], rev_stop = ra... ... 0.960804\n29 (stop_auto_buy = range2 [0 - ?], potential_iss... ... inf\n30 (rev_stop = range1 [-? - 0], potential_issue =... ... 0.998698\n31 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n32 (rev_stop = range1 [-? - 0]) ... 1.000000\n33 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n34 (oe_constraint = range1 [-? - 0.500], rev_stop... ... 0.999755\n35 (oe_constraint = range1 [-? - 0.500], deck_ris... ... inf\n36 (rev_stop = range1 [-? - 0], deck_risk = range... ... 0.950488\n37 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n38 (rev_stop = range1 [-? - 0]) ... 1.000000\n39 (deck_risk = range1 [-? - 0.500]) ... 0.950488\n40 (oe_constraint = range1 [-? - 0.500], stop_aut... ... inf\n41 (stop_auto_buy = range2 [0 - ?], rev_stop = ra... ... 0.960804\n42 (oe_constraint = range1 [-? - 0.500], rev_stop... ... 0.999755\n43 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n44 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n45 (rev_stop = range1 [-? - 0]) ... 1.000000\n46 (oe_constraint = range1 [-? - 0.500], stop_aut... ... 0.960559\n47 (oe_constraint = range1 [-? - 0.500], stop_aut... ... inf\n48 (stop_auto_buy = range2 [0 - ?], potential_iss... ... 0.959502\n49 (oe_constraint = range1 [-? 
- 0.500], rev_stop... ... 0.998453\n50 (oe_constraint = range1 [-? - 0.500], stop_aut... ... 0.960559\n51 (stop_auto_buy = range2 [0 - ?], rev_stop = ra... ... 0.960804\n52 (stop_auto_buy = range2 [0 - ?], potential_iss... ... 0.959502\n53 (oe_constraint = range1 [-? - 0.500], rev_stop... ... 0.999755\n54 (oe_constraint = range1 [-? - 0.500], potentia... ... 0.998453\n55 (rev_stop = range1 [-? - 0], potential_issue =... ... 0.998698\n56 (stop_auto_buy = range2 [0 - ?]) ... 0.960804\n57 (oe_constraint = range1 [-? - 0.500]) ... 0.999755\n58 (rev_stop = range1 [-? - 0]) ... 1.000000\n59 (potential_issue = range1 [-? - 0.500]) ... 0.998698\n\n[60 rows x 9 columns]\n"
]
],
[
[
"We managed to obtain very similar results, whereby when\r\n\r\n* oe_constraint\r\n* potential_issue\r\n* deck_risk\r\n\r\nare **False**, rev_stop is also **False**.\r\n\r\n---",
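A rule's confidence can be checked by hand. Below is a minimal sketch for the rule (oe_constraint = False) → (rev_stop = False), using a tiny synthetic frame as a stand-in for the real backorder data — the values are illustrative only.

```python
# A hand-rolled confidence check for one rule above. The tiny frame is a
# synthetic stand-in for the backorder data, not the real dataset.
import pandas as pd

toy = pd.DataFrame({
    "oe_constraint": [0, 0, 0, 0, 1],
    "rev_stop":      [0, 0, 0, 1, 1],
})

antecedent = toy["oe_constraint"] == 0          # rows where the antecedent holds
both = antecedent & (toy["rev_stop"] == 0)      # rows where the full rule holds
confidence = both.sum() / antecedent.sum()      # P(consequent | antecedent)
print(f"confidence: {confidence:.2f}")          # -> confidence: 0.75
```

On the real data, the confidence values in the table above are near 1.0, which is what makes these rules interesting.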
"_____no_output_____"
],
[
"## Clustering\r\n\r\nClustering is an unsupervised machine learning technique that divides the data points into a number of groups such that points in the same group are more similar to each other than to points in other groups.\r\n\r\nThe clustering algorithm we use is K-Means, as it is one of the fastest clustering algorithms. We will use the Davies-Bouldin Index to measure its performance. A high Davies-Bouldin score means that the clusters are very similar to each other, whereas a low score means that the clusters are well separated. **In general, we want a low score.**\r\n\r\n---\r\n\r\n## RapidMiner\r\nWe first run it on RapidMiner. We only use the \"Yes\" backorder products, as including the \"No\" backorder products would turn the task into a classification task. The task is repeated 5 times for 5 different numbers of clusters. The figure below shows the operators used in RapidMiner:\r\n",
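The k sweep described above can be condensed into a single loop. This is a sketch on synthetic blobs rather than the backorder data: fit K-Means for several values of k and keep the k with the lowest Davies-Bouldin score.

```python
# Sketch of the k sweep: lower Davies-Bouldin score is better.
# Synthetic, well-separated blobs stand in for the real data here.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=[[0, 0], [10, 10], [-10, 10]],
                  cluster_std=1.0, random_state=42)

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)  # k with the lowest Davies-Bouldin score
print(scores)
print("best k:", best_k)
```

On these blobs the sweep recovers k=3, mirroring the way k=3 wins on the backorder data below.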
"_____no_output_____"
],
[
"Next, we can run it on Python, starting with 2 clusters.",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\r\nfrom sklearn.metrics import davies_bouldin_score\r\n\r\n# Select only backorder data\r\nbo_df = df[df['went_on_backorder'] == 1]\r\nX = bo_df.drop(columns='went_on_backorder', axis=0)\r\n\r\n# Clustering\r\nKMmodel = KMeans(n_clusters=2)\r\nKMpred = KMmodel.fit_predict(X)\r\nKMlabels = KMmodel.labels_\r\nKMbi = davies_bouldin_score(X, KMlabels)\r\nprint(\"K-Means with 2 clusters\")\r\nprint(\"Davies-Bouldin Index:\", KMbi)",
"K-Means with 2 clusters\nDavies-Bouldin Index: 0.4324246018578214\n"
]
],
[
[
"We obtained a Davies-Bouldin score of 0.432.\r\n\r\n---\r\n\r\nNow we try with 4 clusters.",
"_____no_output_____"
]
],
[
[
"# Clustering\r\nKMmodel = KMeans(n_clusters=4)\r\nKMpred = KMmodel.fit_predict(X)\r\nKMlabels = KMmodel.labels_\r\nKMbi = davies_bouldin_score(X, KMlabels)\r\nprint(\"K-Means with 4 clusters\")\r\nprint(\"Davies-Bouldin Index:\", KMbi)",
"K-Means with 4 clusters\nDavies-Bouldin Index: 0.48389029593856103\n"
]
],
[
[
"And obtained a score of 0.484.\r\n\r\n---\r\n\r\nFinally, we test with 3 clusters.",
"_____no_output_____"
]
],
[
[
"# Clustering\r\nKMmodel = KMeans(n_clusters=3)\r\nKMpred = KMmodel.fit_predict(X)\r\nKMlabels = KMmodel.labels_\r\nKMbi = davies_bouldin_score(X, KMlabels)\r\nprint(\"K-Means with 3 clusters\")\r\nprint(\"Davies-Bouldin Index:\", KMbi)",
"K-Means with 3 clusters\nDavies-Bouldin Index: 0.31455921648427315\n"
]
],
[
[
"And obtained a score of 0.315. This is the best result we have obtained so we will use this to cluster our data.\r\n\r\n---\r\n\r\nThe figure below shows the Davies-Bouldin Index for different clusters on RapidMiner and Colab. As we can see, the score on both platforms converged at k=3, which is a good sign that k=3 is the optimal number of clusters.\r\n\r\n\r\n---\r\n\r\nFinally, using k=3, we can analyse the properties of the different clusters.",
"_____no_output_____"
]
],
[
[
"# Add cluster column\r\nKM = X.copy()\r\nKM['cluster'] = pd.Series(KMpred, index=KM.index)\r\n\r\n# Separate into different clusters\r\ncl0 = KM.loc[KM['cluster'] == 0]\r\ncl1 = KM.loc[KM['cluster'] == 1]\r\ncl2 = KM.loc[KM['cluster'] == 2]\r\n\r\n# Find out number of instances in each cluster\r\nprint(\"Cluster_0: \", cl0.shape)\r\nprint(\"Cluster_1: \", cl1.shape)\r\nprint(\"Cluster_2: \", cl2.shape)\r\n\r\n# Aggregate the different clusters\r\ncl0_mean = cl0.agg('mean').drop('cluster')\r\ncl1_mean = cl1.agg('mean').drop('cluster')\r\ncl2_mean = cl2.agg('mean').drop('cluster')\r\n\r\npd.concat([cl0_mean, cl1_mean, cl2_mean], axis=1)",
"Cluster_0: (346361, 22)\nCluster_1: (828, 22)\nCluster_2: (40, 22)\n"
]
],
[
[
"We find out that:\r\n\r\n* Most items are in Cluster_0\r\n* Cluster_1 and Cluster_2 are outliers\r\n* Cluster_0 has a low average inventory of 9.72 and higher average sales of 32.59.\r\n* Cluster_1 has a high inventory at 6501.40 and lower sales at 2487.\r\n* Cluster_2 has a negative inventory at -380.10 and positive sales at 861.34.\r\n\r\n**Cluster_0 and Cluster_2 confirm our earlier results that sales exceeding inventory result in a backorder.**\r\nOn the other hand, Cluster_1 has lower sales than inventory; however, only a small quantity of items fall in this cluster, so Cluster_1 is an outlier.\r\n\r\n---\r\n",
"_____no_output_____"
],
[
"# **Predictive Data Mining**\r\n\r\nPredictive data mining allows us to predict events that have not happened yet. It uses business intelligence or other data to forecast trends, which can help business leaders make better decisions and add value to the efforts of the analytics team.\r\n\r\nFor this project, we are comparing Random Forest and Adaboost. Random Forest is an ensemble learning method for classification. It works by constructing a multitude of Decision Trees at training time and outputting the class that is the mode of the classes predicted by the individual trees.\r\n\r\nAdaboost is also an ensemble learning method, but it can be used in conjunction with many other types of base learners instead of just Decision Trees.\r\n\r\nNote that we do not use the forecast columns because they are predicted values, and we do not use min_bank because it is a 'recommended' value. We only want actual numbers for our prediction.\r\n\r\n---\r\n\r\n### RapidMiner\r\nFirst, we run this on RapidMiner. Using the default max_depth=10 for both Random Forest and AdaBoost, we calculate the performance on different train-test ratios, 80:20 and 70:30. The figure below shows our operators on RapidMiner:\r\n\r\n\r\n\r\nThe figure below shows the tabulated results:\r\n\r\n\r\n\r\nWe can see that, overall, Random Forest outperformed AdaBoost by as much as 10% in terms of precision.\r\n\r\n---\r\n\r\n",
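The head-to-head comparison can be sketched in a few lines. This uses synthetic data (make_classification) rather than the backorder dataset, and scikit-learn defaults where the text does not pin a parameter.

```python
# Minimal sketch of the Random Forest vs. Adaboost comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

results = {}
for name, model in [("Random Forest", RandomForestClassifier(max_depth=10, random_state=42)),
                    ("AdaBoost", AdaBoostClassifier(random_state=42))]:
    model.fit(X_tr, y_tr)                                   # train each ensemble
    results[name] = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {results[name]:.3f}")
```

The cells that follow run this same comparison on the real data, with precision, recall, and wall-clock timing added.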
"_____no_output_____"
],
[
"## 30% Test Ratio\r\n\r\nWe can now test it on Colab, starting with a 30% testing ratio and max_depth=1 on Random Forest.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\r\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\r\nfrom sklearn.metrics import precision_score, recall_score, accuracy_score\r\nfrom joblib import dump\r\nfrom time import perf_counter\r\n\r\n# selecting features that we want\r\nX = df.drop(columns=['went_on_backorder', 'forecast_3_month', 'forecast_6_month', 'forecast_9_month', 'perf_12_month_avg', 'sales_1_month', 'sales_3_month', 'sales_9_month', 'min_bank'], axis=0)\r\nY = df['went_on_backorder']\r\n\r\n# test size\r\ntest_size = 0.3\r\n\r\n# train test split\r\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=42)\r\nprint(\"Train to test ratio:\", 1-test_size, test_size)\r\n\r\n# training random forest\r\nstart = perf_counter()\r\nRFmodel = RandomForestClassifier(max_depth=1)\r\nRFmodel.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RFmodel.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Train to test ratio: 0.7 0.3\nRandom Forest Accuracy 79.69 Precision: 79.97 Recall: 79.69\nRandom Forest Time Elapsed: 15.01725381600005 seconds.\n"
]
],
[
[
"We obtained the following results:\r\n* Accuracy: 79.69%\r\n* Precision: 79.97%\r\n* Recall: 79.69%\r\n* Time elapsed: 15 seconds\r\n\r\n---\r\n\r\nNow we use a max_depth=10 ",
"_____no_output_____"
]
],
[
[
"# training random forest\r\nstart = perf_counter()\r\nRFmodel = RandomForestClassifier(max_depth=10)\r\nRFmodel.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RFmodel.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Random Forest Accuracy 90.64 Precision: 90.77 Recall: 90.64\nRandom Forest Time Elapsed: 74.78256261299993 seconds.\n"
]
],
[
[
"And obtained the following results:\r\n* Accuracy: 90.64%\r\n* Precision: 90.77%\r\n* Recall: 90.64%\r\n* Time elapsed: 75 seconds\r\n\r\nWe can see that by increasing max_depth, we greatly increased the time it took to train the model.\r\n\r\n---\r\n\r\nWe do the same thing again with max_depth=25.",
"_____no_output_____"
]
],
[
[
"# training random forest\r\nstart = perf_counter()\r\nRFmodel = RandomForestClassifier(max_depth=25)\r\nRFmodel.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RFmodel.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Random Forest Accuracy 97.76 Precision: 97.77 Recall: 97.76\nRandom Forest Time Elapsed: 112.1124660160001 seconds.\n"
]
],
[
[
"And obtained the following results:\r\n* Accuracy: 97.76%\r\n* Precision: 97.77%\r\n* Recall: 97.76%\r\n* Time elapsed: 112 seconds\r\n\r\nThis is the best result with Random Forest so far. However, we cannot know whether we are overfitting until we test our model in the data product.\r\n\r\n---\r\n\r\nNow, we can train our Adaboost model. First, we use Naive Bayes, specifically Gaussian Naive Bayes, as the base estimator.",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\r\n\r\n# training adaboost\r\nstart = perf_counter()\r\nABmodel = AdaBoostClassifier(base_estimator=GaussianNB())\r\nABmodel.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = ABmodel.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 65.98 Precision: 68.65 Recall: 65.98\nAdaBoost Time Elapsed: 24.231937038999945 seconds.\n"
]
],
[
[
"We obtained the following results:\r\n* Accuracy: 65.98%\r\n* Precision: 68.65%\r\n* Recall: 65.98%\r\n* Time elapsed: 24 seconds\r\n\r\nThis is not a good result, so we will not use this model.\r\n\r\n---\r\n\r\nNext, we try the ExtraTreeClassifier as the base estimator.",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import ExtraTreeClassifier\r\n\r\n# training adaboost\r\nstart = perf_counter()\r\nABmodel = AdaBoostClassifier(base_estimator=ExtraTreeClassifier())\r\nABmodel.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = ABmodel.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 96.89 Precision: 96.89 Recall: 96.89\nAdaBoost Time Elapsed: 44.81217424199997 seconds.\n"
]
],
[
[
"We obtained the following results:\r\n* Accuracy: 96.89%\r\n* Precision: 96.89%\r\n* Recall: 96.89%\r\n* Time elapsed: 44 seconds\r\n\r\nThis is a good result, but we can do better.\r\n\r\n---\r\n\r\nNow we can test the default base estimator, which is the Decision Tree if no parameters are given. ",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\r\n\r\n# training adaboost\r\nstart = perf_counter()\r\nABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier())\r\nABmodel.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = ABmodel.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 98.43 Precision: 98.43 Recall: 98.43\nAdaBoost Time Elapsed: 214.69023468800003 seconds.\n"
]
],
[
[
"We obtained the following results:\r\n* Accuracy: 98.43%\r\n* Precision: 98.43%\r\n* Recall: 98.43%\r\n* Time elapsed: 215 seconds\r\n\r\nThe accuracy, precision, and recall are very high, however it took much longer to train. \r\n\r\n---\r\n\r\nNow that we have determined Decision Tree is a good base estimator, we can try it with max_depth=1. ",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\r\n\r\n# training adaboost\r\nstart = perf_counter()\r\nABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1))\r\nABmodel.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = ABmodel.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 87.32 Precision: 87.35 Recall: 87.32\nAdaBoost Time Elapsed: 30.614752263000014 seconds.\n"
]
],
[
[
"We obtained the following results:\r\n* Accuracy: 87.32%\r\n* Precision: 87.35%\r\n* Recall: 87.32%\r\n* Time elapsed: 31 seconds\r\n\r\nThe time elapsed is much lower, but the performance is still relatively good.\r\n\r\n---\r\n\r\nNow we try with max_depth=25 ",
"_____no_output_____"
]
],
[
[
"# training adaboost\r\nstart = perf_counter()\r\nABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=25))\r\nABmodel.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = ABmodel.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 98.7 Precision: 98.7 Recall: 98.7\nAdaBoost Time Elapsed: 194.90734541999996 seconds.\n"
]
],
[
[
"And obtained the following results:\r\n* Accuracy: 98.70%\r\n* Precision: 98.70%\r\n* Recall: 98.70%\r\n* Time elapsed: 195 seconds\r\n\r\nThis is the longest time elapsed so far, but also the best performance in terms of accuracy, precision, and recall. Again, we will not know if the model is overfitting the data until we test it on our data product.\r\n\r\n---\r\n## 20% Test Ratio\r\n\r\nNow, we can try to run the same process with 20% testing ratio instead. Starting with Random Forest with max_depth=1, ",
"_____no_output_____"
]
],
[
[
"# test size\r\ntest_size = 0.2\r\n\r\n# train test split\r\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=42)\r\nprint(\"Train to test ratio:\", 1-test_size, test_size)\r\n\r\n# training random forest\r\nstart = perf_counter()\r\nRF_1 = RandomForestClassifier(max_depth=1)\r\nRF_1.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RF_1.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Train to test ratio: 0.8 0.2\nRandom Forest Accuracy 78.69 Precision: 78.83 Recall: 78.69\nRandom Forest Time Elapsed: 15.100225382999952 seconds.\n"
]
],
[
[
"we obtained the results:\r\n* Accuracy: 78.69%\r\n* Precision: 78.83%\r\n* Recall: 78.69%\r\n* Time elapsed: 15 seconds\r\n\r\nThe time elapsed is very low and the performance is relatively high.\r\n\r\n---\r\n\r\nWe do the same thing but with max_depth=10. ",
"_____no_output_____"
]
],
[
[
"# training random forest\r\nstart = perf_counter()\r\nRF_10 = RandomForestClassifier(max_depth=10)\r\nRF_10.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RF_10.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Random Forest Accuracy 90.76 Precision: 90.9 Recall: 90.76\nRandom Forest Time Elapsed: 74.43826018799996 seconds.\n"
]
],
[
[
"We obtained the results:\r\n* Accuracy: 90.76%\r\n* Precision: 90.90%\r\n* Recall: 90.76%\r\n* Time elapsed: 74 seconds\r\n\r\nThis is a very good result.\r\n\r\n---\r\n\r\nNow let's try max_depth=25\r\n\r\n",
"_____no_output_____"
]
],
[
[
"# training random forest\r\nstart = perf_counter()\r\nRF_25 = RandomForestClassifier(max_depth=25)\r\nRF_25.fit(X_train, Y_train)\r\n\r\n# testing random forest\r\nRFpred = RF_25.predict(X_test)\r\nRFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)\r\nRFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)\r\nRFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"Random Forest\", \"Accuracy\", RFacc, \"Precision:\", RFprec, \"Recall:\", RFrec)\r\nprint(\"Random Forest Time Elapsed: \", time_elapsed, \" seconds.\")",
"Random Forest Accuracy 97.86 Precision: 97.87 Recall: 97.86\nRandom Forest Time Elapsed: 114.00814012799992 seconds.\n"
]
],
[
[
"We get the results:\r\n* Accuracy: 97.86%\r\n* Precision: 97.87%\r\n* Recall: 97.86%\r\n* Time elapsed: 114 seconds\r\n\r\nThis is the best performance for our Random Forest.\r\n\r\n---\r\n\r\nWe do the same thing for Adaboost, starting with max_depth=1. ",
"_____no_output_____"
]
],
[
[
"# training adaboost\r\nstart = perf_counter()\r\nAB_1 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1))\r\nAB_1.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = AB_1.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 87.38 Precision: 87.41 Recall: 87.38\nAdaBoost Time Elapsed: 31.59044422099987 seconds.\n"
]
],
[
[
"We obtained the results:\r\n* Accuracy: 87.38%\r\n* Precision: 87.41%\r\n* Recall: 87.38%\r\n* Time elapsed: 32 seconds\r\n\r\nWe see that when max_depth is the same, Adaboost outperformed Random Forest, contrary to our results from RapidMiner. Adaboost took twice as long as Random Forest to train.\r\n\r\n---\r\n\r\nNow we can try max_depth=10",
"_____no_output_____"
]
],
[
[
"# training adaboost\r\nstart = perf_counter()\r\nAB_10 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10))\r\nAB_10.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = AB_10.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 98.8 Precision: 98.8 Recall: 98.8\nAdaBoost Time Elapsed: 218.75830530500002 seconds.\n"
]
],
[
[
"We obtained the results:\r\n* Accuracy: 98.80%\r\n* Precision: 98.80%\r\n* Recall: 98.80%\r\n* Time elapsed: 219 seconds\r\n\r\n---\r\n\r\nFinally, we try max_depth=25",
"_____no_output_____"
]
],
[
[
"# training adaboost\r\nstart = perf_counter()\r\nAB_25 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=25))\r\nAB_25.fit(X_train, Y_train)\r\n\r\n# testing adaboost\r\nABpred = AB_25.predict(X_test)\r\nABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)\r\nABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)\r\nABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)\r\ntime_elapsed = perf_counter() - start\r\nprint(\"AdaBoost\", \"Accuracy:\", ABacc, \"Precision:\", ABprec, \"Recall:\", ABrec)\r\nprint(\"AdaBoost Time Elapsed: \", time_elapsed, \" seconds.\")",
"AdaBoost Accuracy: 98.81 Precision: 98.81 Recall: 98.81\nAdaBoost Time Elapsed: 244.1962600600002 seconds.\n"
]
],
[
[
"We obtained the results:\r\n* Accuracy: 98.81%\r\n* Precision: 98.81%\r\n* Recall: 98.81%\r\n* Time elapsed: 244 seconds\r\n\r\nAgain, we see a slightly higher performance compared to Random Forest, and again the training took twice as long.\r\n\r\n---\r\n\r\nWe save both the Random Forest and the Adaboost models for max_depth=1, max_depth=10, and max_depth=25, because we want to test whether any of these models are overfitting.",
"_____no_output_____"
]
],
[
[
"# save model\r\ndump(RF_1, 'RandomForest_1.joblib')\r\ndump(RF_10, 'RandomForest_10.joblib')\r\ndump(RF_25, 'RandomForest_25.joblib')\r\n\r\n# save model\r\ndump(AB_1, 'Adaboost_1.joblib')\r\ndump(AB_10, 'Adaboost_10.joblib')\r\ndump(AB_25, 'Adaboost_25.joblib')",
"_____no_output_____"
]
],
[
[
"---\r\n# **Results and Analysis**\r\n---\r\n\r\n## Exploratory Data Analysis\r\nFrom our exploratory data analysis, **we can see that there is indeed a correlation between national inventory, sales performance, and backorder**. Generally, when the sales performance exceeds the national inventory, the product becomes a backorder. We have also learned that when the sales performance exceeds the forecasted sales, there is a high probability that the product will go on backorder.\r\n\r\n---\r\n\r\n## Association Rules\r\nWith association rules, we can see the relationships between the attributes. The categorical attributes (oe_constraint, deck_risk, rev_stop) frequently go together. **When oe_constraint is false, there is a high likelihood that rev_stop is also false, and vice versa.**\r\n\r\n\r\n---\r\n\r\n\r\n## Clustering\r\nWith clustering, we are able to group similar instances together. Within the \"Yes\" backorder products, we see that cluster 0 is the majority cluster, whereas cluster 1 and cluster 2 can be considered outliers. **Cluster 0 is low in sales and inventory, cluster 1 is high in both sales and inventory, and cluster 2 is in between the two.**\r\n\r\nFor cluster 0 and cluster 2, we see that the average sales do exceed the average inventory, which confirms our hypothesis. Cluster 1 contradicts our hypothesis, but it can be considered an outlier in the data.\r\n\r\n\r\n---\r\n\r\n\r\n## Predictive Data Mining\r\nAfter comparing 2 classification algorithms, we find that, in general, **Adaboost has higher accuracy, precision, and recall than Random Forest** when max_depth is the same.\r\n\r\nThe result here on Colab is different from our results on RapidMiner, where the performance of Random Forest is higher than Adaboost. I'm not certain why, and I can only hypothesize that it is due to a different implementation of the algorithms.\r\n\r\nHere are the tabulated results:\r\n\r\n\r\n\r\nIf we look at the file sizes, Adaboost_1, Adaboost_10, and Adaboost_25 have file sizes of 31.1KB, 2.85MB, and 17.5MB respectively.\r\n\r\nOn the other hand, RandomForest_1, RandomForest_10, and RandomForest_25 have file sizes of 60KB, 6.84MB, and 156MB respectively.\r\n\r\nAs for time taken to train the models, Adaboost_1, Adaboost_10, and Adaboost_25 took 32, 219, and 244 seconds respectively.\r\n\r\nRandomForest_1, RandomForest_10, and RandomForest_25 took 15, 74, and 114 seconds respectively.\r\n\r\n**Adaboost takes twice as long as Random Forest to train.**\r\n\r\nHowever, we won't know if any of these models are overfitting until we test them in the data product.",
"_____no_output_____"
],
[
"# **Data Product**\r\n\r\nThe data product is built on streamlit because it allows us to rapidly prototype a data product without much coding. Our data product allows the user to manipulate the variables and predict if they will be a backorder or not. We will use these features to predict the backorder:\r\n\r\n* national_inv\r\n* lead_time\r\n* In_transit_qty\r\n* sales_6_month\r\n* perf_6_months_avg\r\n* potential_issue\r\n* pieces_past_due\r\n* local_bo_qty\r\n* deck_risk\r\n* oe_constraint\r\n* ppap_risk\r\n* stop_auto_buy\r\n* rev_stop\r\n",
"_____no_output_____"
]
],
[
[
"%%writefile app.py\n\nimport pandas as pd\nimport streamlit as st\nfrom joblib import load\nfrom PIL import Image\n\nDATA_PATH = 'data.csv'\n\[email protected]\ndef load_data(path):\n data = pd.read_csv(path)\n lowercase = lambda x: str(x).lower()\n data.rename(lowercase, axis='columns', inplace=True)\n return data\n\ndata_load_state = st.text('Loading data...')\ndf = load_data(DATA_PATH)\ndata_load_state.text(\"Done loading data!\")\n\n\ndef main():\n @st.cache\n def agg_data(df, mode):\n dat = df.agg([mode])\n return dat\n\n data_agg_state = st.text('Aggregating data...')\n dfMin = agg_data(df, 'min')\n dfMax = agg_data(df, 'max')\n dfMedian = agg_data(df, 'median')\n dfMode = agg_data(df, 'mode')\n data_agg_state.text(\"Done aggregating data!\")\n\n st.title('Product Backorder')\n st.sidebar.title(\"Features\")\n\n quant_parameter_list = ['national_inv',\n 'lead_time',\n 'in_transit_qty',\n 'sales_1_month',\n 'pieces_past_due',\n 'perf_6_month_avg',\n 'local_bo_qty']\n\n qual_parameter_list = ['potential_issue',\n 'deck_risk',\n 'oe_constraint',\n 'ppap_risk',\n 'stop_auto_buy',\n 'rev_stop']\n\n parameter_input_values=[]\n values=[]\n \n model_select = st.selectbox(label='Select Classification Model', options=(('Adaboost_1', 'Adaboost_10','Adaboost_25', 'RandomForest_1', 'RandomForest_10', 'RandomForest_25')))\n\n for parameter in quant_parameter_list:\n values = st.sidebar.slider(label=parameter, key=parameter, value=float(dfMedian[parameter]), min_value=float(dfMin[parameter]), max_value=float(dfMax[parameter]), step=0.1)\n parameter_input_values.append(values)\n\n for parameter in qual_parameter_list:\n ind = dfMode[parameter].iloc[0]\n values = st.sidebar.selectbox(label=parameter, key=parameter, index=int(ind), options=('Yes', 'No'))\n val = 1 if values == 'Yes' else 0\n parameter_input_values.append(val)\n\n parameter_list = quant_parameter_list + qual_parameter_list\n input_variables=pd.DataFrame([parameter_input_values],columns=parameter_list)\n 
st.write('\\n\\n')\n\n if (model_select == \"Adaboost_1\"):\n model = load('Adaboost_1.joblib')\n elif (model_select == \"Adaboost_10\"):\n model = load('Adaboost_10.joblib')\n elif (model_select == \"Adaboost_25\"):\n model = load('Adaboost_25.joblib')\n elif (model_select == \"RandomForest_1\"):\n model = load('RandomForest_1.joblib')\n elif (model_select == \"RandomForest_10\"):\n model = load('RandomForest_10.joblib')\n elif (model_select == \"RandomForest_25\"):\n model = load('RandomForest_25.joblib')\n else:\n model = load('Adaboost_1.joblib')\n\n if st.button(\"Will the product be a backorder?\"):\n prediction = model.predict(input_variables)\n pred = 'No' if prediction == 0 else 'Yes'\n st.text(pred)\n\nif __name__ == '__main__':\n main()",
"Writing app.py\n"
]
],
[
[
"We install ngrok so we can run streamlit on Colab",
"_____no_output_____"
]
],
[
[
"!pip -q install streamlit\r\n!pip -q install pyngrok\r\n\r\n# Setup a tunnel to the streamlit port 8501\r\nfrom pyngrok import ngrok\r\npublic_url = ngrok.connect(port='8501')\r\npublic_url",
"\u001b[K |████████████████████████████████| 7.5MB 5.1MB/s \n\u001b[K |████████████████████████████████| 4.6MB 59.3MB/s \n\u001b[K |████████████████████████████████| 163kB 60.6MB/s \n\u001b[K |████████████████████████████████| 81kB 7.8MB/s \n\u001b[K |████████████████████████████████| 112kB 38.3MB/s \n\u001b[K |████████████████████████████████| 122kB 52.4MB/s \n\u001b[K |████████████████████████████████| 71kB 4.6MB/s \n\u001b[?25h Building wheel for blinker (setup.py) ... \u001b[?25l\u001b[?25hdone\n\u001b[31mERROR: google-colab 1.0.0 has requirement ipykernel~=4.10, but you'll have ipykernel 5.4.3 which is incompatible.\u001b[0m\n Building wheel for pyngrok (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
],
[
"!streamlit run --server.port 80 app.py & >/dev/null",
"\u001b[0m\n\u001b[34m\u001b[1m You can now view your Streamlit app in your browser.\u001b[0m\n\u001b[0m\n\u001b[34m Network URL: \u001b[0m\u001b[1mhttp://172.28.0.2:80\u001b[0m\n\u001b[34m External URL: \u001b[0m\u001b[1mhttp://35.231.250.250:80\u001b[0m\n\u001b[0m\n\u001b[34m Stopping...\u001b[0m\n"
]
],
[
[
"We can now look up the streamlit process ID and terminate the ngrok tunnel",
"_____no_output_____"
]
],
[
[
"!pgrep streamlit\r\nngrok.kill()",
"347\n"
]
],
[
[
"# **Conclusion**\n\nWe can test our models in our data product. By logical deduction, and as confirmed by our earlier data analysis, a product whose sales exceed the national inventory will go on backorder, and vice versa.\n\nSo, first we test each model with **national_inv=100000 and sales=1**. The expected output is \"No.\" These are the results obtained:\n\n* **Adaboost_1**: No\n* **Adaboost_10**: No\n* **Adaboost_25**: No\n* **RandomForest_1**: *Yes*\n* **RandomForest_10**: No\n* **RandomForest_25**: No\n\nWe can see that the Random Forest model with max_depth=1 is misclassifying our product.\n\nNext, we test each model with **national_inv=100 and sales=100000**. The expected output is \"Yes.\" These are the results obtained:\n\n* **Adaboost_1**: Yes\n* **Adaboost_10**: Yes\n* **Adaboost_25**: Yes\n\n* **RandomForest_1**: Yes\n* **RandomForest_10**: Yes\n* **RandomForest_25**: *No*\n\n\nHere are the results in tabular form:\n\n\n\nWe can see that **all 3 of the Adaboost models got both test cases correct**. On the other hand, for Random Forest, **only RandomForest_10 got both cases correct**.\n\nHence, we can conclude that for this dataset, in terms of accuracy and file sizes, Adaboost is the superior model to Random Forest. However, there is a trade-off, and that is the training time. **Adaboost takes twice as long as Random Forest to train.**\n\nSince there is not much difference between Adaboost_10 and Adaboost_25 in terms of accuracy, we can assume that there are diminishing returns after max_depth=10. Thus, **Adaboost_10 is actually the better model of the two**.\n\nWe should also keep in mind that **accuracy alone is not enough to tell the effectiveness of a model**, and there are many other factors that we should consider.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
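The inventory-rule sanity check used in the conclusion above can be reproduced end-to-end. Below is a minimal, self-contained sketch: it trains a small scikit-learn AdaBoost classifier on *synthetic* data that follows the same rule (backorder iff sales exceed national inventory) — it is not the notebook's actual models or dataset, and the `went_on_backorder` helper, feature layout, and label encoding are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the notebook's data:
# label 1 ("Yes") iff sales exceed the national inventory.
X = rng.integers(0, 200_000, size=(2_000, 2)).astype(float)  # columns: [national_inv, sales]
y = (X[:, 1] > X[:, 0]).astype(int)

model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

def went_on_backorder(national_inv, sales):
    """Mirror the data product's yes/no answer for one product (hypothetical helper)."""
    return "Yes" if model.predict([[national_inv, sales]])[0] == 1 else "No"

print(went_on_backorder(100_000, 1))    # inventory rule says "No"
print(went_on_backorder(100, 100_000))  # inventory rule says "Yes"
```

Both test points sit far from the decision boundary (one feature is extreme in each case), which is exactly what makes them good smoke tests for a deployed model.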
4a5cddae485aa6a93ab3256621948df204cc4e82
| 13,249 |
ipynb
|
Jupyter Notebook
|
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-1-blank-checkpoint.ipynb
|
nkmah2/ML_Uni_Washington_Coursera
|
a5852f1716189a1126919b12d767f894ad8490ac
|
[
"MIT"
] | null | null | null |
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-1-blank-checkpoint.ipynb
|
nkmah2/ML_Uni_Washington_Coursera
|
a5852f1716189a1126919b12d767f894ad8490ac
|
[
"MIT"
] | null | null | null |
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-1-blank-checkpoint.ipynb
|
nkmah2/ML_Uni_Washington_Coursera
|
a5852f1716189a1126919b12d767f894ad8490ac
|
[
"MIT"
] | null | null | null | 28.553879 | 319 | 0.611442 |
[
[
[
"# Regression Week 2: Multiple Regression (Interpretation)",
"_____no_output_____"
],
[
"The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.\n\nIn this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:\n* Use SFrames to do some feature engineering\n* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)\n* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares\n* Look at coefficients and interpret their meanings\n* Evaluate multiple models via RSS",
"_____no_output_____"
],
[
"# Fire up graphlab create",
"_____no_output_____"
]
],
[
[
"import graphlab",
"_____no_output_____"
]
],
[
[
"# Load in house sales data\n\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.",
"_____no_output_____"
]
],
[
[
"sales = graphlab.SFrame('kc_house_data.gl/')",
"[INFO] \u001b[1;32m1449795420 : INFO: (initialize_globals_from_environment:282): Setting configuration variable GRAPHLAB_FILEIO_ALTERNATIVE_SSL_CERT_FILE to /home/nitin/anaconda/lib/python2.7/site-packages/certifi/cacert.pem\n\u001b[0m\u001b[1;32m1449795420 : INFO: (initialize_globals_from_environment:282): Setting configuration variable GRAPHLAB_FILEIO_ALTERNATIVE_SSL_CERT_DIR to \n\u001b[0mThis non-commercial license of GraphLab Create is assigned to [email protected] and will expire on October 14, 2016. For commercial licensing options, visit https://dato.com/buy/.\n\n[INFO] Start server at: ipc:///tmp/graphlab_server-4679 - Server binary: /home/nitin/anaconda/lib/python2.7/site-packages/graphlab/unity_server - Server log: /tmp/graphlab_server_1449795420.log\n[INFO] GraphLab Server Version: 1.7.1\n"
]
],
[
[
"# Split data into training and testing.\nWe use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you). ",
"_____no_output_____"
]
],
[
[
"train_data,test_data = sales.random_split(.8,seed=0)",
"_____no_output_____"
]
],
[
[
"# Learning a multiple regression model",
"_____no_output_____"
],
[
"Recall that we can learn a multiple regression model predicting 'price' based on the features\nexample_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:\n\n(Aside: We set validation_set = None to ensure that the results are always the same)",
"_____no_output_____"
]
],
[
[
"example_features = ['sqft_living', 'bedrooms', 'bathrooms']\nexample_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features, \n validation_set = None)",
"_____no_output_____"
]
],
[
[
"Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:",
"_____no_output_____"
]
],
[
[
"example_weight_summary = example_model.get(\"coefficients\")\nprint example_weight_summary",
"_____no_output_____"
]
],
[
[
"# Making Predictions\n\nIn the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions. \n\nRecall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:",
"_____no_output_____"
]
],
[
[
"example_predictions = example_model.predict(train_data)\nprint example_predictions[0] # should be 271789.505878",
"_____no_output_____"
]
],
[
[
"# Compute RSS",
"_____no_output_____"
],
[
"Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.",
"_____no_output_____"
]
],
[
[
"def get_residual_sum_of_squares(model, data, outcome):\n    # First get the predictions\n    predictions = model.predict(data)\n    # Then compute the residuals/errors\n    residuals = outcome - predictions\n    # Then square and add them up\n    RSS = (residuals * residuals).sum()\n    return(RSS) ",
"_____no_output_____"
]
],
[
[
"Test your function by computing the RSS on TEST data for the example model:",
"_____no_output_____"
]
],
[
[
"rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])\nprint rss_example_train # should be 2.7376153833e+14",
"_____no_output_____"
]
],
[
[
"# Create some new features",
"_____no_output_____"
],
[
"Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet or even \"interaction\" features such as the product of bedrooms and bathrooms.",
"_____no_output_____"
],
[
"You will use the logarithm function to create a new feature, so first you should import it from the math library.",
"_____no_output_____"
]
],
[
[
"from math import log",
"_____no_output_____"
]
],
[
[
"Next create the following 4 new features as column in both TEST and TRAIN data:\n* bedrooms_squared = bedrooms\\*bedrooms\n* bed_bath_rooms = bedrooms\\*bathrooms\n* log_sqft_living = log(sqft_living)\n* lat_plus_long = lat + long \nAs an example here's the first one:",
"_____no_output_____"
]
],
[
[
"train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)\ntest_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)",
"_____no_output_____"
],
[
"# create the remaining 3 features in both TEST and TRAIN data\nfor dataset in (train_data, test_data):\n    dataset['bed_bath_rooms'] = dataset['bedrooms'] * dataset['bathrooms']\n    dataset['log_sqft_living'] = dataset['sqft_living'].apply(lambda x: log(x))\n    dataset['lat_plus_long'] = dataset['lat'] + dataset['long']",
"_____no_output_____"
]
],
[
[
"* Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.\n* bedrooms times bathrooms gives what's called an \"interaction\" feature. It is large when *both* of them are large.\n* Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.\n* Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)",
"_____no_output_____"
],
[
"**Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)**",
"_____no_output_____"
],
[
"# Learning Multiple Models",
"_____no_output_____"
],
[
"Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:\n* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude\n* Model 2: add bedrooms\\*bathrooms\n* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude",
"_____no_output_____"
]
],
[
[
"model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']\nmodel_2_features = model_1_features + ['bed_bath_rooms']\nmodel_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']",
"_____no_output_____"
]
],
[
[
"Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:",
"_____no_output_____"
]
],
[
[
"# Learn the three models: (don't forget to set validation_set = None)\nmodel_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features, validation_set = None)\nmodel_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features, validation_set = None)\nmodel_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features, validation_set = None)",
"_____no_output_____"
],
[
"# Examine/extract each model's coefficients:\nprint model_1.get(\"coefficients\")\nprint model_2.get(\"coefficients\")\nprint model_3.get(\"coefficients\")",
"_____no_output_____"
]
],
[
[
"**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?**\n\n**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?**\n\nThink about what this means.",
"_____no_output_____"
],
[
"# Comparing multiple models\n\nNow that you've learned three models and extracted the model weights we want to evaluate which model is best.",
"_____no_output_____"
],
[
"First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.",
"_____no_output_____"
]
],
[
[
"# Compute the RSS on TRAINING data for each of the three models and record the values:\nrss_model_1_train = get_residual_sum_of_squares(model_1, train_data, train_data['price'])\nrss_model_2_train = get_residual_sum_of_squares(model_2, train_data, train_data['price'])\nrss_model_3_train = get_residual_sum_of_squares(model_3, train_data, train_data['price'])\nprint rss_model_1_train, rss_model_2_train, rss_model_3_train",
"_____no_output_____"
]
],
[
[
"**Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data?** Is this what you expected?",
"_____no_output_____"
],
[
"Now compute the RSS on on TEST data for each of the three models.",
"_____no_output_____"
]
],
[
[
"# Compute the RSS on TESTING data for each of the three models and record the values:\nrss_model_1_test = get_residual_sum_of_squares(model_1, test_data, test_data['price'])\nrss_model_2_test = get_residual_sum_of_squares(model_2, test_data, test_data['price'])\nrss_model_3_test = get_residual_sum_of_squares(model_3, test_data, test_data['price'])\nprint rss_model_1_test, rss_model_2_test, rss_model_3_test",
"_____no_output_____"
]
],
[
[
"**Quiz Question: Which model (1, 2 or 3) has lowest RSS on TESTING Data?** Is this what you expected? Think about the features that were added to each model from the previous.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
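The RSS computation the regression notebook above asks for does not depend on GraphLab; the arithmetic can be sketched framework-free. This is a plain NumPy stand-in for `get_residual_sum_of_squares` (the GraphLab version would first call `model.predict(data)` to obtain `y_pred`):

```python
import numpy as np

def rss(y_true, y_pred):
    """Residual sum of squares: sum over points of (actual - predicted)^2."""
    residuals = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sum(residuals ** 2))

y_true = [3.0, 5.0, 7.0]
y_pred = [2.5, 5.0, 8.0]
print(rss(y_true, y_pred))  # (-0.5)^2 + 0^2 + 1^2 = 1.25
```

Because RSS sums squared errors, it grows with the number of data points — which is why the notebook compares models on the *same* train or test split rather than across splits of different sizes.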
4a5ce731a9ef7949edd11e4ec5aba3d52d84f80f
| 138,823 |
ipynb
|
Jupyter Notebook
|
old_projects/misc_projects/Gate Scaling.ipynb
|
AlexisRalli/VQE-code
|
4112d2bba4c327360e95dfd7cb6120b2ce67bf29
|
[
"MIT"
] | 1 |
2021-04-01T14:01:46.000Z
|
2021-04-01T14:01:46.000Z
|
old_projects/misc_projects/Gate Scaling.ipynb
|
AlexisRalli/VQE-code
|
4112d2bba4c327360e95dfd7cb6120b2ce67bf29
|
[
"MIT"
] | 5 |
2019-11-13T16:23:54.000Z
|
2021-04-07T11:03:06.000Z
|
old_projects/misc_projects/Gate Scaling.ipynb
|
AlexisRalli/VQE-code
|
4112d2bba4c327360e95dfd7cb6120b2ce67bf29
|
[
"MIT"
] | null | null | null | 94.759727 | 80,815 | 0.779021 |
[
[
[
"import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"def N_single_qubit_gates_req_Rot(N_system_qubits, set_size):\n return (2*N_system_qubits+1)*(set_size-1)\ndef N_CNOT_gates_req_Rot(N_system_qubits, set_size):\n return 2*(N_system_qubits-1)*(set_size-1)",
"_____no_output_____"
],
[
"def N_cV_gates_req_LCU(N_system_qubits, set_size):\n Na=np.ceil(np.log2(set_size))\n return (N_system_qubits*((2**Na) -1)) *(set_size-1)\ndef N_CNOT_gates_req_LCU(N_system_qubits, set_size):\n Na=np.ceil(np.log2(set_size))\n return ((2**Na) -2) *(set_size-1)",
"_____no_output_____"
],
[
"## better\n\n# O(2 N_system) change of basis single qubit gates\n# O(2 [N_system-1]) CNOT gates\n# 2 * Hadamard gates\n# 1 m-controlled Toffoli gate!\n\n## overall reduction = 16m-32\n## requiring (m-2) garbage bits --> ALWAYS PRESENT IN SYSTEM REGISTER!!!\n\ndef N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, set_size):\n    change_of_basis = 2*N_system_qubits\n    H_gates = 2\n    \n    return (change_of_basis+H_gates)*(set_size-1)\n    \n\ndef N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, set_size):\n    \n    cnot_Gates = 2*(N_system_qubits-1)\n    \n    Na=np.ceil(np.log2(set_size))\n    \n    ## perez gates\n    N_perez_gates = 4*(Na-2)\n    \n    N_CNOT_in_perez = N_perez_gates*1\n    N_cV_gates_in_perez = N_perez_gates*3\n    \n    # use .any(): raise if ANY entry deviates from the expected 16m-32 count\n    if ((16*Na-32)!=(N_CNOT_in_perez+N_cV_gates_in_perez)).any():\n        raise ValueError('16m-32 is the expected decomposition!')\n    \n    return ((cnot_Gates+N_CNOT_in_perez)*(set_size-1)) , (N_cV_gates_in_perez*(set_size-1))",
"_____no_output_____"
],
[
"x_nsets=np.arange(2,200,1)",
"_____no_output_____"
],
[
"# Data for plotting\nN_system_qubits=4\ny_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)\ny_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)\n\ny_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)\ny_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)\n\ny_LCU_single_NEW=N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)\ny_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)",
"_____no_output_____"
],
[
"%matplotlib notebook\nfig, ax = plt.subplots()\n\nax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')\nax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations')\n\nax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\\dagger}$ gates - LCU')\nax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')\n\nax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new')\nax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new')\nax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new')\n\n\n\nax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')\n# ,title='Scaling of methods')\nax.grid()\nplt.legend()\n\n# # http://akuederle.com/matplotlib-zoomed-up-inset\n# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes\n# # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\n# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom\n\n# axins.plot(x_nsets, y_rot_single, color='b')\n# axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')\n# axins.plot(x_nsets, y_LCU_cV, color='g')\n# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\n# x1, x2, y1, y2 = 2, 5, 0, 50 # specify the limits\n# axins.set_xlim(x1, x2) # apply the x-limits\n# axins.set_ylim(y1, y2) # apply the y-limits\n# # axins.set_yticks(np.arange(0, 100, 20))\n# plt.yticks(visible=True)\n# plt.xticks(visible=True)\n\n# from mpl_toolkits.axes_grid1.inset_locator import mark_inset\n# mark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n\n\n# fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"# %matplotlib notebook\n\n# fig, ax = plt.subplots()\n\n# ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')\n# ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')\n\n# ax.plot(x_nsets, y_LCU_single, color='g', label='c-$V$ and c-$V^{\\dagger}$ gates - LCU')\n# ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')\n\n# ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates',\n# title='Scaling of methods')\n# ax.grid()\n# plt.legend()\n\n# # http://akuederle.com/matplotlib-zoomed-up-inset\n# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes\n# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\n# axins.plot(x_nsets, y_rot_single, color='b')\n# axins.plot(x_nsets, y_rot_CNOT, color='r')\n# axins.plot(x_nsets, y_LCU_single, color='g')\n# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\n# x1, x2, y1, y2 = 2, 4, 0, 100 # specify the limits\n# axins.set_xlim(x1, x2) # apply the x-limits\n# axins.set_ylim(y1, y2) # apply the y-limits\n# # axins.set_yticks(np.arange(0, 100, 20))\n# plt.yticks(visible=True)\n# plt.xticks(visible=True)\n\n# from mpl_toolkits.axes_grid1.inset_locator import mark_inset\n# mark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n\n\n# # fig.savefig(\"test.png\")\n# plt.show()",
"_____no_output_____"
],
[
"# Data for plotting\nN_system_qubits=10 # < ---- CHANGED\ny_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)\ny_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)\n\ny_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)\ny_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)\n\ny_LCU_single_NEW=N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)\ny_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)",
"_____no_output_____"
],
[
"%matplotlib notebook\nfig, ax = plt.subplots()\n\nax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')\nax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')\n\nax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\\dagger}$ gates - LCU')\nax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')\n\nax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new')\nax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new')\nax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new')\n\nax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')\n# ,title='Scaling of methods')\nax.grid()\nplt.legend()\n\n# # # http://akuederle.com/matplotlib-zoomed-up-inset\n# # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes\n# # # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\n# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom\n\n# axins.plot(x_nsets, y_rot_single, color='b')\n# axins.plot(x_nsets, y_rot_CNOT, color='r')\n# axins.plot(x_nsets, y_LCU_cV, color='g')\n# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\n# x1, x2, y1, y2 = 2, 3, 0, 50 # specify the limits\n# axins.set_xlim(x1, x2) # apply the x-limits\n# axins.set_ylim(y1, y2) # apply the y-limits\n\n# axins.set_xticks(np.arange(2, 4, 1))\n# plt.yticks(visible=True)\n# plt.xticks(visible=True)\n\n# from mpl_toolkits.axes_grid1.inset_locator import mark_inset\n# mark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n# # fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"# Data for plotting\nN_system_qubits=100 # < ---- CHANGED\ny_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)\ny_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)\n\ny_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)\ny_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)",
"_____no_output_____"
],
[
"%matplotlib notebook\nfig, ax = plt.subplots()\n\nax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations', linewidth=3)\nax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')\n\nax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\\dagger}$ gates - LCU')\nax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')\n\nax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')\n# ,title='Scaling of methods')\nax.grid()\nplt.legend()\n\n# http://akuederle.com/matplotlib-zoomed-up-inset\nfrom mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes\n# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\naxins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom\n\naxins.plot(x_nsets, y_rot_single, color='b', linewidth=2)\naxins.plot(x_nsets, y_rot_CNOT, color='r')\naxins.plot(x_nsets, y_LCU_cV, color='g')\naxins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\nx1, x2, y1, y2 = 1.5, 3, 90, 500 # specify the limits\naxins.set_xlim(x1, x2) # apply the x-limits\naxins.set_ylim(y1, y2) # apply the y-limits\n# axins.set_yticks(np.arange(0, 100, 20))\nplt.yticks(visible=True)\nplt.xticks(visible=True)\n\nfrom mpl_toolkits.axes_grid1.inset_locator import mark_inset\nmark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n\n\n# fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"# Data for plotting\nN_system_qubits=1\ny_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)\ny_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)\n\ny_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets)\ny_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)",
"_____no_output_____"
],
[
"%matplotlib notebook\nfig, ax = plt.subplots()\n\nax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')\nax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations')\n\nax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\\dagger}$ gates - LCU')\nax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')\n\nax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')\n# ,title='Scaling of methods')\nax.grid()\nplt.legend()\n\n# http://akuederle.com/matplotlib-zoomed-up-inset\nfrom mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes\n# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\naxins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom\n\naxins.plot(x_nsets, y_rot_single, color='b')\naxins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')\naxins.plot(x_nsets, y_LCU_cV, color='g')\naxins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\nx1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits\naxins.set_xlim(x1, x2) # apply the x-limits\naxins.set_ylim(y1, y2) # apply the y-limits\n# axins.set_yticks(np.arange(0, 100, 20))\naxins.set_xticks(np.arange(2, 4, 1))\nplt.yticks(visible=True)\nplt.xticks(visible=True)\n\nfrom mpl_toolkits.axes_grid1.inset_locator import mark_inset\nmark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n\n\n# fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"# Data for plotting\nN_system_qubits=5\nx_nsets=2\n\nprint(N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets))\nprint(N_CNOT_gates_req_Rot(N_system_qubits, x_nsets))\n\nprint('###')\n\nprint(N_cV_gates_req_LCU(N_system_qubits, x_nsets))\nprint(N_CNOT_gates_req_LCU(N_system_qubits, x_nsets))\nprint(4)",
"_____no_output_____"
],
[
"### results for |S_l|=2\nX_no_system_qubits=np.arange(1,11,1)\nx_nsets=2\n\ny_rot_single=N_single_qubit_gates_req_Rot(X_no_system_qubits, x_nsets)\ny_rot_CNOT = N_CNOT_gates_req_Rot(X_no_system_qubits, x_nsets)\n\ny_LCU_cV=N_cV_gates_req_LCU(X_no_system_qubits, x_nsets)\n# y_LCU_CNOT = N_CNOT_gates_req_LCU(X_no_system_qubits, x_nsets)\ny_LCU_CNOT=np.zeros(len(X_no_system_qubits))\nsingle_qubit_LCU_gates=np.array([4 for _ in range(len(X_no_system_qubits))])",
"_____no_output_____"
],
[
"%matplotlib notebook\nfig, ax = plt.subplots()\n\nax.plot(X_no_system_qubits, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')\nax.plot(X_no_system_qubits, y_rot_CNOT, color='r', linestyle='-', label='CNOT gates - Sequence of Rotations')\n\nax.plot(X_no_system_qubits, y_LCU_cV, color='g', label='single controlled $\\sigma$ gates - LCU')\nax.plot(X_no_system_qubits, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='-')\nax.plot(X_no_system_qubits, single_qubit_LCU_gates, color='m', label='Single qubit gates - LCU', linestyle='-')\n\nax.set(xlabel='$N_{s}$', ylabel='Number of gates')\n# ,title='Scaling of methods')\n\nax.set_xticks(X_no_system_qubits)\n\nax.grid()\nplt.legend()\n\n# # http://akuederle.com/matplotlib-zoomed-up-inset\n# from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes\n# # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left\n# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom\n\n# axins.plot(x_nsets, y_rot_single, color='b')\n# axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--')\n# axins.plot(x_nsets, y_LCU_cV, color='g')\n# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')\n\n# x1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits\n# axins.set_xlim(x1, x2) # apply the x-limits\n# axins.set_ylim(y1, y2) # apply the y-limits\n# # axins.set_yticks(np.arange(0, 100, 20))\n# axins.set_xticks(np.arange(2, 4, 1))\n# plt.yticks(visible=True)\n# plt.xticks(visible=True)\n\n# from mpl_toolkits.axes_grid1.inset_locator import mark_inset\n# mark_inset(ax, axins, loc1=2, loc2=4, fc=\"none\", ec=\"0.5\") # loc here is which corner zoom goes to!\n\n\n\n# fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"V = ((1j+1)/2)*np.array([[1,-1j],[-1j, 1]], dtype=complex)",
"_____no_output_____"
],
[
"\nCNOT",
"_____no_output_____"
],
[
"from functools import reduce\n\nzero=np.array([[1],[0]])\none=np.array([[0],[1]])\nidentity=np.eye(2)\nX=np.array([[0,1], [1,0]])\n\nCNOT= np.kron(np.outer(one, one), X)+np.kron(np.outer(zero, zero), identity)\n\n\n###\nI_one_V = reduce(np.kron, [identity, np.kron(np.outer(one, one), V)+np.kron(np.outer(zero, zero), identity)])\n###\nzero_zero=np.kron(zero,zero)\nzero_one=np.kron(zero,one)\none_zero=np.kron(one,zero)\none_one=np.kron(one,one)\n\none_I_V = np.kron(np.outer(zero_zero, zero_zero), identity)+np.kron(np.outer(zero_one, zero_one), identity)+ \\\n          np.kron(np.outer(one_zero, one_zero), V)+np.kron(np.outer(one_one, one_one), V)\n###\n\nCNOT_I=reduce(np.kron, [CNOT, identity])\n##\nI_one_Vdag = reduce(np.kron, [identity, np.kron(np.outer(one, one), V.conj().transpose())+np.kron(np.outer(zero, zero), identity)])\n##\n\n# gate composition is a matrix product (not elementwise np.multiply), and matrix\n# products act right-to-left, so the circuit-order list is reversed here\nperez_gate = reduce(np.matmul, [I_one_Vdag, CNOT_I, one_I_V, I_one_V])",
"_____no_output_____"
],
[
"##check\n\n# peres = TOF(x0,x1,x2) then CNOT(x0, x1)\n\nzero_zero=np.kron(zero,zero)\nzero_one=np.kron(zero,one)\none_zero=np.kron(one,zero)\none_one=np.kron(one,one)\n\nTOF = np.kron(np.outer(zero_zero, zero_zero), identity)+np.kron(np.outer(zero_one, zero_one), identity)+ \\\n      np.kron(np.outer(one_zero, one_zero), identity)+np.kron(np.outer(one_one, one_one), X)\n\nCNOT_I = reduce(np.kron, [CNOT, identity])\n\n# matrix product: TOF acts first, then CNOT(x0, x1)\nchecker = np.matmul(CNOT_I, TOF)",
"_____no_output_____"
],
[
"np.allclose(checker, perez_gate)  # elementwise == is fragile for complex floats",
"_____no_output_____"
],
[
"print(perez_gate)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
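The gate-count formulas in the notebook above lean on the controlled-V decomposition of the Toffoli and Peres gates, and the matrix check is easy to get wrong: gate composition is a matrix product applied right-to-left, not an elementwise multiply. Below is a self-contained NumPy sketch — independent of the notebook's variables — that verifies V² = X and the standard five-gate Barenco decomposition of the Toffoli, then builds the Peres gate as Toffoli followed by CNOT(a, b):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)   # |0><0|
P1 = np.diag([0, 1]).astype(complex)   # |1><1|
V  = ((1 + 1j) / 2) * np.array([[1, -1j], [-1j, 1]])  # a square root of X

# Controlled-U blocks on a 3-qubit register (qubits a, b, c).
cV_bc   = np.kron(I2, np.kron(P0, I2) + np.kron(P1, V))            # control b, V on c
cVdg_bc = np.kron(I2, np.kron(P0, I2) + np.kron(P1, V.conj().T))   # control b, V† on c
cV_ac   = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(I2, V))  # control a, V on c
CNOT_ab = np.kron(np.kron(P0, I2) + np.kron(P1, X), I2)

# Barenco decomposition; @ composes right-to-left, so cV_bc acts first.
TOF_decomp = cV_ac @ CNOT_ab @ cVdg_bc @ CNOT_ab @ cV_bc

CNOT_bc = np.kron(P0, I2) + np.kron(P1, X)
TOF = np.kron(P0, np.eye(4, dtype=complex)) + np.kron(P1, CNOT_bc)

print(np.allclose(V @ V, X))         # True: V really is a square root of X
print(np.allclose(TOF_decomp, TOF))  # True: the decomposition reproduces the Toffoli

# Peres gate = Toffoli, then CNOT(a, b): matrix product CNOT_ab @ TOF.
peres = CNOT_ab @ TOF
```

Replacing `@` with `np.multiply` in any of these products silently produces a non-unitary matrix, which is the failure mode the `allclose` checks are meant to catch.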
4a5cee7157adfb7e026609b382b57973b3145db1
| 50,913 |
ipynb
|
Jupyter Notebook
|
KK_B2_Hadoop_and_Hive.ipynb
|
suvam4991/KKolab
|
939abd5927fcdea0cdd5fa00d242fb2e9d3b15ed
|
[
"MIT"
] | 1 |
2021-08-21T05:11:41.000Z
|
2021-08-21T05:11:41.000Z
|
KK_B2_Hadoop_and_Hive.ipynb
|
suvam4991/KKolab
|
939abd5927fcdea0cdd5fa00d242fb2e9d3b15ed
|
[
"MIT"
] | null | null | null |
KK_B2_Hadoop_and_Hive.ipynb
|
suvam4991/KKolab
|
939abd5927fcdea0cdd5fa00d242fb2e9d3b15ed
|
[
"MIT"
] | 3 |
2022-02-14T05:50:48.000Z
|
2022-03-25T10:14:15.000Z
| 33.761936 | 358 | 0.488755 |
[
[
[
"<a href=\"https://colab.research.google.com/github/prithwis/KKolab/blob/main/KK_B2_Hadoop_and_Hive.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"<br>\n\n\n<hr>\n\n[Prithwis Mukerjee](http://www.linkedin.com/in/prithwis)<br>",
"_____no_output_____"
],
[
"#Hive with Hadoop\nThis notebook has all the code and commands required to install Hadoop and Hive <br>\n\n",
"_____no_output_____"
],
[
"##Acknowledgements\nHadoop Installation from [Anjaly Sam's Github Repository](https://github.com/anjalysam/Hadoop) <br>\nHive Installation from [PhoenixNAP](https://phoenixnap.com/kb/install-hive-on-ubuntu) website",
"_____no_output_____"
],
[
"#1 Hadoop\nHadoop is a pre-requisite for Hive <br>\n",
"_____no_output_____"
],
[
"## 1.1 Download, Install Hadoop",
"_____no_output_____"
]
],
[
[
"# The default JVM available at /usr/lib/jvm/java-11-openjdk-amd64/ works for Hadoop\n# But gives errors with Hive https://stackoverflow.com/questions/54037773/hive-exception-class-jdk-internal-loader-classloadersappclassloader-cannot\n# Hence this JVM needs to be installed\n!apt-get update > /dev/null\n!apt-get install openjdk-8-jdk-headless -qq > /dev/null",
"_____no_output_____"
],
[
"# Download the latest version of Hadoop\n# Change the version number in this and subsequent cells\n#\n!wget https://downloads.apache.org/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz\n# Unzip it\n# the tar command with the -x flag to extract, -z to uncompress, -v for verbose output, and -f to specify that we’re extracting from a file\n!tar -xzf hadoop-3.3.0.tar.gz\n#copy hadoop file to user/local\n!mv hadoop-3.3.0/ /usr/local/",
"--2021-08-20 11:42:57-- https://downloads.apache.org/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz\nResolving downloads.apache.org (downloads.apache.org)... 135.181.214.104, 135.181.209.10, 88.99.95.219, ...\nConnecting to downloads.apache.org (downloads.apache.org)|135.181.214.104|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 500749234 (478M) [application/x-gzip]\nSaving to: ‘hadoop-3.3.0.tar.gz’\n\nhadoop-3.3.0.tar.gz 100%[===================>] 477.55M 17.8MB/s in 29s \n\n2021-08-20 11:43:27 (16.6 MB/s) - ‘hadoop-3.3.0.tar.gz’ saved [500749234/500749234]\n\n"
]
],
[
[
"## 1.2 Set Environment Variables\n",
"_____no_output_____"
]
],
[
[
"#To find the default Java path\n!readlink -f /usr/bin/java | sed \"s:bin/java::\"\n!ls /usr/lib/jvm/",
"/usr/lib/jvm/java-11-openjdk-amd64/\ndefault-java\t\t java-11-openjdk-amd64 java-8-openjdk-amd64\njava-1.11.0-openjdk-amd64 java-1.8.0-openjdk-amd64\n"
],
[
"#To set java path, go to /usr/local/hadoop-3.3.0/etc/hadoop/hadoop-env.sh then\n#. . . export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/ . . .\n#we have used a simpler alternative route using os.environ - it works\n\nimport os\nos.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\" # default is changed\n#os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-11-openjdk-amd64/\"\nos.environ[\"HADOOP_HOME\"] = \"/usr/local/hadoop-3.3.0/\"",
"_____no_output_____"
],
[
"!echo $PATH",
"/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin\n"
],
[
"# Add Hadoop BIN to PATH\n# get current_path from output of previous command\ncurrent_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin'\nnew_path = current_path+':/usr/local/hadoop-3.3.0/bin/'\nos.environ[\"PATH\"] = new_path",
"_____no_output_____"
]
],
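The cells above print the current PATH, then paste it back by hand with the new directory appended. As a sketch (the helper name `append_to_path` is ours, not part of the notebook), the same append can be done idempotently straight from `os.environ`, with no copy-paste step:

```python
import os

def append_to_path(directory: str) -> str:
    """Append `directory` to PATH, skipping it if already present."""
    parts = [p for p in os.environ.get("PATH", "").split(os.pathsep) if p]
    if directory not in parts:
        parts.append(directory)
    os.environ["PATH"] = os.pathsep.join(parts)
    return os.environ["PATH"]

# calling twice leaves a single entry
append_to_path("/usr/local/hadoop-3.3.0/bin")
append_to_path("/usr/local/hadoop-3.3.0/bin")
```

This avoids the error-prone step of re-reading `!echo $PATH` output between cells.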
[
[
"## 1.3 Test Hadoop Installation",
"_____no_output_____"
]
],
[
[
"#Running Hadoop - Test RUN, not doing anything at all\n#!/usr/local/hadoop-3.3.0/bin/hadoop\n# UNCOMMENT the following line if you want to make sure that Hadoop is alive!\n#!hadoop",
"_____no_output_____"
],
[
"# Testing Hadoop with PI generating sample program, should calculate value of pi = 3.14157500000000000000\n# pi example\n#Uncomment the following line if you want to test Hadoop with pi example\n#!hadoop jar /usr/local/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 16 100000",
"_____no_output_____"
]
],
[
[
"#2 Hive",
"_____no_output_____"
],
[
"## 2.1 Download, Install HIVE",
"_____no_output_____"
]
],
[
[
"# Download and Unzip the correct version and unzip\n!wget https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz\n!tar xzf apache-hive-3.1.2-bin.tar.gz",
"--2021-08-20 11:45:26-- https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz\nResolving downloads.apache.org (downloads.apache.org)... 88.99.95.219, 135.181.214.104, 135.181.209.10, ...\nConnecting to downloads.apache.org (downloads.apache.org)|88.99.95.219|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 278813748 (266M) [application/x-gzip]\nSaving to: ‘apache-hive-3.1.2-bin.tar.gz’\n\napache-hive-3.1.2-b 100%[===================>] 265.90M 17.5MB/s in 16s \n\n2021-08-20 11:45:43 (16.2 MB/s) - ‘apache-hive-3.1.2-bin.tar.gz’ saved [278813748/278813748]\n\n"
]
],
[
[
"## 2.2 Set Environment Variables",
"_____no_output_____"
]
],
[
[
"# Make sure that the version number is correct and is as downloaded\nos.environ[\"HIVE_HOME\"] = \"/content/apache-hive-3.1.2-bin\"\n!echo $HIVE_HOME",
"/content/apache-hive-3.1.2-bin\n"
],
[
"!echo $PATH",
"/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/hadoop-3.3.0/bin/\n"
],
[
"# current_path is set from output of previous command\ncurrent_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/hadoop-3.3.0/bin/'\nnew_path = current_path+':/content/apache-hive-3.1.2-bin/bin'\nos.environ[\"PATH\"] = new_path\n!echo $PATH",
"/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/hadoop-3.3.0/bin/:/content/apache-hive-3.1.2-bin/bin\n"
],
[
"!echo $JAVA_HOME\n!echo $HADOOP_HOME\n!echo $HIVE_HOME",
"/usr/lib/jvm/java-8-openjdk-amd64\n/usr/local/hadoop-3.3.0/\n/content/apache-hive-3.1.2-bin\n"
]
],
[
[
"## 2.3 Set up HDFS Directories",
"_____no_output_____"
]
],
[
[
"!hdfs dfs -mkdir /tmp\n!hdfs dfs -chmod g+w /tmp\n#!hdfs dfs -ls /\n!hdfs dfs -mkdir -p /content/warehouse\n!hdfs dfs -chmod g+w /content/warehouse\n#!hdfs dfs -ls /content/",
"mkdir: `/tmp': File exists\n"
]
],
[
[
"## 2.4 Initialize HIVE - note and fix errors",
"_____no_output_____"
]
],
[
[
"# TYPE this command, do not copy and paste. Non-printing characters cause havoc\n# There will be two errors that we will fix\n# Run the following line if you wish to see the errors\n!schematool -initSchema -dbType derby\n",
"SLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:/content/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:/usr/local/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]\nException in thread \"main\" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V\n\tat org.apache.hadoop.conf.Configuration.set(Configuration.java:1380)\n\tat org.apache.hadoop.conf.Configuration.set(Configuration.java:1361)\n\tat org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)\n\tat org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)\n\tat org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)\n\tat org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)\n\tat org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5104)\n\tat org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)\n\tat org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:323)\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:236)\n"
]
],
[
[
"### 2.4.1 Fix One Warning, One Error \nThe SLF4J binding is duplicated; we need to locate both copies and remove one <br>\nThe Guava jar version is too low",
"_____no_output_____"
]
],
[
[
"# locate multiple instances of slf4j ...\n!ls $HADOOP_HOME/share/hadoop/common/lib/*slf4j*\n!ls $HIVE_HOME/lib/*slf4j*",
"/usr/local/hadoop-3.3.0//share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar\n/usr/local/hadoop-3.3.0//share/hadoop/common/lib/slf4j-api-1.7.25.jar\n/usr/local/hadoop-3.3.0//share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar\n/content/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar\n"
],
[
"# remove the logging jar from Hive, retaining the Hadoop jar\n!mv /content/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar ./",
"_____no_output_____"
],
[
"# guava jar needs to be above v20\n# https://stackoverflow.com/questions/45247193/nosuchmethoderror-com-google-common-base-preconditions-checkargumentzljava-lan\n!ls $HIVE_HOME/lib/gu*",
"/content/apache-hive-3.1.2-bin/lib/guava-19.0.jar\n"
],
[
"# the one available with Hadoop is better, v 27\n!ls $HADOOP_HOME/share/hadoop/hdfs/lib/gu*",
"/usr/local/hadoop-3.3.0//share/hadoop/hdfs/lib/guava-27.0-jre.jar\n"
],
[
"# Remove the Hive Guava and replace with Hadoop Guava\n!mv $HIVE_HOME/lib/guava-19.0.jar ./\n!cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-27.0-jre.jar $HIVE_HOME/lib/",
"_____no_output_____"
]
],
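The Guava check above is done by reading the jar file names by eye. A small helper (illustrative only; `jar_version` is our name, not part of the install) can compare versions parsed from jar names, which makes the "must be above v20" rule explicit:

```python
import re

def jar_version(filename: str):
    """Parse a version tuple out of a jar name like 'guava-27.0-jre.jar'."""
    m = re.search(r"-(\d+(?:\.\d+)+)", filename)
    return tuple(int(p) for p in m.group(1).split(".")) if m else None

# Hive 3.1.2 needs Guava newer than v20
assert jar_version("guava-19.0.jar") < (20,)       # the one shipped with Hive: too old
assert jar_version("guava-27.0-jre.jar") >= (20,)  # the one shipped with Hadoop: fine
```

Tuple comparison handles multi-part versions naturally, so `(19, 0) < (20,)` behaves as expected.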
[
[
"##2.5 Initialize HIVE",
"_____no_output_____"
]
],
[
[
"# Type this command, don't copy-paste\n# Non-printing characters inside the command will give totally illogical errors\n!schematool -initSchema -dbType derby",
"Metastore connection URL:\t jdbc:derby:;databaseName=metastore_db;create=true\nMetastore Connection Driver :\t org.apache.derby.jdbc.EmbeddedDriver\nMetastore connection User:\t APP\nStarting metastore schema initialization to 3.1.0\nInitialization script hive-schema-3.1.0.derby.sql\n\nInitialization script completed\nschemaTool completed\n"
]
],
[
[
"## 2.6 Test HIVE \n1. Create database\n2. Create table\n3. Insert data\n4. Retrieve data\n\nusing command line options as [given here](https://cwiki.apache.org/confluence/display/hive/languagemanual+cli#).",
"_____no_output_____"
]
],
[
[
"!hive -e \"create database if not exists praxisDB;\"",
"Hive Session ID = 284e0dcf-9dea-4946-8397-8c8a65a693b9\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 42e91be9-1812-4037-8e46-4905f4594e5d\nOK\nTime taken: 1.242 seconds\n"
],
[
"!hive -e \"show databases\"",
"Hive Session ID = 58ab1bdf-d431-4632-807a-48bee850f492\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 660719af-202a-48d6-8dc9-02a4eb8151ce\nOK\ndefault\npraxisdb\nTime taken: 1.459 seconds, Fetched: 2 row(s)\n"
],
[
"!hive -database praxisdb -e \"create table if not exists emp (name string, age int)\"",
"Hive Session ID = c3219f98-9726-4346-9aaa-969083f3ed8b\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = e61eb779-9bff-4084-90bf-16f8ca7c34be\nOK\nTime taken: 1.098 seconds\nOK\nTime taken: 1.202 seconds\n"
],
[
"!hive -database praxisdb -e \"show tables\"",
"Hive Session ID = ecbc302b-53d3-47bc-80b9-817ff2abfd5c\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 5d1942d6-383b-400c-9b62-d01a1426e8c7\nOK\nTime taken: 0.966 seconds\nOK\nemp\nTime taken: 0.513 seconds, Fetched: 1 row(s)\n"
],
[
"!hive -database praxisdb -e \"insert into emp values ('naren', 70)\"",
"Hive Session ID = b47021a9-ab31-4704-8832-7230bb735538\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 8e9adca4-d2d8-45e5-85cf-b3500bbc9e25\nOK\nTime taken: 0.948 seconds\nQuery ID = root_20210820115333_20a40479-9427-47e4-b067-c46e315d0c27\nTotal jobs = 3\nLaunching Job 1 out of 3\nNumber of reduce tasks determined at compile time: 1\nIn order to change the average load for a reducer (in bytes):\n set hive.exec.reducers.bytes.per.reducer=<number>\nIn order to limit the maximum number of reducers:\n set hive.exec.reducers.max=<number>\nIn order to set a constant number of reducers:\n set mapreduce.job.reduces=<number>\nJob running in-process (local Hadoop)\n2021-08-20 11:53:39,020 Stage-1 map = 100%, reduce = 100%\nEnded Job = job_local333599249_0001\nStage-4 is selected by condition resolver.\nStage-3 is filtered out by condition resolver.\nStage-5 is filtered out by condition resolver.\nMoving data to directory file:/user/hive/warehouse/praxisdb.db/emp/.hive-staging_hive_2021-08-20_11-53-33_562_5269288869856455183-1/-ext-10000\nLoading data to table praxisdb.emp\nMapReduce Jobs Launched: \nStage-Stage-1: HDFS Read: 0 HDFS Write: 0 SUCCESS\nTotal MapReduce CPU Time Spent: 0 msec\nOK\nTime taken: 6.541 seconds\n"
],
[
"!hive -database praxisdb -e \"insert into emp values ('aditya', 49)\"",
"Hive Session ID = 8afdafbd-ca2c-43a9-98f5-bce7535a29ef\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = a4262f89-9d9b-4181-8be4-04e87217792c\nOK\nTime taken: 0.957 seconds\nQuery ID = root_20210820115356_faea7857-d10e-47ec-bc81-6cce39a28cb8\nTotal jobs = 3\nLaunching Job 1 out of 3\nNumber of reduce tasks determined at compile time: 1\nIn order to change the average load for a reducer (in bytes):\n set hive.exec.reducers.bytes.per.reducer=<number>\nIn order to limit the maximum number of reducers:\n set hive.exec.reducers.max=<number>\nIn order to set a constant number of reducers:\n set mapreduce.job.reduces=<number>\nJob running in-process (local Hadoop)\n2021-08-20 11:54:01,852 Stage-1 map = 100%, reduce = 100%\nEnded Job = job_local287010703_0001\nStage-4 is selected by condition resolver.\nStage-3 is filtered out by condition resolver.\nStage-5 is filtered out by condition resolver.\nMoving data to directory file:/user/hive/warehouse/praxisdb.db/emp/.hive-staging_hive_2021-08-20_11-53-56_772_303806826349560737-1/-ext-10000\nLoading data to table praxisdb.emp\nMapReduce Jobs Launched: \nStage-Stage-1: HDFS Read: 0 HDFS Write: 0 SUCCESS\nTotal MapReduce CPU Time Spent: 0 msec\nOK\nTime taken: 6.374 seconds\n"
],
[
"!hive -database praxisdb -e \"select * from emp\"",
"Hive Session ID = 017c27a0-3571-48ec-9de5-d59d8f117490\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = d8d009c2-6f7c-4754-9532-b9f371c73fd8\nOK\nTime taken: 1.069 seconds\nOK\naditya\t49\nnaren\t70\nTime taken: 2.367 seconds, Fetched: 2 row(s)\n"
],
[
"# Silent Mode\n!hive -S -database praxisdb -e \"select * from emp\"",
"Hive Session ID = 0581379c-28ce-4a4c-8508-7794a78011ea\nHive Session ID = a4dd94db-89ca-4b8f-b7e5-9b68c92d8b57\naditya\t49\nnaren\t70\n"
]
],
[
[
"## 2.7 Bulk Data Load from CSV file",
"_____no_output_____"
]
],
[
[
"#drop table\n!hive -database praxisDB -e 'DROP table if exists eCommerce'\n#create table\n# Invoice Date is being treated as a STRING because input data is not correctly formatted\n!hive -database praxisDB -e \" \\\nCREATE TABLE eCommerce ( \\\nInvoiceNo varchar(10), \\\nStockCode varchar(10), \\\nDescription varchar(50), \\\nQuantity int, \\\nInvoiceDate string, \\\nUnitPrice decimal(6,2), \\\nCustomerID varchar(10), \\\nCountry varchar(15) \\\n) row format delimited fields terminated by ','; \\\n\"",
"Hive Session ID = 53ecd67d-8b1d-411b-b183-c95d93465749\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = befb8653-ec8c-4707-bf23-996cc90f83da\nOK\nTime taken: 1.15 seconds\nOK\nTime taken: 0.147 seconds\nHive Session ID = 9817bff5-c2a4-4794-b4dd-f6e034b9fbbf\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 75534260-e711-4f66-ae2d-a72e87c59862\nOK\nTime taken: 0.929 seconds\nOK\nTime taken: 1.092 seconds\n"
],
[
"!hive -database praxisdb -e \"describe eCommerce\"",
"Hive Session ID = 2c92f565-80b2-4b55-a888-39ed4e4c08d9\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = d6989798-b059-41d9-b569-41f31ca3b7b7\nOK\nTime taken: 0.917 seconds\nOK\ninvoiceno \tvarchar(10) \t \nstockcode \tvarchar(10) \t \ndescription \tvarchar(50) \t \nquantity \tint \t \ninvoicedate \tstring \t \nunitprice \tdecimal(6,2) \t \ncustomerid \tvarchar(10) \t \ncountry \tvarchar(15) \t \nTime taken: 0.798 seconds, Fetched: 8 row(s)\n"
]
],
[
[
"This data may not be clean and may have commas embedded in the CSV file. To see how clean this data is, look at this notebook: [Spark SQLContext HiveContext](https://github.com/prithwis/KKolab/blob/main/KK_C1_SparkSQL_SQLContext_HiveContext.ipynb)",
"_____no_output_____"
]
],
[
[
"#Data as CSV file\n!gdown https://drive.google.com/uc?id=1JJH24ZZaiJrEKValD--UtyFcWl7UanwV # 2% data ~ 10K rows\n!gdown https://drive.google.com/uc?id=1g7mJ0v4fkERW0HWc1eq-SHs_jvQ0N2Oe # 100% data ~ 500K rows",
"Downloading...\nFrom: https://drive.google.com/uc?id=1JJH24ZZaiJrEKValD--UtyFcWl7UanwV\nTo: /content/eCommerce_02PC_2021.csv\n100% 917k/917k [00:00<00:00, 5.62MB/s]\nDownloading...\nFrom: https://drive.google.com/uc?id=1g7mJ0v4fkERW0HWc1eq-SHs_jvQ0N2Oe\nTo: /content/eCommerce_Full_2021.csv\n45.6MB [00:01, 39.1MB/s]\n"
],
[
"# remove the CR (\\r) character from the end of each row if it exists\n!sed 's/\\r//' /content/eCommerce_Full_2021.csv > datafile.csv\n#!sed 's/\\r//' /content/eCommerce_02PC_2021.csv > datafile.csv\n# remove the first line containing headers from the file\n!sed -i -e \"1d\" datafile.csv \n!head datafile.csv ",
"536365,85123A,WHITE HANGING HEART T-LIGHT HOLDER,6,12/1/2010 8:26,2.55,17850,United Kingdom\n536365,71053,WHITE METAL LANTERN,6,12/1/2010 8:26,3.39,17850,United Kingdom\n536365,84406B,CREAM CUPID HEARTS COAT HANGER,8,12/1/2010 8:26,2.75,17850,United Kingdom\n536365,84029G,KNITTED UNION FLAG HOT WATER BOTTLE,6,12/1/2010 8:26,3.39,17850,United Kingdom\n536365,84029E,RED WOOLLY HOTTIE WHITE HEART.,6,12/1/2010 8:26,3.39,17850,United Kingdom\n536365,22752,SET 7 BABUSHKA NESTING BOXES,2,12/1/2010 8:26,7.65,17850,United Kingdom\n536365,21730,GLASS STAR FROSTED T-LIGHT HOLDER,6,12/1/2010 8:26,4.25,17850,United Kingdom\n536366,22633,HAND WARMER UNION JACK,6,12/1/2010 8:28,1.85,17850,United Kingdom\n536366,22632,HAND WARMER RED POLKA DOT,6,12/1/2010 8:28,1.85,17850,United Kingdom\n536367,84879,ASSORTED COLOUR BIRD ORNAMENT,32,12/1/2010 8:34,1.69,13047,United Kingdom\n"
],
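The two `sed` steps above (strip carriage returns, drop the header row) can be mirrored in plain Python if you prefer to keep the cleanup in one language. This is a sketch; `clean_csv_lines` is our name, not part of the notebook:

```python
def clean_csv_lines(lines):
    """Mirror the sed steps: strip trailing CR/LF, then drop the header row."""
    cleaned = [ln.rstrip("\r\n") for ln in lines]
    return cleaned[1:]

raw = ["InvoiceNo,StockCode\r\n", "536365,85123A\r\n", "536365,71053\n"]
rows = clean_csv_lines(raw)
# → ['536365,85123A', '536365,71053']
```

Stray `\r` characters matter here because Hive's `LOAD DATA` treats them as part of the last field, which would corrupt the `Country` column.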
[
"# delete all rows from table\n!hive -database praxisdb -e 'TRUNCATE TABLE eCommerce'\n# LOAD\n!hive -database praxisdb -e \"LOAD DATA LOCAL INPATH 'datafile.csv' INTO TABLE eCommerce\"",
"Hive Session ID = 64097d84-1d25-46c0-9fbf-6ac4c9ed8230\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 8070260e-fffe-4c34-9ed8-12b0b67a8398\nOK\nTime taken: 1.139 seconds\nOK\nTime taken: 1.131 seconds\nHive Session ID = 6a6e7bb6-69c5-4668-900f-494097f8b50a\n\nLogging initialized using configuration in jar:file:/content/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true\nHive Session ID = 5afe7c79-89ff-49f6-982a-fad95acbbf0a\nOK\nTime taken: 0.958 seconds\nLoading data to table praxisdb.ecommerce\nOK\nTime taken: 1.549 seconds\n"
],
[
"!hive -S -database praxisdb -e \"select count(*) from eCommerce\"",
"Hive Session ID = 3c84292e-d359-4633-ab3b-ebc61f4ca9f3\nHive Session ID = 19a497aa-1163-475d-b473-f6307e42088f\n541909\n"
],
[
"!hive -S -database praxisdb -e \"select * from eCommerce limit 30\"",
"Hive Session ID = a261fbc0-97bd-4236-b3d7-69790b3eb89e\nHive Session ID = 2608f418-8fbf-4c30-ba13-584efd14166f\n536365\t85123A\tWHITE HANGING HEART T-LIGHT HOLDER\t6\t12/1/2010 8:26\t2.55\t17850\tUnited Kingdom\n536365\t71053\tWHITE METAL LANTERN\t6\t12/1/2010 8:26\t3.39\t17850\tUnited Kingdom\n536365\t84406B\tCREAM CUPID HEARTS COAT HANGER\t8\t12/1/2010 8:26\t2.75\t17850\tUnited Kingdom\n536365\t84029G\tKNITTED UNION FLAG HOT WATER BOTTLE\t6\t12/1/2010 8:26\t3.39\t17850\tUnited Kingdom\n536365\t84029E\tRED WOOLLY HOTTIE WHITE HEART.\t6\t12/1/2010 8:26\t3.39\t17850\tUnited Kingdom\n536365\t22752\tSET 7 BABUSHKA NESTING BOXES\t2\t12/1/2010 8:26\t7.65\t17850\tUnited Kingdom\n536365\t21730\tGLASS STAR FROSTED T-LIGHT HOLDER\t6\t12/1/2010 8:26\t4.25\t17850\tUnited Kingdom\n536366\t22633\tHAND WARMER UNION JACK\t6\t12/1/2010 8:28\t1.85\t17850\tUnited Kingdom\n536366\t22632\tHAND WARMER RED POLKA DOT\t6\t12/1/2010 8:28\t1.85\t17850\tUnited Kingdom\n536367\t84879\tASSORTED COLOUR BIRD ORNAMENT\t32\t12/1/2010 8:34\t1.69\t13047\tUnited Kingdom\n536367\t22745\tPOPPY'S PLAYHOUSE BEDROOM \t6\t12/1/2010 8:34\t2.10\t13047\tUnited Kingdom\n536367\t22748\tPOPPY'S PLAYHOUSE KITCHEN\t6\t12/1/2010 8:34\t2.10\t13047\tUnited Kingdom\n536367\t22749\tFELTCRAFT PRINCESS CHARLOTTE DOLL\t8\t12/1/2010 8:34\t3.75\t13047\tUnited Kingdom\n536367\t22310\tIVORY KNITTED MUG COSY \t6\t12/1/2010 8:34\t1.65\t13047\tUnited Kingdom\n536367\t84969\tBOX OF 6 ASSORTED COLOUR TEASPOONS\t6\t12/1/2010 8:34\t4.25\t13047\tUnited Kingdom\n536367\t22623\tBOX OF VINTAGE JIGSAW BLOCKS \t3\t12/1/2010 8:34\t4.95\t13047\tUnited Kingdom\n536367\t22622\tBOX OF VINTAGE ALPHABET BLOCKS\t2\t12/1/2010 8:34\t9.95\t13047\tUnited Kingdom\n536367\t21754\tHOME BUILDING BLOCK WORD\t3\t12/1/2010 8:34\t5.95\t13047\tUnited Kingdom\n536367\t21755\tLOVE BUILDING BLOCK WORD\t3\t12/1/2010 8:34\t5.95\t13047\tUnited Kingdom\n536367\t21777\tRECIPE BOX WITH METAL HEART\t4\t12/1/2010 8:34\t7.95\t13047\tUnited 
Kingdom\n536367\t48187\tDOORMAT NEW ENGLAND\t4\t12/1/2010 8:34\t7.95\t13047\tUnited Kingdom\n536368\t22960\tJAM MAKING SET WITH JARS\t6\t12/1/2010 8:34\t4.25\t13047\tUnited Kingdom\n536368\t22913\tRED COAT RACK PARIS FASHION\t3\t12/1/2010 8:34\t4.95\t13047\tUnited Kingdom\n536368\t22912\tYELLOW COAT RACK PARIS FASHION\t3\t12/1/2010 8:34\t4.95\t13047\tUnited Kingdom\n536368\t22914\tBLUE COAT RACK PARIS FASHION\t3\t12/1/2010 8:34\t4.95\t13047\tUnited Kingdom\n536369\t21756\tBATH BUILDING BLOCK WORD\t3\t12/1/2010 8:35\t5.95\t13047\tUnited Kingdom\n536370\t22728\tALARM CLOCK BAKELIKE PINK\t24\t12/1/2010 8:45\t3.75\t12583\tFrance\n536370\t22727\tALARM CLOCK BAKELIKE RED \t24\t12/1/2010 8:45\t3.75\t12583\tFrance\n536370\t22726\tALARM CLOCK BAKELIKE GREEN\t12\t12/1/2010 8:45\t3.75\t12583\tFrance\n536370\t21724\tPANDA AND BUNNIES STICKER SHEET\t12\t12/1/2010 8:45\t0.85\t12583\tFrance\n"
]
],
[
[
"#Chronobooks <br>\n<hr>\nChronotantra and Chronoyantra are two science fiction novels that explore the collapse of human civilisation on Earth and then its rebirth and reincarnation both on Earth as well as on the distant worlds of Mars, Titan and Enceladus. But is it the human civilisation that is being reborn? Or is it some other sentience that is revealing itself. \nIf you have an interest in AI and found this material useful, you may consider buying these novels, in paperback or kindle, from [http://bit.ly/chronobooks](http://bit.ly/chronobooks)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a5d01b946d86cb6d3f28e42c8676a541516d4bc
| 459,034 |
ipynb
|
Jupyter Notebook
|
python/.ipynb_checkpoints/neuron_yield-checkpoint.ipynb
|
int-brain-lab/analysis
|
a8bf474815cb6ed690b3da07ff2f501459375b75
|
[
"MIT"
] | 7 |
2018-11-22T17:36:02.000Z
|
2020-10-17T10:59:59.000Z
|
python/.ipynb_checkpoints/neuron_yield-checkpoint.ipynb
|
int-brain-lab/tmp_analysis_matlab
|
effedfd0b5997411f576b4ebcc747c8613715c24
|
[
"MIT"
] | 3 |
2020-01-13T17:47:14.000Z
|
2020-05-21T18:31:47.000Z
|
python/.ipynb_checkpoints/neuron_yield-checkpoint.ipynb
|
int-brain-lab/tmp_analysis_matlab
|
effedfd0b5997411f576b4ebcc747c8613715c24
|
[
"MIT"
] | 4 |
2019-05-30T17:55:32.000Z
|
2021-01-06T18:45:48.000Z
| 330.240288 | 192,608 | 0.905022 |
[
[
[
"# what's the neuron yield across probes, experimenters and recording sites?\nAnne Urai & Nate Miska, 2020",
"_____no_output_____"
]
],
[
[
"# GENERAL THINGS FOR COMPUTING AND PLOTTING\nimport pandas as pd\nimport numpy as np\nimport os, sys, time\nimport scipy as sp\n\n# visualisation\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# ibl specific things\nimport datajoint as dj\nfrom ibl_pipeline import reference, subject, action, acquisition, data, behavior\nfrom ibl_pipeline.analyses import behavior as behavior_analysis\nephys = dj.create_virtual_module('ephys', 'ibl_ephys')\nfigpath = os.path.join(os.path.expanduser('~'), 'Data/Figures_IBL')",
"Connecting [email protected]:3306\nConnected to https://alyx.internationalbrainlab.org as anneu\n"
]
],
[
[
"## 1. neuron yield per lab and Npix probe over time\nReplicates https://github.com/int-brain-lab/analysis/blob/master/python/probe_performance_over_sessions.py using DJ",
"_____no_output_____"
]
],
[
[
"probe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \\\n * (acquisition.SessionProject\n & 'session_project = \"ibl_neuropixel_brainwide_01\"') \\\n * behavior_analysis.SessionTrainingStatus\nprobe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics',\n 'good_enough_for_brainwide_map',\n session_date='DATE(session_start_time)')\nclusts = probe_insertions.fetch(format='frame').reset_index()",
"_____no_output_____"
],
[
"# put metrics into df columns from the blob (feature request: can these be added as attributes instead?)\nfor kix, k in enumerate(['ks2_label']):\n tmp_var = []\n for id, c in clusts.iterrows():\n if k in c['metrics'].keys():\n tmp = c['metrics'][k]\n else:\n tmp = np.nan\n tmp_var.append(tmp)\n clusts[k] = tmp_var",
"_____no_output_____"
],
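The loop above pulls `ks2_label` out of each row's `metrics` blob, defaulting to `np.nan` when the key is missing. The same extraction in plain Python (a sketch using `dict.get`, with `None` standing in for `np.nan`; `extract_metric` is our name) looks like:

```python
def extract_metric(rows, key):
    """Pull `key` out of each row's metrics dict; None when the key is absent."""
    return [row.get("metrics", {}).get(key) for row in rows]

rows = [{"metrics": {"ks2_label": "good"}}, {"metrics": {}}]
labels = extract_metric(rows, "ks2_label")
# → ['good', None]
```

In the notebook the result is assigned back as a regular DataFrame column, which is what makes the later `groupby(['ks2_label'])` calls possible.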
[
"# hofer and mrsic-flogel probes are shared\nclusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab')\nclusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab')\nclusts.lab_name.unique()",
"_____no_output_____"
],
[
"clusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number']\nclusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index()\n\n# use recording session number instead of date\nclusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1",
"_____no_output_____"
],
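The `groupby(...).cumcount() + 1` trick above turns session dates into per-probe recording numbers. As a sketch of the same running count in plain Python (the `cumcount` helper is ours, not the pandas call):

```python
from collections import defaultdict

def cumcount(keys, start=1):
    """Running count per key, like pandas groupby(...).cumcount() + 1."""
    seen = defaultdict(int)
    out = []
    for k in keys:
        seen[k] += 1
        out.append(seen[k] + start - 1)
    return out

recording_numbers = cumcount(["probeA", "probeA", "probeB", "probeA"])
# → [1, 2, 1, 3]
```

This only works as a session index because the rows are already sorted by `session_start_time` within each probe.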
[
"sns.set(style=\"ticks\", context=\"paper\")\ng, axes = plt.subplots(6,6,figsize=(18,20))\n\nfor probe, ax in zip(clusts_summ.probe_name.unique(), axes.flatten()):\n df = clusts_summ[clusts_summ.probe_name==probe].groupby(['session_start_time','ks2_label']).session_date.sum()\n df.unstack().plot.barh(ax=ax, stacked=True, legend=False, colormap='Pastel2')\n ax.set_title(probe, fontsize=12)\n ax.axvline(x=60, color='seagreen', linestyle=\"--\")\n ax.set_yticks([])\n ax.set_ylabel('')\n ax.set_ylim([-1, np.max([max(ax.get_ylim()), 10])])\n ax.set_xlim([0, 1000])\n \naxes.flatten()[-1].set_axis_off()\nsns.despine(trim=True) \nplt.tight_layout()\nplt.xlabel('Number of KS2 neurons')\nplt.ylabel('Recording session')\ng.savefig(os.path.join(figpath, 'probe_yield_oversessions.pdf'))",
"_____no_output_____"
]
],
[
[
"# 2. what is the overall yield of sessions, neurons etc?",
"_____no_output_____"
]
],
[
[
"## overall distribution of neurons per session\ng = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2')\ng.map(sns.distplot, \"session_date\", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()\nfor ax in g.axes.flatten():\n ax.axvline(x=60, color='seagreen', linestyle=\"--\")\n \ng.set_xlabels('Number of KS2 neurons')\ng.set_ylabels('Number of sessions')\ng.savefig(os.path.join(figpath, 'probe_yield_allrecs.pdf'))\n\nprint('TOTAL YIELD SO FAR:')\nclusts.groupby(['ks2_label'])['ks2_label'].count()",
"TOTAL YIELD SO FAR:\n"
],
[
"## overall distribution of neurons per session\ng = sns.FacetGrid(data=clusts_summ, hue='ks2_label', col_wrap=4, col='lab_name', palette='Set2')\ng.map(sns.distplot, \"session_date\", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()\nfor ax in g.axes.flatten():\n ax.axvline(x=60, color='seagreen', linestyle=\"--\")\n \ng.set_xlabels('Number of KS2 neurons')\ng.set_ylabels('Number of sessions')\n#g.savefig(os.path.join(figpath, 'probe_yield_allrecs_perlab.pdf'))\n",
"_____no_output_____"
],
[
"## overall number of sessions that meet criteria for behavior and neural yield\nsessions = clusts.loc[clusts.ks2_label == 'good', :].groupby(['lab_name', 'subject_uuid', 'session_start_time', \n 'good_enough_for_brainwide_map'])['cluster_id'].count().reset_index()\nsessions['enough_neurons'] = (sessions['cluster_id'] > 60)\nct = sessions.groupby(['good_enough_for_brainwide_map', 'enough_neurons'])['cluster_id'].count().reset_index()\nprint('total nr of sessions: %d'%ct.cluster_id.sum())\npd.pivot_table(ct, columns=['good_enough_for_brainwide_map'], values=['cluster_id'], index=['enough_neurons'])\n#sessions.describe()\n# pd.pivot_table(df, values='cluster_id', index=['lab_name'],\n# columns=['enough_neurons'], aggfunc=np.sum)",
"total nr of sessions: 124\n"
],
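The pivot above counts sessions against two independent criteria: `good_enough_for_brainwide_map` and a yield of more than 60 good KS2 units (the dashed line at x=60 in the plots). A minimal sketch of that inclusion rule (the function name and signature are illustrative):

```python
def session_passes(n_good_units, good_behavior, threshold=60):
    """A session counts toward the brain-wide map only if behavior passed
    and more than `threshold` good KS2 units were recorded."""
    return bool(good_behavior) and n_good_units > threshold

assert session_passes(120, True)
assert not session_passes(40, True)     # enough behavior, too few units
assert not session_passes(120, False)   # enough units, behavior failed
```

Keeping the two criteria separate in the pivot table is what lets us see whether drop-out is driven by behavior or by yield per lab.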
[
"# check that this pandas wrangling is correct...\nephys_sessions = acquisition.Session * subject.Subject * subject.SubjectLab \\\n * (acquisition.SessionProject\n & 'session_project = \"ibl_neuropixel_brainwide_01\"') \\\n * behavior_analysis.SessionTrainingStatus \\\n & ephys.ProbeInsertion & ephys.DefaultCluster.Metrics \nephys_sessions = ephys_sessions.fetch(format='frame').reset_index()\n# ephys_sessions\n# ephys_sessions.groupby(['good_enough_for_brainwide_map'])['session_start_time'].describe()",
"_____no_output_____"
],
[
"# which sessions do *not* show good enough behavior?\nephys_sessions.loc[ephys_sessions.good_enough_for_brainwide_map == 0, :].groupby([\n 'lab_name', 'subject_nickname', 'session_start_time'])['session_start_time'].unique()",
"_____no_output_____"
],
[
"# per lab, what's the drop-out due to behavior? \nephys_sessions['good_enough_for_brainwide_map'] = ephys_sessions['good_enough_for_brainwide_map'].astype(int)\nephys_sessions.groupby(['lab_name'])['good_enough_for_brainwide_map'].describe()",
"_____no_output_____"
],
[
"ephys_sessions['good_enough_for_brainwide_map'].describe()",
"_____no_output_____"
],
[
"# per lab, what's the dropout due to yield?\nsessions['enough_neurons'] = sessions['enough_neurons'].astype(int)\nsessions.groupby(['lab_name'])['enough_neurons'].describe()",
"_____no_output_____"
],
[
"## also show the total number of neurons, only from good behavior sessions\nprobe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \\\n * (acquisition.SessionProject\n & 'session_project = \"ibl_neuropixel_brainwide_01\"') \\\n * (behavior_analysis.SessionTrainingStatus & \n 'good_enough_for_brainwide_map = 1')\nprobe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics',\n 'good_enough_for_brainwide_map',\n session_date='DATE(session_start_time)')\nclusts = probe_insertions.fetch(format='frame').reset_index()\n\n# put metrics into df columns from the blob (feature request: can these be added as attributes instead?)\nfor kix, k in enumerate(['ks2_label']):\n tmp_var = []\n for id, c in clusts.iterrows():\n if k in c['metrics'].keys():\n tmp = c['metrics'][k]\n else:\n tmp = np.nan\n tmp_var.append(tmp)\n clusts[k] = tmp_var\n \n# hofer and mrsic-flogel probes are shared\nclusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab')\nclusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab')\nclusts.lab_name.unique()\n\nclusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number']\nclusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index()\n\n# use recording session number instead of date\nclusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1\n\n## overall distribution of neurons per session\ng = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2')\ng.map(sns.distplot, \"session_date\", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()\nfor ax in g.axes.flatten():\n ax.axvline(x=60, color='seagreen', linestyle=\"--\")\n \ng.set_xlabels('Number of KS2 neurons')\ng.set_ylabels('Number of sessions')\ng.savefig(os.path.join(figpath, 'probe_yield_allrecs_goodsessions.pdf'))\n\nprint('TOTAL YIELD (from good sessions) SO FAR:')\nclusts.groupby(['ks2_label'])['ks2_label'].count()",
"TOTAL YIELD (from good sessions) SO FAR:\n"
]
],
[
[
"## 2. how does probe yield in the repeated site differ between mice/experimenters?",
"_____no_output_____"
]
],
[
[
"probes_rs = (ephys.ProbeTrajectory & 'insertion_data_source = \"Planned\"'\n & 'x BETWEEN -2400 AND -2100' & 'y BETWEEN -2100 AND -1900' & 'theta BETWEEN 14 AND 16')\n\nclust = ephys.DefaultCluster * ephys.DefaultCluster.Metrics * probes_rs * subject.SubjectLab() * subject.Subject()\nclust = clust.proj('cluster_amp', 'cluster_depth', 'firing_rate', 'subject_nickname', 'lab_name','metrics',\n 'x', 'y', 'theta', 'phi', 'depth')\nclusts = clust.fetch(format='frame').reset_index()\nclusts['col_name'] = clusts['lab_name'] + ', ' + clusts['subject_nickname']\n\n# put metrics into df columns from the blob\nfor kix, k in enumerate(clusts['metrics'][0].keys()):\n tmp_var = []\n for id, c in clusts.iterrows():\n if k in c['metrics'].keys():\n tmp = c['metrics'][k]\n else:\n tmp = np.nan\n tmp_var.append(tmp)\n clusts[k] = tmp_var\n\nclusts",
"_____no_output_____"
],
[
"sns.set(style=\"ticks\", context=\"paper\")\ng, axes = plt.subplots(1,1,figsize=(4,4))\ndf = clusts.groupby(['col_name', 'ks2_label']).ks2_label.count()\ndf.unstack().plot.barh(ax=axes, stacked=True, legend=True, colormap='Pastel2')\naxes.axvline(x=60, color='seagreen', linestyle=\"--\")\naxes.set_ylabel('')\nsns.despine(trim=True) \nplt.xlabel('Number of KS2 neurons')\ng.savefig(os.path.join(figpath, 'probe_yield_rs.pdf'))",
"_____no_output_____"
],
[
"## firing rate as a function of depth\nprint('plotting')\ng = sns.FacetGrid(data=clusts, col='col_name', col_wrap=4, hue='ks2_label',\n palette='Pastel2', col_order=sorted(clusts.col_name.unique()))\ng.map(sns.scatterplot, \"firing_rate\", \"cluster_depth\", alpha=0.5).add_legend()\ng.set_titles('{col_name}')\ng.set_xlabels('Firing rate (spks/s)')\ng.set_ylabels('Depth')\nplt.tight_layout()\nsns.despine(trim=True)\ng.savefig(os.path.join(figpath, 'neurons_rsi_firingrate.pdf'))\n",
"plotting\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5d092c12c0ccf3a5114c4f810a08cc40ff42d4
| 5,079 |
ipynb
|
Jupyter Notebook
|
assignments/assignment03/NumpyEx02.ipynb
|
rsterbentz/phys202-2015-work
|
c8c441ef8308b6b2f3edd71938b91dcabe370bbd
|
[
"MIT"
] | null | null | null |
assignments/assignment03/NumpyEx02.ipynb
|
rsterbentz/phys202-2015-work
|
c8c441ef8308b6b2f3edd71938b91dcabe370bbd
|
[
"MIT"
] | null | null | null |
assignments/assignment03/NumpyEx02.ipynb
|
rsterbentz/phys202-2015-work
|
c8c441ef8308b6b2f3edd71938b91dcabe370bbd
|
[
"MIT"
] | null | null | null | 21.43038 | 222 | 0.519984 |
[
[
[
"# Numpy Exercise 2",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"## Factorial",
"_____no_output_____"
],
[
"Write a function that computes the factorial of small numbers using `np.arange` and `np.cumprod`.",
"_____no_output_____"
]
],
[
[
"def np_fact(n):\n \"\"\"Compute n! = n*(n-1)*...*1 using Numpy.\"\"\"\n if n == 0:\n return 1\n else:\n a = np.arange(1,float(n)+1,1)\n return a.cumprod()[n-1]",
"_____no_output_____"
],
[
"assert np_fact(0)==1\nassert np_fact(1)==1\nassert np_fact(10)==3628800\nassert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]",
"_____no_output_____"
]
],
[
[
"Write a function that computes the factorial of small numbers using a Python loop.",
"_____no_output_____"
]
],
[
[
"def loop_fact(n):\n \"\"\"Compute n! using a Python for loop.\"\"\"\n if n == 0:\n return 1\n else:\n prod = 1\n for i in range(1, n+1):\n prod = prod * i\n return prod",
"_____no_output_____"
],
[
"assert loop_fact(0)==1\nassert loop_fact(1)==1\nassert loop_fact(10)==3628800\nassert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]",
"_____no_output_____"
]
],
[
[
"Use the `%timeit` magic to time both versions of this function for an argument of `50`. The syntax for `%timeit` is:\n\n```python\n%timeit -n1 -r1 function_to_time()\n```",
"_____no_output_____"
]
],
[
[
"%timeit -n1 -r1 np_fact(50)\n%timeit -n1 -r1 loop_fact(50)",
"1 loops, best of 1: 62.9 µs per loop\n1 loops, best of 1: 28.8 µs per loop\n"
]
],
[
[
"In the cell below, summarize your timing tests. Which version is faster? Why do you think that version is faster?",
"_____no_output_____"
],
[
"It seems that, after a couple of runs, the loop_fact function is consistently faster than np_fact. This is likely because, for such a small n, the overhead of creating a NumPy array in np_fact outweighs the speed of its vectorized cumprod, while the plain Python loop has no such setup cost.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a5d13a4507e2e3ea3e6475147ab224086a5b043
| 510,760 |
ipynb
|
Jupyter Notebook
|
tutorials/noise/3_measurement_error_mitigation.ipynb
|
mmcelaney/qiskit-tutorials
|
478bc8311312504affb79a3cbac384fb261491be
|
[
"Apache-2.0"
] | 1 |
2021-02-02T06:30:04.000Z
|
2021-02-02T06:30:04.000Z
|
tutorials/noise/3_measurement_error_mitigation.ipynb
|
mmcelaney/qiskit-tutorials
|
478bc8311312504affb79a3cbac384fb261491be
|
[
"Apache-2.0"
] | null | null | null |
tutorials/noise/3_measurement_error_mitigation.ipynb
|
mmcelaney/qiskit-tutorials
|
478bc8311312504affb79a3cbac384fb261491be
|
[
"Apache-2.0"
] | 2 |
2019-11-04T01:23:31.000Z
|
2020-03-08T17:28:31.000Z
| 156.339149 | 24,736 | 0.778238 |
[
[
[
"# Measurement Error Mitigation",
"_____no_output_____"
],
[
"## Introduction\n\nThe measurement calibration is used to mitigate measurement errors. \nThe main idea is to prepare all $2^n$ basis input states and compute the probability of measuring counts in the other basis states. \nFrom these calibrations, it is possible to correct the average results of another experiment of interest. This notebook gives examples for how to use the ``ignis.mitigation.measurement`` module.",
"_____no_output_____"
]
],
[
[
"# Import general libraries (needed for functions)\nimport numpy as np\nimport time\n\n# Import Qiskit classes\nimport qiskit \nfrom qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer\nfrom qiskit.providers.aer import noise\nfrom qiskit.tools.visualization import plot_histogram\n\n# Import measurement calibration functions\nfrom qiskit.ignis.mitigation.measurement import (complete_meas_cal, tensored_meas_cal,\n CompleteMeasFitter, TensoredMeasFitter)",
"_____no_output_____"
]
],
[
[
"## 3 Qubit Example of the Calibration Matrices",
"_____no_output_____"
],
[
"Assume that we would like to generate a calibration matrix for the 3 qubits Q2, Q3 and Q4 in a 5-qubit Quantum Register [Q0,Q1,Q2,Q3,Q4]. \n\nSince we have 3 qubits, there are $2^3=8$ possible quantum states.",
"_____no_output_____"
],
[
"## Generating Measurement Calibration Circuits\n\nFirst, we generate a list of measurement calibration circuits for the full Hilbert space. \nEach circuit creates a basis state. \nIf there are $n=3$ qubits, then we get $2^3=8$ calibration circuits.",
"_____no_output_____"
],
[
"The following function **complete_meas_cal** returns a list **meas_calibs** of `QuantumCircuit` objects containing the calibration circuits, \nand a list **state_labels** of the calibration state labels.\n\nThe input to this function can be given in one of the following three forms:\n\n- **qubit_list:** A list of qubits to perform the measurement correction on, or:\n- **qr (QuantumRegister):** A quantum register, or:\n- **cr (ClassicalRegister):** A classical register.\n\nIn addition, one can provide a string **circlabel**, which is added at the beginning of the circuit names for unique identification.\n\nFor example, in our case, the input is a 5-qubit `QuantumRegister` containing the qubits Q2,Q3,Q4:",
"_____no_output_____"
]
],
[
[
"# Generate the calibration circuits\nqr = qiskit.QuantumRegister(5)\nqubit_list = [2,3,4]\nmeas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')",
"_____no_output_____"
]
],
[
[
"Print the $2^3=8$ state labels (for the 3 qubits Q2,Q3,Q4):",
"_____no_output_____"
]
],
[
[
"state_labels",
"_____no_output_____"
]
],
[
[
"## Computing the Calibration Matrix\n\nIf we do not apply any noise, then the calibration matrix is expected to be the $8 \\times 8$ identity matrix.",
"_____no_output_____"
]
],
[
[
"# Execute the calibration circuits without noise\nbackend = qiskit.Aer.get_backend('qasm_simulator')\njob = qiskit.execute(meas_calibs, backend=backend, shots=1000)\ncal_results = job.result()",
"_____no_output_____"
],
[
"# The calibration matrix without noise is the identity matrix\nmeas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')\nprint(meas_fitter.cal_matrix)",
"[[1. 0. 0. 0. 0. 0. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0. 0.]\n [0. 0. 1. 0. 0. 0. 0. 0.]\n [0. 0. 0. 1. 0. 0. 0. 0.]\n [0. 0. 0. 0. 1. 0. 0. 0.]\n [0. 0. 0. 0. 0. 1. 0. 0.]\n [0. 0. 0. 0. 0. 0. 1. 0.]\n [0. 0. 0. 0. 0. 0. 0. 1.]]\n"
]
],
[
[
"Assume that we apply some noise model from Qiskit Aer to the 5 qubits, \nthen the calibration matrix will have most of its mass on the main diagonal, with some additional 'noise'.\n\nAlternatively, we can execute the calibration circuits using an IBMQ provider.",
"_____no_output_____"
]
],
[
[
"# Generate a noise model for the 5 qubits\nnoise_model = noise.NoiseModel()\nfor qi in range(5):\n read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1],[0.25,0.75]])\n noise_model.add_readout_error(read_err, [qi])",
"_____no_output_____"
],
[
"# Execute the calibration circuits\nbackend = qiskit.Aer.get_backend('qasm_simulator')\njob = qiskit.execute(meas_calibs, backend=backend, shots=1000, noise_model=noise_model)\ncal_results = job.result()",
"_____no_output_____"
],
[
"# Calculate the calibration matrix with the noise model\nmeas_fitter = CompleteMeasFitter(cal_results, state_labels, qubit_list=qubit_list, circlabel='mcal')\nprint(meas_fitter.cal_matrix)",
"[[0.728 0.207 0.214 0.047 0.205 0.052 0.051 0.01 ]\n [0.073 0.592 0.029 0.164 0.018 0.168 0.006 0.048]\n [0.087 0.019 0.613 0.173 0.029 0.005 0.188 0.05 ]\n [0.009 0.076 0.059 0.512 0.002 0.014 0.018 0.126]\n [0.082 0.024 0.024 0.006 0.601 0.159 0.169 0.052]\n [0.011 0.068 0.002 0.015 0.068 0.528 0.023 0.149]\n [0.009 0.004 0.055 0.021 0.067 0.021 0.493 0.137]\n [0.001 0.01 0.004 0.062 0.01 0.053 0.052 0.428]]\n"
],
[
"# Plot the calibration matrix\nmeas_fitter.plot_calibration()",
"_____no_output_____"
]
],
[
[
"## Analyzing the Results\n\nWe would like to compute the total measurement fidelity, and the measurement fidelity for a specific qubit, for example, Q0.\n\nSince the on-diagonal elements of the calibration matrix are the probabilities of measuring state 'x' given preparation of state 'x', \nthen the trace of this matrix is the average assignment fidelity.\n",
"_____no_output_____"
]
],
[
[
"# What is the measurement fidelity?\nprint(\"Average Measurement Fidelity: %f\" % meas_fitter.readout_fidelity())\n\n# What is the measurement fidelity of Q0?\nprint(\"Average Measurement Fidelity of Q0: %f\" % meas_fitter.readout_fidelity(\n label_list = [['000','001','010','011'],['100','101','110','111']]))",
"Average Measurement Fidelity: 0.561875\nAverage Measurement Fidelity of Q0: 0.826500\n"
]
],
[
[
"## Applying the Calibration\n\nWe now perform another experiment and correct the measured results. \n\n## Correct Measurement Noise on a 3Q GHZ State\n\nAs an example, we start with the 3-qubit GHZ state on the qubits Q2,Q3,Q4:\n\n$$ \\mid GHZ \\rangle = \\frac{\\mid{000} \\rangle + \\mid{111} \\rangle}{\\sqrt{2}}$$",
"_____no_output_____"
]
],
[
[
"# Make a 3Q GHZ state\ncr = ClassicalRegister(3)\nghz = QuantumCircuit(qr, cr)\nghz.h(qr[2])\nghz.cx(qr[2], qr[3])\nghz.cx(qr[3], qr[4])\nghz.measure(qr[2],cr[0])\nghz.measure(qr[3],cr[1])\nghz.measure(qr[4],cr[2])",
"_____no_output_____"
]
],
[
[
"We now execute the GHZ circuit (with the noise model above):",
"_____no_output_____"
]
],
[
[
"job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)\nresults = job.result()",
"_____no_output_____"
]
],
[
[
"We now compute the results without any error mitigation and with the mitigation, namely after applying the calibration matrix to the results.\n\nThere are two fitting methods for applying the calibration (if no method is defined, then 'least_squares' is used). \n- **'pseudo_inverse'**, which is a direct inversion of the calibration matrix, \n- **'least_squares'**, which constrains to have physical probabilities.\n\nThe raw data to be corrected can be given in a number of forms:\n\n- Form1: A counts dictionary from results.get_counts,\n- Form2: A list of counts of length=len(state_labels),\n- Form3: A list of counts of length=M*len(state_labels) where M is an integer (e.g. for use with the tomography data),\n- Form4: A qiskit Result (e.g. results as above).",
"_____no_output_____"
]
],
[
[
"# Results without mitigation\nraw_counts = results.get_counts()\n\n# Get the filter object\nmeas_filter = meas_fitter.filter\n\n# Results with mitigation\nmitigated_results = meas_filter.apply(results)\nmitigated_counts = mitigated_results.get_counts(0)",
"_____no_output_____"
]
],
[
[
"We can now plot the results with and without error mitigation:",
"_____no_output_____"
]
],
[
[
"from qiskit.tools.visualization import *\nplot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])",
"_____no_output_____"
]
],
[
[
"### Applying to a reduced subset of qubits",
"_____no_output_____"
],
[
"Consider now that we want to correct a 2Q Bell state, but we have the 3Q calibration matrix. We can reduce the matrix and build a new mitigation object.",
"_____no_output_____"
]
],
[
[
"# Make a 2Q Bell state between Q2 and Q4\ncr = ClassicalRegister(2)\nbell = QuantumCircuit(qr, cr)\nbell.h(qr[2])\nbell.cx(qr[2], qr[4])\nbell.measure(qr[2],cr[0])\nbell.measure(qr[4],cr[1])",
"_____no_output_____"
],
[
"job = qiskit.execute([bell], backend=backend, shots=5000, noise_model=noise_model)\nresults = job.result()",
"_____no_output_____"
],
[
"#build a fitter from the subset\nmeas_fitter_sub = meas_fitter.subset_fitter(qubit_sublist=[2,4])",
"_____no_output_____"
],
[
"#The calibration matrix is now in the space Q2/Q4\nmeas_fitter_sub.cal_matrix",
"_____no_output_____"
],
[
"# Results without mitigation\nraw_counts = results.get_counts()\n\n# Get the filter object\nmeas_filter_sub = meas_fitter_sub.filter\n\n# Results with mitigation\nmitigated_results = meas_filter_sub.apply(results)\nmitigated_counts = mitigated_results.get_counts(0)\nfrom qiskit.tools.visualization import *\nplot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])",
"_____no_output_____"
]
],
[
[
"## Tensored mitigation\n\nThe calibration can be simplified if the error is known to be local. By \"local error\" we mean that the error can be tensored to subsets of qubits. In this case, less than $2^n$ states are needed for the computation of the calibration matrix.\n\nAssume that the error acts locally on qubit 2 and the pair of qubits 3 and 4. Construct the calibration circuits by using the function `tensored_meas_cal`. Unlike before we need to explicitly divide the qubit list up into subset regions.",
"_____no_output_____"
]
],
[
[
"# Generate the calibration circuits\nqr = qiskit.QuantumRegister(5)\nmit_pattern = [[2],[3,4]]\nmeas_calibs, state_labels = tensored_meas_cal(mit_pattern=mit_pattern, qr=qr, circlabel='mcal')",
"_____no_output_____"
]
],
[
[
"We now retrieve the names of the generated circuits. Note that in each label (of length 3), the least significant bit corresponds to qubit 2, the middle bit corresponds to qubit 3, and the most significant bit corresponds to qubit 4.",
"_____no_output_____"
]
],
[
[
"for circ in meas_calibs:\n print(circ.name)",
"mcalcal_000\nmcalcal_010\nmcalcal_101\nmcalcal_111\n"
]
],
[
[
"Let us elaborate on the circuit names. We see that there are only four circuits, instead of eight. The total number of required circuits is $2^m$ where $m$ is the number of qubits in the largest subset (here $m=2$).\n\nEach basis state of qubits 3 and 4 appears exactly once. Only two basis states are required for qubit 2, so these are split equally across the four experiments. For example, state '0' of qubit 2 appears in state labels '000' and '010'.\n\nWe now execute the calibration circuits on an Aer simulator, using the same noise model as before. This noise is in fact local to qubits 3 and 4 separately, but assume that we don't know it, and that we only know that it is local for qubit 2.",
"_____no_output_____"
]
],
[
[
"# Generate a noise model for the 5 qubits\nnoise_model = noise.NoiseModel()\nfor qi in range(5):\n read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1],[0.25,0.75]])\n noise_model.add_readout_error(read_err, [qi])",
"_____no_output_____"
],
[
"# Execute the calibration circuits\nbackend = qiskit.Aer.get_backend('qasm_simulator')\njob = qiskit.execute(meas_calibs, backend=backend, shots=5000, noise_model=noise_model)\ncal_results = job.result()",
"_____no_output_____"
],
[
"meas_fitter = TensoredMeasFitter(cal_results, mit_pattern=mit_pattern)",
"_____no_output_____"
]
],
[
[
"The fitter provides two calibration matrices. One matrix is for qubit 2, and the other matrix is for qubits 3 and 4.",
"_____no_output_____"
]
],
[
[
"print(meas_fitter.cal_matrices)",
"[array([[0.9053, 0.2497],\n [0.0947, 0.7503]]), array([[0.8106, 0.229 , 0.2236, 0.0674],\n [0.0852, 0.6656, 0.026 , 0.1906],\n [0.0946, 0.0262, 0.67 , 0.1714],\n [0.0096, 0.0792, 0.0804, 0.5706]])]\n"
]
],
[
[
"We can look at the readout fidelities of the individual tensored components or qubits within a set:",
"_____no_output_____"
]
],
[
[
"#readout fidelity of Q2\nprint('Readout fidelity of Q2: %f'%meas_fitter.readout_fidelity(0))\n\n#readout fidelity of Q3/Q4\nprint('Readout fidelity of Q3/4 space (e.g. mean assignment '\n '\\nfidelity of 00,10,01 and 11): %f'%meas_fitter.readout_fidelity(1))\n\n#readout fidelity of Q3\nprint('Readout fidelity of Q3: %f'%meas_fitter.readout_fidelity(1,[['00','10'],['01','11']]))",
"Readout fidelity of Q2: 0.827800\nReadout fidelity of Q3/4 space (e.g. mean assignment \nfidelity of 00,10,01 and 11): 0.679200\nReadout fidelity of Q3: 0.826200\n"
]
],
[
[
"Plot the individual calibration matrices:",
"_____no_output_____"
]
],
[
[
"# Plot the calibration matrix\nprint('Q2 Calibration Matrix')\nmeas_fitter.plot_calibration(0)\nprint('Q3/Q4 Calibration Matrix')\nmeas_fitter.plot_calibration(1)",
"Q2 Calibration Matrix\n"
],
[
"# Make a 3Q GHZ state\ncr = ClassicalRegister(3)\nghz = QuantumCircuit(qr, cr)\nghz.h(qr[2])\nghz.cx(qr[2], qr[3])\nghz.cx(qr[3], qr[4])\nghz.measure(qr[2],cr[0])\nghz.measure(qr[3],cr[1])\nghz.measure(qr[4],cr[2])",
"_____no_output_____"
]
],
[
[
"We now execute the GHZ circuit (with the noise model above):",
"_____no_output_____"
]
],
[
[
"job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)\nresults = job.result()",
"_____no_output_____"
],
[
"# Results without mitigation\nraw_counts = results.get_counts()\n\n# Get the filter object\nmeas_filter = meas_fitter.filter\n\n# Results with mitigation\nmitigated_results = meas_filter.apply(results)\nmitigated_counts = mitigated_results.get_counts(0)",
"_____no_output_____"
]
],
[
[
"Plot the raw vs corrected state:",
"_____no_output_____"
]
],
[
[
"meas_filter = meas_fitter.filter\nmitigated_results = meas_filter.apply(results)\nmitigated_counts = mitigated_results.get_counts(0)\nplot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])",
"_____no_output_____"
]
],
[
[
"As a check we should get the same answer if we build the full correction matrix from a tensor product of the subspace calibration matrices:",
"_____no_output_____"
]
],
[
[
"meas_calibs2, state_labels2 = complete_meas_cal([2,3,4])\nmeas_fitter2 = CompleteMeasFitter(None, state_labels2)\nmeas_fitter2.cal_matrix = np.kron(meas_fitter.cal_matrices[1],meas_fitter.cal_matrices[0])\nmeas_filter2 = meas_fitter2.filter\nmitigated_results2 = meas_filter2.apply(results)\nmitigated_counts2 = mitigated_results2.get_counts(0)\nplot_histogram([raw_counts, mitigated_counts2], legend=['raw', 'mitigated'])",
"_____no_output_____"
]
],
[
[
"## Running Aqua Algorithms with Measurement Error Mitigation\nTo use measurement error mitigation when running quantum circuits as part of an Aqua algorithm, we need to include the respective measurement error fitting instance in the QuantumInstance. This object also holds the specifications for the chosen backend.\n\nIn the following, we illustrate measurement error mitigation of Aqua algorithms using the example of finding the ground state of a Hamiltonian with VQE.",
"_____no_output_____"
],
[
"First, we need to import the libraries that provide backends as well as the classes that are needed to run the algorithm.",
"_____no_output_____"
]
],
[
[
"# Import qiskit functions and libraries\nfrom qiskit import Aer, IBMQ\nfrom qiskit.circuit.library import TwoLocal\nfrom qiskit.aqua import QuantumInstance\nfrom qiskit.aqua.algorithms import VQE\nfrom qiskit.aqua.components.optimizers import COBYLA\nfrom qiskit.aqua.operators import X, Y, Z, I, CX, T, H, S, PrimitiveOp\nfrom qiskit.providers.aer import noise\n\n# Import error mitigation functions\nfrom qiskit.ignis.mitigation.measurement import CompleteMeasFitter",
"_____no_output_____"
]
],
[
[
"Then, we initialize the instances that are required to execute the algorithm.",
"_____no_output_____"
]
],
[
[
"# Initialize Hamiltonian\nh_op = (-1.0523732 * I^I) + \\\n (0.39793742 * I^Z) + \\\n (-0.3979374 * Z^I) + \\\n (-0.0112801 * Z^Z) + \\\n (0.18093119 * X^X)\n# Initialize trial state\nvar_form = TwoLocal(h_op.num_qubits, ['ry', 'rz'], 'cz', reps=3, entanglement='full')\n# Initialize optimizer\noptimizer = COBYLA(maxiter=350)\n# Initialize algorithm to find the ground state\nvqe = VQE(h_op, var_form, optimizer)",
"_____no_output_____"
]
],
[
[
"Here, we choose the Aer `qasm_simulator` as backend and also add a custom noise model.\nThe application of an actual quantum backend provided by IBMQ is outlined in the commented code.",
"_____no_output_____"
]
],
[
[
"# Generate a noise model\nnoise_model = noise.NoiseModel()\nfor qi in range(h_op.num_qubits):\n read_err = noise.errors.readout_error.ReadoutError([[0.8, 0.2],[0.1,0.9]])\n noise_model.add_readout_error(read_err, [qi]) \n \n# Initialize the backend configuration using measurement error mitigation with a QuantumInstance \nqi_noise_model_qasm = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=1000,\n measurement_error_mitigation_cls=CompleteMeasFitter,\n measurement_error_mitigation_shots=1000)\n\n# Initialize your TOKEN and provider with \n# provider = IBMQ.get_provider(...)\n# qi_noise_model_ibmq = QuantumInstance(backend=provider.get_backend(backend_name), shots=8000,\n# measurement_error_mitigation_cls=CompleteMeasFitter, measurement_error_mitigation_shots=8000)",
"_____no_output_____"
]
],
[
[
"Finally, we can run the algorithm and check the results.",
"_____no_output_____"
]
],
[
[
"# Run the algorithm\nresult = vqe.run(qi_noise_model_qasm)\nprint(result)",
"/home/computertreker/git/qiskit/qiskit-tutorial/foo/lib/python3.8/site-packages/qiskit/aqua/operators/state_fns/dict_state_fn.py:207: DeprecationWarning: The Python built-in `round` is deprecated for complex scalars, and will raise a `TypeError` in a future release. Use `np.round` or `scalar.round` instead.\n return round(sum([v * front.primitive.get(b, 0) for (b, v) in\n"
],
[
"import qiskit.tools.jupyter\n%qiskit_version_table\n%qiskit_copyright",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5d1cfcd493e79d5a35f6c9593bb7c91dced70a
| 141,458 |
ipynb
|
Jupyter Notebook
|
permutation_notes.ipynb
|
matthew-brett/eef-pilot1
|
3a80a1e795f3e54b668eeb68a486a57ea80673b6
|
[
"CC-BY-4.0"
] | null | null | null |
permutation_notes.ipynb
|
matthew-brett/eef-pilot1
|
3a80a1e795f3e54b668eeb68a486a57ea80673b6
|
[
"CC-BY-4.0"
] | null | null | null |
permutation_notes.ipynb
|
matthew-brett/eef-pilot1
|
3a80a1e795f3e54b668eeb68a486a57ea80673b6
|
[
"CC-BY-4.0"
] | null | null | null | 28.786732 | 5,372 | 0.294271 |
[
[
[
"pwd",
"_____no_output_____"
],
[
"import pandas",
"_____no_output_____"
],
[
"audit_data = pandas.read_table('audit_of_political_engagement_14_2017.tab')",
"_____no_output_____"
],
[
"audit_data",
"_____no_output_____"
],
[
"brexit_age = audit_data[['cut15', 'numage']]\nbrexit_age",
"_____no_output_____"
],
[
"filtered = brexit_age.loc[brexit_age['numage'] != 0]",
"_____no_output_____"
],
[
"filtered",
"_____no_output_____"
],
[
"remainers = filtered.loc[filtered['cut15'] == 1]\nremainers",
"_____no_output_____"
],
[
"leavers = filtered.loc[filtered['cut15'] == 2]\nleavers\n",
"_____no_output_____"
],
[
"import statistics",
"_____no_output_____"
],
[
"statistics.mean(leavers['numage'])",
"_____no_output_____"
],
[
"leave_ages = leavers['numage']",
"_____no_output_____"
],
[
"leave_ages",
"_____no_output_____"
],
[
"sum(leave_ages) / len(leave_ages)",
"_____no_output_____"
],
[
"leave_ages.mean()",
"_____no_output_____"
],
[
"leaver_mean = leavers['numage'].mean()\nleaver_mean",
"_____no_output_____"
],
[
"remain_mean = remainers['numage'].mean()\nremain_mean",
"_____no_output_____"
],
[
"observed_diff = leaver_mean - remain_mean\nobserved_diff",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"plt.hist(remainers['numage']);\nplt.hist(leavers['numage']);",
"_____no_output_____"
],
[
"plt.hist(leavers['numage']);",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"my_list = [5, 1, 2, 5, 6]",
"_____no_output_____"
],
[
"random.shuffle(my_list)\nmy_list",
"_____no_output_____"
],
[
"type(leavers['numage'])",
"_____no_output_____"
],
[
"leave_list = list(leavers['numage'])\nlen(leave_list)",
"_____no_output_____"
],
[
"remain_list = list(remainers['numage'])\nlen(remain_list)",
"_____no_output_____"
],
[
"remain_list",
"_____no_output_____"
],
[
"list1 = [1, 2, 3]\nlist2 = [3, 4, 5]\nlist1.extend(list2)\nlist1",
"_____no_output_____"
],
[
"list1 = [1, 2, 3]\nlist2 = [3, 4, 5, 6]",
"_____no_output_____"
],
[
"[list1, list2]",
"_____no_output_____"
],
[
"[1, 2]",
"_____no_output_____"
],
[
"[list1, list2]",
"_____no_output_____"
],
[
"pooled = remain_list\npooled.extend(leave_list)\nlen(pooled)",
"_____no_output_____"
],
[
"leavers['numage']",
"_____no_output_____"
],
[
"random.shuffle(pooled)",
"_____no_output_____"
],
[
"fake_remain = pooled[:774]\nfake_leave = pooled[774:]",
"_____no_output_____"
],
[
"mean = statistics.mean\ntype(mean)",
"_____no_output_____"
],
[
"fake_difference = mean(fake_remain) - mean(fake_leave)",
"_____no_output_____"
],
[
"fake_difference",
"_____no_output_____"
],
[
"fake_differences = []\nfor i in range(10000):\n    # shuffle the pooled list\n    random.shuffle(pooled)\n    # put the first 774 into fake remain\n    fake_remain = pooled[:774]\n    # put the rest into fake leave\n    fake_leave = pooled[774:]\n    # calculate the mean for fake remain\n    # calculate the mean for fake leave\n    # calculate the difference\n    fake_difference = mean(fake_remain) - mean(fake_leave)\n    # put that into the \"fake_differences\" list\n    fake_differences.append(fake_difference)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5d236b09ea19d11564b1a781e0afe973740dd1
| 216,646 |
ipynb
|
Jupyter Notebook
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
AlphaGit/deep-learning
|
1efa4a57b7c67c40d19ea31304cfeb3afdf0e08e
|
[
"MIT"
] | null | null | null |
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
AlphaGit/deep-learning
|
1efa4a57b7c67c40d19ea31304cfeb3afdf0e08e
|
[
"MIT"
] | null | null | null |
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
AlphaGit/deep-learning
|
1efa4a57b7c67c40d19ea31304cfeb3afdf0e08e
|
[
"MIT"
] | null | null | null | 273.888748 | 91,534 | 0.897833 |
[
[
[
"# Generative Adversarial Network\n\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\n\nGANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\n* [Pix2Pix](https://affinelayer.com/pixsrv/) \n* [CycleGAN](https://github.com/junyanz/CycleGAN)\n* [A whole list](https://github.com/wiseodd/generative-models)\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\n\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\n\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\nExtracting MNIST_data\\train-images-idx3-ubyte.gz\nSuccessfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\nExtracting MNIST_data\\train-labels-idx1-ubyte.gz\nSuccessfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\nExtracting MNIST_data\\t10k-images-idx3-ubyte.gz\nSuccessfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\nExtracting MNIST_data\\t10k-labels-idx1-ubyte.gz\n"
]
],
[
[
"## Model Inputs\n\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.\n\n>**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.",
"_____no_output_____"
]
],
[
[
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"_____no_output_____"
]
],
[
[
"## Generator network\n\n\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\n\n#### Variable Scope\nHere we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.\n\nWe could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\n\nTo use `tf.variable_scope`, you use a `with` statement:\n```python\nwith tf.variable_scope('scope_name', reuse=False):\n    # code here\n```\n\nHere's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.\n\n#### Leaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:\n$$\nf(x) = \\max(\\alpha * x, x)\n$$\n\n#### Tanh Output\nThe generator has been found to perform best with $tanh$ for the generator output. 
This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\n>**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.",
"_____no_output_____"
]
],
[
[
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('generator', reuse=reuse): # finish this\n # Hidden layer\n h1 = tf.layers.dense(z, n_units)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim)\n out = tf.tanh(logits)\n \n return out",
"_____no_output_____"
]
],
[
[
"## Discriminator\n\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\n>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.",
"_____no_output_____"
]
],
[
[
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('discriminator', reuse=reuse): # finish this\n # Hidden layer\n h1 = tf.layers.dense(x, n_units)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1)\n out = tf.sigmoid(logits)\n \n return out, logits",
"_____no_output_____"
]
],
[
[
"## Hyperparameters",
"_____no_output_____"
]
],
[
[
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"_____no_output_____"
]
],
[
[
"## Build network\n\nNow we're building the network from the functions defined above.\n\nFirst is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z.\n\nThen, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.\n\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.\n\n>**Exercise:** Build the network from the functions you defined earlier.",
"_____no_output_____"
]
],
[
[
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Generator network here\ng_model = generator(input_z, input_size, g_hidden_size, False, alpha)\n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)",
"_____no_output_____"
]
],
[
[
"## Discriminator and Generator Losses\n\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like \n\n```python\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n```\n\nFor the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`\n\nThe discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\n\nFinally, the generator losses are using `d_logits_fake`, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.\n\n>**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. 
The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"_____no_output_____"
]
],
[
[
"# Calculate losses\nreal_labels = tf.ones_like(d_logits_real)\nd_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels * (1 - smooth)))\n\nfake_labels = tf.zeros_like(d_logits_fake)\nd_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=fake_labels))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=real_labels))",
"_____no_output_____"
]
],
[
[
"## Optimizers\n\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.\n\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance). \n\nWe can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.\n\nThen, in the optimizer we pass the variable lists to the `var_list` keyword argument of the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.\n\n>**Exercise:** Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using `AdamOptimizer`, create an optimizer for each network that updates the network variables separately.",
"_____no_output_____"
]
],
[
[
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [v for v in t_vars if v.name.startswith('generator')]\nd_vars = [v for v in t_vars if v.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)",
"_____no_output_____"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_real: batch_images, input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Epoch 1/100... Discriminator Loss: 0.3579... Generator Loss: 3.9339\nEpoch 2/100... Discriminator Loss: 0.4760... Generator Loss: 2.1872\nEpoch 3/100... Discriminator Loss: 0.4543... Generator Loss: 3.0853\nEpoch 4/100... Discriminator Loss: 1.1209... Generator Loss: 3.4812\nEpoch 5/100... Discriminator Loss: 1.1886... Generator Loss: 1.8047\nEpoch 6/100... Discriminator Loss: 1.1559... Generator Loss: 4.4393\nEpoch 7/100... Discriminator Loss: 1.3462... Generator Loss: 1.5613\nEpoch 8/100... Discriminator Loss: 1.7626... Generator Loss: 2.0263\nEpoch 9/100... Discriminator Loss: 0.7294... Generator Loss: 2.0563\nEpoch 10/100... Discriminator Loss: 2.0996... Generator Loss: 1.2065\nEpoch 11/100... Discriminator Loss: 1.5716... Generator Loss: 4.8676\nEpoch 12/100... Discriminator Loss: 1.3104... Generator Loss: 3.9929\nEpoch 13/100... Discriminator Loss: 1.8977... Generator Loss: 1.9363\nEpoch 14/100... Discriminator Loss: 1.1104... Generator Loss: 3.2569\nEpoch 15/100... Discriminator Loss: 0.8410... Generator Loss: 2.6334\nEpoch 16/100... Discriminator Loss: 0.7424... Generator Loss: 2.2439\nEpoch 17/100... Discriminator Loss: 2.2107... Generator Loss: 1.0171\nEpoch 18/100... Discriminator Loss: 0.9390... Generator Loss: 1.8153\nEpoch 19/100... Discriminator Loss: 1.2255... Generator Loss: 1.4375\nEpoch 20/100... Discriminator Loss: 1.2254... Generator Loss: 1.6921\nEpoch 21/100... Discriminator Loss: 1.2783... Generator Loss: 1.5455\nEpoch 22/100... Discriminator Loss: 0.9384... Generator Loss: 1.5467\nEpoch 23/100... Discriminator Loss: 1.2565... Generator Loss: 1.3073\nEpoch 24/100... Discriminator Loss: 0.9197... Generator Loss: 2.3008\nEpoch 25/100... Discriminator Loss: 1.0555... Generator Loss: 1.6772\nEpoch 26/100... Discriminator Loss: 1.0434... Generator Loss: 1.6047\nEpoch 27/100... Discriminator Loss: 1.1738... Generator Loss: 2.0763\nEpoch 28/100... Discriminator Loss: 0.9390... Generator Loss: 2.0406\nEpoch 29/100... Discriminator Loss: 1.0946... 
Generator Loss: 1.8302\nEpoch 30/100... Discriminator Loss: 1.0984... Generator Loss: 1.3775\nEpoch 31/100... Discriminator Loss: 0.7956... Generator Loss: 2.7664\nEpoch 32/100... Discriminator Loss: 1.3846... Generator Loss: 1.5518\nEpoch 33/100... Discriminator Loss: 0.8759... Generator Loss: 2.9479\nEpoch 34/100... Discriminator Loss: 1.1668... Generator Loss: 1.6331\nEpoch 35/100... Discriminator Loss: 1.0991... Generator Loss: 1.8359\nEpoch 36/100... Discriminator Loss: 0.8028... Generator Loss: 2.1985\nEpoch 37/100... Discriminator Loss: 1.0506... Generator Loss: 1.9822\nEpoch 38/100... Discriminator Loss: 0.9611... Generator Loss: 2.3757\nEpoch 39/100... Discriminator Loss: 1.0848... Generator Loss: 2.6655\nEpoch 40/100... Discriminator Loss: 0.9209... Generator Loss: 1.5801\nEpoch 41/100... Discriminator Loss: 0.9325... Generator Loss: 1.8702\nEpoch 42/100... Discriminator Loss: 0.9957... Generator Loss: 1.9360\nEpoch 43/100... Discriminator Loss: 0.8190... Generator Loss: 1.8371\nEpoch 44/100... Discriminator Loss: 1.0133... Generator Loss: 1.7178\nEpoch 45/100... Discriminator Loss: 0.9911... Generator Loss: 1.9263\nEpoch 46/100... Discriminator Loss: 1.0236... Generator Loss: 1.6995\nEpoch 47/100... Discriminator Loss: 1.1221... Generator Loss: 2.2287\nEpoch 48/100... Discriminator Loss: 0.9339... Generator Loss: 1.8668\nEpoch 49/100... Discriminator Loss: 0.8754... Generator Loss: 2.2611\nEpoch 50/100... Discriminator Loss: 1.0197... Generator Loss: 1.9309\nEpoch 51/100... Discriminator Loss: 0.9923... Generator Loss: 1.7737\nEpoch 52/100... Discriminator Loss: 0.9332... Generator Loss: 1.6129\nEpoch 53/100... Discriminator Loss: 1.2781... Generator Loss: 1.7842\nEpoch 54/100... Discriminator Loss: 1.0270... Generator Loss: 1.7400\nEpoch 55/100... Discriminator Loss: 1.2452... Generator Loss: 1.2127\nEpoch 56/100... Discriminator Loss: 1.0182... Generator Loss: 2.0256\nEpoch 57/100... Discriminator Loss: 1.0177... Generator Loss: 1.9873\nEpoch 58/100... 
Discriminator Loss: 1.1519... Generator Loss: 1.7250\nEpoch 59/100... Discriminator Loss: 0.9253... Generator Loss: 2.0365\nEpoch 60/100... Discriminator Loss: 1.3119... Generator Loss: 1.5078\nEpoch 61/100... Discriminator Loss: 0.9688... Generator Loss: 1.8154\nEpoch 62/100... Discriminator Loss: 0.8965... Generator Loss: 2.0512\nEpoch 63/100... Discriminator Loss: 0.9452... Generator Loss: 1.8296\nEpoch 64/100... Discriminator Loss: 0.8989... Generator Loss: 1.7254\nEpoch 65/100... Discriminator Loss: 0.8914... Generator Loss: 1.6267\nEpoch 66/100... Discriminator Loss: 0.9379... Generator Loss: 2.0549\nEpoch 67/100... Discriminator Loss: 1.0180... Generator Loss: 1.5954\nEpoch 68/100... Discriminator Loss: 1.0796... Generator Loss: 1.8728\nEpoch 69/100... Discriminator Loss: 0.9985... Generator Loss: 1.9288\nEpoch 70/100... Discriminator Loss: 0.9873... Generator Loss: 1.7292\nEpoch 71/100... Discriminator Loss: 1.3379... Generator Loss: 1.9157\nEpoch 72/100... Discriminator Loss: 1.0667... Generator Loss: 1.5350\nEpoch 73/100... Discriminator Loss: 0.9757... Generator Loss: 1.3968\nEpoch 74/100... Discriminator Loss: 1.0384... Generator Loss: 2.1754\nEpoch 75/100... Discriminator Loss: 0.9226... Generator Loss: 1.7966\nEpoch 76/100... Discriminator Loss: 0.8917... Generator Loss: 2.0054\nEpoch 77/100... Discriminator Loss: 1.1428... Generator Loss: 1.9282\nEpoch 78/100... Discriminator Loss: 0.9392... Generator Loss: 1.8299\nEpoch 79/100... Discriminator Loss: 0.9354... Generator Loss: 2.0687\nEpoch 80/100... Discriminator Loss: 0.8766... Generator Loss: 1.8104\nEpoch 81/100... Discriminator Loss: 0.8996... Generator Loss: 1.6285\nEpoch 82/100... Discriminator Loss: 1.0775... Generator Loss: 1.4005\nEpoch 83/100... Discriminator Loss: 0.9424... Generator Loss: 2.0881\nEpoch 84/100... Discriminator Loss: 0.9071... Generator Loss: 1.7794\nEpoch 85/100... Discriminator Loss: 1.0173... Generator Loss: 1.6312\nEpoch 86/100... Discriminator Loss: 0.9016... 
Generator Loss: 2.0693\nEpoch 87/100... Discriminator Loss: 0.9139... Generator Loss: 2.5368\nEpoch 88/100... Discriminator Loss: 0.9957... Generator Loss: 1.7784\nEpoch 89/100... Discriminator Loss: 0.9890... Generator Loss: 1.8256\nEpoch 90/100... Discriminator Loss: 0.8501... Generator Loss: 2.0553\nEpoch 91/100... Discriminator Loss: 0.9099... Generator Loss: 1.8005\nEpoch 92/100... Discriminator Loss: 0.9750... Generator Loss: 2.0515\nEpoch 93/100... Discriminator Loss: 0.7951... Generator Loss: 2.1047\nEpoch 94/100... Discriminator Loss: 0.9956... Generator Loss: 1.9000\nEpoch 95/100... Discriminator Loss: 0.8064... Generator Loss: 2.5389\nEpoch 96/100... Discriminator Loss: 1.0264... Generator Loss: 1.7216\nEpoch 97/100... Discriminator Loss: 0.8489... Generator Loss: 2.0222\nEpoch 98/100... Discriminator Loss: 0.9147... Generator Loss: 2.1825\nEpoch 99/100... Discriminator Loss: 0.8432... Generator Loss: 2.1713\nEpoch 100/100... Discriminator Loss: 1.2846... Generator Loss: 1.2744\n"
]
],
[
[
"## Training loss\n\nHere we'll check out the training losses for the generator and discriminator.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## Generator samples from training\n\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"_____no_output_____"
]
],
[
[
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes",
"_____no_output_____"
],
[
"# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"_____no_output_____"
]
],
[
[
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_____no_output_____"
]
],
[
[
"_ = view_samples(-1, samples)",
"_____no_output_____"
]
],
[
[
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"_____no_output_____"
]
],
[
[
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"_____no_output_____"
]
],
[
[
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.",
"_____no_output_____"
],
[
"## Sampling from the generator\n\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"_____no_output_____"
]
],
[
[
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a5d3dd07a2f6719b51e75d672790ed44883138f
| 22,951 |
ipynb
|
Jupyter Notebook
|
notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb
|
sjain-stanford/finn
|
bdfbd4b79088accf92e60fc1fe790e697500dfe7
|
[
"BSD-3-Clause"
] | 283 |
2019-09-26T10:09:34.000Z
|
2022-03-09T16:36:23.000Z
|
notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb
|
sjain-stanford/finn
|
bdfbd4b79088accf92e60fc1fe790e697500dfe7
|
[
"BSD-3-Clause"
] | 238 |
2019-10-04T12:20:26.000Z
|
2022-03-31T04:50:53.000Z
|
notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb
|
sjain-stanford/finn
|
bdfbd4b79088accf92e60fc1fe790e697500dfe7
|
[
"BSD-3-Clause"
] | 144 |
2019-09-23T13:46:14.000Z
|
2022-03-18T12:55:07.000Z
| 38.834179 | 752 | 0.638709 |
[
[
[
"# FINN - Functional Verification of End-to-End Flow\n-----------------------------------------------------------------\n\n**Important: This notebook depends on the tfc_end2end_example notebook, because we are using models that are available at intermediate steps in the end-to-end flow. So please make sure the needed .onnx files are generated to run this notebook.**\n\nIn this notebook, we will show how to take the intermediate results of the end-to-end tfc example and verify their functionality with different methods. In the following picture you can see the section in the end-to-end flow about the *Simulation & Emulation Flows*. Besides the methods in this notebook, there is another one that is covered in the Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb): remote execution. The remote execution allows functional verification directly on the PYNQ board, for details please have a look at the mentioned Jupyter notebook.",
"_____no_output_____"
],
[
"<img src=\"verification.png\" alt=\"Drawing\" style=\"width: 500px;\"/>",
"_____no_output_____"
],
[
"We will use the following helper functions, `showSrc` to show source code of FINN library calls and `showInNetron` to show the ONNX model at the current transformation step. The Netron displays are interactive, but they only work when running the notebook actively and not on GitHub (i.e. if you are viewing this on GitHub you'll only see blank squares).",
"_____no_output_____"
]
],
[
[
"from finn.util.basic import make_build_dir\nfrom finn.util.visualization import showSrc, showInNetron\n \nbuild_dir = \"/workspace/finn\"",
"_____no_output_____"
]
],
[
[
"To verify the simulations, a \"golden\" output is calculated as a reference. This is calculated directly from the Brevitas model using PyTorch, by running some example data from the MNIST dataset through the trained model.",
"_____no_output_____"
]
],
[
[
"from pkgutil import get_data\nimport onnx\nimport onnx.numpy_helper as nph\nimport torch\nfrom finn.util.test import get_test_model_trained\n\nfc = get_test_model_trained(\"TFC\", 1, 1)\nraw_i = get_data(\"finn.data\", \"onnx/mnist-conv/test_data_set_0/input_0.pb\")\ninput_tensor = onnx.load_tensor_from_string(raw_i)\ninput_brevitas = torch.from_numpy(nph.to_array(input_tensor)).float()\noutput_golden = fc.forward(input_brevitas).detach().numpy()\noutput_golden",
"_____no_output_____"
]
],
[
[
"## Simulation using Python <a id='simpy'></a>\n\nIf an ONNX model consists of [standard ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md) nodes and/or FINN custom operations that do not belong to the fpgadataflow (backend $\\neq$ \"fpgadataflow\") this model can be checked for functionality using Python.\n\nTo simulate a standard ONNX node [onnxruntime](https://github.com/microsoft/onnxruntime) is used. onnxruntime is an open source tool developed by Microsoft to run standard ONNX nodes. For the FINN custom op nodes execution functions are defined. The following is an example of the execution function of a XNOR popcount node.\n",
"_____no_output_____"
]
],
[
[
"from finn.custom_op.general.xnorpopcount import xnorpopcountmatmul\nshowSrc(xnorpopcountmatmul)",
"def xnorpopcountmatmul(inp0, inp1):\n \"\"\"Simulates XNOR-popcount matrix multiplication as a regular bipolar\n matrix multiplication followed by some post processing.\"\"\"\n # extract the operand shapes\n # (M, K0) = inp0.shape\n # (K1, N) = inp1.shape\n K0 = inp0.shape[-1]\n K1 = inp1.shape[0]\n # make sure shapes are compatible with matmul\n assert K0 == K1, \"Matrix shapes are not compatible with matmul.\"\n K = K0\n # convert binary inputs to bipolar\n inp0_bipolar = 2.0 * inp0 - 1.0\n inp1_bipolar = 2.0 * inp1 - 1.0\n # call regular numpy matrix multiplication\n out = np.matmul(inp0_bipolar, inp1_bipolar)\n # XNOR-popcount does not produce the regular dot product result --\n # it returns the number of +1s after XNOR. let P be the number of +1s\n # and N be the number of -1s. XNOR-popcount returns P, whereas the\n # regular dot product result from numpy is P-N, so we need to apply\n # some correction.\n # out = P-N\n # K = P+N\n # out + K = 2P, so P = (out + K)/2\n return (out + K) * 0.5\n\n"
]
],
[
[
"The function contains a description of the behaviour in Python and can thus calculate the result of the node.\n\nThis execution function and onnxruntime is used when `execute_onnx` from `onnx_exec` is applied to the model. The model is then simulated node by node and the result is stored in a context dictionary, which contains the values of each tensor at the end of the execution. To get the result, only the output tensor has to be extracted.\n\nThe procedure is shown below. We take the model right before the nodes should be converted into HLS layers and generate an input tensor to pass to the execution function. The input tensor is generated from the Brevitas example inputs.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom finn.core.modelwrapper import ModelWrapper\ninput_dict = {\"global_in\": nph.to_array(input_tensor)}\n\nmodel_for_sim = ModelWrapper(build_dir+\"/tfc_w1a1_ready_for_hls_conversion.onnx\")",
"_____no_output_____"
],
[
"import finn.core.onnx_exec as oxe\noutput_dict = oxe.execute_onnx(model_for_sim, input_dict)\noutput_pysim = output_dict[list(output_dict.keys())[0]]\n\nif np.isclose(output_pysim, output_golden, atol=1e-3).all():\n    print(\"Results are the same!\")\nelse:\n    print(\"The results are not the same!\")",
"Results are the same!\n"
]
],
[
[
"The result is compared with the theoretical \"golden\" value for verification.",
"_____no_output_____"
],
[
"## Simulation (cppsim) using C++\n\nWhen dealing with HLS custom op nodes in FINN the simulation using Python is no longer sufficient. After the nodes have been converted to HLS layers, the simulation using C++ can be used. To do this, the input tensor is stored in an .npy file and C++ code is generated that reads the values from the .npy array, streams them to the corresponding finn-hlslib function and writes the result to a new .npy file. This in turn can be read in Python and processed in the FINN flow. For this example the model after setting the folding factors in the HLS layers is used, please be aware that this is not the full model, but the dataflow partition, so before executing at the end of this section we have to integrate the model back into the parent model.",
"_____no_output_____"
]
],
[
[
"model_for_cppsim = ModelWrapper(build_dir+\"/tfc_w1_a1_set_folding_factors.onnx\")",
"_____no_output_____"
]
],
[
[
"To generate the code for this simulation and to generate the executable two transformations are used:\n* `PrepareCppSim` which generates the C++ code for the corresponding hls layer\n* `CompileCppSim` which compiles the C++ code and stores the path to the executable",
"_____no_output_____"
]
],
[
[
"from finn.transformation.fpgadataflow.prepare_cppsim import PrepareCppSim\nfrom finn.transformation.fpgadataflow.compile_cppsim import CompileCppSim\nfrom finn.transformation.general import GiveUniqueNodeNames\n\nmodel_for_cppsim = model_for_cppsim.transform(GiveUniqueNodeNames())\nmodel_for_cppsim = model_for_cppsim.transform(PrepareCppSim())\nmodel_for_cppsim = model_for_cppsim.transform(CompileCppSim())",
"_____no_output_____"
]
],
[
[
"When we take a look at the model using netron, we can see that the transformations introduced new attributes.",
"_____no_output_____"
]
],
[
[
"model_for_cppsim.save(build_dir+\"/tfc_w1_a1_for_cppsim.onnx\")\nshowInNetron(build_dir+\"/tfc_w1_a1_for_cppsim.onnx\")",
"Serving '/workspace/finn/tfc_w1_a1_for_cppsim.onnx' at http://0.0.0.0:8081\n"
]
],
[
[
"The following node attributes have been added:\n* `code_gen_dir_cppsim` indicates the directory where the files for the simulation using C++ are stored\n* `executable_path` specifies the path to the executable\n\nWe take now a closer look into the files that were generated:",
"_____no_output_____"
]
],
[
[
"from finn.custom_op.registry import getCustomOp\n\nfc0 = model_for_cppsim.graph.node[1]\nfc0w = getCustomOp(fc0)\ncode_gen_dir = fc0w.get_nodeattr(\"code_gen_dir_cppsim\")\n!ls {code_gen_dir}",
"compile.sh\t\t\t memblock_0.dat thresh.h\r\nexecute_StreamingFCLayer_Batch.cpp node_model\t weights.npy\r\n"
]
],
[
[
"Besides the .cpp file, the folder contains .h files with the weights and thresholds. The shell script contains the compile command and *node_model* is the executable generated by compilation. Comparing this with the `executable_path` node attribute, it can be seen that it specifies exactly the path to *node_model*.",
"_____no_output_____"
],
[
"To simulate the model the execution mode (exec_mode) must be set to \"cppsim\". This is done using the transformation SetExecMode.",
"_____no_output_____"
]
],
[
[
"from finn.transformation.fpgadataflow.set_exec_mode import SetExecMode\n\nmodel_for_cppsim = model_for_cppsim.transform(SetExecMode(\"cppsim\"))\nmodel_for_cppsim.save(build_dir+\"/tfc_w1_a1_for_cppsim.onnx\")",
"_____no_output_____"
]
],
[
[
"Before the model can be executed using `execute_onnx`, we integrate the child model in the parent model. The function reads then the `exec_mode` and writes the input into the correct directory in a .npy file. To be able to read this in C++, there is an additional .hpp file ([npy2apintstream.hpp](https://github.com/Xilinx/finn/blob/master/src/finn/data/cpp/npy2apintstream.hpp)) in FINN, which uses cnpy to read .npy files and convert them into streams, or to read a stream and write it into an .npy. [cnpy](https://github.com/rogersce/cnpy) is a helper to read and write .npy and .npz formats in C++.\n\nThe result is again compared to the \"golden\" output.",
"_____no_output_____"
]
],
[
[
"parent_model = ModelWrapper(build_dir+\"/tfc_w1_a1_dataflow_parent.onnx\")\nsdp_node = parent_model.graph.node[2]\nchild_model = build_dir + \"/tfc_w1_a1_for_cppsim.onnx\"\ngetCustomOp(sdp_node).set_nodeattr(\"model\", child_model)\noutput_dict = oxe.execute_onnx(parent_model, input_dict)\noutput_cppsim = output_dict[list(output_dict.keys())[0]]\n\nif np.isclose(output_cppsim, output_golden, atol=1e-3).all():\n print(\"Results are the same!\")\nelse:\n print(\"The results are not the same!\")",
"Results are the same!\n"
]
],
[
[
"## Emulation (rtlsim) using PyVerilator\n\nThe emulation using [PyVerilator](https://github.com/maltanar/pyverilator) can be done after IP blocks are generated from the corresponding HLS layers. Pyverilator is a tool which makes it possible to simulate verilog files using verilator via a python interface.\n\nWe have two ways to use rtlsim, one is to run the model node-by-node as with the simulation methods, but if the model is in the form of the dataflow partition, the part of the graph that consist of only HLS nodes could also be executed as whole.",
"_____no_output_____"
],
[
"Because at the point where we want to grab and verify the model, the model is already in split form (parent graph consisting of non-hls layers and child graph consisting only of hls layers) we first have to reference the child graph within the parent graph. This is done using the node attribute `model` for the `StreamingDataflowPartition` node.\n\nFirst the procedure is shown, if the child graph has ip blocks corresponding to the individual layers, then the procedure is shown, if the child graph already has a stitched IP.",
"_____no_output_____"
],
[
"### Emulation of model node-by-node\n\nThe child model is loaded and the `exec_mode` for each node is set. To prepare the node-by-node emulation the transformation `PrepareRTLSim` is applied to the child model. With this transformation the emulation files are created for each node and can be used directly when calling `execute_onnx()`. Each node has a new node attribute \"rtlsim_so\" after transformation, which contains the path to the corresponding emulation files. Then it is saved in a new .onnx file so that the changed model can be referenced in the parent model.",
"_____no_output_____"
]
],
[
[
"from finn.transformation.fpgadataflow.prepare_rtlsim import PrepareRTLSim\nfrom finn.transformation.fpgadataflow.prepare_ip import PrepareIP\nfrom finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP\n\ntest_fpga_part = \"xc7z020clg400-1\"\ntarget_clk_ns = 10\n\nchild_model = ModelWrapper(build_dir + \"/tfc_w1_a1_set_folding_factors.onnx\")\nchild_model = child_model.transform(GiveUniqueNodeNames())\nchild_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))\nchild_model = child_model.transform(HLSSynthIP())\nchild_model = child_model.transform(SetExecMode(\"rtlsim\"))\nchild_model = child_model.transform(PrepareRTLSim())\nchild_model.save(build_dir + \"/tfc_w1_a1_dataflow_child.onnx\")",
"_____no_output_____"
]
],
[
[
"The next step is to load the parent model and set the node attribute `model` in the StreamingDataflowPartition node (`sdp_node`). Afterwards the `exec_mode` is set in the parent model in each node.",
"_____no_output_____"
]
],
[
[
"# parent model\nmodel_for_rtlsim = ModelWrapper(build_dir + \"/tfc_w1_a1_dataflow_parent.onnx\")\n# reference child model\nsdp_node = getCustomOp(model_for_rtlsim.graph.node[2])\nsdp_node.set_nodeattr(\"model\", build_dir + \"/tfc_w1_a1_dataflow_child.onnx\")\n\nmodel_for_rtlsim = model_for_rtlsim.transform(SetExecMode(\"rtlsim\"))",
"_____no_output_____"
]
],
[
[
"Because the necessary files for the emulation are already generated in Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb), in the next step the execution of the model can be done directly.",
"_____no_output_____"
]
],
[
[
"output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)\noutput_rtlsim = output_dict[list(output_dict.keys())[0]]\n\nif np.isclose(output_rtlsim, output_golden, atol=1e-3).all():\n print(\"Results are the same!\")\nelse:\n print(\"The results are not the same!\")",
"Results are the same!\n"
]
],
[
[
"### Emulation of stitched IP\n\nHere we use the same procedure. First the child model is loaded, but in contrast to the layer-by-layer emulation, the metadata property `exec_mode` is set to \"rtlsim\" for the whole child model. When the model is integrated and executed in the last step, the verilog files of the stitched IP of the child model are used.",
"_____no_output_____"
]
],
[
[
"from finn.transformation.fpgadataflow.insert_dwc import InsertDWC\nfrom finn.transformation.fpgadataflow.insert_fifo import InsertFIFO\nfrom finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP\n\nchild_model = ModelWrapper(build_dir + \"/tfc_w1_a1_dataflow_child.onnx\")\nchild_model = child_model.transform(InsertDWC())\nchild_model = child_model.transform(InsertFIFO())\nchild_model = child_model.transform(GiveUniqueNodeNames())\nchild_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))\nchild_model = child_model.transform(HLSSynthIP())\nchild_model = child_model.transform(CreateStitchedIP(test_fpga_part, target_clk_ns))\nchild_model = child_model.transform(PrepareRTLSim())\nchild_model.set_metadata_prop(\"exec_mode\",\"rtlsim\")\nchild_model.save(build_dir + \"/tfc_w1_a1_dataflow_child.onnx\")",
"/workspace/finn/src/finn/transformation/fpgadataflow/hlssynth_ip.py:70: UserWarning: Using pre-existing IP for StreamingFCLayer_Batch_3\n warnings.warn(\"Using pre-existing IP for %s\" % node.name)\n/workspace/finn/src/finn/transformation/fpgadataflow/hlssynth_ip.py:70: UserWarning: Using pre-existing IP for StreamingFCLayer_Batch_1\n warnings.warn(\"Using pre-existing IP for %s\" % node.name)\n/workspace/finn/src/finn/transformation/fpgadataflow/hlssynth_ip.py:70: UserWarning: Using pre-existing IP for StreamingFCLayer_Batch_2\n warnings.warn(\"Using pre-existing IP for %s\" % node.name)\n/workspace/finn/src/finn/transformation/fpgadataflow/hlssynth_ip.py:70: UserWarning: Using pre-existing IP for StreamingFCLayer_Batch_0\n warnings.warn(\"Using pre-existing IP for %s\" % node.name)\n"
],
[
"# parent model\nmodel_for_rtlsim = ModelWrapper(build_dir + \"/tfc_w1_a1_dataflow_parent.onnx\")\n# reference child model\nsdp_node = getCustomOp(model_for_rtlsim.graph.node[2])\nsdp_node.set_nodeattr(\"model\", build_dir + \"/tfc_w1_a1_dataflow_child.onnx\")",
"_____no_output_____"
],
[
"output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)\noutput_rtlsim = output_dict[list(output_dict.keys())[0]]\n\nif np.isclose(output_rtlsim, output_golden, atol=1e-3).all():\n print(\"Results are the same!\")\nelse:\n print(\"The results are not the same!\")",
"Results are the same!\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5d4096771f553bb9bd8f15fdffd292f77dcf79
| 11,742 |
ipynb
|
Jupyter Notebook
|
mission_to_mars-checkpoint.ipynb
|
aindrilachatterjee12345/Mission-to-Mars_module
|
466a2cc3939102c40f54312fc46f5139cc13b104
|
[
"MIT"
] | null | null | null |
mission_to_mars-checkpoint.ipynb
|
aindrilachatterjee12345/Mission-to-Mars_module
|
466a2cc3939102c40f54312fc46f5139cc13b104
|
[
"MIT"
] | null | null | null |
mission_to_mars-checkpoint.ipynb
|
aindrilachatterjee12345/Mission-to-Mars_module
|
466a2cc3939102c40f54312fc46f5139cc13b104
|
[
"MIT"
] | null | null | null | 29.281796 | 208 | 0.518225 |
[
[
[
"# dependencies and setup\nfrom bs4 import BeautifulSoup as bs\nfrom splinter import Browser\nimport time\nimport pandas as pd\n",
"_____no_output_____"
],
[
"# NEED TO CHANGE THE PATH TO MATCH YOUR COMPUTER\n# showing the computer where to find the chromedriver\nexecutable_path = {\"executable_path\": \"/usr/local/bin/chromedriver\"}\nbrowser = Browser(\"chrome\", **executable_path, headless=False)",
"_____no_output_____"
],
[
"# Visit the NASA website to find the top mars news article\nmars_url = \"https://mars.nasa.gov/news/\"\nbrowser.visit(mars_url)\ntime.sleep(1)\nhtml_mars_site = browser.html\ntime.sleep(1)\n# Scrape page into Soup\nsoup = bs(html_mars_site,\"html.parser\")\ntime.sleep(1)",
"_____no_output_____"
],
[
"# Find the the latest news title and headline text in soup\nnews_title = soup.find(\"div\",class_=\"content_title\").text\ntime.sleep(1)\nnews_p = soup.find(\"div\",class_=\"article_teaser_body\").text\ntime.sleep(1)",
"_____no_output_____"
],
[
"# print the variables to check that we're pulling the right things\nprint(f\"The latest title is: {news_title}\")\nprint(f\"With the description: {news_p}\")",
"The latest title is: Mars Now\nWith the description: Sometimes half measures can be a good thing – especially on a journey this long. The agency's latest rover only has about 146 million miles left to reach its destination.\n"
]
],
[
[
"# Image",
"_____no_output_____"
]
],
[
[
"# visit mars website\njpl_image_url = \"https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars\"\njpl_image_url_base = \"https://www.jpl.nasa.gov\"\nbrowser.visit(jpl_image_url)\ntime.sleep(1)\nbrowser.click_link_by_partial_text('FULL IMAGE')\ntime.sleep(1)\nbrowser.click_link_by_partial_text('more info')\ntime.sleep(1)\n# scrape page into Soup\nhtml_image_site = browser.html\nmars_image_soup = bs(html_image_site,\"html.parser\")",
"/opt/anaconda3/lib/python3.7/site-packages/splinter/driver/webdriver/__init__.py:493: FutureWarning: browser.find_link_by_partial_text is deprecated. Use browser.links.find_by_partial_text instead.\n FutureWarning,\n"
],
[
"# find the image in soup\nsearch_image = mars_image_soup.find(class_=\"main_image\")\nfeatured_image_url = jpl_image_url_base + search_image[\"src\"]\nprint(featured_image_url)",
"https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA19673_hires.jpg\n"
]
],
[
[
"# weather",
"_____no_output_____"
]
],
[
[
"# Visit the Twitter website to find the weather information\ntweet_weather_url = \"https://twitter.com/marswxreport?lang=en\"\nbrowser.visit(tweet_weather_url)\ntime.sleep(1)\n# Scrape page into Soup\nhtml_weather_twitter = browser.html\ntime.sleep(1)\nweather_soup = bs(html_weather_twitter,\"html.parser\")\n# Find the text in soup\nmars_weather = weather_soup.find(\"p\",class_=\"TweetTextSize TweetTextSize--normal js-tweet-text tweet-text\")\n",
"_____no_output_____"
]
],
[
[
"# Mars Facts",
"_____no_output_____"
]
],
[
[
"# Visit the Space Facts website to find Mars facts\nmars_facts_url = \"https://space-facts.com/mars/\"\nmars_facts = pd.read_html(mars_facts_url)\nfacts_df = mars_facts[0]\n# Create a dataframe and add columns\nfacts_df.columns = ['Description','Value']\nfacts_df.to_html(header=False, index=False)\nfacts_df",
"_____no_output_____"
]
],
[
[
"# Hemisphere",
"_____no_output_____"
]
],
[
[
"# Establish list of image urls and urls needed for hemispheres\nlist_of_img_urls = []\nmars_hemisphere_url = \"https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars\"\nhemisphere_base_url = \"https://astrogeology.usgs.gov\"\n# Visit the Astrogeology website to find Mars facts\nbrowser.visit(mars_hemisphere_url)\ntime.sleep(1)\n# Scrape page into Soup\nhtml_hemispheres = browser.html\ntime.sleep(1)\nhemisphere_soup = bs(html_hemispheres, 'html.parser')\ntime.sleep(1)\n# Find the text in soup\nitems = hemisphere_soup.find_all('div', class_='item')\ntime.sleep(1)",
"_____no_output_____"
],
[
"# Create a loop to populate list of image urls\nfor x in items:\n title = x.find(\"h3\").text\n time.sleep(1)\n \n image_url_portion = x.find('a', class_='itemLink product-item')[\"href\"]\n time.sleep(1)\n \n browser.visit(hemisphere_base_url + image_url_portion)\n time.sleep(1)\n \n image_url_portion_html = browser.html\n time.sleep(1)\n \n hemisphere_soup = bs(image_url_portion_html,\"html.parser\")\n time.sleep(1)\n \n complete_img_url = hemisphere_base_url + hemisphere_soup.find(\"img\",class_=\"wide-image\")[\"src\"]\n time.sleep(1)\n \n list_of_img_urls.append({\"title\":title,\"img_url\":complete_img_url})\n time.sleep(1)",
"_____no_output_____"
],
[
"list_of_img_urls",
"_____no_output_____"
],
[
"browser.quit()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a5d6d12de793097684f0a72cdc847440072e30e
| 10,866 |
ipynb
|
Jupyter Notebook
|
dev_swift/sox_integration_example.ipynb
|
artste/fastai_docs
|
d2a9a028c61511f06ac07d15ef26e3b9e22c5710
|
[
"Apache-2.0"
] | 1 |
2019-08-03T12:50:50.000Z
|
2019-08-03T12:50:50.000Z
|
dev_swift/sox_integration_example.ipynb
|
artste/fastai_docs
|
d2a9a028c61511f06ac07d15ef26e3b9e22c5710
|
[
"Apache-2.0"
] | null | null | null |
dev_swift/sox_integration_example.ipynb
|
artste/fastai_docs
|
d2a9a028c61511f06ac07d15ef26e3b9e22c5710
|
[
"Apache-2.0"
] | null | null | null | 43.638554 | 202 | 0.64734 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a5d911b1704ca89055569df318fc49cb41758b7
| 107,137 |
ipynb
|
Jupyter Notebook
|
FCMODEL.ipynb
|
Ayshwarya02/-Ayshwarya02-Covid19-Pneumonia-Prediction
|
96ba2a31ebeab7d9a0267bf5158e0aba1b9812cc
|
[
"MIT"
] | 1 |
2022-02-11T19:26:17.000Z
|
2022-02-11T19:26:17.000Z
|
FCMODEL.ipynb
|
Ayshwarya02/-Ayshwarya02-Covid19-Pneumonia-Prediction
|
96ba2a31ebeab7d9a0267bf5158e0aba1b9812cc
|
[
"MIT"
] | null | null | null |
FCMODEL.ipynb
|
Ayshwarya02/-Ayshwarya02-Covid19-Pneumonia-Prediction
|
96ba2a31ebeab7d9a0267bf5158e0aba1b9812cc
|
[
"MIT"
] | 1 |
2022-03-02T16:54:09.000Z
|
2022-03-02T16:54:09.000Z
| 93.24369 | 22,324 | 0.756489 |
[
[
[
"import tensorflow as tf\nfrom tensorflow.compat.v1 import ConfigProto\nfrom tensorflow.compat.v1 import InteractiveSession\n\nconfig = ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.5\nconfig.gpu_options.allow_growth = True\nsession = InteractiveSession(config=config)",
"c:\\users\\sanjana\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\tensorflow\\python\\client\\session.py:1761: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n warnings.warn('An interactive session is already active. This can '\n"
],
[
"# import the libraries as shown below\n\nfrom tensorflow.keras.layers import Input, Lambda, Dense, Flatten\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.applications.resnet50 import ResNet50\n#from keras.applications.vgg16 import VGG16\nfrom tensorflow.keras.applications.resnet50 import preprocess_input\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img\nfrom tensorflow.keras.models import Sequential\nimport numpy as np\nfrom glob import glob\n#import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# re-size all the images to this\nIMAGE_SIZE = [224, 224]\n\ntrain_path = 'Datasets/train'\nvalid_path = 'Datasets/test'",
"_____no_output_____"
],
[
"# Import the Vgg 16 library as shown below and add preprocessing layer to the front of VGG\n# Here we will be using imagenet weights\n\nresnet = ResNet50(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)",
"_____no_output_____"
],
[
"# don't train existing weights\nfor layer in resnet.layers:\n layer.trainable = False",
"_____no_output_____"
],
[
"# useful for getting number of output classes\nfolders = glob('Datasets/train/*')",
"_____no_output_____"
],
[
"# our layers - you can add more if you want\nx = Flatten()(resnet.output)",
"_____no_output_____"
],
[
"prediction = Dense(len(folders), activation='softmax')(x)\n\n# create a model object\nmodel = Model(inputs=resnet.input, outputs=prediction)",
"_____no_output_____"
],
[
"# view the structure of the model\nmodel.summary()",
"Model: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_2[0][0] \n__________________________________________________________________________________________________\nconv1_conv (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nconv1_bn (BatchNormalization) (None, 112, 112, 64) 256 conv1_conv[0][0] \n__________________________________________________________________________________________________\nconv1_relu (Activation) (None, 112, 112, 64) 0 conv1_bn[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 conv1_relu[0][0] \n__________________________________________________________________________________________________\npool1_pool (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, 56, 56, 64) 4160 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, 56, 56, 64) 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, 56, 56, 64) 
36928 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_relu (Activation (None, 56, 56, 64) 0 conv2_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_conv (Conv2D) (None, 56, 56, 256) 16640 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_add (Add) (None, 56, 56, 256) 0 conv2_block1_0_bn[0][0] \n conv2_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_out (Activation) (None, 56, 56, 256) 0 conv2_block1_add[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block1_out[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, 
56, 56, 64) 0 conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_relu (Activation (None, 56, 56, 64) 0 conv2_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_add (Add) (None, 56, 56, 256) 0 conv2_block1_out[0][0] \n conv2_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_out (Activation) (None, 56, 56, 256) 0 conv2_block2_add[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block2_out[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, 56, 56, 64) 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) 
(None, 56, 56, 64) 36928 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_relu (Activation (None, 56, 56, 64) 0 conv2_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_add (Add) (None, 56, 56, 256) 0 conv2_block2_out[0][0] \n conv2_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_out (Activation) (None, 56, 56, 256) 0 conv2_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_conv (Conv2D) (None, 28, 28, 128) 32896 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_relu (Activation (None, 28, 28, 128) 0 conv3_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block1_1_relu[0][0] 
\n__________________________________________________________________________________________________\nconv3_block1_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_relu (Activation (None, 28, 28, 128) 0 conv3_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_conv (Conv2D) (None, 28, 28, 512) 131584 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_add (Add) (None, 28, 28, 512) 0 conv3_block1_0_bn[0][0] \n conv3_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_out (Activation) (None, 28, 28, 512) 0 conv3_block1_add[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block1_out[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_relu (Activation (None, 28, 28, 128) 0 
conv3_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_relu (Activation (None, 28, 28, 128) 0 conv3_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_add (Add) (None, 28, 28, 512) 0 conv3_block1_out[0][0] \n conv3_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_out (Activation) (None, 28, 28, 512) 0 conv3_block2_add[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block2_out[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_relu (Activation (None, 28, 28, 128) 0 conv3_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_conv (Conv2D) (None, 28, 
28, 128) 147584 conv3_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_relu (Activation (None, 28, 28, 128) 0 conv3_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_add (Add) (None, 28, 28, 512) 0 conv3_block2_out[0][0] \n conv3_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_out (Activation) (None, 28, 28, 512) 0 conv3_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_relu (Activation (None, 28, 28, 128) 0 conv3_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_bn 
(BatchNormali (None, 28, 28, 128) 512 conv3_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_relu (Activation (None, 28, 28, 128) 0 conv3_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block4_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_add (Add) (None, 28, 28, 512) 0 conv3_block3_out[0][0] \n conv3_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_out (Activation) (None, 28, 28, 512) 0 conv3_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_conv (Conv2D) (None, 14, 14, 256) 131328 conv3_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_relu (Activation (None, 14, 14, 256) 0 conv4_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block1_2_relu (Activation (None, 14, 14, 256) 0 conv4_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_conv (Conv2D) (None, 14, 14, 1024) 525312 conv3_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_add (Add) (None, 14, 14, 1024) 0 conv4_block1_0_bn[0][0] \n conv4_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_out (Activation) (None, 14, 14, 1024) 0 conv4_block1_add[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block1_out[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_relu (Activation (None, 14, 14, 256) 0 conv4_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_conv (Conv2D) (None, 14, 14, 256) 590080 
conv4_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_relu (Activation (None, 14, 14, 256) 0 conv4_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_add (Add) (None, 14, 14, 1024) 0 conv4_block1_out[0][0] \n conv4_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_out (Activation) (None, 14, 14, 1024) 0 conv4_block2_add[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block2_out[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_relu (Activation (None, 14, 14, 256) 0 conv4_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_bn 
(BatchNormali (None, 14, 14, 256) 1024 conv4_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_relu (Activation (None, 14, 14, 256) 0 conv4_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_add (Add) (None, 14, 14, 1024) 0 conv4_block2_out[0][0] \n conv4_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_out (Activation) (None, 14, 14, 1024) 0 conv4_block3_add[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block3_out[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_relu (Activation (None, 14, 14, 256) 0 conv4_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block4_2_relu (Activation (None, 14, 14, 256) 0 conv4_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block4_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_add (Add) (None, 14, 14, 1024) 0 conv4_block3_out[0][0] \n conv4_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_out (Activation) (None, 14, 14, 1024) 0 conv4_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_relu (Activation (None, 14, 14, 256) 0 conv4_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_relu (Activation (None, 14, 14, 256) 0 
conv4_block5_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block5_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block5_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_add (Add) (None, 14, 14, 1024) 0 conv4_block4_out[0][0] \n conv4_block5_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_out (Activation) (None, 14, 14, 1024) 0 conv4_block5_add[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block5_out[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_relu (Activation (None, 14, 14, 256) 0 conv4_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_relu (Activation (None, 14, 14, 256) 0 conv4_block6_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_3_conv (Conv2D) 
(None, 14, 14, 1024) 263168 conv4_block6_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block6_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_add (Add) (None, 14, 14, 1024) 0 conv4_block5_out[0][0] \n conv4_block6_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_out (Activation) (None, 14, 14, 1024) 0 conv4_block6_add[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_conv (Conv2D) (None, 7, 7, 512) 524800 conv4_block6_out[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_relu (Activation (None, 7, 7, 512) 0 conv5_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_relu (Activation (None, 7, 7, 512) 0 conv5_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_conv (Conv2D) (None, 7, 7, 2048) 2099200 conv4_block6_out[0][0] 
\n__________________________________________________________________________________________________\nconv5_block1_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_add (Add) (None, 7, 7, 2048) 0 conv5_block1_0_bn[0][0] \n conv5_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_out (Activation) (None, 7, 7, 2048) 0 conv5_block1_add[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block1_out[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_relu (Activation (None, 7, 7, 512) 0 conv5_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_relu (Activation (None, 7, 7, 512) 0 
conv5_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_add (Add) (None, 7, 7, 2048) 0 conv5_block1_out[0][0] \n conv5_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_out (Activation) (None, 7, 7, 2048) 0 conv5_block2_add[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block2_out[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_relu (Activation (None, 7, 7, 512) 0 conv5_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_relu (Activation (None, 7, 7, 512) 0 conv5_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 
1050624 conv5_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_add (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0] \n conv5_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_out (Activation) (None, 7, 7, 2048) 0 conv5_block3_add[0][0] \n__________________________________________________________________________________________________\nflatten_1 (Flatten) (None, 100352) 0 conv5_block3_out[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 3) 301059 flatten_1[0][0] \n==================================================================================================\nTotal params: 23,888,771\nTrainable params: 301,059\nNon-trainable params: 23,587,712\n__________________________________________________________________________________________________\n"
],
[
"# tell the model what cost and optimization method to use\nmodel.compile(\n loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy']\n)",
"_____no_output_____"
],
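The `categorical_crossentropy` loss passed to `model.compile` can be sketched in plain NumPy — this is an illustrative re-implementation of the formula, not Keras's internal (which also handles batching and numerical details differently):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # mean over samples of -sum(true_prob * log(predicted_prob))
    eps = 1e-12  # guard against log(0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1)))

# one-hot targets and softmax-style predictions for 2 samples, 3 classes
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
loss = categorical_crossentropy(y_true, y_pred)
```

Because the targets are one-hot, only the log-probability of the true class contributes to each sample's loss.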
[
"# Use the Image Data Generator to import the images from the dataset\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\ntrain_datagen = ImageDataGenerator(rescale = 1./255,\n shear_range = 0.2,\n zoom_range = 0.2,\n horizontal_flip = True)\n\ntest_datagen = ImageDataGenerator(rescale = 1./255)",
"_____no_output_____"
],
[
"# Make sure you provide the same target size as initialied for the image size\ntraining_set = train_datagen.flow_from_directory('Datasets/train',\n target_size = (224, 224),\n batch_size = 32,\n class_mode = 'categorical')",
"Found 3334 images belonging to 3 classes.\n"
],
[
"test_set = test_datagen.flow_from_directory('Datasets/test',\n target_size = (224, 224),\n batch_size = 32,\n class_mode = 'categorical')",
"Found 484 images belonging to 3 classes.\n"
],
[
"# fit the model\n# Run the cell. It will take some time to execute\nr = model.fit_generator(\n training_set,\n validation_data=test_set,\n epochs=10,\n steps_per_epoch=len(training_set),\n validation_steps=len(test_set)\n)",
"c:\\users\\sanjana\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\tensorflow\\python\\keras\\engine\\training.py:1940: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.\n warnings.warn('`Model.fit_generator` is deprecated and '\n"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# plot the loss\nplt.plot(r.history['loss'], label='train loss')\nplt.plot(r.history['val_loss'], label='val loss')\nplt.legend()\nplt.show()\nplt.savefig('LossVal_loss')\n\n# plot the accuracy\nplt.plot(r.history['accuracy'], label='train acc')\nplt.plot(r.history['val_accuracy'], label='val acc')\nplt.legend()\nplt.show()\nplt.savefig('AccVal_acc')\n",
"_____no_output_____"
],
[
"from tensorflow.keras.models import load_model\n\nmodel.save('model_resnet50.h5')",
"c:\\users\\sanjana\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\tensorflow\\python\\keras\\utils\\generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.\n warnings.warn('Custom mask layers require a config and must override '\n"
],
[
"y_pred = model.predict(test_set)",
"_____no_output_____"
],
[
"y_pred",
"_____no_output_____"
],
[
"import numpy as np\ny_pred = np.argmax(y_pred, axis=1)",
"_____no_output_____"
],
[
"y_pred",
"_____no_output_____"
],
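The `np.argmax(..., axis=1)` call above turns each row of class probabilities into a single predicted class index — the position of the largest value per row:

```python
import numpy as np

# each row is a softmax-style probability vector over 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.2, 0.5, 0.3]])
labels = np.argmax(probs, axis=1)  # index of the highest probability in each row
```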
[
"from tensorflow.keras.models import load_model\nfrom tensorflow.keras.preprocessing import image",
"_____no_output_____"
],
[
"model=load_model('model_inception.h5')",
"_____no_output_____"
],
[
"from PIL import Image\nimg_data = np.random.random(size=(100, 100, 3))\nimg = tf.keras.preprocessing.image.array_to_img(img_data)\narray = tf.keras.preprocessing.image.img_to_array(img)",
"_____no_output_____"
],
[
"img_data",
"_____no_output_____"
],
[
"img=image.load_img('Datasets/test/Covid/1-s2.0-S0929664620300449-gr2_lrg-a.jpg',target_size=(224,224))",
"_____no_output_____"
],
[
"x=image.img_to_array(img)\nx",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"x=x/255",
"_____no_output_____"
],
[
"import numpy as np\nx=np.expand_dims(x,axis=0)\nimg_data=preprocess_input(x)\nimg_data.shape",
"_____no_output_____"
],
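The preprocessing steps above — scaling pixel values and adding a batch dimension — can be demonstrated with plain NumPy, using random data in place of a loaded image:

```python
import numpy as np

# stand-in for a loaded 224x224 RGB image with pixel values in [0, 255]
img = np.random.randint(0, 256, size=(224, 224, 3)).astype('float32')
img = img / 255.0                    # rescale pixel values to [0, 1]
batch = np.expand_dims(img, axis=0)  # add batch dimension: (1, 224, 224, 3)
```

Keras models expect a leading batch axis even for a single image, which is why `expand_dims` is needed before `model.predict`.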
[
"model.predict(img_data)",
"_____no_output_____"
],
[
"a=np.argmax(model.predict(img_data), axis=1)",
"_____no_output_____"
],
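To report a human-readable result, the predicted index can be mapped back to a class name. The class names below are hypothetical stand-ins (only `Covid` is visible in this notebook's paths); `flow_from_directory` assigns indices to subfolders in alphabetical order and exposes the real mapping as `training_set.class_indices`:

```python
# hypothetical class names; in practice read training_set.class_indices
class_indices = {'Covid': 0, 'Normal': 1, 'Viral Pneumonia': 2}
index_to_label = {v: k for k, v in class_indices.items()}

predicted_index = 0  # e.g. the result of np.argmax(model.predict(img_data), axis=1)
predicted_label = index_to_label[predicted_index]
```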
[
"a==0",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5d926ff7bc1ce1cf25f7eba68c6c7fb45ba3ad
| 43,348 |
ipynb
|
Jupyter Notebook
|
python_tutorial.ipynb
|
DataBloodhound/python_tutorial_first_part
|
0a3aeb91b14449b659e7ae21d93737102e994f6b
|
[
"MIT"
] | null | null | null |
python_tutorial.ipynb
|
DataBloodhound/python_tutorial_first_part
|
0a3aeb91b14449b659e7ae21d93737102e994f6b
|
[
"MIT"
] | null | null | null |
python_tutorial.ipynb
|
DataBloodhound/python_tutorial_first_part
|
0a3aeb91b14449b659e7ae21d93737102e994f6b
|
[
"MIT"
] | null | null | null | 19.847985 | 817 | 0.459352 |
[
[
[
"### First steps\nThe easiest way to run Python in your computer, is to install Anaconda:\nhttps://www.anaconda.com/download for your OS (Windows, macOS, Linux).",
"_____no_output_____"
],
[
"Then from Anaconda's launcher you can run Jupyter notebook. This tutorial written in Jupyter notebooks.",
"_____no_output_____"
],
[
"### Magic hapenning here\nIn Jupyter notebook, you can run bash commands with '%' or '%%'. The difference in second case bash command will run for entire cell, and in first case bash command will run for line.",
"_____no_output_____"
]
],
[
[
"# run user defined finction from *.py file\n%run eratosthenes_sieve.py\neratosthenes_sieve(20)",
"_____no_output_____"
],
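The contents of `eratosthenes_sieve.py` are not shown here, but the function it defines could plausibly look like this classic sieve — a sketch, not the actual file:

```python
def eratosthenes_sieve(n):
    """Return all primes up to and including n."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    # cross out multiples of each prime, starting at its square
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]
```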
[
"# %timeit magic command shows execution time for line of code\n%timeit lst = [i**2 for i in range(10000)] # list comprehension",
"2.78 ms ± 82.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
],
[
"# there's list of all available magic commands\n%lsmagic",
"_____no_output_____"
]
],
[
[
"### Shell commands\nThey should start with '!' sign.",
"_____no_output_____"
]
],
[
[
"!echo \"Hello World!\" # it's like Python's print command",
"Hello World!\r\n"
],
[
"!pwd # path to current folder",
"/home/aliba/Documents/python/python_tutorial\r\n"
],
[
"!ls # list of contents",
"eratosthenes_sieve.py python_tutorial.ipynb\r\n"
]
],
[
[
"We can pass shell results into Python.",
"_____no_output_____"
]
],
[
[
"lst = !ls\nprint(lst)",
"['eratosthenes_sieve.py', 'python_tutorial.ipynb']\n"
]
],
[
[
"But they have different format rather than python lists. ",
"_____no_output_____"
]
],
[
[
"type(lst)",
"_____no_output_____"
]
],
[
[
"This type has additional grep and fields methods",
"_____no_output_____"
],
[
"### Python essentials\nPython is object oriented programming language. Everything (variables, lists, functions) in Python is object. Traditionally, very first program should be \"Hello World!\".",
"_____no_output_____"
]
],
[
[
"print(\"Hello World!\") # prints out given string inside print",
"Hello World!\n"
]
],
[
[
"Python can be used as calculator.",
"_____no_output_____"
]
],
[
[
"88 + 12",
"_____no_output_____"
],
[
"12 * 6",
"_____no_output_____"
],
[
"54 / 6 # return floating-point number",
"_____no_output_____"
],
[
"2**5",
"_____no_output_____"
]
],
[
[
"### Exercise\nCalculate number of seconds in one year (365 days).",
"_____no_output_____"
]
],
[
[
"365 * 24 * 60 * 60",
"_____no_output_____"
]
],
[
[
"### Variables\nAny values can be assigned to variable. Variable name can be any length. It shouldn't start with number, and Python's keywords are restricted to use as variable names.",
"_____no_output_____"
]
],
[
[
"my_name = 'Alibek'",
"_____no_output_____"
],
[
"my_name",
"_____no_output_____"
],
[
"my_weight = 83",
"_____no_output_____"
]
],
[
[
"Variables stored in memory, so we can use them for further calculations.",
"_____no_output_____"
]
],
[
[
"my_weight + 10",
"_____no_output_____"
]
],
[
[
"Order of mathematical operations follows *PEMDAS* convention.\n- P - parentheses\n- E - exponentiation\n- M&D - multiplication & division\n- A&S - addition & subtraction",
"_____no_output_____"
],
[
"\"+\" and \"*\" operators can be used to string variables as well. \"+\" for concatenation of strings, \"*\" for copying number of times.",
"_____no_output_____"
]
],
[
[
"str1 = 'Marko '\nstr2 = 'Polo'\nstr1 + str2",
"_____no_output_____"
],
[
"str1 * 3",
"_____no_output_____"
]
],
[
[
"Comments are very useful for programmer.\n- \\# - one line comment\n- \"\"\" - multi-line comment",
"_____no_output_____"
],
[
"### Functions\nPython has a lot of built-in functions. You can change type of variables, use mathematical operations, and many many more.",
"_____no_output_____"
]
],
[
[
"int(3.9)",
"_____no_output_____"
],
[
"str(3.9)",
"_____no_output_____"
],
[
"import math # import math operator for mathematical operations\nmath.sin(3)",
"_____no_output_____"
],
[
"math.sqrt(100)",
"_____no_output_____"
]
],
[
[
"You can nest functions inside other functions.",
"_____no_output_____"
]
],
[
[
"math.log(math.cos(25))",
"_____no_output_____"
]
],
[
[
"It's possible to define your own functions. The classic example is converting celcius to fahrenheits.",
"_____no_output_____"
]
],
[
[
"def cel_to_fa(celcius = 0):\n \"\"\"Converts celcius into fahrenheits\"\"\"\n return celcius * 1.8 + 32",
"_____no_output_____"
],
[
"cel_to_fa(36.6)",
"_____no_output_____"
],
[
"help(cel_to_fa) # docstrings available on help function",
"Help on function cel_to_fa in module __main__:\n\ncel_to_fa(celcius=0)\n Converts celcius into fahrenheits\n\n"
]
],
[
[
"### Exercise\nCalculate volume of sphere with formula $$v = \\frac{4}{3} \\pi r^3$$",
"_____no_output_____"
]
],
[
[
"def sphere_volume(r = 1):\n return math.pi * r**3 * 4 / 3",
"_____no_output_____"
],
[
"sphere_volume(r = 5)",
"_____no_output_____"
]
],
[
[
"### Conditionals and recursion\nBoolean expressions are either true or false. They can be produced with '==' operator.",
"_____no_output_____"
]
],
[
[
"5 == 6",
"_____no_output_____"
],
[
"3 == 3",
"_____no_output_____"
]
],
[
[
"There's also other relational operators:\n- != - not equal\n- \\> - greater\n- \\>= - greater or equal\n- < - less\n- <= - less or equal",
"_____no_output_____"
],
[
"There're three logical operators: and, or, not. The meaning of these operators are same as in english language semantics.",
"_____no_output_____"
],
[
"Conditional execution obtained with 'if' statement.",
"_____no_output_____"
]
],
[
[
"x = 5\nif x > 0:\n print('x is positive')",
"x is positive\n"
],
[
"if x % 2 == 0:\n print('x is even')\nelse:\n print('x is odd')",
"x is odd\n"
]
],
[
[
"Nested and chained conditionals sometimes necessary.",
"_____no_output_____"
]
],
[
[
"x = int(x / 2)\nif x % 2 == 0:\n if x == 2:\n print('x is even and equal to 2')\nelif x % 2 == 0:\n print('x is even')\nelse:\n print('x is odd')",
"x is even and equal to 2\n"
]
],
[
[
"Sometimes, functions can call themselves. It's called recursion.",
"_____no_output_____"
]
],
[
[
"def start_time(n = 5):\n if n == 0:\n print('start')\n else:\n print('{} seconds left'.format(n))\n start_time(n - 1)",
"_____no_output_____"
],
[
"start_time()",
"5 seconds left\n4 seconds left\n3 seconds left\n2 seconds left\n1 seconds left\nstart\n"
]
],
[
[
"### Exercise\nDefine factorial finction using recursion",
"_____no_output_____"
]
],
[
[
"def factorial(n = 5):\n if n == 0:\n return 1\n else:\n return n * factorial(n - 1)",
"_____no_output_____"
],
[
"factorial()",
"_____no_output_____"
],
[
"1*2*3*4*5",
"_____no_output_____"
]
],
[
[
"### Iteration\nThere's main 2 statements for iterations: while and for loops.",
"_____no_output_____"
]
],
[
[
"n = 5\nwhile n > 0:\n print(n)\n n = n - 1",
"5\n4\n3\n2\n1\n"
],
[
"n = 5\nfor i in range(n, 0, -1):\n print(i)",
"5\n4\n3\n2\n1\n"
]
],
[
[
"The 'break' statement can stop iteration at any given step",
"_____no_output_____"
]
],
[
[
"n = 5\nwhile n > 0:\n print(n)\n n -= 1\n if n == 2:\n break",
"5\n4\n3\n"
]
],
[
[
"### Exercise\nCalculate square root of number with Newton algorithm",
"_____no_output_____"
]
],
[
[
"def square_root_newton(x = 100, x0 = 1, eps = 1e-9):\n while True:\n ans = x0 - (x0**2 - x) / (2 * x0)\n if abs(ans - x0) < eps:\n break\n x0 = ans\n return x0",
"_____no_output_____"
],
[
"square_root_newton(9995)",
"_____no_output_____"
]
],
[
[
"### Strings\nStrings are sequence of characters. We can get any character from string.",
"_____no_output_____"
]
],
[
[
"my_name[2]",
"_____no_output_____"
],
[
"len(my_name) # number of characters (including spaces and other special symbols)",
"_____no_output_____"
],
[
"# we can loop over characters\nfor letter in my_name:\n print(letter)",
"A\nl\ni\nb\ne\nk\n"
],
[
"# string slices\nmy_name[0:3]",
"_____no_output_____"
],
[
"# strings are immutable\nmy_name[0] = 'M'",
"_____no_output_____"
],
[
"# strings have some built-in methods\nmy_name.upper()",
"_____no_output_____"
],
[
"my_name.find('b')",
"_____no_output_____"
],
[
"# 'in' operator checks if character in string sequence\n'e' in my_name",
"_____no_output_____"
],
[
"# you can check if strings are same\n'Alibyk' == my_name",
"_____no_output_____"
]
],
[
[
"### Exercise\nWrite function that reverses order of letters",
"_____no_output_____"
]
],
[
[
"def reverse_string(string):\n rev_str = ''\n for i in range(1, len(string) + 1):\n rev_str += string[-i]\n return rev_str",
"_____no_output_____"
],
[
"reverse_string('koka-kola')",
"_____no_output_____"
]
],
[
[
"### Lists\nThe most useful built-in type in Python. List is sequence of values, and values can be any type.",
"_____no_output_____"
]
],
[
[
"[1, 2, 3, 4, 5]",
"_____no_output_____"
],
[
"['ss', 2, True]",
"_____no_output_____"
]
],
[
[
"Lists are mutable",
"_____no_output_____"
]
],
[
[
"lst = [1, 2, 3, 4, 5]\nlst[2] = 'foo'\nlst",
"_____no_output_____"
],
[
"# list concatenation\nlst1 = [1, 2, 3]\nlst2 = [9, 8, 7]\nlst3 = lst1 + lst2\nlst3",
"_____no_output_____"
],
[
"# 'append' can add element to list\nlst3.append(10)\nlst3",
"_____no_output_____"
],
[
"# 'extend' add list elements to list\nlst3.extend(lst1)\nlst3",
"_____no_output_____"
],
[
"# in contrary append add list as element\nlst2.append(lst1)\nlst2",
"_____no_output_____"
],
[
"# you can sort elements of list\nlst3.sort()\nlst3",
"_____no_output_____"
],
[
"# sum of all list elements\nsum(lst3)",
"_____no_output_____"
],
[
"# 'pop' method return and delete last element of list\nlst3.pop()",
"_____no_output_____"
],
[
"lst3",
"_____no_output_____"
],
[
"# you can delete given element of list (note that you provide index number)\ndel lst3[3]\nlst3",
"_____no_output_____"
],
[
"# you can remove first appearance of element in list\ntxt = ['a', 'b', 'c', 'b']\ntxt.remove('b')\ntxt",
"_____no_output_____"
],
[
"# you can get list from string with 'list' operator\ntxt = 'parrot peter picked a peck of pickled peppers'\nlst = list(txt)\nlst",
"_____no_output_____"
],
[
"# 'split' method can break text into list elements by given split character\nlst = txt.split(' ')\nlst",
"_____no_output_____"
],
[
"# join will do the reverse\ntxt = ' '.join(lst)\ntxt",
"_____no_output_____"
]
],
[
[
"### Exercise\nFind number of prime numbers before 1000",
"_____no_output_____"
]
],
[
[
"def primes(n):\n primes = [False, False] + [True] * (n - 2)\n i = 2\n while i < n:\n if not primes[i]:\n i += 1\n continue\n else:\n k = i * i\n while k < n:\n primes[k] = False\n k += i\n i += 1\n return [i for i in range(n) if primes[i]]",
"_____no_output_____"
],
[
"len(primes(1000))",
"_____no_output_____"
]
],
[
[
"### Dictionaries\nDictionaries are like lists, but indexes can be any type. Collection of indexes called keys. Elements called values. The items of dictionaries called key-value pairs.",
"_____no_output_____"
]
],
[
[
"d = dict()\nd['one'] = 1\nd",
"_____no_output_____"
],
[
"d['two'] = 2\nd['three'] = 3\nd",
"_____no_output_____"
],
[
"d.values() # values of dictionary can be obtained with .values()",
"_____no_output_____"
],
[
"d.keys() # keys can be obtained with .keys()",
"_____no_output_____"
]
],
[
[
"### Tuples\nTuples are sequence of values. Values can be any type, and indexed just like lists. The maun difference that tuples are immutable.",
"_____no_output_____"
]
],
[
[
"t = 1, 2, 3, 4, 5",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
]
],
[
[
"Tuples have same methods as list, except few differences.",
"_____no_output_____"
],
[
"### Function arguments\nUser defined functions can take any number of arguments. Particularly \\*args gives you opportunity to include as many variables as you want.",
"_____no_output_____"
]
],
[
[
"def printall(*args):\n print(args)",
"_____no_output_____"
],
[
"printall(1, 2, 3, 's')",
"(1, 2, 3, 's')\n"
]
],
[
[
"### Question\nName built-in functions with any number of arguments. (sum, max, etc.)",
"_____no_output_____"
],
[
"### Zip function\nzip function takes two or more sequence of elements, and return list of tuples.",
"_____no_output_____"
]
],
[
[
"t1 = 1,2,3\nt2 = 'one', 'two', 'three'\nt = zip(t1, t2)",
"_____no_output_____"
],
[
"t # it's zip object, and we can iterate over the elements",
"_____no_output_____"
],
[
"for elems in t:\n print(elems)",
"(1, 'one')\n(2, 'two')\n(3, 'three')\n"
]
],
[
[
"Zip object is iterator, that can loop through all elements, but can not get any value at given index.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a5da05462f4ea106644e752e7c7cbdfb5c154a0
| 16,570 |
ipynb
|
Jupyter Notebook
|
02jb_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_log.ipynb
|
rhine3/birdcall
|
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
|
[
"Apache-2.0"
] | 50 |
2020-06-19T18:37:49.000Z
|
2020-09-18T15:47:27.000Z
|
02jb_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_log.ipynb
|
licaYu/birdcall
|
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
|
[
"Apache-2.0"
] | 2 |
2020-08-24T11:48:13.000Z
|
2020-08-24T11:55:06.000Z
|
02jb_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_log.ipynb
|
licaYu/birdcall
|
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
|
[
"Apache-2.0"
] | 9 |
2020-06-20T17:11:46.000Z
|
2020-08-27T21:51:11.000Z
| 52.603175 | 1,983 | 0.572299 |
[
[
[
"from birdcall.data import *\nfrom birdcall.metrics import *\nfrom birdcall.ops import *\n\nimport torch\nimport torchvision\nfrom torch import nn\nimport numpy as np\nimport pandas as pd\nfrom pathlib import Path\nimport soundfile as sf",
"_____no_output_____"
],
[
"BS = 16\nMAX_LR = 1e-3",
"_____no_output_____"
],
[
"classes = pd.read_pickle('data/classes.pkl')",
"_____no_output_____"
],
[
"splits = pd.read_pickle('data/all_splits.pkl')\nall_train_items = pd.read_pickle('data/all_train_items.pkl')\n\ntrain_items = np.array(all_train_items)[splits[0][0]].tolist()\nval_items = np.array(all_train_items)[splits[0][1]].tolist()",
"_____no_output_____"
],
[
"from collections import defaultdict\n\nclass2train_items = defaultdict(list)\n\nfor cls_name, path, duration in train_items:\n class2train_items[cls_name].append((path, duration))",
"_____no_output_____"
],
[
"train_ds = MelspecPoolDataset(class2train_items, classes, len_mult=50, normalize=False)\ntrain_dl = torch.utils.data.DataLoader(train_ds, batch_size=BS, num_workers=NUM_WORKERS, pin_memory=True, shuffle=True)",
"_____no_output_____"
],
[
"val_items = [(classes.index(item[0]), item[1], item[2]) for item in val_items]\nval_items_binned = bin_items_negative_class(val_items)",
"_____no_output_____"
],
[
"class Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.cnn = nn.Sequential(*list(torchvision.models.resnet34(True).children())[:-2])\n self.classifier = nn.Sequential(*[\n nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),\n nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),\n nn.Linear(512, len(classes))\n ])\n \n def forward(self, x):\n x = torch.log10(1 + x)\n max_per_example = x.view(x.shape[0], -1).max(1)[0] # scaling to between 0 and 1\n x /= max_per_example[:, None, None, None, None] # per example!\n bs, im_num = x.shape[:2]\n x = x.view(-1, x.shape[2], x.shape[3], x.shape[4])\n x = self.cnn(x)\n x = x.mean((2,3))\n x = self.classifier(x)\n x = x.view(bs, im_num, -1)\n x = lme_pool(x)\n return x",
"_____no_output_____"
],
[
"model = Model().cuda()",
"_____no_output_____"
],
[
"import torch.optim as optim\nfrom sklearn.metrics import accuracy_score, f1_score\nimport time",
"_____no_output_____"
],
[
"criterion = nn.BCEWithLogitsLoss()\noptimizer = optim.Adam(model.parameters(), 1e-3)\nscheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 5)",
"_____no_output_____"
],
[
"sc_ds = SoundscapeMelspecPoolDataset(pd.read_pickle('data/soundscape_items.pkl'), classes)\nsc_dl = torch.utils.data.DataLoader(sc_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)",
"_____no_output_____"
],
[
"t0 = time.time()\nfor epoch in range(260):\n running_loss = 0.0\n for i, data in enumerate(train_dl, 0):\n model.train()\n inputs, labels = data[0].cuda(), data[1].cuda()\n optimizer.zero_grad()\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n if np.isnan(loss.item()): \n raise Exception(f'!!! nan encountered in loss !!! epoch: {epoch}\\n')\n loss.backward()\n optimizer.step()\n scheduler.step()\n\n running_loss += loss.item()\n\n\n if epoch % 5 == 4:\n model.eval();\n preds = []\n targs = []\n\n for num_specs in val_items_binned.keys():\n valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)\n valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)\n\n with torch.no_grad():\n for data in valid_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n preds.append(outputs.cpu().detach())\n targs.append(labels.cpu().detach())\n\n preds = torch.cat(preds)\n targs = torch.cat(targs)\n\n f1s = []\n ts = []\n for t in np.linspace(0.4, 1, 61):\n f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))\n ts.append(t)\n \n sc_preds = []\n sc_targs = []\n with torch.no_grad():\n for data in sc_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n sc_preds.append(outputs.cpu().detach())\n sc_targs.append(labels.cpu().detach())\n\n sc_preds = torch.cat(sc_preds)\n sc_targs = torch.cat(sc_targs)\n sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')\n \n sc_f1s = []\n sc_ts = []\n for t in np.linspace(0.4, 1, 61):\n sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))\n sc_ts.append(t)\n \n f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')\n print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')\n running_loss = 0.0\n\n torch.save(model.state_dict(), 
f'models/{epoch+1}_lmepool_simple_minmax_log_{round(f1, 2)}.pth')",
"[5, 19.5] loss: 0.023, f1: 0.000, sc_f1: 0.000\n[10, 38.3] loss: 0.019, f1: 0.018, sc_f1: 0.000\n[15, 56.7] loss: 0.016, f1: 0.115, sc_f1: 0.000\n[20, 74.9] loss: 0.013, f1: 0.342, sc_f1: 0.012\n[25, 94.1] loss: 0.011, f1: 0.472, sc_f1: 0.000\n[30, 113.0] loss: 0.009, f1: 0.562, sc_f1: 0.011\n[35, 131.8] loss: 0.008, f1: 0.616, sc_f1: 0.024\n[40, 150.2] loss: 0.007, f1: 0.637, sc_f1: 0.023\n[45, 168.6] loss: 0.006, f1: 0.658, sc_f1: 0.022\n[50, 187.9] loss: 0.005, f1: 0.668, sc_f1: 0.010\n[55, 206.6] loss: 0.005, f1: 0.682, sc_f1: 0.032\n[60, 224.8] loss: 0.004, f1: 0.690, sc_f1: 0.012\n[65, 243.1] loss: 0.004, f1: 0.700, sc_f1: 0.011\n[70, 261.5] loss: 0.003, f1: 0.704, sc_f1: 0.012\n[75, 280.2] loss: 0.003, f1: 0.706, sc_f1: 0.000\n[80, 299.0] loss: 0.003, f1: 0.707, sc_f1: 0.000\n[85, 317.9] loss: 0.003, f1: 0.708, sc_f1: 0.010\n[90, 336.7] loss: 0.002, f1: 0.708, sc_f1: 0.021\n[95, 355.8] loss: 0.003, f1: 0.708, sc_f1: 0.012\n[100, 374.6] loss: 0.002, f1: 0.702, sc_f1: 0.011\n[105, 392.9] loss: 0.002, f1: 0.700, sc_f1: 0.011\n[110, 411.6] loss: 0.002, f1: 0.698, sc_f1: 0.000\n[115, 430.7] loss: 0.002, f1: 0.706, sc_f1: 0.011\n[120, 449.5] loss: 0.002, f1: 0.712, sc_f1: 0.012\n[125, 468.1] loss: 0.002, f1: 0.710, sc_f1: 0.011\n[130, 487.2] loss: 0.002, f1: 0.713, sc_f1: 0.011\n[135, 506.0] loss: 0.002, f1: 0.713, sc_f1: 0.011\n[140, 524.7] loss: 0.001, f1: 0.720, sc_f1: 0.000\n[145, 543.3] loss: 0.001, f1: 0.710, sc_f1: 0.000\n[150, 562.6] loss: 0.001, f1: 0.718, sc_f1: 0.000\n[155, 581.6] loss: 0.001, f1: 0.724, sc_f1: 0.000\n[160, 600.4] loss: 0.001, f1: 0.712, sc_f1: 0.011\n[165, 619.0] loss: 0.001, f1: 0.723, sc_f1: 0.022\n[170, 638.0] loss: 0.001, f1: 0.715, sc_f1: 0.011\n[175, 657.1] loss: 0.001, f1: 0.734, sc_f1: 0.012\n[180, 675.4] loss: 0.001, f1: 0.724, sc_f1: 0.000\n[185, 694.5] loss: 0.001, f1: 0.726, sc_f1: 0.000\n[190, 713.6] loss: 0.001, f1: 0.725, sc_f1: 0.012\n[195, 732.3] loss: 0.001, f1: 0.724, sc_f1: 0.000\n[200, 750.7] loss: 0.001, f1: 
0.725, sc_f1: 0.000\n[205, 769.4] loss: 0.001, f1: 0.725, sc_f1: 0.000\n[210, 787.6] loss: 0.001, f1: 0.715, sc_f1: 0.000\n[215, 806.6] loss: 0.001, f1: 0.727, sc_f1: 0.000\n[220, 825.4] loss: 0.001, f1: 0.732, sc_f1: 0.000\n[225, 844.6] loss: 0.001, f1: 0.723, sc_f1: 0.000\n[230, 863.2] loss: 0.001, f1: 0.728, sc_f1: 0.000\n[235, 881.8] loss: 0.001, f1: 0.736, sc_f1: 0.000\n[240, 901.0] loss: 0.001, f1: 0.719, sc_f1: 0.000\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5daac17c98483d397d9362a11cdf053e2342ed
| 23,742 |
ipynb
|
Jupyter Notebook
|
Day_024_HW.ipynb
|
hengbinxu/ML100-Days
|
ed0eb6e32882239599df57486af3dc398f160d4c
|
[
"MIT"
] | 1 |
2019-01-02T01:18:27.000Z
|
2019-01-02T01:18:27.000Z
|
Day_024_HW.ipynb
|
hengbinxu/ML100-Days
|
ed0eb6e32882239599df57486af3dc398f160d4c
|
[
"MIT"
] | null | null | null |
Day_024_HW.ipynb
|
hengbinxu/ML100-Days
|
ed0eb6e32882239599df57486af3dc398f160d4c
|
[
"MIT"
] | null | null | null | 33.676596 | 210 | 0.453205 |
[
[
[
"# 作業 : (Kaggle)鐵達尼生存預測\nhttps://www.kaggle.com/c/titanic",
"_____no_output_____"
],
[
"# 作業1\n* 參考範例,將鐵達尼的船票票號( 'Ticket' )欄位使用特徵雜湊 / 標籤編碼 / 目標均值編碼三種轉換後, \n與其他數值型欄位一起預估生存機率",
"_____no_output_____"
]
],
[
[
"# 做完特徵工程前的所有準備 (與前範例相同)\nimport pandas as pd\nimport numpy as np\nimport copy, time\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\n\ndata_path = 'data/data2/'\ndf_train = pd.read_csv(data_path + 'titanic_train.csv')\ndf_test = pd.read_csv(data_path + 'titanic_test.csv')\n\ntrain_Y = df_train['Survived']\nids = df_test['PassengerId']\ndf_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)\ndf_test = df_test.drop(['PassengerId'] , axis=1)\ndf = pd.concat([df_train,df_test])\ndf.head()",
"_____no_output_____"
],
[
"#只取類別值 (object) 型欄位, 存於 object_features 中\nobject_features = []\nfor dtype, feature in zip(df.dtypes, df.columns):\n if dtype == 'object':\n object_features.append(feature)\nprint(f'{len(object_features)} Numeric Features : {object_features}\\n')\n\n# 只留類別型欄位\ndf = df[object_features]\ndf = df.fillna('None')\ntrain_num = train_Y.shape[0]\ndf.head()",
"5 Numeric Features : ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']\n\n"
]
],
[
[
"# 作業2\n* 承上題,三者比較效果何者最好?\n - Answer: 在此例中三種效果差不多,計數編碼準確度略高一些,但不明顯",
"_____no_output_____"
]
],
[
[
"# 對照組 : 標籤編碼 + 邏輯斯迴歸\ndf_temp = pd.DataFrame()\nfor c in df.columns:\n df_temp[c] = LabelEncoder().fit_transform(df[c])\ntrain_X = df_temp[:train_num]\nestimator = LogisticRegression()\nprint(cross_val_score(estimator, train_X, train_Y, cv=5).mean())\ndf_temp.head()",
"0.780004837244799\n"
],
[
"# 加上 'Cabin' 欄位的計數編碼\ncount_cabin = df.groupby('Cabin').size().reset_index()\ncount_cabin.columns = ['Cabin', 'Cabin_count']\ncount_df = pd.merge(df, count_cabin, on='Cabin', how='left')\ndf_temp['Cabin_count'] = count_df['Cabin_count']\ntrain_X = df_temp[:train_Y.shape[0]]\ntrain_X.head()",
"_____no_output_____"
],
[
"# 'Cabin'計數編碼 + 邏輯斯迴歸\ncv = 5\nLR = LogisticRegression()\nmean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()\nprint('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))",
"5-fold cross validation average accuracy: 0.7856230275549181\n"
],
[
"# 'Cabin'特徵雜湊 + 邏輯斯迴歸\ncv=5\ndf_temp = pd.DataFrame()\nfor c in df.columns:\n df_temp[c] = LabelEncoder().fit_transform(df[c])\n\ndf_temp['Cabin_hash'] = df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin']\ntrain_X = df_temp[:train_Y.shape[0]]\nLR = LogisticRegression()\nmean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()\nprint('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))",
"5-fold cross validation average accuracy: 0.7811284327504169\n"
],
[
"# 'Cabin'計數編碼 + 'Cabin'特徵雜湊 + 邏輯斯迴歸\ncv=5\ndf_temp = pd.DataFrame()\nfor c in df.columns:\n df_temp[c] = LabelEncoder().fit_transform(df[c])\ndf_temp['Cabin_hash']= df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin'] \ndf_temp['Cabin_count'] = count_df['Cabin_count']\ntrain_X = df_temp[:train_Y.shape[0]]\n\nLR = LogisticRegression()\nmean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()\nprint('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))",
"5-fold cross validation average accuracy: 0.7878576644264265\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a5dac77193ff4d03b9807a934e9ea05f0ade894
| 148,041 |
ipynb
|
Jupyter Notebook
|
notebooks/7 QRRHO.ipynb
|
geem-lab/overreact-guide
|
4a0861bb8ffb451b2c26adfca32758a066a60704
|
[
"MIT"
] | 9 |
2021-11-09T15:57:07.000Z
|
2022-01-22T17:12:23.000Z
|
notebooks/7 QRRHO.ipynb
|
Leticia-maria/overreact-guide
|
de404bb738900536c9f916b10981d6b1e9b60ba8
|
[
"MIT"
] | 12 |
2021-11-23T19:08:31.000Z
|
2022-03-28T14:09:26.000Z
|
notebooks/7 QRRHO.ipynb
|
Leticia-maria/overreact-guide
|
de404bb738900536c9f916b10981d6b1e9b60ba8
|
[
"MIT"
] | 1 |
2021-12-19T00:44:56.000Z
|
2021-12-19T00:44:56.000Z
| 855.728324 | 42,849 | 0.779892 |
[
[
[
"Validate Grimme's QRRHO treatment for vibrational enthalpy.\n\n*Quasi*-Rigid Rotor Harmonic Oscillator (QRRHO) models attempt to improve free\nenergies for weakly bounded structures.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport overreact as rx\nfrom overreact import constants\n\nsns.set(style=\"white\", context=\"notebook\", palette=\"colorblind\", font_scale=1.1)",
"_____no_output_____"
],
[
"vibfreqs = np.linspace(0.0001, 400.0, num=400)\nvibmoments = rx.thermo._gas._vibrational_moment(vibfreqs)\n\nfig, ax = plt.subplots()\nax.plot(\n vibfreqs,\n [\n rx.thermo._gas.calc_vib_energy(vibfreq, qrrho=False) / constants.kcal\n for vibfreq in vibfreqs\n ],\n \"-\",\n label=\"RRHO\",\n)\nax.plot(\n vibfreqs,\n [\n rx.thermo._gas.calc_vib_energy(vibfreq, qrrho=True) / constants.kcal\n for vibfreq in vibfreqs\n ],\n \"-\",\n label=\"QRRHO\",\n)\n\ninset = fig.add_axes([0.6, 0.4, 0.25, 0.25])\nweights = rx.thermo._gas._head_gordon_damping(vibfreqs)\ninset.plot(vibfreqs, weights)\n\nax.set_ylabel(r\"Enthalpy [kcal mol$^{-1}$]\")\nax.set_xlabel(r\"$\\nu_i$ [cm$^{-1}$]\")\ninset.set_ylabel(r\"$\\omega$\")\ninset.set_xlabel(r\"$\\nu_i$ [cm$^{-1}$]\")\n\nax.set_ylim(0.2, 0.8)\nax.set_xlim(0, 400)\ninset.set_ylim(0.0, 1.0)\ninset.set_xlim(0, 400)\n\nax.legend()\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"Validate Grimme's QRRHO treatment for vibrational entropy.\nThe inset above consists of the damping function itself.\n\n*Quasi*-Rigid Rotor Harmonic Oscillator (QRRHO) models attempt to improve free\nenergies for weakly bounded structures.",
"_____no_output_____"
]
],
[
[
"vibfreqs = np.linspace(0.0001, 400.0, num=400)\nvibmoments = rx.thermo._gas._vibrational_moment(vibfreqs)\n\nfig, ax = plt.subplots()\nax.plot(\n vibfreqs,\n [rx.thermo._gas.calc_vib_entropy(vibfreq, qrrho=False) / constants.calorie for vibfreq in vibfreqs],\n \"--\",\n label=\"Harmonic approx. (RRHO)\",\n)\nax.plot(\n vibfreqs,\n [rx.thermo._gas.calc_vib_entropy(vibfreq, qrrho=True) / constants.calorie for vibfreq in vibfreqs],\n \"-\",\n label=\"Damped average (QRRHO)\",\n)\nax.plot(\n vibfreqs,\n [\n rx.thermo._gas.calc_rot_entropy(moments=vibmoment, independent=True) / constants.calorie\n for vibmoment in vibmoments\n ],\n \"-.\",\n label=\"Rotational approx.\",\n)\n\nax.set_ylabel(r\"Entropy [cal mol$^{-1}$ K$^{-1}$]\")\nax.set_xlabel(r\"$\\nu_i$ [cm$^{-1}$]\")\n\nax.set_ylim(0, 60 / constants.calorie)\nax.set_xlim(0, 400)\n\nax.legend()\nfig.tight_layout()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5db993cfaf32b6e4d74c9aadc02b3e7eb91d6e
| 692 |
ipynb
|
Jupyter Notebook
|
.virtual_documents/site_energy_consumption_prediction/2_EDA-R.ipynb
|
artanzand/site_energy_consumption_prediction
|
315bc7fcec9e6d5cd962f6ddf1403c76b2611b7a
|
[
"MIT"
] | null | null | null |
.virtual_documents/site_energy_consumption_prediction/2_EDA-R.ipynb
|
artanzand/site_energy_consumption_prediction
|
315bc7fcec9e6d5cd962f6ddf1403c76b2611b7a
|
[
"MIT"
] | null | null | null |
.virtual_documents/site_energy_consumption_prediction/2_EDA-R.ipynb
|
artanzand/site_energy_consumption_prediction
|
315bc7fcec9e6d5cd962f6ddf1403c76b2611b7a
|
[
"MIT"
] | null | null | null | 25.62963 | 73 | 0.758671 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a5dca0da9f6b1242cfd8682a2758caecf538c69
| 5,572 |
ipynb
|
Jupyter Notebook
|
first_simulation.ipynb
|
sturrion/stock-market-investment-tools
|
16c600b1fc38b3d02e03aee46747bdb4552a5ba8
|
[
"MIT"
] | 1 |
2019-05-07T09:35:34.000Z
|
2019-05-07T09:35:34.000Z
|
first_simulation.ipynb
|
sturrion/stock-market-investment-tools
|
16c600b1fc38b3d02e03aee46747bdb4552a5ba8
|
[
"MIT"
] | null | null | null |
first_simulation.ipynb
|
sturrion/stock-market-investment-tools
|
16c600b1fc38b3d02e03aee46747bdb4552a5ba8
|
[
"MIT"
] | null | null | null | 29.796791 | 126 | 0.51005 |
[
[
[
"### What if we buy a share every day at the highest price?",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"symbols = ['ABBV','AAPL','ADBE','APD','BRK-B','COST','CTL','DRI','IRM','KIM','MA','MCD','NFLX','NVDA','SO','V','VLO']\ndates = ['2018-01-01', '2018-12-31']\ndata_directory = './data/hist/'\nplot_directory = './plot/hist/'\n",
"_____no_output_____"
],
[
"def get_ticker_data(symbol, start_date, end_date):\n ticker = pd.read_csv(data_directory + symbol + '.csv')\n \n ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')\n ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d')) \n & (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]\n \n ticker['units'] = 1\n # At the highest price\n ticker['investment'] = ticker['units'] * ticker['High']\n \n ticker['total_units'] = ticker['units'].cumsum()\n ticker['total_investment'] = ticker['investment'].cumsum()\n # At the lowest price\n ticker['total_value'] = ticker['total_units'] * ticker['Low']\n \n ticker['percent'] = ((ticker['total_value'] - ticker['total_investment'])/ ticker['total_investment']) * 100.0\n \n return ticker\n",
"_____no_output_____"
],
[
"def get_ticker_data_adj(symbol, start_date, end_date):\n ticker = pd.read_csv(data_directory + symbol + '.csv')\n \n ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')\n ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d')) \n & (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]\n \n ticker['units'] = 1\n ticker['investment'] = ticker['units'] * ticker['Adj Close']\n \n ticker['total_units'] = ticker['units'].cumsum()\n ticker['total_investment'] = ticker['investment'].cumsum()\n ticker['total_value'] = ticker['total_units'] * ticker['Adj Close']\n \n ticker['percent'] = ((ticker['total_value'] - ticker['total_investment'])/ ticker['total_investment']) * 100.0\n \n return ticker",
"_____no_output_____"
],
[
"for symbol in symbols:\n\n ticker = get_ticker_data(symbol, *dates)\n \n fig = plt.figure(figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')\n # 1\n plt.subplot(2, 1, 1)\n plt.plot(ticker['Date'], ticker['total_investment'], color='b')\n plt.plot(ticker['Date'], ticker['total_value'], color='r')\n plt.title(symbol + ' Dates: ' + dates[0] + ' to ' + dates[1])\n plt.ylabel('Values')\n\n # 2\n plt.subplot(2, 1, 2)\n plt.plot(ticker['Date'], ticker['percent'], color='b')\n plt.xlabel('Dates')\n plt.ylabel('Percent')\n\n plt.show()\n\n #fig.savefig(plot_directory + symbol + '.pdf', bbox_inches='tight')\n",
"_____no_output_____"
],
[
"plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')\n \nfor symbol in symbols:\n\n ticker = get_ticker_data(symbol, *dates)\n \n plt.plot(ticker['Date'], ticker['percent'])\n plt.xlabel('Dates')\n plt.ylabel('Percent')\n\nplt.legend(symbols)\nplt.show()",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(len(symbols), 1, sharex=True)\n# Remove horizontal space between axes\nfig.subplots_adjust(hspace=0)\n\nfor i in range(0, len(symbols)):\n\n ticker = get_ticker_data(symbols[i], *dates)\n \n # Plot each graph, and manually set the y tick values\n axs[i].plot(ticker['Date'], ticker['percent'])\n axs[i].set_ylim(-200, 800)\n axs[i].legend([symbols[i]])\n \nprint(type(axs[i]))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5dce64c7b416629bebc8c3c6bd1ea6ec6a4c0f
| 96,390 |
ipynb
|
Jupyter Notebook
|
notebooks/MILESTONE 2/1.0-jls-cross_validation_analysis.ipynb
|
itSQualL/machine-learning-techinques
|
66d3f6b1f6d8612e1bb716c992e2b18be0a339da
|
[
"MIT"
] | null | null | null |
notebooks/MILESTONE 2/1.0-jls-cross_validation_analysis.ipynb
|
itSQualL/machine-learning-techinques
|
66d3f6b1f6d8612e1bb716c992e2b18be0a339da
|
[
"MIT"
] | 1 |
2018-12-02T20:31:24.000Z
|
2018-12-02T20:31:24.000Z
|
notebooks/MILESTONE 2/1.0-jls-cross_validation_analysis.ipynb
|
itSQualL/machine-learning-techniques
|
66d3f6b1f6d8612e1bb716c992e2b18be0a339da
|
[
"MIT"
] | null | null | null | 72.582831 | 26,816 | 0.69667 |
[
[
[
"# 1. Loading and filtering data",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"## 1.1. First, we load the data and filter the columns",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"/home/alberto/Documentos/MatchingLearning/Practicas/Moriarty2.csv\",\n usecols=[\"UUID\",\"ActionType\"])\n\ndf2 = pd.read_csv(\"/home/alberto/Documentos/MatchingLearning/Practicas/T4.csv\",\n usecols=[\"UUID\", \"CPU_0\", \"CPU_1\", \"CPU_2\", \"CPU_3\", \"Traffic_TotalRxBytes\", \n \"Traffic_TotalTxBytes\", \"MemFree\"])",
"_____no_output_____"
]
],
[
[
"## 1.2. Merging the two datasets",
"_____no_output_____"
],
[
"The first thing we need to do is to convert the column 'UUID' (which is a timestamp in milliseconds) into a datetime",
"_____no_output_____"
]
],
[
[
"df['UUID'] = pd.to_datetime(df['UUID'], unit=\"ms\")\ndf['UUID'] = df['UUID'].dt.round('t')\n\n\ndf2['UUID'] = pd.to_datetime(df['UUID'], unit=\"ms\")\ndf2['UUID'] = df['UUID'].dt.round('t')\n\ndata = pd.merge(df,df2, on=['UUID'])",
"_____no_output_____"
]
],
[
[
"## 1.3. Replace ActionType",
"_____no_output_____"
],
[
"We need numeric values in the columns, so we replace ActionType malicious/benign with 1/0 respectively. Finally, we don't need the column 'UUID' for the prediction model, so we remove it.",
"_____no_output_____"
]
],
[
[
"data['ActionType'] = data['ActionType'].replace(['malicious'], 1)\ndata['ActionType'] = data['ActionType'].replace(['benign'], 0)\ndata = data.drop('UUID', 1)\ndata\n",
"_____no_output_____"
]
],
[
[
"# 2. Naive Bayes.",
"_____no_output_____"
],
[
"## 2.1. Preprocessing.",
"_____no_output_____"
]
],
[
[
"x = data[['CPU_0', 'CPU_1', 'CPU_2', 'CPU_3', 'Traffic_TotalRxBytes', 'Traffic_TotalTxBytes', 'MemFree']]\n",
"_____no_output_____"
]
],
[
[
"## 2.2. Standardization.",
"_____no_output_____"
]
],
[
[
"from sklearn import preprocessing\nscaler = preprocessing.StandardScaler().fit(x)\nx_scaled = scaler.transform(x)",
"_____no_output_____"
]
],
[
[
"## 2.2. Round.",
"_____no_output_____"
]
],
[
[
"y = data['ActionType']\ny_round = [ round(e,0) for e in y ]",
"_____no_output_____"
]
],
[
[
"## 2.3. Sample a training set while holding out 40% of the data for testing (evaluating) our classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nx_train, x_test, y_train, y_test = train_test_split(x_scaled, y_round, test_size=0.4)",
"_____no_output_____"
]
],
[
[
"## 2.4. Create a Gaussian Classifier.",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\nmodel = GaussianNB()",
"_____no_output_____"
]
],
[
[
"## 2.5. Training the model.",
"_____no_output_____"
]
],
[
[
"model.fit(x_train, y_train)",
"_____no_output_____"
]
],
[
[
"## 2.6. Prediction on the held-out test data.",
"_____no_output_____"
]
],
[
[
"y_pred = model.predict(x_test)",
"_____no_output_____"
]
],
[
[
"## 2.7. Calculation.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_absolute_error\nmae = mean_absolute_error(y_test,y_pred)\nprint (\"Error Measure \", mae)",
"Error Measure 0.6422182468694096\n"
],
[
"# x axis for plotting\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxx = np.arange(len(y_test))\nplt.scatter(xx, y_test, c='r', label='data')\nplt.plot(xx, y_pred, c='g', label='prediction')\nplt.axis('tight')\nplt.legend()\nplt.title(\"Gaussian NaiveBayes\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 3. CROSS VALIDATION ANALYSIS",
"_____no_output_____"
],
[
"## 3.1. Features and labels",
"_____no_output_____"
]
],
[
[
"x = data[['CPU_0', 'CPU_1', 'CPU_2', 'CPU_3', 'Traffic_TotalRxBytes', 'Traffic_TotalTxBytes', 'MemFree']]\ny = data['ActionType']",
"_____no_output_____"
]
],
[
[
"## 3.2. x axis for plotting",
"_____no_output_____"
]
],
[
[
"import numpy as np\nxx = np.arange(len(y))",
"_____no_output_____"
]
],
[
[
"## 3.3. Analysis",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom sklearn import neighbors\nfrom sklearn.model_selection import cross_val_score\n\nfor i, weights in enumerate(['uniform', 'distance']):\n    total_scores = []\n    for n_neighbors in range(1,30):\n        knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)\n        knn.fit(x,y)\n        scores = -cross_val_score(knn, x,y, \n                                  scoring='neg_mean_absolute_error', cv=10)\n        total_scores.append(scores.mean())\n    \n    plt.plot(range(0,len(total_scores)), total_scores, \n             marker='o', label=weights)\n    plt.ylabel('cv score')\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# PCA.",
"_____no_output_____"
]
],
[
[
"from sklearn import preprocessing\n\nscaler = preprocessing.StandardScaler()\ndatanorm = scaler.fit_transform(data)",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\n\nn_components = 2\nestimator = PCA(n_components)\nX_pca = estimator.fit_transform(datanorm)",
"_____no_output_____"
],
[
"import numpy\nimport matplotlib.pyplot as plt\n\nx = X_pca[:,0]\ny = X_pca[:,1]\nplt.scatter(x,y)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Creating CSV to analyze the results.",
"_____no_output_____"
]
],
[
[
"import os\n\ndirectory = \"../data/processed\"\n\nif not os.path.exists(directory):\n os.makedirs(directory)\n \ndata.to_csv(directory + \"/MoriartyT4.csv\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5dd1cbb7f54f5a916da9335c97efd21db9f1ee
| 151,944 |
ipynb
|
Jupyter Notebook
|
CarND-LaneLines-P1/P1.ipynb
|
mlandry1/CarND
|
bfa8a1af634017cc35eedff8974d299a58006554
|
[
"MIT"
] | 1 |
2018-05-13T08:43:59.000Z
|
2018-05-13T08:43:59.000Z
|
CarND-LaneLines-P1/P1.ipynb
|
mlandry1/CarND
|
bfa8a1af634017cc35eedff8974d299a58006554
|
[
"MIT"
] | null | null | null |
CarND-LaneLines-P1/P1.ipynb
|
mlandry1/CarND
|
bfa8a1af634017cc35eedff8974d299a58006554
|
[
"MIT"
] | 3 |
2018-05-13T08:44:05.000Z
|
2021-01-12T08:04:16.000Z
| 164.977199 | 119,954 | 0.876224 |
[
[
[
"# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---",
"_____no_output_____"
],
[
"**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>",
"_____no_output_____"
],
[
"**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see [this forum post](https://carnd-forums.udacity.com/cq/viewquestion.action?spaceKey=CAR&id=29496372&questionTitle=finding-lanes---import-cv2-fails-even-though-python-in-the-terminal-window-has-no-problem-with-import-cv2) for more troubleshooting tips.** ",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Read in an Image",
"_____no_output_____"
]
],
[
[
"#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"## Ideas for Lane Detection Pipeline",
"_____no_output_____"
],
[
"**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images\n`cv2.cvtColor()` to grayscale or change color\n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**",
"_____no_output_____"
],
[
"## Helper Functions",
"_____no_output_____"
],
[
"Below are some helper functions to help get you started. They should look familiar from the lesson!",
"_____no_output_____"
]
],
[
[
"import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=2):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). 
\n\n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n\n This function draws `lines` with `color` and `thickness`. \n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n\n #Initialize variables\n sum_fit_left = 0\n sum_fit_right = 0\n number_fit_left = 0\n number_fit_right = 0\n\n for line in lines:\n for x1,y1,x2,y2 in line:\n #find the slope and offset of each line found (y=mx+b)\n fit = np.polyfit((x1, x2), (y1, y2), 1)\n\n #limit the slope to plausible left lane values and compute the mean slope/offset\n if fit[0] >= min_slope and fit[0] <= max_slope:\n sum_fit_left = fit + sum_fit_left\n number_fit_left = number_fit_left + 1\n\n #limit the slope to plausible right lane values and compute the mean slope/offset\n if fit[0] >= -max_slope and fit[0] <= -min_slope:\n sum_fit_right = fit + sum_fit_right\n number_fit_right = number_fit_right + 1\n\n #avoid division by 0\n if number_fit_left > 0:\n #Compute the mean of all fitted lines\n mean_left_fit = sum_fit_left/number_fit_left\n #Given two y points (bottom of image and top of region of interest), compute the x coordinates\n x_top_left = int((roi_top - mean_left_fit[1])/mean_left_fit[0])\n x_bottom_left = int((roi_bottom - mean_left_fit[1])/mean_left_fit[0])\n #Draw the line\n cv2.line(img, (x_bottom_left,roi_bottom), (x_top_left,roi_top), [255, 0, 0], 5)\n else:\n mean_left_fit = (0,0)\n\n if number_fit_right > 0:\n #Compute the mean of all fitted lines\n mean_right_fit = sum_fit_right/number_fit_right\n #Given two y points (bottom of image and top of region of interest), compute the x coordinates\n x_top_right = int((roi_top - 
mean_right_fit[1])/mean_right_fit[0])\n x_bottom_right = int((roi_bottom - mean_right_fit[1])/mean_right_fit[0])\n #Draw the line\n cv2.line(img, (x_bottom_right,roi_bottom), (x_top_right,roi_top), [255, 0, 0], 5)\n else:\n fit_right_mean = (0,0)\n\ndef hough_lines(img, roi_top, roi_bottom, min_slope, max_slope, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=4)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)",
"_____no_output_____"
]
],
[
[
"## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your pipeline works well on these images before you try the videos.**",
"_____no_output_____"
]
],
[
[
"import os\ntest_images = os.listdir(\"test_images/\")",
"_____no_output_____"
]
],
[
[
"## Build a Lane Finding Pipeline\n\n",
"_____no_output_____"
],
[
"Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.",
"_____no_output_____"
]
],
[
[
"# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images directory.\n\ndef process_image1(img):\n #Apply greyscale\n gray_img = grayscale(img)\n\n # Define a kernel size and Apply Gaussian blur\n kernel_size = 5\n blur_img = gaussian_blur(gray_img, kernel_size)\n\n #Apply the Canny transform\n low_threshold = 50\n high_threshold = 150\n canny_img = canny(blur_img, low_threshold, high_threshold)\n\n #Region of interest (roi) horizontal percentages\n roi_hor_perc_top_left = 0.4675\n roi_hor_perc_top_right = 0.5375\n roi_hor_perc_bottom_left = 0.11\n roi_hor_perc_bottom_right = 0.95\n \n #Region of interest vertical percentages\n roi_vert_perc = 0.5975\n \n #Apply a region of interest mask of the image\n vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)\n croped_img = region_of_interest(canny_img,vertices)\n\n # Define the Hough img parameters\n rho = 2 # distance resolution in pixels of the Hough grid\n theta = np.pi/180 # angular resolution in radians of the Hough grid\n threshold = 15 # minimum number of votes (intersections in Hough grid cell)\n min_line_length = 40 # minimum number of pixels making up a line\n max_line_gap = 20 # maximum gap in pixels between connectable line segments\n min_slope = 0.5 # minimum line slope \n max_slope = 0.8 # maximum line slope\n \n # Apply the Hough transform to get an image and the lines\n hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)\n \n # Return the image of the lines blended with the original\n return weighted_img(img, hough_img, 0.7, 1.0)\n\n#prepare directory to receive processed images\nnewpath = 
'test_images/processed' \n\nif not os.path.exists(newpath):\n os.makedirs(newpath)\n \nfor file in test_images:\n \n # skip files starting with processed\n if file.startswith('processed'):\n continue\n \n image = mpimg.imread('test_images/' + file) \n \n processed_img = process_image1(image)\n \n #Extract file name\n base = os.path.splitext(file)[0] \n \n #break\n mpimg.imsave('test_images/processed/processed-' + base +'.png', processed_img, format = 'png', cmap = plt.cm.gray)\n \n print(\"Processed \", file)",
"Processed solidYellowCurve.jpg\nProcessed solidYellowLeft.jpg\nProcessed solidWhiteRight.jpg\nProcessed whiteCarLaneSwitch.jpg\nProcessed solidYellowCurve2.jpg\nProcessed solidWhiteCurve.jpg\n"
]
],
[
[
"## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an `import error` when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out [this forum post](https://carnd-forums.udacity.com/questions/22677062/answers/22677109) for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://carnd-forums.udacity.com/display/CAR/questions/26218840/import-videofileclip-error) for more troubleshooting tips across operating systems.**",
"_____no_output_____"
]
],
[
[
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"def process_image(img):\n #Apply greyscale\n gray_img = grayscale(img)\n\n # Define a kernel size and Apply Gaussian blur\n kernel_size = 5\n blur_img = gaussian_blur(gray_img, kernel_size)\n\n #Apply the Canny transform\n low_threshold = 50\n high_threshold = 150\n canny_img = canny(blur_img, low_threshold, high_threshold)\n\n #Region of interest (roi) horizontal percentages\n roi_hor_perc_top_left = 0.4675\n roi_hor_perc_top_right = 0.5375\n roi_hor_perc_bottom_left = 0.11\n roi_hor_perc_bottom_right = 0.95\n \n #Region of interest vertical percentages\n roi_vert_perc = 0.5975\n\n #Apply a region of interest mask of the image\n vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)\n croped_img = region_of_interest(canny_img,vertices)\n\n # Define the Hough img parameters\n rho = 2 # distance resolution in pixels of the Hough grid\n theta = np.pi/180 # angular resolution in radians of the Hough grid\n threshold = 15 # minimum number of votes (intersections in Hough grid cell)\n min_line_length = 40 # minimum number of pixels making up a line\n max_line_gap = 20 # maximum gap in pixels between connectable line segments\n min_slope = 0.5 # minimum line slope \n max_slope = 0.8 # maximum line slope \n \n # Apply the Hough transform to get an image and the lines\n hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)\n \n # Return the image of the lines blended with the original\n return weighted_img(img, hough_img, 0.7, 1.0)",
"_____no_output_____"
]
],
[
[
"Let's try the one with the solid white lane on the right first ...",
"_____no_output_____"
]
],
[
[
"white_output = 'white.mp4'\nclip1 = VideoFileClip(\"solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"[MoviePy] >>>> Building video white.mp4\n[MoviePy] Writing video white.mp4\n"
]
],
[
[
"Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.",
"_____no_output_____"
]
],
[
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\" >\n</video>\n\"\"\".format(white_output))",
"_____no_output_____"
]
],
[
[
"## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**",
"_____no_output_____"
],
[
"Now for the one with the solid yellow lane on the left. This one's more tricky!",
"_____no_output_____"
]
],
[
[
"yellow_output = 'yellow.mp4'\nclip2 = VideoFileClip('solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)",
"[MoviePy] >>>> Building video yellow.mp4\n[MoviePy] Writing video yellow.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"_____no_output_____"
]
],
[
[
"## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n",
"_____no_output_____"
],
[
"## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!",
"_____no_output_____"
]
],
[
[
"challenge_output = 'extra.mp4'\nclip2 = VideoFileClip('challenge.mp4')\nchallenge_clip = clip2.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)",
"[MoviePy] >>>> Building video extra.mp4\n[MoviePy] Writing video extra.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a5dd5c7a14ec360911b591d563d6f6a2d2c903a
| 49,528 |
ipynb
|
Jupyter Notebook
|
code/chap23.ipynb
|
kgosbee/ModSimPy
|
0663a65c9771b4be3bd02743fde7ef329a0ff39f
|
[
"MIT"
] | null | null | null |
code/chap23.ipynb
|
kgosbee/ModSimPy
|
0663a65c9771b4be3bd02743fde7ef329a0ff39f
|
[
"MIT"
] | null | null | null |
code/chap23.ipynb
|
kgosbee/ModSimPy
|
0663a65c9771b4be3bd02743fde7ef329a0ff39f
|
[
"MIT"
] | null | null | null | 50.902364 | 21,272 | 0.699382 |
[
[
[
"# Modeling and Simulation in Python\n\nChapter 23\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n",
"_____no_output_____"
]
],
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *",
"_____no_output_____"
]
],
[
[
"### Code from the previous chapter",
"_____no_output_____"
]
],
[
[
"m = UNITS.meter\ns = UNITS.second\nkg = UNITS.kilogram\ndegree = UNITS.degree",
"_____no_output_____"
],
[
"params = Params(x = 0 * m, \n y = 1 * m,\n g = 9.8 * m/s**2,\n mass = 145e-3 * kg,\n diameter = 73e-3 * m,\n rho = 1.2 * kg/m**3,\n C_d = 0.3,\n angle = 45 * degree,\n velocity = 40 * m / s,\n t_end = 20 * s)",
"_____no_output_____"
],
[
"def make_system(params):\n \"\"\"Make a system object.\n \n params: Params object with angle, velocity, x, y,\n diameter, duration, g, mass, rho, and C_d\n \n returns: System object\n \"\"\"\n unpack(params)\n \n # convert angle to degrees\n theta = np.deg2rad(angle)\n \n # compute x and y components of velocity\n vx, vy = pol2cart(theta, velocity)\n \n # make the initial state\n init = State(x=x, y=y, vx=vx, vy=vy)\n \n # compute area from diameter\n area = np.pi * (diameter/2)**2\n \n return System(params, init=init, area=area)",
"_____no_output_____"
],
[
"def drag_force(V, system):\n \"\"\"Computes drag force in the opposite direction of `V`.\n \n V: velocity\n system: System object with rho, C_d, area\n \n returns: Vector drag force\n \"\"\"\n unpack(system)\n mag = -rho * V.mag**2 * C_d * area / 2\n direction = V.hat()\n f_drag = mag * direction\n return f_drag",
"_____no_output_____"
],
[
"def slope_func(state, t, system):\n \"\"\"Computes derivatives of the state variables.\n \n state: State (x, y, x velocity, y velocity)\n t: time\n system: System object with g, rho, C_d, area, mass\n \n returns: sequence (vx, vy, ax, ay)\n \"\"\"\n x, y, vx, vy = state\n unpack(system)\n\n V = Vector(vx, vy) \n a_drag = drag_force(V, system) / mass\n a_grav = Vector(0, -g)\n \n a = a_grav + a_drag\n \n return vx, vy, a.x, a.y",
"_____no_output_____"
],
[
"def event_func(state, t, system):\n \"\"\"Stop when the y coordinate is 0.\n \n state: State object\n t: time\n system: System object\n \n returns: y coordinate\n \"\"\"\n x, y, vx, vy = state\n return y",
"_____no_output_____"
]
],
[
[
"### Optimal launch angle\n\nTo find the launch angle that maximizes distance from home plate, we need a function that takes launch angle and returns range.",
"_____no_output_____"
]
],
[
[
"def range_func(angle, params): \n \"\"\"Computes range for a given launch angle.\n \n angle: launch angle in degrees\n params: Params object\n \n returns: distance in meters\n \"\"\"\n params = Params(params, angle=angle)\n system = make_system(params)\n results, details = run_ode_solver(system, slope_func, events=event_func)\n x_dist = get_last_value(results.x) * m\n return x_dist",
"_____no_output_____"
]
],
[
[
"Let's test `range_func`.",
"_____no_output_____"
]
],
[
[
"%time range_func(45, params)",
"Wall time: 106 ms\n"
]
],
[
[
"And sweep through a range of angles.",
"_____no_output_____"
]
],
[
[
"angles = linspace(20, 80, 21)\nsweep = SweepSeries()\n\nfor angle in angles:\n x_dist = range_func(angle, params)\n print(angle, x_dist)\n sweep[angle] = x_dist",
"20.0 79.96823513701818 meter\n23.0 86.2962864918857 meter\n26.0 91.59647908800756 meter\n29.0 95.89089380357947 meter\n32.0 99.20335822576214 meter\n35.0 101.55668007973463 meter\n38.0 102.97173880917646 meter\n41.0 103.46740813177843 meter\n44.0 103.060922479178 meter\n47.0 101.7684506860653 meter\n50.0 99.60572853320414 meter\n53.0 96.58867331645769 meter\n56.0 92.7339915489422 meter\n59.0 88.05990483905572 meter\n62.0 82.58716276454999 meter\n65.0 76.34016117578483 meter\n68.0 69.34714056465755 meter\n71.0 61.63878192638946 meter\n74.0 53.256101549629825 meter\n77.0 44.246680677829886 meter\n80.0 34.6702130194327 meter\n"
]
],
[
[
"Plotting the `Sweep` object, it looks like the peak is between 40 and 45 degrees.",
"_____no_output_____"
]
],
[
[
"plot(sweep, color='C2')\ndecorate(xlabel='Launch angle (degree)',\n ylabel='Range (m)',\n title='Range as a function of launch angle',\n legend=False)\n\nsavefig('figs/chap10-fig03.pdf')",
"Saving figure to file figs/chap10-fig03.pdf\n"
]
],
[
[
"We can use `max_bounded` to search for the peak efficiently.",
"_____no_output_____"
]
],
[
[
"%time res = max_bounded(range_func, [0, 90], params)",
"Wall time: 837 ms\n"
]
],
[
[
"`res` is a `ModSimSeries` object with detailed results:",
"_____no_output_____"
]
],
[
[
"res",
"_____no_output_____"
]
],
[
[
"`x` is the optimal angle and `fun` the optimal range.",
"_____no_output_____"
]
],
[
[
"optimal_angle = res.x * degree",
"_____no_output_____"
],
[
"max_x_dist = res.fun",
"_____no_output_____"
]
],
[
[
"### Under the hood\n\nRead the source code for `max_bounded` and `min_bounded`, below.\n\nAdd a print statement to `range_func` that prints `angle`. Then run `max_bounded` again so you can see how many times it calls `range_func` and what the arguments are.",
"_____no_output_____"
]
],
[
[
"%psource max_bounded",
"_____no_output_____"
],
[
"%psource min_bounded",
"_____no_output_____"
]
],
[
[
"### The Manny Ramirez problem\n\nFinally, let's solve the Manny Ramirez problem:\n\n*What is the minimum effort required to hit a home run in Fenway Park?*\n\nFenway Park is a baseball stadium in Boston, Massachusetts. One of its most famous features is the \"Green Monster\", which is a wall in left field that is unusually close to home plate, only 310 feet along the left field line. To compensate for the short distance, the wall is unusually high, at 37 feet.\n\nAlthough the problem asks for a minimum, it is not an optimization problem. Rather, we want to solve for the initial velocity that just barely gets the ball to the top of the wall, given that it is launched at the optimal angle.\n\nAnd we have to be careful about what we mean by \"optimal\". For this problem, we don't want the longest range, we want the maximum height at the point where it reaches the wall.\n\nIf you are ready to solve the problem on your own, go ahead. Otherwise I will walk you through the process with an outline and some starter code.\n\nAs a first step, write a function called `height_func` that takes a launch angle and a `Params` object as parameters, simulates the flight of a baseball, and returns the height of the baseball when it reaches a point 94.5 meters (310 feet) from home plate.",
"_____no_output_____"
]
],
[
[
"def event_func1(state, t, system):\n \"\"\"Stop when the ball reaches the wall, 94.5 m from home plate.\n \n state: State object\n t: time\n system: System object\n \n returns: x coordinate relative to the wall\n \"\"\"\n x, y, vx, vy = state\n return x-94.5",
"_____no_output_____"
],
[
"def height_func(angle, params): \n \"\"\"Computes the height of the ball when it reaches the wall.\n \n angle: launch angle in degrees\n params: Params object\n \n returns: final height of the ball\n \"\"\"\n params = Params(params, angle=angle)\n system = make_system(params)\n results, details = run_ode_solver(system, slope_func, events=event_func1)\n y_height = get_last_value(results.y) * m\n return y_height",
"_____no_output_____"
]
],
[
[
"Always test the slope function with the initial conditions.",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
]
],
[
[
"Test your function with a launch angle of 45 degrees:",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
]
],
[
[
"Now use `max_bounded` to find the optimal angle. Is it higher or lower than the angle that maximizes range?",
"_____no_output_____"
]
],
[
[
"maxim = max_bounded(height_func, [0,90],params)",
"_____no_output_____"
],
[
"optimal_angle = maxim.x",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
]
],
[
[
"With initial velocity 40 m/s and an optimal launch angle, the ball clears the Green Monster with a little room to spare.\n\nWhich means we can get over the wall with a lower initial velocity.",
"_____no_output_____"
],
[
"### Finding the minimum velocity\n\nEven though we are finding the \"minimum\" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 11 m, given that it's launched at the optimal angle. And that's a job for `fsolve`.\n\nWrite an error function that takes a velocity and a `Params` object as parameters. It should use `max_bounded` to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11 meters.",
"_____no_output_____"
]
],
[
[
"def error_func(velocity, params):\n params1 = Params(params, velocity=velocity)\n answer = max_bounded(height_func, [0, 90], params1)\n return answer.fun - 11",
"_____no_output_____"
]
],
[
[
"Test your error function before you call `fsolve`.",
"_____no_output_____"
]
],
[
[
"error_func(12, params)",
"_____no_output_____"
]
],
[
[
"Then use `fsolve` to find the answer to the problem, the minimum velocity that gets the ball out of the park.",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
]
],
[
[
"And just to check, run `error_func` with the value you found.",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5ddad1680a4f016714f58506613e119a435383
| 151,465 |
ipynb
|
Jupyter Notebook
|
06-ml-zoomcamp-homework-trees.ipynb
|
alexkolo/202109-ML-Zoomcamp-by-Alexey-Grigorev
|
d3d0ab44b327bb15d874aa1efb084c9c9a5e01c8
|
[
"BSD-3-Clause"
] | null | null | null |
06-ml-zoomcamp-homework-trees.ipynb
|
alexkolo/202109-ML-Zoomcamp-by-Alexey-Grigorev
|
d3d0ab44b327bb15d874aa1efb084c9c9a5e01c8
|
[
"BSD-3-Clause"
] | null | null | null |
06-ml-zoomcamp-homework-trees.ipynb
|
alexkolo/202109-ML-Zoomcamp-by-Alexey-Grigorev
|
d3d0ab44b327bb15d874aa1efb084c9c9a5e01c8
|
[
"BSD-3-Clause"
] | null | null | null | 85.62182 | 28,476 | 0.816756 |
[
[
[
"from IPython.display import Markdown as md\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n#from sklearn.linear_model import LogisticRegression\n#from sklearn.metrics import auc as sklearn_auc\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_extraction import DictVectorizer\n",
"_____no_output_____"
],
[
"import shelve\nsavefile = 'Savefile.sav'",
"_____no_output_____"
]
],
[
[
"- Homework source https://github.com/alexeygrigorev/mlbookcamp-code/blob/master/course-zoomcamp/06-trees/homework.md\n- Lecture https://github.com/alexeygrigorev/mlbookcamp-code/blob/master/chapter-06-trees/06-trees.ipynb\n\n## 6.10 Homework\n\nThe goal of this homework is to create a tree-based regression model for predicting apartment prices (column `'price'`).\n\nIn this homework we'll again use the New York City Airbnb Open Data dataset - the same one we used in homework 2 and 3.\n\nYou can take it from [Kaggle](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data?select=AB_NYC_2019.csv)\nor download from [here](https://raw.githubusercontent.com/alexeygrigorev/datasets/master/AB_NYC_2019.csv)\nif you don't want to sign up to Kaggle.\n\n\nFor this homework, we prepared a [starter notebook](homework-6-starter.ipynb). \n\n\n## Loading the data\n\n* Use only the following columns:\n * `'neighbourhood_group',`\n * `'room_type',`\n * `'latitude',`\n * `'longitude',`\n * `'minimum_nights',`\n * `'number_of_reviews','reviews_per_month',`\n * `'calculated_host_listings_count',`\n * `'availability_365',`\n * `'price'`\n* Fill NAs with 0\n* Apply the log transform to `price`\n* Do train/validation/test split with 60%/20%/20% distribution. \n* Use the `train_test_split` function and set the `random_state` parameter to 1\n* Use `DictVectorizer` to turn the dataframe into matrices",
"_____no_output_____"
]
],
[
[
"col = ['neighbourhood_group',\n 'room_type',\n 'latitude',\n 'longitude',\n 'minimum_nights',\n 'number_of_reviews','reviews_per_month',\n 'calculated_host_listings_count',\n 'availability_365',\n 'price']",
"_____no_output_____"
],
[
"df = (\n pd.read_csv('../input/new-york-city-airbnb-open-data/AB_NYC_2019.csv')\n[col]\n.fillna(0)\n) \ndf['price'] = np.log1p(df['price'])\ndf.head()",
"_____no_output_____"
],
[
"y='price'\ntest=0.2\nval=0.2\nseed=1\n\ndf_train_full, df_test = train_test_split(df, test_size=test, random_state=seed)\ndf_train, df_val = train_test_split(df_train_full, test_size=val/(1-test), random_state=seed)\n\ny_test = df_test[y].copy().values\ny_val = df_val[y].copy().values\ny_train = df_train[y].copy().values\ndel df_test[y]\ndel df_val[y]\ndel df_train[y]",
"_____no_output_____"
],
[
"# hot encoding\ndict_train = df_train.to_dict(orient='records')\ndict_val = df_val.to_dict(orient='records')\ndv = DictVectorizer(sparse=False)\nX_train = dv.fit_transform(dict_train)\nX_val = dv.transform(dict_val)",
"_____no_output_____"
]
],
[
[
"## Question 1\n\nLet's train a decision tree regressor to predict the price variable. \n\n* Train a model with `max_depth=1`\n\n\nWhich feature is used for splitting the data?\n\n* `room_type`\n* `neighbourhood_group`\n* `number_of_reviews`\n* `reviews_per_month`",
"_____no_output_____"
]
],
[
[
"# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html\nfrom sklearn.tree import DecisionTreeRegressor",
"_____no_output_____"
],
[
"dt = DecisionTreeRegressor(max_depth=1)\ndt.fit(X_train, y_train)",
"_____no_output_____"
],
[
"from sklearn.tree import export_text\nprint(export_text(dt, feature_names=dv.get_feature_names()))",
"|--- room_type=Entire home/apt <= 0.50\n| |--- value: [4.29]\n|--- room_type=Entire home/apt > 0.50\n| |--- value: [5.15]\n\n"
],
[
"from sklearn.tree import plot_tree\nplot_tree(dt, feature_names=dv.get_feature_names())\nplt.show()",
"_____no_output_____"
],
[
"# first node\nfeature_id = dt.tree_.feature[0] # [12, -2, -2]\nfeature_name = dv.get_feature_names()[feature_id] # 'room_type=Entire home/apt'\nmd(f'### Which feature is used for splitting the data?: **{feature_name.split(\"=\")[0]}**')",
"_____no_output_____"
]
],
[
[
"## Question 2\n\nTrain a random forest model with these parameters:\n\n* `n_estimators=10`\n* `random_state=1`\n* `n_jobs=-1` (optional - to make training faster)\n\n\nWhat's the RMSE of this model on validation?\n\n* 0.059\n* 0.259\n* 0.459\n* 0.659",
"_____no_output_____"
]
],
[
[
"# https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html?highlight=randomforest#sklearn.ensemble.RandomForestRegressor\nfrom sklearn.ensemble import RandomForestRegressor\ndef get_rmse(y_pred, y_true):\n mse = ((y_pred - y_true) ** 2).mean()\n return np.sqrt(mse)\n#enddef",
"_____no_output_____"
],
[
"rf = RandomForestRegressor(n_estimators=10, random_state=1) # n_jobs=-1\nrf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"md(f\"### What's the RMSE of this model on validation? : **{get_rmse(rf.predict(X_val), y_val):.4f}**\")\n# 0.4599",
"_____no_output_____"
]
],
[
[
"## Question 3\n\nNow let's experiment with the `n_estimators` parameter\n\n* Try different values of this parameter from 10 to 200 with step 10\n* Set `random_state` to `1`\n* Evaluate the model on the validation dataset\n\n\nAfter which value of `n_estimators` does RMSE stop improving?\n\n- 10\n- 50\n- 70\n- 120",
"_____no_output_____"
]
],
[
[
"with shelve.open(savefile, 'c') as save: \n k = 'rmse01'\n if k not in save:\n rmse_list = {}\n for n in np.linspace(10,200,10).astype(int):\n rf = RandomForestRegressor(n_estimators=n, random_state=1,n_jobs=-1) \n rf.fit(X_train, y_train)\n rmse_list[n] = get_rmse(rf.predict(X_val), y_val)\n print(n,rmse_list[n])\n #endfor\n save[k] = rmse_list\n else:\n rmse_list = save[k]\n #endif\n#endwith ",
"10 0.4598535778342608\n31 0.4449740616858785\n52 0.4419501068021308\n73 0.44090304780000233\n94 0.44008336884350485\n115 0.43934201468419964\n136 0.4391147014994749\n157 0.43891067416530244\n178 0.4389872111168318\n200 0.4389432007046648\n"
],
[
"pd.Series(rmse_list).plot()\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### After which value of n_estimators does RMSE stop improving? **120**",
"_____no_output_____"
],
[
"## Question 4\n\nLet's select the best `max_depth`:\n\n* Try different values of `max_depth`: `[10, 15, 20, 25]`\n* For each of these values, try different values of `n_estimators` from 10 till 200 (with step 10)\n* Fix the random seed: `random_state=1`\n\n\n\nWhat's the best `max_depth`:\n\n* 10\n* 15\n* 20\n* 25",
"_____no_output_____"
]
],
[
[
"rmse_list02 = None\nwith shelve.open(savefile, 'c') as save: \n k = 'rmse02'\n if k not in save:\n rmse_list02 = {}\n for d in [10, 15, 20, 25]:\n rmse_list02[d] = rmse_list02.get(d,{}) # create empty Dictionary if key doesn't exist yet\n for n in np.linspace(10,200,10).astype(int):\n if n not in rmse_list02[d]:\n rf = RandomForestRegressor(n_estimators=n\n ,max_depth=d\n ,random_state=1\n ,n_jobs=-1) # \n rf.fit(X_train, y_train)\n rmse_list02[d][n] = get_rmse(rf.predict(X_val), y_val)\n #endif\n print(d,n,rmse_list02[d][n])\n #endfor\n #endfor\n save[k]= rmse_list02\n else:\n rmse_list02 = save[k]\n #endif\n#endwith",
"10 10 0.445596171749275\n10 31 0.4414986504059841\n10 52 0.4410130339565325\n10 73 0.4407679173594132\n10 94 0.440217940598878\n10 115 0.4399747638876593\n10 136 0.43983174957306237\n10 157 0.4396148965840452\n10 178 0.4396916099421784\n10 200 0.4396792845818297\n15 10 0.4498175486561694\n15 31 0.43945221581370114\n15 52 0.43788522347538045\n15 73 0.43742232511680196\n15 94 0.4367991021783366\n15 115 0.43630885110913603\n15 136 0.4362696416747989\n15 157 0.43608195053576243\n15 178 0.43609089416007457\n15 200 0.4361312812227011\n20 10 0.4597643861421082\n20 31 0.443833611117118\n20 52 0.441103667367273\n20 73 0.4401594310659682\n20 94 0.4391514608761285\n20 115 0.4383862781240854\n20 136 0.43810293573041603\n20 157 0.43773110196271314\n20 178 0.4376731213332132\n20 200 0.43764564430470415\n25 10 0.46070004844483997\n25 31 0.44459511621456377\n25 52 0.4421442478922863\n25 73 0.44131141861554135\n25 94 0.4403994906569277\n25 115 0.439561299949833\n25 136 0.43928048414965926\n25 157 0.4390045858586755\n25 178 0.43900388848297894\n25 200 0.43896329959548797\n"
],
[
"plt.figure(figsize=(6, 4))\nfor d in [10, 15, 20, 25]:\n x = rmse_list02[d].keys()\n y = [rmse_list02[d][n] for n in x]\n plt.plot(x, y, label=f'depth={d}')\n#endfor\nplt.xticks(range(0, 201, 10))\nplt.grid()\nplt.legend()\nplt.xlabel('n_estimators')\nplt.ylabel('rmse')\nplt.show()",
"_____no_output_____"
],
[
"res = { min([rmse_list02[d][n] for n in rmse_list02[d]]) : d for d in rmse_list02 }\nmd(f\"### What's the best `max_depth`? : **{res[sorted(res)[0]]}**\") # 15",
"_____no_output_____"
]
],
[
[
"#### **Bonus question (not graded):**\n\nWill the answer be different if we change the seed for the model?\n\n**Answer**: it should *not*, since n_estimators is sufficiently high (>100).",
"_____no_output_____"
],
[
"## Question 5\n\nWe can extract feature importance information from tree-based models. \n\nAt each step of the decision tree learning algorithm, it finds the best split. \nWhen doing it, we can calculate \"gain\" - the reduction in impurity before and after the split. \nThis gain is quite useful in understanding which features are important \nfor tree-based models.\n\nIn Scikit-Learn, tree-based models contain this information in the\n[`feature_importances_`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html#sklearn.ensemble.RandomForestRegressor.feature_importances_)\nfield. \n\nFor this homework question, we'll find the most important feature:\n\n* Train the model with these parameters:\n * `n_estimators=10`,\n * `max_depth=20`,\n * `random_state=1`,\n * `n_jobs=-1` (optional)\n* Get the feature importance information from this model\n\n\nWhat's the most important feature? \n\n* `neighbourhood_group=Manhattan`\n* `room_type=Entire home/apt`\t\n* `longitude`\n* `latitude`",
"_____no_output_____"
]
],
[
[
"rf = RandomForestRegressor(n_estimators=10\n ,max_depth=20\n ,random_state=1\n ,n_jobs=-1) # \nrf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"importances = list(zip(dv.feature_names_, rf.feature_importances_))\ndf_importance = (\n pd.DataFrame(importances, columns=['feature', 'gain'])\n# [lambda x : x['gain'] > 0]\n .sort_values(by='gain', ascending=False)\n)\nmd(f\"### What's the most important feature? : **{df_importance['feature'].iloc[0]}**\")\n# room_type=Entire home/apt",
"_____no_output_____"
]
],
[
[
"## Question 6\n\nNow let's train an XGBoost model! For this question, we'll tune the `eta` parameter\n\n* Install XGBoost\n* Create DMatrix for train and validation\n* Create a watchlist\n* Train a model with these parameters for 100 rounds:\n\n```\nxgb_params = {\n 'eta': 0.3, \n 'max_depth': 6,\n 'min_child_weight': 1,\n \n 'objective': 'reg:squarederror',\n 'nthread': 8,\n \n 'seed': 1,\n 'verbosity': 1,\n}\n```\n\nNow change `eta` first to `0.1` and then to `0.01`\n\nWhat's the best eta?\n\n* 0.3\n* 0.1\n* 0.01",
"_____no_output_____"
]
],
[
[
"import xgboost as xgb # Install XGBoost\ndef parse_xgb_output(output):\n tree = []\n p_train = []\n p_val = []\n\n for line in output.stdout.strip().split('\\n'):\n it_line, train_line, val_line = line.split('\\t')\n\n it = int(it_line.strip('[]'))\n train = float(train_line.split(':')[1])\n val = float(val_line.split(':')[1])\n\n tree.append(it)\n p_train.append(train)\n p_val.append(val)\n\n return tree, p_train, p_val\n#enddef",
"_____no_output_____"
],
[
"# Create DMatrix for train and validation\ndtrain = xgb.DMatrix(X_train, label=y_train, feature_names=dv.feature_names_)\ndval = xgb.DMatrix(X_val, label=y_val, feature_names=dv.feature_names_)\n# Create a watchlist\nwatchlist = [(dtrain, 'train'), (dval, 'val')]\n# Train a model with these parameters for 100 rounds:\nxgb_params = {\n 'eta': 0.3, \n 'max_depth': 6,\n 'min_child_weight': 1,\n\n 'objective': 'reg:squarederror',\n 'nthread': 8,\n\n 'seed': 1,\n 'verbosity': 1,\n}",
"_____no_output_____"
],
[
"%%capture output\n# capture instruction that saves the result to output \nxgb_params['eta'] = 0.3\nmodel = xgb.train(xgb_params, dtrain,\n num_boost_round=100,\n evals=watchlist, verbose_eval=10)",
"_____no_output_____"
],
[
"tree, p_train, p_val = parse_xgb_output(output)\nprint(f'Eta={xgb_params[\"eta\"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))\n\nplt.figure(figsize=(6, 4))\nplt.plot(tree, p_train, color='black', linestyle='dashed', label='Train Loss')\nplt.plot(tree, p_val, color='black', linestyle='solid', label='Validation Loss')\n# plt.xticks(range(0, 101, 25))\nplt.legend()\nplt.title('XGBoost: number of trees vs \"squarederror\"')\nplt.xlabel('Number of trees')\nplt.ylabel('squarederror')\nplt.yscale('log')\nplt.show()",
"Eta=0.3 : Best performance (squarederror, number of trees) (0.43384, 50)\n"
],
[
"%%capture output_010\n# capture instruction that saves the result to output \nxgb_params['eta'] = 0.1\nmodel = xgb.train(xgb_params, dtrain,\n num_boost_round=100,\n evals=watchlist, verbose_eval=10)",
"_____no_output_____"
],
[
"tree, _, p_val = parse_xgb_output(output_010)\nprint(f'Eta={xgb_params[\"eta\"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))",
"Eta=0.1 : Best performance (squarederror, number of trees) (0.4325, 99)\n"
],
[
"%%capture output_001\n# capture instruction that saves the result to output \nxgb_params['eta'] = 0.01\nmodel = xgb.train(xgb_params, dtrain,\n num_boost_round=100,\n evals=watchlist, verbose_eval=10)",
"_____no_output_____"
],
[
"tree, _, p_val = parse_xgb_output(output_001)\nprint(f'Eta={xgb_params[\"eta\"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))",
"Eta=0.01 : Best performance (squarederror, number of trees) (1.63045, 99)\n"
],
[
"plt.figure(figsize=(6, 4))\nfor eta, out in zip([0.3,0.1,0.01],[output,output_010,output_001]):\n tree, _, p_val = parse_xgb_output(out)\n #plt.plot(tree, p_train, color='black', linestyle='dashed', label='eta=eta, Train Loss')\n plt.plot(tree, p_val, linestyle='solid', label=f'eta={eta}')\n print(f'Eta={eta} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))\n \n# plt.xticks(range(0, 101, 25))\nplt.legend()\nplt.title('XGBoost: number of trees vs Validation \"squarederror\"')\nplt.xlabel('Number of trees')\nplt.ylabel('Validation \"squarederror\"')\n# plt.yscale('log')\nplt.ylim(0.43,0.46)\nplt.grid()\nplt.show()",
"Eta=0.3 : Best performance (squarederror, number of trees) (0.43384, 50)\nEta=0.1 : Best performance (squarederror, number of trees) (0.4325, 99)\nEta=0.01 : Best performance (squarederror, number of trees) (1.63045, 99)\n"
],
[
"md(\"### What's the best eta? **0.1**\")",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5ddbd9f20caa4bd93003265d35ed7c62488676
| 48,489 |
ipynb
|
Jupyter Notebook
|
simulation/simulation_plotting.ipynb
|
jcooper036/tri_hybid_mapping
|
a4a0aebcf1a1fb3773b1b402a25635b53004856a
|
[
"MIT"
] | null | null | null |
simulation/simulation_plotting.ipynb
|
jcooper036/tri_hybid_mapping
|
a4a0aebcf1a1fb3773b1b402a25635b53004856a
|
[
"MIT"
] | null | null | null |
simulation/simulation_plotting.ipynb
|
jcooper036/tri_hybid_mapping
|
a4a0aebcf1a1fb3773b1b402a25635b53004856a
|
[
"MIT"
] | null | null | null | 591.329268 | 46,868 | 0.951474 |
[
[
[
"import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(10, 8))",
"_____no_output_____"
],
[
"df = pd.read_csv('/Volumes/Jacob_2TB_storage/sim_sec_recombination_mapping/genomics_scripts/analysis/simulation_out.csv')\n\n",
"_____no_output_____"
],
[
"plt.plot(df['position'], df['mel'], color = 'blue')\nplt.plot(df['position'], df['sim'], color = 'orange')\nplt.plot(df['position'], df['sec'], color = 'red')\nplt.ylim(0,1)\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4a5de308d4e05d718d14ecfcc1868892376d4b47
| 150,003 |
ipynb
|
Jupyter Notebook
|
examples/machine_learning/logistic_regression.ipynb
|
TShimko126/cvxpy
|
8b89b3f8ef7daba1db39f5029e4902f06c75b29f
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null |
examples/machine_learning/logistic_regression.ipynb
|
TShimko126/cvxpy
|
8b89b3f8ef7daba1db39f5029e4902f06c75b29f
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null |
examples/machine_learning/logistic_regression.ipynb
|
TShimko126/cvxpy
|
8b89b3f8ef7daba1db39f5029e4902f06c75b29f
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null | 38.432744 | 323 | 0.497403 |
[
[
[
"# Logistic regression with $\\ell_1$ regularization\n\nIn this example, we use CVXPY to train a logistic regression classifier with $\\ell_1$ regularization. We are given data $(x_i,y_i)$, $i=1,\\ldots, m$. The $x_i \\in {\\bf R}^n$ are feature vectors, while the $y_i \\in \\{0, 1\\}$ are associated boolean classes; we assume the first component of each $x_i$ is $1$.\n\nOur goal is to construct a linear classifier $\\hat y = \\mathbb{1}[\\beta^T x > 0]$, which is $1$ when $\\beta^T x$ is positive and $0$ otherwise. We model the posterior probabilities of the classes given the data linearly, with\n\n$$\n\\log \\frac{\\mathrm{Pr} (Y=1 \\mid X = x)}{\\mathrm{Pr} (Y=0 \\mid X = x)} = \\beta^T x.\n$$\n\nThis implies that\n\n$$\n\\mathrm{Pr} (Y=1 \\mid X = x) = \\frac{\\exp(\\beta^T x)}{1 + \\exp(\\beta^T x)}, \\quad\n\\mathrm{Pr} (Y=0 \\mid X = x) = \\frac{1}{1 + \\exp(\\beta^T x)}.\n$$\n\nWe fit $\\beta$ by maximizing the log-likelihood of the data, plus a regularization term $\\lambda \\|{\\beta_{1:}}\\|_1$ with $\\lambda > 0$:\n\n$$\n\\ell(\\beta) = \\sum_{i=1}^{m} y_i \\beta^T x_i - \\log(1 + \\exp (\\beta^T x_i)) - \\lambda \\|{\\beta_{1:}}\\|_1.\n$$\n\nBecause $\\ell$ is a concave function of $\\beta$, this is a convex optimization problem.\n\n",
"_____no_output_____"
]
],
[
[
"from __future__ import division\nimport cvxpy as cp\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"In the following code we generate data with $n=20$ features by randomly choosing $x_i$ and a sparse $\\beta_{\\mathrm{true}} \\in {\\bf R}^n$.\nWe then set $y_i = \\mathbb{1}[\\beta_{\\mathrm{true}}^T x_i - z_i > 0]$, where the $z_i$ are i.i.d. normal random variables.\nWe divide the data into training and test sets with $m=1000$ examples each.",
"_____no_output_____"
]
],
[
[
"np.random.seed(1)\nn = 20\nm = 1000\ndensity = 0.2\nbeta_true = np.random.randn(n,1)\nidxs = np.random.choice(range(n), int((1-density)*n), replace=False)\nfor idx in idxs:\n beta_true[idx] = 0\n\nsigma = 45\nX = np.random.normal(0, 5, size=(m,n))\nX[:, 0] = 1.0\nY = X @ beta_true + np.random.normal(0, sigma, size=(m,1))\nY[Y > 0] = 1\nY[Y <= 0] = 0\n\nX_test = np.random.normal(0, 5, size=(m, n))\nX_test[:, 0] = 1.0\nY_test = X_test @ beta_true + np.random.normal(0, sigma, size=(m,1))\nY_test[Y_test > 0] = 1\nY_test[Y_test <= 0] = 0",
"_____no_output_____"
]
],
[
[
"We next formulate the optimization problem using CVXPY.",
"_____no_output_____"
]
],
[
[
"beta = cp.Variable((n,1))\nlambd = cp.Parameter(nonneg=True)\nlog_likelihood = cp.sum(\n cp.reshape(cp.multiply(Y, X @ beta), (m,)) -\n cp.log_sum_exp(cp.hstack([np.zeros((m,1)), X @ beta]), axis=1) - \n lambd * cp.norm(beta[1:], 1)\n)\nproblem = cp.Problem(cp.Maximize(log_likelihood))",
"_____no_output_____"
]
],
[
[
"We solve the optimization problem for a range of $\\lambda$ to compute a trade-off curve.\nWe then plot the train and test error over the trade-off curve.\nA reasonable choice of $\\lambda$ is the value that minimizes the test error.",
"_____no_output_____"
]
],
[
[
"def error(scores, labels):\n scores[scores > 0] = 1\n scores[scores <= 0] = 0\n return np.sum(np.abs(scores - labels)) / float(np.size(labels))",
"_____no_output_____"
],
[
"trials = 100\ntrain_error = np.zeros(trials)\ntest_error = np.zeros(trials)\nlambda_vals = np.logspace(-2, 0, trials)\nbeta_vals = []\nfor i in range(trials):\n lambd.value = lambda_vals[i]\n problem.solve()\n train_error[i] = error(X @ beta.value, Y)\n test_error[i] = error(X_test @ beta.value, Y_test)\n beta_vals.append(beta.value)",
"_____no_output_____"
],
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\n\nplt.plot(lambda_vals, train_error, label=\"Train error\")\nplt.plot(lambda_vals, test_error, label=\"Test error\")\nplt.xscale('log')\nplt.legend(loc='upper left')\nplt.xlabel(r\"$\\lambda$\", fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We also plot the regularization path, or the $\\beta_i$ versus $\\lambda$. Notice that \na few features remain non-zero longer for larger $\\lambda$ than the rest, which suggests that these features are the most important. ",
"_____no_output_____"
]
],
[
[
"for i in range(n):\n plt.plot(lambda_vals, [wi[i,0] for wi in beta_vals])\nplt.xlabel(r\"$\\lambda$\", fontsize=16)\nplt.xscale(\"log\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5de561308e39c8c15d3864fab5fb0299a4d4df
| 26,768 |
ipynb
|
Jupyter Notebook
|
site/ru/beta/tutorials/keras/basic_regression.ipynb
|
alphamuth/docs
|
d0d8e659cd61599cec08428d0447e0c817e3607c
|
[
"Apache-2.0"
] | 4 |
2019-08-20T11:59:23.000Z
|
2020-01-12T13:42:50.000Z
|
site/ru/beta/tutorials/keras/basic_regression.ipynb
|
alphamuth/docs
|
d0d8e659cd61599cec08428d0447e0c817e3607c
|
[
"Apache-2.0"
] | null | null | null |
site/ru/beta/tutorials/keras/basic_regression.ipynb
|
alphamuth/docs
|
d0d8e659cd61599cec08428d0447e0c817e3607c
|
[
"Apache-2.0"
] | 1 |
2020-06-05T08:31:20.000Z
|
2020-06-05T08:31:20.000Z
| 31.234539 | 479 | 0.524245 |
[
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Regression: Predict fuel efficiency",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/beta/tutorials/keras/basic_regression\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/ru/beta/tutorials/keras/basic_regression.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"In a *regression* problem, we aim to predict a continuous value, such as a price or a probability. Contrast this with a *classification* problem, where we need to select a specific category from a limited list (for example, whether a picture contains an apple or an orange: recognizing which fruit is in the image).\n\nThis tutorial uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model that predicts the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we'll provide the model with descriptions of many automobiles from that time period. These descriptions include attributes such as cylinders, horsepower, displacement, and weight.\n\nThis example uses the tf.keras API; see [this guide](https://www.tensorflow.org/guide/keras) for details.",
"_____no_output_____"
]
],
[
[
"# Install the seaborn library for pairwise plots\n!pip install seaborn",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\n!pip install tensorflow==2.0.0-beta1\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## The Auto MPG dataset\n\nThe dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).\n\n",
"_____no_output_____"
],
[
"### Get the data\nFirst, download the dataset.",
"_____no_output_____"
]
],
[
[
"dataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\")\ndataset_path",
"_____no_output_____"
]
],
[
[
"Import it using the pandas library:",
"_____no_output_____"
]
],
[
[
"column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n 'Acceleration', 'Model Year', 'Origin']\nraw_dataset = pd.read_csv(dataset_path, names=column_names,\n na_values = \"?\", comment='\\t',\n sep=\" \", skipinitialspace=True)\n\ndataset = raw_dataset.copy()\ndataset.tail()",
"_____no_output_____"
]
],
[
[
"### Clean the data\n\nThe dataset contains a few unknown values.",
"_____no_output_____"
]
],
[
[
"dataset.isna().sum()",
"_____no_output_____"
]
],
[
[
"To keep this tutorial simple, drop those rows.",
"_____no_output_____"
]
],
[
[
"dataset = dataset.dropna()",
"_____no_output_____"
]
],
[
[
"The `\"Origin\"` column is really categorical, not numeric. So convert it to a one-hot encoding:",
"_____no_output_____"
]
],
[
[
"origin = dataset.pop('Origin')",
"_____no_output_____"
],
[
"dataset['USA'] = (origin == 1)*1.0\ndataset['Europe'] = (origin == 2)*1.0\ndataset['Japan'] = (origin == 3)*1.0\ndataset.tail()",
"_____no_output_____"
]
],
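The one-hot conversion above can also be written generically with numpy broadcasting; a small self-contained sketch (the `origin` values here are made up for illustration):

```python
import numpy as np

origin = np.array([1, 3, 2, 1])          # 1 = USA, 2 = Europe, 3 = Japan
categories = np.array([1, 2, 3])
# compare every value against every category: one column per category
one_hot = (origin[:, None] == categories[None, :]) * 1.0
```

Each row contains exactly one 1.0, in the column of that row's category.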
[
[
"### Split the data into train and test\n\nNow split the dataset into a training set and a test set.\n\nWe will use the test set in the final evaluation of our model.",
"_____no_output_____"
]
],
[
[
"train_dataset = dataset.sample(frac=0.8,random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)",
"_____no_output_____"
]
],
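`DataFrame.sample(frac=0.8)` followed by `drop` gives two disjoint subsets. The same idea with a plain numpy permutation (the array size is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
idx = rng.permutation(n)            # shuffled row indices
n_train = int(0.8 * n)
train_idx, test_idx = idx[:n_train], idx[n_train:]
```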
[
[
"### Inspect the data\n\nHave a quick look at the joint distribution of a few pairs of columns from the training set:",
"_____no_output_____"
]
],
[
[
"sns.pairplot(train_dataset[[\"MPG\", \"Cylinders\", \"Displacement\", \"Weight\"]], diag_kind=\"kde\")",
"_____no_output_____"
]
],
[
[
"Also look at the overall statistics:",
"_____no_output_____"
]
],
[
[
"train_stats = train_dataset.describe()\ntrain_stats.pop(\"MPG\")\ntrain_stats = train_stats.transpose()\ntrain_stats",
"_____no_output_____"
]
],
[
[
"### Split features from labels\n\nSeparate the target values, or \"labels\", from the features. These labels are the values that you will train the model to predict.",
"_____no_output_____"
]
],
[
[
"train_labels = train_dataset.pop('MPG')\ntest_labels = test_dataset.pop('MPG')",
"_____no_output_____"
]
],
[
[
"### Normalize the data\n\nLook again at the `train_stats` block above, and note how different the ranges of each feature are.",
"_____no_output_____"
],
[
"It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, training becomes more difficult, and the resulting model depends on the choice of units used for the inputs.\n\nNote: we intentionally generate these statistics from the training set only, and the same statistics will be used to normalize the test set. We need to do this so that the test set comes from the same distribution the model was trained on.",
"_____no_output_____"
]
],
[
[
"def norm(x):\n return (x - train_stats['mean']) / train_stats['std']\nnormed_train_data = norm(train_dataset)\nnormed_test_data = norm(test_dataset)",
"_____no_output_____"
]
],
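The `norm` function above is plain z-score standardization. A quick numpy check of the idea (the data is made up; note that pandas' `.describe()` reports the sample standard deviation, i.e. `ddof=1`):

```python
import numpy as np

x = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
mean, std = x.mean(), x.std(ddof=1)   # sample std, as in train_stats
z = (x - mean) / std                  # standardized feature
```

After standardization the feature has mean 0 and sample standard deviation 1.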
[
[
"We will use this normalized data to train the model.\n\nCaution: the statistics used to normalize the inputs (mean and standard deviation) must be applied to any other data fed to the model. The same goes for the one-hot encoding we did earlier. The transformations must be applied to the test data as well as to live data when the model is used in production.",
"_____no_output_____"
],
[
"## The model",
"_____no_output_____"
],
[
"### Build the model\n\nLet's build our model. We will use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single continuous value. The model-building steps are wrapped in a `build_model` function, since we will create a second model later.",
"_____no_output_____"
]
],
[
[
"def build_model():\n model = keras.Sequential([\n layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),\n layers.Dense(64, activation='relu'),\n layers.Dense(1)\n ])\n\n optimizer = tf.keras.optimizers.RMSprop(0.001)\n\n model.compile(loss='mse',\n optimizer=optimizer,\n metrics=['mae', 'mse'])\n return model",
"_____no_output_____"
],
[
"model = build_model()",
"_____no_output_____"
]
],
[
[
"### Inspect the model\n\nUse the `.summary` method to print a simple description of the model.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"\nNow try out the model. Take a batch of `10` examples from the training data and call `model.predict` on them.",
"_____no_output_____"
]
],
[
[
"example_batch = normed_train_data[:10]\nexample_result = model.predict(example_batch)\nexample_result",
"_____no_output_____"
]
],
[
[
"It seems to be working: the model produces a result of the expected shape and type.",
"_____no_output_____"
],
[
"### Train the model\n\nTrain the model for 1000 epochs, and record the training and validation accuracy in the `history` object.",
"_____no_output_____"
]
],
[
[
"# Display training progress by printing a single dot for each completed epoch\nclass PrintDot(keras.callbacks.Callback):\n  def on_epoch_end(self, epoch, logs):\n    if epoch % 100 == 0: print('')\n    print('.', end='')\n\nEPOCHS = 1000\n\nhistory = model.fit(\n  normed_train_data, train_labels,\n  epochs=EPOCHS, validation_split = 0.2, verbose=0,\n  callbacks=[PrintDot()])",
"_____no_output_____"
]
],
[
[
"Visualize the model's training progress using the stats stored in the `history` object.",
"_____no_output_____"
]
],
[
[
"hist = pd.DataFrame(history.history)\nhist['epoch'] = history.epoch\nhist.tail()",
"_____no_output_____"
],
[
"def plot_history(history):\n hist = pd.DataFrame(history.history)\n hist['epoch'] = history.epoch\n\n plt.figure()\n plt.xlabel('Epoch')\n plt.ylabel('Mean Abs Error [MPG]')\n plt.plot(hist['epoch'], hist['mae'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mae'],\n label = 'Val Error')\n plt.ylim([0,5])\n plt.legend()\n\n plt.figure()\n plt.xlabel('Epoch')\n plt.ylabel('Mean Square Error [$MPG^2$]')\n plt.plot(hist['epoch'], hist['mse'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mse'],\n label = 'Val Error')\n plt.ylim([0,20])\n plt.legend()\n plt.show()\n\n\nplot_history(history)",
"_____no_output_____"
]
],
[
[
"This graph shows little improvement, or even degradation, in the validation error after about 100 epochs of training. Let's update the `model.fit` call to automatically stop training when the validation loss stops improving. For this we use an *EarlyStopping callback*, which checks a training condition after every epoch. If a set number of epochs elapses without improvement, it automatically stops the training.\n\nYou can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).",
"_____no_output_____"
]
],
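The early-stopping rule itself is simple: track the best validation loss seen so far, and stop once it has not improved for `patience` consecutive epochs. A minimal pure-Python sketch of that rule (the loss sequence is made up; the real callback also supports options like `min_delta` and restoring best weights):

```python
def early_stop_epoch(val_losses, patience):
    """Return the epoch at which training would stop, or the last epoch."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:      # improvement: reset the counter
            best = loss
            wait = 0
        else:                # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

losses = [5.0, 4.0, 3.5, 3.6, 3.7, 3.8, 3.9]
stop = early_stop_epoch(losses, patience=3)  # -> 5
```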
[
[
"model = build_model()\n\n# The patience parameter is the number of epochs to check for improvement\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory = model.fit(normed_train_data, train_labels, epochs=EPOCHS,\n                    validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])\n\nplot_history(history)",
"_____no_output_____"
]
],
[
[
"The graph shows that on the validation data, the average error is about 2 MPG. Is this good? We'll leave that decision up to you.\n\nLet's see how the model performs on the **test** set, which we did not use when training the model. This check shows what result to expect from the model when we use it in the real world.",
"_____no_output_____"
]
],
[
[
"loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)\n\nprint(\"Testing set Mean Abs Error: {:5.2f} MPG\".format(mae))",
"_____no_output_____"
]
],
[
[
"### Make predictions\n\nFinally, predict MPG values using data from the test set:",
"_____no_output_____"
]
],
[
[
"test_predictions = model.predict(normed_test_data).flatten()\n\nplt.scatter(test_labels, test_predictions)\nplt.xlabel('True Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nplt.axis('equal')\nplt.axis('square')\nplt.xlim([0,plt.xlim()[1]])\nplt.ylim([0,plt.ylim()[1]])\n_ = plt.plot([-100, 100], [-100, 100])\n",
"_____no_output_____"
]
],
[
[
"It looks like our model predicts reasonably well. Let's take a look at the error distribution.",
"_____no_output_____"
]
],
[
[
"error = test_predictions - test_labels\nplt.hist(error, bins = 25)\nplt.xlabel(\"Prediction Error [MPG]\")\n_ = plt.ylabel(\"Count\")",
"_____no_output_____"
]
],
[
[
"It's not quite Gaussian, but we might expect that because the number of examples is very small.",
"_____no_output_____"
],
[
"## Conclusion\n\nThis notebook introduced a few techniques for handling a regression problem.\n\n* Mean squared error (MSE) is a common loss function for regression problems (different loss functions are used for classification).\n* Similarly, evaluation metrics for regression differ from those used in classification. A common regression metric is mean absolute error (MAE).\n* When numeric input features have values in different ranges, each feature should be scaled independently to the same range.\n* If there is not much training data, use a small network with few hidden layers. This helps avoid overfitting.\n* Early stopping is a very useful technique for avoiding overfitting.",
"_____no_output_____"
]
]
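The two regression metrics named in the conclusion are easy to state directly; a tiny numpy check (the arrays are made up):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])
mae = np.mean(np.abs(y_true - y_pred))   # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)    # mean squared error
```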
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a5ded0b986ad084755d6c3d23d728de073f5f41
| 10,331 |
ipynb
|
Jupyter Notebook
|
autoencoder/dg_y_autoencoder.ipynb
|
rn-unison/rn-jupyter
|
a7c03740498e3142645b3a2bf9d8f29de0b86902
|
[
"MIT"
] | null | null | null |
autoencoder/dg_y_autoencoder.ipynb
|
rn-unison/rn-jupyter
|
a7c03740498e3142645b3a2bf9d8f29de0b86902
|
[
"MIT"
] | null | null | null |
autoencoder/dg_y_autoencoder.ipynb
|
rn-unison/rn-jupyter
|
a7c03740498e3142645b3a2bf9d8f29de0b86902
|
[
"MIT"
] | 3 |
2019-02-04T22:05:19.000Z
|
2020-03-14T16:41:23.000Z
| 37.567273 | 584 | 0.535863 |
[
[
[
"<img src=\"imagenes/rn3.png\" width=\"200\">\n<img src=\"http://www.identidadbuho.uson.mx/assets/letragrama-rgb-150.jpg\" width=\"200\">",
"_____no_output_____"
],
[
"# [Neural Networks Course](https://rn-unison.github.io)\n\n# Multilayer neural networks and the *b-prop* algorithm\n\n[**Julio Waissman Vilanova**](http://mat.uson.mx/~juliowaissman/), February 27, 2019.\n\nIn this notebook we will practice with the different variations of the gradient descent method used to train deep neural networks. This is not a tutorial notebook (for now; a second version may be). So we will point to tutorials and original articles for the algorithms. Sebastian Ruder wrote [this tutorial, which I find very good](http://ruder.io/optimizing-gradient-descent/index.html). It is clear, concise, and well referenced in case you want more detail. We will base this notebook on that tutorial.\n\nWe will also use the same notebook to build and review how *autoencoders* work. Autoencoders are very important because they give the intuition needed to introduce convolutional networks, and because they show the power of sharing parameters across different parts of a distributed architecture.\n\nLet's start by importing the modules we will need.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize'] = (16,8)\nplt.style.use('ggplot')",
"_____no_output_____"
]
],
[
[
"## 1. Defining a neural network with a fixed architecture\n\nSince the definition of a neural network, *f-prop*, and *b-prop* were already covered in another notebook, we will initialize a simple neural network that has:\n\n1. An autoencoder stage (used for two words)\n2. A hidden layer with ReLU activation\n3. An output layer with a single logistic neuron (binary classification problem)\n\nWe denote the number of autoencoder outputs by $n_a$, and the number of ReLU units in the hidden layer by $n_h$.\n\nBelow we add code cells for\n\n1. Weight initialization\n2. Prediction (feed forward)\n3. The *b-prop* algorithm (computing $\\delta^{(1)}$ and $\\delta^{(2)}$)\n\nWhile this is fairly standard, a few design choices were made, which are highlighted further on.",
"_____no_output_____"
]
],
[
[
"def inicializa_red(n_v, n_a, n_h):\n    \"\"\"\n    Initialize the neural network\n    \n    Parameters\n    ----------\n    n_v : int, the number of words in the vocabulary\n    n_a : int, the number of features of the autoencoder\n    n_h : int, the number of ReLU units in the hidden layer\n    \n    Returns\n    -------\n    W = [W_a, W_h, W_o] list with the weight matrices\n    B = [b_h, b_o] list with the biases\n    \n    \"\"\"\n    \n    np.random.seed(0)  # Only for reproducibility\n    \n    W_a = np.random.randn(n_v, n_a)\n    W_h = np.random.randn(n_h, 2 * n_a)\n    W_o = np.random.randn(1, n_h)\n    \n    b_h = np.random.randn(n_h,1)\n    b_o = np.sqrt(n_h) * (2 * np.random.rand() - 1.0)\n    \n    return [W_a, W_h, W_o], [b_h, b_o]\n    \n",
"_____no_output_____"
],
[
"def relu(A): \n    \"\"\"\n    Compute the ReLU activation of a matrix of activations A\n    \n    \"\"\"\n    return np.maximum(A, 0)\n\ndef logistica(a):\n    \"\"\"\n    Compute the logistic function of a\n    \n    \"\"\"\n    return 1. / (1. + np.exp(-a))\n\n\ndef feedforward(X, vocab, W, b):\n    \"\"\"\n    Compute the activations of the units of the neural network\n    \n    Parameters\n    ----------\n    X: an ndarray [-1, 2], dtype='str', with two words per example\n    vocab: a list with the ordered words of the vocabulary to use\n    W: list with the weight matrices (see inicializa_red for more info)\n    b: list with the bias vectors (see inicializa_red for more info)\n    \"\"\"\n    W_a, W_h, W_o = W\n    b_h, b_o = b\n\n    # Words not in the vocabulary map to index -1, i.e. the last\n    # vocabulary entry ('OOV')\n    one_hot_1 = [vocab.index(x_i) if x_i in vocab else -1 for x_i in X[:,0]]\n    one_hot_2 = [vocab.index(x_i) if x_i in vocab else -1 for x_i in X[:,1]]\n\n    activacion_z = np.array([one_hot_1, one_hot_2])\n    \n    activacion_a = np.r_[W_a[one_hot_1, :].T, \n                         W_a[one_hot_2, :].T]\n\n    activacion_h = relu(W_h @ activacion_a + b_h)\n    activacion_o = logistica(W_o @ activacion_h + b_o)\n    \n    return [activacion_z, activacion_a, activacion_h, activacion_o]",
"_____no_output_____"
]
],
[
[
"#### Exercise: Work a small example by hand, print the activations, and check them against the results you obtained manually\n\nAn example is added below without the manual calculation. It might be better to set W and b by hand to simplify the calculations, and to use fewer examples.",
"_____no_output_____"
]
],
[
[
"vocab = ['a', 'e', 'ei', 'ti', 'tu', 'ya', 'ye', 'toto', 'tur', 'er', 'OOV']\n\nX = np.array([\n ['a', 'a'],\n ['e', 'tu'],\n ['ti', 'ya'],\n ['er', 'ye'],\n ['a', 'a'],\n ['e', 'tu']\n])\n\nn_v, n_a, n_h = len(vocab), 5, 7\nW, b = inicializa_red(n_v, n_a, n_h)\nA = feedforward(X, vocab, W, b)\n\nprint(\"Codificación 'one hot': \\n\", A[0])\nprint(\"Autocodificador: \\n\", A[1])\nprint(\"Activacion capa oculta:\\n\", A[2])\nprint(\"Salidas:\\n\", A[3])\n\nassert np.all(A[0][:, 1] == A[0][:, -1]) and np.all(A[0][:, 0] == A[0][:, -2])\nassert np.all(A[1][:, 1] == A[1][:, -1]) and np.all(A[1][:, 0] == A[1][:, -2])\nassert np.all(A[2][:, 1] == A[2][:, -1]) and np.all(A[2][:, 0] == A[2][:, -2])",
"Codificación 'one hot': \n [[0 1 3 9 0 1]\n [0 4 5 6 0 4]]\nAutocodificador: \n [[ 1.76405235 -0.97727788 0.33367433 -0.4380743 1.76405235 -0.97727788]\n [ 0.40015721 0.95008842 1.49407907 -1.25279536 0.40015721 0.95008842]\n [ 0.97873798 -0.15135721 -0.20515826 0.77749036 0.97873798 -0.15135721]\n [ 2.2408932 -0.10321885 0.3130677 -1.61389785 2.2408932 -0.10321885]\n [ 1.86755799 0.4105985 -0.85409574 -0.21274028 1.86755799 0.4105985 ]\n [ 1.76405235 -2.55298982 -1.45436567 0.15494743 1.76405235 -2.55298982]\n [ 0.40015721 0.6536186 0.04575852 0.37816252 0.40015721 0.6536186 ]\n [ 0.97873798 0.8644362 -0.18718385 -0.88778575 0.97873798 0.8644362 ]\n [ 2.2408932 -0.74216502 1.53277921 -1.98079647 2.2408932 -0.74216502]\n [ 1.86755799 2.26975462 1.46935877 -0.34791215 1.86755799 2.26975462]]\nActivacion capa oculta:\n [[ 0. 1.30377837 0. 4.2301833 0. 1.30377837]\n [ 0. 0. 0. 5.51935272 0. 0. ]\n [ 0. 7.83296367 0. 3.06781597 0. 7.83296367]\n [ 6.72215668 2.60667548 4.49524539 0. 6.72215668 2.60667548]\n [ 6.13088619 0. 0. 0.75006167 6.13088619 0. ]\n [15.99118816 0. 0. 0. 15.99118816 0. ]\n [ 7.32841956 0. 1.87432705 0. 7.32841956 0. ]]\nSalidas:\n [[0.28211383 0.99999993 0.98835902 0.96619412 0.28211383 0.99999993]]\n"
],
[
"def deriv_relu(a):\n    \"\"\"\n    Derivative of the ReLU activation, evaluated elementwise\n    \n    \"\"\"\n    return np.where(a > 0.0, 1.0, 0.0)\n\ndef b_prop(A, Y, W):\n    \"\"\"\n    Backpropagation: returns the gradients of the log-likelihood\n    (hence the Y - output error convention) w.r.t. weights and biases\n    \n    \"\"\"\n    W_a, W_h, W_o = W\n    activacion_z, activacion_a, activacion_h, activacion_o = A\n    n_a = W_a.shape[1]\n    \n    delta_o = Y.reshape(1, -1) - activacion_o               # (1, N)\n    delta_h = deriv_relu(activacion_h) * (W_o.T @ delta_o)  # (n_h, N)\n    delta_a = W_h.T @ delta_h                               # (2*n_a, N)\n    \n    gradiente_W_o = delta_o @ activacion_h.T                # same shape as W_o\n    gradiente_W_h = delta_h @ activacion_a.T                # same shape as W_h\n    \n    gradiente_b_o = delta_o.sum()\n    gradiente_b_h = delta_h.sum(axis=1, keepdims=True)      # (n_h, 1)\n    \n    # Accumulate the autoencoder gradient row by row, using the word\n    # indices stored in activacion_z (shared parameters get summed gradients)\n    gradiente_W_a = np.zeros_like(W_a)\n    for j in range(delta_a.shape[1]):\n        gradiente_W_a[activacion_z[0, j], :] += delta_a[:n_a, j]\n        gradiente_W_a[activacion_z[1, j], :] += delta_a[n_a:, j]\n    \n    return [gradiente_W_a, gradiente_W_h, gradiente_W_o], [gradiente_b_h, gradiente_b_o]",
"_____no_output_____"
]
]
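A standard sanity check for any backprop implementation is to compare analytic derivatives against finite differences. Here is a minimal check for the ReLU derivative alone, redefined so the sketch is self-contained (checking the full `b_prop` the same way would loop over every weight):

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0)

def deriv_relu(a):
    return np.where(a > 0.0, 1.0, 0.0)

# avoid 0, where ReLU is not differentiable
a = np.array([-2.0, -0.5, 0.7, 3.0])
eps = 1e-6
numeric = (relu(a + eps) - relu(a - eps)) / (2 * eps)  # central difference
analytic = deriv_relu(a)
```

The numeric and analytic derivatives should agree to within the finite-difference tolerance.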
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a5deffa77ddc6a730d02b0e8bc66c553a728ea6
| 8,293 |
ipynb
|
Jupyter Notebook
|
20 April - Introduction to Machine Learning (part 2).ipynb
|
arunk-vnk-chn/insaid-interview-questions
|
b6e4806fd99caaf0bbef595669e2cd8f657c59a4
|
[
"CC0-1.0"
] | null | null | null |
20 April - Introduction to Machine Learning (part 2).ipynb
|
arunk-vnk-chn/insaid-interview-questions
|
b6e4806fd99caaf0bbef595669e2cd8f657c59a4
|
[
"CC0-1.0"
] | null | null | null |
20 April - Introduction to Machine Learning (part 2).ipynb
|
arunk-vnk-chn/insaid-interview-questions
|
b6e4806fd99caaf0bbef595669e2cd8f657c59a4
|
[
"CC0-1.0"
] | null | null | null | 35.900433 | 371 | 0.577957 |
[
[
[
"<a href=\"https://colab.research.google.com/github/arunk-vnk-chn/insaid-interview-questions/blob/master/20%20April%20-%20Introduction%20to%20Machine%20Learning%20(part%202).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"\n# 20 April – Introduction to Machine Learning (part 2)",
"_____no_output_____"
],
[
"**1.\tTop 100 data science interview questions link (could cover 60-70% of questions asked in interviews)**\n* http://nitin-panwar.github.io/Top-100-Data-science-interview-questions/\n\n* https://www.edureka.co/blog/interview-questions/data-science-interview-questions/\n",
"_____no_output_____"
],
[
"**2.\tTypical interview process**\n* Round 1: Programming test using hackerrank or codility (may be skipped for general roles like data scientist or decision scientist, but very frequent for ML engineer, data science developer etc.)\n* Round 2: Case study on a business problem (very typical for a data scientist role)\n* Round 3: Detailed interview with senior data scientist",
"_____no_output_____"
],
[
"**3.\tWhat is the data science workflow, or life cycle of ML projects, or how do we execute a ML project end to end?**\n\n* It is important to adhere to the below flow strictly while doing any ML project.\n\n* It is a cyclic process, where the project moves across different phases until the business requirement is satisfied.\n\n\n",
"_____no_output_____"
],
[
"**4.\tGood question to ask interviewer: Does the company/department work majorly on POC (proof of concept) projects or actual production projects? What are the projects or products being developed?**\n*\tFor a good career progression, it is suggested to join a company where production projects are happening. As POC projects only happen for 3 to 6 months and don’t proceed further. There is more learning in end to end projects. Most end to end projects are happening at internet companies / start-ups. However they could pose a challenge for work-life balance.",
"_____no_output_____"
],
[
"**5.\tWhat is a data lake?**\n*\tA data lake is a single representation of data from different sources, which enables any analytical or data science cycle to be carried out over it. Building one typically involves data engineers.\n",
"_____no_output_____"
],
[
"\n\n**6.\tWhat is hypothesis testing?**\n*\tNoting down rules business domain suggests or establishes to make a prediction and then checking if the data supports that or not.\n\n",
"_____no_output_____"
],
[
"**7.\tWhat is EDA (exploratory data analysis)?**\n*\tTrying to obtain different insights from the data, generally done by trying different graphs to identify relationships between variables. This may be part of the hypothesis given by the business or something new figured by analysing the data. EDA has no limited scope and is open-ended.",
"_____no_output_____"
],
[
"**8.\tIs it preferable to do a prediction with a lower number of features or higher?**\n*\tThe fewer the better, because fewer features mean less dependency on different inputs. The feature set has to be optimal, ensuring that no important information is ignored and that not too much noise is added.",
"_____no_output_____"
],
[
"**9.\tWhat is feature engineering?**\n*\tCreating relevant features or deriving KPIs by a mathematical combination of multiple columns. This helps optimise the number of features required for prediction.\n",
"_____no_output_____"
],
[
"**10.\tCan feature selection be automated?**\n*\tYes and no. It is a process which requires human intervention, however there are some tools which can automate some parts of the process.\n",
"_____no_output_____"
],
[
"**11.\tWhat is the difference between regression and classification?**\n*\tIn supervised learning, regression is when we are predicting a continuous number (uncountable) while classification is when we are trying to assign a discrete category (countable).\n",
"_____no_output_____"
],
[
"**12.\tWhat is regression?**\n*\tA predictive modelling technique which investigates the relationship between a dependent variable and one or more independent variables.\n",
"_____no_output_____"
],
[
"**13.\tWhat is linear regression?**\n*\tA regression in which the relationship between the dependent variable and all the independent variables is of the first degree.\n\n",
"_____no_output_____"
],
[
"**14.\tWhen is a model said to be robust or flexible?**\n*\tA model is robust or flexible if it can adapt well to new data. Therefore it has to be optimum and not over-fit the current data. Flexibility also supports easy interpretability to business.\n",
"_____no_output_____"
],
[
"**15.\tWhat is the difference between a model and an algorithm?**\n*\tAn algorithm is a set of steps (for example, a training procedure), while a model is the artifact produced when an algorithm is trained on data; informally, \"algorithm\" may also refer to the entire process of an ML project.\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a5e0d128968770839fabe75701996e2071b5452
| 713,164 |
ipynb
|
Jupyter Notebook
|
assignments/2018/assignment3_v2/LSTM_Captioning.ipynb
|
HuiChihWang/cs231n.github.io
|
1a8c214246a873a9c67986c15db23b574d913566
|
[
"MIT"
] | null | null | null |
assignments/2018/assignment3_v2/LSTM_Captioning.ipynb
|
HuiChihWang/cs231n.github.io
|
1a8c214246a873a9c67986c15db23b574d913566
|
[
"MIT"
] | null | null | null |
assignments/2018/assignment3_v2/LSTM_Captioning.ipynb
|
HuiChihWang/cs231n.github.io
|
1a8c214246a873a9c67986c15db23b574d913566
|
[
"MIT"
] | 1 |
2019-01-14T13:39:53.000Z
|
2019-01-14T13:39:53.000Z
| 1,172.967105 | 249,232 | 0.954859 |
[
[
[
"# Image Captioning with LSTMs\nIn the previous exercise you implemented a vanilla RNN and applied it to image captioning. In this notebook you will implement the LSTM update rule and use it for image captioning.",
"_____no_output_____"
]
],
[
[
"# As usual, a bit of setup\nimport time, os, json\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.rnn_layers import *\nfrom cs231n.captioning_solver import CaptioningSolver\nfrom cs231n.classifiers.rnn import CaptioningRNN\nfrom cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions\nfrom cs231n.image_utils import image_from_url\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))",
"_____no_output_____"
]
],
[
[
"# Load MS-COCO data\nAs in the previous notebook, we will use the Microsoft COCO dataset for captioning.",
"_____no_output_____"
]
],
[
[
"# Load COCO data from disk; this returns a dictionary\n# We'll work with dimensionality-reduced features for this notebook, but feel\n# free to experiment with the original features by changing the flag below.\ndata = load_coco_data(pca_features=True)\n\n# Print out all the keys and values from the data dictionary\nfor k, v in data.items():\n if type(v) == np.ndarray:\n print(k, type(v), v.shape, v.dtype)\n else:\n print(k, type(v), len(v))",
"train_captions <class 'numpy.ndarray'> (400135, 17) int32\ntrain_image_idxs <class 'numpy.ndarray'> (400135,) int32\nval_captions <class 'numpy.ndarray'> (195954, 17) int32\nval_image_idxs <class 'numpy.ndarray'> (195954,) int32\ntrain_features <class 'numpy.ndarray'> (82783, 512) float32\nval_features <class 'numpy.ndarray'> (40504, 512) float32\nidx_to_word <class 'list'> 1004\nword_to_idx <class 'dict'> 1004\ntrain_urls <class 'numpy.ndarray'> (82783,) <U63\nval_urls <class 'numpy.ndarray'> (40504,) <U63\n"
]
],
[
[
"# LSTM\nIf you read recent papers, you'll see that many people use a variant on the vanilla RNN called Long Short-Term Memory (LSTM) RNNs. Vanilla RNNs can be tough to train on long sequences due to vanishing and exploding gradients caused by repeated matrix multiplication. LSTMs solve this problem by replacing the simple update rule of the vanilla RNN with a gating mechanism as follows.\n\nSimilar to the vanilla RNN, at each timestep we receive an input $x_t\\in\\mathbb{R}^D$ and the previous hidden state $h_{t-1}\\in\\mathbb{R}^H$; the LSTM also maintains an $H$-dimensional *cell state*, so we also receive the previous cell state $c_{t-1}\\in\\mathbb{R}^H$. The learnable parameters of the LSTM are an *input-to-hidden* matrix $W_x\\in\\mathbb{R}^{4H\\times D}$, a *hidden-to-hidden* matrix $W_h\\in\\mathbb{R}^{4H\\times H}$ and a *bias vector* $b\\in\\mathbb{R}^{4H}$.\n\nAt each timestep we first compute an *activation vector* $a\\in\\mathbb{R}^{4H}$ as $a=W_xx_t + W_hh_{t-1}+b$. We then divide this into four vectors $a_i,a_f,a_o,a_g\\in\\mathbb{R}^H$ where $a_i$ consists of the first $H$ elements of $a$, $a_f$ is the next $H$ elements of $a$, etc. We then compute the *input gate* $i\\in\\mathbb{R}^H$, *forget gate* $f\\in\\mathbb{R}^H$, *output gate* $o\\in\\mathbb{R}^H$ and *block input* $g\\in\\mathbb{R}^H$ as\n\n$$\n\\begin{align*}\ni = \\sigma(a_i) \\hspace{2pc}\nf = \\sigma(a_f) \\hspace{2pc}\no = \\sigma(a_o) \\hspace{2pc}\ng = \\tanh(a_g)\n\\end{align*}\n$$\n\nwhere $\\sigma$ is the sigmoid function and $\\tanh$ is the hyperbolic tangent, both applied elementwise.\n\nFinally we compute the next cell state $c_t$ and next hidden state $h_t$ as\n\n$$\nc_{t} = f\\odot c_{t-1} + i\\odot g \\hspace{4pc}\nh_t = o\\odot\\tanh(c_t)\n$$\n\nwhere $\\odot$ is the elementwise product of vectors.\n\nIn the rest of the notebook we will implement the LSTM update rule and apply it to the image captioning task.\n\nIn the code, we assume that data is stored in batches so that $X_t \\in \\mathbb{R}^{N\\times D}$, and will work with *transposed* versions of the parameters: $W_x \\in \\mathbb{R}^{D \\times 4H}$, $W_h \\in \\mathbb{R}^{H\\times 4H}$ so that activations $A \\in \\mathbb{R}^{N\\times 4H}$ can be computed efficiently as $A = X_t W_x + H_{t-1} W_h$",
"_____no_output_____"
],
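The gating equations above can be written out directly in NumPy. This is a minimal sketch of the math only, not the assignment's `lstm_step_forward` implementation; the `lstm_step` and `sigmoid` names here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, prev_h, prev_c, Wx, Wh, b):
    # x: (N, D), prev_h/prev_c: (N, H), Wx: (D, 4H), Wh: (H, 4H), b: (4H,)
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b        # activation vector A, shape (N, 4H)
    i = sigmoid(a[:, 0*H:1*H])          # input gate
    f = sigmoid(a[:, 1*H:2*H])          # forget gate
    o = sigmoid(a[:, 2*H:3*H])          # output gate
    g = np.tanh(a[:, 3*H:4*H])          # block input
    next_c = f * prev_c + i * g         # elementwise (Hadamard) products
    next_h = o * np.tanh(next_c)
    return next_h, next_c
```

Because `o` lies in (0, 1) and `tanh` in (-1, 1), every entry of `next_h` is bounded in magnitude by 1, while `next_c` is unbounded.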
[
"# LSTM: step forward\nImplement the forward pass for a single timestep of an LSTM in the `lstm_step_forward` function in the file `cs231n/rnn_layers.py`. This should be similar to the `rnn_step_forward` function that you implemented above, but using the LSTM update rule instead.\n\nOnce you are done, run the following to perform a simple test of your implementation. You should see errors on the order of `e-8` or less.",
"_____no_output_____"
]
],
[
[
"N, D, H = 3, 4, 5\nx = np.linspace(-0.4, 1.2, num=N*D).reshape(N, D)\nprev_h = np.linspace(-0.3, 0.7, num=N*H).reshape(N, H)\nprev_c = np.linspace(-0.4, 0.9, num=N*H).reshape(N, H)\nWx = np.linspace(-2.1, 1.3, num=4*D*H).reshape(D, 4 * H)\nWh = np.linspace(-0.7, 2.2, num=4*H*H).reshape(H, 4 * H)\nb = np.linspace(0.3, 0.7, num=4*H)\n\nnext_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)\n\nexpected_next_h = np.asarray([\n [ 0.24635157, 0.28610883, 0.32240467, 0.35525807, 0.38474904],\n [ 0.49223563, 0.55611431, 0.61507696, 0.66844003, 0.7159181 ],\n [ 0.56735664, 0.66310127, 0.74419266, 0.80889665, 0.858299 ]])\nexpected_next_c = np.asarray([\n [ 0.32986176, 0.39145139, 0.451556, 0.51014116, 0.56717407],\n [ 0.66382255, 0.76674007, 0.87195994, 0.97902709, 1.08751345],\n [ 0.74192008, 0.90592151, 1.07717006, 1.25120233, 1.42395676]])\n\nprint('next_h error: ', rel_error(expected_next_h, next_h))\nprint('next_c error: ', rel_error(expected_next_c, next_c))",
"next_h error: 5.7054131185818695e-09\nnext_c error: 5.8143123088804145e-09\n"
]
],
[
[
"# LSTM: step backward\nImplement the backward pass for a single LSTM timestep in the function `lstm_step_backward` in the file `cs231n/rnn_layers.py`. Once you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-7` or less.",
"_____no_output_____"
]
],
[
[
"np.random.seed(231)\n\nN, D, H = 4, 5, 6\nx = np.random.randn(N, D)\nprev_h = np.random.randn(N, H)\nprev_c = np.random.randn(N, H)\nWx = np.random.randn(D, 4 * H)\nWh = np.random.randn(H, 4 * H)\nb = np.random.randn(4 * H)\n\nnext_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)\n\ndnext_h = np.random.randn(*next_h.shape)\ndnext_c = np.random.randn(*next_c.shape)\n\nfx_h = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\nfh_h = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\nfc_h = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\nfWx_h = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\nfWh_h = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\nfb_h = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]\n\nfx_c = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\nfh_c = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\nfc_c = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\nfWx_c = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\nfWh_c = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\nfb_c = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]\n\nnum_grad = eval_numerical_gradient_array\n\ndx_num = num_grad(fx_h, x, dnext_h) + num_grad(fx_c, x, dnext_c)\ndh_num = num_grad(fh_h, prev_h, dnext_h) + num_grad(fh_c, prev_h, dnext_c)\ndc_num = num_grad(fc_h, prev_c, dnext_h) + num_grad(fc_c, prev_c, dnext_c)\ndWx_num = num_grad(fWx_h, Wx, dnext_h) + num_grad(fWx_c, Wx, dnext_c)\ndWh_num = num_grad(fWh_h, Wh, dnext_h) + num_grad(fWh_c, Wh, dnext_c)\ndb_num = num_grad(fb_h, b, dnext_h) + num_grad(fb_c, b, dnext_c)\n\ndx, dh, dc, dWx, dWh, db = lstm_step_backward(dnext_h, dnext_c, cache)\n\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dh error: ', rel_error(dh_num, dh))\nprint('dc error: ', rel_error(dc_num, dc))\nprint('dWx error: ', rel_error(dWx_num, dWx))\nprint('dWh error: ', rel_error(dWh_num, dWh))\nprint('db error: ', rel_error(db_num, db))",
"dx error: 6.14126356677057e-10\ndh error: 3.0914728531469933e-10\ndc error: 1.5221723979041107e-10\ndWx error: 1.6933643922734908e-09\ndWh error: 4.806248540056623e-08\ndb error: 1.734924139321044e-10\n"
]
],
[
[
"# LSTM: forward\nIn the function `lstm_forward` in the file `cs231n/rnn_layers.py`, implement the `lstm_forward` function to run an LSTM forward on an entire timeseries of data.\n\nWhen you are done, run the following to check your implementation. You should see an error on the order of `e-7` or less.",
"_____no_output_____"
]
],
[
[
"N, D, H, T = 2, 5, 4, 3\nx = np.linspace(-0.4, 0.6, num=N*T*D).reshape(N, T, D)\nh0 = np.linspace(-0.4, 0.8, num=N*H).reshape(N, H)\nWx = np.linspace(-0.2, 0.9, num=4*D*H).reshape(D, 4 * H)\nWh = np.linspace(-0.3, 0.6, num=4*H*H).reshape(H, 4 * H)\nb = np.linspace(0.2, 0.7, num=4*H)\n\nh, cache = lstm_forward(x, h0, Wx, Wh, b)\n\nexpected_h = np.asarray([\n [[ 0.01764008, 0.01823233, 0.01882671, 0.0194232 ],\n [ 0.11287491, 0.12146228, 0.13018446, 0.13902939],\n [ 0.31358768, 0.33338627, 0.35304453, 0.37250975]],\n [[ 0.45767879, 0.4761092, 0.4936887, 0.51041945],\n [ 0.6704845, 0.69350089, 0.71486014, 0.7346449 ],\n [ 0.81733511, 0.83677871, 0.85403753, 0.86935314]]])\n\nprint('h error: ', rel_error(expected_h, h))",
"h error: 8.610537452106624e-08\n"
]
],
[
[
"# LSTM: backward\nImplement the backward pass for an LSTM over an entire timeseries of data in the function `lstm_backward` in the file `cs231n/rnn_layers.py`. When you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-8` or less. (For `dWh`, it's fine if your error is on the order of `e-6` or less).",
"_____no_output_____"
]
],
[
[
"from cs231n.rnn_layers import lstm_forward, lstm_backward\nnp.random.seed(231)\n\nN, D, T, H = 2, 3, 10, 6\n\nx = np.random.randn(N, T, D)\nh0 = np.random.randn(N, H)\nWx = np.random.randn(D, 4 * H)\nWh = np.random.randn(H, 4 * H)\nb = np.random.randn(4 * H)\n\nout, cache = lstm_forward(x, h0, Wx, Wh, b)\n\ndout = np.random.randn(*out.shape)\n\ndx, dh0, dWx, dWh, db = lstm_backward(dout, cache)\n\nfx = lambda x: lstm_forward(x, h0, Wx, Wh, b)[0]\nfh0 = lambda h0: lstm_forward(x, h0, Wx, Wh, b)[0]\nfWx = lambda Wx: lstm_forward(x, h0, Wx, Wh, b)[0]\nfWh = lambda Wh: lstm_forward(x, h0, Wx, Wh, b)[0]\nfb = lambda b: lstm_forward(x, h0, Wx, Wh, b)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\ndh0_num = eval_numerical_gradient_array(fh0, h0, dout)\ndWx_num = eval_numerical_gradient_array(fWx, Wx, dout)\ndWh_num = eval_numerical_gradient_array(fWh, Wh, dout)\ndb_num = eval_numerical_gradient_array(fb, b, dout)\n\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dh0 error: ', rel_error(dh0_num, dh0))\nprint('dWx error: ', rel_error(dWx_num, dWx))\nprint('dWh error: ', rel_error(dWh_num, dWh))\nprint('db error: ', rel_error(db_num, db))",
"dx error: 7.1588553323497326e-09\ndh0 error: 1.4205074062556152e-08\ndWx error: 1.190041651048399e-09\ndWh error: 1.4586835619804606e-07\ndb error: 1.050202179357724e-09\n"
]
],
[
[
"# INLINE QUESTION",
"_____no_output_____"
],
[
"Recall that in an LSTM the input gate $i$, forget gate $f$, and output gate $o$ are all outputs of a sigmoid function. Why don't we use the ReLU activation function instead of sigmoid to compute these values? Explain.",
"_____no_output_____"
],
[
"# LSTM captioning model\n\nNow that you have implemented an LSTM, update the implementation of the `loss` method of the `CaptioningRNN` class in the file `cs231n/classifiers/rnn.py` to handle the case where `self.cell_type` is `lstm`. This should require adding less than 10 lines of code.\n\nOnce you have done so, run the following to check your implementation. You should see a difference on the order of `e-10` or less.",
"_____no_output_____"
]
],
[
[
"N, D, W, H = 10, 20, 30, 40\nword_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}\nV = len(word_to_idx)\nT = 13\n\nmodel = CaptioningRNN(word_to_idx,\n input_dim=D,\n wordvec_dim=W,\n hidden_dim=H,\n cell_type='lstm',\n dtype=np.float64)\n\n# Set all model parameters to fixed values\nfor k, v in model.params.items():\n model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)\n\nfeatures = np.linspace(-0.5, 1.7, num=N*D).reshape(N, D)\ncaptions = (np.arange(N * T) % V).reshape(N, T)\n\nloss, grads = model.loss(features, captions)\nexpected_loss = 9.82445935443\n\nprint('loss: ', loss)\nprint('expected loss: ', expected_loss)\nprint('difference: ', abs(loss - expected_loss))",
"loss: 9.82445935443226\nexpected loss: 9.82445935443\ndifference: 2.261302256556519e-12\n"
]
],
[
[
"# Overfit LSTM captioning model\nRun the following to overfit an LSTM captioning model on the same small dataset as we used for the RNN previously. You should see a final loss less than 0.5.",
"_____no_output_____"
]
],
[
[
"np.random.seed(231)\n\nsmall_data = load_coco_data(max_train=50)\n\nsmall_lstm_model = CaptioningRNN(\n cell_type='lstm',\n word_to_idx=data['word_to_idx'],\n input_dim=data['train_features'].shape[1],\n hidden_dim=512,\n wordvec_dim=256,\n dtype=np.float32,\n )\n\nsmall_lstm_solver = CaptioningSolver(small_lstm_model, small_data,\n update_rule='adam',\n num_epochs=50,\n batch_size=25,\n optim_config={\n 'learning_rate': 5e-3,\n },\n lr_decay=0.995,\n verbose=True, print_every=10,\n )\n\nsmall_lstm_solver.train()\n\n# Plot the training losses\nplt.plot(small_lstm_solver.loss_history)\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\nplt.title('Training loss history')\nplt.show()",
"(Iteration 1 / 100) loss: 79.551150\n(Iteration 11 / 100) loss: 43.829099\n(Iteration 21 / 100) loss: 30.062613\n(Iteration 31 / 100) loss: 14.020053\n(Iteration 41 / 100) loss: 6.003986\n(Iteration 51 / 100) loss: 1.852246\n(Iteration 61 / 100) loss: 0.640582\n(Iteration 71 / 100) loss: 0.285610\n(Iteration 81 / 100) loss: 0.234114\n(Iteration 91 / 100) loss: 0.121609\n"
]
],
[
[
"# LSTM test-time sampling\nModify the `sample` method of the `CaptioningRNN` class to handle the case where `self.cell_type` is `lstm`. This should take fewer than 10 lines of code.\n\nWhen you are done run the following to sample from your overfit LSTM model on some training and validation set samples. As with the RNN, training results should be very good, and validation results probably won't make a lot of sense (because we're overfitting).",
"_____no_output_____"
]
],
[
[
"for split in ['train', 'val']:\n minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)\n gt_captions, features, urls = minibatch\n gt_captions = decode_captions(gt_captions, data['idx_to_word'])\n\n sample_captions = small_lstm_model.sample(features)\n sample_captions = decode_captions(sample_captions, data['idx_to_word'])\n\n for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):\n plt.imshow(image_from_url(url))\n plt.title('%s\\n%s\\nGT:%s' % (split, sample_caption, gt_caption))\n plt.axis('off')\n plt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5e0d1f51cbe96815ac038ab3882a1e2e798f9d
| 14,508 |
ipynb
|
Jupyter Notebook
|
examples/tutorials/Part 3 - Advanced Remote Execution Tools.ipynb
|
alexis-thual/PySyft
|
f34aba95776d57b9bf30252061a84b64fc23018b
|
[
"Apache-2.0"
] | null | null | null |
examples/tutorials/Part 3 - Advanced Remote Execution Tools.ipynb
|
alexis-thual/PySyft
|
f34aba95776d57b9bf30252061a84b64fc23018b
|
[
"Apache-2.0"
] | null | null | null |
examples/tutorials/Part 3 - Advanced Remote Execution Tools.ipynb
|
alexis-thual/PySyft
|
f34aba95776d57b9bf30252061a84b64fc23018b
|
[
"Apache-2.0"
] | null | null | null | 23.101911 | 489 | 0.531017 |
[
[
[
"# Part 3: Advanced Remote Execution Tools\n\nIn the last section we trained a toy model using Federated Learning. We did this by calling .send() and .get() on our model, sending it to the location of training data, updating it, and then bringing it back. However, at the end of the example we realized that we needed to go a bit further to protect people's privacy. Namely, we want to average the gradients **before** calling .get(). That way, we won't ever see anyone's exact gradient (thus better protecting their privacy!!!)\n\nBut, in order to do this, we need a few more pieces:\n\n- use a pointer to send a Tensor directly to another worker\n\nAnd in addition, while we're here, we're going to learn about a few more advanced tensor operations as well which will help us both with this example and a few in the future!\n\nAuthors:\n- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)",
"_____no_output_____"
]
],
[
[
"import torch\nimport syft as sy\nhook = sy.TorchHook(torch)",
"_____no_output_____"
]
],
[
[
"# Section 3.1 - Pointers to Pointers\n\nAs you know, PointerTensor objects feel just like normal tensors. In fact, they are _so much like tensors_ that we can even have pointers **to** the pointers. Check it out!",
"_____no_output_____"
]
],
[
[
"bob = sy.VirtualWorker(hook, id='bob')\nalice = sy.VirtualWorker(hook, id='alice')\n\n# making sure that bob/alice know about each other\nbob.add_worker(alice)\nalice.add_worker(bob)",
"_____no_output_____"
],
[
"# this is a local tensor\nx = torch.tensor([1,2,3,4])\nx",
"_____no_output_____"
],
[
"# this sends the local tensor to Bob\nx_ptr = x.send(bob)\n\n# this is now a pointer\nx_ptr",
"_____no_output_____"
],
[
"# now we can SEND THE POINTER to alice!!!\npointer_to_x_ptr = x_ptr.send(alice)\n\npointer_to_x_ptr",
"_____no_output_____"
]
],
[
[
"### What happened?\n\nSo, in the previous example, we created a tensor called `x` and sent it to Bob, creating a pointer on our local machine (`x_ptr`). \n\nThen, we called `x_ptr.send(alice)` which **sent the pointer** to Alice. \n\nNote, this did NOT move the data! Instead, it moved the pointer to the data!! ",
"_____no_output_____"
]
],
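The pointer-to-pointer idea can be mimicked with plain Python objects. This is a toy analogy only, not PySyft's actual `PointerTensor` machinery; the `Worker`, `Pointer`, and `send` names below are made up for illustration:

```python
class Worker:
    """Toy stand-in for a VirtualWorker: a named object store."""
    def __init__(self, name):
        self.name = name
        self.store = {}

class Pointer:
    """Holds a location and an object id -- never the data itself."""
    def __init__(self, location, obj_id):
        self.location = location
        self.obj_id = obj_id

    def get(self):
        # Retrieve (and remove) the referenced object from its location
        return self.location.store.pop(self.obj_id)

def send(value, worker, obj_id):
    worker.store[obj_id] = value
    return Pointer(worker, obj_id)

bob, alice = Worker("bob"), Worker("alice")
x_ptr = send([1, 2, 3], bob, 1)   # the data lives on bob
p2p = send(x_ptr, alice, 2)       # the *pointer* now lives on alice
```

Calling `p2p.get()` brings back only the pointer from Alice; a second `.get()` on that pointer is what finally retrieves the data from Bob, mirroring the cells above.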
[
[
"# As you can see above, Bob still has the actual data (data is always stored in a LocalTensor type). \nbob._objects",
"_____no_output_____"
],
[
"# Alice, on the other hand, has x_ptr!! (notice how it points at bob)\nalice._objects",
"_____no_output_____"
]
],
[
[
"\n\n",
"_____no_output_____"
]
],
[
[
"# and we can use .get() to get x_ptr back from Alice\n\nx_ptr = pointer_to_x_ptr.get()\nx_ptr",
"_____no_output_____"
],
[
"# and then we can use x_ptr to get x back from Bob!\n\nx = x_ptr.get()\nx",
"_____no_output_____"
]
],
[
[
"### Arithmetic on Pointer -> Pointer -> Data Object\n\nAnd just like with normal pointers, we can perform arbitrary PyTorch operations across these tensors",
"_____no_output_____"
]
],
[
[
"bob._objects",
"_____no_output_____"
],
[
"alice._objects",
"_____no_output_____"
],
[
"p2p2x = torch.tensor([1,2,3,4,5]).send(bob).send(alice)\n\ny = p2p2x + p2p2x",
"_____no_output_____"
],
[
"bob._objects",
"_____no_output_____"
],
[
"alice._objects",
"_____no_output_____"
],
[
"y.get().get()",
"_____no_output_____"
],
[
"bob._objects",
"_____no_output_____"
],
[
"alice._objects",
"_____no_output_____"
],
[
"p2p2x.get().get()",
"_____no_output_____"
],
[
"bob._objects",
"_____no_output_____"
],
[
"alice._objects",
"_____no_output_____"
]
],
[
[
"# Section 3.2 - Pointer Chain Operations\n\nSo in the last section whenever we called a .send() or a .get() operation, it called that operation directly on the tensor on our local machine. However, if you have a chain of pointers, sometimes you want to call operations like .get() or .send() on the **last** pointer in the chain (such as sending data directly from one worker to another). To accomplish this, you want to use functions which are especially designed for this privacy preserving operation.\n\nThese operations are:\n\n- `my_pointer2pointer.move(another_worker)`",
"_____no_output_____"
]
],
[
[
"# x is now a pointer to a pointer to the data which lives on Bob's machine\nx = torch.tensor([1,2,3,4,5]).send(bob)",
"_____no_output_____"
],
[
"print(' bob:', bob._objects)\nprint('alice:',alice._objects)",
" bob: {18145778415: tensor([1, 2, 3, 4, 5])}\nalice: {}\n"
],
[
"x = x.move(alice)",
"_____no_output_____"
],
[
"print(' bob:', bob._objects)\nprint('alice:',alice._objects)",
" bob: {}\nalice: {63566599439: tensor([1, 2, 3, 4, 5])}\n"
],
[
"x",
"_____no_output_____"
]
],
[
[
"Excellent! Now we're equipped with the tools to perform remote **gradient averaging** using a trusted aggregator! ",
"_____no_output_____"
],
[
"# Congratulations!!! - Time to Join the Community!\n\nCongratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!\n\n### Star PySyft on GitHub\n\nThe easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.\n\n- [Star PySyft](https://github.com/OpenMined/PySyft)\n\n### Join our Slack!\n\nThe best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)\n\n### Join a Code Project!\n\nThe best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for \"Projects\". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more \"one off\" mini-projects by searching for GitHub issues marked \"good first issue\".\n\n- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)\n- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)\n\n### Donate\n\nIf you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!\n\n[OpenMined's Open Collective Page](https://opencollective.com/openmined)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4a5e1d9a19485b9884a4e4dca2a5ca11442f1c4b
| 309,837 |
ipynb
|
Jupyter Notebook
|
first-neural-network/Your_first_neural_network.ipynb
|
manasapte/deep-learning
|
d3b1da8c6ef79880d36e3064f5164bd1ef7b300c
|
[
"MIT"
] | null | null | null |
first-neural-network/Your_first_neural_network.ipynb
|
manasapte/deep-learning
|
d3b1da8c6ef79880d36e3064f5164bd1ef7b300c
|
[
"MIT"
] | null | null | null |
first-neural-network/Your_first_neural_network.ipynb
|
manasapte/deep-learning
|
d3b1da8c6ef79880d36e3064f5164bd1ef7b300c
|
[
"MIT"
] | null | null | null | 328.914013 | 159,528 | 0.911521 |
[
[
[
"# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"_____no_output_____"
]
],
[
[
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)",
"_____no_output_____"
],
[
"rides.head()",
"_____no_output_____"
]
],
[
[
"## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"_____no_output_____"
]
],
[
[
"rides[:24*10].plot(x='dteday', y='cnt')",
"_____no_output_____"
]
],
[
[
"### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.",
"_____no_output_____"
]
],
[
[
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"_____no_output_____"
]
],
[
[
"### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"_____no_output_____"
]
],
[
[
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"_____no_output_____"
]
],
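To go backwards later (for example, to plot predictions in original ride counts), reverse the shift-and-scale using the saved factors. A small sketch, assuming the `scaled_features` dictionary built above; the `unscale` helper name is made up:

```python
# Invert the standardization applied above:
# scaled = (raw - mean) / std  =>  raw = scaled * std + mean
def unscale(scaled_values, feature, scaled_features):
    mean, std = scaled_features[feature]
    return scaled_values * std + mean
```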
[
[
"### Splitting the data into training, testing, and validation sets\n\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"_____no_output_____"
]
],
[
[
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"_____no_output_____"
]
],
[
[
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"_____no_output_____"
]
],
[
[
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"_____no_output_____"
]
],
[
[
"## Time to build the network\n\nBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\n<img src=\"assets/neural_network.png\" width=300px>\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. Implement the forward pass in the `run` method.\n ",
"_____no_output_____"
]
],
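As a reference for the forward-propagation step described above, here is a small NumPy sketch of the two-layer pass (sigmoid hidden layer, identity output). It illustrates the math only, not the graded `NeuralNetwork` class; the `forward_pass` function and argument names are made up:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(X, w_input_hidden, w_hidden_output):
    # X: (N, n_inputs); weights: (n_inputs, n_hidden) and (n_hidden, 1)
    hidden_inputs = X @ w_input_hidden
    hidden_outputs = sigmoid(hidden_inputs)        # sigmoid activation
    final_inputs = hidden_outputs @ w_hidden_output
    return final_inputs                            # f(x) = x on the output node

# Derivatives needed for backprop: f'(x) = 1 for the identity output node,
# and sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) for the hidden layer.
```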
[
[
"#############\n# In the my_answers.py file, fill out the TODO sections as specified\n#############\n\nfrom my_answers import NeuralNetwork",
"_____no_output_____"
],
[
"def MSE(y, Y):\n return np.mean((y-Y)**2)",
"_____no_output_____"
]
],
[
[
"## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.",
"_____no_output_____"
]
],
[
[
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)\n",
".....\n----------------------------------------------------------------------\nRan 5 tests in 0.004s\n\nOK\n"
]
],
[
[
"## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\n\n### Choose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \n\nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.",
"_____no_output_____"
]
],
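The minibatch sampling that SGD relies on can be sketched with plain NumPy. The array names below are hypothetical stand-ins for the project's `train_features`/`train_targets`, not the project's API:

```python
import numpy as np

# Minimal sketch of SGD-style minibatch sampling (hypothetical data).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 3))            # stand-in training features
targets = features @ np.array([1.0, -2.0, 0.5])  # stand-in targets

# Sample a random batch of 128 records (with replacement, as
# np.random.choice does by default); each training pass uses one batch.
batch_idx = rng.choice(len(features), size=128)
X, y = features[batch_idx], targets[batch_idx]
print(X.shape, y.shape)
```

Each pass trains on a different random batch, which is why SGD needs many more (but much cheaper) passes than full-batch gradient descent.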
[
[
"import sys\n\n####################\n### Set the hyperparameters in you myanswers.py file ###\n####################\n\nfrom my_answers import iterations, learning_rate, hidden_nodes, output_nodes\n\n\nN_i = train_features.shape[1]\nprint(\"features: \", N_i)\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)",
"features: 56\nProgress: 0.1% ... Training loss: 0.923 ... Validation loss: 1.370"
],
[
"plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"_____no_output_____"
]
],
[
[
"## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\n\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"mse on test: 5867.10668199\n"
]
],
[
[
"## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\n\n#### Your answer below\n##### How well does the model predict the data?\nThe model predicts data with a pretty high accuracy as seen in the graph above for the most part. Upto Dec 21st the model accuracy is very very high and again after Dec 25, starting on the 26th the model is super accurate.\n##### Where does it fail?\nThe model is failing for the period from Dec 22nd to Dec 25th. We can see in the graph above, the predicted values are way larger than the observed counts on those days.\n##### Why does it fail where it does?\nIt is clear that the training data the model has seen fits very well to the days before Dec 22-25th and also the days after. Also, you can see the predicted values are more in line with the remaining days of the year. My hypothesis is the lower demand for bikes in the period Dec 22-25th is because of christmas / holidays and since we have split the data for the last 21 days into a test set, the model has never seen the effects of major holidays onto the demand. If the model is trained on uniformly sampled data, I feel like it will learn the correlation between this special week of Dec 22nd to 25th and lower demand and then it will be able to predict the demand for these test points much more accurately.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5e220c80b9dd617c47549fa349d04059a672bf
| 884,093 |
ipynb
|
Jupyter Notebook
|
examples/tutorials/Intro to OpenPNM - Advanced.ipynb
|
Jimmy-INL/OpenPNM
|
1546fa1ac2204443bde916f2037fac383c5069ae
|
[
"MIT"
] | 1 |
2020-06-08T19:48:00.000Z
|
2020-06-08T19:48:00.000Z
|
examples/tutorials/Intro to OpenPNM - Advanced.ipynb
|
Jimmy-INL/OpenPNM
|
1546fa1ac2204443bde916f2037fac383c5069ae
|
[
"MIT"
] | null | null | null |
examples/tutorials/Intro to OpenPNM - Advanced.ipynb
|
Jimmy-INL/OpenPNM
|
1546fa1ac2204443bde916f2037fac383c5069ae
|
[
"MIT"
] | null | null | null | 1,358.053763 | 431,496 | 0.956594 |
[
[
[
"# Tutorial 3 of 3: Advanced Topics and Usage\n\n**Learning Outcomes**\n\n* Use different methods to add boundary pores to a network\n* Manipulate network topology by adding and removing pores and throats\n* Explore the ModelsDict design, including copying models between objects, and changing model parameters\n* Write a custom pore-scale model and a custom Phase\n* Access and manipulate objects associated with the network\n* Combine multiple algorithms to predict relative permeability",
"_____no_output_____"
],
[
"## Build and Manipulate Network Topology\n\nFor the present tutorial, we'll keep the topology simple to help keep the focus on other aspects of OpenPNM.",
"_____no_output_____"
]
],
[
[
"import warnings\nimport numpy as np\nimport scipy as sp\nimport openpnm as op\n%matplotlib inline\nnp.random.seed(10)\nws = op.Workspace()\nws.settings['loglevel'] = 40\nnp.set_printoptions(precision=4)\npn = op.network.Cubic(shape=[10, 10, 10], spacing=0.00006, name='net')",
"_____no_output_____"
]
],
[
[
"## Adding Boundary Pores\n\nWhen performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the **Cubic** class, two methods are available for doing this: ``add_boundaries``, which is specific for the **Cubic** class, and ``add_boundary_pores``, which is a generic method that can also be used on other network types and which is inherited from **GenericNetwork**. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network ``spacing``. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify to which pores the boundary pores should be attached to. Let's explore these two options:",
"_____no_output_____"
]
],
[
[
"pn.add_boundary_pores(labels=['top', 'bottom'])",
"_____no_output_____"
]
],
[
[
"Let's quickly visualize this network with the added boundaries:",
"_____no_output_____"
]
],
[
[
"#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(pn, c='r')\nfig = op.topotools.plot_coordinates(pn, c='b', fig=fig)\nfig.set_size_inches([10, 10])",
"_____no_output_____"
]
],
[
[
"### Adding and Removing Pores and Throats\n\nOpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the ``'throat.conns'`` array must be treated carefully when trimming pores, so OpenPNM provides the ``extend`` and ``trim`` functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure:",
"_____no_output_____"
]
],
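The row-deletion idea behind ``trim`` can be sketched with a toy ``'throat.conns'`` array. The connections below are hypothetical, and the real function also renumbers pores and checks network health:

```python
import numpy as np

# Toy 'throat.conns' array: each row holds the two pore indices a throat joins.
conns = np.array([[0, 1], [1, 2], [2, 3], [0, 3]])

# Boolean mask of throats to keep (here we drop throat 1, joining pores 1-2).
keep = np.array([True, False, True, True])

# Trimming throats is just deleting rows from the connections array.
conns = conns[keep]
print(conns)
```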
[
[
"Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True\nop.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim",
"_____no_output_____"
]
],
[
[
"When the ``trim`` function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the **Network**'s ``check_network_health`` method which returns a **HealthDict** containing the results of the checks:",
"_____no_output_____"
]
],
[
[
"a = pn.check_network_health()\nprint(a)",
"――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\nkey value\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\ndisconnected_clusters [array([ 0, 1, 2, ..., 1197, 1198, 1199]), array([1080]), array([1010]), array([1012]), array([1015]), array([1040]), array([1059]), array([1061]), array([1067]), array([1071]), array([1075]), array([1184]), array([1170]), array([1105]), array([1114]), array([1120]), array([1136]), array([1141]), array([1146]), array([1152]), array([1153]), array([1159]), array([1101])]\nisolated_pores (22,)\ntrim_pores [1080, 1010, 1012, 1015, 1040, 1059, 1061, 1067, 1071, 1075, 1184, 1170, 1105, 1114, 1120, 1136, 1141, 1146, 1152, 1153, 1159, 1101]\nduplicate_throats []\nbidirectional_throats []\nheadless_throats []\nlooped_throats []\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n"
]
],
[
[
"The **HealthDict** contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the **HealthDict** has a ``health`` attribute that is ``False`` is any checks fail.",
"_____no_output_____"
]
],
[
[
"op.topotools.trim(network=pn, pores=a['trim_pores'])",
"_____no_output_____"
]
],
[
[
"Let's take another look at the network to see the trimmed pores and throats:",
"_____no_output_____"
]
],
[
[
"#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(pn, c='r')\nfig = op.topotools.plot_coordinates(pn, c='b', fig=fig)\nfig.set_size_inches([10, 10])",
"_____no_output_____"
]
],
[
[
"## Define Geometry Objects\n\nThe boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate **Geometry** objects, one for internal pores and one for the boundaries:",
"_____no_output_____"
]
],
[
[
"Ps = pn.pores('*boundary', mode='not')\nTs = pn.throats('*boundary', mode='not')\ngeom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')\nPs = pn.pores('*boundary')\nTs = pn.throats('*boundary')\nboun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')",
"_____no_output_____"
]
],
[
[
"The **StickAndBall** class is preloaded with the pore-scale models to calculate all the necessary size information (pore diameter, pore.volume, throat lengths, throat.diameter, etc). The **Boundary** class is speciall and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values such that they don't affect the simulation results. ",
"_____no_output_____"
],
[
"## Define Multiple Phase Objects\n\nIn order to simulate relative permeability of air through a partially water-filled network, we need to create each **Phase** object. OpenPNM includes pre-defined classes for each of these common fluids:",
"_____no_output_____"
]
],
[
[
"air = op.phases.Air(network=pn)\nwater = op.phases.Water(network=pn)\nwater['throat.contact_angle'] = 110\nwater['throat.surface_tension'] = 0.072",
"_____no_output_____"
]
],
[
[
"### Aside: Creating a Custom Phase Class\n\nIn many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom **Phase** class as follows:",
"_____no_output_____"
]
],
[
[
"from openpnm.phases import GenericPhase\n\nclass Oil(GenericPhase):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.add_model(propname='pore.viscosity',\n model=op.models.misc.polynomial,\n prop='pore.temperature',\n a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])\n self['pore.molecular_weight'] = 116 # g/mol",
"_____no_output_____"
]
],
[
[
"* Creating a **Phase** class basically involves placing a series of ``self.add_model`` commands within the ``__init__`` section of the class definition. This means that when the class is instantiated, all the models are added to *itself* (i.e. ``self``).\n* ``**kwargs`` is a Python trick that captures all arguments in a *dict* called ``kwargs`` and passes them to another function that may need them. In this case they are passed to the ``__init__`` method of **Oil**'s parent by the ``super`` function. Specifically, things like ``name`` and ``network`` are expected.\n* The above code block also stores the molecular weight of the oil as a constant value\n* Adding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project.",
"_____no_output_____"
]
],
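The ``**kwargs`` forwarding pattern described above is plain Python and can be seen in isolation. The classes below are toy stand-ins, not OpenPNM's:

```python
class Base:
    # Stand-in for GenericPhase: expects things like name and network.
    def __init__(self, name=None, network=None):
        self.name = name
        self.network = network

class Oil(Base):
    def __init__(self, **kwargs):
        # kwargs captures name/network/etc. and forwards them to the parent.
        super().__init__(**kwargs)
        self.molecular_weight = 116  # constant set at instantiation

oil = Oil(name='crude', network='pn')
print(oil.name, oil.molecular_weight)
```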
[
[
"oil = Oil(network=pn)\nprint(oil)",
"――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\nmain : phase_03\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n# Properties Valid Values\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n1 pore.molecular_weight 1178 / 1178 \n2 pore.pressure 1178 / 1178 \n3 pore.temperature 1178 / 1178 \n4 pore.viscosity 1178 / 1178 \n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n# Labels Assigned Locations\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n1 pore.all 1178 \n2 throat.all 2587 \n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n"
]
],
[
[
"## Define Physics Objects for Each Geometry and Each Phase\n\nIn the tutorial #2 we created two **Physics** object, one for each of the two **Geometry** objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own **Geometry**, but there are two **Phases**, which also each require a unique **Physics**:",
"_____no_output_____"
]
],
[
[
"phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)\nphys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)\nphys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)\nphys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun)",
"_____no_output_____"
]
],
[
[
 To reiterate, *one*">
"> To reiterate, *one* **Physics** object is required for each **Geometry** *AND* each **Phase**, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.",
"_____no_output_____"
],
[
"### Create a Custom Pore-Scale Physics Model\n\nPerhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM *models* libraries. For demonstration, let's implement the capillary pressure model proposed by [Mason and Morrow in 1994](http://dx.doi.org/10.1006/jcis.1994.1402). They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \\sigma \\cdot cos(2/3 \\theta) / R_t$\n\nPore-scale models are written as basic function definitions:",
"_____no_output_____"
]
],
[
[
"def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle', \n sigma='throat.surface_tension', f=0.6667):\n proj = target.project\n network = proj.network\n phase = proj.find_phase(target)\n Dt = network[diameter]\n theta = phase[theta]\n sigma = phase[sigma]\n Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt\n return Pc[phase.throats(target.name)]",
"_____no_output_____"
]
],
[
[
"Let's examine the components of above code:\n\n* The function receives a ``target`` object as an argument. This indicates which object the results will be returned to. \n* The ``f`` value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.\n* Note the ``pore.diameter`` is actually a **Geometry** property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.\n* All of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a ``target`` like a **Physics** object, that is a subset of the full domain. As such, the last line extracts values from the ``Pc`` array for the location of ``target`` and returns just the subset.\n* The actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.\n* It is good practice to include the dictionary keys as arguments, such as ``sigma = 'throat.contact_angle'``. This way the user can control where the contact angle could be stored on the ``target`` object.",
"_____no_output_____"
],
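The look-up-by-key behavior (the model re-reading current values every time it runs) can be demonstrated with a plain ``dict`` standing in for the ``target`` object; the numbers below are hypothetical:

```python
import numpy as np

# A plain dict standing in for an OpenPNM object (hypothetical values).
target = {
    'throat.diameter': np.array([1e-5, 2e-5]),
    'throat.surface_tension': np.array([0.072, 0.072]),
    'throat.contact_angle': np.array([110.0, 110.0]),
}

def entry_pressure(target, diameter='throat.diameter',
                   sigma='throat.surface_tension',
                   theta='throat.contact_angle', f=2/3):
    # Values are looked up by key at call time, not captured at definition.
    return 4*target[sigma]*np.cos(f*np.deg2rad(target[theta]))/target[diameter]

pc_before = entry_pressure(target)
target['throat.contact_angle'][:] = 120.0  # update the stored array...
pc_after = entry_pressure(target)          # ...and a re-run sees the change
print(pc_before[0], pc_after[0])
```

Had the arrays themselves been passed in, the second call would have reused the stale contact-angle values.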
[
"### Copy Models Between Physics Objects\n\nAs mentioned above, the need to specify a separate **Physics** object for each **Geometry** and **Phase** can become tedious. It is possible to *copy* the pore-scale models assigned to one object onto another object. First, let's assign the models we need to ``phys_water_internal``:",
"_____no_output_____"
]
],
[
[
"mod = op.models.physics.hydraulic_conductance.hagen_poiseuille\nphys_water_internal.add_model(propname='throat.hydraulic_conductance',\n model=mod)",
"_____no_output_____"
],
[
"phys_water_internal.add_model(propname='throat.entry_pressure',\n model=mason_model)",
"_____no_output_____"
]
],
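For reference, the Hagen-Poiseuille model assigned above reduces, for a single cylindrical throat of radius R and length L, to g = pi*R^4/(8*mu*L); the dimensions below are hypothetical, and OpenPNM's version also accounts for the pore ends of the conduit:

```python
import math

# Hagen-Poiseuille conductance of one cylindrical throat (hypothetical sizes).
R = 5e-6   # throat radius [m]
L = 5e-5   # throat length [m]
mu = 1e-3  # water viscosity [Pa*s]

g = math.pi * R**4 / (8 * mu * L)
print(g)
```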
[
[
"Now make a copy of the ``models`` on ``phys_water_internal`` and apply it all the other water **Physics** objects:",
"_____no_output_____"
]
],
[
[
"phys_water_boundary.models = phys_water_internal.models",
"_____no_output_____"
]
],
[
[
"The only 'gotcha' with this approach is that each of the **Physics** objects must be *regenerated* in order to place numerical values for all the properties into the data arrays:",
"_____no_output_____"
]
],
[
[
"phys_water_boundary.regenerate_models()\nphys_air_internal.regenerate_models()\nphys_air_internal.regenerate_models()",
"_____no_output_____"
]
],
[
[
"### Adjust Pore-Scale Model Parameters\n\nThe pore-scale models are stored in a **ModelsDict** object that is itself stored under the ``models`` attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the object's wo which they apply. The models on an object can be inspected with ``print(phys_water_internal)``, which shows a list of all the pore-scale properties that are computed by a model, and some information about the model's *regeneration* mode.\n\nEach model in the **ModelsDict** can be individually inspected by accessing it using the dictionary key corresponding to *pore-property* that it calculates, i.e. ``print(phys_water_internal)['throat.capillary_pressure'])``. This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly:",
"_____no_output_____"
]
],
[
[
"phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value\nphys_water_internal.regenerate_models() # Regenerate model with new 'f' value",
"_____no_output_____"
]
],
[
[
"More details about the **ModelsDict** and **ModelWrapper** classes can be found in :ref:`models`.\n\n## Perform Multiphase Transport Simulations\n\n### Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration",
"_____no_output_____"
]
],
[
[
"inv = op.algorithms.Porosimetry(network=pn)\ninv.setup(phase=water)\ninv.set_inlets(pores=pn.pores(['top', 'bottom']))\ninv.run()",
"_____no_output_____"
]
],
[
[
"* The inlet pores were set to both ``'top'`` and ``'bottom'`` using the ``pn.pores`` method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.\n* The ``run`` method automatically generates a list of 25 capillary pressure points to test, but you can also specify more pores, or which specific points to tests. See the methods documentation for the details.\n* Once the algorithm has been run, the resulting capillary pressure curve can be viewed with ``plot_drainage_curve``. If you'd prefer a table of data for plotting in your software of choice you can use ``get_drainage_data`` which prints a table in the console.",
"_____no_output_____"
],
[
"### Set Pores and Throats to Invaded\n\nAfter running, the ``mip`` object possesses an array containing the pressure at which each pore and throat was invaded, stored as ``'pore.inv_Pc'`` and ``'throat.inv_Pc'``. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic:",
"_____no_output_____"
]
],
[
[
"Pi = inv['pore.invasion_pressure'] < 5000\nTi = inv['throat.invasion_pressure'] < 5000",
"_____no_output_____"
]
],
[
[
"The resulting Boolean masks can be used to manually adjust the hydraulic conductivity of pores and throats based on their phase occupancy. The following lines set the water filled throats to near-zero conductivity for air flow:",
"_____no_output_____"
]
],
[
[
"Ts = phys_water_internal.map_throats(~Ti, origin=water)\nphys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20",
"_____no_output_____"
]
],
[
[
"* The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if *either* or *both* of the pores are filled as well.\n* The above approach can get complicated if there are several **Geometry** objects, and it is also a bit laborious. There is a pore-scale model for this under **Physics.models.multiphase** called ``conduit_conductance``. The term conduit refers to the path between two pores that includes 1/2 of each pores plus the connecting throat.",
"_____no_output_____"
],
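The conduit idea can be sketched as three conductances in series: half of each pore plus the connecting throat. The values below are hypothetical:

```python
# Hypothetical conductances of the two pore halves and the throat.
g_p1, g_t, g_p2 = 2.0e-12, 1.0e-12, 4.0e-12

# Series combination gives the conduit conductance.
g_conduit = 1.0 / (1.0/g_p1 + 1.0/g_t + 1.0/g_p2)

# A phase-blocked throat (near-zero conductance) shuts the whole conduit.
g_blocked = 1.0 / (1.0/g_p1 + 1.0/1e-20 + 1.0/g_p2)
print(g_conduit, g_blocked)
```

Because the elements are in series, the conduit conductance is always smaller than the smallest of its three parts.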
[
"### Calculate Relative Permeability of Each Phase\n\nWe are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate an **StokesFlow** object:",
"_____no_output_____"
]
],
[
[
"water_flow = op.algorithms.StokesFlow(network=pn, phase=water)\nwater_flow.set_value_BC(pores=pn.pores('left'), values=200000)\nwater_flow.set_value_BC(pores=pn.pores('right'), values=100000)\nwater_flow.run()\nQ_partial, = water_flow.rate(pores=pn.pores('right'))",
"_____no_output_____"
]
],
[
[
"The *relative* permeability is the ratio of the water flow through the partially water saturated media versus through fully water saturated media; hence we need to find the absolute permeability of water. This can be accomplished by *regenerating* the ``phys_water_internal`` object, which will recalculate the ``'throat.hydraulic_conductance'`` values and overwrite our manually entered near-zero values from the ``inv`` simulation using ``phys_water_internal.models.regenerate()``. We can then re-use the ``water_flow`` algorithm:",
"_____no_output_____"
]
],
[
[
"phys_water_internal.regenerate_models()\nwater_flow.run()\nQ_full, = water_flow.rate(pores=pn.pores('right'))",
"_____no_output_____"
]
],
[
[
"And finally, the relative permeability can be found from:",
"_____no_output_____"
]
],
[
[
"K_rel = Q_partial/Q_full\nprint(f\"Relative permeability: {K_rel:.5f}\")",
"Relative permeability: 0.97898\n"
]
],
[
[
"* The ratio of the flow rates gives the normalized relative permeability since all the domain size, viscosity and pressure differential terms cancel each other.\n* To generate a full relative permeability curve the above logic would be placed inside a for loop, with each loop increasing the pressure threshold used to obtain the list of invaded throats (``Ti``).\n* The saturation at each capillary pressure can be found be summing the pore and throat volume of all the invaded pores and throats using ``Vp = geom['pore.volume'][Pi]`` and ``Vt = geom['throat.volume'][Ti]``.",
"_____no_output_____"
]
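The saturation bookkeeping described in the last bullet can be sketched with hypothetical volumes and invasion pressures for a tiny network:

```python
import numpy as np

# Hypothetical pore/throat volumes and their invasion pressures.
pore_vol = np.array([1.0, 2.0, 3.0])
throat_vol = np.array([0.5, 0.5])
pore_Pc = np.array([1000.0, 4000.0, 8000.0])
throat_Pc = np.array([2000.0, 9000.0])

def saturation(Pc):
    # Invaded elements are those whose entry pressure has been exceeded.
    Pi, Ti = pore_Pc <= Pc, throat_Pc <= Pc
    wet = pore_vol[Pi].sum() + throat_vol[Ti].sum()
    return wet / (pore_vol.sum() + throat_vol.sum())

# Sweeping the pressure threshold traces out the saturation curve.
curve = [saturation(p) for p in (500, 3000, 10000)]
print(curve)
```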
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a5e3a12f550acbbf9445dc7febd13a149514f30
| 838 |
ipynb
|
Jupyter Notebook
|
HelloGithub.ipynb
|
marcinpopek/dw_matrix
|
a2cd85fbac9a87da13ee03e03ce86a50dced74a3
|
[
"MIT"
] | null | null | null |
HelloGithub.ipynb
|
marcinpopek/dw_matrix
|
a2cd85fbac9a87da13ee03e03ce86a50dced74a3
|
[
"MIT"
] | null | null | null |
HelloGithub.ipynb
|
marcinpopek/dw_matrix
|
a2cd85fbac9a87da13ee03e03ce86a50dced74a3
|
[
"MIT"
] | null | null | null | 838 | 838 | 0.687351 |
[
[
[
"print(\"Hello Github\")",
"Hello Github\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
4a5e4799b6cf7648539a70932ca772abb51a97f7
| 264,298 |
ipynb
|
Jupyter Notebook
|
Day 1 CNN/1-Understanding-Convolutional-Filters.ipynb
|
shreya888/Summer-School-Computer-Vision-IIIT-H-2019
|
ec013a8aea5dade68100558a2eb37960a002f008
|
[
"MIT"
] | null | null | null |
Day 1 CNN/1-Understanding-Convolutional-Filters.ipynb
|
shreya888/Summer-School-Computer-Vision-IIIT-H-2019
|
ec013a8aea5dade68100558a2eb37960a002f008
|
[
"MIT"
] | null | null | null |
Day 1 CNN/1-Understanding-Convolutional-Filters.ipynb
|
shreya888/Summer-School-Computer-Vision-IIIT-H-2019
|
ec013a8aea5dade68100558a2eb37960a002f008
|
[
"MIT"
] | null | null | null | 889.892256 | 142,896 | 0.953435 |
[
[
[
"## What is convolution and how it works ?\n\n[Convolution][1] is the process of adding each element of the image to its local neighbors, weighted by the [kernel][2]. A kernel, convolution matrix, filter, or mask is a small matrix. It is used for blurring, sharpening, embossing, edge detection, and more. This is accomplished by doing a convolution between a kernel and an image. Lets see how to do this.\n\n[1]: https://en.wikipedia.org/wiki/Kernel_(image_processing)#Convolution\n[2]: https://en.wikipedia.org/wiki/Kernel_(image_processing)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy import signal\nimport skimage\nimport skimage.io as sio\nfrom skimage import filters\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"#### Load the Image and show it.",
"_____no_output_____"
]
],
[
[
"img = sio.imread('images/lines.jpg')\nimg = skimage.color.rgb2gray(img)\nprint('Image Shape is:',img.shape)\nplt.figure(figsize = (8,8))\nplt.imshow(img,cmap='gray',aspect='auto'),plt.show()",
"Image Shape is: (800, 800)\n"
]
],
[
[
"#### Generally a convolution filter(kernel) is an odd size squared matrix. Here is an illustration of convolution.\n<img src='images/3D_Convolution_Animation.gif'>",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"<table style=\"width:100%; table-layout:fixed;\">\n <tr>\n <td><img width=\"150px\" src=\"images/no_padding_no_strides.gif\"></td>\n <td><img width=\"150px\" src=\"images/arbitrary_padding_no_strides.gif\"></td>\n <td><img width=\"150px\" src=\"images/same_padding_no_strides.gif\"></td>\n <td><img width=\"150px\" src=\"images/full_padding_no_strides.gif\"></td>\n </tr>\n <tr>\n <td>No padding, no strides</td>\n <td>Arbitrary padding, no strides</td>\n <td>Half padding, no strides</td>\n <td>Full padding, no strides</td>\n </tr>\n <tr>\n <td><img width=\"150px\" src=\"images/no_padding_strides.gif\"></td>\n <td><img width=\"150px\" src=\"images/padding_strides.gif\"></td>\n <td><img width=\"150px\" src=\"images/padding_strides_odd.gif\"></td>\n <td></td>\n </tr>\n <tr>\n <td>No padding, strides</td>\n <td>Padding, strides</td>\n <td>Padding, strides (odd)</td>\n <td></td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"#### Implementation of Convolution operation",
"_____no_output_____"
]
],
[
[
"def convolution2d(img, kernel, stride=1, padding=True):\n kernel_size = kernel.shape[0]\n img_row,img_col = img.shape\n if padding:\n pad_value = kernel_size//2\n img = np.pad(img,(pad_value,pad_value),mode='edge')\n else:\n pad_value = 0\n \n filter_half = kernel_size//2\n img_new_row = (img_row-kernel_size+2*pad_value)//stride + 1\n img_new_col = (img_col-kernel_size+2*pad_value)//stride + 1\n img_new = np.zeros((img_new_row,img_new_col))\n \n ii=0\n for i in range(filter_half,img_row-filter_half,stride):\n jj=0\n for j in range(filter_half,img_col-filter_half,stride):\n curr_img = img[i-filter_half:i+filter_half+1,j-filter_half:j+filter_half+1]\n sum_value = np.sum(np.multiply(curr_img,kernel))\n img_new[ii,jj] = sum_value\n jj += 1\n ii += 1\n\n return img_new",
"_____no_output_____"
],
[
"kernel_size = (7,7) #Defining kernel size\nkernel = np.ones(kernel_size) #Initializing a random kernel\nkernel = kernel/np.sum(kernel) #Averaging the Kernel",
"_____no_output_____"
],
[
"img_conv = convolution2d(img,kernel,padding=True)#Applying the convolution operation\nprint(img_conv.shape)\nplt.figure(figsize = (8,8))\nplt.imshow(img_conv,cmap='gray'),plt.show()",
"(800, 800)\n"
]
],
[
[
"By convolving an image using a kernel can give image features. As we can see here that using a random kernel blurs the image. However, there are predefined kernels such as [Sobel](https://www.researchgate.net/profile/Irwin_Sobel/publication/239398674_An_Isotropic_3x3_Image_Gradient_Operator/links/557e06f508aeea18b777c389/An-Isotropic-3x3-Image-Gradient-Operator.pdf?origin=publication_detail) or Prewitt which are used to get the edges of the image. ",
"_____no_output_____"
]
],
[
[
"kernel_x = np.array([[ 1, 2, 1],\n [ 0, 0, 0],\n [-1,-2,-1]]) / 4.0 #Sobel kernel\nkernel_y = np.transpose(kernel_x)\n\noutput_x = convolution2d(img, kernel_x)\noutput_y = convolution2d(img, kernel_y)\n\noutput = np.sqrt(output_x**2 + output_y**2)\noutput /= np.sqrt(2)\n\nfig, (ax1, ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20, 20))\nax1.set_title(\"Original Image\",fontweight='bold')\nax1.imshow(img, cmap=plt.cm.Greys_r)\n\nax2.set_title(\"Horizontal Edges\",fontweight='bold')\nax2.imshow(output_x, cmap=plt.cm.Greys_r)\n\nax3.set_title(\"Vertical Edges\",fontweight='bold')\nax3.imshow(output_y, cmap=plt.cm.Greys_r)\n\nax4.set_title(\"All Edges\",fontweight='bold')\nax4.imshow(output, cmap=plt.cm.Greys_r)\n\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5e4e5c61208eb634495613dbe2f8a95ca2db3f
| 90,604 |
ipynb
|
Jupyter Notebook
|
pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb
|
youyinnn/s2search
|
f965a595386b24ffab0385b860a1028e209fde86
|
[
"Apache-2.0"
] | 2 |
2022-02-07T16:08:04.000Z
|
2022-03-27T19:29:33.000Z
|
pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb
|
youyinnn/s2search
|
f965a595386b24ffab0385b860a1028e209fde86
|
[
"Apache-2.0"
] | null | null | null |
pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb
|
youyinnn/s2search
|
f965a595386b24ffab0385b860a1028e209fde86
|
[
"Apache-2.0"
] | 1 |
2022-03-14T19:44:47.000Z
|
2022-03-14T19:44:47.000Z
| 297.062295 | 43,746 | 0.910114 |
[
[
[
"<a href=\"https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb\" target=\"_blank\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"### Experiment Description\n\n\n\n> This notebook is for experiment \\<exp-cshc\\> and data sample \\<cshc\\>.",
"_____no_output_____"
],
[
"### Initialization",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\nimport numpy as np, sys, os\nin_colab = 'google.colab' in sys.modules\n# fetching code and data(if you are using colab\nif in_colab:\n !rm -rf s2search\n !git clone --branch pipelining https://github.com/youyinnn/s2search.git\n sys.path.insert(1, './s2search')\n %cd s2search/pipelining/exp-cshc/\n\npic_dir = os.path.join('.', 'plot')\nif not os.path.exists(pic_dir):\n os.mkdir(pic_dir)\n",
"_____no_output_____"
]
],
[
[
"### Loading data",
"_____no_output_____"
]
],
[
[
"\nsys.path.insert(1, '../../')\nimport numpy as np, sys, os, pandas as pd\nfrom getting_data import read_conf\nfrom s2search_score_pdp import pdp_based_importance\n\nsample_name = 'cshc'\n\nf_list = [\n 'title', 'abstract', 'venue', 'authors', \n 'year', \n 'n_citations'\n ]\nale_xy = {}\nale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])\n\nfor f in f_list:\n file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')\n if os.path.exists(file):\n nparr = np.load(file)\n quantile = nparr['quantile']\n ale_result = nparr['ale_result']\n values_for_rug = nparr.get('values_for_rug')\n \n ale_xy[f] = {\n 'x': quantile,\n 'y': ale_result,\n 'rug': values_for_rug,\n 'weird': ale_result[len(ale_result) - 1] > 20\n }\n \n if f != 'year' and f != 'n_citations':\n ale_xy[f]['x'] = list(range(len(quantile)))\n ale_xy[f]['numerical'] = False\n else:\n ale_xy[f]['xticks'] = quantile\n ale_xy[f]['numerical'] = True\n \n ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))] \n \n # print(len(ale_result))\n \nprint(ale_metric.sort_values(by=['ale_importance'], ascending=False))\nprint()\n",
" feature_name ale_range ale_importance absolute mean\n1 abstract 17.468598 6.546162 5.761598\n0 title 16.934619 5.800394 4.175670\n2 venue 16.110086 5.094456 2.897120\n3 authors 7.506696 2.373826 1.349949\n4 year 1.484639 0.573632 0.477747\n5 n_citations 1.299431 0.398447 0.262304\n\n"
]
],
[
[
"### ALE Plots",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib.ticker import MaxNLocator\n\ncategorical_plot_conf = [\n {\n 'xlabel': 'Title',\n 'ylabel': 'ALE',\n 'ale_xy': ale_xy['title']\n },\n {\n 'xlabel': 'Abstract',\n 'ale_xy': ale_xy['abstract']\n }, \n {\n 'xlabel': 'Authors',\n 'ale_xy': ale_xy['authors'],\n # 'zoom': {\n # 'inset_axes': [0.3, 0.3, 0.47, 0.47],\n # 'x_limit': [89, 93],\n # 'y_limit': [-1, 14],\n # }\n }, \n {\n 'xlabel': 'Venue',\n 'ale_xy': ale_xy['venue'],\n # 'zoom': {\n # 'inset_axes': [0.3, 0.3, 0.47, 0.47],\n # 'x_limit': [89, 93],\n # 'y_limit': [-1, 13],\n # }\n },\n]\n\nnumerical_plot_conf = [\n {\n 'xlabel': 'Year',\n 'ylabel': 'ALE',\n 'ale_xy': ale_xy['year'],\n # 'zoom': {\n # 'inset_axes': [0.15, 0.4, 0.4, 0.4],\n # 'x_limit': [2019, 2023],\n # 'y_limit': [1.9, 2.1],\n # },\n },\n {\n 'xlabel': 'Citations',\n 'ale_xy': ale_xy['n_citations'],\n # 'zoom': {\n # 'inset_axes': [0.4, 0.65, 0.47, 0.3],\n # 'x_limit': [-1000.0, 12000],\n # 'y_limit': [-0.1, 1.2],\n # },\n },\n]\n\ndef pdp_plot(confs, title):\n fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)\n subplot_idx = 0\n plt.suptitle(title, fontsize=20, fontweight='bold')\n # plt.autoscale(False)\n for conf in confs:\n axes = axes if len(confs) == 1 else axes_list[subplot_idx]\n \n sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)\n\n axes.axhline(y=0, color='k', linestyle='-', lw=0.8)\n axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])\n axes.grid(alpha = 0.4)\n\n # axes.set_ylim([-2, 20])\n axes.xaxis.set_major_locator(MaxNLocator(integer=True))\n axes.yaxis.set_major_locator(MaxNLocator(integer=True))\n \n if ('ylabel' in conf):\n axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)\n \n # if ('xticks' not in conf['ale_xy'].keys()):\n # xAxis.set_ticklabels([])\n\n axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)\n \n if not (conf['ale_xy']['weird']):\n if (conf['ale_xy']['numerical']):\n 
axes.set_ylim([-1.5, 1.5])\n pass\n else:\n axes.set_ylim([-7, 20])\n pass\n \n if 'zoom' in conf:\n axins = axes.inset_axes(conf['zoom']['inset_axes'])\n axins.xaxis.set_major_locator(MaxNLocator(integer=True))\n axins.yaxis.set_major_locator(MaxNLocator(integer=True))\n axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])\n axins.set_xlim(conf['zoom']['x_limit'])\n axins.set_ylim(conf['zoom']['y_limit'])\n axins.grid(alpha=0.3)\n rectpatch, connects = axes.indicate_inset_zoom(axins)\n connects[0].set_visible(False)\n connects[1].set_visible(False)\n connects[2].set_visible(True)\n connects[3].set_visible(True)\n \n subplot_idx += 1\n\npdp_plot(categorical_plot_conf, f\"ALE for {len(categorical_plot_conf)} categorical features\")\n# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')\n\npdp_plot(numerical_plot_conf, f\"ALE for {len(numerical_plot_conf)} numerical features\")\n# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5e52c2f919cbe7bf9b9a97f8e62e6f75dec7b9
| 43,186 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/plugins/sax/sax.ipynb
|
jorgepadilla19/gdsfactory
|
68e1c18257a75d4418279851baea417c8899a165
|
[
"MIT"
] | null | null | null |
docs/notebooks/plugins/sax/sax.ipynb
|
jorgepadilla19/gdsfactory
|
68e1c18257a75d4418279851baea417c8899a165
|
[
"MIT"
] | null | null | null |
docs/notebooks/plugins/sax/sax.ipynb
|
jorgepadilla19/gdsfactory
|
68e1c18257a75d4418279851baea417c8899a165
|
[
"MIT"
] | null | null | null | 29.660714 | 377 | 0.519729 |
[
[
[
"# SAX circuit simulator\n\n[SAX](https://flaport.github.io/sax/) is a circuit solver written in JAX, writing your component models in SAX enables you not only to get the function values but the gradients, this is useful for circuit optimization.\n\nThis tutorial has been adapted from SAX tutorial.\n\nNote that SAX does not work on Windows, so if you use windows you'll need to run from [WSL](https://docs.microsoft.com/en-us/windows/wsl/) or using docker.\n\nYou can install sax with pip\n\n```\n! pip install sax\n```",
"_____no_output_____"
]
],
[
[
"import gdsfactory as gf\nimport gdsfactory.simulation.sax as gs\nimport gdsfactory.simulation.modes as gm\nimport sax",
"_____no_output_____"
]
],
[
[
"## Scatter *dictionaries*\n\nThe core datastructure for specifying scatter parameters in SAX is a dictionary... more specifically a dictionary which maps a port combination (2-tuple) to a scatter parameter (or an array of scatter parameters when considering multiple wavelengths for example). Such a specific dictionary mapping is called ann `SDict` in SAX (`SDict ≈ Dict[Tuple[str,str], float]`).\n\nDictionaries are in fact much better suited for characterizing S-parameters than, say, (jax-)numpy arrays due to the inherent sparse nature of scatter parameters. Moreover, dictonaries allow for string indexing, which makes them much more pleasant to use in this context.\n\n```\no2 o3 \n \\ /\n ========\n / \\\no1 o4 \n```",
"_____no_output_____"
]
],
[
[
"coupling = 0.5\nkappa = coupling ** 0.5\ntau = (1 - coupling) ** 0.5\ncoupler_dict = {\n (\"o1\", \"o4\"): tau,\n (\"o4\", \"o1\"): tau,\n (\"o1\", \"o3\"): 1j * kappa,\n (\"o3\", \"o1\"): 1j * kappa,\n (\"o2\", \"o4\"): 1j * kappa,\n (\"o4\", \"o2\"): 1j * kappa,\n (\"o2\", \"o3\"): tau,\n (\"o3\", \"o2\"): tau,\n}\ncoupler_dict",
"_____no_output_____"
]
],
[
[
" it can still be tedious to specify every port in the circuit manually. SAX therefore offers the `reciprocal` function, which auto-fills the reverse connection if the forward connection exist. For example:",
"_____no_output_____"
]
],
[
[
"coupler_dict = sax.reciprocal(\n {\n (\"o1\", \"o4\"): tau,\n (\"o1\", \"o3\"): 1j * kappa,\n (\"o2\", \"o4\"): 1j * kappa,\n (\"o2\", \"o3\"): tau,\n }\n)\n\ncoupler_dict",
"_____no_output_____"
]
],
[
[
"## Parametrized Models\n\nConstructing such an `SDict` is easy, however, usually we're more interested in having parametrized models for our components. To parametrize the coupler `SDict`, just wrap it in a function to obtain a SAX `Model`, which is a keyword-only function mapping to an `SDict`:",
"_____no_output_____"
]
],
[
[
"def coupler(coupling=0.5) -> sax.SDict:\n kappa = coupling ** 0.5\n tau = (1 - coupling) ** 0.5\n coupler_dict = sax.reciprocal(\n {\n (\"o1\", \"o4\"): tau,\n (\"o1\", \"o3\"): 1j * kappa,\n (\"o2\", \"o4\"): 1j * kappa,\n (\"o2\", \"o3\"): tau,\n }\n )\n return coupler_dict\n\n\ncoupler(coupling=0.3)",
"_____no_output_____"
],
[
"def waveguide(wl=1.55, wl0=1.55, neff=2.34, ng=3.4, length=10.0, loss=0.0) -> sax.SDict:\n dwl = wl - wl0\n dneff_dwl = (ng - neff) / wl0\n neff = neff - dwl * dneff_dwl\n phase = 2 * jnp.pi * neff * length / wl\n transmission = 10 ** (-loss * length / 20) * jnp.exp(1j * phase)\n sdict = sax.reciprocal(\n {\n (\"o1\", \"o2\"): transmission,\n }\n )\n return sdict",
"_____no_output_____"
]
],
[
[
"## Component Models\n\n### Waveguide model\n\nYou can create a dispersive waveguide model in SAX.",
"_____no_output_____"
],
[
"Lets compute the effective index `neff` and group index `ng` for a 1550nm 500nm straight waveguide",
"_____no_output_____"
]
],
[
[
"m = gm.find_mode_dispersion(wavelength=1.55)\nprint(m.neff, m.ng)",
"_____no_output_____"
],
[
"straight_sc = gf.partial(gs.models.straight, neff=m.neff, ng=m.ng)",
"_____no_output_____"
],
[
"gs.plot_model(straight_sc)",
"_____no_output_____"
],
[
"gs.plot_model(straight_sc, phase=True)",
"_____no_output_____"
]
],
[
[
"### Coupler model",
"_____no_output_____"
]
],
[
[
"gm.find_coupling_vs_gap?",
"_____no_output_____"
],
[
"df = gm.find_coupling_vs_gap()\ndf",
"_____no_output_____"
]
],
[
[
"For a 200nm gap the effective index difference `dn` is `0.02`, which means that there is 100% power coupling over 38.2um",
"_____no_output_____"
]
],
[
[
"coupler_sc = gf.partial(gs.models.coupler, dn=0.02, length=0, coupling0=0)\ngs.plot_model(coupler_sc)",
"_____no_output_____"
]
],
[
[
"If we ignore the coupling from the bend `coupling0 = 0` we know that for a 3dB coupling we need half of the `lc` length, which is the length needed to coupler `100%` of power.",
"_____no_output_____"
]
],
[
[
"coupler_sc = gf.partial(gs.models.coupler, dn=0.02, length=38.2/2, coupling0=0)\ngs.plot_model(coupler_sc)",
"_____no_output_____"
]
],
[
[
"### FDTD Sparameters model\n\nYou can also fit a model from Sparameter FDTD simulation data.",
"_____no_output_____"
]
],
[
[
"from gdsfactory.simulation.get_sparameters_path import get_sparameters_path_lumerical\n\nfilepath = get_sparameters_path_lumerical(gf.c.mmi1x2)\nmmi1x2 = gf.partial(gs.read.sdict_from_csv, filepath=filepath)\ngs.plot_model(mmi1x2)",
"_____no_output_____"
]
],
[
[
"## Circuit Models\n\nYou can combine component models into a circuit using `sax.circuit`, which basically creates a new `Model` function:\n\nLets define a [MZI interferometer](https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer)\n\n```\n _________\n | top |\n | |\n lft===| |===rgt\n | |\n |_________|\n bot\n \n o1 top o2\n ----------\no2 o3 o2 o3 \n \\ / \\ /\n ======== ========\n / \\ / \\\no1 lft 04 o1 rgt 04 \n ----------\n o1 bot o2\n```",
"_____no_output_____"
]
],
[
[
"waveguide = straight_sc\ncoupler = coupler_sc\n\nmzi = sax.circuit(\n instances={\n \"lft\": coupler,\n \"top\": waveguide,\n \"bot\": waveguide,\n \"rgt\": coupler,\n },\n connections={\n \"lft,o4\": \"bot,o1\",\n \"bot,o2\": \"rgt,o1\",\n \"lft,o3\": \"top,o1\",\n \"top,o2\": \"rgt,o2\",\n },\n ports={\n \"o1\": \"lft,o1\",\n \"o2\": \"lft,o2\",\n \"o4\": \"rgt,o4\",\n \"o3\": \"rgt,o3\",\n },\n)",
"_____no_output_____"
]
],
[
[
"The `circuit` function just creates a similar function as we created for the waveguide and the coupler, but in stead of taking parameters directly it takes parameter *dictionaries* for each of the instances in the circuit. The keys in these parameter dictionaries should correspond to the keyword arguments of each individual subcomponent. \n\nLet's now do a simulation for the MZI we just constructed:",
"_____no_output_____"
]
],
[
[
"%time mzi()",
"_____no_output_____"
],
[
"import jax\nimport jax.example_libraries.optimizers as opt\nimport jax.numpy as jnp\nimport matplotlib.pyplot as plt # plotting\n\nmzi2 = jax.jit(mzi)",
"_____no_output_____"
],
[
"%time mzi2()",
"_____no_output_____"
],
[
"mzi(top={\"length\": 25.0}, btm={\"length\": 15.0})",
"_____no_output_____"
],
[
"wl = jnp.linspace(1.51, 1.59, 1000)\n%time S = mzi(wl=wl, top={\"length\": 25.0}, btm={\"length\": 15.0})",
"_____no_output_____"
],
[
"plt.plot(wl * 1e3, abs(S[\"o1\", \"o3\"]) ** 2, label='o3')\nplt.plot(wl * 1e3, abs(S[\"o1\", \"o4\"]) ** 2, label='o4')\nplt.ylim(-0.05, 1.05)\nplt.xlabel(\"λ [nm]\")\nplt.ylabel(\"T\")\nplt.ylim(-0.05, 1.05)\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Optimization\n\nYou can optimize an MZI to get T=0 at 1550nm.\nTo do this, you need to define a loss function for the circuit at 1550nm.\nThis function should take the parameters that you want to optimize as positional arguments:",
"_____no_output_____"
]
],
[
[
"@jax.jit\ndef loss(delta_length):\n S = mzi(wl=1.55, top={\"length\": 15.0 + delta_length}, btm={\"length\": 15.0})\n return (abs(S[\"o1\", \"o4\"]) ** 2).mean()",
"_____no_output_____"
],
[
"%time loss(10.0)",
"_____no_output_____"
]
],
[
[
"You can use this loss function to define a grad function which works on the parameters of the loss function:",
"_____no_output_____"
]
],
[
[
"grad = jax.jit(\n jax.grad(\n loss,\n argnums=0, # JAX gradient function for the first positional argument, jitted\n )\n)",
"_____no_output_____"
]
],
[
[
"Next, you need to define a JAX optimizer, which on its own is nothing more than three more functions: \n\n1. an initialization function with which to initialize the optimizer state\n2. an update function which will update the optimizer state (and with it the model parameters). \n3. a function with the model parameters given the optimizer state.",
"_____no_output_____"
]
],
[
[
"initial_delta_length = 10.0\noptim_init, optim_update, optim_params = opt.adam(step_size=0.1)\noptim_state = optim_init(initial_delta_length)",
"_____no_output_____"
],
[
"def train_step(step, optim_state):\n settings = optim_params(optim_state)\n lossvalue = loss(settings)\n gradvalue = grad(settings)\n optim_state = optim_update(step, gradvalue, optim_state)\n return lossvalue, optim_state",
"_____no_output_____"
],
[
"import tqdm\n\nrange_ = tqdm.trange(300)\nfor step in range_:\n lossvalue, optim_state = train_step(step, optim_state)\n range_.set_postfix(loss=f\"{lossvalue:.6f}\")",
"_____no_output_____"
],
[
"delta_length = optim_params(optim_state)\ndelta_length",
"_____no_output_____"
],
[
"S = mzi(wl=wl, top={\"length\": 15.0 + delta_length}, btm={\"length\": 15.0})\nplt.plot(wl * 1e3, abs(S[\"o1\", \"o4\"]) ** 2)\nplt.xlabel(\"λ [nm]\")\nplt.ylabel(\"T\")\nplt.ylim(-0.05, 1.05)\nplt.plot([1550, 1550], [0, 1])\nplt.show()",
"_____no_output_____"
]
],
[
[
"The minimum of the MZI is perfectly located at 1550nm.",
"_____no_output_____"
],
[
"## Model fit\n\nYou can fit a sax model to Sparameter FDTD simulation data.",
"_____no_output_____"
]
],
[
[
"import tqdm\nimport jax\nimport jax.numpy as jnp\nimport jax.example_libraries.optimizers as opt\nimport matplotlib.pyplot as plt\n\nimport gdsfactory as gf\nimport gdsfactory.simulation.modes as gm\nimport gdsfactory.simulation.sax as gs",
"_____no_output_____"
],
[
"gf.config.sparameters_path",
"_____no_output_____"
],
[
"sd = gs.read.sdict_from_csv(gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)",
"_____no_output_____"
],
[
"coupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)",
"_____no_output_____"
],
[
"gs.plot_model(coupler_fdtd)",
"_____no_output_____"
],
[
"gs.plot_model(coupler_fdtd, ports2=('o3', 'o4'))",
"_____no_output_____"
],
[
"modes = gm.find_modes_coupler(gap=0.224)\nmodes",
"_____no_output_____"
],
[
"dn = modes[1].neff - modes[2].neff\ndn",
"_____no_output_____"
],
[
"coupler = gf.partial(gf.simulation.sax.models.coupler, dn=dn, length=20, coupling0=0.3)\ngs.plot_model(coupler)",
"_____no_output_____"
],
[
"coupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)\nS = coupler_fdtd\nT_fdtd = abs(S['o1', 'o3'])**2\nK_fdtd = abs(S['o1', 'o4'])**2\n\[email protected]\ndef loss(coupling0, dn, dn1, dn2, dk1, dk2):\n \"\"\"Returns fit least squares error from a coupler model spectrum\n to the FDTD Sparameter spectrum that we want to fit.\n \n Args:\n coupling0: coupling from the bend raegion\n dn: effective index difference between even and odd mode solver simulations.\n dn1: first derivative of effective index difference vs wavelength.\n dn2: second derivative of effective index difference vs wavelength.\n dk1: first derivative of coupling0 vs wavelength.\n dk2: second derivative of coupling vs wavelength.\n\n .. code::\n\n coupling0/2 coupling coupling0/2\n <-------------><--------------------><---------->\n o2 ________ _______o3\n \\ /\n \\ length /\n ======================= gap\n / \\\n ________/ \\________\n o1 o4\n\n ------------------------> K (coupled power)\n /\n / K\n -----------------------------------> T = 1 - K (transmitted power)\n\n T: o1 -> o4\n K: o1 -> o3\n \"\"\"\n S = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2)\n T_model = abs(S['o1', 'o4'])**2\n K_model = abs(S['o1', 'o3'])**2\n return jnp.abs(T_fdtd-T_model).mean() + jnp.abs(K_fdtd-K_model).mean()\n\n\nloss(coupling0=0.3, dn=0.016, dk1 = 1.2435, dk2 = 5.3022, dn1 = 0.1169, dn2 = 0.4821)",
"_____no_output_____"
],
[
"grad = jax.jit(\n jax.grad(\n loss,\n argnums=0, # JAX gradient function for the first positional argument, jitted\n )\n)",
"_____no_output_____"
],
[
"def train_step(step, optim_state, dn, dn1, dn2, dk1, dk2):\n settings = optim_params(optim_state)\n lossvalue = loss(settings, dn, dn1, dn2, dk1, dk2)\n gradvalue = grad(settings, dn, dn1, dn2, dk1, dk2)\n optim_state = optim_update(step, gradvalue, optim_state)\n return lossvalue, optim_state\n\n\ncoupling0 = 0.3\noptim_init, optim_update, optim_params = opt.adam(step_size=0.1)\noptim_state = optim_init(coupling0)\n\ndn = 0.0166\ndn1 = 0.11\ndn2 = 0.48\ndk1 = 1.2\ndk2 = 5\n\nrange_ = tqdm.trange(300)\nfor step in range_:\n lossvalue, optim_state = train_step(step, optim_state, dn, dn1, dn2, dk1, dk2)\n range_.set_postfix(loss=f\"{lossvalue:.6f}\")",
"_____no_output_____"
],
[
"coupling0_fit = optim_params(optim_state)\ncoupling0_fit",
"_____no_output_____"
],
[
"coupler = gf.partial(gf.simulation.sax.models.coupler, dn=dn, length=20, coupling0=coupling0_fit)\ngs.plot_model(coupler)",
"_____no_output_____"
],
[
"wl = jnp.linspace(1.50, 1.60, 1000)\nS = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0_fit, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2, wl=wl)\nT_model = abs(S['o1', 'o4'])**2\nK_model = abs(S['o1', 'o3'])**2",
"_____no_output_____"
],
[
"coupler_fdtd = S = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3, wl=wl)\nT_fdtd = abs(S['o1', 'o3'])**2\nK_fdtd = abs(S['o1', 'o4'])**2",
"_____no_output_____"
],
[
"plt.plot(wl, T_fdtd, label='fdtd', c='b')\nplt.plot(wl, T_model, label='fit', c='b', ls='-.')\nplt.plot(wl, K_fdtd, label='fdtd', c='r')\nplt.plot(wl, K_model, label='fit', c='r', ls='-.')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Multi-variable optimization\n\nAs you can see we need to fit more than 1 variable `coupling0` to get a good fit.",
"_____no_output_____"
]
],
[
[
"grad = jax.jit(\n jax.grad(\n loss,\n #argnums=0, # JAX gradient function for the first positional argument, jitted\n argnums=[0, 1, 2, 3, 4, 5], # JAX gradient function for all positional arguments, jitted\n )\n)",
"_____no_output_____"
],
[
"def train_step(step, optim_state):\n coupling0, dn, dn1, dn2, dk1, dk2 = optim_params(optim_state)\n lossvalue = loss(coupling0, dn, dn1, dn2, dk1, dk2)\n gradvalue = grad(coupling0, dn, dn1, dn2, dk1, dk2)\n optim_state = optim_update(step, gradvalue, optim_state)\n return lossvalue, optim_state",
"_____no_output_____"
],
[
"coupling0 = 0.3\ndn = 0.0166\ndn1 = 0.11\ndn2 = 0.48\ndk1 = 1.2\ndk2 = 5.0\noptim_init, optim_update, optim_params = opt.adam(step_size=0.01)\noptim_state = optim_init((coupling0, dn, dn1, dn2, dk1, dk2))",
"_____no_output_____"
],
[
"range_ = tqdm.trange(1000)\nfor step in range_:\n lossvalue, optim_state = train_step(step, optim_state)\n range_.set_postfix(loss=f\"{lossvalue:.6f}\")",
"_____no_output_____"
],
[
"coupling0_fit, dn_fit, dn1_fit, dn2_fit, dk1_fit, dk2_fit = optim_params(optim_state)\ncoupling0_fit, dn_fit, dn1_fit, dn2_fit, dk1_fit, dk2_fit",
"_____no_output_____"
],
[
"wl = jnp.linspace(1.5, 1.60, 1000)\ncoupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv',wl=wl, xkey='wavelength_nm', prefix='S', xunits=1e-3)\nS = coupler_fdtd\nT_fdtd = abs(S['o1', 'o3'])**2\nS = gf.simulation.sax.models.coupler(dn=dn_fit,\n length=20,\n coupling0=coupling0_fit,\n dn1=dn1_fit,\n dn2=dn2_fit,\n dk1=dk1_fit,\n dk2=dk2_fit,\n wl=wl)\nT_model = abs(S['o1', 'o4'])**2\nK_model = abs(S['o1', 'o3'])**2\n\nplt.plot(wl, T_fdtd, label='fdtd', c='b')\nplt.plot(wl, T_model, label='fit', c='b', ls='-.')\nplt.plot(wl, K_fdtd, label='fdtd', c='r')\nplt.plot(wl, K_model, label='fit', c='r', ls='-.')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"As you can see trying to fit many parameters do not give you a better fit,\n\nyou have to make sure you fit the right parameters, in this case `dn1`",
"_____no_output_____"
]
],
[
[
"wl = jnp.linspace(1.50, 1.60, 1000)\nS = gf.simulation.sax.models.coupler(dn=dn_fit,\n length=20,\n coupling0=coupling0_fit,\n dn1=dn1_fit-0.045,\n dn2=dn2_fit,\n dk1=dk1_fit,\n dk2=dk2_fit,\n wl=wl)\nT_model = abs(S['o1', 'o4'])**2\nK_model = abs(S['o1', 'o3'])**2\n\nplt.plot(wl, T_fdtd, label='fdtd', c='b')\nplt.plot(wl, T_model, label='fit', c='b', ls='-.')\nplt.plot(wl, K_fdtd, label='fdtd', c='r')\nplt.plot(wl, K_model, label='fit', c='r', ls='-.')\nplt.legend()",
"_____no_output_____"
],
[
"dn = dn_fit\ndn2 = dn2_fit\ndk1 = dk1_fit\ndk2 = dk2_fit\n\[email protected]\ndef loss(dn1):\n \"\"\"Returns fit least squares error from a coupler model spectrum\n to the FDTD Sparameter spectrum that we want to fit.\n \n \"\"\"\n S = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2)\n T_model = jnp.abs(S['o1', 'o4'])**2\n K_model = jnp.abs(S['o1', 'o3'])**2\n return jnp.abs(T_fdtd-T_model).mean() + jnp.abs(K_fdtd-K_model).mean()\n\ngrad = jax.jit(\n jax.grad(\n loss,\n argnums=0, # JAX gradient function for the first positional argument, jitted\n )\n)\n\ndn1 = 0.11\noptim_init, optim_update, optim_params = opt.adam(step_size=0.001)\noptim_state = optim_init(dn1)\n\n\ndef train_step(step, optim_state):\n settings = optim_params(optim_state)\n lossvalue = loss(settings)\n gradvalue = grad(settings)\n optim_state = optim_update(step, gradvalue, optim_state)\n return lossvalue, optim_state\n\nrange_ = tqdm.trange(300)\nfor step in range_:\n lossvalue, optim_state = train_step(step, optim_state)\n range_.set_postfix(loss=f\"{lossvalue:.6f}\")",
"_____no_output_____"
],
[
"dn1_fit = optim_params(optim_state)\ndn1_fit",
"_____no_output_____"
],
[
"wl = jnp.linspace(1.50, 1.60, 1000)\nS = gf.simulation.sax.models.coupler(dn=dn,\n length=20,\n coupling0=coupling0,\n dn1=dn1_fit,\n dn2=dn2,\n dk1=dk1,\n dk2=dk2,\n wl=wl)\nT_model = abs(S['o1', 'o4'])**2\nK_model = abs(S['o1', 'o3'])**2\n\ncoupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3, wl=wl)\nS = coupler_fdtd\nT_fdtd = abs(S['o1', 'o3'])**2\nK_fdtd = abs(S['o1', 'o4'])**2\n\nplt.plot(wl, T_fdtd, label='fdtd', c='b')\nplt.plot(wl, T_model, label='fit', c='b', ls='-.')\nplt.plot(wl, K_fdtd, label='fdtd', c='r')\nplt.plot(wl, K_model, label='fit', c='r', ls='-.')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## Model fit (linear regression)\n\nFor a better fit of the coupler we can build a linear regression model of the coupler with `sklearn`",
"_____no_output_____"
]
],
[
[
"import sax\nimport gdsfactory as gf\nimport gdsfactory.simulation.sax as gs\nimport jax\nimport jax.numpy as jnp\nimport matplotlib.pyplot as plt\nfrom scipy.constants import c\nfrom sklearn.linear_model import LinearRegression",
"_____no_output_____"
],
[
"f = jnp.linspace(c / 1.0e-6, c / 2.0e-6, 500) * 1e-12 # THz\nwl = c / (f * 1e12) * 1e6 # um\n\nfilepath = gf.config.sparameters_path / \"coupler\" / \"coupler_G224n_L20_S220.csv\"\ncoupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath, xkey=\"wavelength_nm\", prefix=\"S\", xunits=1e-3)\nsd = coupler_fdtd(wl=wl)\n\nk = sd[\"o1\", \"o3\"]\nt = sd[\"o1\", \"o4\"]\ns = t + k\na = t - k",
"_____no_output_____"
]
],
[
[
"Lets fit the symmetric (t+k) and antisymmetric (t-k) transmission\n\n### Symmetric",
"_____no_output_____"
]
],
[
[
"plt.plot(wl, jnp.abs(s))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")\nplt.title('symmetric (transmission + coupling)')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(wl, jnp.abs(a))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")\nplt.title('anti-symmetric (transmission - coupling)')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"r = LinearRegression()\nfX = lambda x, _order=8: x[:,None]**(jnp.arange(_order)[None, :]) # artificially create more 'features' (wl**2, wl**3, wl**4, ...)\nX = fX(wl)\nr.fit(X, jnp.abs(s))\nasm, bsm = r.coef_, r.intercept_\nfsm = lambda x: fX(x)@asm + bsm # fit symmetric module fiir\n\nplt.plot(wl, jnp.abs(s))\nplt.plot(wl, fsm(wl))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"r = LinearRegression()\nr.fit(X, jnp.unwrap(jnp.angle(s)))\nasp, bsp = r.coef_, r.intercept_\nfsp = lambda x: fX(x)@asp + bsp # fit symmetric phase\n\nplt.plot(wl, jnp.unwrap(jnp.angle(s)))\nplt.plot(wl, fsp(wl))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Angle [deg]\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"fs = lambda x: fsm(x)*jnp.exp(1j*fsp(x))",
"_____no_output_____"
]
],
[
[
"Lets fit the symmetric (t+k) and antisymmetric (t-k) transmission\n\n### Anti-Symmetric",
"_____no_output_____"
]
],
[
[
"r = LinearRegression()\nr.fit(X, jnp.abs(a))\naam, bam = r.coef_, r.intercept_\nfam = lambda x: fX(x)@aam + bam\n\nplt.plot(wl, jnp.abs(a))\nplt.plot(wl, fam(wl))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"r = LinearRegression()\nr.fit(X, jnp.unwrap(jnp.angle(a)))\naap, bap = r.coef_, r.intercept_\nfap = lambda x: fX(x)@aap + bap\n\nplt.plot(wl, jnp.unwrap(jnp.angle(a)))\nplt.plot(wl, fap(wl))\nplt.grid(True)\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Angle [deg]\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"fa = lambda x: fam(x)*jnp.exp(1j*fap(x))",
"_____no_output_____"
]
],
[
[
"### Total",
"_____no_output_____"
]
],
[
[
"t_ = 0.5 * (fs(wl) + fa(wl))\n\nplt.plot(wl, jnp.abs(t))\nplt.plot(wl, jnp.abs(t_))\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")",
"_____no_output_____"
],
[
"k_ = 0.5 * (fs(wl) - fa(wl))\n\nplt.plot(wl, jnp.abs(k))\nplt.plot(wl, jnp.abs(k_))\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Coupling\")",
"_____no_output_____"
],
[
"@jax.jit\ndef coupler(wl=1.5):\n wl = jnp.asarray(wl)\n wl_shape = wl.shape\n wl = wl.ravel()\n t = (0.5 * (fs(wl) + fa(wl))).reshape(*wl_shape)\n k = (0.5 * (fs(wl) - fa(wl))).reshape(*wl_shape)\n sdict = {\n (\"o1\", \"o4\"): t,\n (\"o1\", \"o3\"): k,\n (\"o2\", \"o3\"): k,\n (\"o2\", \"o4\"): t,\n }\n return sax.reciprocal(sdict)",
"_____no_output_____"
],
[
"f = jnp.linspace(c / 1.0e-6, c / 2.0e-6, 500) * 1e-12 # THz\nwl = c / (f * 1e12) * 1e6 # um\n\nfilepath = gf.config.sparameters_path / \"coupler\" / \"coupler_G224n_L20_S220.csv\"\ncoupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath, xkey=\"wavelength_nm\", prefix=\"S\", xunits=1e-3)\nsd = coupler_fdtd(wl=wl)\nsd_ = coupler(wl=wl)\n\nT = jnp.abs(sd[\"o1\", \"o4\"]) ** 2\nK = jnp.abs(sd[\"o1\", \"o3\"]) ** 2\nT_ = jnp.abs(sd_[\"o1\", \"o4\"]) ** 2\nK_ = jnp.abs(sd_[\"o1\", \"o3\"]) ** 2\ndP = jnp.unwrap(jnp.angle(sd[\"o1\", \"o3\"]) - jnp.angle(sd[\"o1\", \"o4\"]))\ndP_ = jnp.unwrap(jnp.angle(sd_[\"o1\", \"o3\"]) - jnp.angle(sd_[\"o1\", \"o4\"]))\n\nplt.figure(figsize=(12,3))\nplt.plot(wl, T, label=\"T (fdtd)\", c=\"C0\", ls=\":\", lw=\"6\")\nplt.plot(wl, T_, label=\"T (model)\", c=\"C0\")\n\nplt.plot(wl, K, label=\"K (fdtd)\", c=\"C1\", ls=\":\", lw=\"6\")\nplt.plot(wl, K_, label=\"K (model)\", c=\"C1\")\n\nplt.ylim(-0.05, 1.05)\nplt.grid(True)\n\nplt.twinx()\nplt.plot(wl, dP, label=\"ΔΦ (fdtd)\", color=\"C2\", ls=\":\", lw=\"6\")\nplt.plot(wl, dP_, label=\"ΔΦ (model)\", color=\"C2\")\n\nplt.xlabel(\"Frequency [THz]\")\nplt.ylabel(\"Transmission\")\nplt.figlegend(bbox_to_anchor=(1.08, 0.9))\nplt.savefig(\"fdtd_vs_model.png\", bbox_inches=\"tight\")\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a5e6240e8946bbef6a81bcd39e3fac6d323f372
| 32,317 |
ipynb
|
Jupyter Notebook
|
00.ipynb
|
Programmer-RD-AI/Musical-Instruments-Image-Classification
|
50a650119dfcc478efcd3fa8c900fc57d771e41c
|
[
"Apache-2.0"
] | 1 |
2021-10-06T07:54:52.000Z
|
2021-10-06T07:54:52.000Z
|
00.ipynb
|
Programmer-RD-AI/Musical-Instruments-Image-Classification
|
50a650119dfcc478efcd3fa8c900fc57d771e41c
|
[
"Apache-2.0"
] | null | null | null |
00.ipynb
|
Programmer-RD-AI/Musical-Instruments-Image-Classification
|
50a650119dfcc478efcd3fa8c900fc57d771e41c
|
[
"Apache-2.0"
] | null | null | null | 47.948071 | 2,111 | 0.564656 |
[
[
[
"from torchvision.models import *\nimport wandb\nfrom sklearn.model_selection import train_test_split\nimport os,cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom torch.optim import *\nfrom torch.nn import *\nimport torch,torchvision\nfrom tqdm import tqdm\ndevice = 'cuda'\nPROJECT_NAME = 'Musical-Instruments-Image-Classification'",
"_____no_output_____"
],
[
"def load_data():\n data = []\n labels = {}\n labels_r = {}\n idx = 0\n for label in os.listdir('./data/'):\n idx += 1\n labels[label] = idx\n labels_r[idx] = label\n for folder in os.listdir('./data/'):\n for file in os.listdir(f'./data/{folder}/'):\n img = cv2.imread(f'./data/{folder}/{file}')\n img = cv2.resize(img,(56,56))\n img = img / 255.0\n data.append([\n img,\n np.eye(labels[folder]+1,len(labels))[labels[folder]-1]\n ])\n X = []\n y = []\n for d in data:\n X.append(d[0])\n y.append(d[1])\n X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False)\n X_train = torch.from_numpy(np.array(X_train)).to(device).view(-1,3,56,56).float()\n y_train = torch.from_numpy(np.array(y_train)).to(device).float()\n X_test = torch.from_numpy(np.array(X_test)).to(device).view(-1,3,56,56).float()\n y_test = torch.from_numpy(np.array(y_test)).to(device).float()\n return X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data",
"_____no_output_____"
],
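The label encoding in `load_data` above builds a one-hot vector by indexing a row of an identity matrix. A minimal standalone sketch of that trick (the function name here is illustrative, not from the notebook):

```python
import numpy as np

def one_hot(index, num_classes):
    # Row `index` of an identity matrix is the one-hot vector for that class.
    return np.eye(num_classes)[index]

print(one_hot(2, 4))  # third class of four -> [0. 0. 1. 0.]
```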
[
"X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data = load_data()",
"_____no_output_____"
],
[
"# torch.save(labels_r,'labels_r.pt')\n# torch.save(labels,'labels.pt')\n# torch.save(X_train,'X_train.pth')\n# torch.save(y_train,'y_train.pth')\n# torch.save(X_test,'X_test.pth')\n# torch.save(y_test,'y_test.pth')\n# torch.save(labels_r,'labels_r.pth')\n# torch.save(labels,'labels.pth')",
"_____no_output_____"
],
[
"def get_accuracy(model,X,y):\n preds = model(X)\n correct = 0\n total = 0\n for pred,yb in zip(preds,y):\n pred = int(torch.argmax(pred))\n yb = int(torch.argmax(yb))\n if pred == yb:\n correct += 1\n total += 1\n acc = round(correct/total,3)*100\n return acc",
"_____no_output_____"
],
[
"def get_loss(model,X,y,criterion):\n preds = model(X)\n loss = criterion(preds,y)\n return loss.item()",
"_____no_output_____"
],
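`get_accuracy` above compares arg-max indices of predictions against one-hot targets. A framework-free sketch of the same idea, assuming `preds` and `targets` are 2-D arrays (sample values here are made up):

```python
import numpy as np

def argmax_accuracy(preds, targets):
    # Fraction of rows where the predicted class index matches the target class index.
    pred_idx = preds.argmax(axis=1)
    true_idx = targets.argmax(axis=1)
    return float((pred_idx == true_idx).mean())

preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
targets = np.array([[1, 0], [0, 1], [1, 0]])
print(argmax_accuracy(preds, targets))  # -> 1.0
```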
[
"model = resnet18().to(device)\nmodel.fc = Linear(512,len(labels))\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32",
"_____no_output_____"
],
[
"wandb.init(project=PROJECT_NAME,name='baseline')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n model.to(device)\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()",
"_____no_output_____"
],
[
"class Model(Module):\n    def __init__(self):\n        super().__init__()\n        self.max_pool2d = MaxPool2d((2,2),(2,2))\n        self.activation = ReLU()\n        self.conv1 = Conv2d(3,7,(5,5))\n        self.conv2 = Conv2d(7,14,(5,5))\n        self.conv2bn = BatchNorm2d(14)\n        self.conv3 = Conv2d(14,21,(5,5))\n        self.linear1 = Linear(21*3*3,256)\n        self.linear2 = Linear(256,512)\n        self.linear2bn = BatchNorm1d(512)\n        self.linear3 = Linear(512,256)\n        self.output = Linear(256,len(labels))\n    \n    def forward(self,X):\n        preds = self.max_pool2d(self.activation(self.conv1(X)))\n        preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds))))\n        preds = self.max_pool2d(self.activation(self.conv3(preds)))\n        preds = preds.view(-1,21*3*3)\n        preds = self.activation(self.linear1(preds))\n        preds = self.activation(self.linear2bn(self.linear2(preds)))\n        preds = self.activation(self.linear3(preds))\n        preds = self.output(preds)\n        return preds",
"_____no_output_____"
],
[
"model = Model().to(device)\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32",
"_____no_output_____"
],
[
"wandb.init(project=PROJECT_NAME,name='baseline')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n model.to(device)\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()",
"\u001b[34m\u001b[1mwandb\u001b[0m: wandb version 0.12.4 is available! To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a5e625530cd7f5073176cdf4101d22014cae4c0
| 1,290 |
ipynb
|
Jupyter Notebook
|
pset_dicts/dict_manip_ops/solutions/nb/p1.ipynb
|
mottaquikarim/pydev-psets
|
9749e0d216ee0a5c586d0d3013ef481cc21dee27
|
[
"MIT"
] | 5 |
2019-04-08T20:05:37.000Z
|
2019-12-04T20:48:45.000Z
|
pset_dicts/dict_manip_ops/solutions/nb/p1.ipynb
|
mottaquikarim/pydev-psets
|
9749e0d216ee0a5c586d0d3013ef481cc21dee27
|
[
"MIT"
] | 8 |
2019-04-15T15:16:05.000Z
|
2022-02-12T10:33:32.000Z
|
pset_dicts/dict_manip_ops/solutions/nb/p1.ipynb
|
mottaquikarim/pydev-psets
|
9749e0d216ee0a5c586d0d3013ef481cc21dee27
|
[
"MIT"
] | 2 |
2019-04-10T00:14:42.000Z
|
2020-02-26T20:35:21.000Z
| 30.714286 | 347 | 0.55969 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a5e7da45d44cb6e20e8d06943ca6075f0f122de
| 7,278 |
ipynb
|
Jupyter Notebook
|
notebooks/4-your-own-recordings.ipynb
|
jrgauthier01/speech-commands-oow2018
|
084eb7349e9d17e98737a3f7536bac50f9b7df5e
|
[
"MIT"
] | 1 |
2020-03-01T22:09:15.000Z
|
2020-03-01T22:09:15.000Z
|
notebooks/4-your-own-recordings.ipynb
|
jrgauthier01/speech-commands-oow2018
|
084eb7349e9d17e98737a3f7536bac50f9b7df5e
|
[
"MIT"
] | null | null | null |
notebooks/4-your-own-recordings.ipynb
|
jrgauthier01/speech-commands-oow2018
|
084eb7349e9d17e98737a3f7536bac50f9b7df5e
|
[
"MIT"
] | null | null | null | 31.102564 | 200 | 0.570624 |
[
[
[
"# (Optional) Testing the Function Endpoint with your Own Audio Clips\n",
"_____no_output_____"
],
[
"Instead of using pre-recorded clips, this notebook shows you how to invoke the deployed Function \nwith your **own** audio clips. \n\nIn the cells below, we will use the [PyAudio library](https://pypi.org/project/PyAudio/) to record a short 1-second clip. We will then submit \nthat short clip to the Function endpoint on Oracle Functions. **Make sure PyAudio is installed on your laptop** before running this notebook. \n\nThe helper function defined below will record a 1-sec audio clip when executed. Speak into the microphone \nof your computer and say one of the words `cat`, `eight`, `right`. \n\nI'd recommend double-checking that you are not muted and that you are using the internal computer mic. No \nheadset.",
"_____no_output_____"
]
],
[
[
"# we will use pyaudio and wave in the \n# bottom half of this notebook. \nimport pyaudio\nimport wave\n# IPython.display is used below for audio playback (ipd.Audio)\nimport IPython.display as ipd",
"_____no_output_____"
],
[
"print(pyaudio.__version__) ",
"_____no_output_____"
],
[
"def record_wave(duration=1.0, output_wave='./output.wav'): \n    \"\"\"Using the pyaudio library, this function will record an audio clip of a given duration. \n    \n    Args: \n    - duration (float): duration of the recording in seconds \n    - output_wave (str) : filename of the wav file that contains your recording \n    \n    Returns: \n    - frames : a list containing the recorded waveform\n    \"\"\"\n    \n    # number of frames per buffer\n    frames_perbuff = 2048 \n    # 16 bit int\n    format = pyaudio.paInt16\n    # mono sound\n    channels = 1 \n    # Sampling rate -- CD quality (44.1 kHz). Standard \n    # for most recording devices. \n    sampling_rate = 44100 \n    # frames contain the waveform data: \n    frames = []\n    # number of buffer chunks: \n    nchunks = int(duration * sampling_rate / frames_perbuff)\n\n    p = pyaudio.PyAudio()\n\n    stream = p.open(format=format,\n                    channels=channels,\n                    rate=sampling_rate,\n                    input=True,\n                    frames_per_buffer=frames_perbuff) \n    \n    print(\"RECORDING STARTED \")\n    for i in range(0, nchunks):\n        data = stream.read(frames_perbuff)\n        frames.append(data)\n    print(\"RECORDING ENDED\")\n    \n    stream.stop_stream()\n    stream.close()\n    p.terminate()\n    \n    # Write the audio clip to disk as a .wav file: \n    wf = wave.open(output_wave, 'wb')\n    wf.setnchannels(channels)\n    wf.setsampwidth(p.get_sample_size(format))\n    wf.setframerate(sampling_rate)\n    wf.writeframes(b''.join(frames))\n    wf.close()\n    \n    # Return the recorded frames, as promised in the docstring.\n    return frames",
"_____no_output_____"
],
[
"# let's record your own, 1-sec clip\nmy_own_clip = \"./my_clip.wav\"\nframes = record_wave(output_wave=my_own_clip)\n\n# Playback \nipd.Audio(\"./my_clip.wav\")",
"_____no_output_____"
]
],
[
[
"Looks good? Now let's try to send that clip to our model API endpoint. We will repeat the same process we adopted when we submitted pre-recorded clips.",
"_____no_output_____"
]
],
[
[
"import json\n\n# oci: \nimport oci \nfrom oci.config import from_file\nfrom oci import pagination\nimport oci.functions as functions\nfrom oci.functions import FunctionsManagementClient, FunctionsInvokeClient",
"_____no_output_____"
],
[
"# Lets specify the location of our OCI configuration file: \noci_config = from_file(\"/home/datascience/block_storage/.oci/config\")\n\n# Lets specify the compartment OCID, and the application + function names: \ncompartment_id = 'ocid1.compartment.oc1..aaaaaaaafl3avkal72rrwuy4m5rumpwh7r4axejjwq5hvwjy4h4uoyi7kzyq' \napp_name = 'machine-learning-models'\nfn_name = 'speech-commands'",
"_____no_output_____"
],
[
"fn_management_client = FunctionsManagementClient(oci_config)\n\napp_result = pagination.list_call_get_all_results(\n fn_management_client.list_applications,\n compartment_id,\n display_name=app_name\n )\n\nfn_result = pagination.list_call_get_all_results(\n fn_management_client.list_functions,\n app_result.data[0].id,\n display_name=fn_name\n )\n\ninvoke_client = FunctionsInvokeClient(oci_config, service_endpoint=fn_result.data[0].invoke_endpoint)",
"_____no_output_____"
],
[
"import librosa\n\n# here we need to be careful. `my_own_clip` was recorded at a 44.1 kHz sampling rate. \n# Yet the training sample has data at a 16 kHz rate. To ensure that we feed data of the same \n# size, we will downsample the data to a 16 kHz rate (sr=16000)\nwaveform, _ = librosa.load(my_own_clip, mono=True, sr=16000)",
"_____no_output_____"
]
],
[
[
"Below we call the deployed Function. Note that the first call could take 60 sec. or more. This is due to the cold start problem of Function. Subsequent calls are much faster. Typically < 1 sec. ",
"_____no_output_____"
]
],
[
[
"%%time\n\nresp = invoke_client.invoke_function(fn_result.data[0].id, \n invoke_function_body=json.dumps({\"input\": waveform.tolist()}))\nprint(resp.data.text)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a5e87ae7f2a33b0bc40ff7208ca21b5d35f933a
| 36,845 |
ipynb
|
Jupyter Notebook
|
Sesion_08_JSON.ipynb
|
UNADCdD/Python-intro
|
15aa68b1804ffe72a27908278f11ce20e2555870
|
[
"MIT"
] | 1 |
2021-05-21T03:44:09.000Z
|
2021-05-21T03:44:09.000Z
|
Sesion_08_JSON.ipynb
|
UNADCdD/Python-intro
|
15aa68b1804ffe72a27908278f11ce20e2555870
|
[
"MIT"
] | null | null | null |
Sesion_08_JSON.ipynb
|
UNADCdD/Python-intro
|
15aa68b1804ffe72a27908278f11ce20e2555870
|
[
"MIT"
] | 6 |
2020-04-09T23:08:16.000Z
|
2021-01-23T19:05:03.000Z
| 30.910235 | 735 | 0.580947 |
[
[
[
"\n\n<font size=3 color=\"midnightblue\" face=\"arial\">\n<h1 align=\"center\">Escuela de Ciencias Básicas, Tecnología e Ingeniería</h1>\n</font>\n\n<font size=3 color=\"navy\" face=\"arial\">\n<h1 align=\"center\">ECBTI</h1>\n</font>\n\n<font size=2 color=\"darkorange\" face=\"arial\">\n<h1 align=\"center\">Course:</h1>\n</font>\n\n<font size=2 color=\"navy\" face=\"arial\">\n<h1 align=\"center\">Introduction to the Python Programming Language</h1>\n</font>\n\n<font size=1 color=\"darkorange\" face=\"arial\">\n<h1 align=\"center\">February 2020</h1>\n</font>",
"_____no_output_____"
],
[
"<h2 align=\"center\">Session 08 - Working with JSON Files</h2> ",
"_____no_output_____"
],
[
"## Introduction\n\n\n`JSON` (*JavaScript Object Notation*) is a lightweight data-interchange format that humans can easily read and write. It is also easy for computers to parse and generate. `JSON` is based on the [JavaScript](https://www.javascript.com/ 'JavaScript') programming language. It is a language-independent text format that can be used from `Python`, `Perl`, and other languages. It is mainly used to transmit data between a server and web applications. `JSON` is built on two structures:\n\n- A collection of name/value pairs. This is realized as an object, record, dictionary, hash table, keyed list, or associative array.\n\n\n- An ordered list of values. This is realized as an array, vector, list, or sequence.",
"_____no_output_____"
],
[
"## JSON in Python\n\nThere are a number of packages that support `JSON` in `Python`, such as [metamagic.json](https://pypi.org/project/metamagic.json/ 'metamagic.json'), [jyson](http://opensource.xhaus.com/projects/jyson/wiki 'jyson'), [simplejson](https://simplejson.readthedocs.io/en/latest/ 'simplejson'), [Yajl-Py](http://pykler.github.io/yajl-py/ 'Yajl-Py'), [ultrajson](https://github.com/esnme/ultrajson 'ultrajson') and [json](https://docs.python.org/3.6/library/json.html 'json'). In this course we will use [json](https://docs.python.org/3.6/library/json.html 'json'), which is natively supported by `Python`. We can use [this site](https://jsonlint.com/ 'jsonlint'), which provides a `JSON` interface to validate our `JSON` data.",
"_____no_output_____"
],
[
"Below is an example of `JSON` data.",
"_____no_output_____"
]
],
[
[
"{\n \"nombre\": \"Jaime\",\n \"apellido\": \"Perez\",\n \"aficiones\": [\"correr\", \"ciclismo\", \"caminar\"],\n \"edad\": 35,\n \"hijos\": [\n {\n \"nombre\": \"Pedro\",\n \"edad\": 6\n },\n {\n \"nombre\": \"Alicia\",\n \"edad\": 8\n }\n ]\n}",
"_____no_output_____"
]
],
[
[
"As you can see, `JSON` supports primitive types, strings and numbers, as well as nested lists and objects.\n\nNote that the data representation is very similar to `Python` dictionaries.",
"_____no_output_____"
]
],
[
[
"{\n \"articulo\": [\n {\n \"id\":\"01\",\n \"lenguaje\": \"JSON\",\n \"edicion\": \"primera\",\n \"autor\": \"Derrick Mwiti\"\n },\n\n {\n \"id\":\"02\",\n \"lenguaje\": \"Python\",\n \"edicion\": \"segunda\",\n \"autor\": \"Derrick Mwiti\"\n }\n ],\n \"blog\":[\n {\n \"nombre\": \"Datacamp\",\n \"URL\":\"datacamp.com\"\n }\n ]\n}",
"_____no_output_____"
]
],
[
[
"Let's rewrite it in a more familiar form",
"_____no_output_____"
]
],
[
[
"{\"articulo\":[{\"id\":\"01\",\"lenguaje\": \"JSON\",\"edicion\": \"primera\",\"author\": \"Derrick Mwiti\"},\n {\"id\":\"02\",\"lenguaje\": \"Python\",\"edicion\": \"segunda\",\"autor\": \"Derrick Mwiti\"}],\n \"blog\":[{\"nombre\": \"Datacamp\",\"URL\":\"datacamp.com\"}]}",
"_____no_output_____"
]
],
[
[
"## Native `JSON` in `Python`\n\n`Python` comes with a built-in package called `json` for encoding and decoding `JSON` data.",
"_____no_output_____"
]
],
[
[
"import json",
"_____no_output_____"
]
],
[
[
"## A bit of vocabulary",
"_____no_output_____"
],
[
"The process of encoding `JSON` is usually called serialization. This term refers to transforming data into a series of bytes (hence, serial) to be stored or transmitted across a network. You may also hear the term marshaling, but that is another discussion. Naturally, deserialization is the reciprocal process of decoding data that has been stored or delivered in the `JSON` standard.\n\nWhat we are talking about here is reading and writing. Think of it like this: encoding is for writing data to disk, while decoding is for reading data into memory.",
"_____no_output_____"
],
[
"### Serializing to `JSON`\n\nWhat happens after a computer processes lots of information? It needs to take a data dump. Accordingly, the `json` library exposes the `dump()` method for writing data to files. There is also a `dumps()` method (pronounced \"*dump-s*\") for writing to a `Python` string.\n\nSimple `Python` objects are translated to `JSON` according to a fairly intuitive conversion.",
"_____no_output_____"
],
[
"Let's compare the data types in `Python` and `JSON`.\n\n|**Python** | **JSON** |\n|:---------:|:----------------:|\n|dict |object |\n|list|array |\n|tuple|\tarray|\n|str|\tstring|\n|int|\tnumber|\n|float|\tnumber|\n|True|\ttrue|\n|False|\tfalse|\n|None| null|\t",
"_____no_output_____"
],
[
"### Serialization, example\n\nSuppose we have a `Python` object in memory that looks something like this:",
"_____no_output_____"
]
],
[
[
"data = {\n \"president\": {\n \"name\": \"Zaphod Beeblebrox\",\n \"species\": \"Betelgeusian\"\n }\n}",
"_____no_output_____"
],
[
"print(type(data))",
"_____no_output_____"
]
],
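The Python-to-JSON conversion table above can be checked directly with `json.dumps`; a quick sketch:

```python
import json

# Each Python value maps to the JSON form from the conversion table.
print(json.dumps({"a": 1}))   # object
print(json.dumps([1, 2, 3]))  # array
print(json.dumps((1, 2)))     # tuples also become arrays
print(json.dumps(True))       # true
print(json.dumps(None))       # null
```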
[
[
"It is essential to save this information to disk, so the task is to write it to a file.\n\nWith `Python`'s context manager, you can create a file called `data_file.json` and open it in write mode. (`JSON` files conveniently end in a `.json` extension.)",
"_____no_output_____"
]
],
[
[
"with open(\"data_file.json\", \"w\") as write_file:\n json.dump(data, write_file)",
"_____no_output_____"
]
],
[
[
"Note that `dump()` takes two positional arguments: \n\n1. the data object to be serialized, and \n\n\n2. the file-like object the bytes will be written to.\n\n\nOr, if you were so inclined to keep using this serialized `JSON` data in your program, you could write it to a native `Python` `str` object.",
"_____no_output_____"
]
],
[
[
"json_string = json.dumps(data)",
"_____no_output_____"
],
[
"print(type(json_string))",
"_____no_output_____"
]
],
[
[
"Note that the file-like object is absent since you aren't writing to disk. Other than that, `dumps()` is just like `dump()`.\n\nA `JSON` object has been created and is ready to be worked with.",
"_____no_output_____"
],
[
"### Some useful keyword arguments\n\nRemember, `JSON` is meant to be easily readable by humans, but readable syntax isn't enough if it's all squished together. Besides, you probably have a different programming style than the one presented here, and you may find it easier to read code when it is formatted to your taste.\n\n***NOTE:*** The `dump()` and `dumps()` methods use the same keyword arguments.\n\nThe first option most people want to change is whitespace. You can use the indent keyword argument to specify the indentation size for nested structures. Check out the difference for yourself using the data we defined above and running the following commands in a console:",
"_____no_output_____"
]
],
[
[
"json.dumps(data)",
"_____no_output_____"
],
[
"json.dumps(data, indent=4)",
"_____no_output_____"
]
],
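Besides `indent`, compact output can be requested via the `separators` keyword; a short sketch contrasting the two on the same data (`sort_keys` is also shown):

```python
import json

data = {"president": {"name": "Zaphod Beeblebrox", "species": "Betelgeusian"}}

# Default separators are (", ", ": "); (",", ":") yields the most compact JSON.
compact = json.dumps(data, separators=(",", ":"), sort_keys=True)
pretty = json.dumps(data, indent=4, sort_keys=True)
print(compact)
print(pretty)
```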
[
[
"Another formatting option is the separators keyword argument. By default, this is a 2-tuple of the separator strings (`\", \"`, `\": \"`), but a common alternative for compact `JSON` is (`\",\"`, `\":\"`). Look at the example `JSON` again to see where these separators come into play.\n\nThere are others, such as `sort_keys`. You can find a complete list in the official [documentation](https://docs.python.org/3/library/json.html#basic-usage).",
"_____no_output_____"
],
[
"### Deserializing JSON\n\nWe have done some very basic `JSON` work; now it's time to whip it into shape. In the `json` library you will find `load()` and `loads()` for turning `JSON`-encoded data into `Python` objects.\n\nJust like serialization, there is a simple conversion table for deserialization, though you can probably already guess what it looks like.\n\n|**JSON** | **Python** |\n|:---------:|:----------------:|\n|object |dict |\n|array |list|\n|array|tuple\t|\n|string|str\t|\n|number|int\t|\n|number|float\t|\n|true|True\t|\n|false|False\t|\n|null|None |\t",
"_____no_output_____"
],
[
"Technically, this conversion isn't a perfect inverse of the serialization table. Basically, that means if you encode an object now and then decode it again later, you may not get back exactly the same object. I imagine it's a bit like teleportation: break my molecules down over here and put them back together over there. Am I still the same person?\n\nIn reality, it's probably more like having one friend translate something into Japanese and another friend translate it back into English. Anyway, the simplest example would be encoding a tuple and getting back a list after decoding, like so:",
"_____no_output_____"
]
],
[
[
"blackjack_hand = (8, \"Q\")\nencoded_hand = json.dumps(blackjack_hand)\ndecoded_hand = json.loads(encoded_hand)",
"_____no_output_____"
],
[
"blackjack_hand == decoded_hand",
"_____no_output_____"
],
[
"type(blackjack_hand)",
"_____no_output_____"
],
[
"type(decoded_hand)",
"_____no_output_____"
],
[
"blackjack_hand == tuple(decoded_hand)",
"_____no_output_____"
]
],
[
[
"### Deserialization, example\n\nThis time, imagine you have some data stored on disk that you would like to manipulate in memory. You will still use the context manager, but this time you will open the existing data file `data_file.json` in read mode.",
"_____no_output_____"
]
],
[
[
"with open(\"data_file.json\", \"r\") as read_file:\n data = json.load(read_file)",
"_____no_output_____"
]
],
[
[
"So far things are pretty straightforward, but keep in mind that the result of this method could return any of the allowed data types from the conversion table. This only matters if you are loading data you haven't seen before. In most cases, the root object will be a dictionary or a list.\n\nIf you have pulled `JSON` data from another program or have obtained a string of `JSON`-formatted data in `Python`, you can easily deserialize it with `loads()`, which naturally loads from a string:",
"_____no_output_____"
]
],
[
[
"my_json_string = \"\"\"{\n \"article\": [\n\n {\n \"id\":\"01\",\n \"language\": \"JSON\",\n \"edition\": \"first\",\n \"author\": \"Derrick Mwiti\"\n },\n\n {\n \"id\":\"02\",\n \"language\": \"Python\",\n \"edition\": \"second\",\n \"author\": \"Derrick Mwiti\"\n }\n ],\n\n \"blog\":[\n {\n \"name\": \"Datacamp\",\n \"URL\":\"datacamp.com\"\n }\n ]\n}\n\"\"\"\nto_python = json.loads(my_json_string)",
"_____no_output_____"
],
[
"print(type(to_python))",
"_____no_output_____"
]
],
[
[
"We are now working with pure `JSON`. What you do from here on is up to you, so be very mindful of what you want to do, what you actually do, and the result you get.",
"_____no_output_____"
],
[
"## A real-world example\n\nFor this introductory example, we will use [JSONPlaceholder](https://jsonplaceholder.typicode.com/ \"JSONPlaceholder\"), a great source of fake `JSON` data for practice purposes.\n\nFirst create a script file called `scratch.py`, or whatever you want to call it.\n\nYou will need to make an `API` request to the `JSONPlaceholder` service, so just use the requests package to do the heavy lifting. Add these imports at the top of your file:",
"_____no_output_____"
]
],
[
[
"import json\nimport requests",
"_____no_output_____"
]
],
[
[
"Now we'll make a request to the `JSONPlaceholder` `API`. If you are not familiar with requests, there is a handy `json()` method that will do all the work, but you can practice using the `json` library to deserialize the text attribute of the response object. It should look something like this:",
"_____no_output_____"
]
],
[
[
"response = requests.get(\"https://jsonplaceholder.typicode.com/todos\")\ntodos = json.loads(response.text)",
"_____no_output_____"
]
],
[
[
"To check that the above worked (or at least raised no errors), inspect the type of `todos` and then look at the first 10 items in the list.",
"_____no_output_____"
]
],
[
[
"todos == response.json()",
"_____no_output_____"
],
[
"type(todos)",
"_____no_output_____"
],
[
"todos[:10]",
"_____no_output_____"
],
[
"len(todos)",
"_____no_output_____"
]
],
[
[
"You can see the structure of the data by viewing the file in a browser, but here is a sample of part of it:",
"_____no_output_____"
]
],
[
[
"# part of the JSON file - TODO\n\n{\n  \"userId\": 1,\n  \"id\": 1,\n  \"title\": \"delectus aut autem\",\n  \"completed\": false\n}",
"_____no_output_____"
]
],
[
[
"There are multiple users, each with a unique userId, and each task has a boolean completed property. Can you determine which users have completed the most tasks?",
"_____no_output_____"
]
],
[
[
"# Map of userId to number of completed TODOs for each user\ntodos_by_user = {}\n\n# Increment the completed-TODO count for each user.\nfor todo in todos:\n    if todo[\"completed\"]:\n        try:\n            # Increment the existing user's count.\n            todos_by_user[todo[\"userId\"]] += 1\n        except KeyError:\n            # This user has not been seen; start their count at 1.\n            todos_by_user[todo[\"userId\"]] = 1\n\n# Create a sorted list of (userId, num_complete) pairs.\ntop_users = sorted(todos_by_user.items(), \n                   key=lambda x: x[1], reverse=True)\n\n# Get the maximum number of completed TODOs.\nmax_complete = top_users[0][1]\n\n# Create a list of all users who have completed the maximum number of TODOs.\nusers = []\nfor user, num_complete in top_users:\n    if num_complete < max_complete:\n        break\n    users.append(str(user))\n\nmax_users = \" and \".join(users)",
"_____no_output_____"
]
],
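The counting-and-sorting logic above can be exercised on a tiny hand-made todos list; a self-contained sketch (the sample data here is made up, and `dict.get` replaces the try/except for brevity):

```python
todos = [
    {"userId": 1, "completed": True},
    {"userId": 1, "completed": True},
    {"userId": 2, "completed": True},
    {"userId": 2, "completed": False},
]

# Count completed TODOs per user.
todos_by_user = {}
for todo in todos:
    if todo["completed"]:
        todos_by_user[todo["userId"]] = todos_by_user.get(todo["userId"], 0) + 1

# Sort users by completed count, highest first.
top_users = sorted(todos_by_user.items(), key=lambda x: x[1], reverse=True)
print(top_users)  # -> [(1, 2), (2, 1)]
```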
[
[
"Now you can manipulate the `JSON` data like a normal `Python` object.\n\nRunning the script produces the following results:",
"_____no_output_____"
]
],
[
[
"s = \"s\" if len(users) > 1 else \"\"\nprint(f\"user{s} {max_users} completed {max_complete} TODOs\")",
"_____no_output_____"
]
],
[
[
"Moving on, we will create a `JSON` file containing the completed TODOs for each of the users who completed the maximum number of TODOs.\n\nAll you need to do is filter todos and write the resulting list to a file. We'll call the output file `filtered_data_file.json`. There are many ways to do this, but here is one:",
"_____no_output_____"
]
],
[
[
"# Define a function to filter out completed TODOs of users with the maximum number of completed TODOs.\ndef keep(todo):\n    is_complete = todo[\"completed\"]\n    has_max_count = str(todo[\"userId\"]) in users\n    return is_complete and has_max_count\n\n# Write the filtered TODOs to a file.\nwith open(\"filtered_data_file.json\", \"w\") as data_file:\n    filtered_todos = list(filter(keep, todos))\n    json.dump(filtered_todos, data_file, indent=2)",
"_____no_output_____"
]
],
[
[
"All the data you don't need has been filtered out, and what you do need has been saved to a new file! Rerun the script and check `filtered_data_file.json` to verify everything worked. It will be in the same directory as `scratch.py` when you run it.",
"_____no_output_____"
]
],
[
[
"s = \"s\" if len(users) > 1 else \"\"\nprint(f\"user{s} {max_users} completed {max_complete} TODOs\")",
"_____no_output_____"
]
],
[
[
"So far we have covered the basics of manipulating `JSON` data. Now let's go a bit deeper.",
"_____no_output_____"
],
[
"## Encoding and decoding custom `Python` objects\n\nLet's look at an example class from a very famous game (Dungeons & Dragons). What happens when we try to serialize that application's `Elf` class?",
"_____no_output_____"
]
],
[
[
"class Elf:\n def __init__(self, level, ability_scores=None):\n self.level = level\n self.ability_scores = {\n \"str\": 11, \"dex\": 12, \"con\": 10,\n \"int\": 16, \"wis\": 14, \"cha\": 13\n } if ability_scores is None else ability_scores\n self.hp = 10 + self.ability_scores[\"con\"]",
"_____no_output_____"
],
[
"elf = Elf(level=4)\njson.dumps(elf)",
"_____no_output_____"
]
],
[
[
"`Python` tells us that `Elf` is not serializable",
"_____no_output_____"
],
[
"Although the `json` module can handle most built-in `Python` types, it doesn't understand how to encode custom data types by default. It's like trying to fit a square peg in a round hole: you need a buzzsaw and parental supervision.",
"_____no_output_____"
],
[
"## Simplifying data structures\n\nHow do we deal with more complex data structures? We could try to encode and decode the `JSON` \"*by hand*\", but there is a slightly cleverer solution that will save us some work. Instead of going straight from the custom data type to `JSON`, you can throw in an intermediary step.\n\nAll you need to do is represent the data in terms of the built-in types `json` already understands. Essentially, you translate the more complex object into a simpler representation, which the `json` module then translates into `JSON`. It's like the transitive property in mathematics: if `A = B` and `B = C`, then `A = C`.\n\nTo get the hang of this, you'll need a complex object to play with. You can use any custom class you like, but `Python` has a built-in type called `complex` for representing complex numbers, and it isn't serializable by default.",
"_____no_output_____"
]
],
[
[
"z = 3 + 8j",
"_____no_output_____"
],
[
"type(z)",
"_____no_output_____"
],
[
"json.dumps(z)",
"_____no_output_____"
]
],
[
[
"A good question to ask when working with custom types is: what is the minimum amount of information needed to recreate this object? In the case of complex numbers, you only need to know the real and imaginary parts, which you can access as attributes on the `complex` object:",
"_____no_output_____"
]
],
[
[
"z.real",
"_____no_output_____"
],
[
"z.imag",
"_____no_output_____"
]
],
[
[
"Passing those same numbers to a `complex` constructor is enough to satisfy the `__eq__` comparison operator:",
"_____no_output_____"
]
],
[
[
"complex(3, 8) == z",
"_____no_output_____"
]
],
[
[
"Breaking custom data types down into their essential components is critical to both the serialization and deserialization processes.",
"_____no_output_____"
],
[
"## Encoding custom types\n\nTo translate a custom object into `JSON`, all you need to do is provide an encoding function to the `default` parameter of the `dump()` method. The `json` module will call this function on any object that is not natively serializable. Here is a simple encoding function you can use for practice (you can find information about the `isinstance` function [here](https://www.programiz.com/python-programming/methods/built-in/isinstance \"isinstance\")):",
"_____no_output_____"
]
],
[
[
"def encode_complex(z):\n if isinstance(z, complex):\n return (z.real, z.imag)\n else:\n type_name = z.__class__.__name__\n raise TypeError(f\"Object of type '{type_name}' is not JSON serializable\")",
"_____no_output_____"
]
],
[
[
"Note that the function is expected to raise a `TypeError` if it does not get the kind of object it was expecting. This way, you avoid accidentally serializing any `Elves`. Now we can try encoding complex objects.",
"_____no_output_____"
]
],
[
[
"json.dumps(9 + 5j, default=encode_complex)",
"_____no_output_____"
],
[
"json.dumps(elf, default=encode_complex)",
"_____no_output_____"
]
],
[
[
"Why did we encode the complex number as a tuple? Is it the only option? Is it the best one? What would happen if we needed to decode the object later?",
"_____no_output_____"
],
[
"The other common approach is to subclass the standard `JSONEncoder` and override its `default()` method:",
"_____no_output_____"
]
],
[
[
"class ComplexEncoder(json.JSONEncoder):\n def default(self, z):\n if isinstance(z, complex):\n return (z.real, z.imag)\n else:\n return super().default(z)",
"_____no_output_____"
]
],
[
[
"Instead of raising the `TypeError` yourself, you can simply let the base class handle it. You can use this directly in the `dump()` method via the `cls` parameter, or by creating an instance of the encoder and calling its `encode()` method:",
"_____no_output_____"
]
],
[
[
"json.dumps(2 + 5j, cls=ComplexEncoder)",
"_____no_output_____"
],
[
"encoder = ComplexEncoder()",
"_____no_output_____"
],
[
"encoder.encode(3 + 6j)",
"_____no_output_____"
]
],
[
[
"## Decoding custom types\n\nWhile the real and imaginary parts of a complex number are absolutely necessary, they are not quite sufficient to recreate the object. This is what happens when you try to encode a complex number with `ComplexEncoder` and then decode the result:",
"_____no_output_____"
]
],
[
[
"complex_json = json.dumps(4 + 17j, cls=ComplexEncoder)\njson.loads(complex_json)",
"_____no_output_____"
]
],
[
[
"All you get back is a list, and you would have to pass the values into a `complex` constructor if you wanted that complex object again. Recall the earlier comment about *teleportation*. What's missing is metadata: information about the type of data you are encoding.\n\nThe question you really should be asking is: what is the minimum amount of information that is both necessary and sufficient to recreate this object?\n\nThe `json` module expects all custom types to be expressed as objects in the `JSON` standard. For variety, you can create a `JSON` file this time, called `complex_data.json`, and add the following object representing a complex number:",
"_____no_output_____"
]
],
[
[
"# JSON\n\n{\n \"__complex__\": true,\n \"real\": 42,\n \"imag\": 36\n}",
"_____no_output_____"
]
],
[
[
"Do you see the clever part? That \"`__complex__`\" key is the metadata we just talked about. It doesn't really matter what the associated value is. To get this little hack to work, all you need to do is verify that the key exists:",
"_____no_output_____"
]
],
[
[
"def decode_complex(dct):\n if \"__complex__\" in dct:\n return complex(dct[\"real\"], dct[\"imag\"])\n return dct",
"_____no_output_____"
]
],
[
[
"If \"`__complex__`\" is not in the dictionary, you can just return the object and let the default decoder deal with it.\n\nEvery time the `load()` method attempts to parse an object, you are given the opportunity to intercede before the default decoder has its way with the data. You can do so by passing your decoding function to the `object_hook` parameter.\n\nNow let's go back to what we had before:",
"_____no_output_____"
]
],
[
[
"with open(\"complex_data.json\") as complex_data:\n data = complex_data.read()\n z = json.loads(data, object_hook=decode_complex)",
"_____no_output_____"
],
[
"type(z)",
"_____no_output_____"
]
],
[
[
"While `object_hook` might feel like the counterpart to the `default` parameter of the `dump()` method, the analogy really begins and ends there.",
"_____no_output_____"
]
],
[
[
"# JSON\n[\n {\n \"__complex__\":true,\n \"real\":42,\n \"imag\":36\n },\n {\n \"__complex__\":true,\n \"real\":64,\n \"imag\":11\n }\n]",
"_____no_output_____"
]
],
[
[
"This doesn't just work with a single object, either. Try putting this list of complex numbers into `complex_data.json` and running the script again:",
"_____no_output_____"
]
],
[
[
"with open(\"complex_data.json\") as complex_data:\n data = complex_data.read()\n numbers = json.loads(data, object_hook=decode_complex)",
"_____no_output_____"
]
],
[
[
"If all goes well, you will get a list of `complex` objects:",
"_____no_output_____"
]
],
[
[
"type(z)",
"_____no_output_____"
],
[
"numbers",
"_____no_output_____"
]
],
[
[
"## Wrapping up...\n\nYou can now wield the mighty power of JSON for any and all of your `Python` needs.\n\nWhile the examples you've worked with here are certainly oversimplified, they illustrate a workflow you can apply to more general tasks:\n\n- Import the `json` package.\n\n\n- Read the data with `load()` or `loads()`.\n\n\n- Process the data.\n\n\n- Write the altered data with `dump()` or `dumps()`.\n\n\nWhat you do with the data once it has been loaded into memory will depend on your use case. Generally, the goal will be to gather data from a source, extract useful information, and pass that information along or keep a record of it.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a5e8c6eae968cac444f78089584cdac5fc638cd
| 24,137 |
ipynb
|
Jupyter Notebook
|
Katherines Folder/.ipynb_checkpoints/creatingsqlitedb-checkpoint.ipynb
|
klw11j/Sexual-Assualt-Analysis
|
bdba47d08d45b26f59832c926fe2ac4baa9e5f02
|
[
"MIT"
] | null | null | null |
Katherines Folder/.ipynb_checkpoints/creatingsqlitedb-checkpoint.ipynb
|
klw11j/Sexual-Assualt-Analysis
|
bdba47d08d45b26f59832c926fe2ac4baa9e5f02
|
[
"MIT"
] | null | null | null |
Katherines Folder/.ipynb_checkpoints/creatingsqlitedb-checkpoint.ipynb
|
klw11j/Sexual-Assualt-Analysis
|
bdba47d08d45b26f59832c926fe2ac4baa9e5f02
|
[
"MIT"
] | null | null | null | 40.566387 | 1,051 | 0.312093 |
[
[
[
"import pandas as pd\nimport numpy as np\n",
"_____no_output_____"
],
[
"file = pd.read_csv(\"Data/Alltotals_global_dataset_2020.csv\")",
"_____no_output_____"
],
[
"df = pd.DataFrame(file)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"from sqlalchemy import create_engine",
"_____no_output_____"
],
[
"engine = create_engine('sqlite:///trafficking.sqlite', echo=True)\nsqlite_connection = engine.connect()",
"2020-10-04 12:22:22,377 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1\n2020-10-04 12:22:22,380 INFO sqlalchemy.engine.base.Engine ()\n2020-10-04 12:22:22,381 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1\n2020-10-04 12:22:22,381 INFO sqlalchemy.engine.base.Engine ()\n"
],
[
"sqlite_table = \"Sex_Trafficking_Data_US\"\ndf.to_sql(sqlite_table, sqlite_connection, if_exists='fail')",
"2020-10-04 12:22:22,388 INFO sqlalchemy.engine.base.Engine PRAGMA main.table_info(\"Sex_Trafficking_Data_US\")\n2020-10-04 12:22:22,389 INFO sqlalchemy.engine.base.Engine ()\n2020-10-04 12:22:22,390 INFO sqlalchemy.engine.base.Engine PRAGMA temp.table_info(\"Sex_Trafficking_Data_US\")\n2020-10-04 12:22:22,390 INFO sqlalchemy.engine.base.Engine ()\n2020-10-04 12:22:22,393 INFO sqlalchemy.engine.base.Engine \nCREATE TABLE \"Sex_Trafficking_Data_US\" (\n\t\"index\" BIGINT, \n\t\"yearOfRegistration\" BIGINT, \n\t\"ageBroad\" TEXT, \n\tgender TEXT, \n\t\"majorityStatusAtExploit\" TEXT, \n\t\"meansOfControlTakesEarnings\" BIGINT, \n\t\"meansOfControlThreats\" BIGINT, \n\t\"meansOfControlPsychologicalAbuse\" BIGINT, \n\t\"meansOfControlPhysicalAbuse\" BIGINT, \n\t\"meansOfControlSexualAbuse\" BIGINT, \n\t\"meansOfControlPsychoactiveSubstances\" BIGINT, \n\t\"meansOfControlRestrictsMovement\" BIGINT, \n\t\"meansOfControlUsesChildren\" BIGINT, \n\t\"meansOfControlThreatOfLawEnforcement\" BIGINT, \n\t\"isForcedLabour\" BIGINT, \n\t\"isSexualExploit\" BIGINT, \n\t\"isOtherExploit\" BIGINT, \n\t\"isAbduction\" BIGINT, \n\t\"recruiterRelationIntimatePartner\" BIGINT, \n\t\"recruiterRelationFriend\" BIGINT, \n\t\"recruiterRelationFamily\" BIGINT, \n\t\"recruiterRelationOther\" BIGINT\n)\n\n\n2020-10-04 12:22:22,394 INFO sqlalchemy.engine.base.Engine ()\n2020-10-04 12:22:22,405 INFO sqlalchemy.engine.base.Engine COMMIT\n2020-10-04 12:22:22,406 INFO sqlalchemy.engine.base.Engine CREATE INDEX \"ix_Sex_Trafficking_Data_US_index\" ON \"Sex_Trafficking_Data_US\" (\"index\")\n2020-10-04 12:22:22,406 INFO sqlalchemy.engine.base.Engine ()\n2020-10-04 12:22:22,412 INFO sqlalchemy.engine.base.Engine COMMIT\n2020-10-04 12:22:22,414 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)\n2020-10-04 12:22:22,417 INFO sqlalchemy.engine.base.Engine INSERT INTO \"Sex_Trafficking_Data_US\" (\"index\", \"yearOfRegistration\", \"ageBroad\", gender, \"majorityStatusAtExploit\", 
\"meansOfControlTakesEarnings\", \"meansOfControlThreats\", \"meansOfControlPsychologicalAbuse\", \"meansOfControlPhysicalAbuse\", \"meansOfControlSexualAbuse\", \"meansOfControlPsychoactiveSubstances\", \"meansOfControlRestrictsMovement\", \"meansOfControlUsesChildren\", \"meansOfControlThreatOfLawEnforcement\", \"isForcedLabour\", \"isSexualExploit\", \"isOtherExploit\", \"isAbduction\", \"recruiterRelationIntimatePartner\", \"recruiterRelationFriend\", \"recruiterRelationFamily\", \"recruiterRelationOther\") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n2020-10-04 12:22:22,417 INFO sqlalchemy.engine.base.Engine ((0, 2015, '18--20', 'Female', '-99', 2, 7, 9, 8, 2, 4, 12, 0, 0, 0, 64, 0, 0, 9, 1, 2, 2), (1, 2015, '18--20', 'Female', 'Adult', 2, 1, 3, 4, 0, 3, 5, 0, 0, 0, 9, 0, 0, 4, 1, 0, 0), (2, 2015, '18--20', 'Female', 'Minor', 10, 16, 22, 24, 9, 8, 20, 1, 0, 0, 61, 0, 0, 14, 2, 11, 8), (3, 2015, '21--23', 'Female', '-99', 9, 13, 14, 16, 3, 8, 17, 1, 0, 0, 54, 0, 0, 4, 0, 2, 6), (4, 2015, '21--23', 'Female', 'Adult', 2, 3, 2, 3, 1, 2, 2, 0, 0, 0, 5, 0, 0, 1, 1, 0, 0), (5, 2015, '21--23', 'Female', 'Minor', 5, 8, 14, 10, 5, 11, 11, 2, 0, 0, 37, 0, 0, 12, 3, 2, 4), (6, 2015, '24--26', 'Female', '-99', 5, 8, 14, 5, 0, 4, 5, 0, 0, 0, 39, 0, 0, 11, 1, 2, 1), (7, 2015, '24--26', 'Female', 'Adult', 1, 3, 2, 0, 0, 1, 2, 0, 1, 0, 6, 0, 0, 2, 1, 0, 1) ... displaying 10 of 86 total bound parameter sets ... (84, 2018, '9--17', 'Male', '-99', 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0), (85, 2018, '9--17', 'Male', 'Minor', 0, 5, 10, 5, 3, 2, 0, 0, 0, 0, 14, 0, 0, 2, 1, 10, 1))\n2020-10-04 12:22:22,419 INFO sqlalchemy.engine.base.Engine COMMIT\n2020-10-04 12:22:22,428 INFO sqlalchemy.engine.base.Engine SELECT name FROM sqlite_master WHERE type='table' ORDER BY name\n2020-10-04 12:22:22,429 INFO sqlalchemy.engine.base.Engine ()\n"
],
[
"sqlite_connection.close()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |