The above output for `constructed_list` may seem odd. Referring to the documentation, we see that the argument to the type constructor is an _iterable_, which according to the documentation is "An object capable of returning its members one at a time." In our constructor statement above,

```python
# Using the type constructor
constructed_list = list('purple')
```

the word 'purple' is the object - in this case a `str` (string) consisting of the word 'purple' - that, when used to construct a list, returns its members (individual letters) one at a time. Compare the outputs below:

```python
constructed_list_int = list(123)  # raises a TypeError: an int is not iterable
constructed_list_str = list('123')
constructed_list_str
```
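Since `list(123)` raises a `TypeError`, the cell above stops before reaching its last line. A minimal sketch of the comparison that catches the error (the exact error message may vary slightly between Python versions):

```python
try:
    list(123)
except TypeError as err:
    print(err)          # 'int' object is not iterable

print(list('123'))      # ['1', '2', '3']
```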
Lists in Python are:

* mutable - the list and list items can be changed
* ordered - list items keep the same "place" in the list

_Ordered_ here does not mean sorted. The list below is printed with the numbers in the order we added them to the list, not in numeric order:

```python
ordered = [3, 2, 7, 1, 19, 0]
ordered

# There is a 'sort' method for sorting list items as needed:
ordered.sort()
ordered
```
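Note that `sort()` sorts the list in place and returns `None`, while the built-in `sorted()` returns a new list and leaves the original unchanged:

```python
ordered = [3, 2, 7, 1, 19, 0]
print(sorted(ordered))  # [0, 1, 2, 3, 7, 19] - a new, sorted list
print(ordered)          # [3, 2, 7, 1, 19, 0] - original unchanged
ordered.sort()          # sorts in place; returns None
print(ordered)          # [0, 1, 2, 3, 7, 19]
```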
Info on additional list methods is available in the Python documentation. Because lists are ordered, it is possible to access list items by referencing their positions. Note that the position of the first item in a list is 0 (zero), not 1!

```python
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
string_list[0]

# We can use positions to 'slice' or select sections of a list:
string_list[3:]   # start at index '3' and continue to the end
string_list[:3]   # start at index '0' and go up to, but don't include, index '3'
string_list[1:4]  # start at index '1' and go up to, but don't include, index '4'

# If we don't know the position of a list item, we can use the 'index()' method to find out.
# Note that in the case of duplicate list items, this only returns the position of the first one:
string_list.index('pears')

string_list.append('oranges')
string_list
string_list.index('oranges')

# one more time with lists and dictionaries
# ('my_stuff' and 'more_stuff' were defined earlier in the notebook)
list_ex1 = my_stuff[0] + my_stuff[1] + int(my_stuff[2])
print(list_ex1)

# we can use parentheses to split a continuous group of commands over multiple lines
list_ex2 = (
    str(my_stuff[0]) +
    str(my_stuff[1]) +
    my_stuff[2] +
    my_stuff[3][0]
)
print(list_ex2)

dict_ex1 = (
    more_stuff['item1'] +
    more_stuff['item2'] +
    int(more_stuff['item3'])
)
print(dict_ex1)

dict_ex2 = (
    str(more_stuff['item1']) +
    str(more_stuff['item2']) +
    more_stuff['item3']
)
print(dict_ex2)

# Now try it yourself ...
# print out the phrase "The answer: 42" using the following
# variables and one or more of your own and the 'print()' function
# (remember spaces are characters as well)
start = "The"
answer = 42
```
## Bayesian Optimization

[Bayesian optimization](https://en.wikipedia.org/wiki/Bayesian_optimization) is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of [automated machine learning](https://en.wikipedia.org/wiki/Automated_machine_learning) toolboxes such as [auto-sklearn](https://automl.github.io/auto-sklearn/stable/), [auto-weka](http://www.cs.ubc.ca/labs/beta/Projects/autoweka/), and [scikit-optimize](https://scikit-optimize.github.io/), where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning.

### Problem Setup

We are given a minimization problem

$$ x^* = \text{arg}\min \ f(x), $$

where $f$ is a fixed objective function that we can evaluate pointwise. Here we assume that we do _not_ have access to the gradient of $f$. We also allow for the possibility that evaluations of $f$ are noisy.

To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minimum $x^*$: the goal is to get the best approximate solution we can given the allocated budget.

The Bayesian optimization strategy works as follows:

1. Place a prior on the objective function $f$. Each time we evaluate $f$ at a new point $x_n$, we update our model for $f(x)$. This model serves as a surrogate objective function and reflects our beliefs about $f$ (in particular it reflects our beliefs about where we expect $f(x)$ to be close to $f(x^*)$). Since we are being Bayesian, our beliefs are encoded in a posterior that allows us to systematically reason about the uncertainty of our model predictions.
2. Use the posterior to derive an "acquisition" function $\alpha(x)$ that is easy to evaluate and differentiate (so that optimizing $\alpha(x)$ is easy). In contrast to $f(x)$, we will generally evaluate $\alpha(x)$ at many points $x$, since doing so will be cheap.
3. Repeat until convergence:
    + Use the acquisition function to derive the next query point according to
      $$ x_{n+1} = \text{arg}\min \ \alpha(x). $$
    + Evaluate $f(x_{n+1})$ and update the posterior.

A good acquisition function should make use of the uncertainty encoded in the posterior to encourage a balance between exploration (querying points where we know little about $f$) and exploitation (querying points in regions we have good reason to think $x^*$ may lie). As the iterative procedure progresses, our model for $f$ evolves and so does the acquisition function. If our model is good and we've chosen a reasonable acquisition function, we expect that the acquisition function will guide the query points $x_n$ towards $x^*$.

In this tutorial, our model for $f$ will be a Gaussian process. In particular we will see how to use the [Gaussian Process module](http://docs.pyro.ai/en/0.3.1/contrib.gp.html) in Pyro to implement a simple Bayesian optimization procedure.

```python
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import torch
import torch.autograd as autograd
import torch.optim as optim
from torch.distributions import constraints, transform_to

import pyro
import pyro.contrib.gp as gp

assert pyro.__version__.startswith('1.5.2')
pyro.set_rng_seed(1)
```
### Define an objective function

For the purposes of demonstration, the objective function we are going to consider is the [Forrester et al. (2008) function](https://www.sfu.ca/~ssurjano/forretal08.html):

$$f(x) = (6x-2)^2 \sin(12x-4), \quad x\in [0, 1].$$

This function has both a local minimum and a global minimum. The global minimum is at $x^* = 0.75725$.

```python
def f(x):
    return (6 * x - 2)**2 * torch.sin(12 * x - 4)
```
Let's begin by plotting $f$.
```python
x = torch.linspace(0, 1)
plt.figure(figsize=(8, 4))
plt.plot(x.numpy(), f(x).numpy())
plt.show()
```
### Setting a Gaussian Process prior

[Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process) are a popular choice of function prior due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form

$$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$
$$y\sim f+\epsilon,$$

where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$.

We choose the [Matern](https://en.wikipedia.org/wiki/Mat%C3%A9rn_covariance_function) kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular [RBF](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions.

```python
# initialize the model with four input points: 0.0, 0.33, 0.66, 1.0
X = torch.tensor([0.0, 0.33, 0.66, 1.0])
y = f(X)
gpmodel = gp.models.GPRegression(X, y, gp.kernels.Matern52(input_dim=1),
                                 noise=torch.tensor(0.1), jitter=1.0e-4)
```
The following helper function `update_posterior` will take care of updating our `gpmodel` each time we evaluate $f$ at a new value $x$.
```python
def update_posterior(x_new):
    y = f(x_new)  # evaluate f at new point
    X = torch.cat([gpmodel.X, x_new])  # incorporate new evaluation
    y = torch.cat([gpmodel.y, y])
    gpmodel.set_data(X, y)
    # optimize the GP hyperparameters using Adam with lr=0.001
    optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
    gp.util.train(gpmodel, optimizer)
```
### Define an acquisition function

There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret', namely the 'Lower Confidence Bound' acquisition function. It is given by

$$\alpha(x) = \mu(x) - \kappa \sigma(x)$$

where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either: i) $\mu(x)$ is small (exploitation); or ii) where $\sigma(x)$ is large (exploration). A large value of $\kappa$ means that we place more weight on exploration because we prefer candidates $x$ in areas of high uncertainty. A small value of $\kappa$ encourages exploitation because we prefer candidates $x$ that minimize $\mu(x)$, which is the mean of our surrogate objective function. We will use $\kappa=2$.

```python
def lower_confidence_bound(x, kappa=2):
    mu, variance = gpmodel(x, full_cov=False, noiseless=False)
    sigma = variance.sqrt()
    return mu - kappa * sigma
```
The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue:

- First, we seed our minimization algorithm with 5 different values: i) one is chosen to be $x_{n-1}$, i.e. the candidate $x$ used in the previous step; and ii) four are chosen uniformly at random from the domain of the objective function.
- We then run the minimization algorithm to approximate convergence for each seed value.
- Finally, from the five candidate $x$s identified by the minimization algorithm, we select the one that minimizes the acquisition function.

Please refer to reference [2] for a more detailed discussion of this problem in Bayesian Optimization.

```python
def find_a_candidate(x_init, lower_bound=0, upper_bound=1):
    # transform x to an unconstrained domain
    constraint = constraints.interval(lower_bound, upper_bound)
    unconstrained_x_init = transform_to(constraint).inv(x_init)
    unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True)
    minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe')

    def closure():
        minimizer.zero_grad()
        x = transform_to(constraint)(unconstrained_x)
        y = lower_confidence_bound(x)
        autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x))
        return y

    minimizer.step(closure)
    # after finding a candidate in the unconstrained domain,
    # convert it back to original domain.
    x = transform_to(constraint)(unconstrained_x)
    return x.detach()
```
### The inner loop of Bayesian Optimization

With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function `next_x`:

```python
def next_x(lower_bound=0, upper_bound=1, num_candidates=5):
    candidates = []
    values = []

    x_init = gpmodel.X[-1:]
    for i in range(num_candidates):
        x = find_a_candidate(x_init, lower_bound, upper_bound)
        y = lower_confidence_bound(x)
        candidates.append(x)
        values.append(y)
        x_init = x.new_empty(1).uniform_(lower_bound, upper_bound)

    argmin = torch.min(torch.cat(values), dim=0)[1].item()
    return candidates[argmin]
```
### Running the algorithm

To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress.

```python
def plot(gs, xmin, xlabel=None, with_title=True):
    xlabel = "xmin" if xlabel is None else "x{}".format(xlabel)
    Xnew = torch.linspace(-0.1, 1.1)
    ax1 = plt.subplot(gs[0])
    ax1.plot(gpmodel.X.numpy(), gpmodel.y.numpy(), "kx")  # plot all observed data
    with torch.no_grad():
        loc, var = gpmodel(Xnew, full_cov=False, noiseless=False)
        sd = var.sqrt()
        ax1.plot(Xnew.numpy(), loc.numpy(), "r", lw=2)  # plot predictive mean
        ax1.fill_between(Xnew.numpy(), loc.numpy() - 2*sd.numpy(),
                         loc.numpy() + 2*sd.numpy(),
                         color="C0", alpha=0.3)  # plot uncertainty intervals
    ax1.set_xlim(-0.1, 1.1)
    ax1.set_title("Find {}".format(xlabel))
    if with_title:
        ax1.set_ylabel("Gaussian Process Regression")

    ax2 = plt.subplot(gs[1])
    with torch.no_grad():
        # plot the acquisition function
        ax2.plot(Xnew.numpy(), lower_confidence_bound(Xnew).numpy())
        # plot the new candidate point
        ax2.plot(xmin.numpy(), lower_confidence_bound(xmin).numpy(), "^",
                 markersize=10, label="{} = {:.5f}".format(xlabel, xmin.item()))
    ax2.set_xlim(-0.1, 1.1)
    if with_title:
        ax2.set_ylabel("Acquisition Function")
    ax2.legend(loc=1)
```
Our surrogate model `gpmodel` already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the `next_x` and `update_posterior` functions repeatedly. The following plot illustrates how the Gaussian Process posterior and the corresponding acquisition function change at each step of the algorithm. Note how query points are chosen both for exploration and exploitation.

```python
plt.figure(figsize=(12, 30))
outer_gs = gridspec.GridSpec(5, 2)
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
for i in range(8):
    xmin = next_x()
    gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer_gs[i])
    plot(gs, xmin, xlabel=i+1, with_title=(i % 2 == 0))
    update_posterior(xmin)
plt.show()
```
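After the loop, the best point observed so far can be read directly off the data stored in the GP. A minimal sketch (note this reports the best *evaluated* point, not a further-optimized one; it is not part of the original tutorial):

```python
best_idx = gpmodel.y.argmin()
print("best x    =", gpmodel.X[best_idx].item())
print("best f(x) =", gpmodel.y[best_idx].item())
```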
## UMAP

This script generates UMAP representations from spectrograms (previously generated).

### Installing and loading libraries

```python
import os
import pandas as pd
import sys
import numpy as np
from pandas.core.common import flatten
import pickle
import umap
from pathlib import Path
import datetime
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
import librosa.display

from scipy.spatial.distance import pdist, squareform
from plot_functions import umap_2Dplot, mara_3Dplot, plotly_viz
from preprocessing_functions import pad_spectro, calc_zscore, preprocess_spec_numba, create_padded_data
```
### Setting constants

Setting project, input and output folders.

```python
wd = os.getcwd()

DATA = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed")
FIGURES = os.path.join(os.path.sep, str(Path(wd).parents[0]), "reports", "figures")

DF_DICT = {}
for dftype in ['full', 'reduced', 'balanced']:
    DF_DICT[dftype] = os.path.join(os.path.sep, DATA, "df_focal_"+dftype+".pkl")

LOAD_EXISTING = True       # if true, load existing embedding instead of creating new
OVERWRITE_FIGURES = False  # if true, overwrite existing figures
```
### UMAP projection

Choose dataset.

```python
#dftype='full'
dftype='reduced'
#dftype='balanced'

spec_df = pd.read_pickle(DF_DICT[dftype])
labels = spec_df.call_lable.values  # note: the column is spelled 'call_lable' in the source data
spec_df.shape
```
### Choose feature

```python
specs = spec_df.spectrograms.copy()
specs = [calc_zscore(x) for x in specs]
data = create_padded_data(specs)
```
### Run UMAP

```python
# 3D
embedding_filename = os.path.join(os.path.sep, DATA, 'basic_UMAP_3D_'+dftype+'_default_params.csv')
print(embedding_filename)

if (LOAD_EXISTING and os.path.isfile(embedding_filename)):
    embedding = np.loadtxt(embedding_filename, delimiter=";")
    print("File already exists")
else:
    reducer = umap.UMAP(n_components=3, min_dist=0, random_state=2204)
    embedding = reducer.fit_transform(data)
    np.savetxt(embedding_filename, embedding, delimiter=";")

# 2D
embedding_filename = os.path.join(os.path.sep, DATA, 'basic_UMAP_2D_'+dftype+'_default_params.csv')
print(embedding_filename)

if (LOAD_EXISTING and os.path.isfile(embedding_filename)):
    embedding2D = np.loadtxt(embedding_filename, delimiter=";")
    print("File already exists")
else:
    reducer = umap.UMAP(n_components=2, min_dist=0, random_state=2204)
    embedding2D = reducer.fit_transform(data)
    np.savetxt(embedding_filename, embedding2D, delimiter=";")
```

```
/home/mthomas/Documents/MPI_work/projects/meerkat/meerkat_umap_pv/data/processed/basic_UMAP_2D_reduced_default_params.csv
File already exists
```
### Visualization

```python
pal = "Set2"
```
### 2D Plots

```python
if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'UMAP_2D_plot_'+dftype+'_nolegend.jpg')
else:
    outname = None
print(outname)

umap_2Dplot(embedding2D[:, 0],
            embedding2D[:, 1],
            labels,
            pal,
            outname=outname,
            showlegend=False)
```

```
None
```
### 3D Plot: Matplotlib

```python
if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'UMAP_3D_plot_'+dftype+'_nolegend.jpg')
else:
    outname = None
print(outname)

mara_3Dplot(embedding[:, 0],
            embedding[:, 1],
            embedding[:, 2],
            labels,
            pal,
            outname,
            showlegend=False)
```

```
None
```
### Plotly

Interactive viz in plotly (though without sound or spectrogram).

```python
#plotly_viz(embedding[:, 0],
#           embedding[:, 1],
#           embedding[:, 2],
#           labels,
#           pal)
```
### Embedding evaluation

Evaluate the embedding based on calltype labels of nearest neighbors.

```python
from evaluation_functions import nn, sil

# produce nearest neighbor statistics
nn_stats = nn(embedding, np.asarray(labels), k=5)
```
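`nn` and `sil` come from this project's own `evaluation_functions` module. As a rough illustration of what a k-nearest-neighbour label-agreement score can look like (an assumption about the metric's spirit, not the project's actual implementation), here is a minimal sklearn-based sketch:

```python
from sklearn.neighbors import NearestNeighbors

def knn_label_agreement(embedding, labels, k=5):
    # find the k nearest neighbours of every point (the first hit is the point itself)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(embedding)
    _, idx = nbrs.kneighbors(embedding)
    neighbor_labels = labels[idx[:, 1:]]
    # fraction of neighbours that share the query point's label
    return (neighbor_labels == labels[:, None]).mean()

# e.g. knn_label_agreement(embedding, np.asarray(labels), k=5)
```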
### Calculate metrics

```python
print("Log final metric (unweighted):", nn_stats.get_S())
print("Abs final metric (unweighted):", nn_stats.get_Snorm())
print(nn_stats.knn_accuracy())

if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'heatS_UMAP_'+dftype+'.png')
else:
    outname = None
print(outname)
nn_stats.plot_heat_S(outname=outname)

if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'heatSnorm_UMAP_'+dftype+'.png')
else:
    outname = None
print(outname)
nn_stats.plot_heat_Snorm(outname=outname)

if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'heatfold_UMAP_'+dftype+'.png')
else:
    outname = None
print(outname)
nn_stats.plot_heat_fold(outname=outname)
```

```
/home/mthomas/Documents/MPI_work/projects/meerkat/meerkat_umap_pv/reports/figures/heatfold_UMAP_reduced.png
```
### Within vs. outside distances

```python
from evaluation_functions import plot_within_without

if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, "distanceswithinwithout_"+dftype+".png")
else:
    outname = None
print(outname)

plot_within_without(embedding=embedding, labels=labels, outname=outname)
```
### Silhouette Plot

```python
sil_stats = sil(embedding, labels)

if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'silplot_UMAP_'+dftype+'.png')
else:
    outname = None
print(outname)

sil_stats.plot_sil(outname=outname)
sil_stats.get_avrg_score()
```
### How many dimensions?

Evaluate how many dimensions are best for the embedding.

```python
specs = spec_df.spectrograms.copy()

# normalize feature
specs = [calc_zscore(x) for x in specs]

# pad feature
maxlen = np.max([spec.shape[1] for spec in specs])
flattened_specs = [pad_spectro(spec, maxlen).flatten() for spec in specs]
data = np.asarray(flattened_specs)
data.shape

embeddings = {}
for n_dims in range(1, 11):
    reducer = umap.UMAP(n_components=n_dims, min_dist=0, metric='euclidean', random_state=2204)
    embeddings[n_dims] = reducer.fit_transform(data)

labels = spec_df.call_lable.values
calltypes = sorted(list(set(labels)))
k = 5

dims_tab = np.zeros((10, 1))
for n_dims in range(1, 11):
    nn_stats = nn(embeddings[n_dims], labels, k=k)
    stats_tab = nn_stats.get_statstab()
    mean_metric = np.mean(np.diagonal(stats_tab.iloc[:-1, ]))
    print(mean_metric)
    dims_tab[n_dims-1, :] = mean_metric

x = np.arange(1, 11, 1)
y = dims_tab[:, 0]

plt.plot(x, y, marker='o', markersize=4)
plt.xlabel("N_components")
plt.ylabel("Embedding score S")
plt.xticks(np.arange(0, 11, step=1))
plt.savefig(os.path.join(os.path.sep, FIGURES, 'n_dims.png'), facecolor="white")
```
Note that this is different from running UMAP with n=10 components and then selecting only the first x dimensions in UMAP space!

### Graph from embedding evaluation

```python
if OVERWRITE_FIGURES:
    outname = os.path.join(os.path.sep, FIGURES, 'simgraph_test.png')
else:
    outname = None

nn_stats.draw_simgraph(outname)
```

```
Graph saved at /home/mthomas/Documents/MPI_work/projects/meerkat/meerkat_umap_pv/reports/figures/simgraph_test.png
```
Resource: https://en.it1352.com/article/d096c1eadbb84c19b038eb9648153346.html

### Visualize example nearest neighbors

```python
import random
import scipy
from sklearn.neighbors import NearestNeighbors

knn = 5

# Find k nearest neighbors
nbrs = NearestNeighbors(metric='euclidean', n_neighbors=knn+1, algorithm='brute').fit(embedding)
distances, indices = nbrs.kneighbors(embedding)

# need to remove the first neighbor, because that is the datapoint itself
indices = indices[:, 1:]
distances = distances[:, 1:]

calltypes = sorted(list(set(spec_df['call_lable'])))
labels = spec_df.call_lable.values
names = spec_df.Name.values

# make plots per calltype
n_examples = 3

for calltype in calltypes:
    fig = plt.figure(figsize=(14, 6))
    fig_name = 'NN_viz_'+calltype
    k = 1
    call_indices = np.asarray(np.where(labels == calltype))[0]

    # randomly choose 3
    random.seed(2204)
    example_indices = random.sample(list(call_indices), n_examples)

    for i, ind in enumerate(example_indices):
        img_of_interest = spec_df.iloc[ind, :].spectrograms
        embedding_of_interest = embedding[ind, :]
        plt.subplot(n_examples, knn+1, k)
        #librosa.display.specshow(np.transpose(spec))
        plt.imshow(img_of_interest, interpolation='nearest', origin='lower', aspect='equal')
        #plt.title(calltype+' : 0')
        #plt.title(calltype)
        k = k+1

        nearest_neighbors = indices[ind]
        for neighbor in nearest_neighbors:
            neighbor_label = names[neighbor]
            neighbor_embedding = embedding[neighbor, :]
            dist_to_original = scipy.spatial.distance.euclidean(embedding_of_interest, neighbor_embedding)
            neighbor_img = spec_df.iloc[neighbor, :].spectrograms
            plt.subplot(n_examples, knn+1, k)
            plt.imshow(neighbor_img, interpolation='nearest', origin='lower', aspect='equal')
            k = k+1

    plt.tight_layout()
    plt.savefig(os.path.join(os.path.sep, FIGURES, fig_name), facecolor="white")
    plt.close()

# Randomly choose 10 calls and plot their 4 nearest neighbors
n_examples = 10
fig = plt.figure(figsize=(14, 25))
fig_name = 'NN_viz'
k = 1

# randomly choose 3
random.seed(2204)
example_indices = random.sample(list(range(embedding.shape[0])), n_examples)

for i, ind in enumerate(example_indices):
    img_of_interest = spec_df.iloc[ind, :].spectrograms
    embedding_of_interest = embedding[ind, :]
    plt.subplot(n_examples, knn+1, k)
    plt.imshow(img_of_interest, interpolation='nearest', origin='lower', aspect='equal')
    k = k+1

    nearest_neighbors = indices[ind]
    for neighbor in nearest_neighbors:
        neighbor_label = names[neighbor]
        neighbor_embedding = embedding[neighbor, :]
        dist_to_original = scipy.spatial.distance.euclidean(embedding_of_interest, neighbor_embedding)
        neighbor_img = spec_df.iloc[neighbor, :].spectrograms
        plt.subplot(n_examples, knn+1, k)
        plt.imshow(neighbor_img, interpolation='nearest', origin='lower', aspect='equal')
        k = k+1

plt.tight_layout()
plt.savefig(os.path.join(os.path.sep, FIGURES, fig_name), facecolor="white")
```
### Visualize preprocessing steps

```python
N_MELS = 40
MEL_BINS_REMOVED_UPPER = 5
MEL_BINS_REMOVED_LOWER = 5

# make plots
calltypes = sorted(list(set(spec_df.call_lable.values)))

fig = plt.figure(figsize=(10, 6))
fig_name = 'preprocessing_examples_mara.png'
fig.suptitle('Preprocessing steps', fontsize=16)
k = 1

# randomly choose 4
examples = spec_df.sample(n=6, random_state=1)
examples.reset_index(inplace=True)
ori_specs = examples.denoised_spectrograms

# original
specs = ori_specs
vmin = np.min([np.min(x) for x in specs])
vmax = np.max([np.max(x) for x in specs])

for i in range(examples.shape[0]):
    spec = specs[i]
    plt.subplot(5, 6, k)
    #librosa.display.specshow(spec, y_axis='mel', fmin=0, fmax=4000)
    plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal', norm=None, vmin=vmin, vmax=vmax)
    if i == 0:
        plt.ylabel('none', rotation=0, labelpad=30)
    plt.title("Example "+str(i+1))
    k = k+1

# z-score
specs = ori_specs.copy()
#specs = [x[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER),:] for x in specs]
specs = [calc_zscore(s) for s in specs]
#vmin = np.min([np.min(x) for x in specs])
#vmax = np.max([np.max(x) for x in specs])

for i in range(examples.shape[0]):
    spec = specs[i]
    plt.subplot(5, 6, k)
    plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
    if i == 0:
        plt.ylabel('zs', rotation=0, labelpad=30)
    k = k+1

# cut
for i in range(examples.shape[0]):
    spec = ori_specs[i]
    spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER), :]
    spec = calc_zscore(spec)
    plt.subplot(5, 6, k)
    plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
    if i == 0:
        plt.ylabel('zs-cu', rotation=0, labelpad=30)
    k = k+1

# floor
for i in range(examples.shape[0]):
    spec = ori_specs[i]
    spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER), :]
    spec = calc_zscore(spec)
    spec = np.where(spec < 0, 0, spec)
    plt.subplot(5, 6, k)
    plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
    if i == 0:
        plt.ylabel('zs-cu-fl', rotation=0, labelpad=30)
    k = k+1

# ceiling
for i in range(examples.shape[0]):
    spec = ori_specs[i]
    spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER), :]
    spec = calc_zscore(spec)
    spec = np.where(spec < 0, 0, spec)
    spec = np.where(spec > 3, 3, spec)
    plt.subplot(5, 6, k)
    plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
    if i == 0:
        plt.ylabel('zs-cu-fl-ce', rotation=0, labelpad=30)
    k = k+1

plt.tight_layout()
outname = os.path.join(os.path.sep, FIGURES, fig_name)
print(outname)
plt.savefig(outname)
```

```
/home/mthomas/Documents/MPI_work/projects/meerkat/meerkat_umap_pv/reports/figures/preprocessing_examples_mara.png
```
There are 76,670 different agent ids in the training data.
```python
import os
import pickle
import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline
sns.set(rc={"figure.dpi": 100, 'savefig.dpi': 100})
sns.set_context('notebook')

# Keys to the pickle objects
CITY = 'city'
LANE = 'lane'
LANE_NORM = 'lane_norm'
SCENE_IDX = 'scene_idx'
AGENT_ID = 'agent_id'
P_IN = 'p_in'
V_IN = 'v_in'
P_OUT = 'p_out'
V_OUT = 'v_out'
CAR_MASK = 'car_mask'
TRACK_ID = 'track_id'

# Set the training and test paths
TEST_PATH = '../new_val_in/'
TRAIN_PATH = '../new_train/'

train_path = TRAIN_PATH
test_path = TEST_PATH

# DUMMY_TRAIN_PATH = './dummy_train/'
# DUMMY_TEST_PATH = './dummy_val/'
# train_path = DUMMY_TRAIN_PATH
# test_path = DUMMY_TEST_PATH
```
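A sketch of how the 76,670 figure could be reproduced. Whether it counts only each scene's target `agent_id` or all `track_id`s is an assumption here; this version counts target agent ids:

```python
agent_ids = set()
for entry in os.scandir(train_path):
    with open(entry, "rb") as file:
        agent_ids.add(pickle.load(file)[AGENT_ID])
print(f"Number of unique agent ids = {len(agent_ids)}")
```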
### Size of training and test data

```python
train_size = len([entry for entry in os.scandir(train_path)])
test_size = len([entry for entry in os.scandir(test_path)])
print(f"Number of training samples = {train_size}")
print(f"Number of test samples = {test_size}")
```

```
Number of training samples = 205942
Number of test samples = 3200
```
### Scene object

```python
# Open directory containing pickle files
with os.scandir(train_path) as entries:
    scene = None
    # Get the first pickle file
    entry = next(entries)
    # Open the first pickle file and store its data
    with open(entry, "rb") as file:
        scene = pickle.load(file)

# Look at key-value pairs
print('Scene object:')
for k, v in scene.items():
    if type(v) is np.ndarray:
        print(f"{k} : shape = {v.shape}")
    else:
        print(f"{k} : {type(v)}")
```

```
Scene object:
city : <class 'str'>
lane : shape = (72, 3)
lane_norm : shape = (72, 3)
scene_idx : <class 'int'>
agent_id : <class 'str'>
car_mask : shape = (60, 1)
p_in : shape = (60, 19, 2)
v_in : shape = (60, 19, 2)
p_out : shape = (60, 30, 2)
v_out : shape = (60, 30, 2)
track_id : shape = (60, 30, 1)
```
### Scene Analysis

```python
random.seed(1)

def lane_centerline(scene):
    lane = scene[LANE]
    lane_norm = scene[LANE_NORM]

    fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(5, 5))
    ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], color='gray')
    ax1.set_xlabel('x')
    ax1.set_ylabel('y')
    ax1.set_title('Lane centerline')

def target_agent(scene):
    lane = scene[LANE]
    lane_norm = scene[LANE_NORM]
    pin = scene[P_IN]
    pout = scene[P_OUT]
    vin = scene[V_IN]
    vout = scene[V_OUT]

    # Get the index of the target agent
    targ = np.where(scene[TRACK_ID][:, 0, 0] == scene[AGENT_ID])[0][0]

    fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(5, 5))
    ax1.set_xlabel('x')
    ax1.set_ylabel('y')
    ax1.set_title('Target agent motion')
    ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], units='xy', color='black')
    ax1.quiver(pin[targ, :, 0], pin[targ, :, 1], vin[targ, :, 0], vin[targ, :, 1], color='red', units='xy');
    ax1.quiver(pout[targ, :, 0], pout[targ, :, 1], vout[targ, :, 0], vout[targ, :, 1], color='blue', units='xy');

def full_scene(scene):
    lane = scene[LANE]
    lane_norm = scene[LANE_NORM]
    pin = scene[P_IN]
    pout = scene[P_OUT]
    vin = scene[V_IN]
    vout = scene[V_OUT]

    # Get the index of the target agent
    targ = np.where(scene[TRACK_ID][:, 0, 0] == scene[AGENT_ID])[0][0]
    actual_idxs = np.where(scene[CAR_MASK][:, 0] == 1)  # Row indexes of actually tracked agents
    pin_other = scene[P_IN][actual_idxs]
    vin_other = scene[V_IN][actual_idxs]
    pout_other = scene[P_OUT][actual_idxs]
    vout_other = scene[V_OUT][actual_idxs]

    fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
    ax1.set_xlabel('x')
    ax1.set_ylabel('y')
    ax1.set_title('Scene ' + str(scene[SCENE_IDX]))
    ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], units='xy', color='gray',
               label='Center line(s)')

    # Index of the last other agent - can either be the last element in the array or the element right before
    # target when target is the last element in the array
    last_other = len(actual_idxs[0]) - 1 if targ != len(actual_idxs[0]) - 1 else targ - 1

    for i in range(len(actual_idxs[0])):
        # Non target agent
        if i != targ:
            if i == last_other:
                ax1.quiver(pin[i, :, 0], pin[i, :, 1], vin[i, :, 0], vin[i, :, 1],
                           color='orange', units='xy', label='Other agent input')
                ax1.quiver(pout[i, :, 0], pout[i, :, 1], vout[i, :, 0], vout[i, :, 1],
                           color='blue', units='xy', label='Other agent output')
            else:
                ax1.quiver(pin[i, :, 0], pin[i, :, 1], vin[i, :, 0], vin[i, :, 1],
                           color='orange', units='xy', label='_nolegend_')
                ax1.quiver(pout[i, :, 0], pout[i, :, 1], vout[i, :, 0], vout[i, :, 1],
                           color='blue', units='xy', label='_nolegend_')
            set_other_legend = True
        else:
            ax1.quiver(pin[targ, :, 0], pin[targ, :, 1], vin[targ, :, 0], vin[targ, :, 1],
                       color='lightgreen', units='xy', label='Target agent input')
            ax1.quiver(pout[targ, :, 0], pout[targ, :, 1], vout[targ, :, 0], vout[targ, :, 1],
                       color='darkgreen', units='xy', label='Target agent output')

    ax1.legend()

# Randomly pick a scene
scene = None
rand = random.choice(os.listdir(train_path))
# Build out full path name
rand = train_path + rand
with open(rand, "rb") as file:
    scene = pickle.load(file)
scene[SCENE_IDX]

# lane_centerline(scene)
# target_agent(scene)
full_scene(scene)
```
## Basic Tensor operations and GradientTape

In this graded assignment, you will perform different tensor operations as well as use [GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape). These are important building blocks for the next parts of this course so it's important to master the basics. Let's begin!

```python
import tensorflow as tf
import numpy as np
```
### Exercise 1 - [tf.constant](https://www.tensorflow.org/api_docs/python/tf/constant)

Creates a constant tensor from a tensor-like object.

```python
# Convert NumPy array to Tensor using `tf.constant`
def tf_constant(array):
    """
    Args:
        array (numpy.ndarray): tensor-like array.

    Returns:
        tensorflow.python.framework.ops.EagerTensor: tensor.
    """
    ### START CODE HERE ###
    tf_constant_array = tf.constant(array)
    ### END CODE HERE ###
    return tf_constant_array

tmp_array = np.arange(1, 10)
x = tf_constant(tmp_array)
x

# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([1, 2, 3, 4, 5, 6, 7, 8, 9])>
```
Note that for future docstrings, the type `EagerTensor` will be used as a shortened version of `tensorflow.python.framework.ops.EagerTensor`.

### Exercise 2 - [tf.square](https://www.tensorflow.org/api_docs/python/tf/math/square)

Computes the square of a tensor element-wise.

```python
# Square the input tensor
def tf_square(array):
    """
    Args:
        array (numpy.ndarray): tensor-like array.

    Returns:
        EagerTensor: tensor.
    """
    # make sure it's a tensor
    array = tf.constant(array)

    ### START CODE HERE ###
    tf_squared_array = tf.square(array)
    ### END CODE HERE ###
    return tf_squared_array

tmp_array = tf.constant(np.arange(1, 10))
x = tf_square(tmp_array)
x

# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([ 1,  4,  9, 16, 25, 36, 49, 64, 81])>
```
### Exercise 3 - [tf.reshape](https://www.tensorflow.org/api_docs/python/tf/reshape)

Reshapes a tensor.

```python
# Reshape tensor into the given shape parameter
def tf_reshape(array, shape):
    """
    Args:
        array (EagerTensor): tensor to reshape.
        shape (tuple): desired shape.

    Returns:
        EagerTensor: reshaped tensor.
    """
    # make sure it's a tensor
    array = tf.constant(array)
    ### START CODE HERE ###
    tf_reshaped_array = tf.reshape(array, shape=shape)
    ### END CODE HERE ###
    return tf_reshaped_array

# Check your function
tmp_array = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
# Check that your function reshapes a vector into a matrix
x = tf_reshape(tmp_array, (3, 3))
x

# Expected output:
# <tf.Tensor: shape=(3, 3), dtype=int64, numpy=
# [[1, 2, 3],
#  [4, 5, 6],
#  [7, 8, 9]]
```
### Exercise 4 - [tf.cast](https://www.tensorflow.org/api_docs/python/tf/cast)

Casts a tensor to a new type.

```python
# Cast tensor into the given dtype parameter
def tf_cast(array, dtype):
    """
    Args:
        array (EagerTensor): tensor to be casted.
        dtype (tensorflow.python.framework.dtypes.DType): desired new type. (Should be a TF dtype!)

    Returns:
        EagerTensor: casted tensor.
    """
    # make sure it's a tensor
    array = tf.constant(array)

    ### START CODE HERE ###
    tf_cast_array = tf.cast(array, dtype=dtype)
    ### END CODE HERE ###
    return tf_cast_array

# Check your function
tmp_array = [1, 2, 3, 4]
x = tf_cast(tmp_array, tf.float32)
x

# Expected output:
# <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>
```
### Exercise 5 - [tf.multiply](https://www.tensorflow.org/api_docs/python/tf/multiply)

Returns an element-wise x * y.

```python
# Multiply tensor1 and tensor2
def tf_multiply(tensor1, tensor2):
    """
    Args:
        tensor1 (EagerTensor): a tensor.
        tensor2 (EagerTensor): another tensor.

    Returns:
        EagerTensor: resulting tensor.
    """
    # make sure these are tensors
    tensor1 = tf.constant(tensor1)
    tensor2 = tf.constant(tensor2)

    ### START CODE HERE ###
    product = tf.multiply(tensor1, tensor2)
    ### END CODE HERE ###
    return product

# Check your function
tmp_1 = tf.constant(np.array([[1, 2], [3, 4]]))
tmp_2 = tf.constant(np.array(2))
result = tf_multiply(tmp_1, tmp_2)
result

# Expected output:
# <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
# array([[2, 4],
#        [6, 8]])>
```
### Exercise 6 - [tf.add](https://www.tensorflow.org/api_docs/python/tf/add)

Returns x + y element-wise.

```python
# Add tensor1 and tensor2
def tf_add(tensor1, tensor2):
    """
    Args:
        tensor1 (EagerTensor): a tensor.
        tensor2 (EagerTensor): another tensor.

    Returns:
        EagerTensor: resulting tensor.
    """
    # make sure these are tensors
    tensor1 = tf.constant(tensor1)
    tensor2 = tf.constant(tensor2)

    ### START CODE HERE ###
    total = tf.add(tensor1, tensor2)
    ### END CODE HERE ###
    return total

# Check your function
tmp_1 = tf.constant(np.array([1, 2, 3]))
tmp_2 = tf.constant(np.array([4, 5, 6]))
tf_add(tmp_1, tmp_2)

# Expected output:
# <tf.Tensor: shape=(3,), dtype=int64, numpy=array([5, 7, 9])>
```
### Exercise 7 - Gradient Tape

Implement the function `tf_gradient_tape` by replacing the instances of `None` in the code below. The instructions are given in the code comments.

You can review the [docs](https://www.tensorflow.org/api_docs/python/tf/GradientTape) or revisit the lectures to complete this task.

```python
def tf_gradient_tape(x):
    """
    Args:
        x (EagerTensor): a tensor.

    Returns:
        EagerTensor: Derivative of z with respect to the input tensor x.
    """
    with tf.GradientTape() as t:
        ### START CODE HERE ###
        # Record the actions performed on tensor x with `watch`
        t.watch(x)

        # Define a polynomial of form 3x^3 - 2x^2 + x
        y = (3 * (x ** 3)) - (2 * (x ** 2)) + x

        # Obtain the sum of the elements in variable y
        z = tf.reduce_sum(y)

        # Get the derivative of z with respect to the original input tensor x
        dz_dx = t.gradient(z, x)
        ### END CODE HERE

    return dz_dx

# Check your function
tmp_x = tf.constant(2.0)
dz_dx = tf_gradient_tape(tmp_x)
result = dz_dx.numpy()
result

# Expected output:
# 29.0
```
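As a quick sanity check on the expected output: for $y = 3x^3 - 2x^2 + x$ we have $\frac{dy}{dx} = 9x^2 - 4x + 1$, so at $x = 2$ the derivative is $36 - 8 + 1 = 29$, matching the expected `29.0`.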
## Exploratory Data Analysis

In this notebook, I have illuminated some of the strategies that one can use to explore the data and gain some insights about it. We will start from finding metadata about the data, to determining what techniques to use, to getting some important insights about the data. This is based on IBM's Data Analysis with Python course on Coursera.

### The Problem

The problem is to find the variables that impact the car price. For this problem, we will use a real-world dataset that details information about cars. The dataset used is an open-source dataset made available by Jeffrey C. Schlimmer. The one used in this notebook is hosted on the IBM Cloud. The dataset provides details of some cars. It includes properties like make, horse-power, price, wheel-type and so on.

### Loading data and finding the metadata

Import libraries:

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

%matplotlib inline
```
Load the data as a pandas dataframe:

```python
path = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv'
df = pd.read_csv(path)
df.head()
```
### Metadata: The columns' types

Finding the columns' types is an important step. It serves two purposes:

1. See if we need to convert some data. For example, price may be stored as strings instead of numbers. This is very important, as it could throw everything that we do afterwards off.
2. Find out what type of analysis we need to do with what column. After fixing the problems given above, the type of the object is often a great indicator of whether the data is categorical or numerical. This is important as it would determine what kind of exploratory analysis we can and want to do.

To find out the type, we can simply use the `.dtypes` property of the dataframe. Here's an example using the dataframe we loaded above.

```python
df.dtypes
```
From the results above, we can see that we can roughly divide the types into two categories: numeric (int64 and float64) and object. Although the object type can contain lots of things, it's often used to store string variables. A quick glance at the table tells us that there are no glaring errors in the object types. Now we divide them into two categories: numerical variables and categorical variables. Numerical, as the name states, are the variables that hold numerical data. Categorical variables hold strings that describe a certain property of the data (such as Audi as the make).

Make a special note that our target variable, price, is numerical. So the relationships we would be exploring would be between numerical-and-numerical data and numerical-and-categorical data.

### Relationship between Numerical Data

First we will explore the relationship between two numerical variables and see if we can learn some insights out of it. In the beginning, it's helpful to get the correlation between the variables. For this, we can use the `corr()` method to find out the correlation between all the variables. Do note that the method finds the Pearson correlation by default. Natively, pandas also supports the Spearman and Kendall Tau correlations. You can also pass in a custom callable if you want; check out the docs for more info. Here's how to do it with the dataframe that we have:

```python
df.corr()
```
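As an aside, here is a brief sketch of the alternative correlation methods mentioned above, selected via the `method` argument:

```python
# Spearman rank correlation instead of the default Pearson
df.corr(method='spearman')

# or a custom callable that maps two 1-D arrays to a scalar
df.corr(method=lambda a, b: np.corrcoef(a, b)[0, 1])
```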
Note that in the correlation table above, the diagonal elements are always one, because the correlation of a variable with itself is always one. Now, it seems somewhat daunting, and frankly, unnecessary to have this big of a table with correlations between things we don't care about (say bore and stroke). If we want to find out the correlation with just price, the `corrwith()` method is helpful. Here's how to do it:

```python
corr = df.corrwith(df['price'])

# Prettify
pd.DataFrame(data=corr.values, index=corr.index, columns=['Correlation'])
```
From the table above, we have some idea about what we can expect the relationships to look like. As a refresher, Pearson correlation values range in [-1, 1], with -1 and 1 implying a perfect linear relationship and 0 implying none. A positive value implies a positive relationship (value increases in response to an increment) and a negative value implies a negative relationship (value decreases in response to an increment). The next step is to have a more visual outlook on the relationships.

### Visualizing Relationships

Continuous numerical variables are variables that may contain any value within some range. In pandas dtypes, continuous numerical variables can have the type "int64" or "float64". Scatterplots are a great way to visualize these variables. To take it further, it's better to use a scatter plot with a regression line. This should also provide us with some preliminary ways to test our hypothesis about the relationship between the variables. In this notebook, we will be using the `regplot()` function from the `seaborn` package. Below are some examples.

#### Positive linear relationship

Let's plot "engine-size" vs "price", since the correlation between them seems strong.

```python
plt.figure(figsize=(5, 5))
sns.regplot(x="engine-size", y="price", data=df);
```
As the engine-size goes up, the price goes up. This indicates a decent positive direct correlation between these two variables. Thus, we can say that the engine size is a good predictor of price, since the regression line is almost a perfect diagonal line. We can also check this against the Pearson correlation we got above: it's 0.87, which makes sense.

Let's also try highway mpg, since its correlation with price is -0.7.

```python
sns.regplot(x="highway-mpg", y="price", data=df);
```
The graph shows a decent negative relationship, so highway mpg could be a potential indicator of price. However, it seems that the relationship isn't exactly linear, given the curve of the points. Let's try a higher order regression line.

```python
sns.regplot(x="highway-mpg", y="price", data=df, order=2);
```
There. It seems much better.

#### Weak Linear Relationship

Not all variables have to be correlated. Let's check out the graph of "peak-rpm" as a predictor variable for "price".

```python
sns.regplot(x="peak-rpm", y="price", data=df);
```
From the graph, it's clear that peak rpm is a bad indicator of price. There seems to be no relationship between them; it looks almost random. A quick check of the correlation value confirms this: it is -0.1, very close to zero, implying no relationship. Although there are cases in which a low value can be misleading, that's usually for non-linear relationships in which the value goes down and up, and the graph confirms there is none here.

### Relationship between Numerical and Categorical Data

Categorical variables, as their name implies, divide the data into certain categories. They essentially describe a 'characteristic' of the data unit, and are often selected from a small group of categories. Although they commonly have the "object" type, it's possible for them to have "int64" too (for example 'Level of happiness').

#### Visualizing with Boxplots

Boxplots are a great way to visualize such relationships, as they show the spread of the data. You can use the `boxplot()` function in the seaborn package. Alternatively, you can use boxen or violin plots too. Here's an example plotting the relationship between "body-style" and "price":

```python
sns.boxplot(x="body-style", y="price", data=df);
```
We can infer that there is likely to be no significant relationship, as there is a decent overlap between the distributions. Let's examine "engine-location" and "price":

```python
sns.boxplot(x="engine-location", y="price", data=df);
```
Although there are a lot of outliers for the front, the distribution of price between these two engine-location categories is distinct enough to take engine-location as a potentially good predictor of price.

Let's examine "drive-wheels" and "price":

```python
sns.boxplot(x="drive-wheels", y="price", data=df);
```
Here we see that the distribution of price between the different drive-wheels categories differs; as such, drive-wheels could potentially be a predictor of price.

### Statistical method for checking a significant relationship - ANOVA

Although visualisation is helpful, it does not give us a concrete and certain picture in this case (and often in others). So, it follows that we would want a metric to evaluate it by. For correlation between a categorical and a continuous variable, there are various tests; the ANOVA family of tests is a common one to use.

The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. Do note that ANOVA is an _omnibus_ test statistic and it can't tell you which groups are the ones that have a correlation among them, only that there are at least two groups with a significant difference.

In python, we can calculate the ANOVA statistic fairly easily using the `scipy.stats` module. The function `f_oneway()` calculates and returns:

__F-test score__: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means, although the degree of 'largeness' differs from data to data. You can use the F-table to find the critical F-value, by using the significance level and the degrees of freedom for the numerator and denominator, and compare it with the calculated F-test score.

__P-value__: The p-value tells how statistically significant our calculated score is.

If the variables are strongly correlated, the expectation is that ANOVA will return a sizeable F-test score and a small p-value.

#### Drive Wheels

Since ANOVA analyzes the difference between different groups of the same variable, the `groupby()` function will come in handy. With it, we can easily and concisely separate the dataset into groups of drive-wheels. Essentially, the function allows us to split the dataset into groups and perform calculations on the groups moving forward. Check out Grouping below for more explanation.

To see if different types of 'drive-wheels' impact 'price', we group the data:

```python
grouped_anova = df[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_anova.head(2)
```
We can obtain the values of a group using the `get_group()` method:

```python
grouped_anova.get_group('4wd')['price']
```
Finally, we use the function `f_oneway()` to obtain the F-test score and P-value.

```python
# ANOVA
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'],
                              grouped_anova.get_group('rwd')['price'],
                              grouped_anova.get_group('4wd')['price'])

print("ANOVA results: F=", f_val, ", P =", p_val)
```

```
ANOVA results: F= 67.95406500780399 , P = 3.3945443577151245e-23
```
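As noted above, the F-test score can also be compared against a critical F-value from the F-distribution. A minimal sketch, assuming a 5% significance level (the degrees of freedom follow from the 3 groups and the number of price observations):

```python
k = 3                    # number of drive-wheels groups
n = df['price'].count()  # number of observations
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)
print(f_crit)  # our F = 67.95 far exceeds this critical value
```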
From the result, we can see that we have a large F-test score and a very small p-value. Still, we should check whether all three tested groups are this highly correlated when compared separately.

#### fwd and rwd

```python
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'],
                              grouped_anova.get_group('rwd')['price'])

print("ANOVA results: F=", f_val, ", P =", p_val)
```

```
ANOVA results: F= 130.5533160959111 , P = 2.2355306355677845e-23
```
Seems like the result is significant and they are correlated. Let's examine the other groups.

#### 4wd and rwd

```python
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'],
                              grouped_anova.get_group('rwd')['price'])

print("ANOVA results: F=", f_val, ", P =", p_val)
```

```
ANOVA results: F= 8.580681368924756 , P = 0.004411492211225333
```
#### 4wd and fwd

```python
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'],
                              grouped_anova.get_group('fwd')['price'])

print("ANOVA results: F=", f_val, ", P =", p_val)
```

```
ANOVA results: F= 0.665465750252303 , P = 0.41620116697845666
```

With a p-value of about 0.42, this last pair shows no significant difference: the prices of 4wd and fwd vehicles are not distinguishable by this test.
Relationship between Categorical Data: Corrected Cramer's VA good way to test relation between two categorical variable is Corrected Cramer's V. **Note:** A p-value close to zero means that our variables are very unlikely to be completely unassociated in some population. However, this does not mean the variables are strongly associated; a weak association in a large sample size may also result in p = 0.000.**General Rule of Thumb:*** V ∈ [0.1,0.3]: weak association* V ∈ [0.4,0.5]: medium association* V > 0.5: strong associationHere's how to do it in python:```pythonimport scipy.stats as ssimport pandas as pdimport numpy as npdef cramers_corrected_stat(x, y): """ calculate Cramers V statistic for categorial-categorial association. uses correction from Bergsma and Wicher, Journal of the Korean Statistical Society 42 (2013): 323-328 """ result = -1 if len(x.value_counts()) == 1: print("First variable is constant") elif len(y.value_counts()) == 1: print("Second variable is constant") else: conf_matrix = pd.crosstab(x, y) if conf_matrix.shape[0] == 2: correct = False else: correct = True chi2, p = ss.chi2_contingency(conf_matrix, correction=correct)[0:2] n = sum(conf_matrix.sum()) phi2 = chi2/n r, k = conf_matrix.shape phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1)) rcorr = r - ((r-1)**2)/(n-1) kcorr = k - ((k-1)**2)/(n-1) result = np.sqrt(phi2corr / min((kcorr-1), (rcorr-1))) return round(result, 6), round(p, 6)``` Descriptive Statistical Analysis Although the insights gained above are significant, it's clear we need more work. Since we are exploring the data, performing some common and useful descriptive statistical analysis would be nice. However, there are a lot of them and would require a lot of work to do them by scratch. Fortunately, `pandas` library has a neat method that computes all of them for us.The `describe()` method, when invoked on a dataframe automatically computes basic statistics for all continuous variables. Do note that any NaN values are automatically skipped in these statistics. By default, it will show stats for numerical data.Here's what it will show:* Count of that variable* Mean* Standard Deviation (std) * Minimum Value* IQR (Interquartile Range: 25%, 50% and 75%)* Maximum ValueIf you want, you can change the percentiles too. Check out the docs for that. Here's how to do it in our dataframe:
df.describe()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
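The `percentiles` argument lets you pick which percentiles `describe()` reports; a small sketch (the percentile list here is an illustrative choice):

```python
# Report the 10th and 90th percentiles in addition to the quartiles
df.describe(percentiles=[.1, .25, .5, .75, .9])
```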
To get information about categorical variables, we need to specifically tell pandas to include them. For categorical variables, it shows:
* Count
* Unique values
* The most common value or 'top'
* Frequency of the 'top'
df.describe(include=['object'])
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Value Counts

Sometimes, we need to understand the distribution of the categorical data. This could mean understanding how many units of each characteristic/variable we have. `value_counts()` is a method in pandas that can help with this. If we use it with a series, it will give us the unique values and how many of them exist.

_Caution:_ Using it with a DataFrame works like a count of unique rows by combination of all columns (as in SQL). This may or may not be what you want. For example, using it with drive-wheels and engine-location would give you the number of rows for each unique pair of values. Here's an example of using it with the drive-wheels column.
df['drive-wheels'].value_counts().to_frame()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
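If proportions are more informative than raw counts, `value_counts` also supports normalization; a quick sketch:

```python
# Proportions instead of raw counts; normalize=True is a standard value_counts option
df['drive-wheels'].value_counts(normalize=True).to_frame()
```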
The `.to_frame()` method is added to turn the result into a dataframe, which makes it look better. You can play around and rename the column and index names if you want. We can repeat the above process for the variable 'engine-location'.
df['engine-location'].value_counts().to_frame()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Examining the value counts shows that engine location would not be a good predictor variable for the price: we only have three cars with a rear engine and 198 with an engine in the front, so this result is skewed. Thus, we are not able to draw any conclusions about the engine location.

Grouping

Grouping is a useful technique to explore the data. With grouping, we can split data and apply various transforms. For example, we can find out the mean price of different body styles. This helps us gain more insight into whether there's a relationship between our target variable and the variable we are grouping on.

Although often used on categorical data, grouping can also be used with numerical data by separating it into categories (a short sketch of this appears after the next cell). For example, we might separate cars by price into affordable and luxury groups.

In pandas, we can use the `groupby()` method. Let's try it with the 'drive-wheels' variable. First we will find out how many unique values there are, using the `unique()` method.
df['drive-wheels'].unique()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
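Before moving on, here is a brief sketch of the numerical-to-categorical grouping idea mentioned above, using pandas' `cut`. The bin edges and labels are illustrative assumptions, not values derived from this dataset:

```python
import pandas as pd

# Bin prices into two illustrative categories, then group by the bins
price_groups = pd.cut(df['price'], bins=[0, 15000, 50000],
                      labels=['affordable', 'luxury'])
df.groupby(price_groups)['price'].mean()
```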
If we want to know, on average, which type of drive wheel is most valuable, we can group by "drive-wheels" and then average the prices.
df[['drive-wheels','body-style','price']].groupby(['drive-wheels']).mean()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
From our data, it seems rear-wheel-drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel drive are approximately the same in price.

It's also possible to group by multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations of 'drive-wheels' and 'body-style'.

Let's store the result in the variable `grouped_by_wheels_and_body`.
grouped_by_wheels_and_body = df[['drive-wheels','body-style','price']].groupby(['drive-wheels','body-style']).mean()
grouped_by_wheels_and_body
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Although incredibly useful, the result is a little hard to read. It's better to convert it to a pivot table.

A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. There are various ways to create one; one option is the `pivot()` method. However, with a multi-index group like the one above, we can simply call the `unstack()` method.
grouped_by_wheels_and_body = grouped_by_wheels_and_body.unstack()
grouped_by_wheels_and_body
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
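As an aside, the same pivot table can be built in one step with pandas' `pivot_table`; a sketch (assuming `pd` is the usual pandas import):

```python
import pandas as pd

# Equivalent pivot table built directly, averaging price per cell
pd.pivot_table(df, values='price', index='drive-wheels',
               columns='body-style', aggfunc='mean')
```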
We often won't have data for some of the pivot cells. These are commonly filled with the value 0, but any other value could potentially be used as well, such as the mean or some other flag (a sketch using the mean follows the next cell).
grouped_by_wheels_and_body.fillna(0)
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
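For example, a hedged sketch of filling the empty cells with the overall mean price instead of 0:

```python
# Fill empty pivot cells with the overall mean price rather than 0
grouped_by_wheels_and_body.fillna(df['price'].mean())
```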
Let's do the same for 'body-style' only.
df[['price', 'body-style']].groupby('body-style').mean()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Visualizing Groups

Heatmaps are a great way to visualize groups, and they can show relationships clearly in this case. Do note that you need to be careful with the colour scheme: choosing an appropriate scheme is not only part of the 'story' you tell with the data, it also affects how the data is perceived. [This resource](https://matplotlib.org/tutorials/colors/colormaps.html) gives a great idea of what to choose as a colour scheme and when each is appropriate, and it includes samples of the schemes for a quick preview.

Here's an example of using a heatmap with the pivot table we created, via the `seaborn` package.
sns.heatmap(grouped_by_wheels_and_body, cmap="Blues");
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
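If the raw numbers matter, the heatmap can be annotated and the axes labeled explicitly; a small sketch (`annot` and `fmt` are standard seaborn options):

```python
# Annotated heatmap with explicit axis labels; colormap choice is illustrative
ax = sns.heatmap(grouped_by_wheels_and_body, cmap="Blues", annot=True, fmt=".0f")
ax.set_xlabel("body-style")
ax.set_ylabel("drive-wheels")
```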
This heatmap plots the target variable (price), encoded as colour, against the variables 'drive-wheels' and 'body-style' on the vertical and horizontal axes respectively. This allows us to visualize how the price is related to 'drive-wheels' and 'body-style'.

Correlation and Causation

Correlation and causation are terms that are often used, confused with each other, or, worse, assumed to imply each other. Here's a quick overview:

__Correlation__: The degree of association (or resemblance) between variables.

__Causation__: A relationship of cause and effect between variables.

It is important to know the difference between these two. Note that correlation does __not__ imply causation. Determining correlation is much simpler: we can almost always use methods such as Pearson correlation, the ANOVA method, and graphs. Determining causation may require independent experimentation.

Pearson Correlation

Described earlier, Pearson correlation is a great way to measure linear dependence between two variables. It's also the default method in the `corr()` method.
df.corr()
_____no_output_____
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Cramer's V

Cramer's V is a great method to measure the relationship between two categorical variables. See the Cramer's V section above for details and an implementation.

**General Rule of Thumb:**
* V ∈ [0.1,0.3]: weak association
* V ∈ [0.4,0.5]: medium association
* V > 0.5: strong association

ANOVA Method

As discussed previously, the ANOVA method is great for determining whether there's a significant relationship between a categorical and a continuous variable. Check out the ANOVA section above for more details.

Now, just knowing the correlation statistic is not enough; we also need to know whether the relationship is statistically significant. We can use the p-value for that.

P-value

In very simple terms, the p-value is the probability of observing a result at least as extreme as ours purely by random chance. For example, with a significance level of 0.05, we accept roughly a 5% chance of declaring a relationship significant when it is actually due to chance.

It's recommended to define a tolerance level for the p-value beforehand. Here are some common interpretations:
* The p-value is $<$ 0.001: strong evidence that the correlation is significant.
* The p-value is $<$ 0.05: moderate evidence that the correlation is significant.
* The p-value is $<$ 0.1: weak evidence that the correlation is significant.
* The p-value is $>$ 0.1: no evidence that the correlation is significant.

We can obtain this information using the `stats` module in the `scipy` library. Let's calculate it for wheel-base vs price.
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
The Pearson Correlation Coefficient is 0.5846418222655081 with a P-value of P = 8.076488270732989e-20
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585). Let's try one more example: horsepower vs price.
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
The Pearson Correlation Coefficient is 0.809574567003656 with a P-value of P = 6.369057428259557e-48
Apache-2.0
Exploratory Data Analysis.ipynb
full-void/data-science-concepts
Bogumiła Walkowiak [email protected]
Joachim Mąkowski [email protected]

Intelligent Systems: Reasoning and Recognition

Recognizing Digits using Neural Networks

1. Introduction

The MNIST (Modified National Institute of Standards and Technology) dataset is a large collection of handwritten digits composed of 60,000 training images and 10,000 test images. The black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced gray-scale levels. Our task was to design and evaluate neural network architectures that can recognize hand-drawn digits using this grayscale data set.

2. Data preparation

First of all, we downloaded the MNIST data. We decided to combine the train and test sets provided by the MNIST dataset and then split the data into a training set (90%) and a test set (10%). Later in the project, we'll also create a validation set, so the final split of the data will look like this: training data 80%, validating data 10%, and testing data 10%.
import numpy as np
import tensorflow.compat.v1.keras.backend as K
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_curve, auc
from sklearn.metrics import confusion_matrix, plot_confusion_matrix

#physical_devices = tf.config.list_physical_devices('GPU')
#tf.config.experimental.set_memory_growth(physical_devices[0], True)

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()  # loading the data set
X = np.concatenate((x_train, x_test))
y = np.concatenate([y_train, y_test])

train_ratio = 0.9
test_ratio = 0.1
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=test_ratio)

plt.imshow(x_train[0], cmap='gray')

x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255

# images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)

y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
_____no_output_____
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
3. Creating neural networks

We decided to create a helper function so we can write less code. The function trains the model passed as an argument, prints the model's loss, accuracy, precision, recall, and AUC for each digit, and plots the training history.
def predict_model(model, callbacks=[], batch_size=128, epochs=4, lr=0.001):
    adam = keras.optimizers.Adam(lr=lr)
    model.compile(loss="categorical_crossentropy", optimizer=adam,
                  metrics=["accuracy", "Precision", "Recall"])
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
                        validation_split=0.11, callbacks=callbacks)
    score = model.evaluate(x_test, y_test, verbose=0)
    y_pred = model.predict(x_test)
    print("Test loss:", score[0])
    print("Test accuracy:", score[1])
    print("Test precision:", score[2])
    print("Test recall:", score[3])
    y_pred = np.argmax(y_pred, axis=1)
    y_test1 = np.argmax(y_test, axis=1)
    print("Test f1 score:", f1_score(y_test1, y_pred, average='micro'))
    for i in range(10):
        temp_pred = [1 if x == i else 0 for x in y_pred]
        temp_test = [1 if x == i else 0 for x in y_test1]
        fpr, tpr, thresholds = roc_curve(temp_test, temp_pred)
        print("Test AUC for digit:", i, auc(fpr, tpr))
    # summarize history for accuracy
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'val'], loc='upper left')
    plt.show()
    # summarize history for loss
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'val'], loc='upper left')
    plt.show()
_____no_output_____
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
We added an instance of the EarlyStopping class, which provides a mechanism for stopping the algorithm before the whole training process is done. When 3 consecutive epochs fail to achieve a better result (in our case, higher validation accuracy), training is stopped and the best model's weights are restored.
# simple early stopping
es = keras.callbacks.EarlyStopping(monitor='val_accuracy', mode='max', verbose=1,
                                   patience=3, restore_best_weights=True)
_____no_output_____
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
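EarlyStopping only restores the best weights in memory. If we also wanted the best model persisted to disk, Keras' ModelCheckpoint callback is a natural companion; a sketch not used in the experiments below (the filename is illustrative):

```python
# Save the best-so-far model to disk whenever validation accuracy improves
mc = keras.callbacks.ModelCheckpoint('best_mnist_model.h5', monitor='val_accuracy',
                                     mode='max', save_best_only=True, verbose=1)
# It could be passed alongside early stopping, e.g. predict_model(model, [es, mc])
```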
Basic Fully Connected Multi-layer Network

The first network we created is a basic fully connected multi-layer network:
model_fc = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(64, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
Model: "sequential_20" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_59 (Dense) (None, 28, 28, 32) 64 _________________________________________________________________ dense_60 (Dense) (None, 28, 28, 64) 2112 _________________________________________________________________ flatten_20 (Flatten) (None, 50176) 0 _________________________________________________________________ dense_61 (Dense) (None, 128) 6422656 _________________________________________________________________ dropout_19 (Dropout) (None, 128) 0 _________________________________________________________________ dense_62 (Dense) (None, 10) 1290 ================================================================= Total params: 6,426,122 Trainable params: 6,426,122 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 6s 13ms/step - loss: 0.5066 - accuracy: 0.8487 - precision: 0.9149 - recall: 0.7786 - val_loss: 0.1463 - val_accuracy: 0.9586 - val_precision: 0.9667 - val_recall: 0.9504 Epoch 2/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1643 - accuracy: 0.9510 - precision: 0.9601 - recall: 0.9434 - val_loss: 0.1087 - val_accuracy: 0.9683 - val_precision: 0.9707 - val_recall: 0.9661 Epoch 3/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1254 - accuracy: 0.9611 - precision: 0.9680 - recall: 0.9554 - val_loss: 0.0970 - val_accuracy: 0.9729 - val_precision: 0.9770 - val_recall: 0.9698 Epoch 4/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1044 - accuracy: 0.9666 - precision: 0.9716 - recall: 0.9629 - val_loss: 0.0932 - val_accuracy: 0.9736 - val_precision: 0.9756 - val_recall: 0.9711 Epoch 5/100 439/439 [==============================] - 5s 12ms/step - loss: 0.0893 - accuracy: 0.9704 - precision: 0.9740 - recall: 0.9670 - val_loss: 0.0962 - val_accuracy: 0.9720 - val_precision: 0.9748 - val_recall: 0.9710 Epoch 6/100 439/439 [==============================] - 5s 12ms/step - loss: 0.0824 - accuracy: 0.9726 - precision: 0.9757 - recall: 0.9695 - val_loss: 0.0853 - val_accuracy: 0.9773 - val_precision: 0.9801 - val_recall: 0.9746 Epoch 7/100 439/439 [==============================] - 5s 12ms/step - loss: 0.0750 - accuracy: 0.9763 - precision: 0.9789 - recall: 0.9733 - val_loss: 0.0796 - val_accuracy: 0.9769 - val_precision: 0.9796 - val_recall: 0.9755 Epoch 8/100 439/439 [==============================] - 5s 12ms/step - loss: 0.0664 - accuracy: 0.9776 - precision: 0.9805 - recall: 0.9752 - val_loss: 0.0869 - val_accuracy: 0.9766 - val_precision: 0.9787 - val_recall: 0.9755 Epoch 9/100 439/439 [==============================] - 5s 12ms/step - loss: 0.0621 - accuracy: 0.9788 - precision: 0.9806 - recall: 0.9770 - val_loss: 0.0832 - val_accuracy: 0.9766 - val_precision: 0.9783 - val_recall: 0.9758 Restoring model weights from the end of the best epoch. 
Epoch 00009: early stopping Test loss: 0.09832347929477692 Test accuracy: 0.973714292049408 Test precision: 0.9774295687675476 Test recall: 0.9712857007980347 Test f1 score: 0.9737142857142858 Test AUC for digit: 0 0.9907399351644153 Test AUC for digit: 1 0.9923331284991935 Test AUC for digit: 2 0.9870421148228989 Test AUC for digit: 3 0.9851853814963031 Test AUC for digit: 4 0.9845406523282492 Test AUC for digit: 5 0.9781305833322657 Test AUC for digit: 6 0.9906349206349208 Test AUC for digit: 7 0.9875151552516612 Test AUC for digit: 8 0.974920634920635 Test AUC for digit: 9 0.9815057646170433
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
This basic model achieves about 97.5% accuracy on the test set. It is made of 2 hidden layers with a reasonable number of units. Training this model is quite fast (on my laptop it was 5s per epoch, using the GPU).

As we see in the plots, our model started to overfit: validation accuracy and loss stayed at the same level, while training accuracy kept growing and training loss kept decreasing. Next, we wanted to demonstrate the effect of changing various parameters of the network.

Different number of layers
model_fc_small = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc_small.summary()
predict_model(model_fc_small, [es], epochs=100)

model_fc_large = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(64, activation="relu"),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1024, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc_large.summary()
predict_model(model_fc_large, [es], epochs=100)
Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_7 (Dense) (None, 28, 28, 32) 64 _________________________________________________________________ dense_8 (Dense) (None, 28, 28, 64) 2112 _________________________________________________________________ flatten_2 (Flatten) (None, 50176) 0 _________________________________________________________________ dense_9 (Dense) (None, 4096) 205524992 _________________________________________________________________ dense_10 (Dense) (None, 1024) 4195328 _________________________________________________________________ dense_11 (Dense) (None, 64) 65600 _________________________________________________________________ dropout_2 (Dropout) (None, 64) 0 _________________________________________________________________ dense_12 (Dense) (None, 10) 650 ================================================================= Total params: 209,788,746 Trainable params: 209,788,746 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 55s 120ms/step - loss: 0.5493 - accuracy: 0.8354 - precision: 0.9044 - recall: 0.7825 - val_loss: 0.1236 - val_accuracy: 0.9652 - val_precision: 0.9722 - val_recall: 0.9605 Epoch 2/100 439/439 [==============================] - 33s 75ms/step - loss: 0.1123 - accuracy: 0.9679 - precision: 0.9741 - recall: 0.9620 - val_loss: 0.0866 - val_accuracy: 0.9762 - val_precision: 0.9798 - val_recall: 0.9717 Epoch 3/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0735 - accuracy: 0.9787 - precision: 0.9823 - recall: 0.9749 - val_loss: 0.0845 - val_accuracy: 0.9747 - val_precision: 0.9812 - val_recall: 0.9709 Epoch 4/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0509 - accuracy: 0.9847 - precision: 0.9870 - recall: 0.9826 - val_loss: 0.0801 - val_accuracy: 0.9778 - val_precision: 0.9823 - val_recall: 0.9752 Epoch 5/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0464 - accuracy: 0.9860 - precision: 0.9879 - recall: 0.9842 - val_loss: 0.0887 - val_accuracy: 0.9781 - val_precision: 0.9806 - val_recall: 0.9762 Epoch 6/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0356 - accuracy: 0.9898 - precision: 0.9909 - recall: 0.9882 - val_loss: 0.0897 - val_accuracy: 0.9804 - val_precision: 0.9828 - val_recall: 0.9785 Epoch 7/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0276 - accuracy: 0.9921 - precision: 0.9929 - recall: 0.9913 - val_loss: 0.0870 - val_accuracy: 0.9821 - val_precision: 0.9832 - val_recall: 0.9811 Epoch 8/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0234 - accuracy: 0.9928 - precision: 0.9935 - recall: 0.9920 - val_loss: 0.0820 - val_accuracy: 0.9804 - val_precision: 0.9823 - val_recall: 0.9776 Epoch 9/100 439/439 [==============================] - 33s 75ms/step - loss: 0.0215 - accuracy: 0.9939 - precision: 0.9947 - recall: 0.9931 - val_loss: 0.1025 - val_accuracy: 0.9801 - val_precision: 0.9812 - val_recall: 0.9786 Epoch 10/100 439/439 [==============================] - 32s 73ms/step - loss: 0.0220 - accuracy: 0.9939 - precision: 0.9943 - recall: 0.9933 - val_loss: 0.0775 - val_accuracy: 0.9828 - val_precision: 0.9849 - val_recall: 0.9821 Epoch 11/100 439/439 [==============================] - 34s 77ms/step - loss: 0.0177 - accuracy: 0.9949 - precision: 0.9953 - 
recall: 0.9944 - val_loss: 0.0929 - val_accuracy: 0.9843 - val_precision: 0.9850 - val_recall: 0.9831 Epoch 12/100 439/439 [==============================] - 34s 79ms/step - loss: 0.0138 - accuracy: 0.9958 - precision: 0.9962 - recall: 0.9955 - val_loss: 0.0812 - val_accuracy: 0.9815 - val_precision: 0.9828 - val_recall: 0.9808 Epoch 13/100 439/439 [==============================] - 34s 77ms/step - loss: 0.0136 - accuracy: 0.9963 - precision: 0.9965 - recall: 0.9960 - val_loss: 0.0883 - val_accuracy: 0.9824 - val_precision: 0.9834 - val_recall: 0.9818 Epoch 14/100 439/439 [==============================] - 34s 77ms/step - loss: 0.0117 - accuracy: 0.9965 - precision: 0.9967 - recall: 0.9963 - val_loss: 0.0809 - val_accuracy: 0.9844 - val_precision: 0.9858 - val_recall: 0.9844 Epoch 15/100 439/439 [==============================] - 34s 78ms/step - loss: 0.0107 - accuracy: 0.9971 - precision: 0.9972 - recall: 0.9968 - val_loss: 0.1161 - val_accuracy: 0.9805 - val_precision: 0.9811 - val_recall: 0.9798 Epoch 16/100 439/439 [==============================] - 33s 74ms/step - loss: 0.0152 - accuracy: 0.9959 - precision: 0.9960 - recall: 0.9955 - val_loss: 0.1062 - val_accuracy: 0.9815 - val_precision: 0.9827 - val_recall: 0.9814 Epoch 17/100 439/439 [==============================] - 34s 78ms/step - loss: 0.0097 - accuracy: 0.9971 - precision: 0.9973 - recall: 0.9970 - val_loss: 0.0868 - val_accuracy: 0.9830 - val_precision: 0.9841 - val_recall: 0.9824 Epoch 00017: early stopping Test loss: 0.09394106268882751 Test accuracy: 0.9814285635948181 Test precision: 0.9819768071174622 Test recall: 0.9807142615318298 Test f1 score: 0.9814285714285714 Test AUC for digit: 0 0.9988155401137082 Test AUC for digit: 1 0.9947057004231706 Test AUC for digit: 2 0.9863037767355775 Test AUC for digit: 3 0.9898547027800599 Test AUC for digit: 4 0.9875124205303294 Test AUC for digit: 5 0.9887902209336588 Test AUC for digit: 6 0.9899999999999999 Test AUC for digit: 7 0.9909183219141172 Test AUC for digit: 8 0.9815079365079366 Test AUC for digit: 9 0.987926026320382
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Firstly, we tried different numbers of hidden layers. With 1 hidden layer the model achieved around 96.5% accuracy on the test set. The model is underfitted because this number of layers is not enough to capture the complexity of our data.

The model with 4 hidden layers achieved 98.1% accuracy, but the training time was pretty long (34s per epoch). That is because this model had to find weights for over 200,000,000 parameters (compared to about 1,600,000 parameters for the model with 1 hidden layer). We can assume that after the second epoch our model is overfitted, because the differences between validation and training loss and accuracy are high.

Different number of units per layer
model_fc = keras.Sequential([
    layers.Dense(10, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(20, activation="relu"),
    layers.Flatten(),
    layers.Dense(40, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
Model: "sequential_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_13 (Dense) (None, 28, 28, 10) 20 _________________________________________________________________ dense_14 (Dense) (None, 28, 28, 20) 220 _________________________________________________________________ flatten_3 (Flatten) (None, 15680) 0 _________________________________________________________________ dense_15 (Dense) (None, 40) 627240 _________________________________________________________________ dropout_3 (Dropout) (None, 40) 0 _________________________________________________________________ dense_16 (Dense) (None, 10) 410 ================================================================= Total params: 627,890 Trainable params: 627,890 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 4s 8ms/step - loss: 0.6942 - accuracy: 0.7885 - precision: 0.8959 - recall: 0.6762 - val_loss: 0.2011 - val_accuracy: 0.9401 - val_precision: 0.9534 - val_recall: 0.9297 Epoch 2/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2938 - accuracy: 0.9101 - precision: 0.9330 - recall: 0.8907 - val_loss: 0.1577 - val_accuracy: 0.9567 - val_precision: 0.9660 - val_recall: 0.9476 Epoch 3/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2441 - accuracy: 0.9252 - precision: 0.9418 - recall: 0.9100 - val_loss: 0.1455 - val_accuracy: 0.9571 - val_precision: 0.9666 - val_recall: 0.9519 Epoch 4/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2146 - accuracy: 0.9344 - precision: 0.9488 - recall: 0.9216 - val_loss: 0.1342 - val_accuracy: 0.9613 - val_precision: 0.9710 - val_recall: 0.9534 Epoch 5/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2027 - accuracy: 0.9370 - precision: 0.9503 - recall: 0.9245 - val_loss: 0.1228 - val_accuracy: 0.9639 - val_precision: 0.9711 - val_recall: 0.9589 Epoch 6/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1886 - accuracy: 0.9395 - precision: 0.9515 - recall: 0.9284 - val_loss: 0.1218 - val_accuracy: 0.9613 - val_precision: 0.9689 - val_recall: 0.9582 Epoch 7/100 439/439 [==============================] - 3s 8ms/step - loss: 0.1840 - accuracy: 0.9408 - precision: 0.9528 - recall: 0.9316 - val_loss: 0.1211 - val_accuracy: 0.9649 - val_precision: 0.9713 - val_recall: 0.9612 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1801 - accuracy: 0.9424 - precision: 0.9536 - recall: 0.9319 - val_loss: 0.1177 - val_accuracy: 0.9646 - val_precision: 0.9717 - val_recall: 0.9613 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1690 - accuracy: 0.9457 - precision: 0.9563 - recall: 0.9362 - val_loss: 0.1225 - val_accuracy: 0.9657 - val_precision: 0.9714 - val_recall: 0.9615 Epoch 10/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1685 - accuracy: 0.9465 - precision: 0.9569 - recall: 0.9375 - val_loss: 0.1151 - val_accuracy: 0.9675 - val_precision: 0.9729 - val_recall: 0.9644 Epoch 11/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1615 - accuracy: 0.9484 - precision: 0.9589 - recall: 0.9397 - val_loss: 0.1208 - val_accuracy: 0.9658 - val_precision: 0.9719 - val_recall: 0.9596 Epoch 12/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1573 - accuracy: 0.9494 - precision: 0.9584 - recall: 0.9422 - 
val_loss: 0.1155 - val_accuracy: 0.9685 - val_precision: 0.9716 - val_recall: 0.9641 Epoch 13/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1518 - accuracy: 0.9498 - precision: 0.9592 - recall: 0.9414 - val_loss: 0.1165 - val_accuracy: 0.9658 - val_precision: 0.9719 - val_recall: 0.9616 Epoch 14/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1459 - accuracy: 0.9534 - precision: 0.9620 - recall: 0.9461 - val_loss: 0.1136 - val_accuracy: 0.9670 - val_precision: 0.9723 - val_recall: 0.9641 Epoch 15/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1485 - accuracy: 0.9509 - precision: 0.9603 - recall: 0.9437 - val_loss: 0.1178 - val_accuracy: 0.9667 - val_precision: 0.9704 - val_recall: 0.9641 Epoch 00015: early stopping Test loss: 0.12980997562408447 Test accuracy: 0.9620000123977661 Test precision: 0.9678164124488831 Test recall: 0.9580000042915344 Test f1 score: 0.962 Test AUC for digit: 0 0.9925116601919346 Test AUC for digit: 1 0.9898799893821776 Test AUC for digit: 2 0.9862624960357879 Test AUC for digit: 3 0.9697247767206424 Test AUC for digit: 4 0.9822645042790503 Test AUC for digit: 5 0.9736597292505758 Test AUC for digit: 6 0.9884920634920635 Test AUC for digit: 7 0.9744171471575807 Test AUC for digit: 8 0.9637301587301589 Test AUC for digit: 9 0.9673563983253335
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Here we trained a model with a small number of units in each layer. The model didn't achieve its best possible performance. We can see that training accuracy is much lower than validation accuracy. This is caused by an insufficient number of units: the model lacks the capacity to fit the full training data well, even though it still generalizes reasonably to the validation data.
model_fc = keras.Sequential([
    layers.Dense(100, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(200, activation="relu"),
    layers.Flatten(),
    layers.Dense(400, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
Model: "sequential_4" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_17 (Dense) (None, 28, 28, 100) 200 _________________________________________________________________ dense_18 (Dense) (None, 28, 28, 200) 20200 _________________________________________________________________ flatten_4 (Flatten) (None, 156800) 0 _________________________________________________________________ dense_19 (Dense) (None, 400) 62720400 _________________________________________________________________ dropout_4 (Dropout) (None, 400) 0 _________________________________________________________________ dense_20 (Dense) (None, 10) 4010 ================================================================= Total params: 62,744,810 Trainable params: 62,744,810 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 20s 45ms/step - loss: 0.3810 - accuracy: 0.8801 - precision: 0.9206 - recall: 0.8528 - val_loss: 0.1177 - val_accuracy: 0.9678 - val_precision: 0.9722 - val_recall: 0.9632 Epoch 2/100 439/439 [==============================] - 19s 44ms/step - loss: 0.1207 - accuracy: 0.9633 - precision: 0.9690 - recall: 0.9591 - val_loss: 0.0924 - val_accuracy: 0.9750 - val_precision: 0.9782 - val_recall: 0.9726 Epoch 3/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0939 - accuracy: 0.9698 - precision: 0.9735 - recall: 0.9668 - val_loss: 0.0772 - val_accuracy: 0.9772 - val_precision: 0.9801 - val_recall: 0.9756 Epoch 4/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0664 - accuracy: 0.9785 - precision: 0.9809 - recall: 0.9767 - val_loss: 0.0717 - val_accuracy: 0.9804 - val_precision: 0.9828 - val_recall: 0.9789 Epoch 5/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0588 - accuracy: 0.9814 - precision: 0.9831 - recall: 0.9800 - val_loss: 0.0762 - val_accuracy: 0.9789 - val_precision: 0.9824 - val_recall: 0.9771 Epoch 6/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0463 - accuracy: 0.9847 - precision: 0.9860 - recall: 0.9832 - val_loss: 0.0695 - val_accuracy: 0.9808 - val_precision: 0.9831 - val_recall: 0.9798 Epoch 7/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0436 - accuracy: 0.9851 - precision: 0.9862 - recall: 0.9839 - val_loss: 0.0762 - val_accuracy: 0.9797 - val_precision: 0.9813 - val_recall: 0.9784 Epoch 8/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0367 - accuracy: 0.9878 - precision: 0.9884 - recall: 0.9864 - val_loss: 0.0631 - val_accuracy: 0.9823 - val_precision: 0.9836 - val_recall: 0.9808 Epoch 9/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0343 - accuracy: 0.9876 - precision: 0.9886 - recall: 0.9868 - val_loss: 0.0663 - val_accuracy: 0.9807 - val_precision: 0.9830 - val_recall: 0.9782 Epoch 10/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0300 - accuracy: 0.9894 - precision: 0.9902 - recall: 0.9889 - val_loss: 0.0815 - val_accuracy: 0.9828 - val_precision: 0.9831 - val_recall: 0.9820 Epoch 11/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0253 - accuracy: 0.9917 - precision: 0.9922 - recall: 0.9912 - val_loss: 0.0702 - val_accuracy: 0.9834 - val_precision: 0.9847 - val_recall: 0.9830 Epoch 12/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0224 - accuracy: 
0.9931 - precision: 0.9933 - recall: 0.9928 - val_loss: 0.0760 - val_accuracy: 0.9818 - val_precision: 0.9831 - val_recall: 0.9811 Epoch 13/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0239 - accuracy: 0.9915 - precision: 0.9921 - recall: 0.9910 - val_loss: 0.0783 - val_accuracy: 0.9798 - val_precision: 0.9818 - val_recall: 0.9789 Epoch 14/100 439/439 [==============================] - 19s 44ms/step - loss: 0.0202 - accuracy: 0.9927 - precision: 0.9931 - recall: 0.9925 - val_loss: 0.0734 - val_accuracy: 0.9808 - val_precision: 0.9828 - val_recall: 0.9795 Epoch 00014: early stopping Test loss: 0.08533725142478943 Test accuracy: 0.9795714020729065 Test precision: 0.9812401533126831 Test recall: 0.978857159614563 Test f1 score: 0.9795714285714285 Test AUC for digit: 0 0.993617156085807 Test AUC for digit: 1 0.9955623103018043 Test AUC for digit: 2 0.9869502393935624 Test AUC for digit: 3 0.9921123419818196 Test AUC for digit: 4 0.9906739460192185 Test AUC for digit: 5 0.9834615107924303 Test AUC for digit: 6 0.993095238095238 Test AUC for digit: 7 0.990522564434301 Test AUC for digit: 8 0.976031746031746 Test AUC for digit: 9 0.9826922857371192
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
In this model we see that it starts overfitting after the third epoch, which is caused by the excessive number of units.

Different learning rate
model_fc_01 = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(64, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc_01.summary()
predict_model(model_fc_01, [es], epochs=100, lr=0.05)
Model: "sequential_21" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_63 (Dense) (None, 28, 28, 32) 64 _________________________________________________________________ dense_64 (Dense) (None, 28, 28, 64) 2112 _________________________________________________________________ flatten_21 (Flatten) (None, 50176) 0 _________________________________________________________________ dense_65 (Dense) (None, 128) 6422656 _________________________________________________________________ dropout_20 (Dropout) (None, 128) 0 _________________________________________________________________ dense_66 (Dense) (None, 10) 1290 ================================================================= Total params: 6,426,122 Trainable params: 6,426,122 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 6s 13ms/step - loss: 2.5641 - accuracy: 0.5516 - precision: 0.7483 - recall: 0.4283 - val_loss: 0.4579 - val_accuracy: 0.8913 - val_precision: 0.9146 - val_recall: 0.8734 Epoch 2/100 439/439 [==============================] - 5s 12ms/step - loss: 0.6291 - accuracy: 0.8000 - precision: 0.8749 - recall: 0.7328 - val_loss: 0.2885 - val_accuracy: 0.9248 - val_precision: 0.9536 - val_recall: 0.8866 Epoch 3/100 439/439 [==============================] - 5s 12ms/step - loss: 0.5281 - accuracy: 0.8320 - precision: 0.8874 - recall: 0.7802 - val_loss: 0.2769 - val_accuracy: 0.9264 - val_precision: 0.9468 - val_recall: 0.9092 Epoch 4/100 439/439 [==============================] - 5s 12ms/step - loss: 0.5307 - accuracy: 0.8348 - precision: 0.8777 - recall: 0.7971 - val_loss: 0.2822 - val_accuracy: 0.9258 - val_precision: 0.9504 - val_recall: 0.8993 Epoch 5/100 439/439 [==============================] - 5s 12ms/step - loss: 0.6051 - accuracy: 0.8107 - precision: 0.8614 - recall: 0.7684 - val_loss: 0.2608 - val_accuracy: 0.9299 - val_precision: 0.9458 - val_recall: 0.9134 Epoch 6/100 439/439 [==============================] - 5s 12ms/step - loss: 0.4968 - accuracy: 0.8454 - precision: 0.8837 - recall: 0.8143 - val_loss: 0.2928 - val_accuracy: 0.9188 - val_precision: 0.9445 - val_recall: 0.8965 Epoch 7/100 439/439 [==============================] - 5s 12ms/step - loss: 0.5996 - accuracy: 0.8170 - precision: 0.8669 - recall: 0.7753 - val_loss: 0.3559 - val_accuracy: 0.9036 - val_precision: 0.9382 - val_recall: 0.8711 Epoch 8/100 439/439 [==============================] - 5s 12ms/step - loss: 0.5959 - accuracy: 0.8130 - precision: 0.8761 - recall: 0.7515 - val_loss: 0.3750 - val_accuracy: 0.9023 - val_precision: 0.9415 - val_recall: 0.8524 Restoring model weights from the end of the best epoch. Epoch 00008: early stopping Test loss: 0.28543806076049805 Test accuracy: 0.9227142930030823 Test precision: 0.942704439163208 Test recall: 0.9072856903076172 Test f1 score: 0.9227142857142857 Test AUC for digit: 0 0.9789563324393538 Test AUC for digit: 1 0.979840545957394 Test AUC for digit: 2 0.9702686971098221 Test AUC for digit: 3 0.927925323986961 Test AUC for digit: 4 0.9539853844616494 Test AUC for digit: 5 0.9262426101687231 Test AUC for digit: 6 0.9711111111111111 Test AUC for digit: 7 0.960725329011793 Test AUC for digit: 8 0.9488095238095239 Test AUC for digit: 9 0.9511993241314747
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
We took our first model and trained it with different learning rates. With a learning rate of 0.05 we received very bad results (accuracy around 92%). The scores are so bad because the optimizer could not find good weights: each update step was too large a "jump", so it kept overshooting better solutions.
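A common mitigation for an overly aggressive fixed learning rate is to decay it during training. Here is a sketch using Keras' ReduceLROnPlateau callback, which we did not use in these experiments (the factor, patience, and floor values are illustrative):

```python
# Shrink the learning rate when validation loss stops improving
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                              patience=2, min_lr=1e-5)
# e.g. predict_model(model_fc, [es, reduce_lr], epochs=100)
```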
model_fc_00001 = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(28, 28, 1)),
    layers.Dense(64, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(.25),
    layers.Dense(10, activation="softmax")
])
model_fc_00001.summary()
predict_model(model_fc_00001, [es], epochs=100, lr=0.00001)
Model: "sequential_15" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_46 (Dense) (None, 28, 28, 32) 64 _________________________________________________________________ dense_47 (Dense) (None, 28, 28, 64) 2112 _________________________________________________________________ flatten_15 (Flatten) (None, 50176) 0 _________________________________________________________________ dense_48 (Dense) (None, 128) 6422656 _________________________________________________________________ dropout_14 (Dropout) (None, 128) 0 _________________________________________________________________ dense_49 (Dense) (None, 10) 1290 ================================================================= Total params: 6,426,122 Trainable params: 6,426,122 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 6s 13ms/step - loss: 1.8177 - accuracy: 0.5851 - precision: 0.7578 - recall: 0.0475 - val_loss: 0.8030 - val_accuracy: 0.8551 - val_precision: 0.9845 - val_recall: 0.5235 Epoch 2/100 439/439 [==============================] - 5s 12ms/step - loss: 0.7526 - accuracy: 0.8255 - precision: 0.9614 - recall: 0.5864 - val_loss: 0.5059 - val_accuracy: 0.8835 - val_precision: 0.9580 - val_recall: 0.7645 Epoch 3/100 439/439 [==============================] - 5s 12ms/step - loss: 0.5323 - accuracy: 0.8581 - precision: 0.9378 - recall: 0.7558 - val_loss: 0.4048 - val_accuracy: 0.8981 - val_precision: 0.9491 - val_recall: 0.8320 Epoch 4/100 439/439 [==============================] - 5s 12ms/step - loss: 0.4437 - accuracy: 0.8764 - precision: 0.9324 - recall: 0.8142 - val_loss: 0.3561 - val_accuracy: 0.9055 - val_precision: 0.9460 - val_recall: 0.8603 Epoch 5/100 439/439 [==============================] - 5s 12ms/step - loss: 0.3928 - accuracy: 0.8891 - precision: 0.9336 - recall: 0.8411 - val_loss: 0.3259 - val_accuracy: 0.9117 - val_precision: 0.9436 - val_recall: 0.8758 Epoch 6/100 439/439 [==============================] - 5s 12ms/step - loss: 0.3637 - accuracy: 0.8953 - precision: 0.9334 - recall: 0.8587 - val_loss: 0.3050 - val_accuracy: 0.9153 - val_precision: 0.9434 - val_recall: 0.8876 Epoch 7/100 439/439 [==============================] - 5s 12ms/step - loss: 0.3479 - accuracy: 0.8986 - precision: 0.9315 - recall: 0.8666 - val_loss: 0.2869 - val_accuracy: 0.9190 - val_precision: 0.9459 - val_recall: 0.8964 Epoch 8/100 439/439 [==============================] - 5s 12ms/step - loss: 0.3248 - accuracy: 0.9051 - precision: 0.9349 - recall: 0.8769 - val_loss: 0.2723 - val_accuracy: 0.9227 - val_precision: 0.9477 - val_recall: 0.9000 Epoch 9/100 439/439 [==============================] - 5s 12ms/step - loss: 0.3000 - accuracy: 0.9150 - precision: 0.9403 - recall: 0.8894 - val_loss: 0.2609 - val_accuracy: 0.9258 - val_precision: 0.9487 - val_recall: 0.9040 Epoch 10/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2921 - accuracy: 0.9155 - precision: 0.9409 - recall: 0.8923 - val_loss: 0.2493 - val_accuracy: 0.9293 - val_precision: 0.9492 - val_recall: 0.9094 Epoch 11/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2765 - accuracy: 0.9198 - precision: 0.9424 - recall: 0.8979 - val_loss: 0.2389 - val_accuracy: 0.9328 - val_precision: 0.9516 - val_recall: 0.9134 Epoch 12/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2636 - accuracy: 0.9249 - precision: 
0.9456 - recall: 0.9054 - val_loss: 0.2288 - val_accuracy: 0.9346 - val_precision: 0.9539 - val_recall: 0.9189 Epoch 13/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2534 - accuracy: 0.9258 - precision: 0.9459 - recall: 0.9079 - val_loss: 0.2207 - val_accuracy: 0.9368 - val_precision: 0.9539 - val_recall: 0.9216 Epoch 14/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2447 - accuracy: 0.9276 - precision: 0.9473 - recall: 0.9091 - val_loss: 0.2116 - val_accuracy: 0.9410 - val_precision: 0.9564 - val_recall: 0.9264 Epoch 15/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2372 - accuracy: 0.9315 - precision: 0.9506 - recall: 0.9155 - val_loss: 0.2028 - val_accuracy: 0.9443 - val_precision: 0.9583 - val_recall: 0.9296 Epoch 16/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2272 - accuracy: 0.9334 - precision: 0.9515 - recall: 0.9177 - val_loss: 0.1975 - val_accuracy: 0.9452 - val_precision: 0.9590 - val_recall: 0.9326 Epoch 17/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2178 - accuracy: 0.9368 - precision: 0.9532 - recall: 0.9223 - val_loss: 0.1886 - val_accuracy: 0.9482 - val_precision: 0.9601 - val_recall: 0.9369 Epoch 18/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2111 - accuracy: 0.9389 - precision: 0.9525 - recall: 0.9257 - val_loss: 0.1827 - val_accuracy: 0.9505 - val_precision: 0.9626 - val_recall: 0.9390 Epoch 19/100 439/439 [==============================] - 5s 12ms/step - loss: 0.2045 - accuracy: 0.9398 - precision: 0.9540 - recall: 0.9263 - val_loss: 0.1758 - val_accuracy: 0.9521 - val_precision: 0.9638 - val_recall: 0.9418 Epoch 20/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1990 - accuracy: 0.9418 - precision: 0.9564 - recall: 0.9299 - val_loss: 0.1696 - val_accuracy: 0.9540 - val_precision: 0.9645 - val_recall: 0.9443 Epoch 21/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1892 - accuracy: 0.9436 - precision: 0.9570 - recall: 0.9323 - val_loss: 0.1645 - val_accuracy: 0.9545 - val_precision: 0.9649 - val_recall: 0.9455 Epoch 22/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1896 - accuracy: 0.9450 - precision: 0.9588 - recall: 0.9329 - val_loss: 0.1605 - val_accuracy: 0.9551 - val_precision: 0.9655 - val_recall: 0.9479 Epoch 23/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1779 - accuracy: 0.9476 - precision: 0.9589 - recall: 0.9375 - val_loss: 0.1546 - val_accuracy: 0.9557 - val_precision: 0.9679 - val_recall: 0.9489 Epoch 24/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1749 - accuracy: 0.9487 - precision: 0.9613 - recall: 0.9386 - val_loss: 0.1504 - val_accuracy: 0.9574 - val_precision: 0.9670 - val_recall: 0.9509 Epoch 25/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1666 - accuracy: 0.9498 - precision: 0.9622 - recall: 0.9410 - val_loss: 0.1468 - val_accuracy: 0.9579 - val_precision: 0.9681 - val_recall: 0.9509 Epoch 26/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1642 - accuracy: 0.9515 - precision: 0.9622 - recall: 0.9420 - val_loss: 0.1422 - val_accuracy: 0.9602 - val_precision: 0.9699 - val_recall: 0.9534 Epoch 27/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1584 - accuracy: 0.9539 - precision: 0.9639 - recall: 0.9446 - val_loss: 0.1384 - val_accuracy: 0.9610 - val_precision: 0.9695 - val_recall: 0.9548 Epoch 28/100 439/439 
[==============================] - 5s 12ms/step - loss: 0.1521 - accuracy: 0.9556 - precision: 0.9661 - recall: 0.9471 - val_loss: 0.1349 - val_accuracy: 0.9608 - val_precision: 0.9707 - val_recall: 0.9545 Epoch 29/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1494 - accuracy: 0.9548 - precision: 0.9648 - recall: 0.9473 - val_loss: 0.1314 - val_accuracy: 0.9633 - val_precision: 0.9719 - val_recall: 0.9567 Epoch 30/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1435 - accuracy: 0.9581 - precision: 0.9669 - recall: 0.9507 - val_loss: 0.1290 - val_accuracy: 0.9629 - val_precision: 0.9714 - val_recall: 0.9569 Epoch 31/100 439/439 [==============================] - 5s 12ms/step - loss: 0.1403 - accuracy: 0.9578 - precision: 0.9669 - recall: 0.9508 - val_loss: 0.1257 - val_accuracy: 0.9646 - val_precision: 0.9718 - val_recall: 0.9586 Epoch 32/100
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
The model with a learning rate of 0.00001 performed pretty well, but it needed 54 epochs to achieve 97.1% accuracy (compared to 6 epochs using the standard learning rate of 0.001). This is because the optimizer "jumped" only a small distance on each step while searching for the best weights, so it needed many iterations to find them.

Basic Multi-layer CNN
model_cnn = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn.summary()
predict_model(model_cnn, [es], epochs=100)
Model: "sequential_5" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ flatten_5 (Flatten) (None, 1600) 0 _________________________________________________________________ dropout_5 (Dropout) (None, 1600) 0 _________________________________________________________________ dense_21 (Dense) (None, 10) 16010 ================================================================= Total params: 34,826 Trainable params: 34,826 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 22s 28ms/step - loss: 0.7757 - accuracy: 0.7586 - precision: 0.8819 - recall: 0.6326 - val_loss: 0.0861 - val_accuracy: 0.9740 - val_precision: 0.9813 - val_recall: 0.9697 Epoch 2/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1093 - accuracy: 0.9654 - precision: 0.9719 - recall: 0.9608 - val_loss: 0.0568 - val_accuracy: 0.9833 - val_precision: 0.9858 - val_recall: 0.9804 Epoch 3/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0831 - accuracy: 0.9750 - precision: 0.9788 - recall: 0.9721 - val_loss: 0.0478 - val_accuracy: 0.9866 - val_precision: 0.9887 - val_recall: 0.9846 Epoch 4/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0692 - accuracy: 0.9781 - precision: 0.9811 - recall: 0.9756 - val_loss: 0.0391 - val_accuracy: 0.9886 - val_precision: 0.9910 - val_recall: 0.9869 Epoch 5/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0525 - accuracy: 0.9844 - precision: 0.9866 - recall: 0.9822 - val_loss: 0.0364 - val_accuracy: 0.9895 - val_precision: 0.9920 - val_recall: 0.9879 Epoch 6/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0548 - accuracy: 0.9820 - precision: 0.9843 - recall: 0.9804 - val_loss: 0.0351 - val_accuracy: 0.9899 - val_precision: 0.9918 - val_recall: 0.9890 Epoch 7/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0452 - accuracy: 0.9856 - precision: 0.9872 - recall: 0.9844 - val_loss: 0.0332 - val_accuracy: 0.9903 - val_precision: 0.9923 - val_recall: 0.9895 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0416 - accuracy: 0.9865 - precision: 0.9884 - recall: 0.9850 - val_loss: 0.0349 - val_accuracy: 0.9898 - val_precision: 0.9910 - val_recall: 0.9892 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0412 - accuracy: 0.9868 - precision: 0.9884 - recall: 0.9854 - val_loss: 0.0303 - val_accuracy: 0.9918 - val_precision: 0.9928 - val_recall: 0.9908 Epoch 10/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0366 - accuracy: 0.9881 - precision: 0.9893 - recall: 0.9870 - val_loss: 0.0307 - val_accuracy: 0.9911 - val_precision: 0.9919 - val_recall: 0.9899 Epoch 11/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0414 - accuracy: 0.9866 - precision: 0.9877 - recall: 0.9853 - val_loss: 0.0335 - val_accuracy: 0.9911 - val_precision: 0.9918 - val_recall: 0.9906 Epoch 
12/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0345 - accuracy: 0.9890 - precision: 0.9897 - recall: 0.9881 - val_loss: 0.0316 - val_accuracy: 0.9911 - val_precision: 0.9922 - val_recall: 0.9903 Epoch 00012: early stopping Test loss: 0.03780661150813103 Test accuracy: 0.9877142906188965 Test precision: 0.988834798336029 Test recall: 0.9868571162223816 Test f1 score: 0.9877142857142858 Test AUC for digit: 0 0.9986987490590519 Test AUC for digit: 1 0.9946251332301316 Test AUC for digit: 2 0.9912251490762208 Test AUC for digit: 3 0.9914630429597434 Test AUC for digit: 4 0.9935349833293058 Test AUC for digit: 5 0.9949048474533845 Test AUC for digit: 6 0.9943650793650793 Test AUC for digit: 7 0.9923824507574553 Test AUC for digit: 8 0.9917460317460318 Test AUC for digit: 9 0.9890579103854441
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Our first convolutional model, with 2 convolutional layers, performed even better than the fully connected neural networks. This model is not overfitted, because the train and validation loss and accuracy stay close to each other. It has only 34,826 parameters to train, so training is pretty fast. On the test set the model achieves 98.7% accuracy, which is a great result.

Different number of convolutional layers
model_cnn_short = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_short.summary()
predict_model(model_cnn_short, [es], epochs=100)
Model: "sequential_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_2 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 13, 13, 32) 0 _________________________________________________________________ flatten_6 (Flatten) (None, 5408) 0 _________________________________________________________________ dropout_6 (Dropout) (None, 5408) 0 _________________________________________________________________ dense_22 (Dense) (None, 10) 54090 ================================================================= Total params: 54,410 Trainable params: 54,410 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 4s 8ms/step - loss: 0.7126 - accuracy: 0.7976 - precision: 0.9174 - recall: 0.6466 - val_loss: 0.1849 - val_accuracy: 0.9453 - val_precision: 0.9596 - val_recall: 0.9362 Epoch 2/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1896 - accuracy: 0.9442 - precision: 0.9575 - recall: 0.9324 - val_loss: 0.1285 - val_accuracy: 0.9629 - val_precision: 0.9724 - val_recall: 0.9567 Epoch 3/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1387 - accuracy: 0.9582 - precision: 0.9662 - recall: 0.9508 - val_loss: 0.0996 - val_accuracy: 0.9719 - val_precision: 0.9783 - val_recall: 0.9678 Epoch 4/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1196 - accuracy: 0.9648 - precision: 0.9723 - recall: 0.9583 - val_loss: 0.0876 - val_accuracy: 0.9737 - val_precision: 0.9791 - val_recall: 0.9713 Epoch 5/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1106 - accuracy: 0.9663 - precision: 0.9729 - recall: 0.9611 - val_loss: 0.0805 - val_accuracy: 0.9765 - val_precision: 0.9801 - val_recall: 0.9739 Epoch 6/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0965 - accuracy: 0.9702 - precision: 0.9750 - recall: 0.9659 - val_loss: 0.0732 - val_accuracy: 0.9788 - val_precision: 0.9829 - val_recall: 0.9759 Epoch 7/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0908 - accuracy: 0.9717 - precision: 0.9763 - recall: 0.9680 - val_loss: 0.0660 - val_accuracy: 0.9805 - val_precision: 0.9827 - val_recall: 0.9771 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0875 - accuracy: 0.9728 - precision: 0.9768 - recall: 0.9696 - val_loss: 0.0633 - val_accuracy: 0.9824 - val_precision: 0.9852 - val_recall: 0.9799 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0787 - accuracy: 0.9759 - precision: 0.9793 - recall: 0.9729 - val_loss: 0.0615 - val_accuracy: 0.9817 - val_precision: 0.9856 - val_recall: 0.9789 Epoch 10/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0759 - accuracy: 0.9759 - precision: 0.9799 - recall: 0.9731 - val_loss: 0.0580 - val_accuracy: 0.9824 - val_precision: 0.9849 - val_recall: 0.9797 Epoch 11/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0730 - accuracy: 0.9771 - precision: 0.9806 - recall: 0.9748 - val_loss: 0.0567 - val_accuracy: 0.9828 - val_precision: 0.9849 - val_recall: 0.9798 Epoch 12/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0632 - accuracy: 0.9803 - precision: 0.9833 - recall: 0.9784 - val_loss: 0.0567 - val_accuracy: 0.9827 - val_precision: 0.9845 - val_recall: 0.9807 
Epoch 13/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0649 - accuracy: 0.9793 - precision: 0.9820 - recall: 0.9770 - val_loss: 0.0559 - val_accuracy: 0.9837 - val_precision: 0.9854 - val_recall: 0.9820 Epoch 14/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0647 - accuracy: 0.9799 - precision: 0.9823 - recall: 0.9780 - val_loss: 0.0534 - val_accuracy: 0.9844 - val_precision: 0.9861 - val_recall: 0.9825 Epoch 15/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0569 - accuracy: 0.9815 - precision: 0.9842 - recall: 0.9795 - val_loss: 0.0537 - val_accuracy: 0.9846 - val_precision: 0.9857 - val_recall: 0.9834 Epoch 16/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0576 - accuracy: 0.9815 - precision: 0.9833 - recall: 0.9797 - val_loss: 0.0508 - val_accuracy: 0.9846 - val_precision: 0.9860 - val_recall: 0.9827 Epoch 17/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0528 - accuracy: 0.9825 - precision: 0.9848 - recall: 0.9807 - val_loss: 0.0503 - val_accuracy: 0.9850 - val_precision: 0.9864 - val_recall: 0.9843 Epoch 18/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0555 - accuracy: 0.9822 - precision: 0.9844 - recall: 0.9808 - val_loss: 0.0502 - val_accuracy: 0.9851 - val_precision: 0.9874 - val_recall: 0.9835 Epoch 19/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0516 - accuracy: 0.9831 - precision: 0.9852 - recall: 0.9814 - val_loss: 0.0498 - val_accuracy: 0.9861 - val_precision: 0.9877 - val_recall: 0.9848 Epoch 20/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0481 - accuracy: 0.9842 - precision: 0.9860 - recall: 0.9827 - val_loss: 0.0483 - val_accuracy: 0.9853 - val_precision: 0.9867 - val_recall: 0.9841 Epoch 21/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0466 - accuracy: 0.9856 - precision: 0.9873 - recall: 0.9842 - val_loss: 0.0496 - val_accuracy: 0.9853 - val_precision: 0.9870 - val_recall: 0.9843 Epoch 22/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0456 - accuracy: 0.9849 - precision: 0.9869 - recall: 0.9834 - val_loss: 0.0469 - val_accuracy: 0.9867 - val_precision: 0.9884 - val_recall: 0.9859 Epoch 23/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0454 - accuracy: 0.9850 - precision: 0.9869 - recall: 0.9837 - val_loss: 0.0479 - val_accuracy: 0.9866 - val_precision: 0.9876 - val_recall: 0.9848 Epoch 24/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0454 - accuracy: 0.9855 - precision: 0.9874 - recall: 0.9840 - val_loss: 0.0476 - val_accuracy: 0.9857 - val_precision: 0.9870 - val_recall: 0.9853 Epoch 25/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0442 - accuracy: 0.9851 - precision: 0.9868 - recall: 0.9840 - val_loss: 0.0483 - val_accuracy: 0.9864 - val_precision: 0.9871 - val_recall: 0.9853 Epoch 00025: early stopping Test loss: 0.06184159591794014 Test accuracy: 0.9821428656578064 Test precision: 0.9840951561927795 Test recall: 0.9811428785324097 Test f1 score: 0.9821428571428571 Test AUC for digit: 0 0.9946403781193142 Test AUC for digit: 1 0.9928970988504665 Test AUC for digit: 2 0.9918050336696427 Test AUC for digit: 3 0.9915450335250312 Test AUC for digit: 4 0.9882236580836722 Test AUC for digit: 5 0.9855790503765007 Test AUC for digit: 6 0.9932539682539682 Test AUC for digit: 7 0.9851409580367276 Test AUC for digit: 8 0.9859523809523809 Test AUC for digit: 9 0.9908959987735204
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
The next model has only one convolutional layer, yet more parameters (54,410), because with fewer pooling layers the flattened feature map feeding the dense layer is larger. The results are satisfactory, but not as good as the previous model's (test accuracy of 98.2%).
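As a sanity check, the parameter counts in the summary above can be reproduced by hand. The snippet below is our own minimal illustration (not part of the original notebook) of the standard Conv2D and Dense parameter formulas:

# Conv2D parameters: (kernel_h * kernel_w * in_channels + 1) * filters
conv_params = (3 * 3 * 1 + 1) * 32        # 320, matches conv2d_2
# A 2x2 max pooling halves the 26x26 feature maps to 13x13
flattened = 13 * 13 * 32                  # 5408, matches flatten_6
# Dense parameters: (inputs + 1) * units
dense_params = (flattened + 1) * 10       # 54090, matches dense_22
print(conv_params + dense_params)         # 54410 trainable parameters in total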
model_cnn_long = keras.Sequential([
    layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.MaxPooling2D((2,2), 1),
    layers.Conv2D(64, (3,3), activation="relu"),
    layers.MaxPooling2D((2,2), 1),
    layers.Conv2D(128, (3,3), activation="relu"),
    layers.MaxPooling2D((2,2), 1),
    layers.Conv2D(512, (3,3), activation="relu"),
    layers.MaxPooling2D((2,2), 1),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_long.summary()
predict_model(model_cnn_long, [es], epochs=100)
Model: "sequential_28" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_32 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_17 (MaxPooling (None, 25, 25, 32) 0 _________________________________________________________________ conv2d_33 (Conv2D) (None, 23, 23, 64) 18496 _________________________________________________________________ max_pooling2d_18 (MaxPooling (None, 22, 22, 64) 0 _________________________________________________________________ conv2d_34 (Conv2D) (None, 20, 20, 128) 73856 _________________________________________________________________ max_pooling2d_19 (MaxPooling (None, 19, 19, 128) 0 _________________________________________________________________ conv2d_35 (Conv2D) (None, 17, 17, 512) 590336 _________________________________________________________________ max_pooling2d_20 (MaxPooling (None, 16, 16, 512) 0 _________________________________________________________________ flatten_28 (Flatten) (None, 131072) 0 _________________________________________________________________ dense_85 (Dense) (None, 128) 16777344 _________________________________________________________________ dropout_22 (Dropout) (None, 128) 0 _________________________________________________________________ dense_86 (Dense) (None, 10) 1290 ================================================================= Total params: 17,461,642 Trainable params: 17,461,642 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 19s 42ms/step - loss: 0.5369 - accuracy: 0.8446 - precision: 0.9042 - recall: 0.7958 - val_loss: 0.0644 - val_accuracy: 0.9812 - val_precision: 0.9828 - val_recall: 0.9805 Epoch 2/100 439/439 [==============================] - 18s 40ms/step - loss: 0.0900 - accuracy: 0.9740 - precision: 0.9778 - recall: 0.9702 - val_loss: 0.0481 - val_accuracy: 0.9846 - val_precision: 0.9861 - val_recall: 0.9833 Epoch 3/100 439/439 [==============================] - 18s 40ms/step - loss: 0.0727 - accuracy: 0.9790 - precision: 0.9818 - recall: 0.9766 - val_loss: 0.0360 - val_accuracy: 0.9880 - val_precision: 0.9900 - val_recall: 0.9864 Epoch 4/100 439/439 [==============================] - 18s 41ms/step - loss: 0.0553 - accuracy: 0.9824 - precision: 0.9848 - recall: 0.9804 - val_loss: 0.0303 - val_accuracy: 0.9915 - val_precision: 0.9922 - val_recall: 0.9909 Epoch 5/100 439/439 [==============================] - 18s 40ms/step - loss: 0.0370 - accuracy: 0.9888 - precision: 0.9900 - recall: 0.9874 - val_loss: 0.0300 - val_accuracy: 0.9925 - val_precision: 0.9935 - val_recall: 0.9922 Epoch 6/100 439/439 [==============================] - 18s 40ms/step - loss: 0.0322 - accuracy: 0.9895 - precision: 0.9908 - recall: 0.9885 - val_loss: 0.0272 - val_accuracy: 0.9941 - val_precision: 0.9942 - val_recall: 0.9935 Epoch 7/100 439/439 [==============================] - 18s 41ms/step - loss: 0.0295 - accuracy: 0.9911 - precision: 0.9922 - recall: 0.9904 - val_loss: 0.0321 - val_accuracy: 0.9916 - val_precision: 0.9923 - val_recall: 0.9915 Epoch 8/100 439/439 [==============================] - 18s 41ms/step - loss: 0.0277 - accuracy: 0.9911 - precision: 0.9924 - recall: 0.9905 - val_loss: 0.0221 - val_accuracy: 0.9932 - val_precision: 0.9941 - val_recall: 0.9931 Epoch 9/100 439/439 [==============================] - 18s 41ms/step - loss: 0.0235 - 
accuracy: 0.9925 - precision: 0.9931 - recall: 0.9920 - val_loss: 0.0280 - val_accuracy: 0.9924 - val_precision: 0.9925 - val_recall: 0.9924 Restoring model weights from the end of the best epoch. Epoch 00009: early stopping Test loss: 0.03389483317732811 Test accuracy: 0.9918571710586548 Test precision: 0.992139458656311 Test recall: 0.9917142987251282 Test f1 score: 0.9918571428571429 Test AUC for digit: 0 0.9990376444107582 Test AUC for digit: 1 0.9981951123812957 Test AUC for digit: 2 0.9929911952983329 Test AUC for digit: 3 0.9945313397184135 Test AUC for digit: 4 0.9947926711881807 Test AUC for digit: 5 0.9956330341254677 Test AUC for digit: 6 0.9968966508108128 Test AUC for digit: 7 0.999361124421019 Test AUC for digit: 8 0.9925846524567535 Test AUC for digit: 9 0.990134076231021
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Next we created a neural network with 4 convolutional layers and about 17 million parameters. Most of them sit in the first dense layer: the flattened 16×16×512 = 131,072 features feeding 128 units alone account for 131,072 × 128 + 128 = 16,777,344 weights. The model was not overfitted: accuracy was around 99.2% on the training, validation and test sets. Training took much longer (19 s per epoch compared to 3 s per epoch for the basic CNN we implemented). This is the best model that we have created for this dataset.

Different number of filters per layer
model_cnn_min = keras.Sequential([
    layers.Conv2D(4, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(16, (3,3), activation="relu"),
    layers.MaxPooling2D(2,2),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_min.summary()
predict_model(model_cnn_min, [es], epochs=100)
Model: "sequential_9" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_9 (Conv2D) (None, 26, 26, 4) 40 _________________________________________________________________ max_pooling2d_7 (MaxPooling2 (None, 13, 13, 4) 0 _________________________________________________________________ conv2d_10 (Conv2D) (None, 11, 11, 16) 592 _________________________________________________________________ max_pooling2d_8 (MaxPooling2 (None, 5, 5, 16) 0 _________________________________________________________________ flatten_9 (Flatten) (None, 400) 0 _________________________________________________________________ dropout_8 (Dropout) (None, 400) 0 _________________________________________________________________ dense_28 (Dense) (None, 10) 4010 ================================================================= Total params: 4,642 Trainable params: 4,642 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 20s 29ms/step - loss: 1.2412 - accuracy: 0.5863 - precision: 0.8270 - recall: 0.3904 - val_loss: 0.1895 - val_accuracy: 0.9508 - val_precision: 0.9644 - val_recall: 0.9309 Epoch 2/100 439/439 [==============================] - 3s 6ms/step - loss: 0.2660 - accuracy: 0.9202 - precision: 0.9399 - recall: 0.9013 - val_loss: 0.1206 - val_accuracy: 0.9649 - val_precision: 0.9742 - val_recall: 0.9571 Epoch 3/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1972 - accuracy: 0.9398 - precision: 0.9519 - recall: 0.9284 - val_loss: 0.0964 - val_accuracy: 0.9716 - val_precision: 0.9768 - val_recall: 0.9651 Epoch 4/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1631 - accuracy: 0.9512 - precision: 0.9599 - recall: 0.9423 - val_loss: 0.0831 - val_accuracy: 0.9760 - val_precision: 0.9807 - val_recall: 0.9701 Epoch 5/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1448 - accuracy: 0.9565 - precision: 0.9640 - recall: 0.9497 - val_loss: 0.0728 - val_accuracy: 0.9786 - val_precision: 0.9820 - val_recall: 0.9747 Epoch 6/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1358 - accuracy: 0.9586 - precision: 0.9660 - recall: 0.9528 - val_loss: 0.0668 - val_accuracy: 0.9810 - val_precision: 0.9846 - val_recall: 0.9784 Epoch 7/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1321 - accuracy: 0.9604 - precision: 0.9674 - recall: 0.9548 - val_loss: 0.0647 - val_accuracy: 0.9827 - val_precision: 0.9860 - val_recall: 0.9788 Epoch 8/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1219 - accuracy: 0.9629 - precision: 0.9683 - recall: 0.9573 - val_loss: 0.0597 - val_accuracy: 0.9824 - val_precision: 0.9862 - val_recall: 0.9801 Epoch 9/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1195 - accuracy: 0.9628 - precision: 0.9687 - recall: 0.9579 - val_loss: 0.0588 - val_accuracy: 0.9821 - val_precision: 0.9861 - val_recall: 0.9805 Epoch 10/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1119 - accuracy: 0.9654 - precision: 0.9695 - recall: 0.9610 - val_loss: 0.0542 - val_accuracy: 0.9841 - val_precision: 0.9865 - val_recall: 0.9818 Epoch 11/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1154 - accuracy: 0.9644 - precision: 0.9700 - recall: 0.9598 - val_loss: 0.0537 - val_accuracy: 0.9854 - val_precision: 0.9875 - val_recall: 0.9827 Epoch 12/100 
439/439 [==============================] - 3s 6ms/step - loss: 0.1106 - accuracy: 0.9667 - precision: 0.9717 - recall: 0.9622 - val_loss: 0.0522 - val_accuracy: 0.9850 - val_precision: 0.9875 - val_recall: 0.9827 Epoch 13/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1048 - accuracy: 0.9677 - precision: 0.9727 - recall: 0.9641 - val_loss: 0.0522 - val_accuracy: 0.9850 - val_precision: 0.9868 - val_recall: 0.9830 Epoch 14/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1069 - accuracy: 0.9672 - precision: 0.9715 - recall: 0.9638 - val_loss: 0.0490 - val_accuracy: 0.9857 - val_precision: 0.9877 - val_recall: 0.9843 Epoch 15/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1051 - accuracy: 0.9676 - precision: 0.9712 - recall: 0.9637 - val_loss: 0.0488 - val_accuracy: 0.9861 - val_precision: 0.9875 - val_recall: 0.9835 Epoch 16/100 439/439 [==============================] - 3s 6ms/step - loss: 0.1029 - accuracy: 0.9683 - precision: 0.9719 - recall: 0.9650 - val_loss: 0.0472 - val_accuracy: 0.9866 - val_precision: 0.9884 - val_recall: 0.9851 Epoch 17/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0990 - accuracy: 0.9694 - precision: 0.9730 - recall: 0.9657 - val_loss: 0.0461 - val_accuracy: 0.9863 - val_precision: 0.9883 - val_recall: 0.9847 Epoch 18/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0960 - accuracy: 0.9705 - precision: 0.9743 - recall: 0.9670 - val_loss: 0.0482 - val_accuracy: 0.9864 - val_precision: 0.9881 - val_recall: 0.9838 Epoch 19/100 439/439 [==============================] - 3s 6ms/step - loss: 0.0978 - accuracy: 0.9701 - precision: 0.9744 - recall: 0.9665 - val_loss: 0.0482 - val_accuracy: 0.9859 - val_precision: 0.9877 - val_recall: 0.9840 Epoch 00019: early stopping Test loss: 0.06978869438171387 Test accuracy: 0.9787142872810364 Test precision: 0.9813513159751892 Test recall: 0.9772857427597046 Test f1 score: 0.9787142857142858 Test AUC for digit: 0 0.9965322040694353 Test AUC for digit: 1 0.9950131545729289 Test AUC for digit: 2 0.9902744281128235 Test AUC for digit: 3 0.9815570919354413 Test AUC for digit: 4 0.9918910464386338 Test AUC for digit: 5 0.9894167367175205 Test AUC for digit: 6 0.9894444444444445 Test AUC for digit: 7 0.9815003368863819 Test AUC for digit: 8 0.9804761904761905 Test AUC for digit: 9 0.9860031281752693
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Next we decided to check how the number of filters affects the model's performance. Reducing the number of filters in the convolutional layers (down to only 4,642 parameters in total) made the model worse than the basic one: test accuracy fell to about 97.9%, because the model was too simple to capture the complexity of the data. This model is underfitted.
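To compare model capacities programmatically rather than reading the summaries, Keras models expose count_params(); a quick sketch of ours, assuming the models defined above are still in scope:

# count_params() returns the total number of weights of a built model
for name, m in [("single-conv CNN", model_cnn_short), ("minimal CNN", model_cnn_min)]:
    print(name, m.count_params())
# Expected: 54410 for model_cnn_short and 4642 for model_cnn_min,
# matching the summaries above.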
model_cnn_max = keras.Sequential([
    layers.Conv2D(128, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(512, (3,3), activation="relu"),
    layers.MaxPooling2D(2,2),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_max.summary()
predict_model(model_cnn_max, [es], epochs=100)
Model: "sequential_10" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_11 (Conv2D) (None, 26, 26, 128) 1280 _________________________________________________________________ max_pooling2d_9 (MaxPooling2 (None, 13, 13, 128) 0 _________________________________________________________________ conv2d_12 (Conv2D) (None, 11, 11, 512) 590336 _________________________________________________________________ max_pooling2d_10 (MaxPooling (None, 5, 5, 512) 0 _________________________________________________________________ flatten_10 (Flatten) (None, 12800) 0 _________________________________________________________________ dropout_9 (Dropout) (None, 12800) 0 _________________________________________________________________ dense_29 (Dense) (None, 10) 128010 ================================================================= Total params: 719,626 Trainable params: 719,626 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 27s 44ms/step - loss: 0.3990 - accuracy: 0.8824 - precision: 0.9333 - recall: 0.8202 - val_loss: 0.0549 - val_accuracy: 0.9824 - val_precision: 0.9854 - val_recall: 0.9804 Epoch 2/100 439/439 [==============================] - 9s 19ms/step - loss: 0.0521 - accuracy: 0.9837 - precision: 0.9856 - recall: 0.9818 - val_loss: 0.0435 - val_accuracy: 0.9857 - val_precision: 0.9870 - val_recall: 0.9844 Epoch 3/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0429 - accuracy: 0.9862 - precision: 0.9876 - recall: 0.9849 - val_loss: 0.0343 - val_accuracy: 0.9896 - val_precision: 0.9902 - val_recall: 0.9885 Epoch 4/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0320 - accuracy: 0.9897 - precision: 0.9904 - recall: 0.9890 - val_loss: 0.0361 - val_accuracy: 0.9899 - val_precision: 0.9909 - val_recall: 0.9895 Epoch 5/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0249 - accuracy: 0.9921 - precision: 0.9929 - recall: 0.9916 - val_loss: 0.0323 - val_accuracy: 0.9900 - val_precision: 0.9913 - val_recall: 0.9890 Epoch 6/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0233 - accuracy: 0.9925 - precision: 0.9928 - recall: 0.9920 - val_loss: 0.0347 - val_accuracy: 0.9899 - val_precision: 0.9903 - val_recall: 0.9896 Epoch 7/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0185 - accuracy: 0.9941 - precision: 0.9945 - recall: 0.9938 - val_loss: 0.0343 - val_accuracy: 0.9915 - val_precision: 0.9921 - val_recall: 0.9911 Epoch 8/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0156 - accuracy: 0.9946 - precision: 0.9950 - recall: 0.9943 - val_loss: 0.0346 - val_accuracy: 0.9906 - val_precision: 0.9908 - val_recall: 0.9903 Epoch 9/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0137 - accuracy: 0.9956 - precision: 0.9960 - recall: 0.9954 - val_loss: 0.0362 - val_accuracy: 0.9899 - val_precision: 0.9906 - val_recall: 0.9899 Epoch 10/100 439/439 [==============================] - 8s 19ms/step - loss: 0.0112 - accuracy: 0.9961 - precision: 0.9966 - recall: 0.9959 - val_loss: 0.0365 - val_accuracy: 0.9903 - val_precision: 0.9909 - val_recall: 0.9902 Epoch 00010: early stopping Test loss: 0.04324812442064285 Test accuracy: 0.9902856945991516 Test precision: 0.9904244542121887 Test recall: 0.9900000095367432 Test f1 score: 0.9902857142857143 Test AUC 
for digit: 0 0.9962574850299402 Test AUC for digit: 1 0.9940907921236538 Test AUC for digit: 2 0.9984028163786178 Test AUC for digit: 3 0.993960028357141 Test AUC for digit: 4 0.9914325054702 Test AUC for digit: 5 0.9978038687780799 Test AUC for digit: 6 0.9957936507936508 Test AUC for digit: 7 0.9913140793939336 Test AUC for digit: 8 0.9926190476190476 Test AUC for digit: 9 0.9948151881227197
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Next we increased the number of filters. This raised the number of parameters to over 700 thousand, but the model did not perform better than the basic one: test accuracy was 99.0%, slightly below the basic model's. This means such a high number of filters is unnecessary. Judging by the training loss falling while the validation loss stalls and rises, this model also seems to be overfitted.

Different size and type of pooling layers
model_cnn_pool5 = keras.Sequential([
    layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.MaxPooling2D(5,3),
    layers.Conv2D(64, (3,3), activation="relu"),
    layers.MaxPooling2D(5,3),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_pool5.summary()
predict_model(model_cnn_pool5, [es], epochs=100)
Model: "sequential_17" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_13 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_11 (MaxPooling (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_14 (Conv2D) (None, 6, 6, 64) 18496 _________________________________________________________________ max_pooling2d_12 (MaxPooling (None, 1, 1, 64) 0 _________________________________________________________________ flatten_17 (Flatten) (None, 64) 0 _________________________________________________________________ dropout_16 (Dropout) (None, 64) 0 _________________________________________________________________ dense_54 (Dense) (None, 10) 650 ================================================================= Total params: 19,466 Trainable params: 19,466 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 17s 25ms/step - loss: 1.6386 - accuracy: 0.4443 - precision: 0.7621 - recall: 0.1895 - val_loss: 0.3402 - val_accuracy: 0.9205 - val_precision: 0.9659 - val_recall: 0.8592 Epoch 2/100 439/439 [==============================] - 3s 8ms/step - loss: 0.5372 - accuracy: 0.8335 - precision: 0.8979 - recall: 0.7583 - val_loss: 0.2031 - val_accuracy: 0.9482 - val_precision: 0.9721 - val_recall: 0.9242 Epoch 3/100 439/439 [==============================] - 3s 8ms/step - loss: 0.3899 - accuracy: 0.8815 - precision: 0.9224 - recall: 0.8415 - val_loss: 0.1599 - val_accuracy: 0.9600 - val_precision: 0.9747 - val_recall: 0.9439 Epoch 4/100 439/439 [==============================] - 3s 7ms/step - loss: 0.3313 - accuracy: 0.8988 - precision: 0.9294 - recall: 0.8673 - val_loss: 0.1329 - val_accuracy: 0.9629 - val_precision: 0.9762 - val_recall: 0.9489 Epoch 5/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2954 - accuracy: 0.9092 - precision: 0.9359 - recall: 0.8834 - val_loss: 0.1220 - val_accuracy: 0.9681 - val_precision: 0.9795 - val_recall: 0.9576 Epoch 6/100 439/439 [==============================] - 3s 8ms/step - loss: 0.2813 - accuracy: 0.9143 - precision: 0.9392 - recall: 0.8897 - val_loss: 0.1032 - val_accuracy: 0.9701 - val_precision: 0.9799 - val_recall: 0.9633 Epoch 7/100 439/439 [==============================] - 3s 8ms/step - loss: 0.2496 - accuracy: 0.9239 - precision: 0.9440 - recall: 0.9045 - val_loss: 0.1051 - val_accuracy: 0.9704 - val_precision: 0.9797 - val_recall: 0.9593 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2431 - accuracy: 0.9248 - precision: 0.9447 - recall: 0.9058 - val_loss: 0.0936 - val_accuracy: 0.9706 - val_precision: 0.9798 - val_recall: 0.9641 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2384 - accuracy: 0.9260 - precision: 0.9459 - recall: 0.9090 - val_loss: 0.0853 - val_accuracy: 0.9766 - val_precision: 0.9830 - val_recall: 0.9696 Epoch 10/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2130 - accuracy: 0.9333 - precision: 0.9492 - recall: 0.9185 - val_loss: 0.0826 - val_accuracy: 0.9762 - val_precision: 0.9835 - val_recall: 0.9711 Epoch 11/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2164 - accuracy: 0.9316 - precision: 0.9492 - recall: 0.9175 - val_loss: 0.0828 - val_accuracy: 0.9746 - val_precision: 0.9807 - val_recall: 0.9701 Epoch 
12/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2055 - accuracy: 0.9370 - precision: 0.9515 - recall: 0.9241 - val_loss: 0.0853 - val_accuracy: 0.9730 - val_precision: 0.9796 - val_recall: 0.9698 Epoch 00012: early stopping Test loss: 0.09769929945468903 Test accuracy: 0.9715714454650879 Test precision: 0.9770860075950623 Test recall: 0.9685714244842529 Test f1 score: 0.9715714285714285 Test AUC for digit: 0 0.9948361331663899 Test AUC for digit: 1 0.9941565446942954 Test AUC for digit: 2 0.9841899519327072 Test AUC for digit: 3 0.9719724776720643 Test AUC for digit: 4 0.9860265364292893 Test AUC for digit: 5 0.9800909396406366 Test AUC for digit: 6 0.9928571428571429 Test AUC for digit: 7 0.9823115528040143 Test AUC for digit: 8 0.9796031746031746 Test AUC for digit: 9 0.9761439939197929
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Next, we checked a different size of pooling layers: MaxPooling layers with pool size (5,5) and stride 3. This means we take a 5×5 window of values, keep only its maximum as one output value, and then slide the window 3 positions to the right (or down). As we can see, the accuracy is worse than the basic model's, because we lose too much spatial information in the pooling layers. The plots also show that this model is underfitted.
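As a rough check (our own sketch, not from the original notebook): for pooling without padding, the spatial output size is floor((input − pool) / stride) + 1, which reproduces the shapes in the summary above:

def pool_output_size(n, pool, stride):
    # 'valid' (no padding) pooling output size along one spatial dimension
    return (n - pool) // stride + 1

print(pool_output_size(26, 5, 3))  # 8 -> (None, 8, 8, 32) after the first pooling
print(pool_output_size(6, 5, 3))   # 1 -> (None, 1, 1, 64) after the second pooling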
model_cnn_avg = keras.Sequential([
    layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.AveragePooling2D(3,3),
    layers.Conv2D(64, (3,3), activation="relu"),
    layers.AveragePooling2D(3,3),
    layers.Flatten(),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_avg.summary()
predict_model(model_cnn_avg, [es], epochs=100)
Model: "sequential_18" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_15 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ average_pooling2d_2 (Average (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_16 (Conv2D) (None, 6, 6, 64) 18496 _________________________________________________________________ average_pooling2d_3 (Average (None, 2, 2, 64) 0 _________________________________________________________________ flatten_18 (Flatten) (None, 256) 0 _________________________________________________________________ dropout_17 (Dropout) (None, 256) 0 _________________________________________________________________ dense_55 (Dense) (None, 10) 2570 ================================================================= Total params: 21,386 Trainable params: 21,386 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 4s 8ms/step - loss: 1.4358 - accuracy: 0.5197 - precision: 0.7783 - recall: 0.3063 - val_loss: 0.2705 - val_accuracy: 0.9248 - val_precision: 0.9531 - val_recall: 0.8922 Epoch 2/100 439/439 [==============================] - 3s 8ms/step - loss: 0.3783 - accuracy: 0.8837 - precision: 0.9196 - recall: 0.8476 - val_loss: 0.1960 - val_accuracy: 0.9437 - val_precision: 0.9590 - val_recall: 0.9251 Epoch 3/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2821 - accuracy: 0.9156 - precision: 0.9367 - recall: 0.8952 - val_loss: 0.1518 - val_accuracy: 0.9548 - val_precision: 0.9683 - val_recall: 0.9423 Epoch 4/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2438 - accuracy: 0.9272 - precision: 0.9439 - recall: 0.9095 - val_loss: 0.1290 - val_accuracy: 0.9618 - val_precision: 0.9702 - val_recall: 0.9535 Epoch 5/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2236 - accuracy: 0.9325 - precision: 0.9481 - recall: 0.9185 - val_loss: 0.1161 - val_accuracy: 0.9654 - val_precision: 0.9729 - val_recall: 0.9576 Epoch 6/100 439/439 [==============================] - 3s 7ms/step - loss: 0.2054 - accuracy: 0.9364 - precision: 0.9496 - recall: 0.9246 - val_loss: 0.1074 - val_accuracy: 0.9674 - val_precision: 0.9733 - val_recall: 0.9622 Epoch 7/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1861 - accuracy: 0.9435 - precision: 0.9551 - recall: 0.9341 - val_loss: 0.1005 - val_accuracy: 0.9729 - val_precision: 0.9769 - val_recall: 0.9655 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1737 - accuracy: 0.9474 - precision: 0.9569 - recall: 0.9373 - val_loss: 0.0930 - val_accuracy: 0.9732 - val_precision: 0.9797 - val_recall: 0.9664 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1644 - accuracy: 0.9495 - precision: 0.9593 - recall: 0.9412 - val_loss: 0.0846 - val_accuracy: 0.9769 - val_precision: 0.9813 - val_recall: 0.9713 Epoch 10/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1554 - accuracy: 0.9541 - precision: 0.9624 - recall: 0.9459 - val_loss: 0.0813 - val_accuracy: 0.9769 - val_precision: 0.9796 - val_recall: 0.9720 Epoch 11/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1508 - accuracy: 0.9546 - precision: 0.9632 - recall: 0.9462 - val_loss: 0.0804 - val_accuracy: 0.9788 - val_precision: 0.9818 - val_recall: 0.9733 Epoch 
12/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1422 - accuracy: 0.9573 - precision: 0.9656 - recall: 0.9500 - val_loss: 0.0769 - val_accuracy: 0.9784 - val_precision: 0.9820 - val_recall: 0.9747 Epoch 13/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1312 - accuracy: 0.9601 - precision: 0.9681 - recall: 0.9534 - val_loss: 0.0703 - val_accuracy: 0.9805 - val_precision: 0.9837 - val_recall: 0.9772 Epoch 14/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1299 - accuracy: 0.9603 - precision: 0.9674 - recall: 0.9546 - val_loss: 0.0670 - val_accuracy: 0.9812 - val_precision: 0.9850 - val_recall: 0.9778 Epoch 15/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1196 - accuracy: 0.9636 - precision: 0.9695 - recall: 0.9576 - val_loss: 0.0651 - val_accuracy: 0.9814 - val_precision: 0.9845 - val_recall: 0.9785 Epoch 16/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1184 - accuracy: 0.9636 - precision: 0.9692 - recall: 0.9584 - val_loss: 0.0651 - val_accuracy: 0.9821 - val_precision: 0.9855 - val_recall: 0.9779 Epoch 17/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1094 - accuracy: 0.9654 - precision: 0.9711 - recall: 0.9599 - val_loss: 0.0589 - val_accuracy: 0.9834 - val_precision: 0.9855 - val_recall: 0.9808 Epoch 18/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1074 - accuracy: 0.9668 - precision: 0.9724 - recall: 0.9627 - val_loss: 0.0578 - val_accuracy: 0.9837 - val_precision: 0.9866 - val_recall: 0.9811 Epoch 19/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1010 - accuracy: 0.9683 - precision: 0.9735 - recall: 0.9638 - val_loss: 0.0559 - val_accuracy: 0.9843 - val_precision: 0.9865 - val_recall: 0.9817 Epoch 20/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0975 - accuracy: 0.9704 - precision: 0.9751 - recall: 0.9660 - val_loss: 0.0564 - val_accuracy: 0.9833 - val_precision: 0.9867 - val_recall: 0.9817 Epoch 21/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0961 - accuracy: 0.9713 - precision: 0.9754 - recall: 0.9670 - val_loss: 0.0562 - val_accuracy: 0.9846 - val_precision: 0.9865 - val_recall: 0.9821 Epoch 22/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0930 - accuracy: 0.9708 - precision: 0.9752 - recall: 0.9669 - val_loss: 0.0506 - val_accuracy: 0.9856 - val_precision: 0.9887 - val_recall: 0.9843 Epoch 23/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0942 - accuracy: 0.9710 - precision: 0.9755 - recall: 0.9667 - val_loss: 0.0512 - val_accuracy: 0.9843 - val_precision: 0.9869 - val_recall: 0.9821 Epoch 24/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0892 - accuracy: 0.9731 - precision: 0.9770 - recall: 0.9693 - val_loss: 0.0492 - val_accuracy: 0.9863 - val_precision: 0.9884 - val_recall: 0.9846 Epoch 25/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0863 - accuracy: 0.9739 - precision: 0.9774 - recall: 0.9702 - val_loss: 0.0492 - val_accuracy: 0.9861 - val_precision: 0.9881 - val_recall: 0.9844 Epoch 26/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0864 - accuracy: 0.9734 - precision: 0.9771 - recall: 0.9697 - val_loss: 0.0491 - val_accuracy: 0.9867 - val_precision: 0.9886 - val_recall: 0.9850 Epoch 27/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0884 - accuracy: 0.9740 - precision: 0.9777 - recall: 0.9707 - val_loss: 0.0478 - 
val_accuracy: 0.9857 - val_precision: 0.9877 - val_recall: 0.9835 Epoch 28/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0799 - accuracy: 0.9758 - precision: 0.9791 - recall: 0.9725 - val_loss: 0.0442 - val_accuracy: 0.9876 - val_precision: 0.9894 - val_recall: 0.9863 Epoch 29/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0820 - accuracy: 0.9755 - precision: 0.9789 - recall: 0.9724 - val_loss: 0.0449 - val_accuracy: 0.9867 - val_precision: 0.9891 - val_recall: 0.9856 Epoch 30/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0779 - accuracy: 0.9748 - precision: 0.9784 - recall: 0.9718 - val_loss: 0.0441 - val_accuracy: 0.9867 - val_precision: 0.9887 - val_recall: 0.9848 Epoch 31/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0730 - accuracy: 0.9779 - precision: 0.9811 - recall: 0.9751 - val_loss: 0.0446 - val_accuracy: 0.9859 - val_precision: 0.9877 - val_recall: 0.9844
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
Afterwards, we changed the MaxPooling layers to AveragePooling layers. The difference between the two is that an AveragePooling layer sums the values in the window and divides by their count, instead of keeping the maximum. The results are worse than the basic model's: with bright digits on a black background, MaxPooling works better because it preserves the brightest grey-scale value in each window, whereas averaging dilutes it (e.g. for the patch [[0, 9], [9, 0]], max pooling keeps 9 while average pooling yields 4.5).

Different number of fully connected layers
model_cnn_fc = keras.Sequential([
    layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64, (3,3), activation="relu"),
    layers.MaxPooling2D(2,2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dropout(.5),
    layers.Dense(10, activation="softmax")
])
model_cnn_fc.summary()
predict_model(model_cnn_fc, [es], epochs=100)
Model: "sequential_19" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_17 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_13 (MaxPooling (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_18 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_14 (MaxPooling (None, 5, 5, 64) 0 _________________________________________________________________ flatten_19 (Flatten) (None, 1600) 0 _________________________________________________________________ dense_56 (Dense) (None, 128) 204928 _________________________________________________________________ dense_57 (Dense) (None, 32) 4128 _________________________________________________________________ dropout_18 (Dropout) (None, 32) 0 _________________________________________________________________ dense_58 (Dense) (None, 10) 330 ================================================================= Total params: 228,202 Trainable params: 228,202 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 5s 9ms/step - loss: 0.9434 - accuracy: 0.6736 - precision: 0.8816 - recall: 0.5434 - val_loss: 0.0790 - val_accuracy: 0.9759 - val_precision: 0.9819 - val_recall: 0.9717 Epoch 2/100 439/439 [==============================] - 4s 8ms/step - loss: 0.2156 - accuracy: 0.9378 - precision: 0.9588 - recall: 0.9172 - val_loss: 0.0600 - val_accuracy: 0.9835 - val_precision: 0.9864 - val_recall: 0.9815 Epoch 3/100 439/439 [==============================] - 3s 8ms/step - loss: 0.1334 - accuracy: 0.9598 - precision: 0.9733 - recall: 0.9482 - val_loss: 0.0479 - val_accuracy: 0.9863 - val_precision: 0.9893 - val_recall: 0.9843 Epoch 4/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0965 - accuracy: 0.9712 - precision: 0.9809 - recall: 0.9635 - val_loss: 0.0405 - val_accuracy: 0.9895 - val_precision: 0.9916 - val_recall: 0.9879 Epoch 5/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0784 - accuracy: 0.9771 - precision: 0.9837 - recall: 0.9699 - val_loss: 0.0548 - val_accuracy: 0.9866 - val_precision: 0.9877 - val_recall: 0.9853 Epoch 6/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0962 - accuracy: 0.9722 - precision: 0.9800 - recall: 0.9644 - val_loss: 0.0438 - val_accuracy: 0.9895 - val_precision: 0.9899 - val_recall: 0.9889 Epoch 7/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0635 - accuracy: 0.9800 - precision: 0.9857 - recall: 0.9746 - val_loss: 0.0368 - val_accuracy: 0.9903 - val_precision: 0.9909 - val_recall: 0.9895 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0566 - accuracy: 0.9808 - precision: 0.9863 - recall: 0.9768 - val_loss: 0.0331 - val_accuracy: 0.9905 - val_precision: 0.9918 - val_recall: 0.9893 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0508 - accuracy: 0.9838 - precision: 0.9878 - recall: 0.9795 - val_loss: 0.0382 - val_accuracy: 0.9916 - val_precision: 0.9922 - val_recall: 0.9909 Epoch 10/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0570 - accuracy: 0.9811 - precision: 0.9860 - recall: 0.9768 - val_loss: 0.0343 - val_accuracy: 0.9921 - val_precision: 0.9931 - val_recall: 0.9915 Epoch 11/100 439/439 
[==============================] - 3s 8ms/step - loss: 0.0483 - accuracy: 0.9835 - precision: 0.9882 - recall: 0.9795 - val_loss: 0.0376 - val_accuracy: 0.9925 - val_precision: 0.9932 - val_recall: 0.9921 Epoch 12/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0452 - accuracy: 0.9859 - precision: 0.9890 - recall: 0.9819 - val_loss: 0.0450 - val_accuracy: 0.9902 - val_precision: 0.9910 - val_recall: 0.9896 Epoch 13/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0371 - accuracy: 0.9883 - precision: 0.9905 - recall: 0.9848 - val_loss: 0.0395 - val_accuracy: 0.9918 - val_precision: 0.9922 - val_recall: 0.9909 Epoch 14/100 439/439 [==============================] - 3s 8ms/step - loss: 0.0443 - accuracy: 0.9852 - precision: 0.9887 - recall: 0.9817 - val_loss: 0.0439 - val_accuracy: 0.9913 - val_precision: 0.9918 - val_recall: 0.9908 Epoch 00014: early stopping Test loss: 0.06664026528596878 Test accuracy: 0.9888571500778198 Test precision: 0.9892780780792236 Test recall: 0.9885714054107666 Test f1 score: 0.9888571428571429 Test AUC for digit: 0 0.9990146050287297 Test AUC for digit: 1 0.9951085363883655 Test AUC for digit: 2 0.9946951426069947 Test AUC for digit: 3 0.9941206967376037 Test AUC for digit: 4 0.9936140097241216 Test AUC for digit: 5 0.9941211483938233 Test AUC for digit: 6 0.9940476190476191 Test AUC for digit: 7 0.9912349278979704 Test AUC for digit: 8 0.9904761904761904 Test AUC for digit: 9 0.9916870128535711
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
The performance of a published network (LeNet-5, VGG, YOLO, etc.) for recognizing MNIST digits

We decided to implement the architecture of the LeNet-5 network. The LeNet-5 architecture consists of two sets of convolutional and average pooling layers, followed by a flattening convolutional layer, then two fully connected layers and finally a softmax classifier.

LeNet5

This is an implementation of LeNet-5 (slightly different, because our input shape is 28x28 while the original used 32x32). Despite its age, the model is quite accurate (test accuracy of 98.9%). This is close to our best models, and it does not require a large number of parameters (only 60,074).
lenet5 = keras.Sequential([
    layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1)),
    layers.AveragePooling2D(),
    layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'),
    layers.AveragePooling2D(),
    layers.Flatten(),
    layers.Dense(units=120, activation='relu'),
    layers.Dense(units=84, activation='relu'),
    layers.Dense(units=10, activation='softmax')
])
lenet5.summary()
predict_model(lenet5, [es], epochs=100)
Model: "sequential_27" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_30 (Conv2D) (None, 26, 26, 6) 60 _________________________________________________________________ average_pooling2d_12 (Averag (None, 13, 13, 6) 0 _________________________________________________________________ conv2d_31 (Conv2D) (None, 11, 11, 16) 880 _________________________________________________________________ average_pooling2d_13 (Averag (None, 5, 5, 16) 0 _________________________________________________________________ flatten_27 (Flatten) (None, 400) 0 _________________________________________________________________ dense_82 (Dense) (None, 120) 48120 _________________________________________________________________ dense_83 (Dense) (None, 84) 10164 _________________________________________________________________ dense_84 (Dense) (None, 10) 850 ================================================================= Total params: 60,074 Trainable params: 60,074 Non-trainable params: 0 _________________________________________________________________ Epoch 1/100 439/439 [==============================] - 4s 7ms/step - loss: 0.8082 - accuracy: 0.7743 - precision: 0.8916 - recall: 0.6138 - val_loss: 0.1701 - val_accuracy: 0.9485 - val_precision: 0.9569 - val_recall: 0.9416 Epoch 2/100 439/439 [==============================] - 3s 7ms/step - loss: 0.1352 - accuracy: 0.9603 - precision: 0.9669 - recall: 0.9535 - val_loss: 0.0855 - val_accuracy: 0.9750 - val_precision: 0.9798 - val_recall: 0.9723 Epoch 3/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0792 - accuracy: 0.9762 - precision: 0.9797 - recall: 0.9734 - val_loss: 0.0775 - val_accuracy: 0.9766 - val_precision: 0.9808 - val_recall: 0.9743 Epoch 4/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0660 - accuracy: 0.9803 - precision: 0.9830 - recall: 0.9780 - val_loss: 0.0585 - val_accuracy: 0.9827 - val_precision: 0.9855 - val_recall: 0.9812 Epoch 5/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0530 - accuracy: 0.9835 - precision: 0.9857 - recall: 0.9815 - val_loss: 0.0560 - val_accuracy: 0.9837 - val_precision: 0.9865 - val_recall: 0.9815 Epoch 6/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0451 - accuracy: 0.9858 - precision: 0.9874 - recall: 0.9846 - val_loss: 0.0515 - val_accuracy: 0.9848 - val_precision: 0.9877 - val_recall: 0.9838 Epoch 7/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0382 - accuracy: 0.9878 - precision: 0.9892 - recall: 0.9867 - val_loss: 0.0510 - val_accuracy: 0.9838 - val_precision: 0.9872 - val_recall: 0.9825 Epoch 8/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0330 - accuracy: 0.9898 - precision: 0.9910 - recall: 0.9890 - val_loss: 0.0480 - val_accuracy: 0.9869 - val_precision: 0.9881 - val_recall: 0.9848 Epoch 9/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0357 - accuracy: 0.9880 - precision: 0.9894 - recall: 0.9870 - val_loss: 0.0450 - val_accuracy: 0.9864 - val_precision: 0.9880 - val_recall: 0.9848 Epoch 10/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0265 - accuracy: 0.9913 - precision: 0.9920 - recall: 0.9902 - val_loss: 0.0379 - val_accuracy: 0.9889 - val_precision: 0.9899 - val_recall: 0.9879 Epoch 11/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0245 - accuracy: 0.9916 - precision: 0.9922 - recall: 
0.9911 - val_loss: 0.0468 - val_accuracy: 0.9851 - val_precision: 0.9865 - val_recall: 0.9843 Epoch 12/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0253 - accuracy: 0.9920 - precision: 0.9927 - recall: 0.9915 - val_loss: 0.0380 - val_accuracy: 0.9893 - val_precision: 0.9905 - val_recall: 0.9883 Epoch 13/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0206 - accuracy: 0.9935 - precision: 0.9939 - recall: 0.9928 - val_loss: 0.0350 - val_accuracy: 0.9903 - val_precision: 0.9913 - val_recall: 0.9887 Epoch 14/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0168 - accuracy: 0.9947 - precision: 0.9953 - recall: 0.9943 - val_loss: 0.0356 - val_accuracy: 0.9899 - val_precision: 0.9909 - val_recall: 0.9893 Epoch 15/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0133 - accuracy: 0.9955 - precision: 0.9959 - recall: 0.9952 - val_loss: 0.0419 - val_accuracy: 0.9893 - val_precision: 0.9900 - val_recall: 0.9883 Epoch 16/100 439/439 [==============================] - 3s 7ms/step - loss: 0.0129 - accuracy: 0.9959 - precision: 0.9962 - recall: 0.9957 - val_loss: 0.0420 - val_accuracy: 0.9874 - val_precision: 0.9884 - val_recall: 0.9870 Restoring model weights from the end of the best epoch. Epoch 00016: early stopping Test loss: 0.039734747260808945 Test accuracy: 0.9887142777442932 Test precision: 0.9896981120109558 Test recall: 0.9881428480148315 Test f1 score: 0.9887142857142858 Test AUC for digit: 0 0.9967844920645857 Test AUC for digit: 1 0.9953059136050564 Test AUC for digit: 2 0.9949662450022941 Test AUC for digit: 3 0.9916327889937758 Test AUC for digit: 4 0.9944156815297652 Test AUC for digit: 5 0.9953975922796036 Test AUC for digit: 6 0.9949879467017794 Test AUC for digit: 7 0.9939484045292735 Test AUC for digit: 8 0.986462230229477 Test AUC for digit: 9 0.9932852101356561
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
The best network
from sklearn.metrics import confusion_matrix

y_pred = model_cnn_long.predict(x_test)
y_pred1 = list(np.argmax(y_pred, axis=1))
y_test1 = list(np.argmax(y_test, axis=1))
# Store the result under a new name so the imported function is not shadowed
cm = confusion_matrix(y_test1, y_pred1)
print(cm)
[[654 0 0 0 0 0 0 0 0 0] [ 0 765 0 0 0 0 0 1 0 1] [ 1 1 697 1 0 0 0 3 1 0] [ 0 0 0 707 0 0 0 2 0 0] [ 0 0 0 0 670 0 1 0 0 2] [ 1 1 0 0 0 648 0 0 0 2] [ 1 0 0 1 0 1 697 0 0 0] [ 0 0 0 0 0 1 0 745 0 0] [ 0 0 0 1 0 0 0 0 683 2] [ 0 1 0 0 3 1 0 1 0 703]]
MIT
MNIST_Recognizer.ipynb
ColdBacon/Digit-recognizer
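As a follow-up sketch of ours (assuming the confusion matrix computed above is stored in cm), per-class recall can be read directly off the matrix:

import numpy as np

# Diagonal = correctly classified counts; row sums = true counts per digit
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for digit, r in enumerate(per_class_recall):
    print(f"digit {digit}: {r:.4f}")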
Predicting Movie Review Sentiment with BERT on TF Hub

If you've been following Natural Language Processing over the past year, you've probably heard of BERT: Bidirectional Encoder Representations from Transformers. It's a neural network architecture designed by Google researchers that's totally transformed what's state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.

Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.

Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime

tf.logging.set_verbosity(tf.logging.INFO)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
np_resource = np.dtype([("resource", np.ubyte, 1)])
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
In addition to the standard libraries we imported above, we'll need to install BERT's python package.
!pip install bert-tensorflow

import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
WARNING: Logging before flag parsing goes to stderr. W0414 10:19:55.760469 140105573619520 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/bert/optimization.py:87: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.

Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.

Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
# Set the output directory for saving model file
# Optionally, set a GCP bucket location

OUTPUT_DIR = 'output_files' #@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = False #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = False #@param {type:"boolean"}
BUCKET = 'BUCKET_NAME' #@param {type:"string"}

if USE_BUCKET:
  OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
  from google.colab import auth
  auth.authenticate_user()

if DO_DELETE:
  try:
    tf.gfile.DeleteRecursively(OUTPUT_DIR)
  except:
    # Doesn't matter if the directory didn't exist
    pass

tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
***** Model output directory: output_files *****
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
Data

First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub).
from tensorflow import keras
import os
import re

# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
  data = {}
  data["sentence"] = []
  data["sentiment"] = []
  for file_path in os.listdir(directory):
    with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
      data["sentence"].append(f.read())
      data["sentiment"].append(re.match(r"\d+_(\d+)\.txt", file_path).group(1))
  return pd.DataFrame.from_dict(data)

# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
  pos_df = load_directory_data(os.path.join(directory, "pos"))
  neg_df = load_directory_data(os.path.join(directory, "neg"))
  pos_df["polarity"] = 1
  neg_df["polarity"] = 0
  return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)

# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
  dataset = tf.keras.utils.get_file(
      fname="aclImdb.tar.gz",
      origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
      extract=True)

  train_df = load_dataset(os.path.join(os.path.dirname(dataset), "aclImdb", "train"))
  test_df = load_dataset(os.path.join(os.path.dirname(dataset), "aclImdb", "test"))

  return train_df, test_df

# train, test = download_and_load_datasets()

import pandas as pd

# Custom loader: read the pre-split review data from local CSV files instead.
def load_dataset():
  train_df = pd.read_csv('data/train.csv')
  test_df = pd.read_csv('data/valid.csv')
  return train_df, test_df

train, test = load_dataset()
_____no_output_____
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
To keep training fast, we could take a sample of 5,000 train and test examples each; here the sampling is left commented out, so the full dataset is used.
# train = train.sample(5000)
# test = test.sample(5000)

train.columns
_____no_output_____
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
For us, our input data is the 'text' column and our label is the 'stars' column (star ratings from 1 to 5), as configured below.
DATA_COLUMN = 'text'
LABEL_COLUMN = 'stars'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [1, 2, 3, 4, 5]
_____no_output_____
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert
Data Preprocessing

We'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.

- `text_a` is the text we want to classify, which in this case is the `DATA_COLUMN` field in our DataFrame.
- `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.
- `label` is the label for our example, i.e. True, False
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(
    lambda x: bert.run_classifier.InputExample(
        guid=None,  # Globally unique ID for bookkeeping, unused in this example
        text_a=x[DATA_COLUMN],
        text_b=None,
        label=x[LABEL_COLUMN]),
    axis=1)

test_InputExamples = test.apply(
    lambda x: bert.run_classifier.InputExample(
        guid=None,
        text_a=x[DATA_COLUMN],
        text_b=None,
        label=x[LABEL_COLUMN]),
    axis=1)
_____no_output_____
Apache-2.0
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
bedman3/bert