Training Deep Learning Models

Early Stopping

One of the simplest techniques for regularization in deep learning is early stopping. Given a training set, a validation set, and a network with sufficient capacity, we observe that with increasing training steps, at first the error on both the training set and the validation set decreases; then the error on the training set continues to decrease while the error on the validation set starts to increase (see the figure below). The key idea of early stopping is to keep track of the model parameters/weights that give the best performance on the validation set, and to stop training once this best-so-far validation performance has not improved for a predefined number of training steps.

Figure: Early stopping

Early stopping acts as a regularizer by restricting the values that the parameters/weights of the model can take (see the next figure). Early stopping limits the weights to a neighborhood around their starting values: if we stop at step s, the weight values that would only have been reached after step s are never taken. This essentially restricts the capacity of the model.
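A minimal sketch of this idea in Python, assuming hypothetical train_one_epoch() and validation_loss() helpers supplied by the caller; it is not the book's code, only an illustration of tracking the best validation performance and stopping once it stops improving for a given patience.

import copy

def train_with_early_stopping(model, patience, max_epochs,
                              train_one_epoch, validation_loss):
    # train_one_epoch and validation_loss are assumed helper functions
    best_loss = float("inf")
    best_weights = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = validation_loss(model)

        if val_loss < best_loss:
            # validation improved: remember these weights
            best_loss = val_loss
            best_weights = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # stop: no improvement for `patience` epochs

    # restore the best weights seen on the validation set
    model.load_state_dict(best_weights)
    return model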
Early stopping is quite non-invasive, in the sense that it does not require any changes to the model. It is also inexpensive, as it only requires storing the parameters of the model (the best so far on the validation set). It can also be combined easily with other regularization techniques.

Figure: Early stopping restricts the weights

Norm Penalties

Norm penalties are a common form of regularization in deep learning (and machine learning in general). The idea is simply to add a term αR(θ) to the loss function of the neural network, where R typically represents either the L1 norm or the L2 norm and θ represents the parameters/weights of the network. Thus, the regularized loss function becomes L(f_NN(x, θ), y) + αR(θ) instead of just L(f_NN(x, θ), y). Note that the term α is the regularization parameter.
Note: In general, an Lp norm is defined as ||x||_p = (Σ_i |x_i|^p)^(1/p). Accordingly, the L1 norm is defined as ||x||_1 = Σ_i |x_i|, and the L2 norm is defined as ||x||_2 = (Σ_i |x_i|^2)^(1/2) = (Σ_i x_i^2)^(1/2).

Let's dive deeper into the regularized loss function L(f_NN(x, θ), y) + αR(θ). The following points are to be noted:

- As we attempt to minimize the overall loss function L(f_NN(x, θ), y) + αR(θ), we attempt to reduce the contribution of the L(f_NN(x, θ), y) term as well as that of the regularization term αR(θ).
- It follows that for two sets of parameters θ_a and θ_b, if L(f_NN(x, θ_a), y) = L(f_NN(x, θ_b), y), then the optimization algorithm will choose θ_a if R(θ_a) < R(θ_b), and θ_b if R(θ_b) < R(θ_a). Thus, the role of the regularization term is to direct the optimization toward the θ that lowers R(θ).
- It is easy to see that lower values of R(θ), when R corresponds to the L1 norm, lead to a sparser θ, hence reducing the effective capacity.
- It is easy to see that lower values of R(θ), when R corresponds to the L2 norm, lead to a θ closer to zero, hence reducing the effective capacity (see the figure below).
- The α term controls how much emphasis we place on L(f_NN(x, θ), y) versus R(θ): higher values of α mean more emphasis is placed on regularization.
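As a quick illustration of these definitions (not taken from the book), the following sketch computes the L1 and L2 norms of a small network's parameters with plain tensor operations; adding α times either quantity to the data loss gives the regularized loss described above.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)          # any network would do; sizes are illustrative

# R(theta) for the L1 norm: sum of absolute parameter values
l1_penalty = sum(p.abs().sum() for p in model.parameters())

# R(theta) for the L2 norm: square root of the sum of squared parameter values
l2_penalty = torch.sqrt(sum((p ** 2).sum() for p in model.parameters()))

alpha = 0.01                      # regularization strength (illustrative value)
# regularized_loss = data_loss + alpha * l1_penalty   (or alpha * l2_penalty)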
It must be noted that norm penalties are applied to the weight vectors, not the bias terms. The reasoning is that any regularization is a tradeoff between overfitting and underfitting, and regularizing the bias term leads to a bad tradeoff due to too much underfitting. While training deep learning networks, different values of α can be used for different layers, and the appropriate value of α is determined experimentally, using the validation set as a guide.

Figure: The L2 norm leads to θ closer to zero. θ_a is picked by the optimization algorithm because of regularization; without it, θ_b would be picked.

Dropout

Dropout is essentially a computationally cheap alternative to model ensembling/averaging. Let us first consider the key concept of model ensembling/averaging. While individual models with sufficient capacity can overfit, if we average or take a majority vote over the predictions of multiple models (trained over subsets of the data, or with different weight initializations, or with different hyperparameters), we can address overfitting. Model ensembling/averaging is an extremely useful form of regularization that helps us deal with overfitting. However, it is quite computationally expensive,
given that we have to train multiple models and make predictions with multiple models (and then combine them via voting or averaging). This computational expense is particularly high with deep learning models with multiple layers. Dropout provides a cheap alternative.

The key idea of dropout is to randomly drop units and their connections while training the network, with retention probability p, and then to multiply the learned weights by p at prediction time (see the figure below). Let us make this idea precise in the form of mathematical expressions. A standard neural network layer can be expressed as y = f(wx + b), where y is the output, x is the input, f is the activation function, and w and b are the weight vector and bias term, respectively. A dropout layer at training time can be expressed as y = f(w(r ⊙ x) + b), where r ~ Bernoulli(p) and the symbol ⊙ denotes pointwise multiplication of two vectors (each element of one vector is multiplied by the corresponding element of the other). At prediction time, the dropout layer can be represented as y = f((p·w)x + b). It is easy to see that the dropout layer, while training, actually trains multiple networks: for every distinct r we have a different network. It is also easy to see that at prediction time we are averaging over these multiple networks via y = f((p·w)x + b). While training with dropout with batch/stochastic gradient descent, a single value of r is used over an entire batch. In the literature, the recommended retention probabilities are around 0.8 for input units and 0.5 for hidden units. A norm regularization found useful with dropout is max-norm regularization, in which w is constrained as ||w|| < c, where c is a user-defined parameter.
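A small sketch (not the book's code) showing how PyTorch's nn.Dropout realizes this behavior. Note that nn.Dropout's argument is the drop probability (not the retention probability), and that PyTorch uses "inverted dropout": surviving activations are scaled at training time, which is mathematically equivalent to the weight scaling at prediction time described above. In evaluation mode the layer becomes a pass-through.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)     # p is the probability of zeroing a unit
x = torch.ones(1, 8)

drop.train()                 # training mode: random units are dropped
print(drop(x))               # roughly half the entries are 0, the rest scaled by 1/(1-p)

drop.eval()                  # evaluation mode: dropout is a no-op
print(drop(x))               # identical to x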
Figure: Dropout

Practical Implementation in PyTorch

We will now explore the topics we've discussed so far with a practical example. For the purpose of this exercise, we will use a bank telemarketing dataset hosted at the UCI Machine Learning Repository and contributed by [Moro et al.]. The subset hosted on Kaggle is a balanced dataset (a similar number of positive and negative samples) compared to the original, which makes the purpose of the exercise easier.
So far, we have explored toy datasets crafted using Python, and thus we have barely touched on the data processing and data engineering that is essential before building deep learning models. This holds true for all forms of data: tabular, images, text, audio/video/speech, etc. In this exercise, we will look at a few basic data processing steps. Although extensive data processing is beyond the scope of this book, the objective here is to give you an idea of the kind of processing that might be required for real-life use cases. Let's get started. Before downloading the aforementioned dataset, you first need to register and create an account at www.kaggle.com. In the following listing we import the essential Python packages for our exercise.

Listing: Importing the Required Libraries

#Import required libraries
import torch.nn as nn
import torch as tch
import numpy as np, pandas as pd
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import precision_score, recall_score, roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import matplotlib.pyplot as plt

sklearn is a machine learning library for Python that provides a comprehensive list of algorithms, metrics, data processing tools, and other utility functions. We use the metrics module within sklearn for handy functions that help in computing model performance, such as precision, recall, and accuracy. Similarly, pandas is a great Python package that provides comprehensive means to process, manipulate, and explore tabular dataframes. We will use pandas to read and explore the dataset for our exercise, as well as leverage a few functions within pandas to tailor the
dataset to our needs within PyTorch. The following listing illustrates loading the data into memory using pandas.

Listing: Loading Data into Memory

#Load data into memory using pandas
df = pd.read_csv("/Users/Downloads/dataset.csv")
print("DF Shape:", df.shape)
df.head()

Out[]:
DF Shape: (<rows>, <columns>)

Using pandas with Jupyter notebooks provides an elegant means to explore data iteratively. The preceding output is the result of the df.head() command, which prints the first five rows of the dataset; the df.shape command presents the shape of the dataset as [rows, columns]. In this dataset, we are provided with the details of a bank telemarketing activity. The dataset captures the details of the customer targeted and some details about the previous and current marketing call, along with the success outcome, deposit. Customer attributes include age, job, marital status (marital), education, whether they have defaulted on payments (default), current bank balance (balance), and indicators for a housing loan and a personal loan. Campaign attributes include the type of contact (contact), the time of the contact (day/month) and its duration (duration), the number of contacts performed by the agent (campaign), the number of days since the previous contact (pdays), the number of previous contacts (previous), and the previous outcome (poutcome).
For a detailed note on the attributes within the dataset, visit archive.ics.uci.edu/ml/datasets/bank+marketing. Our objective is to build a deep learning model that correctly classifies the outcome (deposit) for a given customer and campaign combination. Let's first look at the distribution of the target column in our dataset. The following listing explores the distribution of the target values.

Listing: Distribution of the Target Values

print("Distribution of Target values in dataset -")
df.deposit.value_counts()

Out[]:
Distribution of Target values in dataset -
no     <count>
yes    <count>
Name: deposit, dtype: int64

We can see that we have a roughly similar distribution of yes and no in our dataset. The next listing explores the distribution of null values in the dataset.

Listing: Distribution of NA (Null) Values in the Dataset

#Check if we have 'na' values within the dataset
df.isna().sum()

Out[]:
age          0
job          0
marital      0
education    0
default      0
balance      0
housing      0
loan         0
contact      0
day          0
month        0
duration     0
campaign     0
pdays        0
previous     0
poutcome     0
deposit      0
dtype: int64

The dataset does not have any NA or missing values. In most real-life datasets, this will not hold true; researchers and data engineers spend a significant amount of time treating missing values and outliers. The following are additional checks that you should experiment with independently:

- Check for outliers.
- Identify strategies to treat outliers within the data:
  - Impute with the mean
  - Impute with the mode
  - Impute with the median
  - Use other advanced techniques (cluster-based imputation or regression techniques) to treat the values
- Check for missing values.
- Identify strategies to treat missing values:
  - Drop records (if the number of missing records is only a small percentage of the data)
  - Impute records with approaches similar to those for outliers

Next, let's explore the different datatypes within the dataset. Deep learning models only understand numbers, and PyTorch, more specifically, works with 32-bit floating-point numbers here. We therefore need to transform our dataset into a suitable form that is ready to use with PyTorch. The following listing explores the distribution of distinct datatypes.

Listing: Distribution of the Distinct Datatypes

#Check the distinct datatypes within the dataset
df.dtypes.value_counts()

Out[]:
object    6
int64     <count>
dtype: int64

We have six object (string) datatype-based columns, which we need to convert into numeric flags before building models. We will convert categorical columns into one-hot encoded forms, where each category value is represented as a binary flag. Before doing that, however, let's manually convert the columns that have yes/no binary categories into a single 1/0 column, and leverage a pandas-based function to convert the remaining categorical columns automatically. The following listing demonstrates extracting the categorical columns from the dataset.
Listing: Extracting Categorical Columns from the Dataset

#Extract categorical columns from dataset
categorical_columns = df.select_dtypes(include="object").columns
print("Categorical cols:", list(categorical_columns))

#For each categorical column, if values are (yes/no), convert into a 1/0 flag
for col in categorical_columns:
    if df[col].nunique() == 2:
        df[col] = np.where(df[col] == "yes", 1, 0)

df.head()

We can see that our target column, deposit, and a few other columns, including loan, default, and housing, have been converted to binary flags (manually). For the remaining columns that have non-binary categorical values, we can leverage the pandas get_dummies() function to process them automatically. The next listing performs one-hot encoding for the categorical variables within the dataset.

Listing: One-Hot Encoding for the Remaining Non-Binary Categorical Variables

#For the remaining categorical variables,
#create a one-hot encoded version of the dataset
new_df = pd.get_dummies(df)

#Define target and predictors for the model
target = "deposit"
predictors = list(set(new_df.columns) - set([target]))
print("new_df.shape:", new_df.shape)
new_df[predictors].head()

Out[]:
new_df.shape: (<rows>, <columns>)

We have now defined a list, predictors, that contains all the independent predictor column names, and target, which contains our target, the deposit column name. The new_df dataframe has all categorical columns processed into one-hot encoded form by the get_dummies() function in pandas. The preceding output limits the view to the first few columns; we can see that contact has been transformed into contact_unknown, contact_cellular, etc. The dataset now has only numeric columns. Finally, before designing our neural network, we need to convert all the columns to the float datatype, split the data into training and validation datasets, and then convert them to PyTorch tensors. The following listing prepares the dataset for training and validation.

Listing: Preparing the Dataset for Training and Validation

#Convert all datatypes within pandas dataframe to float32
#(compatibility with PyTorch tensors)
new_df = new_df.astype(np.float32)

#Split dataset into train/test
x_train, x_test, y_train, y_test = train_test_split(
    new_df[predictors], new_df[target], test_size=0.2)  # split ratio assumed

#Convert pandas dataframe, first to numpy and then to torch tensors
x_train = tch.from_numpy(x_train.values)
x_test  = tch.from_numpy(x_test.values)
y_train = tch.from_numpy(y_train.values).reshape(-1, 1)
y_test  = tch.from_numpy(y_test.values).reshape(-1, 1)

#Print the dataset size to verify
print("x_train.shape:", x_train.shape)
print("x_test.shape:", x_test.shape)
print("y_train.shape:", y_train.shape)
print("y_test.shape:", y_test.shape)

Out[]:
x_train.shape: torch.Size([<n_train>, <n_features>])
x_test.shape:  torch.Size([<n_test>, <n_features>])
y_train.shape: torch.Size([<n_train>, 1])
y_test.shape:  torch.Size([<n_test>, 1])

We now have the dataset ready for our deep learning experiments. Before designing our network, let's put in place a few essential building blocks that can be reused for our experiments. The following listing demonstrates the boilerplate code for training a model in PyTorch.

Note: In the exercises in this book, we will always divide the dataset into training and validation sets (as opposed to dividing it into training, validation, and testing, as discussed previously). In real-life production experiments, we recommend readers keep a separate test dataset that can fulfill the required checks before going live in production systems.
Listing: Defining the Function to Train the Model

#Define function to train the network
def train_network(model, optimizer, loss_function, num_epochs,
                  batch_size, x_train, y_train, lambda_L1=0.0):
    loss_across_epochs = []
    for epoch in range(num_epochs):
        train_loss = 0.0

        #Explicitly start model training
        model.train()

        for i in range(0, x_train.shape[0], batch_size):
            #Extract train batch from x and y
            input_data = x_train[i:min(x_train.shape[0], i + batch_size)]
            labels = y_train[i:min(x_train.shape[0], i + batch_size)]

            #Set the gradients to zero before starting backpropagation
            optimizer.zero_grad()

            #Forward pass
            output_data = model(input_data)

            #Calculate loss
            loss = loss_function(output_data, labels)

            #Compute L1 penalty to be added to the loss
            L1_loss = 0
            for p in model.parameters():
                L1_loss = L1_loss + p.abs().sum()

            #Add L1 penalty to loss
            loss = loss + lambda_L1 * L1_loss

            #Backpropagate
            loss.backward()

            #Update weights
            optimizer.step()

            train_loss += loss.item() * input_data.size(0)

        loss_across_epochs.append(train_loss / x_train.size(0))
        if epoch % 100 == 0:   # print progress periodically (interval assumed)
            print("Epoch: {} - Loss: {:.4f}".format(epoch, train_loss / x_train.size(0)))
    return loss_across_epochs

The preceding function loops over mini-batches for the defined number of epochs and trains our neural network. You are already familiar with this structure; the only new addition to the function is the calculation of an L1 penalty when we use L1 regularization. The lambda_L1 variable is a hyperparameter that we can tune to control the effect of the regularizer. Let's now define a function that can be used to plot the loss over epochs and the ROC curve for the training and validation datasets, and to evaluate the model on the key metrics of interest. Because this is a classification use case, we will compute accuracy, precision, and recall using the functions we imported earlier from sklearn. The following listing demonstrates the boilerplate code for evaluating a model.
Listing: Defining the Function to Evaluate the Model Performance

#Define function for evaluating NN
def evaluate_model(model, x_test, y_test, x_train, y_train, loss_list):
    model.eval()  #Explicitly set to evaluate mode

    #Predict on train and validation datasets
    y_test_prob = model(x_test)
    y_test_pred = np.where(y_test_prob > 0.5, 1, 0)
    y_train_prob = model(x_train)
    y_train_pred = np.where(y_train_prob > 0.5, 1, 0)

    #Compute training and validation metrics
    print("\n Model Performance -")
    print("Training Accuracy-", round(accuracy_score(y_train, y_train_pred), 3))
    print("Training Precision-", round(precision_score(y_train, y_train_pred), 3))
    print("Training Recall-", round(recall_score(y_train, y_train_pred), 3))
    print("Training ROCAUC", round(roc_auc_score(y_train, y_train_prob.detach().numpy()), 3))

    print("Validation Accuracy-", round(accuracy_score(y_test, y_test_pred), 3))
    print("Validation Precision-", round(precision_score(y_test, y_test_pred), 3))
    print("Validation Recall-", round(recall_score(y_test, y_test_pred), 3))
    print("Validation ROCAUC", round(roc_auc_score(y_test, y_test_prob.detach().numpy()), 3))
    print("\n")

    #Plot the loss curve and ROC curve
    plt.figure(figsize=(15, 5))

    plt.subplot(1, 2, 1)
    plt.plot(loss_list)
    plt.title('Loss across epochs')
    plt.ylabel('Loss')
    plt.xlabel('Epochs')

    plt.subplot(1, 2, 2)

    #Validation
    fpr_v, tpr_v, _ = roc_curve(y_test, y_test_prob.detach().numpy())
    roc_auc_v = auc(fpr_v, tpr_v)

    #Training
    fpr_t, tpr_t, _ = roc_curve(y_train, y_train_prob.detach().numpy())
    roc_auc_t = auc(fpr_t, tpr_t)

    plt.title('Receiver Operating Characteristic: Validation')
    plt.plot(fpr_v, tpr_v, 'b', label='Validation AUC = %0.2f' % roc_auc_v)
    plt.plot(fpr_t, tpr_t, 'r', label='Training AUC = %0.2f' % roc_auc_t)
    plt.legend(loc='lower right')
    plt.plot([0, 1], [0, 1], 'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.show()
Finally, with all the necessary building blocks in place, it is time to define our neural network and leverage the preceding helper functions to train and evaluate the deep learning model. We will begin with a vanilla neural network with no regularizers; we will later experiment by adding L1, L2, and dropout to study their effect, and take the best one to make predictions. The following listing defines the structure of our neural network.

Listing: Defining the Structure of the Neural Network

#Define neural network
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        tch.manual_seed(2020)                        # seed value assumed
        #Hidden layer sizes below are illustrative
        self.fc1 = nn.Linear(x_train.shape[1], 128)  # input width taken from the training data
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.out = nn.Linear(32, 1)
        self.relu = nn.ReLU()
        self.final = nn.Sigmoid()

    def forward(self, x):
        op = self.fc1(x)
        op = self.relu(op)
        op = self.fc2(op)
        op = self.relu(op)
        op = self.fc3(op)
        op = self.relu(op)
        op = self.out(op)
        y = self.final(op)
        return y
#Define training variables
num_epochs = 500                      # value assumed
batch_size = 128                      # value assumed
loss_function = nn.BCELoss()          #Binary cross-entropy loss

#Hyperparameters
weight_decay = 0.0   #Set to 0; no L2 regularizer; passed into the optimizer
lambda_L1 = 0.0      #Set to 0; no L1 reg; manually added in loss (train_network)

#Create model instance
model = NeuralNetwork()

#Define optimizer
adam_optimizer = tch.optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)

#Train model
adam_loss = train_network(model, adam_optimizer, loss_function,
                          num_epochs, batch_size, x_train, y_train, lambda_L1=lambda_L1)

#Evaluate model
evaluate_model(model, x_test, y_test, x_train, y_train, adam_loss)

Out[]:
Epoch: 0 - Loss: <...>
...
Epoch: <n> - Loss: <...>
Model Performance -
Training Accuracy-   <...>
Training Precision-  <...>
Training Recall-     <...>
Training ROCAUC      <...>
Validation Accuracy-  <...>
Validation Precision- <...>
Validation Recall-    <...>
Validation ROCAUC     <...>

We defined the number of epochs and the batch size, while keeping weight_decay=0 and lambda_L1=0 (which essentially removes the effect of the L2 and L1 regularizers); we will experiment with these values soon. As before, we used the Adam optimizer with BCELoss() for our network. Our network has three hidden layers; you can play around with different numbers of units within the neural network architecture. If we take a closer look at the results, we see a huge gap between the training and validation datasets. A single metric that helps capture this difference is the ROC AUC (area under the curve): the training AUC is far higher than the validation AUC. This gap is huge; essentially, we are facing the overfitting problem. To overcome overfitting, we need to add regularizers that add a penalty to the model's loss, cueing the model to learn simpler patterns. Ideally, we want to see similar results on training as well as validation.
Let's start with L1 regularization. We added a small snippet of code within the train_network() function that computes the sum of the absolute values of the parameters and adds it to the loss after multiplying by lambda (the hyperparameter). To enable L1 regularization, we need to pass a non-zero value to the lambda_L1 variable. The following listing demonstrates L1 regularization for the network.

Listing: L1 Regularization

#L1 Regularization
num_epochs = 500                      # value assumed
batch_size = 128                      # value assumed
weight_decay = 0.0      #Set to 0; no L2 reg
lambda_L1 = 0.0001      #Enables L1 regularization (value assumed)

model = NeuralNetwork()
loss_function = nn.BCELoss()  #Binary cross-entropy loss
adam_optimizer = tch.optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)

#Train network
adam_loss = train_network(model, adam_optimizer, loss_function,
                          num_epochs, batch_size, x_train, y_train, lambda_L1=lambda_L1)

#Evaluate model
evaluate_model(model, x_test, y_test, x_train, y_train, adam_loss)
Out[]:
Epoch: 0 - Loss: <...>
...
Epoch: <n> - Loss: <...>

Model Performance -
Training Accuracy-   <...>
Training Precision-  <...>
Training Recall-     <...>
Training ROCAUC      <...>
Validation Accuracy-  <...>
Validation Precision- <...>
Validation Recall-    <...>
Validation ROCAUC     <...>

Similarly, let's try L2 regularization. By default, PyTorch provides a means to enable L2 regularization directly through a parameter within the optimizer. Within Adam optimization, we can add this using the weight_decay variable. The following listing demonstrates L2 regularization for the network.
Listing: L2 Regularization

#L2 Regularization
num_epochs = 500                      # value assumed
batch_size = 128                      # value assumed
weight_decay = 0.001    #Enables L2 regularization (value assumed)
lambda_L1 = 0.0         #Set to 0; no L1 reg

model = NeuralNetwork()
loss_function = nn.BCELoss()  #Binary cross-entropy loss
adam_optimizer = tch.optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)

#Train network
adam_loss = train_network(model, adam_optimizer, loss_function,
                          num_epochs, batch_size, x_train, y_train, lambda_L1=lambda_L1)

#Evaluate model
evaluate_model(model, x_test, y_test, x_train, y_train, adam_loss)

Out[]:
Epoch: 0 - Loss: <...>
...
Epoch: <n> - Loss: <...>

Model Performance -
Training Accuracy-   <...>
Training Precision-  <...>
Training Recall-     <...>
Training ROCAUC      <...>
Validation Accuracy-  <...>
Validation Precision- <...>
Validation Recall-    <...>
Validation ROCAUC     <...>

Similar to L1, we see somewhat better results with L2 than without regularization: the gap between training and validation reduced, and the validation AUC increased by a small fraction. With L1 and L2 regularization (individually), we saw the gap between training and validation performance shrink, and thus reduced overfitting. We now have more favorable results for our use case. Before finalizing the results, let's add dropout layers. The following listing adds a dropout layer to randomly drop a fraction of the input neurons during learning. We add the dropout layer to the input layer as well as the hidden layers.

Listing: Dropout Regularization

#Define network with dropout layers
class NeuralNetwork(nn.Module):
    #Adding dropout layers within the neural network to reduce overfitting
    def __init__(self):
        super().__init__()
        tch.manual_seed(2020)                        # seed value assumed
        #Hidden layer sizes below are illustrative
        self.fc1 = nn.Linear(x_train.shape[1], 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.relu = nn.ReLU()
        self.out = nn.Linear(32, 1)
        self.final = nn.Sigmoid()
        self.drop = nn.Dropout(0.1)   #Dropout layer (drop probability assumed)

    def forward(self, x):
        op = self.drop(x)     #Dropout for input layer
        op = self.fc1(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 1
        op = self.fc2(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 2
        op = self.fc3(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 3
        op = self.out(op)
        y = self.final(op)
        return y

num_epochs = 500                      # value assumed
batch_size = 128                      # value assumed
weight_decay = 0.0   #Set to 0; no L2 reg
lambda_L1 = 0.0      #Set to 0; no L1 reg

model = NeuralNetwork()
loss_function = nn.BCELoss()  #Binary cross-entropy loss
adam_optimizer = tch.optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)

#Train model
adam_loss = train_network(model, adam_optimizer, loss_function,
                          num_epochs, batch_size, x_train, y_train, lambda_L1=lambda_L1)

#Evaluate model
evaluate_model(model, x_test, y_test, x_train, y_train, adam_loss)

Out[]:
Epoch: 0 - Loss: <...>
...
Epoch: <n> - Loss: <...>

Model Performance -
Training Accuracy-   <...>
Training Precision-  <...>
Training Recall-     <...>
Training ROCAUC      <...>
Validation Accuracy-  <...>
Validation Precision- <...>
Validation Recall-    <...>
Validation ROCAUC     <...>
The gap between training and validation performance has reduced; we can see similar performance across both datasets. Finally, let's combine all three types of regularizers and study the effect on model performance. The following listing demonstrates L1, L2, and dropout regularization combined.

Listing: L1, L2, and Dropout Regularization

#Create network with a dropout layer
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        tch.manual_seed(2020)                        # seed value assumed
        #Hidden layer sizes below are illustrative
        self.fc1 = nn.Linear(x_train.shape[1], 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.relu = nn.ReLU()
        self.out = nn.Linear(32, 1)
        self.final = nn.Sigmoid()
        self.drop = nn.Dropout(0.1)   #Dropout layer (drop probability assumed)

    def forward(self, x):
        op = self.drop(x)     #Dropout for input layer
        op = self.fc1(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 1
        op = self.fc2(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 2
        op = self.fc3(op)
        op = self.relu(op)
        op = self.drop(op)    #Dropout for hidden layer 3
        op = self.out(op)
        y = self.final(op)
        return y

num_epochs = 500                      # value assumed
batch_size = 128                      # value assumed
lambda_L1 = 0.0001      #Enabled (value assumed)
weight_decay = 0.001    #Enabled (value assumed)

model = NeuralNetwork()
loss_function = nn.BCELoss()
adam_optimizer = tch.optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)

adam_loss = train_network(model, adam_optimizer, loss_function,
                          num_epochs, batch_size, x_train, y_train, lambda_L1=lambda_L1)

evaluate_model(model, x_test, y_test, x_train, y_train, adam_loss)

Out[]:
Epoch: 0 - Loss: <...>
...
Epoch: <n> - Loss: <...>

Model Performance -
Training Accuracy-   <...>
Training Precision-  <...>
Training Recall-     <...>
Training ROCAUC      <...>
Validation Accuracy-  <...>
Validation Precision- <...>
Validation Recall-    <...>
Validation ROCAUC     <...>
Overall, we see similar performance in the above three scenarios. In an ideal experiment, there are no predefined benchmarks we could use for selecting the type of regularization that will work better. We would need to experiment with different types of regularizers as well as different values for their hyperparameters: L1 and L2 regularization over a range of lambda values, dropout layers over a range of drop probabilities, and so on. With the results from all the experiments in place, we would be better informed about which type of regularization works best for the data.

Interpreting the Business Outcomes for Deep Learning

The results are fairly good. We see a small gap between training and validation performance (refer to the gap between the red and blue lines within the ROC plot). Overall, we have good accuracy on the validation dataset, with solid precision and recall. These results are very encouraging: out of the predictions made as "yes" for the marketing campaign outcome, we are correct most of the time, while covering a large share of all the customers who would respond positively to the campaign. The sketch below shows how such precision and recall figures translate into campaign effort.
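The counts in this sketch are purely hypothetical and are not taken from the dataset; it only illustrates the arithmetic by which precision and recall turn into calls made and deposits captured.

# Hypothetical campaign numbers -- not from the dataset
total_customers   = 2000      # customers we could call
actual_positives  = 1000      # customers who would actually subscribe
precision         = 0.80      # fraction of predicted "yes" that are correct
recall            = 0.70      # fraction of actual "yes" that we find

true_positives  = recall * actual_positives            # deposits we capture
predicted_yes   = true_positives / precision           # calls we actually make
false_positives = predicted_yes - true_positives

print(f"Calls without a model : {total_customers}")
print(f"Calls with the model  : {predicted_yes:.0f}")
print(f"Deposits captured     : {true_positives:.0f} of {actual_positives}")
print(f"Wasted calls          : {false_positives:.0f}")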
Let's take a moment to understand these results better. We started with a dataset that had roughly equal numbers of positive and negative outcomes. Considering the business problem, targeting every customer translates (considering the effort from the marketing team) into a huge amount of effort lost on the customers with a negative outcome. Assume that we have a fixed number of customers in total, half with positive outcomes and half with negative. Targeting each customer costs one effort unit (one call), and at the end we collect the successful deposits. However, with high precision and recall, we have a filtered list of customers whom we can target with much less effort. Thus, instead of targeting all customers, we now target just the ones we predicted positively, which also includes some false positives. The number of positive predictions is the number of true positives (recall times the number of actual positives) divided by the precision, so the total number of calls shrinks dramatically while most of the positive customers are still captured. Comparing this with the earlier scenario: previously, every customer had to be called to collect all the deposits; with the deep learning model, we call only the customers predicted as positive and still collect almost all of the deposits. Although there is a tradeoff of losing seven positive deposits from the campaign in this example, we have significantly reduced the effort required to achieve an almost equivalent outcome. These metrics can further be tuned, based on business requirements, to suit more favorable outcomes.

Note: We have not covered a similarly elaborate use case for regression. Readers are encouraged to experiment independently with regression use cases, where the target variable is continuous. The approach and formulation of the problem remain the same, although the selection of the loss function, the activation for the output layer, and the performance metrics would need to be based on the use case. A sample regression dataset that we recommend experimenting with
is the Santander Group's Value Prediction Challenge (kaggle.com/c/santander-value-prediction-challenge). A good choice of loss function would be RMSE, the activation for the output layer would be linear, and the performance metric can be RMSE or MSE.

Summary

This chapter covered the process of model training. We described a number of critical steps and analyses that should be performed systematically in order to improve the model. We also covered regularization techniques commonly used in deep learning, namely norm penalties and dropout; there are several other advanced and domain-specific techniques in the literature that deserve mention. So far, we have covered feed-forward neural networks and all the essential bits around deep learning, using a toy dataset, a practical dataset, and a combination of the two with a business use case. You should now have a much more intuitive understanding of formulating a use case, defining relevant metrics to benchmark models, evaluating model performance, and evaluating business viability. In the next chapter, we will explore one of the most important topics within deep learning, convolutional neural networks, and embrace the field of computer vision.
Convolutional Neural Networks

A convolutional neural network (CNN) is essentially a neural network that employs the convolution operation (instead of a fully connected layer) as one of its layers. CNNs are an incredibly successful technology that has been applied to problems in which the input data on which predictions are to be made has a known grid-like topology, like a time series (a 1-D grid) or an image (a 2-D grid). CNNs ushered deep learning into modern times, solving one of the most crucial computational problems of the digital era: computer vision. With the popularity of CNNs, a surge in deep learning research was witnessed that continues today. This chapter takes a brief look at the core concepts of CNNs and explores a simple example in PyTorch to study their practical implementation. We will also explore transfer learning, where we leverage a previously trained network for our use case. Let's start with the basics.
Convolution Operation

Let's start by looking at the convolution operation in one dimension. Given an input I(t) and a kernel K(a), the convolution operation is given by

s(t) = Σ_a I(a) · K(t − a).

An equivalent form of this operation, given the commutativity of the convolution operation, is

s(t) = Σ_a I(t − a) · K(a).

Furthermore, the negative sign (flipping) can be replaced to get cross-correlation, as follows:

s(t) = Σ_a I(t + a) · K(a).

Deep learning literature and software implementations use the terms convolution and cross-correlation interchangeably. The essence of the operation is that the kernel is a much shorter set of data points compared to the input, and the output of the convolution operation is higher when the input is similar to the kernel. The next two figures illustrate this key idea: we take an arbitrary input and an arbitrary kernel and perform the convolution operation. The highest value is achieved when the kernel is similar to a particular portion of the input.
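A concrete, illustrative example of the one-dimensional case (not from the book): numpy's correlate slides a small kernel over an input and produces its largest value where the two are most similar.

import numpy as np

x = np.array([0., 0., 1., 2., 3., 2., 1., 0., 0.])   # input I(t)
k = np.array([1., 2., 1.])                           # kernel K(a)

# cross-correlation: slide the kernel over the input, multiply and sum
s = np.correlate(x, k, mode="valid")
print(s)             # peaks where the input locally matches the kernel shape
print(s.argmax())    # index of the best match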
Figure: Simplified overview of the convolution operation (input, kernel, output)
Figure: Convolution operation, one dimension

The following points should be noted:

- The input is an arbitrary and large set of data points.
- The kernel is a set of data points smaller in number than the input.
- The convolution operation, in a sense, slides the kernel over the input and computes how similar the kernel is to the portion of the input it overlaps.
- The convolution operation produces its highest value where the kernel is most similar to a portion of the input.

The convolution operation can be extended to two dimensions. Given an input I(m, n) and a kernel K(a, b), the convolution operation is given by

s(m, n) = Σ_a Σ_b I(a, b) · K(m − a, n − b).

An equivalent form of this operation, given the commutativity of the convolution operation, is

s(m, n) = Σ_a Σ_b I(m − a, n − b) · K(a, b).

Furthermore, the negative sign (flipping) can be replaced to get cross-correlation, given as follows:

s(m, n) = Σ_a Σ_b I(m + a, n + b) · K(a, b).

The figure below illustrates the convolution operation in two dimensions. Note that this is simply extending the idea of convolution to two dimensions.
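A tiny two-dimensional illustration (not from the book), using torch.nn.functional.conv2d, which, like most deep learning libraries, actually implements cross-correlation:

import torch
import torch.nn.functional as F

# 1 image, 1 channel, 4x4 input
I = torch.tensor([[1., 1., 0., 0.],
                  [1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]]).reshape(1, 1, 4, 4)

# a 2x2 kernel that responds strongly to a 2x2 block of ones
K = torch.ones(1, 1, 2, 2)

S = F.conv2d(I, K)        # output has shape 1 x 1 x 3 x 3
print(S.squeeze())        # largest values where the input matches the kernel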
Figure: Convolution operation, two dimensions

Having introduced the convolution operation, we can now dive deeper into the key constituent parts of a CNN, where a convolutional layer is used instead of a fully connected layer, which involves matrix multiplication.
A fully connected layer can be described as y = f(w · x), where x is the input vector, y is the output vector, w is the set of weights, and f is the activation function. Correspondingly, a convolutional layer can be described as y = f(s(x ⊛ w)), where ⊛ denotes the convolution operation between the input and the weights. Let's now contrast the fully connected layer with the convolutional layer. The figures below illustrate a fully connected layer and a convolutional layer schematically, as well as parameter sharing in the convolutional layer and the lack of it in the fully connected layer. The following points should be noted:

- For the same number of inputs and outputs, the fully connected layer has a lot more connections, and correspondingly more weights, than a convolutional layer.
- The interactions among inputs to produce outputs are fewer in convolutional layers than in fully connected layers. This is referred to as sparse interactions.
- Parameters/weights are shared across the convolutional layer, given that the kernel is much smaller than the input and the kernel slides across the input. Thus, there are far fewer unique parameters/weights in a convolutional layer.

A short sketch after the figures makes these counts concrete.
Figure: Dense interactions in fully connected layers
Figure: Sparse interactions in convolutional layers

Figure: Parameter sharing of weights
Pooling Operation

Let us now look at the pooling operation, which is almost always used in CNNs in conjunction with convolution. The idea behind the pooling operation is that the exact location of a feature is not a concern if it has in fact been discovered; pooling simply provides translation invariance. For instance, assume that the task is to learn to detect faces in photographs. Also assume that the faces in the photographs are tilted (as they generally are) and that we have a convolutional layer that detects the eyes. We would like to abstract the location of the eyes in the photograph from their orientation. The pooling operation achieves this and is an important constituent of CNNs. The figure below illustrates the pooling operation for a 2-dimensional input, and a short sketch after the following points illustrates the resulting translation invariance. The following points are to be noted:

- The function used is commonly the max operation (leading to max pooling), but other variants, such as average or L2 norm, can be used as an alternative.
- For a 2-dimensional input, the pooled region is a rectangular portion of the input.
- The output produced as a result of pooling is much smaller in dimensionality than the input.
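A minimal sketch (not from the book) of the translation invariance that max pooling provides: a feature detected at two slightly different positions produces the same pooled output.

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.zeros(1, 1, 4, 4)
x[0, 0, 0, 1] = 1.0          # a "feature" detected at one position
y = torch.zeros(1, 1, 4, 4)
y[0, 0, 1, 0] = 1.0          # the same feature shifted by one pixel

print(pool(x))               # both inputs produce the same 2x2 pooled output:
print(pool(y))               # the top-left cell is 1 in each case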
Figure: Pooling, or subsampling

Convolution-Detector-Pooling Building Block

Let us now look at the convolution-detector-pooling block, which can be thought of as a building block of the CNN, and see how all the operations we have covered so far work in conjunction (refer to the following figures). The following points are to be noted:

- The detector stage is simply a non-linear activation function.
- The convolution, detector, and pooling operations are applied in sequence to transform the input into the output. The output is referred to as a feature map.
- The output is typically passed on to other layers (convolutional or fully connected).
- Multiple convolution-detector-pooling blocks can be applied in parallel, consuming the same input and producing multiple outputs, or feature maps.

Figure: Convolution followed by detector stage and pooling
Figure: Multiple filters/kernels giving multiple feature maps

If the image input consists of three channels, a separate convolution operation is applied to each channel, and then the outputs are added up after the convolution (see the figure below; the short sketch that follows shows the corresponding weight shapes in PyTorch).
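The weight shapes in PyTorch make this explicit (the sizes here are illustrative): a convolutional layer over a three-channel image holds one kernel per input channel per filter, and the per-channel results are summed into a single feature map per filter.

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

print(conv.weight.shape)   # torch.Size([16, 3, 3, 3]): 16 filters, each with a 3x3 kernel per channel

rgb_batch = torch.randn(8, 3, 32, 32)   # 8 RGB images of size 32x32
feature_maps = conv(rgb_batch)
print(feature_maps.shape)  # torch.Size([8, 16, 32, 32]): one summed feature map per filter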
Figure: Convolution with multiple channels

Having covered all the constituent elements of CNNs, we can now look at an example CNN in its entirety (see the figure below). The CNN consists of two stages of convolution-detector-pooling blocks, with multiple filters/kernels at each stage producing multiple feature maps. Following these two stages, we have a fully connected layer that produces the output. In general, a CNN may have multiple stages of convolution-detector-pooling blocks (employing multiple filters), typically followed by a fully connected layer.
Figure: Complete CNN architecture

In addition to these basic constructs, we will explore a few additional topics that are relevant in the context of convolutional layers.

Stride

Stride can be defined as the amount by which a filter/kernel shifts. When discussing the sliding of the filter over the input image, we assumed that the movement was just one unit in the intended direction. We can, however, control the sliding movement with a number of our choice
(though it is common to use one). Based on the use case, we can choose a more appropriate number. Larger strides often help in reducing computation, generalizing the learning of features, etc.

Padding

We also saw that applying convolution reduces the size of the feature map compared to the size of the input image. Zero-padding is a generic way to control this shrinkage of the dimension after applying filters larger than 1x1, and to avoid information loss at the boundaries.

Batch Normalization

Batch normalization is a technique that helps train very deep neural networks by standardizing the inputs to a layer for each mini-batch. Standardizing the inputs helps to stabilize the learning process and thereby dramatically reduces the number of training epochs required to train deep networks. The batch normalization layer is added after the convolutional layer and is usually part of a standard block of convolutional operations; that is, a combination of a convolutional layer, a batch normalization layer, an activation, and a max pooling operation together, in that sequence, is defined as a convolutional unit. We typically add several such units in a CNN.

Filters

Filters are analogous to kernels. In recent implementations (including PyTorch) and in academia, the term filter is more common than kernel. In general, for convolution operations we use small filters (for example, 3x3); earlier implementations also favored larger filters.
Filter Depth

Filter depth usually refers to the depth corresponding to the number of color channels in the input image. For the filters in later layers, the depth corresponds to the number of filters in the previous layer. For a regular image with three color channels (R, G, and B), we use filters with a depth of 3.

Number of Filters

Filters act as feature extractors; hence, it is common to have several filters within each convolutional block of the network. A sample arrangement would be a convolutional block with a set of small filters of depth 3, followed by activation/batch normalization and pooling blocks, followed by another block of filters (now having a depth equal to the number of filters in the previous block), and so on.

Summarizing Key Learnings from CNNs

So far, we have covered the key constituent concepts behind a CNN: the convolution operation and the pooling operation, and how they are used in conjunction. Let's now take a step back to internalize the ideas behind CNNs using these building blocks. The first idea to consider is the capacity of CNNs. CNNs that replace at least one of the fully connected layers of a neural network with the convolution operation have less capacity than a fully connected network. That is, there exist datasets that a fully connected network will be able to model that a CNN will not. So, the first point to note is that CNNs achieve more by limiting capacity and hence making training efficient.
The second idea to consider is that learning the filters driving the convolution operation is, in a sense, representation learning. For instance, the learned filters might learn to detect edges, shapes, etc. The important point here is that we are not manually describing the features to be extracted from the input data; rather, we are describing an architecture that learns to engineer the features/representations.

The third idea to consider is the location invariance introduced by the pooling operation. The pooling operation separates the location of the feature from the fact that it is detected. A filter that detects straight lines might detect this feature in any portion of the image, but the pooling operation picks up the fact that the feature is detected (max pooling).

The fourth idea is that of hierarchy. A CNN can have multiple convolutional and pooling layers stacked up, followed by a fully connected network. This allows the CNN to build a hierarchy of concepts wherein more abstract concepts are based on simpler concepts.

The final idea relates to the presence of a fully connected layer at the end of a series of convolutional and pooling layers. The series of convolutional and pooling layers generates the features, and a standard neural network learns the final classification/regression function. It is important to distinguish this aspect of the CNN from traditional machine learning. In traditional machine learning, an expert would hand-engineer features and feed them to a neural network. In CNNs, these features/representations are learned from the data.
Implementing a Basic CNN Using PyTorch

Modern deep learning frameworks take care of the heavy lifting for the bulk of the operations and constructs we need to develop a CNN. Let's use a simple example to illustrate how PyTorch can be used to define, train, and evaluate a CNN. We will start with an MNIST example, which hosts a collection of handwritten digit images. Our task is to classify a given image as a digit between 0 and 9.

Note: Computer vision tasks are very compute-intensive and usually require high-end hardware for training and evaluating large, robust networks. The MNIST example we explore is a miniature dataset that should be fairly easy for readers to reproduce on commodity hardware. For the more intensive examples in this chapter, we recommend a free, web-based, GPU-enabled compute instance, such as Kaggle or Google Colab. Both provide a standard compute instance with sufficient RAM and GPU memory, subject to monthly quotas. For experimentation purposes these are great resources; for more intensive experiments, readers will need to explore deep learning instances in the cloud (AWS/GCP/Azure) or custom hardware.

To start, download the dataset available from kaggle.com/c/digit-recognizer/data. We will only use the training dataset, which has the labels provided; the training dataset will be further divided into training and validation sets. Now that we have the data ready, let's begin the implementation by importing the required packages, as shown in the following listing.
Listing: Importing the Required Packages

#PyTorch utility imports
import torch
from torch.utils.data import DataLoader, TensorDataset

#Neural net imports
import torch.nn as nn, torch.nn.functional as F, torch.optim as optim
from torch.autograd import Variable

#Import external libraries
import pandas as pd, numpy as np, matplotlib.pyplot as plt, os
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
%matplotlib inline

#Set device to GPU or CPU based on availability
if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

We will now load the dataset using pandas (as in the previous chapter) and separate the label and pixel values. Note that most image datasets are stored in simple image formats (.jpeg or .png) in a simple folder structure that is suitable for PyTorch. For the simplicity of this example, however, we use a dataset wherein the pixel values are stored as cross-sectional data in a CSV file. We will then split the dataset into training and test sets, and plot a few samples. In the next example, we will use a dataset stored in the traditional folder structure. In this example, we will use TensorDataset, a wrapper provided by PyTorch to combine labels and tensors into a unified dataset. The following listing demonstrates loading the dataset into memory.
Listing: Loading the Dataset into Memory

input_folder_path = "/input/data/MNIST/"

#The CSV contains a flat file of images,
#i.e., each 28x28 image is flattened into a row of 784 columns
#(each column represents a pixel value)
#For a CNN, we will need to reshape this to our desired shape
train_df = pd.read_csv(input_folder_path + "train.csv")

#First column is the target/label
train_labels = train_df['label'].values

#Pixel values start from the 2nd column
train_images = (train_df.iloc[:, 1:].values).astype('float32')

#Training and validation split
train_images, val_images, train_labels, val_labels = train_test_split(
    train_images, train_labels, random_state=2020, test_size=0.2)  # values assumed

#Here we reshape the flat row into [#images, #channels, #width, #height]
#Given this is a simple grayscale image, we will have just 1 channel
train_images = train_images.reshape(train_images.shape[0], 1, 28, 28)
val_images = val_images.reshape(val_images.shape[0], 1, 28, 28)

#Also, let's plot a few samples
for i in range(0, 6):
    plt.subplot(160 + (i + 1))
    plt.imshow(train_images[i].reshape(28, 28), cmap=plt.get_cmap('gray'))
    plt.title(train_labels[i])
Next, we will normalize the pixel values and convert the dataset into PyTorch tensors for training, as shown in the following listing.

Listing: Normalizing the Data and Preparing the Training/Validation Datasets

#Convert train images from pandas/numpy to tensor and normalize the values
train_images_tensor = torch.tensor(train_images) / 255.0
train_images_tensor = train_images_tensor.view(-1, 1, 28, 28)
train_labels_tensor = torch.tensor(train_labels)

#Create train TensorDataset
train_tensor = TensorDataset(train_images_tensor, train_labels_tensor)

#Convert validation images from pandas/numpy to tensor and normalize the values
val_images_tensor = torch.tensor(val_images) / 255.0
val_images_tensor = val_images_tensor.view(-1, 1, 28, 28)
val_labels_tensor = torch.tensor(val_labels)

#Create validation TensorDataset
val_tensor = TensorDataset(val_images_tensor, val_labels_tensor)
print("Train Labels Shape:", train_labels_tensor.shape)
print("Train Images Shape:", train_images_tensor.shape)
print("Validation Labels Shape:", val_labels_tensor.shape)
print("Validation Images Shape:", val_images_tensor.shape)

#Load train and validation TensorDatasets into the data generator for training
train_loader = DataLoader(train_tensor, batch_size=64,
                          num_workers=2, shuffle=True)  # batch_size and num_workers assumed
val_loader = DataLoader(val_tensor, batch_size=64, num_workers=2, shuffle=True)

Output[]:
Train Labels Shape: torch.Size([<n_train>])
Train Images Shape: torch.Size([<n_train>, 1, 28, 28])
Validation Labels Shape: torch.Size([<n_val>])
Validation Images Shape: torch.Size([<n_val>, 1, 28, 28])

With the training and validation datasets ready, let's define the next important pieces for the network: the CNN itself and the functions for training, evaluating, and making predictions. Most of these constructs are borrowed from our previous examples; we will tackle a few new code constructs here. In our CNN, we need to define a convolutional unit, as discussed previously. Each unit combines a convolutional layer followed by batch normalization (optional), activation, and max-pooling layers. An important aspect to consider is the size of the resultant image after each unit of convolution. In this example, our original image is of size 28x28. When we pass it through the first unit of convolution, the image size shrinks based on our defined kernel size. Given that we have added a single unit of padding to the input using padding=1, the original size remains the same after convolution. However, with the max pooling operation, the size is reduced
by half (as we want it to be). Therefore, the resultant image, which was originally 28x28, is transformed into a tensor of size 16x14x14 (where 16 is the number of filters we defined). With each additional convolutional unit, we see the spatial size shrink by half (as a result of the max pooling operation). Thus, after the two consecutive convolutional units in this network, the final spatial size is 7x7 (28 -> 14 -> 7). The fully connected layer fc1 therefore has 7 * 7 * 32 input nodes (where 32 is the number of kernels in the preceding convolutional unit). The forward function connects these convolutional units sequentially with the fully connected layers. The last layer has 10 output nodes, as we have a multi-class classification problem here, i.e., classifying a digit as 0 through 9. The softmax function on the last layer tailors the output into a neat set of probability scores for our multi-class use case. In the following listing, we define the structure of our CNN and the helper functions to evaluate the model's performance and generate predictions.

Listing: Defining the CNN and the Helper Functions

#Define conv-net
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()

        #Filter counts (16, 32) are assumed
        #First unit of convolution
        self.conv_unit_1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))

        #Second unit of convolution
        self.conv_unit_2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))

        #Fully connected layers
        self.fc1 = nn.Linear(7 * 7 * 32, 128)   # hidden size assumed
        self.fc2 = nn.Linear(128, 10)

    #Connect the units
    def forward(self, x):
        out = self.conv_unit_1(x)
        out = self.conv_unit_2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.fc2(out)
        out = F.log_softmax(out, dim=1)
        return out

#Define functions for model evaluation and generating predictions
def make_predictions(data_loader):
    #Explicitly set the model to eval mode
    model.eval()
    test_preds = torch.LongTensor()
    actual = torch.LongTensor()

    for data, target in data_loader:
        if torch.cuda.is_available():
            data = data.cuda()

        output = model(data)

        #Predict output/take the index of the output with max value
        preds = output.cpu().data.max(1, keepdim=True)[1]
        #Combine tensors from each batch
        test_preds = torch.cat((test_preds, preds), dim=0)
        actual = torch.cat((actual, target), dim=0)

    return actual, test_preds

#Evaluate model
def evaluate(data_loader):
    model.eval()
    loss = 0
    correct = 0

    for data, target in data_loader:
        if torch.cuda.is_available():
            data = data.cuda()
            target = target.cuda()

        output = model(data)
        loss += F.cross_entropy(output, target, size_average=False).data.item()
        predicted = output.data.max(1, keepdim=True)[1]
        correct += (target.reshape(-1, 1) == predicted.reshape(-1, 1)).float().sum()

    loss /= len(data_loader.dataset)
    print('\nAverage Val Loss: {:.4f}, Val Accuracy: {}/{} ({:.3f}%)\n'.format(
        loss, correct, len(data_loader.dataset),
        100.0 * correct / len(data_loader.dataset)))

With the important constructs in place, we can now create an instance of the model and define our criterion function and optimizer, as demonstrated in the following listing.
Listing: Creating a Model Instance and Defining the Loss Function and Optimizer

#Create model instance
model = ConvNet(10).to(device)

#Define loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # learning rate assumed

print(model)

Output[]:
(a printout of the ConvNet architecture)

The following listing demonstrates training the CNN model for a defined number of epochs, in this case five.

Listing: Training the CNN Model

num_epochs = 5

#Train the model
total_step = len(train_loader)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        #Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        #Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    #After each epoch, print train loss and validation loss/accuracy
    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))
    evaluate(val_loader)

Output[]:
Epoch [1/5], Loss: <...>
Average Val Loss: <...>, Val Accuracy: <...> (<...>%)
Epoch [2/5], Loss: <...>
Average Val Loss: <...>, Val Accuracy: <...> (<...>%)
Epoch [3/5], Loss: <...>
Average Val Loss: <...>, Val Accuracy: <...> (<...>%)
Epoch [4/5], Loss: <...>
Average Val Loss: <...>, Val Accuracy: <...> (<...>%)
Epoch [5/5], Loss: <...>
Average Val Loss: <...>, Val Accuracy: <...> (<...>%)

The model has achieved fairly positive results on the validation dataset within just five epochs, so we can conclude that the model has good performance.
Let's make predictions on the validation dataset and visualize the confusion matrix, as shown in the following listing.

Listing: Making Predictions

#Make predictions on the validation dataset
actual, predicted = make_predictions(val_loader)
actual, predicted = np.array(actual).reshape(-1, 1), np.array(predicted).reshape(-1, 1)

print("Validation Accuracy-", round(accuracy_score(actual, predicted), 4) * 100)
print("\nConfusion Matrix\n", confusion_matrix(actual, predicted))

Output[]:
(validation accuracy followed by the 10x10 confusion matrix)

Implementing a Larger CNN in PyTorch

So, that was our first sample CNN. Given the small dataset, we could comfortably train our network on a personal computer (commodity hardware) and still achieve favorable results. Let's explore a similar example, but with more complicated images. A good example in this category is the cats-and-dogs dataset. Here, our objective is to classify a given image as a cat or a dog.
This dataset was originally published by Microsoft Research and was later made available through Kaggle, at kaggle.com/c/dogs-vs-cats/data. The dataset is hosted as a simple folder with filenames representing the labels, so we have to reorganize the dataset before we can use it. PyTorch provides a neat abstraction for images with ImageFolder and DataLoader. PyTorch expects the data to be stored in the following folder structure:

root/label_1/*
root/label_2/*
root/label_n/*

For our use case, this would be the following:

/input/train/cats/*
/input/train/dogs/*
/input/test/cats/*
/input/test/dogs/*

To simplify the process, we have provided an organized structure, with images suitable for PyTorch experiments, at kaggle.com/jojomoolayil/catsvsdogs. We recommend using a Kaggle notebook with the GPU accelerator for this experiment. The settings on the right sidebar show the training data folder structure, along with the accelerator (see the figure below). We have turned on the Internet option and set the accelerator to GPU.
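If you instead start from the original flat Kaggle folder (file names such as cat.0.jpg and dog.0.jpg), a small sketch like the following can build the folder-per-label layout that ImageFolder expects. The paths here are hypothetical, and the reorganized dataset referenced above already has this structure, so this step is optional.

import glob, os, shutil

src = "/kaggle/input/dogs-vs-cats/train"      # hypothetical flat folder of cat.*.jpg / dog.*.jpg
dst = "/kaggle/working/catsvsdogs/train"      # hypothetical destination root

for label in ("cat", "dog"):
    os.makedirs(os.path.join(dst, label + "s"), exist_ok=True)
    for path in glob.glob(os.path.join(src, label + ".*.jpg")):
        # copy each file into its label folder, e.g. .../train/cats/cat.0.jpg
        shutil.copy(path, os.path.join(dst, label + "s", os.path.basename(path)))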
Figure: The environment settings in a Kaggle notebook

Let's start with a fresh import of the required packages, as shown in the following listing.

Listing: Importing the Packages for This Exercise

#Import required libraries
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
15,663 | convolutional neural networks import glob,os import matplotlib image as mpimg new_path "/kaggle/input/catsvsdogs/ensure that you have turned on the internet option and selected the accelerator as gpu we confirm that the gpu is available using the command illustrated in listing - listing - enabling the gpu (if availablein the kernel #check if gpu is available if torch cuda is_available()device torch device('cuda'elsedevice torch device('cpu'print("device:",deviceoutput[devicecuda note that using a gpu is recommended, not mandated; using a cpu, however, will be painstakingly slower for computer vision experiments we can now explore a random set of images of cats and dogs listing - randomly plots sample images from the training dataset listing - plotting sample images from the training dataset %matplotlib inline images [#collect cat images for img_path in glob glob(os path join(new_path,"train","cat""jpg"))[: ]images append(mpimg imread(img_path) |
15,664 | convolutional neural networks #collect dog images for img_path in glob glob(os path join(new_path,"train","dog""jpg"))[: ]images append(mpimg imread(img_path)#plot grid of cats and dogs plt figure(figsize=( , )columns for iimage in enumerate(images)plt subplot(len(imagescolumns columnsi plt imshow(image) figure - plots of random images from the training set. for computer vision experiments, we would always apply numerous transformations on the raw dataset. a core reason for this is that most images used in an experiment would be of different sizes. also, at times, we might need to add more training samples by augmenting the existing samples; some examples would include augmenting more training samples with random rotations, cropping images from the center, flipping across an axis, standardizing pixel values, etc. pytorch provides handy functionality to compose several such transformations and orchestrate them on training and validation samples. in listing - we compose |
15,665 | convolutional neural networks transformations object that will sequentially resize all images into crop them from center to convert them to tensorsand normalize their pixel values listing - transforming the data and creating the training and validation sets #compose sequence of transformations for image transformations transforms compose(transforms resize( )transforms centercrop( )transforms totensor()transforms normalize(mean=[ ]std=[ ]]load in each dataset and apply transformations using the torchvision datasets as datasets library train_set datasets imagefolder(os path join(new_path,"train"transform transformationsval_set datasets imagefolder(os path join(new_path,"test"transform transformationsput into dataloader using torch library train_loader torch utils data dataloader(train_set batch_size= shuffle=trueval_loader torch utils data dataloader(val_setbatch_size = shuffle=truenote that train_loader and val_loader are objects that take care of shuffling and creating mini-batches of images with labels for our training loop before creating mini-batchesthe transformations object ensures the augmentations are appropriately applied on all images nextlisting - defines our cnn |
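before we do that, it is worth pulling a single mini-batch from train_loader to confirm the shapes produced by the composed transformations; the following is a minimal sketch (not from the original text -- the exact sizes printed depend on the batch size and the resize/crop values used in the listing above):

#grab one mini-batch from the training loader and inspect it
images, labels = next(iter(train_loader))
print(images.shape)   #(batch_size, channels, height, width) after the transformations
print(labels.shape)   #one integer class index per image
print(labels[:10])    #labels come from the folder names via ImageFolder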
15,666 | convolutional neural networks listing - defining the cnn #define convolutional network class convnet(nn module)def __init__(selfnum_classes= )super(convnetself__init__(#first unit of convolution self conv_unit_ nn sequentialnn conv ( kernel_size= stride= padding= )nn relu()nn maxpool (kernel_size= stride= )# #second unit of convolution self conv_unit_ nn sequentialnn conv ( kernel_size= stride= padding= )nn relu()nn maxpool (kernel_size= stride= )# #third unit of convolution self conv_unit_ nn sequentialnn conv ( kernel_size= stride= padding= )nn relu()nn maxpool (kernel_size= stride= )# #fourth unit of convolution self conv_unit_ nn sequentialnn conv ( kernel_size= stride= padding= )nn relu()nn maxpool (kernel_size= stride= )# #fully connected layers self fc nn linear( * * self fc nn linear( self final nn sigmoid( |
15,667 | convolutional neural networks def forward(selfx)out self conv_unit_ (xout self conv_unit_ (outout self conv_unit_ (outout self conv_unit_ (out#reshape the output out out view(out size( ),- out self fc (outout self fc (outout self final(outreturn(outsimilar to the mnist example, the fully connected layer needs the number of input dimensions, which would be different here based on the convolutional units. because we applied four convolutional units, each ending in a maxpool layer, the spatial size of the image shrinks after every unit (original size, then progressively smaller after the first, second, third, and fourth units; the exact values depend on the resize/crop size chosen in the transformations). therefore, the fully connected layer takes (final height x final width x number of kernels in the preceding convolutional unit) input dimensions. listing - defines function for evaluating our new network listing - defining the evaluation function def evaluate(model,data_loader)loss [correct with torch no_grad()for imageslabels in data_loaderimages images to(devicelabels labels to(devicemodel eval( |
15,668 | convolutional neural networks output model(imagespredicted output correct +(labels reshape(- , =predicted reshape(- , )float(sum(#clear memory del([images,labels]if device ="cuda"torch cuda empty_cache(print('\nval accuracy{}/{({ }%)\nformatcorrectlen(data_loader dataset) correct len(data_loader dataset))with that being in placelet' define and create model instance and train our network for epochs listing - demonstrates defining the loss function and optimizercreating model instanceand training for defined number of epochs listing - defining the loss function and optimizercreating the model instanceand training for defined number of epochs num_epochs loss_function nn bceloss(#binary crosss entropy loss model convnet(model cuda(adam_optimizer torch optim adam(model parameters()lr train the model total_step len(train_loaderprint("total batches:",total_step |
15,669 | convolutional neural networks for epoch in range(num_epochs)model train(train_loss for (imageslabelsin enumerate(train_loader)images images to(devicelabels labels to(deviceforward pass outputs model(imagesloss loss_function(outputs float()labels float(view(- , )backward and optimize adam_optimizer zero_grad(loss backward(adam_optimizer step(train_loss +loss item()labels size( #after each epoch print train loss and validation loss accuracy print ('epoch [{}/{}]loss{ }format(epoch+ num_epochsloss item())#evaluate model after each training epoch evaluate(model,val_loaderoutput[total batches epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( % |
15,670 | convolutional neural networks epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %after training for these epochs, the performance is roughly the validation accuracy shown in the output above. the performance would definitely improve after several more epochs; however, the time required to train such a network is expensive. one question we might wonder is whether there is a faster and easier alternative to speed this up. as it turns out, transfer learning comes to our rescue. the amazing news about cnns is that once a layer is trained, it can essentially be reused for another task. the lower-level features--for example, curves, edges, and circles--and several higher-level features are always common or similar for most computer vision tasks. we might, however, need to retrain the last few layers to tailor the network specifically for our use case. still, that brings a huge relief when training a large network |
15,671 | convolutional neural networks todaywe have plenty of pretrained networks that were trained for several hours on large corpus of datasets that almost represent most common objects we come across large number of these networks are readily available under pytorch we can directly leverage them instead of training our own network from scratch for more information about the available list of pretrained modelsvisit for our use caselet' use vggnet listing - demonstrates downloading and leveraging vggnet for transfer learning listing - downloading and initializing the pretrained model #download the model (pretrainedfrom torchvision import models new_model models vgg (pretrained=truefreeze model weights for param in new_model parameters()param requires_grad false print(new_model classifieroutput[sequential( )linear(in_features= out_features= bias=true( )relu(inplace=true( )dropout( = inplace=false( )linear(in_features= out_features= bias=true( )relu(inplace=true( )dropout( = inplace=false( )linear(in_features= out_features= bias=true |
15,672 | convolutional neural networks the pretrained network has six layers the original network was used to classify , distinct objectshencethe last layer has , output connections howeverour use case is simple binary classification exercisethereforewe need to replace the final layer to suit our use case listing - replaces the last layer in the pretrained network with custom layer that outputs single unit with sigmoid activation listing - replacing the last layer with our custom layer #define our custom model last layer new_model classifier[ nn sequentialnn linear(new_model classifier[ in_features )nn relu()nn dropout( )nn linear( )nn sigmoid()find total parameters and trainable parameters total_params sum( numel(for in new_model parameters()print( '{total_params:,total parameters 'total_trainable_params sum numel(for in new_model parameters(if requires_gradprint( '{total_trainable_params:,training parameters 'output[ , , total parameters , , training parameters herewe have leveraged the existing layers of the vgg pretrained model and added newfully connected layer towards the end to tailor the network structure for our binary use case all the layersapart from |
15,673 | convolutional neural networks the ones we addedhave their weights frozen--that isthe model weights will not be updated during the training processexcept for the last fully connected layer let' now train the new model for our dataset for epochs all components remain similar to the previous example listing - demonstrates training the pretrained network for our use case listing - training the pretrained model for the defined use case #define epochsoptimizer and loss function num_epochs loss_function nn bceloss(#binary crosss entropy loss new_model cuda(adam_optimizer torch optim adam(new_model parameters()lr train the model total_step len(train_loaderprint("total batches:",total_stepfor epoch in range(num_epochs)new_model train(train_loss for (imageslabelsin enumerate(train_loader)images images to(devicelabels labels to(deviceforward pass outputs new_model(imagesloss loss_function(outputs float()labels float(view(- , ) |
15,674 | convolutional neural networks backward and optimize adam_optimizer zero_grad(loss backward(adam_optimizer step(train_loss +loss item()labels size( #after each epoch print train loss and validation loss accuracy print ('epoch [{}/{}]loss{ }format(epoch+ num_epochsloss item())#after each epoch evaluate model evaluate(new_model,val_loaderoutput[total batches epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( % |
15,675 | convolutional neural networks epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %epoch [ / ]loss val accuracy( %with just epochswe can see that our pretrained model gives ~ accuracy over the validation dataset compared to our original model (trained from scratch)that performance improvement is significant cnn thumb rules for computer vision taskswe can delineate few rules that can be good starting points for most experiments the starting point for any given computer vision task should be leveraging pretrained network training network from scratch is always possiblebut the huge compute effortwhen results are already availablewould be futile task in scenarios where the model performance achieved is not up to your benchmarksexperiment with several other pretrained networksnot only one pytorch offers several ready-to-use pretrained models when your image classification task includes very diverse set of imagesthe pretrained networks might not give you the best performance in such casesit is recommended to unfreeze few more top layers incrementally the idea is to experiment with what level of feature representation makes sense for your |
15,676 | convolutional neural networks use case. in the worst-case scenario, you might have to train the entire network from the ground up. in most cases, however, you are quite likely to be able to save compute efforts by reusing a few or more layers from the pretrained networks using dropout is always a good idea for most use cases, relus can blindly be used as the de-facto activation function for fairly acceptable performance ensure that each class has enough training samples; the more, the better the batch size should be as large as the gpu or cpu can handle; optimizing the batch size helps to accelerate the training process a gpu is always recommended; gpu performance is far above cpu performance for most common use cases. the cost of acquiring a gpu-based instance has come down significantly. all major cloud players provide ready-to-use deep learning images or virtual machines that can be provisioned on demand with suitable compute and gpu. the entire heavy-lifting task (installing the required dependencies, packages, and drivers, and configuring the deep learning python framework, workspace, etc.) is abstracted with a single click. the cost has also come down to provide an affordable means to train a few experiments. today, you can provision powerful machines with a gpu that fare well with most research projects for ~ usd/hour |
15,677 | convolutional neural networks many resources are available for free; google colab and kaggle provide excellent places to start experimenting with deep learning summary this chapter covered the basics of cnns. the key takeaways are the convolution operation, the pooling operation, how they are used in conjunction, and how features are not hand-engineered but learned. cnns are the most successful application of deep learning and embody the idea of learning features/representations rather than hand-engineering them. the exercises in this chapter explored cnns using both a fairly simple dataset and a moderately large dataset with training from scratch. we also leveraged pretrained networks and saw the performance boost achieved as a result. in the next chapter, we will explore recurrent neural networks, which are widely used in the field of natural language processing and speech recognition |
15,678 | recurrent neural networks the field of natural language processing (nlp) has witnessed phenomenal growth with the advent of deep learning. a lot of this movement can be credited to recurrent neural networks (rnns) and their variants. voice-based ai assistants, auto-completion of text in smartphone keyboards, and text-based reviews classified based on sentiments are all problems effectively solved by rnns. this chapter begins by exploring the foundational concepts involved with rnns. we then explore a few variations of the vanilla rnn model that are more suitable for modern computational tasks. finally, we will study the practical implementation of an rnn using pytorch on a real-world dataset borrowed from our favorite platform, kaggle. let's get started introduction to rnns recurrent neural networks (rnns) are essentially neural networks that employ recurrence, which is using information from a previous forward pass over the neural network. essentially, all rnns can be described as a recurrence relationship. rnns are suited for, and have been incredibly successful when applied to, problems wherein the input data on which the |
15,679 | recurrent neural networks predictions are to be made is in the form of a sequence (a series of entities where the order is important). examples of sequence data include timeseries, natural language processing, speech analysis, etc. figure - demonstrates how a regular rnn unfolds (across time) to form a recurrent neural network. in the following section, we will explore the basics of an rnn leveraging figure - figure - regular rnn unfolded (source deep learning www deeplearningbook org/contents/rnn html) let's start by describing the moving parts of an rnn. first, we introduce some notation. we will assume that the input consists of a sequence of entities x(1), x(2), ..., x(τ). corresponding to this input, we need to produce either a sequence y(1), y(2), ..., y(τ) or just one output for the entire input sequence (or a sequence with a different length); an rnn of a different architecture would provide a solution to a different use case. figure - demonstrates the types of rnn based on the input and output length |
15,680 | recurrent neural networks figure - rnn types based on input and output length when we have an rnn that does not leverage information from the previous state, we have a traditional neural net; with recurrence, however, we have several new possibilities. today's most common use cases in nlp revolve around many-to-one and many-to-many models. examples include named-entity recognition and machine translation (e.g., translating a document from french to english). this chapter explores a few simple examples, but discussing each variant in depth is beyond the scope of this book. readers are strongly recommended to explore named-entity recognition, machine translation, and (optionally) music generation independently. let's start with the basics. to distinguish between what the rnn produces (i.e., predictions) and what it is ideally expected to produce (i.e., actuals), we denote the predictions by ŷ(1), ŷ(2), ..., ŷ(τ) -- the output the rnn produces. similarly, we will denote the ground truth actual values that the rnn should ideally produce by y(1), y(2), ..., y(τ). figure - shows the outputs (predictions) generated by the rnn as ŷ(1), ŷ(2), etc. to compute disagreement with actuals, we would compare these generated outputs with the actual values represented as y(1), y(2), ..., y(τ) |
15,681 | recurrent neural networks rnns produce either an output for every entity in the input sequence (many-to-many) or a single output for the entire sequence (many-to-one), as shown in figure - . let's consider an rnn that produces one output for every entity in the input (essentially referring to the unrolled network shown in figure - ). figure - an unrolled rnn (many-to-many), representing part of figure - the rnn can be described using the following equations: h(t) = tanh(U x(t) + W h(t-1) + b); ŷ(t) = softmax(V h(t) + c), where U is the weight to the input to the network, V is the weight to the output from the activation function, and W is the weight matrix for the current hidden state. the following points about the rnn equations should be noted: the rnn computation involves computing the hidden state for an entity in the sequence; this is denoted by h(t). the computation of h(t) uses the corresponding input at entity x(t) and the previous hidden state h(t-1) |
15,682 | recurrent neural networks the output ŷ(t) is computed using the hidden state h(t). while the current hidden state is being computed, a set of weights is associated with the input and the previous hidden state; these are denoted by U and W, respectively. there is also a bias term, denoted by b. similarly, while computing the output, a set of weights is associated with the current hidden state; this is denoted by V. there is also a bias term, denoted by c. also, while computing the hidden state, the tanh activation function (introduced in an earlier chapter) is used. the softmax activation function is used in the computation of the output. the rnn, as described by the equations, can process an arbitrarily large input sequence. the parameters of the rnn--U, W, V, b, c, etc.--are shared across the computation of the hidden layer and output value (for each of the entities in the sequence). figure - illustrates the rnn; note the recurrence relationship with the self-loop at the hidden state |
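to make these equations concrete, the following minimal numpy sketch (not from the original text; the dimensions and random values are arbitrary assumptions) computes the hidden states and outputs of such an rnn for a short input sequence:

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

#arbitrary sizes: input dimension 3, hidden dimension 4, output dimension 2
rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))   #weights applied to the input x(t)
W = rng.normal(size=(4, 4))   #weights applied to the previous hidden state h(t-1)
V = rng.normal(size=(2, 4))   #weights applied to the hidden state to produce the output
b, c = np.zeros(4), np.zeros(2)

x_seq = [rng.normal(size=3) for _ in range(5)]   #x(1) ... x(5)
h = np.zeros(4)                                  #h(0), set to zero here

for x_t in x_seq:
    h = np.tanh(U @ x_t + W @ h + b)    #h(t) = tanh(U x(t) + W h(t-1) + b)
    y_hat = softmax(V @ h + c)          #y-hat(t) = softmax(V h(t) + c)
    print(y_hat)                        #one prediction per entity (many-to-many)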
15,683 | recurrent neural networks figure - rnn (recurrence using the previous hidden statefigure - also depicts loss function associated with each output associated with each input we will refer back to it when discuss how rnns are trained it is essential to internalize how an rnn is different from all the feedforward neural networks (including convolutional networkswe discussed earlier the key difference is the hidden statewhich represents summary of the entities seen in the past (for the same sequenceignoring for the time being how rnns are trainedit should be clear how trained rnn could be used for given sequence of inputsan rnn would produce an output for each entity in the input let' now consider variation in the rnn where instead of the recurrence using the hidden statewe have recurrence using the output produced in the previous state (figure - |
15,684 | recurrent neural networks figure - rnn (recurrence using the previous output) the equations describing such an rnn are as follows: h(t) = tanh(U x(t) + W ŷ(t-1) + b); ŷ(t) = softmax(V h(t) + c). the following points are to be noted: the rnn computation involves computing the hidden state for an entity in the sequence; this is denoted by h(t). the computation of h(t) uses the corresponding input at entity x(t) and the previous output ŷ(t-1). the output ŷ(t) is computed using the hidden state h(t). while computing the current hidden state, a set of weights is associated with the input and the previous output; these are denoted by U and W, respectively. there is also a bias term, denoted by b |
15,685 | recurrent neural networks weights are associated with the hidden state while computing the output; this is denoted by V. there is also a bias term, denoted by c. the tanh activation function is used in the computation of the hidden state. the softmax activation function is used in the computation of the output. let's now consider a variation of the rnn where only a single output is produced for the entire sequence (figure - ). such an rnn is described using the following equations: h(t) = tanh(U x(t) + W h(t-1) + b); ŷ = softmax(V h(τ) + c). figure - rnn (producing a single output for the entire input sequence) |
15,686 | recurrent neural networks the following points are to be noted: the rnn computation involves computing the hidden state for an entity in the sequence; this is denoted by h(t). the computation of h(t) uses the corresponding input at entity x(t) and the previous hidden state h(t-1). the computation of h(t) is done for each entity in the input sequence x(1), x(2), ..., x(τ). the output ŷ is computed using only the last hidden state h(τ). while computing the current hidden state, a set of weights is associated with the input and the previous hidden state; these are denoted by U and W, respectively. there is also a bias term, denoted by b. weights are associated with the hidden state while computing the output; this is denoted by V. there is also a bias term, denoted by c. the tanh activation function is used in the computation of the hidden state. the softmax activation function is used in the computation of the output. training rnns this section describes how rnns are trained. we first need to look at how the rnn looks when we unroll the recurrence relationship, which is at the heart of the rnn. unrolling the recurrence relationship corresponding to an rnn is simply writing out the equations by recursively substituting the value on which the recurrence relationship is defined |
15,687 | recurrent neural networks in the case of the rnn in figure - , this is h(t); that is, the value of h(t) is defined in terms of h(t-1), which in turn is defined in terms of h(t-2), and so on till h(0). we will assume that h(0) is either predefined by the user, set to zero, or learned as another parameter/weight (learned like W, V, or b). unrolling simply means writing out the equations that describe the rnn in terms of h(0). in order to do so, of course, we need to fix the length of the sequence, which is denoted by τ. in this section, we will explore unrolling the few different rnns we explored above. we will start with unrolling the rnn where the previous hidden state was used for recurrence (demonstrated in figure - ); later, we will also explore the same for the rnn having recurrence using the previous output, and finally unrolling the rnn with a single output. figure - illustrates the unrolled rnn corresponding to the rnn in figure - , assuming an input sequence of size four. similarly, figure - and figure - illustrate the unrolled rnns corresponding to the rnns shown in figure - and figure - , respectively. figure - unrolling the rnn corresponding to figure - |
15,688 | recurrent neural networks figure - unrolls the recurrent network shown in figure - --that isthe recurrence unit is added from the previous hidden state we can note this by referring to being passed to the hidden state for ( similarlythe hidden state is passed to the final step in this illustration the weight and bias are shared across the recurrent units figure - unrolling the rnn corresponding to figure - figure - unrolls the recurrent network shown in figure - --that isthe recurrence unit is added from the previous output state we can note this by referring to ( being passed to the hidden state for ( similarlythe output state ( is passed to the final step in this illustration the weight and bias are shared across the recurrent units |
15,689 | recurrent neural networks figure - unrolling the rnn corresponding to figure - (single outputthe unrolling process operates on the assumption that the length of the input sequence is known beforehand and based on that the recurrence is unrolled once the rnn is unrolledwe essentially have non-recurrent neural network the parameters to be learned--uwvbc etc (denoted in dark in figure - )--are shared across the computation of the hidden layer and output value we have seen such parameter sharing earlier in the context of convolutional neural networks given an input and output of given size (for exampletwhich is assumed to be in figures - through - )we can unroll an rnn and compute gradients for the parameters to be learned with respect to loss function (as described in earlier thustraining an rnn is simply first unrolling the rnn for given size of input and corresponding expected outputand then training the unrolled rnn by computing the gradients and using stochastic gradient descent |
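a minimal pytorch sketch of this procedure is shown below (not from the original text; the sizes, data, and loss are arbitrary assumptions). it unrolls the rnn over a fixed-length input, accumulates the loss over all time steps, and lets autograd compute the gradients for the shared parameters before a stochastic gradient descent update:

import torch
import torch.nn as nn

seq_len, batch, in_dim, hid_dim, out_dim = 4, 8, 3, 16, 2
rnn = nn.RNN(in_dim, hid_dim, batch_first=True)   #shared U, W, b across all time steps
readout = nn.Linear(hid_dim, out_dim)             #shared V, c across all time steps
optimizer = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()                 #applies the softmax internally

x = torch.randn(batch, seq_len, in_dim)           #a batch of input sequences x(1)..x(4)
y = torch.randint(0, out_dim, (batch, seq_len))   #a target for every time step

h0 = torch.zeros(1, batch, hid_dim)               #h(0)
hidden_states, _ = rnn(x, h0)                     #forward pass over the unrolled sequence
logits = readout(hidden_states)                   #an output for every entity in the sequence

loss = criterion(logits.reshape(-1, out_dim), y.reshape(-1))   #loss over all time steps
optimizer.zero_grad()
loss.backward()      #gradients flow through the unrolled computation
optimizer.step()     #stochastic gradient descent update of the shared parameters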
15,690 | recurrent neural networks as previously mentionedrnns can deal with arbitrarily long inputscorrespondinglythey need to be trained on arbitrarily long inputs figures - through - illustrate how an rnn is unrolled for different sizes of inputs note that once the rnn is unrolledthe process of training the rnn is identical to training regular neural networkas covered in earlier in the figure the rnn described in figure - is unrolled for input sizes of and figure - unrolling the rnn corresponding to figure - (step and step figure - demonstrates step and step -- unrolling for input sequences ( and ( )sequentially in step given that we have no previous hidden statewe pass ( to the current hidden state in figure - we limit the time sequences to unroll thereforethe network is unrolled to steps figure - and figure - demonstrate the incremental unrolling steps sequentially |
15,691 | recurrent neural networks figure - unrolling the rnn corresponding to figure - (step herewe have the third input sequence connected to the unrolled network the weights uwand are shared across the network in the nextand finalstepwe can see that the unrolled network is identical to the network shown in figure - ( unrolled for four input sequences |
15,692 | recurrent neural networks figure - unrolling the rnn corresponding to figure - (step identical to figure - given that the dataset to be trained on consists of sequences of varying sizesthe input sequences are grouped so that the sequences of the same size fall into one group thenfor groupwe can unroll the rnn for the sequence length and train it training for different group will require the rnn to be unrolled for different sequence length thusit is possible to train the rnn on inputs of varying sizes by unrolling and training it with the unrolling done based on the sequence length it must be noted that training the unrolled rnn illustrated in figure - is essentially sequential processas the hidden states are dependent on each other in the case of rnns where in the recurrence is over the output instead of the hidden state (figure - )it is possible to use technique |
15,693 | recurrent neural networks called teacher forcing, as illustrated in figure - . the key idea here is to use the actual y(t-1) instead of the prediction ŷ(t-1) in the computation of h(t) while training; while making predictions (when the model is deployed for usage), however, ŷ(t-1) is used. bidirectional rnns let's now look at another variation on rnns: the bidirectional rnn. the key idea behind a bidirectional rnn is to use the entities that lie further in the sequence to make a prediction for the current entity. for all the rnns we have considered so far, we have been using the previous entities (captured by the hidden state) and the current entity in the sequence to make the prediction; however, we have not been using information concerning the entities that lie further in the sequence to make predictions. a bidirectional rnn leverages this information and can give improved predictive accuracy in many cases (figure - ). consider the following simple example, borrowed from andrew ng's coursera lecture: sentence 1: he said, "teddy bears are beautiful toys." sentence 2: he said, "teddy roosevelt, the president of the united states..." in these sentences, considering a classic case of nlp (predicting the next word), there is no means to correctly predict the word after "teddy" (assuming a unidirectional forward rnn); the context that comes from the right side essentially sheds light on an accurate prediction for the next word. consider a sentiment analysis task where a model is trying to classify sentences as positive or negative. with the left and right context built into the network, bidirectional models can effectively "look forward" in the sentence to see if "future" tokens may influence the current decision. in |
15,694 | recurrent neural networks the case of sentiment classification (many-to-one rnn), there are sarcastic comments where the words following a positive word negate its presence--for example, "i loved the movie, biggest joke ever!" here, the context on the right side nullifies the presence of the word "loved". a bidirectional rnn can be described using the following equations: hf(t) = tanh(Uf x(t) + Wf hf(t-1) + bf); hb(t) = tanh(Ub x(t) + Wb hb(t+1) + bb); ŷ(t) = softmax(Vf hf(t) + Vb hb(t) + c). the rnn computation involves computing the forward hidden state and backward hidden state for an entity in the sequence; these are denoted by hf(t) and hb(t), respectively. the computation of hf(t) uses the corresponding input at entity x(t) and the previous hidden state hf(t-1). the computation of hb(t) uses the corresponding input at entity x(t) and the previous hidden state hb(t+1) (previous in the backward direction). the output ŷ(t) is computed using the hidden states hf(t) and hb(t). while computing the current hidden state, a set of weights is associated with the input and the previous hidden state; these are denoted by Uf, Wf, Ub, and Wb, respectively. there are also bias terms, denoted by bf and bb. similarly, while computing the output, a set of weights is associated with the hidden states; these are denoted by Vf and Vb. there is also a bias term, denoted by c. the tanh activation function is used in the computation of the hidden state. the softmax activation function is used in the computation of the output. the rnn, as described by the equations, can process an arbitrarily large input sequence. the parameters of the rnn--Uf, Ub, Wf, Wb, Vb, Vf, bf, bb, c, etc.--are shared across the computation of the hidden layer and output value (for each of the entities in the sequence) |
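in pytorch, the same idea can be sketched by setting bidirectional=True, which maintains the forward and backward hidden states and concatenates them at every step (a minimal sketch, not from the original text; the sizes are arbitrary assumptions):

import torch
import torch.nn as nn

in_dim, hid_dim, out_dim = 3, 16, 2
birnn = nn.RNN(in_dim, hid_dim, batch_first=True, bidirectional=True)
#the forward and backward states are concatenated, so the readout sees 2 * hid_dim features
readout = nn.Linear(2 * hid_dim, out_dim)

x = torch.randn(8, 5, in_dim)                     #a batch of 8 sequences, 5 entities each
states, _ = birnn(x)                              #shape: (8, 5, 2 * hid_dim)
y_hat = torch.softmax(readout(states), dim=-1)    #an output for every entity, using hf(t) and hb(t)
print(y_hat.shape)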
15,695 | recurrent neural networks figure - bidirectional rnn vanishing and exploding gradients training rnns can be challenging due to vanishing and exploding gradients (figure - vanishing gradients means that when the gradients are computed on the unrolled rnnsthe value of the gradients can drop to very small number (close to zerosimilarlythe gradients can increase to very high valuewhich is referred to as the exploding gradient problem in both casestraining the rnn is challenge vanishing or exploding gradients are usually result of the inappropriate or undesired values set for the network hyperparameters and parameters thereforethe network takes an unusually long time to move out of the slope with each incremental weight updates and learn the best weights for the use case |
15,696 | recurrent neural networks let's look again at the equations that describe the rnn: h(t) = tanh(U x(t) + W h(t-1) + b); ŷ(t) = softmax(V h(t) + c). by applying the chain rule, we can derive the expression for ∂h(t)/∂h(k); this is illustrated in figure - : ∂h(t)/∂h(k) = ∏ (from j = k+1 to t) ∂h(j)/∂h(j-1). let's now focus on the part of the expression which involves repeated matrix multiplication of W, which contributes to both the vanishing and exploding gradient problems. intuitively, this is similar to multiplying a real-valued number over and over again, which might lead to the product shrinking to zero or exploding to infinity. gradient clipping a simple technique to deal with exploding gradients is to rescale the norm of the gradients whenever it goes over a user-defined threshold c. specifically, if the gradient is denoted by g, then whenever ||g|| > c we set g ← (c / ||g||) · g. this technique is both simple and computationally efficient, but it does introduce an extra hyperparameter. without gradient clipping, the parameters take a huge descent step and flow out of the desired region; with clipping, the descent step size is restricted and the parameters stay in the desired region. gradient clipping will "clip" the gradients, or cap them to a threshold value, to prevent |
15,697 | recurrent neural networks them from getting too large in figure - the gradient is clipped from overshooting and the cost function follows the dotted values rather than its original trajectory outside the desired region figure - gradient clipping long short-term memory let' take look at another variation on rnnsthe long short-term memory (lstmnetwork (see figure - the vanilla rnn had several trade-offs that led to poor performing networks in learning long dependencies between sequences in generalthe rnn is more prone towards noise and easily overfits while training they are also computationally very expensive to train lstms fit in perfectly to solve these problems by using more intuitive approach they are generally more robust to noise and capture shortas well long-term dependencies more accuratelywhile being easy to tune and trainas compared to rnns lstms also have faster computation speeds than rnns lstms have the gates that equip the handy functions that help the network remember long-term dependencies as well forget |
15,698 | recurrent neural networks dependencies that do not matter. in rnns, the previous hidden state is the only previous memory the network remembers. with lstms, in addition to the previous hidden state, the cell state is also remembered by the network. the core concepts within lstm networks are the cell state and the gates (input, output, and forget gates). these gates and the cell state include several operations, such as sigmoid and tanh activation, pointwise multiplication and addition, and vector concatenation. these operations help the cell state and gates to train the network to forget or propagate important information through the network. cell states connect information throughout the network and thus help in passing long dependencies between sequences as and when needed. an lstm can be described with the following set of equations. note that the symbol ⊙ denotes pointwise multiplication of two vectors--that is, each element of a ⊙ b is the product of the corresponding elements of a and b. the functions σ, g, and h are non-linear activation functions; the W and R terms are weight matrices, and the b terms are bias terms. z(t) = g(Wz x(t) + Rz y(t-1) + bz); i(t) = σ(Wi x(t) + Ri y(t-1) + pi ⊙ c(t-1) + bi); f(t) = σ(Wf x(t) + Rf y(t-1) + pf ⊙ c(t-1) + bf); c(t) = i(t) ⊙ z(t) + f(t) ⊙ c(t-1); o(t) = σ(Wo x(t) + Ro y(t-1) + po ⊙ c(t) + bo); y(t) = o(t) ⊙ h(c(t)) |
15,699 | recurrent neural networks the following points are to be noted: the most important element of the lstm is the cell state, denoted by c(t) = i(t) ⊙ z(t) + f(t) ⊙ c(t-1). the cell state is updated based on the block input z(t) and the previous cell state c(t-1). the input gate i(t) determines which fraction of the block input makes it into the cell state (hence, called a gate). the forget gate f(t) determines how much of the previous cell state to retain. the output y(t) is determined based on the cell state c(t) and the output gate o(t), which determines how much the cell state affects the output. the z(t) term, referred to as the block input, produces a value based on the current input and the previous output. the i(t) term, referred to as the input gate, determines how much of the input to retain in the cell state c(t). all the p terms are peephole connections that allow a fraction of the cell state to factor into the computation of the term in question. the computation of the cell state c(t) does not encounter the issue of the vanishing gradient (this is referred to as the constant error carousel). however, lstms are affected by exploding gradients, and gradient clipping is used while training |
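a minimal numpy sketch of a single lstm step following these equations is shown below (not from the original text; the dimensions and random parameters are assumptions, g and h are taken to be tanh, σ is the sigmoid, and the peephole terms are included to match the formulation above):

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

d_in, d_cell = 3, 4
rng = np.random.default_rng(0)
#one set of input weights W, recurrent weights R, biases b (and peepholes p) per gate/block
W = {k: rng.normal(size=(d_cell, d_in)) for k in "zifo"}
R = {k: rng.normal(size=(d_cell, d_cell)) for k in "zifo"}
b = {k: np.zeros(d_cell) for k in "zifo"}
p = {k: rng.normal(size=d_cell) for k in "ifo"}

def lstm_step(x_t, y_prev, c_prev):
    z = np.tanh(W["z"] @ x_t + R["z"] @ y_prev + b["z"])                    #block input
    i = sigmoid(W["i"] @ x_t + R["i"] @ y_prev + p["i"] * c_prev + b["i"])  #input gate
    f = sigmoid(W["f"] @ x_t + R["f"] @ y_prev + p["f"] * c_prev + b["f"])  #forget gate
    c = i * z + f * c_prev                                                  #new cell state
    o = sigmoid(W["o"] @ x_t + R["o"] @ y_prev + p["o"] * c + b["o"])       #output gate
    y = o * np.tanh(c)                                                      #block output
    return y, c

y, c = np.zeros(d_cell), np.zeros(d_cell)
for x_t in [rng.normal(size=d_in) for _ in range(5)]:
    y, c = lstm_step(x_t, y, c)
print(y)

the built-in torch.nn.LSTM implements the same gating idea, but without the peephole connections shown here.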