In Keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization to the movie-review classification network.

Listing: Adding L2 weight regularization to the model

from keras import regularizers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

l2(0.001) means every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value ** 2 to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.

The figure below shows the impact of the L2 regularization penalty. As you can see, the model with L2 regularization (dots) has become much more resistant to overfitting than the reference model (crosses), even though both models have the same number of parameters.

Figure: Effect of L2 weight regularization on validation loss

As an alternative to L2 regularization, you can use one of the following Keras weight regularizers.

Listing: Different weight regularizers available in Keras

from keras import regularizers

regularizers.l1(0.001)                    # L1 regularization
regularizers.l1_l2(l1=0.001, l2=0.001)    # Simultaneous L1 and L2 regularization
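To make the penalty concrete, here is a minimal NumPy sketch (not from the book's listings) of the quantity that l2(0.001) adds to the training loss for a single weight matrix; the names W and l2_penalty are illustrative placeholders, not part of the Keras API.

import numpy as np

# Hypothetical weight matrix of a Dense(16) layer fed 10,000 inputs
W = np.random.randn(10000, 16) * 0.05

# The L2 penalty added to the loss by regularizers.l2(0.001):
# 0.001 times the sum of the squared weight coefficients
l2_penalty = 0.001 * np.sum(np.square(W))
print(l2_penalty)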
Adding dropout

Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Geoff Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly dropping out (setting to zero) a number of output features of the layer during training. Let's say a given layer would normally return a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training. After applying dropout, this vector will have a few zero entries distributed at random: for example, [0, 0.5, 1.3, 0, 1.1]. The dropout rate is the fraction of the features that are zeroed out; it's usually set between 0.2 and 0.5. At test time, no units are dropped out; instead, the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.

Consider a NumPy matrix containing the output of a layer, layer_output, of shape (batch_size, features). At training time, we zero out at random a fraction of the values in the matrix:

layer_output *= np.random.randint(0, high=2, size=layer_output.shape)    # At training time, drops out 50% of the units in the output

At test time, we scale down the output by the dropout rate. Here, we scale by 0.5 (because we previously dropped half the units):

layer_output *= 0.5    # At test time

Note that this process can be implemented by doing both operations at training time and leaving the output unchanged at test time, which is often the way it's implemented in practice (see the figure below):

layer_output *= np.random.randint(0, high=2, size=layer_output.shape)    # At training time
layer_output /= 0.5    # Note that we're scaling up rather than scaling down in this case

Figure: Dropout applied to an activation matrix at training time, with rescaling happening during training. At test time, the activation matrix is unchanged.

This technique may seem strange and arbitrary. Why would this help reduce overfitting? Hinton says he was inspired by, among other things, a fraud-prevention mechanism used by banks. In his own words, "I went to my bank. The tellers kept changing and I asked one of them why. He said he didn't know but they got moved around a lot.
I figured it must be because it would require cooperation between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each example would prevent conspiracies and thus reduce overfitting." (See the Reddit thread "AMA: We are the Google Brain team. We'd love to answer your questions about machine learning.") The core idea is that introducing noise in the output values of a layer can break up happenstance patterns that aren't significant (what Hinton refers to as conspiracies), which the network will start memorizing if no noise is present.

In Keras, you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it:

model.add(layers.Dropout(0.5))

Let's add two Dropout layers in the IMDB network to see how well they do at reducing overfitting.

Listing: Adding dropout to the IMDB network

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

The figure below shows a plot of the results. Again, this is a clear improvement over the reference network.

Figure: Effect of dropout on validation loss

To recap, these are the most common ways to prevent overfitting in neural networks:
- Get more training data.
- Reduce the capacity of the network.
- Add weight regularization.
- Add dropout.
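These techniques can also be combined. The following is a minimal sketch (not one of the book's listings) of an IMDB-style model that applies both L2 weight regularization and dropout at once; the layer sizes simply mirror the examples above.

from keras import models, layers, regularizers

# Hypothetical variant combining two of the regularization techniques from the recap
model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))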
The universal workflow of machine learning

In this section, we'll present a universal blueprint that you can use to attack and solve any machine-learning problem. The blueprint ties together the concepts you've learned about in this chapter: problem definition, evaluation, feature engineering, and fighting overfitting.

Defining the problem and assembling a dataset

First, you must define the problem at hand:

- What will your input data be? What are you trying to predict? You can only learn to predict something if you have available training data: for example, you can only learn to classify the sentiment of movie reviews if you have both movie reviews and sentiment annotations available. As such, data availability is usually the limiting factor at this stage (unless you have the means to pay people to collect data for you).
- What type of problem are you facing? Is it binary classification? Multiclass classification? Scalar regression? Vector regression? Multiclass, multilabel classification? Something else, like clustering, generation, or reinforcement learning? Identifying the problem type will guide your choice of model architecture, loss function, and so on.

You can't move to the next stage until you know what your inputs and outputs are, and what data you'll use. Be aware of the hypotheses you make at this stage:

- You hypothesize that your outputs can be predicted given your inputs.
- You hypothesize that your available data is sufficiently informative to learn the relationship between inputs and outputs.

Until you have a working model, these are merely hypotheses, waiting to be validated or invalidated. Not all problems can be solved; just because you've assembled examples of inputs and targets doesn't mean the inputs contain enough information to predict the targets. For instance, if you're trying to predict the movements of a stock on the stock market given its recent price history, you're unlikely to succeed, because price history doesn't contain much predictive information.

One class of unsolvable problems you should be aware of is nonstationary problems. Suppose you're trying to build a recommendation engine for clothing, you're training it on one month of data (August), and you want to start generating recommendations in the winter. One big issue is that the kinds of clothes people buy change from season to season: clothes buying is a nonstationary phenomenon over the scale of a few months. What you're trying to model changes over time. In this case, the right move is to constantly retrain your model on data from the recent past, or gather data at a timescale where the problem is stationary. For a cyclical problem like clothes buying, a few years' worth of data will suffice to capture seasonal variation--but remember to make the time of the year an input of your model!
Keep in mind that machine learning can only be used to memorize patterns that are present in your training data. You can only recognize what you've seen before. Using machine learning trained on past data to predict the future is making the assumption that the future will behave like the past. That often isn't the case.

Choosing a measure of success

To control something, you need to be able to observe it. To achieve success, you must define what you mean by success: accuracy? Precision and recall? Customer-retention rate? Your metric for success will guide the choice of a loss function: what your model will optimize. It should directly align with your higher-level goals, such as the success of your business.

For balanced-classification problems, where every class is equally likely, accuracy and area under the receiver operating characteristic curve (ROC AUC) are common metrics. For class-imbalanced problems, you can use precision and recall. For ranking problems or multilabel classification, you can use mean average precision. And it isn't uncommon to have to define your own custom metric by which to measure success. To get a sense of the diversity of machine-learning success metrics and how they relate to different problem domains, it's helpful to browse the data science competitions on Kaggle.

Deciding on an evaluation protocol

Once you know what you're aiming for, you must establish how you'll measure your current progress. We've previously reviewed three common evaluation protocols:

- Maintaining a hold-out validation set--the way to go when you have plenty of data
- Doing K-fold cross-validation--the right choice when you have too few samples for hold-out validation to be reliable
- Doing iterated K-fold validation--for performing highly accurate model evaluation when little data is available

Just pick one of these. In most cases, the first will work well enough.

Preparing your data

Once you know what you're training on, what you're optimizing for, and how to evaluate your approach, you're almost ready to begin training models. But first, you should format your data in a way that can be fed into a machine-learning model--here, we'll assume a deep neural network:

- As you saw previously, your data should be formatted as tensors.
- The values taken by these tensors should usually be scaled to small values: for example, in the [-1, 1] range or [0, 1] range.
- If different features take values in different ranges (heterogeneous data), then the data should be normalized.
- You may want to do some feature engineering, especially for small-data problems.

Once your tensors of input data and target data are ready, you can begin to train models.

Developing a model that does better than a baseline

Your goal at this stage is to achieve statistical power: that is, to develop a small model that is capable of beating a dumb baseline. In the MNIST digit-classification example, anything that achieves an accuracy greater than 0.1 can be said to have statistical power; in the IMDB example, it's anything with an accuracy greater than 0.5.

Note that it's not always possible to achieve statistical power. If you can't beat a random baseline after trying multiple reasonable architectures, it may be that the answer to the question you're asking isn't present in the input data. Remember that you make two hypotheses:

- You hypothesize that your outputs can be predicted given your inputs.
- You hypothesize that the available data is sufficiently informative to learn the relationship between inputs and outputs.

It may well be that these hypotheses are false, in which case you must go back to the drawing board.

Assuming that things go well, you need to make three key choices to build your first working model:

- Last-layer activation--This establishes useful constraints on the network's output. For instance, the IMDB classification example used sigmoid in the last layer; the regression example didn't use any last-layer activation; and so on.
- Loss function--This should match the type of problem you're trying to solve. For instance, the IMDB example used binary_crossentropy, the regression example used mse, and so on.
- Optimization configuration--What optimizer will you use? What will its learning rate be? In most cases, it's safe to go with rmsprop and its default learning rate.

Regarding the choice of loss function, note that it isn't always possible to directly optimize for the metric that measures success on a problem. Sometimes there is no easy way to turn a metric into a loss function; loss functions, after all, need to be computable given only a mini-batch of data (ideally, a loss function should be computable for as little as a single data point) and must be differentiable (otherwise, you can't use backpropagation to train your network). For instance, the widely used classification metric ROC AUC can't be directly optimized. Hence, in classification tasks, it's common to optimize for a proxy metric of ROC AUC, such as crossentropy. In general, you can hope that the lower the crossentropy gets, the higher the ROC AUC will be.

The following table can help you choose a last-layer activation and loss function for a few common problem types.
Table: Choosing the right last-layer activation and loss function for your model

Problem type                               Last-layer activation    Loss function
Binary classification                      sigmoid                  binary_crossentropy
Multiclass, single-label classification    softmax                  categorical_crossentropy
Multiclass, multilabel classification      sigmoid                  binary_crossentropy
Regression to arbitrary values             none                     mse
Regression to values between 0 and 1       sigmoid                  mse or binary_crossentropy

Scaling up: developing a model that overfits

Once you've obtained a model that has statistical power, the question becomes, is your model sufficiently powerful? Does it have enough layers and parameters to properly model the problem at hand? For instance, a network with a single hidden layer with two units would have statistical power on MNIST but wouldn't be sufficient to solve the problem well. Remember that the universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting, between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.

To figure out how big a model you'll need, you must develop a model that overfits. This is fairly easy:

1. Add layers.
2. Make the layers bigger.
3. Train for more epochs.

Always monitor the training loss and validation loss, as well as the training and validation values for any metrics you care about. When you see that the model's performance on the validation data begins to degrade, you've achieved overfitting.

The next stage is to start regularizing and tuning the model, to get as close as possible to the ideal model that neither underfits nor overfits.

Regularizing your model and tuning your hyperparameters

This step will take the most time: you'll repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get. These are some things you should try:

- Add dropout.
- Try different architectures: add or remove layers.
- Add L1 and/or L2 regularization.
- Try different hyperparameters (such as the number of units per layer or the learning rate of the optimizer) to find the optimal configuration.
- Optionally, iterate on feature engineering: add new features, or remove features that don't seem to be informative.

Be mindful of the following: every time you use feedback from your validation process to tune your model, you leak information about the validation process into the model. Repeated just a few times, this is innocuous; but done systematically over many iterations, it will eventually cause your model to overfit to the validation process (even though no model is directly trained on any of the validation data). This makes the evaluation process less reliable.

Once you've developed a satisfactory model configuration, you can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set. If it turns out that performance on the test set is significantly worse than the performance measured on the validation data, this may mean either that your validation procedure wasn't reliable after all, or that you began overfitting to the validation data while tuning the parameters of the model. In this case, you may want to switch to a more reliable evaluation protocol (such as iterated K-fold validation).
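As an illustration of how the last-layer activation and loss pairs from the table above translate into Keras code, here is a minimal sketch for the multiclass, single-label case; the layer sizes and the names num_classes and input_shape=(100,) are placeholders for this sketch, not values from the book's examples.

from keras import models, layers

num_classes = 10    # placeholder: number of mutually exclusive classes

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(100,)))    # placeholder input size
# Multiclass, single-label classification: softmax last layer + categorical_crossentropy loss
model.add(layers.Dense(num_classes, activation='softmax'))

model.compile(optimizer='rmsprop',    # a safe default, per the workflow above
              loss='categorical_crossentropy',
              metrics=['accuracy'])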
Summary

- Define the problem at hand and the data on which you'll train. Collect this data, or annotate it with labels if need be.
- Choose how you'll measure success on your problem. Which metrics will you monitor on your validation data?
- Determine your evaluation protocol: hold-out validation? K-fold validation? Which portion of the data should you use for validation?
- Develop a first model that does better than a basic baseline: a model with statistical power.
- Develop a model that overfits.
- Regularize your model and tune its hyperparameters, based on performance on the validation data. A lot of machine-learning research tends to focus only on this step--but keep the big picture in mind.
Deep learning in practice

Chapters 5-9 will help you gain practical intuition about how to solve real-world problems using deep learning, and will familiarize you with essential deep-learning best practices. Most of the code examples in the book are concentrated in this second half.
Deep learning for computer vision

This chapter covers:
- Understanding convolutional neural networks (convnets)
- Using data augmentation to mitigate overfitting
- Using a pretrained convnet to do feature extraction
- Fine-tuning a pretrained convnet
- Visualizing what convnets learn and how they make classification decisions

This chapter introduces convolutional neural networks, also known as convnets, a type of deep-learning model almost universally used in computer vision applications. You'll learn to apply convnets to image-classification problems--in particular those involving small training datasets, which are the most common use case if you aren't a large tech company.
Introduction to convnets

We're about to dive into the theory of what convnets are and why they have been so successful at computer vision tasks. But first, let's take a practical look at a simple convnet example. It uses a convnet to classify MNIST digits, a task we performed in chapter 2 using a densely connected network (our test accuracy then was 97.8%). Even though the convnet will be basic, its accuracy will blow out of the water that of the densely connected model from chapter 2.

The following lines of code show you what a basic convnet looks like. It's a stack of Conv2D and MaxPooling2D layers. You'll see in a minute exactly what they do.

Listing: Instantiating a small convnet

from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

Importantly, a convnet takes as input tensors of shape (image_height, image_width, image_channels) (not including the batch dimension). In this case, we'll configure the convnet to process inputs of size (28, 28, 1), which is the format of MNIST images. We'll do this by passing the argument input_shape=(28, 28, 1) to the first layer.

Let's display the architecture of the convnet so far:

model.summary()
Layer (type)                     Output Shape          Param #
================================================================
conv2d_1 (Conv2D)                (None, 26, 26, 32)    320
max_pooling2d_1 (MaxPooling2D)   (None, 13, 13, 32)    0
conv2d_2 (Conv2D)                (None, 11, 11, 64)    18496
max_pooling2d_2 (MaxPooling2D)   (None, 5, 5, 64)      0
conv2d_3 (Conv2D)                (None, 3, 3, 64)      36928
================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0

You can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink
as you go deeper in the network. The number of channels is controlled by the first argument passed to the Conv2D layers (32 or 64).

The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you're already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the current output is a 3D tensor. First we have to flatten the 3D outputs to 1D, and then add a few Dense layers on top.

Listing: Adding a classifier on top of the convnet

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

We'll do 10-way classification, using a final layer with 10 outputs and a softmax activation. Here's what the network looks like now:

model.summary()
Layer (type)                     Output Shape          Param #
================================================================
conv2d_1 (Conv2D)                (None, 26, 26, 32)    320
max_pooling2d_1 (MaxPooling2D)   (None, 13, 13, 32)    0
conv2d_2 (Conv2D)                (None, 11, 11, 64)    18496
max_pooling2d_2 (MaxPooling2D)   (None, 5, 5, 64)      0
conv2d_3 (Conv2D)                (None, 3, 3, 64)      36928
flatten_1 (Flatten)              (None, 576)           0
dense_1 (Dense)                  (None, 64)            36928
dense_2 (Dense)                  (None, 10)            650
================================================================
Total params: 93,322
Trainable params: 93,322
Non-trainable params: 0

As you can see, the (3, 3, 64) outputs are flattened into vectors of shape (576,) before going through two Dense layers.

Now, let's train the convnet on the MNIST digits. We'll reuse a lot of the code from the MNIST example in chapter 2.

Listing: Training the convnet on MNIST images

from keras.datasets import mnist
from keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64)

Let's evaluate the model on the test data:

test_loss, test_acc = model.evaluate(test_images, test_labels)

Whereas the densely connected network from chapter 2 had a test accuracy of 97.8%, the basic convnet has a test accuracy of 99.3%: we decreased the error rate by 68% (relative). Not bad!

But why does this simple convnet work so well, compared to a densely connected model? To answer this, let's dive into what the Conv2D and MaxPooling2D layers do.

The convolution operation

The fundamental difference between a densely connected layer and a convolution layer is this: Dense layers learn global patterns in their input feature space (for example, for an MNIST digit, patterns involving all pixels), whereas convolution layers learn local patterns (see the figure below): in the case of images, patterns found in small 2D windows of the inputs. In the previous example, these windows were all 3 × 3.

Figure: Images can be broken into local patterns such as edges, textures, and so on.
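To make the global-versus-local distinction concrete, here is a minimal sketch (the layer sizes are illustrative, not from the book) comparing a Dense layer that connects to every pixel of a flattened 28 × 28 input with a Conv2D layer whose 3 × 3 filters are reused across all locations.

from keras import layers, models

# A Dense layer connected to every pixel of a flattened 28 x 28 image
dense_model = models.Sequential()
dense_model.add(layers.Flatten(input_shape=(28, 28, 1)))
dense_model.add(layers.Dense(32, activation='relu'))    # 784 * 32 + 32 = 25,120 parameters, each tied to one pixel position

# A Conv2D layer with 32 filters of size 3 x 3, shared across the whole image
conv_model = models.Sequential()
conv_model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
# 32 * (3 * 3 * 1 + 1) = 320 parameters, reused at every spatial location

dense_model.summary()
conv_model.summary()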
This key characteristic gives convnets two interesting properties:

- The patterns they learn are translation invariant. After learning a certain pattern in the lower-right corner of a picture, a convnet can recognize it anywhere: for example, in the upper-left corner. A densely connected network would have to learn the pattern anew if it appeared at a new location. This makes convnets data efficient when processing images (because the visual world is fundamentally translation invariant): they need fewer training samples to learn representations that have generalization power.
- They can learn spatial hierarchies of patterns (see the figure below). A first convolution layer will learn small local patterns such as edges, a second convolution layer will learn larger patterns made of the features of the first layers, and so on. This allows convnets to efficiently learn increasingly complex and abstract visual concepts (because the visual world is fundamentally spatially hierarchical).

Figure: The visual world forms a spatial hierarchy of visual modules: hyperlocal edges combine into local objects such as eyes or ears, which combine into a high-level concept such as "cat."

Convolutions operate over 3D tensors, called feature maps, with two spatial axes (height and width) as well as a depth axis (also called the channels axis). For an RGB image, the dimension of the depth axis is 3, because the image has three color channels: red, green, and blue. For a black-and-white picture, like the MNIST digits, the depth is 1 (levels of gray). The convolution operation extracts patches from its input feature map and applies the same transformation to all of these patches, producing an output feature map. This output feature map is still a 3D tensor: it has a width and a height. Its depth can be arbitrary, because the output depth is a parameter of the layer, and the
different channels in that depth axis no longer stand for specific colors as in RGB input; rather, they stand for filters. Filters encode specific aspects of the input data: at a high level, a single filter could encode the concept "presence of a face in the input," for instance.

In the MNIST example, the first convolution layer takes a feature map of size (28, 28, 1) and outputs a feature map of size (26, 26, 32): it computes 32 filters over its input. Each of these 32 output channels contains a 26 × 26 grid of values, which is a response map of the filter over the input, indicating the response of that filter pattern at different locations in the input (see the figure below). That is what the term feature map means: every dimension in the depth axis is a feature (or filter), and the 2D tensor output[:, :, n] is the 2D spatial map of the response of this filter over the input.

Figure: The concept of a response map: a 2D map of the presence of a pattern at different locations in an input (original input, single filter, response map quantifying the presence of the filter's pattern at different locations).

Convolutions are defined by two key parameters:

- Size of the patches extracted from the inputs--These are typically 3 × 3 or 5 × 5. In the example, they were 3 × 3, which is a common choice.
- Depth of the output feature map--The number of filters computed by the convolution. The example started with a depth of 32 and ended with a depth of 64.

In Keras Conv2D layers, these parameters are the first arguments passed to the layer: Conv2D(output_depth, (window_height, window_width)).

A convolution works by sliding these windows of size 3 × 3 or 5 × 5 over the 3D input feature map, stopping at every possible location, and extracting the 3D patch of surrounding features (shape (window_height, window_width, input_depth)). Each such 3D patch is then transformed (via a tensor product with the same learned weight matrix, called the convolution kernel) into a 1D vector of shape (output_depth,). All of these vectors are then spatially reassembled into a 3D output map of shape (height, width, output_depth). Every spatial location in the output feature map corresponds to the same location in the input feature map (for example, the lower-right corner of the output contains information about the lower-right corner of the input). For instance, with 3 × 3 windows, the vector output[i, j, :] comes from the 3D patch input[i-1:i+2, j-1:j+2, :]. The full process is detailed in the figure below.
Figure: How convolution works: the input feature map (width, height, input depth) is sliced into input patches, each patch is transformed via a dot product with the kernel into a vector of length output depth, and the transformed patches are reassembled into the output feature map.

Note that the output width and height may differ from the input width and height. They may differ for two reasons:

- Border effects, which can be countered by padding the input feature map
- The use of strides, which I'll define in a second

Let's take a deeper look at these notions.

Understanding border effects and padding

Consider a 5 × 5 feature map (25 tiles total). There are only 9 tiles around which you can center a 3 × 3 window, forming a 3 × 3 grid (see the figure below). Hence, the output feature map will be 3 × 3. It shrinks a little: by exactly two tiles alongside each dimension, in this case. You can see this border effect in action in the earlier example: you start with 28 × 28 inputs, which become 26 × 26 after the first convolution layer.
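To make the sliding-window arithmetic tangible, here is a minimal NumPy sketch (not from the book) of a single-channel, single-filter "valid" convolution: it extracts every 3 × 3 patch, takes a dot product with one kernel, and shows the 28 × 28 input shrinking to 26 × 26.

import numpy as np

def naive_conv2d_valid(image, kernel):
    # image: (height, width); kernel: (3, 3); one channel, one filter, stride 1, no padding
    h, w = image.shape
    kh, kw = kernel.shape
    output = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(output.shape[0]):
        for j in range(output.shape[1]):
            patch = image[i:i + kh, j:j + kw]        # extract the 3 x 3 patch at this location
            output[i, j] = np.sum(patch * kernel)    # dot product with the convolution kernel
    return output

image = np.random.random((28, 28))
kernel = np.random.random((3, 3))
print(naive_conv2d_valid(image, kernel).shape)    # (26, 26): the border effect in action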
Figure: Valid locations of 3 × 3 patches in a 5 × 5 input feature map.

If you want to get an output feature map with the same spatial dimensions as the input, you can use padding. Padding consists of adding an appropriate number of rows and columns on each side of the input feature map so as to make it possible to fit center convolution windows around every input tile. For a 3 × 3 window, you add one column on the right, one column on the left, one row at the top, and one row at the bottom. For a 5 × 5 window, you add two rows and two columns on each side (see the figure below).

Figure: Padding a 5 × 5 input in order to be able to extract 3 × 3 patches around every tile.

In Conv2D layers, padding is configurable via the padding argument, which takes two values: "valid", which means no padding (only valid window locations will be used); and "same", which means "pad in such a way as to have an output with the same width and height as the input." The padding argument defaults to "valid".
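Here is a minimal Keras sketch (layer sizes illustrative) contrasting the two padding values on a 28 × 28 input: "valid" shrinks the output to 26 × 26, while "same" preserves the 28 × 28 spatial dimensions.

from keras import layers, models

valid_model = models.Sequential()
valid_model.add(layers.Conv2D(32, (3, 3), padding='valid', activation='relu',
                              input_shape=(28, 28, 1)))
print(valid_model.output_shape)    # (None, 26, 26, 32): only valid window locations are used

same_model = models.Sequential()
same_model.add(layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                             input_shape=(28, 28, 1)))
print(same_model.output_shape)     # (None, 28, 28, 32): the input is padded to preserve width and height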
Understanding convolution strides

The other factor that can influence output size is the notion of strides. The description of convolution so far has assumed that the center tiles of the convolution windows are all contiguous. But the distance between two successive windows is a parameter of the convolution, called its stride, which defaults to 1. It's possible to have strided convolutions: convolutions with a stride higher than 1. In the figure below, you can see the patches extracted by a 3 × 3 convolution with stride 2 over a 5 × 5 input (without padding).

Figure: 3 × 3 convolution patches with 2 × 2 strides.

Using stride 2 means the width and height of the feature map are downsampled by a factor of 2 (in addition to any changes induced by border effects). Strided convolutions are rarely used in practice, although they can come in handy for some types of models; it's good to be familiar with the concept.

To downsample feature maps, instead of strides, we tend to use the max-pooling operation, which you saw in action in the first convnet example. Let's look at it in more depth.

The max-pooling operation

In the convnet example, you may have noticed that the size of the feature maps is halved after every MaxPooling2D layer. For instance, before the first MaxPooling2D layers, the feature map is 26 × 26, but the max-pooling operation halves it to 13 × 13. That's the role of max pooling: to aggressively downsample feature maps, much like strided convolutions.

Max pooling consists of extracting windows from the input feature maps and outputting the max value of each channel. It's conceptually similar to convolution, except that instead of transforming local patches via a learned linear transformation (the convolution kernel), they're transformed via a hardcoded max tensor operation. A big difference from convolution is that max pooling is usually done with 2 × 2 windows and
stride 2, in order to downsample the feature maps by a factor of 2. On the other hand, convolution is typically done with 3 × 3 windows and no stride (stride 1).

Why downsample feature maps this way? Why not remove the max-pooling layers and keep fairly large feature maps all the way up? Let's look at this option. The convolutional base of the model would then look like this:

model_no_max_pool = models.Sequential()
model_no_max_pool.add(layers.Conv2D(32, (3, 3), activation='relu',
                                    input_shape=(28, 28, 1)))
model_no_max_pool.add(layers.Conv2D(64, (3, 3), activation='relu'))
model_no_max_pool.add(layers.Conv2D(64, (3, 3), activation='relu'))

Here's a summary of the model:

model_no_max_pool.summary()
Layer (type)                     Output Shape          Param #
================================================================
conv2d_1 (Conv2D)                (None, 26, 26, 32)    320
conv2d_2 (Conv2D)                (None, 24, 24, 64)    18496
conv2d_3 (Conv2D)                (None, 22, 22, 64)    36928
================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0

What's wrong with this setup? Two things:

- It isn't conducive to learning a spatial hierarchy of features. The 3 × 3 windows in the third layer will only contain information coming from 7 × 7 windows in the initial input. The high-level patterns learned by the convnet will still be very small with regard to the initial input, which may not be enough to learn to classify digits (try recognizing a digit by only looking at it through windows that are 7 × 7 pixels!). We need the features from the last convolution layer to contain information about the totality of the input.
- The final feature map has 22 × 22 × 64 = 30,976 total coefficients per sample. This is huge. If you were to flatten it to stick a Dense layer of size 512 on top, that layer would have 15.8 million parameters. This is far too large for such a small model and would result in intense overfitting.

In short, the reason to use downsampling is to reduce the number of feature-map coefficients to process, as well as to induce spatial-filter hierarchies by making successive convolution layers look at increasingly large windows (in terms of the fraction of the original input they cover).

Note that max pooling isn't the only way you can achieve such downsampling. As you already know, you can also use strides in the prior convolution layer. And you can
use average pooling instead of max pooling, where each local input patch is transformed by taking the average value of each channel over the patch, rather than the max. But max pooling tends to work better than these alternative solutions. In a nutshell, the reason is that features tend to encode the spatial presence of some pattern or concept over the different tiles of the feature map (hence, the term feature map), and it's more informative to look at the maximal presence of different features than at their average presence. So the most reasonable subsampling strategy is to first produce dense maps of features (via unstrided convolutions) and then look at the maximal activation of the features over small patches, rather than looking at sparser windows of the inputs (via strided convolutions) or averaging input patches, which could cause you to miss or dilute feature-presence information.

At this point, you should understand the basics of convnets--feature maps, convolution, and max pooling--and you know how to build a small convnet to solve a toy problem such as MNIST digits classification. Now let's move on to more useful, practical applications.
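Before doing so, here is a minimal Keras sketch (layer sizes illustrative, not from the book's listings) recapping the three downsampling options just discussed purely in terms of the output shapes they produce on the same 26 × 26 feature map.

from keras import layers, models

def downsampling_demo(downsampling_layer):
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))    # outputs (26, 26, 32)
    model.add(downsampling_layer)
    return model.output_shape

print(downsampling_demo(layers.MaxPooling2D((2, 2))))             # (None, 13, 13, 32): keep the max of each 2 x 2 patch
print(downsampling_demo(layers.AveragePooling2D((2, 2))))         # (None, 13, 13, 32): average each 2 x 2 patch
print(downsampling_demo(layers.Conv2D(32, (3, 3), strides=2)))    # (None, 12, 12, 32): strided convolution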
Training a convnet from scratch on a small dataset

Having to train an image-classification model using very little data is a common situation, which you'll likely encounter in practice if you ever do computer vision in a professional context. "Few" samples can mean anywhere from a few hundred to a few tens of thousands of images. As a practical example, we'll focus on classifying images as dogs or cats, in a dataset containing 4,000 pictures of cats and dogs (2,000 cats, 2,000 dogs). We'll use 2,000 pictures for training, 1,000 for validation, and 1,000 for testing.

In this section, we'll review one basic strategy to tackle this problem: training a new model from scratch using what little data you have. You'll start by naively training a small convnet on the 2,000 training samples, without any regularization, to set a baseline for what can be achieved. This will get you to a classification accuracy of about 71%. At that point, the main issue will be overfitting. Then we'll introduce data augmentation, a powerful technique for mitigating overfitting in computer vision. By using data augmentation, you'll improve the network to reach an accuracy of 82%.

In the next section, we'll review two more essential techniques for applying deep learning to small datasets: feature extraction with a pretrained network (which will get you to an accuracy of 90% to 96%) and fine-tuning a pretrained network (this will get you to a final accuracy of 97%). Together, these three strategies--training a small model from scratch, doing feature extraction using a pretrained model, and fine-tuning a pretrained model--will constitute your future toolbox for tackling the problem of performing image classification with small datasets.

The relevance of deep learning for small-data problems

You'll sometimes hear that deep learning only works when lots of data is available. This is valid in part: one fundamental characteristic of deep learning is that it can find interesting features in the training data on its own, without any need for manual feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where the input samples are very high-dimensional, like images.

But what constitutes lots of samples is relative--relative to the size and depth of the network you're trying to train, for starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundred can potentially suffice if the model is small and well regularized and the task is simple. Because convnets learn local, translation-invariant features, they're highly data efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results despite a relative lack of data, without the need for any custom feature engineering. You'll see this in action in this section.

What's more, deep-learning models are by nature highly repurposable: you can take, say, an image-classification or speech-to-text model trained on a large-scale dataset and reuse it on a significantly different problem with only minor changes. Specifically,
in the case of computer vision, many pretrained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used to bootstrap powerful vision models out of very little data. That's what you'll do in the next section. Let's start by getting your hands on the data.

Downloading the data

The Dogs vs. Cats dataset that you'll use isn't packaged with Keras. It was made available by Kaggle as part of a computer-vision competition in late 2013, back when convnets weren't mainstream. You can download the original dataset from www.kaggle.com/c/dogs-vs-cats/data (you'll need to create a Kaggle account if you don't already have one--don't worry, the process is painless).

The pictures are medium-resolution color JPEGs. The figure below shows some examples.

Figure: Samples from the Dogs vs. Cats dataset. Sizes weren't modified: the samples are heterogeneous in size, appearance, and so on.

Unsurprisingly, the dogs-versus-cats Kaggle competition in 2013 was won by entrants who used convnets. The best entries achieved up to 95% accuracy. In this example, you'll get fairly close to this accuracy (in the next section), even though you'll train your models on less than 10% of the data that was available to the competitors.

This dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543 MB (compressed). After downloading and uncompressing it, you'll create a new dataset containing three subsets: a training set with 1,000 samples of each class, a validation set with 500 samples of each class, and a test set with 500 samples of each class.
Following is the code to do this.

Listing: Copying images to training, validation, and test directories

import os, shutil

original_dataset_dir = '/Users/fchollet/Downloads/kaggle_original_data'    # Path to the directory where the original dataset was uncompressed
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'                 # Directory where you'll store your smaller dataset
os.mkdir(base_dir)

# Directories for the training, validation, and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

train_cats_dir = os.path.join(train_dir, 'cats')              # Directory with training cat pictures
os.mkdir(train_cats_dir)
train_dogs_dir = os.path.join(train_dir, 'dogs')              # Directory with training dog pictures
os.mkdir(train_dogs_dir)
validation_cats_dir = os.path.join(validation_dir, 'cats')    # Directory with validation cat pictures
os.mkdir(validation_cats_dir)
validation_dogs_dir = os.path.join(validation_dir, 'dogs')    # Directory with validation dog pictures
os.mkdir(validation_dogs_dir)
test_cats_dir = os.path.join(test_dir, 'cats')                # Directory with test cat pictures
os.mkdir(test_cats_dir)
test_dogs_dir = os.path.join(test_dir, 'dogs')                # Directory with test dog pictures
os.mkdir(test_dogs_dir)

# Copies the first 1,000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_cats_dir, fname)
    shutil.copyfile(src, dst)

# Copies the next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_cats_dir, fname)
    shutil.copyfile(src, dst)

# Copies the next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_cats_dir, fname)
    shutil.copyfile(src, dst)
# Copies the first 1,000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_dogs_dir, fname)
    shutil.copyfile(src, dst)

# Copies the next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_dogs_dir, fname)
    shutil.copyfile(src, dst)

# Copies the next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_dogs_dir, fname)
    shutil.copyfile(src, dst)

As a sanity check, let's count how many pictures are in each training split (train/validation/test):

>>> print('total training cat images:', len(os.listdir(train_cats_dir)))
total training cat images: 1000
>>> print('total training dog images:', len(os.listdir(train_dogs_dir)))
total training dog images: 1000
>>> print('total validation cat images:', len(os.listdir(validation_cats_dir)))
total validation cat images: 500
>>> print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
total validation dog images: 500
>>> print('total test cat images:', len(os.listdir(test_cats_dir)))
total test cat images: 500
>>> print('total test dog images:', len(os.listdir(test_dogs_dir)))
total test dog images: 500

So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success.

Building your network

You built a small convnet for MNIST in the previous example, so you should be familiar with such convnets. You'll reuse the same general structure: the convnet will be a stack of alternated Conv2D (with relu activation) and MaxPooling2D layers.

But because you're dealing with bigger images and a more complex problem, you'll make your network larger, accordingly: it will have one more Conv2D + MaxPooling2D stage. This serves both to augment the capacity of the network and to further reduce the size of the feature maps so they aren't overly large when you reach the Flatten layer. Here, because you start from inputs of size 150 × 150 (a somewhat arbitrary choice), you end up with feature maps of size 7 × 7 just before the Flatten layer.
Note the depth of the feature maps progressively increases in the network (from 32 to 128), whereas the size of the feature maps decreases (from 148 × 148 to 7 × 7). This is a pattern you'll see in almost all convnets.

Because you're attacking a binary-classification problem, you'll end the network with a single unit (a Dense layer of size 1) and a sigmoid activation. This unit will encode the probability that the network is looking at one class or the other.

Listing: Instantiating a small convnet for dogs vs. cats classification

from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Let's look at how the dimensions of the feature maps change with every successive layer:

model.summary()
Layer (type)                     Output Shape           Param #
================================================================
conv2d_1 (Conv2D)                (None, 148, 148, 32)   896
max_pooling2d_1 (MaxPooling2D)   (None, 74, 74, 32)     0
conv2d_2 (Conv2D)                (None, 72, 72, 64)     18496
max_pooling2d_2 (MaxPooling2D)   (None, 36, 36, 64)     0
conv2d_3 (Conv2D)                (None, 34, 34, 128)    73856
max_pooling2d_3 (MaxPooling2D)   (None, 17, 17, 128)    0
conv2d_4 (Conv2D)                (None, 15, 15, 128)    147584
max_pooling2d_4 (MaxPooling2D)   (None, 7, 7, 128)      0
flatten_1 (Flatten)              (None, 6272)           0
dense_1 (Dense)                  (None, 512)            3211776
dense_2 (Dense)                  (None, 1)               513
================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0

For the compilation step, you'll go with the RMSprop optimizer, as usual. Because you ended the network with a single sigmoid unit, you'll use binary crossentropy as the loss (as a reminder, check out the table in chapter 4 for a cheat sheet on what loss function to use in various situations).

Listing: Configuring the model for training

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Data preprocessing

As you know by now, data should be formatted into appropriately preprocessed floating-point tensors before being fed into the network. Currently, the data sits on a drive as JPEG files, so the steps for getting it into the network are roughly as follows:

1. Read the picture files.
2. Decode the JPEG content to RGB grids of pixels.
3. Convert these into floating-point tensors.
4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).

It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically. Keras has a module with image-processing helper tools, located at keras.preprocessing.image. In particular, it contains the class ImageDataGenerator, which lets you quickly set up Python generators that can automatically turn image files on disk into batches of preprocessed tensors. This is what you'll use here.

Listing: Using ImageDataGenerator to read images from directories

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)    # Rescales all images by 1/255
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_dir,                    # Target directory
        target_size=(150, 150),       # Resizes all images to 150 x 150
        batch_size=20,
        class_mode='binary')          # Because you use binary_crossentropy loss, you need binary labels

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')

Understanding Python generators

A Python generator is an object that acts as an iterator: it's an object you can use with the for ... in operator. Generators are built using the yield operator.

Here is an example of a generator that yields integers:

def generator():
    i = 0
    while True:
        i += 1
        yield i

for item in generator():
    print(item)
    if item > 4:
        break

It prints this:

1
2
3
4
5

Let's look at the output of one of these generators: it yields batches of 150 × 150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder. For this reason, you need to break the iteration loop at some point:

>>> for data_batch, labels_batch in train_generator:
>>>     print('data batch shape:', data_batch.shape)
>>>     print('labels batch shape:', labels_batch.shape)
>>>     break
data batch shape: (20, 150, 150, 3)
labels batch shape: (20,)

Let's fit the model to the data using the generator. You do so using the fit_generator method, the equivalent of fit for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does. Because the data is being generated endlessly, the Keras model needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the steps_per_epoch argument: after having drawn steps_per_epoch batches from the generator--that is, after having run for
steps_per_epoch gradient descent steps--the fitting process will go to the next epoch. In this case, batches are 20 samples, so it will take 100 batches until you see your target of 2,000 samples.

When using fit_generator, you can pass a validation_data argument, much as with the fit method. It's important to note that this argument is allowed to be a data generator, but it could also be a tuple of Numpy arrays. If you pass a generator as validation_data, then this generator is expected to yield batches of validation data endlessly; thus you should also specify the validation_steps argument, which tells the process how many batches to draw from the validation generator for evaluation.

Listing: Fitting the model using a batch generator

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=30,
      validation_data=validation_generator,
      validation_steps=50)

It's good practice to always save your models after training.

Listing: Saving the model

model.save('cats_and_dogs_small_1.h5')

Let's plot the loss and accuracy of the model over the training and validation data during training (see the figures below).

Listing: Displaying curves of loss and accuracy during training

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
Figure: Training and validation accuracy

Figure: Training and validation loss

These plots are characteristic of overfitting. The training accuracy increases linearly over time, until it reaches nearly 100%, whereas the validation accuracy stalls at 70-72%. The validation loss reaches its minimum after only five epochs and then stalls, whereas the training loss keeps decreasing linearly until it reaches nearly 0.

Because you have relatively few training samples (2,000), overfitting will be your number-one concern. You already know about a number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We're now going to work with a new one, specific to computer vision and used almost universally when processing images with deep-learning models: data augmentation.

Using data augmentation

Overfitting is caused by having too few samples to learn from, rendering you unable to train a model that can generalize to new data. Given infinite data, your model
would be exposed to every possible aspect of the data distribution at hand: you would never overfit. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This helps expose the model to more aspects of the data and generalize better.

In Keras, this can be done by configuring a number of random transformations to be performed on the images read by the ImageDataGenerator instance. Let's get started with an example.

Listing: Setting up a data augmentation configuration via ImageDataGenerator

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over this code:

- rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures.
- width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
- shear_range is for randomly applying shearing transformations.
- zoom_range is for randomly zooming inside pictures.
- horizontal_flip is for randomly flipping half the images horizontally--relevant when there are no assumptions of horizontal asymmetry (for example, real-world pictures).
- fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.

Let's look at the augmented images (see the figure below).

Listing: Displaying some randomly augmented training images

from keras.preprocessing import image    # Module with image-preprocessing utilities

fnames = [os.path.join(train_cats_dir, fname) for
          fname in os.listdir(train_cats_dir)]
img_path = fnames[3]    # Chooses one image to augment

img = image.load_img(img_path, target_size=(150, 150))    # Reads the image and resizes it
x = image.img_to_array(img)       # Converts it to a Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape)     # Reshapes it to (1, 150, 150, 3)

# Generates batches of randomly transformed images.
# Loops indefinitely, so you need to break the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break

plt.show()

Figure: Generation of cat pictures via random data augmentation

If you train a new network using this data-augmentation configuration, the network will never see the same input twice. But the inputs it sees are still heavily intercorrelated, because they come from a small number of original images--you can't produce new information, you can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, you'll also add a Dropout layer to your model, right before the densely connected classifier.
Listing: Defining a new convnet that includes dropout

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Let's train the network using data augmentation and dropout.

Listing: Training the convnet using data-augmentation generators

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)

test_datagen = ImageDataGenerator(rescale=1./255)    # Note that the validation data shouldn't be augmented!

train_generator = train_datagen.flow_from_directory(
        train_dir,                   # Target directory
        target_size=(150, 150),      # Resizes all images to 150 x 150
        batch_size=32,
        class_mode='binary')         # Because you use binary_crossentropy loss, you need binary labels

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50)
Let's save the model--you'll use it later in this chapter.

Listing: Saving the model

model.save('cats_and_dogs_small_2.h5')

And let's plot the results again; see the figures below. Thanks to data augmentation and dropout, you're no longer overfitting: the training curves are closely tracking the validation curves. You now reach an accuracy of 82%, a 15% relative improvement over the non-regularized model.

Figure: Training and validation accuracy with data augmentation

Figure: Training and validation loss with data augmentation

By using regularization techniques even further, and by tuning the network's parameters (such as the number of filters per convolution layer, or the number of layers in the network), you may be able to get an even better accuracy, likely up to 86% or 87%. But it would prove difficult to go any higher just by training your own convnet from scratch, because you have so little data to work with. As a next step to improve your accuracy on this problem, you'll have to use a pretrained model, which is the focus of the next two sections.
Using a pretrained convnet

A common and highly effective approach to deep learning on small image datasets is to use a pretrained network. A pretrained network is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. If this original dataset is large enough and general enough, then the spatial hierarchy of features learned by the pretrained network can effectively act as a generic model of the visual world, and hence its features can prove useful for many different computer-vision problems, even though these new problems may involve completely different classes than those of the original task. For instance, you might train a network on ImageNet (where classes are mostly animals and everyday objects) and then repurpose this trained network for something as remote as identifying furniture items in images. Such portability of learned features across different problems is a key advantage of deep learning compared to many older, shallow-learning approaches, and it makes deep learning very effective for small-data problems.

In this case, let's consider a large convnet trained on the ImageNet dataset (1.4 million labeled images and 1,000 different classes). ImageNet contains many animal classes, including different species of cats and dogs, and you can thus expect it to perform well on the dogs-versus-cats classification problem.

You'll use the VGG16 architecture, developed by Karen Simonyan and Andrew Zisserman in 2014; it's a simple and widely used convnet architecture for ImageNet. (Karen Simonyan and Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv, 2014.) Although it's an older model, far from the current state of the art and somewhat heavier than many other recent models, I chose it because its architecture is similar to what you're already familiar with and is easy to understand without introducing any new concepts. This may be your first encounter with one of these cutesy model names--VGG, ResNet, Inception, Inception-ResNet, Xception, and so on; you'll get used to them, because they will come up frequently if you keep doing deep learning for computer vision.

There are two ways to use a pretrained network: feature extraction and fine-tuning. We'll cover both of them. Let's start with feature extraction.

Feature extraction

Feature extraction consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch.

As you saw previously, convnets used for image classification comprise two parts: they start with a series of pooling and convolution layers, and they end with a densely connected classifier. The first part is called the convolutional base of the model. In the case of convnets, feature extraction consists of taking the convolutional base of a
deep learning for computer vision previously trained networkrunning the new data through itand training new classifier on top of the output (see figure prediction prediction prediction trained classifier trained classifier new classifier (randomly initializedtrained convolutional base trained convolutional base trained convolutional base (frozeninput input input figure swapping classifiers while keeping the same convolut ional base why only reuse the convolutional basecould you reuse the densely connected classifier as wellin generaldoing so should be avoided the reason is that the representations learned by the convolutional base are likely to be more generic and therefore more reusablethe feature maps of convnet are presence maps of generic concepts over picturewhich is likely to be useful regardless of the computer-vision problem at hand but the representations learned by the classifier will necessarily be specific to the set of classes on which the model was trained--they will only contain information about the presence probability of this or that class in the entire picture additionallyrepresentations found in densely connected layers no longer contain any information about where objects are located in the input imagethese layers get rid of the notion of spacewhereas the object location is still described by convolutional feature maps for problems where object location mattersdensely connected features are largely useless note that the level of generality (and therefore reusabilityof the representations extracted by specific convolution layers depends on the depth of the layer in the model layers that come earlier in the model extract localhighly generic feature maps (such as visual edgescolorsand textures)whereas layers that are higher up extract more-abstract concepts (such as "cat earor "dog eye"so if your new dataset differs lot from the dataset on which the original model was trainedyou may be better off using only the first few layers of the model to do feature extractionrather than using the entire convolutional base licensed to
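For example, here is a minimal sketch of how such a truncated feature extractor could be built from a pretrained convnet, keeping only the layers up to an intermediate block. The cutoff layer block3_pool is just an arbitrary illustrative choice, and the Model class used here is covered in more detail later in the book:

from keras.applications import VGG16
from keras import models

conv_base = VGG16(weights='imagenet', include_top=False)
# 'block3_pool' is an arbitrary cutoff chosen for illustration: earlier layers
# produce more generic, more reusable feature maps.
truncated_base = models.Model(
    inputs=conv_base.input,
    outputs=conv_base.get_layer('block3_pool').output)
truncated_base.summary()

Running new images through truncated_base would then yield the earlier, more generic feature maps rather than the most specialized ones.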
in this casebecause the imagenet class set contains multiple dog and cat classesit' likely to be beneficial to reuse the information contained in the densely connected layers of the original model but we'll choose not toin order to cover the more general case where the class set of the new problem doesn' overlap the class set of the original model let' put this in practice by using the convolutional base of the vgg networktrained on imagenetto extract interesting features from cat and dog imagesand then train dogs-versus-cats classifier on top of these features the vgg modelamong otherscomes prepackaged with keras you can import it from the keras applications module here' the list of image-classification models (all pretrained on the imagenet datasetthat are available as part of keras applicationsxception inception resnet vgg vgg mobilenet let' instantiate the vgg model listing inst ant iat ing he vgg convolutional base from keras applications import vgg conv_base vgg (weights='imagenet'include_top=falseinput_shape=( )you pass three arguments to the constructorweights specifies the weight checkpoint from which to initialize the model include_top refers to including (or notthe densely connected classifier on top of the network by defaultthis densely connected classifier corresponds to the , classes from imagenet because you intend to use your own densely connected classifier (with only two classescat and dog)you don' need to include it input_shape is the shape of the image tensors that you'll feed to the network this argument is purely optionalif you don' pass itthe network will be able to process inputs of any size here' the detail of the architecture of the vgg convolutional base it' similar to the simple convnets you're already familiar withconv_base summary(layer (typeoutput shape param ===============================================================input_ (inputlayer(none licensed to
deep learning for computer vision block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none ===============================================================total params , , trainable params , , non-trainable params the final feature map has shape ( that' the feature on top of which you'll stick densely connected classifier at this pointthere are two ways you could proceedrunning the convolutional base over your datasetrecording its output to numpy array on diskand then using this data as input to standalonedensely connected classifier similar to those you saw in part of this book this solution is fast and cheap to runbecause it only requires running the convolutional base once for every input imageand the convolutional base is by far the most expensive part of the pipeline but for the same reasonthis technique won' allow you to use data augmentation licensed to
extending the model you have (conv_baseby adding dense layers on topand running the whole thing end to end on the input data this will allow you to use data augmentationbecause every input image goes through the convolutional base every time it' seen by the model but for the same reasonthis technique is far more expensive than the first we'll cover both techniques let' walk through the code required to set up the first onerecording the output of conv_base on your data and using these outputs as inputs to new model fast feature extraction without data augmentation you'll start by running instances of the previously introduced imagedatagenerator to extract images as numpy arrays as well as their labels you'll extract features from these images by calling the predict method of the conv_base model listing ext ract ing feat ures using he pretrained convolut ional base import os import numpy as np from keras preprocessing image import imagedatagenerator base_dir '/users/fchollet/downloads/cats_and_dogs_smalltrain_dir os path join(base_dir'train'validation_dir os path join(base_dir'validation'test_dir os path join(base_dir'test'datagen imagedatagenerator(rescale= / batch_size def extract_features(directorysample_count)features np zeros(shape=(sample_count )labels np zeros(shape=(sample_count)generator datagen flow_from_directorynote that because generators directoryyield data indefinitely in looptarget_size=( )you must break after every batch_size=batch_sizeimage has been seen once class_mode='binary' for inputs_batchlabels_batch in generatorfeatures_batch conv_base predict(inputs_batchfeatures[ batch_size ( batch_sizefeatures_batch labels[ batch_size ( batch_sizelabels_batch + if batch_size >sample_countbreak return featureslabels train_featurestrain_labels extract_features(train_dir validation_featuresvalidation_labels extract_features(validation_dir test_featurestest_labels extract_features(test_dir the extracted features are currently of shape (samples you'll feed them to densely connected classifierso first you must flatten them to (samples )licensed to
deep learning for computer vision train_features np reshape(train_features( )validation_features np reshape(validation_features( )test_features np reshape(test_features( )at this pointyou can define your densely connected classifier (note the use of dropout for regularizationand train it on the data and labels that you just recorded listing defining and raining the densely connect ed classifier from keras import models from keras import layers from keras import optimizers model models sequential(model add(layers dense( activation='relu'input_dim= )model add(layers dropout( )model add(layers dense( activation='sigmoid')model compile(optimizer=optimizers rmsprop(lr= - )loss='binary_crossentropy'metrics=['acc']history model fit(train_featurestrain_labelsepochs= batch_size= validation_data=(validation_featuresvalidation_labels)training is very fastbecause you only have to deal with two dense layers--an epoch takes less than one second even on cpu let' look at the loss and accuracy curves during training (see figures and listing plot ing he result import matplotlib pyplot as plt acc history history['acc'val_acc history history['val_acc'loss history history['loss'val_loss history history['val_loss'epochs range( len(acc plt plot(epochsacc'bo'label='training acc'plt plot(epochsval_acc' 'label='validation acc'plt title('training and validation accuracy'plt legend(plt figure(plt plot(epochsloss'bo'label='training loss'plt plot(epochsval_loss' 'label='validation loss'plt title('training and validation loss'plt legend(plt show(licensed to
using pretrained convnet figure training and validation accuracy for simple feature extraction figure training and validat ion loss for simple feat ure ext ract ion you reach validation accuracy of about %--much better than you achieved in the previous section with the small model trained from scratch but the plots also indicate that you're overfitting almost from the start--despite using dropout with fairly large rate that' because this technique doesn' use data augmentationwhich is essential for preventing overfitting with small image datasets feature extraction with data augmentation nowlet' review the second technique mentioned for doing feature extractionwhich is much slower and more expensivebut which allows you to use data augmentation during trainingextending the conv_base model and running it end to end on the inputs this technique is so expensive that you should only attempt it if you have access to gpu--it' absolutely intractable on cpu if you can' run your code on gputhen the previous technique is the way to go note licensed to
deep learning for computer vision because models behave just like layersyou can add model (like conv_baseto sequential model just like you would add layer listing adding densely connect ed classifier on op of he convolut ional base from keras import models from keras import layers model models sequential(model add(conv_basemodel add(layers flatten()model add(layers dense( activation='relu')model add(layers dense( activation='sigmoid')this is what the model looks like nowmodel summary(layer (typeoutput shape param ===============================================================vgg (model(none flatten_ (flatten(none dense_ (dense(none dense_ (dense(none ===============================================================total params , , trainable params , , non-trainable params as you can seethe convolutional base of vgg has , , parameterswhich is very large the classifier you're adding on top has million parameters before you compile and train the modelit' very important to freeze the convolutional base freezing layer or set of layers means preventing their weights from being updated during training if you don' do thisthen the representations that were previously learned by the convolutional base will be modified during training because the dense layers on top are randomly initializedvery large weight updates would be propagated through the networkeffectively destroying the representations previously learned in kerasyou freeze network by setting its trainable attribute to falseprint('this is the number of trainable weights 'before freezing the conv base:'len(model trainable_weights)this is the number of trainable weights before freezing the conv base conv_base trainable false print('this is the number of trainable weights 'after freezing the conv base:'len(model trainable_weights)this is the number of trainable weights after freezing the conv base licensed to
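If you want to double-check exactly which parts of the model will be updated, a quick sanity check such as the following (an illustrative snippet, not one of the book's listings) can help:

for layer in model.layers:
    # After freezing, the vgg16 sub-model reports trainable=False, while the
    # Flatten and Dense layers you added on top remain trainable.
    print(layer.name, layer.trainable)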
using pretrained convnet with this setuponly the weights from the two dense layers that you added will be trained that' total of four weight tensorstwo per layer (the main weight matrix and the bias vectornote that in order for these changes to take effectyou must first compile the model if you ever modify weight trainability after compilationyou should then recompile the modelor these changes will be ignored now you can start training your modelwith the same data-augmentation configuration that you used in the previous example listing training he model end end with frozen convolut ional base from keras preprocessing image import imagedatagenerator from keras import optimizers train_datagen imagedatageneratorrescale= / rotation_range= width_shift_range= height_shift_range= shear_range= zoom_range= horizontal_flip=truefill_mode='nearest'note that the validation data shouldn' be augmentedtest_datagen imagedatagenerator(rescale= / train_generator train_datagen flow_from_directorytrain_dirtarget target_size=( )resizes all images to directory batch_size= class_mode='binary'validation_generator test_datagen flow_from_directoryvalidation_dirtarget_size=( )batch_size= class_mode='binary'because you use binary_crossentropy lossyou need binary labels model compile(loss='binary_crossentropy'optimizer=optimizers rmsprop(lr= - )metrics=['acc']history model fit_generatortrain_generatorsteps_per_epoch= epochs= validation_data=validation_generatorvalidation_steps= let' plot the results again (see figures and as you can seeyou reach validation accuracy of about this is much better than you achieved with the small convnet trained from scratch licensed to
deep learning for computer vision figure training and validat ion accuracy for feat ure extraction wit data augment ation figure training and validat ion loss for feat ure ext ract ion wit dat augment ation fine-tuning another widely used technique for model reusecomplementary to feature extractionis fine-tuning (see figure fine-tuning consists of unfreezing few of the top layers of frozen model base used for feature extractionand jointly training both the newly added part of the model (in this casethe fully connected classifierand these top layers this is called fine-tuning because it slightly adjusts the more abstract representations of the model being reusedin order to make them more relevant for the problem at hand licensed to
Figure: Fine-tuning the last convolutional block of the VGG16 network. The diagram stacks the VGG16 convolutional blocks under the custom fully connected classifier: the earlier conv blocks are frozen, while the last conv block and the fully connected classifier are fine-tuned.
deep learning for computer vision stated earlier that it' necessary to freeze the convolution base of vgg in order to be able to train randomly initialized classifier on top for the same reasonit' only possible to fine-tune the top layers of the convolutional base once the classifier on top has already been trained if the classifier isn' already trainedthen the error signal propagating through the network during training will be too largeand the representations previously learned by the layers being fine-tuned will be destroyed thus the steps for fine-tuning network are as follow add your custom network on top of an already-trained base network freeze the base network train the part you added unfreeze some layers in the base network jointly train both these layers and the part you added you already completed the first three steps when doing feature extraction let' proceed with step you'll unfreeze your conv_base and then freeze individual layers inside it as reminderthis is what your convolutional base looks likeconv_base summary(layer (typeoutput shape param ===============================================================input_ (inputlayer(none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none licensed to
block _conv (convolution (none block _conv (convolution (none block _conv (convolution (none block _pool (maxpooling (none ===============================================================total params you'll fine-tune the last three convolutional layerswhich means all layers up to block _pool should be frozenand the layers block _conv block _conv and block _conv should be trainable why not fine-tune more layerswhy not fine-tune the entire convolutional baseyou could but you need to consider the followingearlier layers in the convolutional base encode more-genericreusable featureswhereas layers higher up encode more-specialized features it' more useful to fine-tune the more specialized featuresbecause these are the ones that need to be repurposed on your new problem there would be fast-decreasing returns in fine-tuning lower layers the more parameters you're trainingthe more you're at risk of overfitting the convolutional base has million parametersso it would be risky to attempt to train it on your small dataset thusin this situationit' good strategy to fine-tune only the top two or three layers in the convolutional base let' set this upstarting from where you left off in the previous example listing freezing all layers up to specific one conv_base trainable true set_trainable false for layer in conv_base layersif layer name ='block _conv 'set_trainable true if set_trainablelayer trainable true elselayer trainable false now you can begin fine-tuning the network you'll do this with the rmsprop optimizerusing very low learning rate the reason for using low learning rate is that you want to limit the magnitude of the modifications you make to the representations of the three layers you're fine-tuning updates that are too large may harm these representations licensed to
listing deep learning for computer vision fine- uning he model model compile(loss='binary_crossentropy'optimizer=optimizers rmsprop(lr= - )metrics=['acc']history model fit_generatortrain_generatorsteps_per_epoch= epochs= validation_data=validation_generatorvalidation_steps= let' plot the results using the same plotting code as before (see figures and figure training and validat ion accuracy for fine- uning figure training and validat ion loss for fine- uning these curves look noisy to make them more readableyou can smooth them by replacing every loss and accuracy with exponential moving averages of these quantities here' trivial utility function to do this (see figures and licensed to
listing smoot hing he plot def smooth_curve(pointsfactor= )smoothed_points [for point in pointsif smoothed_pointsprevious smoothed_points[- smoothed_points append(previous factor point ( factor)elsesmoothed_points append(pointreturn smoothed_points plt plot(epochssmooth_curve(acc)'bo'label='smoothed training acc'plt plot(epochssmooth_curve(val_acc)' 'label='smoothed validation acc'plt title('training and validation accuracy'plt legend(plt figure(plt plot(epochssmooth_curve(loss)'bo'label='smoothed training loss'plt plot(epochssmooth_curve(val_loss)' 'label='smoothed validation loss'plt title('training and validation loss'plt legend(plt show(figure smoot hed curves for raining and validation accuracy for fine-tuning licensed to
figure deep learning for computer vision smoot hed curves for raining and validation loss for fine-tuning the validation accuracy curve look much cleaner you're seeing nice absolute improvement in accuracyfrom about to above note that the loss curve doesn' show any real improvement (in factit' deterioratingyou may wonderhow could accuracy stay stable or improve if the loss isn' decreasingthe answer is simplewhat you display is an average of pointwise loss valuesbut what matters for accuracy is the distribution of the loss valuesnot their averagebecause accuracy is the result of binary thresholding of the class probability predicted by the model the model may still be improving even if this isn' reflected in the average loss you can now finally evaluate this model on the test datatest_generator test_datagen flow_from_directorytest_dirtarget_size=( )batch_size= class_mode='binary'test_losstest_acc model evaluate_generator(test_generatorsteps= print('test acc:'test_acchere you get test accuracy of in the original kaggle competition around this datasetthis would have been one of the top results but using modern deep-learning techniquesyou managed to reach this result using only small fraction of the training data available (about %there is huge difference between being able to train on , samples compared to , sampleslicensed to
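To make the earlier point about average loss versus accuracy concrete, here is a small illustrative computation with made-up prediction values (they aren't taken from the model above): the second set of predictions has a higher average binary crossentropy, yet exactly the same accuracy, because every prediction still falls on the correct side of the 0.5 threshold.

import numpy as np

y_true = np.array([1., 1., 0., 0.])
preds_a = np.array([0.90, 0.60, 0.40, 0.10])
preds_b = np.array([0.99, 0.55, 0.45, 0.30])  # higher average loss, same accuracy

def binary_crossentropy(y, p):
    # Average pointwise binary crossentropy over the samples
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def accuracy(y, p):
    # Accuracy comes from thresholding the predicted probabilities at 0.5
    return np.mean((p > 0.5) == y)

print(binary_crossentropy(y_true, preds_a), accuracy(y_true, preds_a))
print(binary_crossentropy(y_true, preds_b), accuracy(y_true, preds_b))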
Wrapping up

Here's what you should take away from the exercises in the past two sections:

- Convnets are the best type of machine-learning models for computer-vision tasks. It's possible to train one from scratch even on a very small dataset, with decent results.
- On a small dataset, overfitting will be the main issue. Data augmentation is a powerful way to fight overfitting when you're working with image data.
- It's easy to reuse an existing convnet on a new dataset via feature extraction. This is a valuable technique for working with small image datasets.
- As a complement to feature extraction, you can use fine-tuning, which adapts to a new problem some of the representations previously learned by an existing model. This pushes performance a bit further.

Now you have a solid set of tools for dealing with image-classification problems, in particular with small datasets.
deep learning for computer vision visualizing what convnets learn it' often said that deep-learning models are "black boxes"learning representations that are difficult to extract and present in human-readable form although this is partially true for certain types of deep-learning modelsit' definitely not true for convnets the representations learned by convnets are highly amenable to visualizationin large part because they're representations of visual concepts since wide array of techniques have been developed for visualizing and interpreting these representations we won' survey all of thembut we'll cover three of the most accessible and useful onesvisualizing intermediate convnet outputs (intermediate activations)--useful for understanding how successive convnet layers transform their inputand for getting first idea of the meaning of individual convnet filters visualizing convnets filters--useful for understanding precisely what visual pattern or concept each filter in convnet is receptive to visualizing heatmaps of class activation in an image--useful for understanding which parts of an image were identified as belonging to given classthus allowing you to localize objects in images for the first method--activation visualization--you'll use the small convnet that you trained from scratch on the dogs-versus-cats classification problem in section for the next two methodsyou'll use the vgg model introduced in section visualizing intermediate activations visualizing intermediate activations consists of displaying the feature maps that are output by various convolution and pooling layers in networkgiven certain input (the output of layer is often called its activationthe output of the activation functionthis gives view into how an input is decomposed into the different filters learned by the network you want to visualize feature maps with three dimensionswidthheightand depth (channelseach channel encodes relatively independent featuresso the proper way to visualize these feature maps is by independently plotting the contents of every channel as image let' start by loading the model that you saved in section from keras models import load_model model load_model('cats_and_dogs_small_ 'model summary(as reminder layer (typeoutput shape param ===============================================================conv d_ (conv (none maxpooling d_ (maxpooling (none conv d_ (conv (none maxpooling d_ (maxpooling (none licensed to
visualizing what convnets learn conv d_ (conv (none maxpooling d_ (maxpooling (none conv d_ (conv (none maxpooling d_ (maxpooling (none flatten_ (flatten(none dropout_ (dropout(none dense_ (dense(none dense_ (dense(none ===============================================================total params , , trainable params , , non-trainable params nextyou'll get an input image-- picture of catnot part of the images the network was trained on listing preprocessing single image img_path '/users/fchollet/downloads/cats_and_dogs_small/test/cats/cat jpgfrom keras preprocessing import image import numpy as np preprocesses the image into tensor img image load_img(img_pathtarget_size=( )img_tensor image img_to_array(imgimg_tensor np expand_dims(img_tensoraxis= img_tensor / its shape is ( print(img_tensor shaperemember that the model was trained on inputs that were preprocessed this way let' display the picture (see figure listing displaying the est pict ure import matplotlib pyplot as plt plt imshow(img_tensor[ ]plt show(licensed to
deep learning for computer vision figure the test cat picture in order to extract the feature maps you want to look atyou'll create keras model that takes batches of images as inputand outputs the activations of all convolution and pooling layers to do thisyou'll use the keras class model model is instantiated using two argumentsan input tensor (or list of input tensorsand an output tensor (or list of output tensorsthe resulting class is keras modeljust like the sequential models you're familiar withmapping the specified inputs to the specified outputs what sets the model class apart is that it allows for models with multiple outputsunlike sequential for more information about the model classsee section listing inst ant iat ing model from an input ensor and list of out put ensors from keras import models layer_outputs [layer output for layer in model layers[: ]activation_model models model(inputs=model inputoutputs=layer_outputsextracts the outputs of the top eight layers creates model that will return these outputsgiven the model input when fed an image inputthis model returns the values of the layer activations in the original model this is the first time you've encountered multi-output model in this bookuntil nowthe models you've seen have had exactly one input and one output in the general casea model can have any number of inputs and outputs this one has one input and eight outputsone output per layer activation licensed to
visualizing what convnets learn listing running he model in predict mode activations activation_model predict(img_tensorreturns list of five numpy arraysone array per layer activation for instancethis is the activation of the first convolution layer for the cat image inputfirst_layer_activation activations[ print(first_layer_activation shape( it' feature map with channels let' try plotting the fourth channel of the activation of the first layer of the original model (see figure listing visualizing he fourt channel import matplotlib pyplot as plt plt matshow(first_layer_activation[ :: ]cmap='viridis'figure fourt channel of he activat ion of the first layer on he test cat picture this channel appears to encode diagonal edge detector let' try the seventh channel (see figure )--but note that your own channels may varybecause the specific filters learned by convolution layers aren' deterministic listing visualizing he sevent channel plt matshow(first_layer_activation[ :: ]cmap='viridis'licensed to
deep learning for computer vision figure sevent channel of the act ivation of he first layer on he est cat pict ure this one looks like "bright green dotdetectoruseful to encode cat eyes at this pointlet' plot complete visualization of all the activations in the network (see figure you'll extract and plot every channel in each of the eight activation mapsand you'll stack the results in one big image tensorwith channels stacked side by side listing visualizing every channel in every int ermediat act ivat ion layer_names [for layer in model layers[: ]layer_names append(layer namenames of the layersso you can have them as part of your plot displays the feature maps images_per_row number of features in the feature map tiles the activation channels in this matrix post-processes the feature to make it visually palatable for layer_namelayer_activation in zip(layer_namesactivations)n_features layer_activation shape[- size layer_activation shape[ the feature map has shape ( sizesizen_featuresn_cols n_features /images_per_row display_grid np zeros((size n_colsimages_per_row size)for col in range(n_cols)tiles each filter into for row in range(images_per_row) big horizontal grid channel_image layer_activation[ ::col images_per_row rowchannel_image -channel_image mean(channel_image /channel_image std(channel_image * channel_image + channel_image np clip(channel_image astype('uint 'display_grid[col size (col sizerow size (row sizechannel_image scale size plt figure(figsize=(scale display_grid shape[ ]scale display_grid shape[ ])plt title(layer_nameplt grid(falseplt imshow(display_gridaspect='auto'cmap='viridis'licensed to displays the grid
Figure: Every channel of every layer activation on the test cat picture
deep learning for computer vision there are few things to note herethe first layer acts as collection of various edge detectors at that stagethe activations retain almost all of the information present in the initial picture as you go higherthe activations become increasingly abstract and less visually interpretable they begin to encode higher-level concepts such as "cat earand "cat eye higher presentations carry increasingly less information about the visual contents of the imageand increasingly more information related to the class of the image the sparsity of the activations increases with the depth of the layerin the first layerall filters are activated by the input imagebut in the following layersmore and more filters are blank this means the pattern encoded by the filter isn' found in the input image we have just evidenced an important universal characteristic of the representations learned by deep neural networksthe features extracted by layer become increasingly abstract with the depth of the layer the activations of higher layers carry less and less information about the specific input being seenand more and more information about the target (in this casethe class of the imagecat or doga deep neural network effectively acts as an information distillation pipelinewith raw data going in (in this casergb picturesand being repeatedly transformed so that irrelevant information is filtered out (for examplethe specific visual appearance of the image)and useful information is magnified and refined (for examplethe class of the imagethis is analogous to the way humans and animals perceive the worldafter observing scene for few secondsa human can remember which abstract objects were present in it (bicycletreebut can' remember the specific appearance of these objects in factif you tried to draw generic bicycle from memorychances are you couldn' get it even remotely righteven though you've seen thousands of bicycles in your lifetime (seefor examplefigure try it right nowthis effect is absolutely real you brain has learned to completely abstract its visual input--to transform it into high-level visual concepts while filtering out irrelevant visual details--making it tremendously difficult to remember how things around you look figure left at empt to draw bicycle from memory rightwhat schematic bicycle should look like licensed to
visualizing what convnets learn visualizing convnet filters another easy way to inspect the filters learned by convnets is to display the visual pattern that each filter is meant to respond to this can be done with gradient ascent in input space applying gradient descent to the value of the input image of convnet so as to maximize the response of specific filterstarting from blank input image the resulting input image will be one that the chosen filter is maximally responsive to the process is simpleyou'll build loss function that maximizes the value of given filter in given convolution layerand then you'll use stochastic gradient descent to adjust the values of the input image so as to maximize this activation value for instancehere' loss for the activation of filter in the layer block _conv of the vgg networkpretrained on imagenet listing defining he loss ensor for filt er visualizat ion from keras applications import vgg from keras import backend as model vgg (weights='imagenet'include_top=falselayer_name 'block _conv filter_index layer_output model get_layer(layer_nameoutput loss mean(layer_output[:::filter_index]to implement gradient descentyou'll need the gradient of this loss with respect to the model' input to do thisyou'll use the gradients function packaged with the backend module of keras listing obt aining he gradient of he loss wit regard he input grads gradients(lossmodel input)[ the call to gradients returns list of tensors (of size in this casehenceyou keep only the first element-which is tensor non-obvious trick to use to help the gradient-descent process go smoothly is to normalize the gradient tensor by dividing it by its norm (the square root of the average of the square of the values in the tensorthis ensures that the magnitude of the updates done to the input image is always within the same range listing gradient -normalizat ion rick grads /( sqrt( mean( square(grads)) - add - before dividing to avoid accidentally dividing by now you need way to compute the value of the loss tensor and the gradient tensorgiven an input image you can define keras backend function to do thisiterate is licensed to
deep learning for computer vision function that takes numpy tensor (as list of tensors of size and returns list of two numpy tensorsthe loss value and the gradient value listing fet ching numpy out put values given numpy input values iterate function([model input][lossgrads]import numpy as np loss_valuegrads_value iterate([np zeros(( ))]at this pointyou can define python loop to do stochastic gradient descent listing loss maximizat ion via st ochast ic gradient descent starts from gray image with some noise input_img_data np random random(( ) step magnitude of each gradient update for in range( )loss_valuegrads_value iterate([input_img_data]input_img_data +grads_value step computes the loss value and gradient value runs gradient ascent for steps adjusts the input image in the direction that maximizes the loss the resulting image tensor is floating-point tensor of shape ( )with values that may not be integers within [ henceyou need to postprocess this tensor to turn it into displayable image you do so with the following straightforward utility function listing ut ilit function convert ensor int valid image def deprocess_image( ) - mean( /( std( - * + np clip( normalizes the tensorcenters on ensures that std is clips to [ * np clip( astype('uint 'return converts to an rgb array now you have all the pieces let' put them together into python function that takes as input layer name and filter indexand returns valid image tensor representing the pattern that maximizes the activation of the specified filter licensed to
visualizing what convnets learn listing funct ion generat filt er visualizat ions builds loss function that maximizes the activation of the nth filter of the layer under consideration computes the gradient of the input picture with regard to this loss def generate_pattern(layer_namefilter_indexsize= )layer_output model get_layer(layer_nameoutput loss mean(layer_output[:::filter_index]grads gradients(lossmodel input)[ grads /( sqrt( mean( square(grads)) - iterate function([model input][lossgrads]normalization tricknormalizes the gradient returns the loss and grads given the input picture input_img_data np random random(( sizesize ) runs gradient ascent for steps step for in range( )loss_valuegrads_value iterate([input_img_data]input_img_data +grads_value step starts from gray image with some noise img input_img_data[ return deprocess_image(imglet' try it (see figure )plt imshow(generate_pattern('block _conv ' )figure pat ern hat he zerot channel in layer block _conv responds to maximally it seems that filter in layer block _conv is responsive to polka-dot pattern now the fun partyou can start visualizing every filter in every layer for simplicityyou'll only look at the first filters in each layerand you'll only look at the first layer of each convolution block (block _conv block _conv block _conv block conv block _conv you'll arrange the outputs on an grid of filter patternswith some black margins between each filter pattern (see figures licensed to
listing deep learning for computer vision generat ing grid of all filt er response pat erns in layer layer_name 'block _conv size margin empty (blackimage to store results results np zeros(( size margin size margin )for in range( )iterates over the rows of the results grid for in range( )iterates over the columns of the results grid filter_img generate_pattern(layer_namei ( )size=sizegenerates the pattern for filter ( in layer_name horizontal_start size margin horizontal_end horizontal_start size vertical_start size margin vertical_end vertical_start size results[horizontal_starthorizontal_endvertical_startvertical_end:filter_img plt figure(figsize=( )plt imshow(resultsfigure displays the results grid filter pat erns for layer block _conv licensed to puts the result in the square (ijof the results grid
Figure: Filter patterns for layer block_conv1
Figure: Filter patterns for layer block_conv1
figure deep learning for computer vision filter pat erns for layer block _conv these filter visualizations tell you lot about how convnet layers see the worldeach layer in convnet learns collection of filters such that their inputs can be expressed as combination of the filters this is similar to how the fourier transform decomposes signals onto bank of cosine functions the filters in these convnet filter banks get increasingly complex and refined as you go higher in the modelthe filters from the first layer in the model (block _conv encode simple directional edges and colors (or colored edgesin some casesthe filters from block _conv encode simple textures made from combinations of edges and colors the filters in higher layers begin to resemble textures found in natural imagesfeatherseyesleavesand so on visualizing heatmaps of class activation 'll introduce one more visualization techniqueone that is useful for understanding which parts of given image led convnet to its final classification decision this is helpful for debugging the decision process of convnetparticularly in the case of classification mistake it also allows you to locate specific objects in an image this general category of techniques is called class activation map (camvisualizationand it consists of producing heatmaps of class activation over input images class activation heatmap is grid of scores associated with specific output classcomputed for every location in any input imageindicating how important each location is with licensed to
visualizing what convnets learn respect to the class under consideration for instancegiven an image fed into dogsversus-cats convnetcam visualization allows you to generate heatmap for the class "cat,indicating how cat-like different parts of the image areand also heatmap for the class "dog,indicating how dog-like parts of the image are the specific implementation you'll use is the one described in "grad-camvisual explanations from deep networks via gradient-based localization " it' very simpleit consists of taking the output feature map of convolution layergiven an input imageand weighing every channel in that feature map by the gradient of the class with respect to the channel intuitivelyone way to understand this trick is that you're weighting spatial map of "how intensely the input image activates different channelsby "how important each channel is with regard to the class,resulting in spatial map of "how intensely the input image activates the class we'll demonstrate this technique using the pretrained vgg network again listing loading he vgg net work wit pret rained weight from keras applications vgg import vgg model vgg (weights='imagenet'note that you include the densely connected classifier on topin all previous casesyou discarded it consider the image of two african elephants shown in figure (under creative commons license)possibly mother and her calfstrolling on the savanna let' convert this image into something the vgg model can readthe model was trained on images of size preprocessed according to few rules that are packaged in the utility function keras applications vgg preprocess_input so you need to load the imageresize it to convert it to numpy float tensorand apply these preprocessing rules figure test pict ure of african elephant ramprasaath selvaraju et al arxiv ( )licensed to
listing deep learning for computer vision preprocessing an input image for vgg from keras preprocessing import image from keras applications vgg import preprocess_inputdecode_predictions import numpy as np img_path '/users/fchollet/downloads/creative_commons_elephant jpgimg image load_img(img_pathtarget_size=( ) image img_to_array(imgfloat numpy array of shape ( np expand_dims(xaxis= preprocess_input(xpython imaging library (pilimage of size adds dimension to transform the array into batch of size ( preprocesses the batch (this does channel-wise color normalizationlocal path to the target image you can now run the pretrained network on the image and decode its prediction vector back to human-readable formatpreds model predict(xprint('predicted:'decode_predictions(predstop= )[ ]predicted:'[( ' ' 'african_elephant' )( ' ' 'tusker' )( ' ' 'indian_elephant' )the top three classes predicted for this image are as followsafrican elephant (with probabilitytusker (with probabilityindian elephant (with probabilitythe network has recognized the image as containing an undetermined quantity of african elephants the entry in the prediction vector that was maximally activated is the one corresponding to the "african elephantclassat index np argmax(preds[ ] to visualize which parts of the image are the most african elephant-likelet' set up the grad-cam process listing set ing up the grad-cam algorit hm african elephantentry in the prediction vector african_e lephant_output model output[: last_conv_layer model get_layer('block _conv 'licensed to output feature map of the block _conv layerthe last convolutional layer in vgg
visualizing what convnets learn gradient of the african elephantclass with regard to the output feature map of block _conv vector of shape ( ,)where each entry is the mean intensity of the gradient over specific feature-map channel grads gradients(african_elephant_outputlast_conv_layer output)[ pooled_grads mean(gradsaxis=( )iterate function([model input][pooled_gradslast_conv_layer output[ ]]pooled_grads_valueconv_layer_output_value iterate([ ]for in range( )conv_layer_output_value[:: *pooled_grads_value[iheatmap np mean(conv_layer_output_valueaxis=- the channel-wise mean of the resulting feature map is the heatmap of the class activation values of these two quantitiesas numpy arraysgiven the sample image of two elephants lets you access the values of the quantities you just definedpooled_grads and the output feature map of block _conv given sample image multiplies each channel in the feature-map array by how important this channel iswith regard to the elephantclass for visualization purposesyou'll also normalize the heatmap between and the result is shown in figure listing heat map post -processing heatmap np maximum(heatmap heatmap /np max(heatmapplt matshow(heatmap figure african elephant class act ivation heatmap over the est picture licensed to
deep learning for computer vision finallyyou'll use opencv to generate an image that superimposes the original image on the heatmap you just obtained (see figure listing superimposing he heat map wit he original pict ure import cv img cv imread(img_pathuses cv to load the original image resizes the heatmap to be the same size as the original image heatmap cv resize(heatmap(img shape[ ]img shape[ ])heatmap np uint ( heatmapheatmap cv applycolormap(heatmapcv colormap_jetconverts the heatmap to rgb superimposed_img heatmap img cv imwrite('/users/fchollet/downloads/elephant_cam jpg'superimposed_img here is heatmap intensity factor saves the image to disk applies the heatmap to the original image figure superimposing he class act ivation heatmap on the original pict ure this visualization technique answers two important questionswhy did the network think this image contained an african elephantwhere is the african elephant located in the picturein particularit' interesting to note that the ears of the elephant calf are strongly activatedthis is probably how the network can tell the difference between african and indian elephants licensed to
Summary

- Convnets are the best tool for attacking visual-classification problems.
- Convnets work by learning a hierarchy of modular patterns and concepts to represent the visual world.
- The representations they learn are easy to inspect; convnets are the opposite of black boxes.
- You're now capable of training your own convnet from scratch to solve an image-classification problem.
- You understand how to use visual data augmentation to fight overfitting.
- You know how to use a pretrained convnet to do feature extraction and fine-tuning.
- You can generate visualizations of the filters learned by your convnets, as well as heatmaps of class activity.
Deep learning for text and sequences

This chapter covers
- Preprocessing text data into useful representations
- Working with recurrent neural networks
- Using convnets for sequence processing

This chapter explores deep-learning models that can process text (understood as sequences of words or sequences of characters), timeseries, and sequence data in general. The two fundamental deep-learning algorithms for sequence processing are recurrent neural networks and convnets, the one-dimensional version of the convnets that we covered in the previous chapter. We'll discuss both of these approaches in this chapter.

Applications of these algorithms include the following:

- Document classification and timeseries classification, such as identifying the topic of an article or the author of a book
- Timeseries comparisons, such as estimating how closely related two documents or two stock tickers are
- Sequence-to-sequence learning, such as decoding an English sentence into French
- Sentiment analysis, such as classifying the sentiment of tweets or movie reviews as positive or negative
- Timeseries forecasting, such as predicting the future weather at a certain location, given recent weather data

This chapter's examples focus on two narrow tasks: sentiment analysis on the IMDB dataset, a task we approached earlier in the book, and temperature forecasting. But the techniques demonstrated for these two tasks are relevant to all the applications just listed, and many more.
Working with text data

Text is one of the most widespread forms of sequence data. It can be understood as either a sequence of characters or a sequence of words, but it's most common to work at the level of words. The deep-learning sequence-processing models introduced in the following sections can use text to produce a basic form of natural-language understanding, sufficient for applications including document classification, sentiment analysis, author identification, and even question-answering (QA) (in a constrained context). Of course, keep in mind throughout this chapter that none of these deep-learning models truly understand text in a human sense; rather, these models can map the statistical structure of written language, which is sufficient to solve many simple textual tasks. Deep learning for natural-language processing is pattern recognition applied to words, sentences, and paragraphs, in much the same way that computer vision is pattern recognition applied to pixels.

Like all other neural networks, deep-learning models don't take as input raw text: they only work with numeric tensors. Vectorizing text is the process of transforming text into numeric tensors. This can be done in multiple ways:

- Segment text into words, and transform each word into a vector.
- Segment text into characters, and transform each character into a vector.
- Extract N-grams of words or characters, and transform each N-gram into a vector. N-grams are overlapping groups of multiple consecutive words or characters.

Collectively, the different units into which you can break down text (words, characters, or N-grams) are called tokens, and breaking text into such tokens is called tokenization. All text-vectorization processes consist of applying some tokenization scheme and then associating numeric vectors with the generated tokens. These vectors, packed into sequence tensors, are fed into deep neural networks. There are multiple ways to associate a vector with a token. In this section, I'll present two major ones: one-hot encoding of tokens, and token embedding (typically used exclusively for words, and called word embedding). The remainder of this section explains these techniques and shows how to use them to go from raw text to a Numpy tensor that you can send to a Keras network.

Figure: From text, to tokens, to vectors (the text "The cat sat on the mat." is split into the tokens "the", "cat", "sat", "on", "the", "mat", and each token is then mapped to a vector encoding)
Understanding N-grams and bag-of-words

Word N-grams are groups of N (or fewer) consecutive words that you can extract from a sentence. The same concept may also be applied to characters instead of words. Here's a simple example. Consider the sentence "The cat sat on the mat." It may be decomposed into the following set of 2-grams:

{"the", "the cat", "cat", "cat sat", "sat", "sat on", "on", "on the", "the", "the mat", "mat"}

It may also be decomposed into the following set of 3-grams:

{"the", "the cat", "cat", "cat sat", "the cat sat", "sat", "sat on", "on", "cat sat on", "on the", "the", "sat on the", "the mat", "mat", "on the mat"}

Such a set is called a bag-of-2-grams or bag-of-3-grams, respectively. The term bag here refers to the fact that you're dealing with a set of tokens rather than a list or sequence: the tokens have no specific order. This family of tokenization methods is called bag-of-words.

Because bag-of-words isn't an order-preserving tokenization method (the tokens generated are understood as a set, not a sequence, and the general structure of the sentences is lost), it tends to be used in shallow language-processing models rather than in deep-learning models. Extracting N-grams is a form of feature engineering, and deep learning does away with this kind of rigid, brittle approach, replacing it with hierarchical feature learning. One-dimensional convnets and recurrent neural networks, introduced later in this chapter, are capable of learning representations for groups of words and characters without being explicitly told about the existence of such groups, by looking at continuous word or character sequences. For this reason, we won't cover N-grams any further in this book. But do keep in mind that they're a powerful, unavoidable feature-engineering tool when using lightweight, shallow text-processing models such as logistic regression and random forests.

One-hot encoding of words and characters

One-hot encoding is the most common, most basic way to turn a token into a vector. You saw it in action in the initial IMDB and Reuters examples earlier in the book (done with words, in that case). It consists of associating a unique integer index with every word and then turning this integer index into a binary vector the size of the vocabulary; the vector is all zeros except for the entry corresponding to that index. Of course, one-hot encoding can be done at the character level, as well. To unambiguously drive home what one-hot encoding is and how to implement it, the following listings show two toy examples: one for words, the other for characters.
listing deep learning for text and sequences word-level one-hot encoding ( oy examplebuilds an index of all tokens in the data initial dataone entry per sample (in this examplea sample is sentencebut it could be an entire documenttokenizes the samples via the split method in real lifeyou' also strip punctuation and special characters from the samples import numpy as np samples ['the cat sat on the mat ''the dog ate my homework 'token_index {for sample in samplesfor word in sample split()if word not in token_indextoken_index[wordlen(token_index max_length results np zeros(shape=(len(samples)max_lengthmax(token_index values() )for isample in enumerate(samples)for jword in list(enumerate(sample split()))[:max_length]index token_index get(wordresults[ijindex this is where you store the results assigns unique index to each unique word note that you don' attribute index to anything listing vectorizes the samples you'll only consider the first max_length words in each sample charact er-level one-hot encoding (toy exampleimport string samples ['the cat sat on the mat ''the dog ate my homework 'characters string printable token_index dict(zip(range( len(characters )characters)max_length results np zeros((len(samples)max_lengthmax(token_index keys() )for isample in enumerate(samples)for jcharacter in enumerate(sample)index token_index get(characterall printable ascii results[ijindex characters note that keras has built-in utilities for doing one-hot encoding of text at the word level or character levelstarting from raw text data you should use these utilitiesbecause they take care of number of important features such as stripping special characters from strings and only taking into account the most common words in your dataset ( common restrictionto avoid dealing with very large input vector spaceslicensed to
working with text data listing using keras for word-level one-hot encoding from keras preprocessing text import tokenizer creates tokenizerconfigured to only take into account the , most common words samples ['the cat sat on the mat ''the dog ate my homework 'builds the word index tokenizer tokenizer(num_words= tokenizer fit_on_texts(samplessequences tokenizer texts_to_sequences(samplesturns strings into lists of integer indices one_hot_results tokenizer texts_to_matrix(samplesmode='binary'word_index tokenizer word_index print('found % unique tokens len(word_index)you could also directly get the one-hot binary representations vectorization modes other than one-hot encoding are supported by this tokenizer how you can recover the word index that was computed variant of one-hot encoding is the so-called one-hot hashing trickwhich you can use when the number of unique tokens in your vocabulary is too large to handle explicitly instead of explicitly assigning an index to each word and keeping reference of these indices in dictionaryyou can hash words into vectors of fixed size this is typically done with very lightweight hashing function the main advantage of this method is that it does away with maintaining an explicit word indexwhich saves memory and allows online encoding of the data (you can generate token vectors right awaybefore you've seen all of the available datathe one drawback of this approach is that it' susceptible to hash collisionstwo different words may end up with the same hashand subsequently any machine-learning model looking at these hashes won' be able to tell the difference between these words the likelihood of hash collisions decreases when the dimensionality of the hashing space is much larger than the total number of unique tokens being hashed listing word-level one-hot encoding wit hashing rick ( oy examplesamples ['the cat sat on the mat ''the dog ate my homework 'dimensionality max_length results np zeros((len(samples)max_lengthdimensionality)for isample in enumerate(samples)for jword in list(enumerate(sample split()))[:max_length]index abs(hash(word)dimensionality results[ijindex stores the words as vectors of size , if you have close to , words (or more)you'll see many hash collisionswhich will decrease the accuracy of this encoding method licensed to hashes the word into random integer index between and ,
deep learning for text and sequences using word embeddings another popular and powerful way to associate vector with word is the use of dense word vectorsalso called word embeddings whereas the vectors obtained through one-hot encoding are binarysparse (mostly made of zeros)and very high-dimensional (same dimensionality as the number of words in the vocabulary)word embeddings are lowdimensional floating-point vectors (that isdense vectorsas opposed to sparse vectors)see figure unlike the word vectors obtained via one-hot encodingword embeddings are learned from data it' common to see word embeddings that are -dimensional -dimensionalor , -dimensional when dealing with very large vocabularies on the other handone-hot encoding words generally leads to vectors that are , -dimensional or greater (capturing vocabulary of , tokensin this casesoword embeddings pack more information into far fewer dimensions one-hot word vectorssparse high-dimensional hardcoded word embeddingsdense lower-dimensional learned from data figure whereas word represent ations obtained from one-hot encoding or hashing are sparsehigh-dimensionaland hardcodedword embeddings are denserelat ively lowdimensionaland learned from data there are two ways to obtain word embeddingslearn word embeddings jointly with the main task you care about (such as document classification or sentiment predictionin this setupyou start with random word vectors and then learn word vectors in the same way you learn the weights of neural network load into your model word embeddings that were precomputed using different machine-learning task than the one you're trying to solve these are called pretrained word embeddings let' look at both licensed to
learning word embeddings with the embedding layer the simplest way to associate dense vector with word is to choose the vector at random the problem with this approach is that the resulting embedding space has no structurefor instancethe words accurate and exact may end up with completely different embeddingseven though they're interchangeable in most sentences it' difficult for deep neural network to make sense of such noisyunstructured embedding space to get bit more abstractthe geometric relationships between word vectors should reflect the semantic relationships between these words word embeddings are meant to map human language into geometric space for instancein reasonable embedding spaceyou would expect synonyms to be embedded into similar word vectorsand in generalyou would expect the geometric distance (such as distancebetween any two word vectors to relate to the semantic distance between the associated words (words meaning different things are embedded at points far away from each otherwhereas related words are closerin addition to distanceyou may want specific directions in the embedding space to be meaningful to make this clearerlet' look at concrete example in figure four words are embedded on planecatdogwolfand tiger with the vector representations we wolf chose heresome semantic relationships between these tiger words can be encoded as geometric transformations for instancethe same vector allows us to go from cat to tiger dog cat and from dog to wolfthis vector could be interpreted as the "from pet to wild animalvector similarlyanother vector lets us go from dog to cat and from wolf to tigerwhich could be interpreted as "from canine to felinevector figure toy example in real-world word-embedding spacescommon examof word-embedding space ples of meaningful geometric transformations are "gendervectors and "pluralvectors for instanceby adding "femalevector to the vector "king,we obtain the vector "queen by adding "pluralvectorwe obtain "kings word-embedding spaces typically feature thousands of such interpretable and potentially useful vectors is there some ideal word-embedding space that would perfectly map human language and could be used for any natural-language-processing taskpossiblybut we have yet to compute anything of the sort alsothere is no such thing as human language--there are many different languagesand they aren' isomorphicbecause language is the reflection of specific culture and specific context but more pragmaticallywhat makes good word-embedding space depends heavily on your taskthe perfect word-embedding space for an english-language movie-review sentimentanalysis model may look different from the perfect embedding space for an englishlanguage legal-document-classification modelbecause the importance of certain semantic relationships varies from task to task licensed to
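As a toy illustration of the cat/dog/wolf/tiger figure above, here is a small sketch with made-up 2D vectors (the coordinates are invented purely for illustration) showing how a single offset vector can encode the "from pet to wild animal" relationship:

import numpy as np

# Made-up 2D embeddings, chosen only to illustrate the geometry
embeddings = {
    'cat':   np.array([1.0, 1.0]),
    'dog':   np.array([2.0, 1.0]),
    'tiger': np.array([1.0, 3.0]),
    'wolf':  np.array([2.0, 3.0]),
}

# The same offset takes "cat" to "tiger" and "dog" to "wolf":
# a "from pet to wild animal" vector
pet_to_wild = embeddings['tiger'] - embeddings['cat']
print(np.allclose(embeddings['dog'] + pet_to_wild, embeddings['wolf']))   # True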
deep learning for text and sequences it' thus reasonable to learn new embedding space with every new task fortunatelybackpropagation makes this easyand keras makes it even easier it' about learning the weights of layerthe embedding layer listing inst ant iat ing an embedding layer from keras layers import embedding embedding_layer embedding( the embedding layer takes at least two argumentsthe number of possible tokens (here , maximum word indexand the dimensionality of the embeddings (here the embedding layer is best understood as dictionary that maps integer indices (which stand for specific wordsto dense vectors it takes integers as inputit looks up these integers in an internal dictionaryand it returns the associated vectors it' effectively dictionary lookup (see figure word index embedding layer figure the embedding layer corresponding word vector the embedding layer takes as input tensor of integersof shape (samplessequence_length)where each entry is sequence of integers it can embed sequences of variable lengthsfor instanceyou could feed into the embedding layer in the previous example batches with shapes ( (batch of sequences of length or ( (batch of sequences of length all sequences in batch must have the same lengththough (because you need to pack them into single tensor)so sequences that are shorter than others should be padded with zerosand sequences that are longer should be truncated this layer returns floating-point tensor of shape (samplessequence_ lengthembedding_dimensionalitysuch tensor can then be processed by an rnn layer or convolution layer (both will be introduced in the following sectionswhen you instantiate an embedding layerits weights (its internal dictionary of token vectorsare initially randomjust as with any other layer during trainingthese word vectors are gradually adjusted via backpropagationstructuring the space into something the downstream model can exploit once fully trainedthe embedding space will show lot of structure-- kind of structure specialized for the specific problem for which you're training your model let' apply this idea to the imdb movie-review sentiment-prediction task that you're already familiar with firstyou'll quickly prepare the data you'll restrict the movie reviews to the top , most common words (as you did the first time you worked with this datasetand cut off the reviews after only words the network will learn -dimensional embeddings for each of the , wordsturn the input integer licensed to
working with text data sequences ( integer tensorinto embedded sequences ( float tensor)flatten the tensor to dand train single dense layer on top for classification listing loading he imdb dat for use wit an embedding layer from keras datasets import imdb from keras import preprocessing max_features maxlen number of words to consider as features cuts off the text after this number of words (among the max_features most common words(x_trainy_train)(x_testy_testimdb load_datanum_words=max_featuresloads the data as lists of integers x_train preprocessing sequence pad_sequences(x_trainmaxlen=maxlen x_test preprocessing sequence pad_sequences(x_testmaxlen=maxlenturns the lists of integers into integer tensor of shape (samplesmaxlenlisting using an embedding layer and classifier on he imdb data specifies the maximum input length to the embedding layer so you can later flatten the embedded inputs after the embedding layerthe activations have shape (samplesmaxlen from keras models import sequential from keras layers import flattendense flattens the tensor of embeddings into tensor of shape (samplesmaxlen model sequential(model add(embedding( input_length=maxlen)model add(flatten()adds the classifier on top model add(dense( activation='sigmoid')model compile(optimizer='rmsprop'loss='binary_crossentropy'metrics=['acc']model summary(history model fit(x_trainy_trainepochs= batch_size= validation_split= you get to validation accuracy of ~ %which is pretty good considering that you're only looking at the first words in every review but note that merely flattening the embedded sequences and training single dense layer on top leads to model that treats each word in the input sequence separatelywithout considering inter-word relationships and sentence structure (for examplethis model would likely treat both "this movie is bomband "this movie is the bombas being negative reviewsit' much better to add recurrent layers or convolutional layers on top of the embedded sequences to learn features that take into account each sequence as whole that' what we'll focus on in the next few sections licensed to
deep learning for text and sequences using pretrained word embeddings sometimesyou have so little training data available that you can' use your data alone to learn an appropriate task-specific embedding of your vocabulary what do you do theninstead of learning word embeddings jointly with the problem you want to solveyou can load embedding vectors from precomputed embedding space that you know is highly structured and exhibits useful properties--that captures generic aspects of language structure the rationale behind using pretrained word embeddings in natural-language processing is much the same as for using pretrained convnets in image classificationyou don' have enough data available to learn truly powerful features on your ownbut you expect the features that you need to be fairly generic--that iscommon visual features or semantic features in this caseit makes sense to reuse features learned on different problem such word embeddings are generally computed using word-occurrence statistics (observations about what words co-occur in sentences or documents)using variety of techniquessome involving neural networksothers not the idea of denselowdimensional embedding space for wordscomputed in an unsupervised waywas initially explored by bengio et al in the early , but it only started to take off in research and industry applications after the release of one of the most famous and successful word-embedding schemesthe word vec algorithm (archive/ /word vec)developed by tomas mikolov at google in word vec dimensions capture specific semantic propertiessuch as gender there are various precomputed databases of word embeddings that you can download and use in keras embedding layer word vec is one of them another popular one is called global vectors for word representation (gloveedu/projects/glove)which was developed by stanford researchers in this embedding technique is based on factorizing matrix of word co-occurrence statistics its developers have made available precomputed embeddings for millions of english tokensobtained from wikipedia data and common crawl data let' look at how you can get started using glove embeddings in keras model the same method is valid for word vec embeddings or any other word-embedding database you'll also use this example to refresh the text-tokenization techniques introduced few paragraphs agoyou'll start from raw text and work your way up putting it all togetherfrom raw text to word embeddings you'll use model similar to the one we just went overembedding sentences in sequences of vectorsflattening themand training dense layer on top but you'll do so using pretrained word embeddingsand instead of using the pretokenized imdb data packaged in kerasyou'll start from scratch by downloading the original text data yoshua bengio et al neural probabilistic language models (springer licensed to
working with text data downloading the imdb data as raw text firsthead to nowlet' collect the individual training reviews into list of stringsone string per review you'll also collect the review labels (positive/negativeinto labels list listing processing he labels of he raw imdb dat import os imdb_dir '/users/fchollet/downloads/aclimdbtrain_dir os path join(imdb_dir'train'labels [texts [for label_type in ['neg''pos']dir_name os path join(train_dirlabel_typefor fname in os listdir(dir_name)if fname[- :=txt' open(os path join(dir_namefname)texts append( read() close(if label_type ='neg'labels append( elselabels append( tokenizing the data let' vectorize the text and prepare training and validation splitusing the concepts introduced earlier in this section because pretrained word embeddings are meant to be particularly useful on problems where little training data is available (otherwisetask-specific embeddings are likely to outperform them)we'll add the following twistrestricting the training data to the first samples so you'll learn to classify movie reviews after looking at just examples listing tokenizing he ext of he raw imdb dat from keras preprocessing text import tokenizer from keras preprocessing sequence import pad_sequences import numpy as np maxlen cuts off reviews after words training_samples trains on samples validation_samples validates on , samples max_words tokenizer tokenizer(num_words=max_wordstokenizer fit_on_texts(textssequences tokenizer texts_to_sequences(textslicensed to considers only the top , words in the dataset
deep learning for text and sequences word_index tokenizer word_index print('found % unique tokens len(word_index)data pad_sequences(sequencesmaxlen=maxlenlabels np asarray(labelsprint('shape of data tensor:'data shapeprint('shape of label tensor:'labels shapeindices np arange(data shape[ ]np random shuffle(indicesdata data[indiceslabels labels[indicessplits the data into training set and validation setbut first shuffles the databecause you're starting with data in which samples are ordered (all negative firstthen all positivex_train data[:training_samplesy_train labels[:training_samplesx_val data[training_samplestraining_samples validation_samplesy_val labels[training_samplestraining_samples validation_samplesdownloading the glove word embeddings go to embeddings from english wikipedia it' an mb zip file called glove zipcontaining -dimensional embedding vectors for , words (or nonword tokensunzip it preprocessing the embeddings let' parse the unzipped file ( txt fileto build an index that maps words (as stringsto their vector representation (as number vectorslisting parsing he glove word-embeddings file glove_dir '/users/fchollet/downloads/glove bembeddings_index { open(os path join(glove_dir'glove txt')for line in fvalues line split(word values[ coefs np asarray(values[ :]dtype='float 'embeddings_index[wordcoefs close(print('found % word vectors len(embeddings_index)nextyou'll build an embedding matrix that you can load into an embedding layer it must be matrix of shape (max_wordsembedding_dim)where each entry contains the embedding_dim-dimensional vector for the word of index in the reference word index (built during tokenizationnote that index isn' supposed to stand for any word or token--it' placeholder licensed to
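Because the GloVe-parsing listing above lost its punctuation and the embedding file name in extraction, here is a cleaned-up sketch of the same loop. The file name glove.6B.100d.txt (the 100-dimensional variant from the glove.6B archive) and the download path are assumptions you may need to adjust:

import os
import numpy as np

glove_dir = '/Users/fchollet/Downloads/glove.6B'   # adjust to wherever you unzipped the archive

embeddings_index = {}
with open(os.path.join(glove_dir, 'glove.6B.100d.txt')) as f:
    for line in f:
        values = line.split()
        word = values[0]                                  # the token itself
        coefs = np.asarray(values[1:], dtype='float32')   # its embedding vector
        embeddings_index[word] = coefs

print('Found %s word vectors.' % len(embeddings_index))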
working with text data listing preparing he glove word-embeddings mat rix embedding_dim embedding_matrix np zeros((max_wordsembedding_dim)for wordi in word_index items()if max_wordsembedding_vector embeddings_index get(wordif embedding_vector is not noneembedding_matrix[iembedding_vector words not found in the embedding index will be all zeros defining model you'll use the same model architecture as before listing model definit ion from keras models import sequential from keras layers import embeddingflattendense model sequential(model add(embedding(max_wordsembedding_diminput_length=maxlen)model add(flatten()model add(dense( activation='relu')model add(dense( activation='sigmoid')model summary(loading the glove embeddings in the model the embedding layer has single weight matrixa float matrix where each entry is the word vector meant to be associated with index simple enough load the glove matrix you prepared into the embedding layerthe first layer in the model listing loading pret rained word embeddings int he embedding layer model layers[ set_weights([embedding_matrix]model layers[ trainable false additionallyyou'll freeze the embedding layer (set its trainable attribute to false)following the same rationale you're already familiar with in the context of pretrained convnet featureswhen parts of model are pretrained (like your embedding layerand parts are randomly initialized (like your classifier)the pretrained parts shouldn' be updated during trainingto avoid forgetting what they already know the large gradient updates triggered by the randomly initialized layers would be disruptive to the already-learned features licensed to
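As a design note: instead of calling set_weights and setting the trainable attribute after the model is built, Keras layers of this vintage also accept a weights constructor argument, so the frozen pretrained embedding can be declared in one place. A sketch, assuming max_words, embedding_dim, maxlen, and embedding_matrix are defined as in the listings above:

from keras.layers import Embedding

embedding_layer = Embedding(max_words, embedding_dim,
                            input_length=maxlen,
                            weights=[embedding_matrix],   # initialize with the GloVe matrix
                            trainable=False)              # freeze it during training

If you're unsure whether your Keras version supports the weights argument, the set_weights approach shown in the listing above is the safer path.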
deep learning for text and sequences training and evaluating the model compile and train the model listing training and evaluat ion model compile(optimizer='rmsprop'loss='binary_crossentropy'metrics=['acc']history model fit(x_trainy_trainepochs= batch_size= validation_data=(x_valy_val)model save_weights('pre_trained_glove_model 'nowplot the model' performance over time (see figures and listing plot ing he result import matplotlib pyplot as plt acc history history['acc'val_acc history history['val_acc'loss history history['loss'val_loss history history['val_loss'epochs range( len(acc plt plot(epochsacc'bo'label='training acc'plt plot(epochsval_acc' 'label='validation acc'plt title('training and validation accuracy'plt legend(plt figure(plt plot(epochsloss'bo'label='training loss'plt plot(epochsval_loss' 'label='validation loss'plt title('training and validation loss'plt legend(plt show(figure training and validat ion loss when using pretrained word embeddings licensed to
working with text data figure training and validat ion accuracy when using pret rained word embeddings the model quickly starts overfittingwhich is unsurprising given the small number of training samples validation accuracy has high variance for the same reasonbut it seems to reach the high note that your mileage may varybecause you have so few training samplesperformance is heavily dependent on exactly which samples you choose--and you're choosing them at random if this works poorly for youtry choosing different random set of samplesfor the sake of the exercise (in real lifeyou don' get to choose your training datayou can also train the same model without loading the pretrained word embeddings and without freezing the embedding layer in that caseyou'll learn taskspecific embedding of the input tokenswhich is generally more powerful than pretrained word embeddings when lots of data is available but in this caseyou have only training samples let' try it (see figures and listing training he same model wit hout pret rained word embeddings from keras models import sequential from keras layers import embeddingflattendense model sequential(model add(embedding(max_wordsembedding_diminput_length=maxlen)model add(flatten()model add(dense( activation='relu')model add(dense( activation='sigmoid')model summary(model compile(optimizer='rmsprop'loss='binary_crossentropy'metrics=['acc']history model fit(x_trainy_trainepochs= batch_size= validation_data=(x_valy_val)licensed to
deep learning for text and sequences figure training and validation loss wit hout using pret rained word embeddings figure training and validation accuracy wit hout using pret rained word embeddings validation accuracy stalls in the low so in this casepretrained word embeddings outperform jointly learned embeddings if you increase the number of training samplesthis will quickly stop being the case--try it as an exercise finallylet' evaluate the model on the test data firstyou need to tokenize the test data listing tokenizing he dat of he est set test_dir os path join(imdb_dir'test'labels [texts [for label_type in ['neg''pos']dir_name os path join(test_dirlabel_typefor fname in sorted(os listdir(dir_name))if fname[- :=txt' open(os path join(dir_namefname)texts append( read()licensed to
            f.close()
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)

Next, load and evaluate the first model.

Listing: evaluating the model on the test set

model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)

You get an appalling test accuracy. Working with just a handful of training samples is difficult!

Wrapping up
Now you're able to do the following:
Turn raw text into something a neural network can process.
Use the Embedding layer in a Keras model to learn task-specific token embeddings.
Use pretrained word embeddings to get an extra boost on small natural-language-processing problems.
deep learning for text and sequences understanding recurrent neural networks major characteristic of all neural networks you've seen so farsuch as densely connected networks and convnetsis that they have no memory each input shown to them is processed independentlywith no state kept in between inputs with such networksin order to process sequence or temporal series of data pointsyou have to show the entire sequence to the network at onceturn it into single data point for instancethis is what you did in the imdb examplean entire movie review was transformed into single large vector and processed in one go such networks are called feedforward networks in contrastas you're reading the present sentenceyou're processing it word by word--or rathereye saccade by eye saccade--while keeping memories of what came beforethis gives you fluid representation of the meaning conveyed by this sentence biological intelligence processes information incrementally while maintaining an internal model of what it' processingbuilt from past information and constantly updated as new information comes in recurrent neural network (rnnadopts the same principlealbeit in an extremely simplified versionit processes sequences by iterating through the sequence elements and maintaining state containing information relative output to what it has seen so far in effectan rnn is type of neural network that has an internal loop (see figure the state of the rnn is reset between processing two difrnn recurrent ferentindependent sequences (such as two different connection imdb reviews)so you still consider one sequence sininput gle data pointa single input to the network what changes is that this data point is no longer processed in figure recurrent single stepratherthe network internally loops over networka network wit loop sequence elements to make these notions of loop and state clearlet' implement the forward pass of toy rnn in numpy this rnn takes as input sequence of vectorswhich you'll encode as tensor of size (timestepsinput_featuresit loops over timestepsand at each timestepit considers its current state at and the input at (of shape (input_ features,)and combines them to obtain the output at you'll then set the state for the next step to be this previous output for the first timestepthe previous output isn' definedhencethere is no current state soyou'll initialize the state as an allzero vector called the initial state of the network in pseudocodethis is the rnn listing pseudocode rnn state_t the state at for input_t in input_sequenceiterates over sequence elements output_t (input_tstate_tstate_t output_t the previous output becomes the state for the next iteration licensed to
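A minimal concrete instance of this pseudocode, where the step function f simply adds the input to the state (purely illustrative, not a useful RNN), makes the state-carrying behavior visible:

def f(input_t, state_t):
    # a trivial step function: combine input and state by addition
    return input_t + state_t

input_sequence = [1, 2, 3, 4]
state_t = 0                       # initial state
outputs = []
for input_t in input_sequence:
    output_t = f(input_t, state_t)
    outputs.append(output_t)
    state_t = output_t            # the previous output becomes the next state

print(outputs)   # [1, 3, 6, 10]: each output depends on everything seen so far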
understanding recurrent neural networks you can even flesh out the function fthe transformation of the input and state into an output will be parameterized by two matricesw and uand bias vector it' similar to the transformation operated by densely connected layer in feedforward network listing more det ailed pseudocode for he rnn state_t for input_t in input_sequenceoutput_t activation(dot(winput_tdot(ustate_tbstate_t output_t to make these notions absolutely unambiguouslet' write naive numpy implementation of the forward pass of the simple rnn listing numpy implement at ion of simple rnn number of timesteps in the input sequence dimensionality of the input feature space import numpy as np timesteps input_features output_features input datarandom noise for the sake of the example dimensionality of the output feature space initial statean all-zero vector inputs np random random((timestepsinput_features)state_t np zeros((output_features,) np random random((output_featuresinput_features) np random random((output_featuresoutput_features) np random random((output_features,)creates random weight matrices input_t is vector of successive_outputs [shape (input_features,for input_t in inputsoutput_t np tanh(np dot(winput_tnp dot(ustate_tbsuccessive_outputs append(output_tstate_t output_t final_output_sequence np concatenate(successive_outputsaxis= the final output is tensor of shape (timestepsoutput_featuresstores this output in list combines the input with the current state (the previous outputto obtain the current output updates the state of the network for the next timestep easy enoughin summaryan rnn is for loop that reuses quantities computed during the previous iteration of the loopnothing more of coursethere are many different rnns fitting this definition that you could build--this example is one of the simplest rnn formulations rnns are characterized by their step functionsuch as the following function in this case (see figure )output_t np tanh(np dot(winput_tnp dot(ustate_tblicensed to
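The NumPy listing above was garbled in extraction and its dimension values were lost, so here is a cleaned-up, runnable version of the same forward pass. The dimensions are illustrative assumptions, and np.stack is used so that the final output really has shape (timesteps, output_features):

import numpy as np

timesteps = 100          # number of timesteps in the input sequence (illustrative)
input_features = 32      # dimensionality of the input feature space (illustrative)
output_features = 64     # dimensionality of the output feature space (illustrative)

inputs = np.random.random((timesteps, input_features))   # random noise as example input data
state_t = np.zeros((output_features,))                   # initial state: an all-zero vector

# random weight matrices and bias
W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features,))

successive_outputs = []
for input_t in inputs:   # input_t is a vector of shape (input_features,)
    # combine the input with the current state (the previous output) to get the current output
    output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b)
    successive_outputs.append(output_t)
    state_t = output_t   # update the state of the network for the next timestep

# the final output is a 2D tensor of shape (timesteps, output_features)
final_output_sequence = np.stack(successive_outputs, axis=0)
print(final_output_sequence.shape)   # (100, 64)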
deep learning for text and sequences output - output output_t activationw*input_t *state_t bostate input - figure output + state + input input + simple rnnunrolled over ime in this examplethe final output is tensor of shape (timestepsoutput_features)where each timestep is the output of the loop at time each timestep in the output tensor contains information about timesteps to in the input sequence--about the entire past for this reasonin many note casesyou don' need this full sequence of outputsyou just need the last output (output_t at the end of the loop)because it already contains information about the entire sequence recurrent layer in keras the process you just naively implemented in numpy corresponds to an actual keras layer--the simplernn layerfrom keras layers import simplernn there is one minor differencesimplernn processes batches of sequenceslike all other keras layersnot single sequence as in the numpy example this means it takes inputs of shape (batch_sizetimestepsinput_features)rather than (timestepsinput_featureslike all recurrent layers in kerassimplernn can be run in two different modesit can return either the full sequences of successive outputs for each timestep ( tensor of shape (batch_sizetimestepsoutput_features)or only the last output for each input sequence ( tensor of shape (batch_sizeoutput_features)these two modes are controlled by the return_sequences constructor argument let' look at an example that uses simplernn and returns only the output at the last timestepfrom keras models import sequential from keras layers import embeddingsimplernn model sequential(model add(embedding( )model add(simplernn( )model summary(licensed to
layer (typeoutput shape param ===============================================================embedding_ (embedding(nonenone simplernn_ (simplernn(none ===============================================================total params , trainable params , non-trainable params the following example returns the full state sequencemodel sequential(model add(embedding( )model add(simplernn( return_sequences=true)model summary(layer (typeoutput shape param ===============================================================embedding_ (embedding(nonenone simplernn_ (simplernn(nonenone ===============================================================total params , trainable params , non-trainable params it' sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of network in such setupyou have to get all of the intermediate layers to return full sequence of outputsmodel sequential(model add(embedding( )model add(simplernn( return_sequences=true)model add(simplernn( return_sequences=true)last layer only returns model add(simplernn( return_sequences=true)the last output model add(simplernn( )model summary(layer (typeoutput shape param ===============================================================embedding_ (embedding(nonenone simplernn_ (simplernn(nonenone simplernn_ (simplernn(nonenone simplernn_ (simplernn(nonenone simplernn_ (simplernn(none ===============================================================total params , trainable params , non-trainable params licensed to
deep learning for text and sequences nowlet' use such model on the imdb movie-review-classification problem firstpreprocess the data listing preparing he imdb dat from keras datasets import imdb from keras preprocessing import sequence max_features maxlen batch_size number of words to consider as features cuts off texts after this many words (among the max_features most common wordsprint('loading data '(input_trainy_train)(input_testy_testimdb load_datanum_words=max_featuresprint(len(input_train)'train sequences'print(len(input_test)'test sequences'print('pad sequences (samples time)'input_train sequence pad_sequences(input_trainmaxlen=maxleninput_test sequence pad_sequences(input_testmaxlen=maxlenprint('input_train shape:'input_train shapeprint('input_test shape:'input_test shapelet' train simple recurrent network using an embedding layer and simplernn layer listing training he model with embedding and simplernn layers from keras layers import dense model sequential(model add(embedding(max_features )model add(simplernn( )model add(dense( activation='sigmoid')model compile(optimizer='rmsprop'loss='binary_crossentropy'metrics=['acc']history model fit(input_trainy_trainepochs= batch_size= validation_split= nowlet' display the training and validation loss and accuracy (see figures and listing plot ing result import matplotlib pyplot as plt acc history history['acc'val_acc history history['val_acc'loss history history['loss'val_loss history history['val_loss'epochs range( len(acc plt plot(epochsacc'bo'label='training acc'plt plot(epochsval_acc' 'label='validation acc'licensed to
plt title('training and validation accuracy'plt legend(plt figure(plt plot(epochsloss'bo'label='training loss'plt plot(epochsval_loss' 'label='validation loss'plt title('training and validation loss'plt legend(plt show(figure training and validation loss on imdb wit simplernn figure training and validation accuracy on imdb wit simplernn as reminderin the first naive approach to this dataset got you to test accuracy of unfortunatelythis small recurrent network doesn' perform well compared to this baseline (only validation accuracypart of the problem is that your inputs only consider the first wordsrather than full sequences--hencethe rnn has access to less information than the earlier baseline model the remainder of the problem is that simplernn isn' good at processing long sequencessuch as text licensed to
deep learning for text and sequences other types of recurrent layers perform much better let' look at some moreadvanced layers understanding the lstm and gru layers simplernn isn' the only recurrent layer available in keras there are two otherslstm and gru in practiceyou'll always use one of thesebecause simplernn is generally too simplistic to be of real use simplernn has major issuealthough it should theoretically be able to retain at time information about inputs seen many timesteps beforein practicesuch long-term dependencies are impossible to learn this is due to the vanishing gradient probleman effect that is similar to what is observed with non-recurrent networks (feedforward networksthat are many layers deepas you keep adding layers to networkthe network eventually becomes untrainable the theoretical reasons for this effect were studied by hochreiterschmidhuberand bengio in the early the lstm and gru layers are designed to solve this problem let' consider the lstm layer the underlying long short-term memory (lstmalgorithm was developed by hochreiter and schmidhuber in ; it was the culmination of their research on the vanishing gradient problem this layer is variant of the simplernn layer you already know aboutit adds way to carry information across many timesteps imagine conveyor belt running parallel to the sequence you're processing information from the sequence can jump onto the conveyor belt at any pointbe transported to later timestepand jump offintactwhen you need it this is essentially what lstm doesit saves information for laterthus preventing older signals from gradually vanishing during processing to understand this in detaillet' start from the simplernn cell (see figure because you'll have lot of weight matricesindex the and matrices in the cell with the letter (wo and uofor output output - output output_t activationwo*input_t uo*state_t bostate input - figure output + state + input input + the starting point of an lstm layera simplernn seefor exampleyoshua bengiopatrice simardand paolo frasconi"learning long-term dependencies with gradient descent is difficult,ieee transactions on neural networks no ( sepp hochreiter and jurgen schmidhuber"long short-term memory,neural computation no ( licensed to
understanding recurrent neural networks let' add to this picture an additional data flow that carries information across timesteps call its values at different timesteps ctwhere stands for carry this information will have the following impact on the cellit will be combined with the input connection and the recurrent connection (via dense transformationa dot product with weight matrix followed by bias add and the application of an activation function)and it will affect the state being sent to the next timestep (via an activation function an multiplication operationconceptuallythe carry dataflow is way to modulate the next output and the next state (see figure simple so far output - - output ct ct state input - figure output + + output_t activationwo*input_t uo*state_t vo*c_t bocarry track ct state + input input + going from simplernn to an lstmadding carry rack now the subtletythe way the next value of the carry dataflow is computed it involves three distinct transformations all three have the form of simplernn celly activation(dot(state_tudot(input_twbbut all three transformations have their own weight matriceswhich you'll index with the letters ifand here' what you have so far (it may seem bit arbitrarybut bear with melisting pseudocode det ails of he lstm archit ect ure ( output_t activation(dot(state_tuodot(input_twodot(c_tvoboi_t activation(dot(state_tuidot(input_twibif_t activation(dot(state_tufdot(input_twfbfk_t activation(dot(state_tukdot(input_twkbkyou obtain the new carry state (the next c_tby combining i_tf_tand k_t listing pseudocode det ails of he lstm archit ect ure ( c_t+ i_t k_t c_t f_t add this as shown in figure and that' it not so complicated--merely tad complex licensed to
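To tie the pseudocode together, here is a naive NumPy sketch of a single LSTM step. The dimensions are illustrative, and the choice of sigmoid for the gate-like quantities and tanh elsewhere is a common convention that the generic activation(...) in the pseudocode leaves open:

import numpy as np

input_features = 32     # illustrative
output_features = 64    # illustrative

def rand(*shape):
    return np.random.random(shape)

# One (W, U, b) set per transformation -- output (o), i, f, k -- plus Vo,
# which folds the carry into the output; all randomly initialized for illustration
Wo, Uo, bo = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)
Vo = rand(output_features, output_features)
Wi, Ui, bi = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)
Wf, Uf, bf = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)
Wk, Uk, bk = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(input_t, state_t, c_t):
    # the three transformations feeding the carry track
    i_t = sigmoid(np.dot(Wi, input_t) + np.dot(Ui, state_t) + bi)
    f_t = sigmoid(np.dot(Wf, input_t) + np.dot(Uf, state_t) + bf)
    k_t = np.tanh(np.dot(Wk, input_t) + np.dot(Uk, state_t) + bk)
    c_t_next = i_t * k_t + c_t * f_t     # new carry: keep part of the old, add new information
    # the output combines the input, the state, and the carry
    output_t = np.tanh(np.dot(Wo, input_t) + np.dot(Uo, state_t) + np.dot(Vo, c_t) + bo)
    return output_t, c_t_next

# run one step on random data
state_t = np.zeros((output_features,))
c_t = np.zeros((output_features,))
output_t, c_t = lstm_step(np.random.random((input_features,)), state_t, c_t)
print(output_t.shape)   # (64,)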
deep learning for text and sequences output - - output compute new carry ct ct state compute new carry output_t activationwo*input_t uo*state_t vo*c_t boinput - figure output + input + carry track ct state + input + anatomy of an lstm if you want to get philosophicalyou can interpret what each of these operations is meant to do for instanceyou can say that multiplying c_t and f_t is way to deliberately forget irrelevant information in the carry dataflow meanwhilei_t and k_t provide information about the presentupdating the carry track with new information but at the end of the daythese interpretations don' mean muchbecause what these operations actually do is determined by the contents of the weights parameterizing themand the weights are learned in an end-to-end fashionstarting over with each training roundmaking it impossible to credit this or that operation with specific purpose the specification of an rnn cell (as just describeddetermines your hypothesis space--the space in which you'll search for good model configuration during training--but it doesn' determine what the cell doesthat is up to the cell weights the same cell with different weights can be doing very different things so the combination of operations making up an rnn cell is better interpreted as set of constraints on your searchnot as design in an engineering sense to researcherit seems that the choice of such constraints--the question of how to implement rnn cells--is better left to optimization algorithms (like genetic algorithms or reinforcement learning processesthan to human engineers and in the futurethat' how we'll build networks in summaryyou don' need to understand anything about the specific architecture of an lstm cellas humanit shouldn' be your job to understand it just keep in mind what the lstm cell is meant to doallow past information to be reinjected at later timethus fighting the vanishing-gradient problem concrete lstm example in keras now let' switch to more practical concernsyou'll set up model using an lstm layer and train it on the imdb data (see figures and the network is similar to the one with simplernn that was just presented you only specify the output dimensionality of the lstm layerleave every other argument (there are manyat the keras licensed to
defaults keras has good defaultsand things will almost always "just workwithout you having to spend time tuning parameters by hand listing using he lstm layer in keras from keras layers import lstm model sequential(model add(embedding(max_features )model add(lstm( )model add(dense( activation='sigmoid')model compile(optimizer='rmsprop'loss='binary_crossentropy'metrics=['acc']history model fit(input_trainy_trainepochs= batch_size= validation_split= figure training and validat ion loss on imdb wit lstm figure training and validat ion accuracy on imdb wit lstm licensed to
deep learning for text and sequences this timeyou achieve up to validation accuracy not badcertainly much better than the simplernn network--that' largely because lstm suffers much less from the vanishing-gradient problem--and slightly better than the fully connected approach from even though you're looking at less data than you were in you're truncating sequences after timestepswhereas in you were considering full sequences but this result isn' groundbreaking for such computationally intensive approach why isn' lstm performing betterone reason is that you made no effort to tune hyperparameters such as the embeddings dimensionality or the lstm output dimensionality another may be lack of regularization but honestlythe primary reason is that analyzing the globallong-term structure of the reviews (what lstm is good atisn' helpful for sentiment-analysis problem such basic problem is well solved by looking at what words occur in each reviewand at what frequency that' what the first fully connected approach looked at but there are far more difficult naturallanguage-processing problems out therewhere the strength of lstm will become apparentin particularquestion-answering and machine translation wrapping up now you understand the followingwhat rnns are and how they work what lstm isand why it works better on long sequences than naive rnn how to use keras rnn layers to process sequence data nextwe'll review number of more advanced features of rnnswhich can help you get the most out of your deep-learning sequence models licensed to
advanced use of recurrent neural networks in this sectionwe'll review three advanced techniques for improving the performance and generalization power of recurrent neural networks by the end of the sectionyou'll know most of what there is to know about using recurrent networks with keras we'll demonstrate all three concepts on temperature-forecasting problemwhere you have access to timeseries of data points coming from sensors installed on the roof of buildingsuch as temperatureair pressureand humiditywhich you use to predict what the temperature will be hours after the last data point this is fairly challenging problem that exemplifies many common difficulties encountered when working with timeseries we'll cover the following techniquesrecurrent dropout--this is specificbuilt-in way to use dropout to fight overfitting in recurrent layers stacking recurrent layers--this increases the representational power of the network (at the cost of higher computational loadsbidirectional recurrent layers--these present the same information to recurrent network in different waysincreasing accuracy and mitigating forgetting issues temperature-forecasting problem until nowthe only sequence data we've covered has been text datasuch as the imdb dataset and the reuters dataset but sequence data is found in many more problems than just language processing in all the examples in this sectionyou'll play with weather timeseries dataset recorded at the weather station at the max planck institute for biogeochemistry in jenagermany in this dataset different quantities (such air temperatureatmospheric pressurehumiditywind directionand so onwere recorded every minutesover several years the original data goes back to but this example is limited to data from - this dataset is perfect for learning to work with numerical timeseries you'll use it to build model that takes as input some data from the recent past ( few daysworth of data pointsand predicts the air temperature hours in the future download and uncompress the data as followscd ~/downloads mkdir jena_climate cd jena_climate wget unzip jena_climate_ csv zip let' look at the data olaf kollewww bgc-jena mpg de/wetter licensed to
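As a first look at the data, here is a sketch for reading the CSV and inspecting its header. The file name jena_climate_2009_2016.csv is an assumption based on the zip archive named in the download commands above (the year range was lost in extraction), and the directory path should match wherever you uncompressed it:

import os

data_dir = os.path.join(os.path.expanduser('~'), 'Downloads', 'jena_climate')   # adjust as needed
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')   # assumed file name after unzipping

with open(fname) as f:
    data = f.read()

lines = data.split('\n')
header = lines[0].split(',')    # the date-time stamp plus the recorded weather quantities
lines = lines[1:]

print(header)
print(len(lines))               # number of recorded timesteps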