Stochasticity is good to induce robustness. Because GAN training results in a dynamic equilibrium, GANs are likely to get stuck in all sorts of ways. Introducing randomness during training helps prevent this. We introduce randomness in two ways: by using dropout in the discriminator and by adding random noise to the labels for the discriminator.

Sparse gradients can hinder GAN training. In deep learning, sparsity is often a desirable property, but not in GANs. Two things can induce gradient sparsity: max-pooling operations and ReLU activations. Instead of max pooling, we recommend using strided convolutions for downsampling, and we recommend using a LeakyReLU layer instead of a ReLU activation. It's similar to ReLU, but it relaxes sparsity constraints by allowing small negative activation values.

In generated images, it's common to see checkerboard artifacts caused by unequal coverage of the pixel space in the generator (see the figure below). To fix this, we use a kernel size that's divisible by the stride size whenever we use a strided Conv2DTranspose or Conv2D in both the generator and the discriminator.

Figure: Checkerboard artifacts caused by mismatching strides and kernel sizes, resulting in unequal pixel-space coverage: one of the many gotchas of GANs.

The generator

First, let's develop a generator model that turns a vector (from the latent space; during training it will be sampled at random) into a candidate image. One of the many issues that commonly arise with GANs is that the generator gets stuck with generated images that look like noise. A possible solution is to use dropout on both the discriminator and the generator.

Listing: The GAN generator network

import keras
from keras import layers
import numpy as np

latent_dim = 32
height = 32
width = 32
channels = 3
generator_input = keras.Input(shape=(latent_dim,))

x = layers.Dense(128 * 16 * 16)(generator_input)
x = layers.LeakyReLU()(x)
x = layers.Reshape((16, 16, 128))(x)            # Transforms the input into a 16 x 16, 128-channel feature map

x = layers.Conv2D(256, 5, padding='same')(x)
x = layers.LeakyReLU()(x)

x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x)    # Upsamples to 32 x 32
x = layers.LeakyReLU()(x)

x = layers.Conv2D(256, 5, padding='same')(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(256, 5, padding='same')(x)
x = layers.LeakyReLU()(x)

x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x)    # Produces a 32 x 32, 3-channel feature map (the shape of a CIFAR10 image)

generator = keras.models.Model(generator_input, x)    # Instantiates the generator model, which maps the input of shape (latent_dim,) into an image of shape (32, 32, 3)
generator.summary()

The discriminator

Next, you'll develop a discriminator model that takes as input a candidate image (real or synthetic) and classifies it into one of two classes: "generated image" or "real image that comes from the training set."

Listing: The GAN discriminator network

discriminator_input = layers.Input(shape=(height, width, channels))
x = layers.Conv2D(128, 3)(discriminator_input)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128, 4, strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128, 4, strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128, 4, strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Flatten()(x)

x = layers.Dropout(0.4)(x)                      # One dropout layer: an important trick!

x = layers.Dense(1, activation='sigmoid')(x)    # Classification layer

discriminator = keras.models.Model(discriminator_input, x)    # Instantiates the discriminator model, which turns a (32, 32, 3) input into a binary classification decision (fake/real)
discriminator.summary()

discriminator_optimizer = keras.optimizers.RMSprop(
    lr=0.0008,
    clipvalue=1.0,    # Uses gradient clipping (by value) in the optimizer to stabilize training
    decay=1e-8)       # Uses learning-rate decay
discriminator.compile(optimizer=discriminator_optimizer,
                      loss='binary_crossentropy')
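Before chaining these two models together, it's worth sanity-checking their shapes. The following short snippet is not part of the original listings; the batch size of 10 is an arbitrary choice. It pushes random latent vectors through the generator and then scores the resulting images with the discriminator:

import numpy as np

test_vectors = np.random.normal(size=(10, latent_dim))    # 10 random points in the latent space
fake_images = generator.predict(test_vectors)             # Decoded images
print(fake_images.shape)                                  # Expect (10, 32, 32, 3)

fake_scores = discriminator.predict(fake_images)          # One "realness" score per image, in [0, 1]
print(fake_scores.shape)                                  # Expect (10, 1)

If the printed shapes are (10, 32, 32, 3) and (10, 1), the two models are compatible and can be chained into the GAN.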
The adversarial network

Finally, you'll set up the GAN, which chains the generator and the discriminator. When trained, this model will move the generator in a direction that improves its ability to fool the discriminator. This model turns latent-space points into a classification decision, "fake" or "real," and it's meant to be trained with labels that are always "these are real images." So, training gan will update the weights of generator in a way that makes discriminator more likely to predict "real" when looking at fake images. It's very important to note that you set the discriminator to be frozen during training (non-trainable): its weights won't be updated when training gan. If the discriminator weights could be updated during this process, then you'd be training the discriminator to always predict "real," which isn't what you want!

Listing: The adversarial network

discriminator.trainable = False    # Sets discriminator weights to non-trainable (this will only apply to the gan model)

gan_input = keras.Input(shape=(latent_dim,))
gan_output = discriminator(generator(gan_input))
gan = keras.models.Model(gan_input, gan_output)

gan_optimizer = keras.optimizers.RMSprop(lr=0.0004, clipvalue=1.0, decay=1e-8)
gan.compile(optimizer=gan_optimizer, loss='binary_crossentropy')

How to train your DCGAN

Now you can begin training. To recapitulate, this is what the training loop looks like schematically. For each epoch, you do the following:

1. Draw random points in the latent space (random noise).
2. Generate images with generator using this random noise.
3. Mix the generated images with real ones.
4. Train discriminator using these mixed images, with corresponding targets: either "real" (for the real images) or "fake" (for the generated images).
5. Draw new random points in the latent space.
6. Train gan using these random vectors, with targets that all say "these are real images." This updates the weights of the generator (only, because the discriminator is frozen inside gan) to move them toward getting the discriminator to predict "these are real images" for generated images: this trains the generator to fool the discriminator.

Let's implement it.

Listing: Implementing GAN training

import os
from keras.preprocessing import image

(x_train, y_train), (_, _) = keras.datasets.cifar10.load_data()    # Loads CIFAR10 data
x_train = x_train[y_train.flatten() == 6]    # Selects frog images (class 6)

x_train = x_train.reshape(
    (x_train.shape[0],) +
    (height, width, channels)).astype('float32') / 255.    # Normalizes data

iterations = 10000
batch_size = 20
save_dir = 'your_dir'    # Specifies where you want to save generated images

start = 0
for step in range(iterations):
    random_latent_vectors = np.random.normal(size=(batch_size,
                                                   latent_dim))    # Samples random points in the latent space

    generated_images = generator.predict(random_latent_vectors)    # Decodes them to fake images

    stop = start + batch_size
    real_images = x_train[start: stop]
    combined_images = np.concatenate([generated_images,
                                      real_images])    # Combines them with real images

    labels = np.concatenate([np.ones((batch_size, 1)),
                             np.zeros((batch_size, 1))])    # Assembles labels, discriminating real from fake images
    labels += 0.05 * np.random.random(labels.shape)    # Adds random noise to the labels: an important trick!

    d_loss = discriminator.train_on_batch(combined_images, labels)    # Trains the discriminator

    random_latent_vectors = np.random.normal(size=(batch_size,
                                                   latent_dim))    # Samples random points in the latent space

    misleading_targets = np.zeros((batch_size, 1))    # Assembles labels that say "these are all real images" (it's a lie!)

    a_loss = gan.train_on_batch(random_latent_vectors,
                                misleading_targets)    # Trains the generator (via the gan model, where the discriminator weights are frozen)

    start += batch_size
    if start > len(x_train) - batch_size:
        start = 0

    if step % 100 == 0:    # Occasionally saves and plots (every 100 steps)
        gan.save_weights('gan.h5')    # Saves model weights

        print('discriminator loss:', d_loss)    # Prints metrics
        print('adversarial loss:', a_loss)

        img = image.array_to_img(generated_images[0] * 255., scale=False)
        img.save(os.path.join(save_dir,
                              'generated_frog' + str(step) + '.png'))    # Saves one generated image

        img = image.array_to_img(real_images[0] * 255., scale=False)
        img.save(os.path.join(save_dir,
                              'real_frog' + str(step) + '.png'))    # Saves one real image for comparison
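Once the loop has run for a while, you can reload the saved weights and sample brand-new frogs straight from the generator. The sketch below is not part of the original listing; it assumes the models defined above, the 'gan.h5' weights file written by the training loop, and an arbitrary choice of 100 samples:

import numpy as np
from keras.preprocessing import image

gan.load_weights('gan.h5')    # Restores the generator and discriminator weights saved during training

random_latent_vectors = np.random.normal(size=(100, latent_dim))    # 100 new points in the latent space
new_frogs = generator.predict(random_latent_vectors)                # Shape (100, 32, 32, 3)

for i in range(new_frogs.shape[0]):
    img = image.array_to_img(new_frogs[i] * 255., scale=False)      # Same save idiom as in the training loop
    img.save('sampled_frog' + str(i) + '.png')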
When training, you may see the adversarial loss begin to increase considerably, while the discriminative loss tends to zero: the discriminator may end up dominating the generator. If that's the case, try reducing the discriminator learning rate, and increase the dropout rate of the discriminator.

Figure: Play the discriminator: in each row, two images were dreamed up by the GAN, and one image comes from the training set. Can you tell them apart? (Answers: the real images in each column are middle, top, bottom, middle.)

Wrapping up

- A GAN consists of a generator network coupled with a discriminator network. The discriminator is trained to differentiate between the output of the generator and real images from a training dataset, and the generator is trained to fool the discriminator. Remarkably, the generator never sees images from the training set directly; the information it has about the data comes from the discriminator.
- GANs are difficult to train, because training a GAN is a dynamic process rather than a simple gradient-descent process with a fixed loss landscape. Getting a GAN to train correctly requires using a number of heuristic tricks, as well as extensive tuning.
- GANs can potentially produce highly realistic images. But unlike VAEs, the latent space they learn doesn't have a neat continuous structure and thus may not be suited for certain practical applications, such as image editing via latent-space concept vectors.
Summary

With creative applications of deep learning, deep networks go beyond annotating existing content and start generating their own. You learned the following:

- How to generate sequence data, one timestep at a time. This is applicable to text generation and also to note-by-note music generation, or any other type of timeseries data.
- How DeepDream works: by maximizing convnet layer activations through gradient ascent in input space.
- How to perform style transfer, where a content image and a style image are combined to produce interesting-looking results.
- What GANs and VAEs are, how they can be used to dream up new images, and how latent-space concept vectors can be used for image editing.

These few techniques cover only the basics of this fast-expanding field. There's a lot more to discover out there: generative deep learning is deserving of an entire book of its own.
Conclusions

This chapter covers:

- Important takeaways from this book
- The limitations of deep learning
- The future of deep learning, machine learning, and AI
- Resources for learning further and working in the field

You've almost reached the end of this book. This last chapter will summarize and review core concepts while also expanding your horizons beyond the relatively basic notions you've learned so far. Understanding deep learning and AI is a journey, and finishing this book is merely the first step on it. I want to make sure you realize this and are properly equipped to take the next steps of this journey on your own.

We'll start with a bird's-eye view of what you should take away from this book. This should refresh your memory regarding some of the concepts you've learned. Next, we'll present an overview of some key limitations of deep learning. To use a tool appropriately, you should not only understand what it can do but also be aware of what it can't do. Finally, I'll offer some speculative thoughts about the future evolution of the fields of deep learning, machine learning, and AI. This should be especially interesting to you if you'd like to get into fundamental research. The chapter ends with a short list of resources and strategies for learning further about AI and staying up to date with new advances.
Key concepts in review

This section briefly synthesizes the key takeaways from this book. If you ever need a quick refresher to help you recall what you've learned, you can read these few pages.

Various approaches to AI

First of all, deep learning isn't synonymous with AI or even with machine learning. Artificial intelligence is an ancient, broad field that can generally be defined as "all attempts to automate cognitive processes": in other words, the automation of thought. This can range from the very basic, such as an Excel spreadsheet, to the very advanced, like a humanoid robot that can walk and talk.

Machine learning is a specific subfield of AI that aims at automatically developing programs (called models) purely from exposure to training data. This process of turning data into a program is called learning. Although machine learning has been around for a long time, it only started to take off in the 1990s.

Deep learning is one of many branches of machine learning, where the models are long chains of geometric functions, applied one after the other. These operations are structured into modules called layers: deep-learning models are typically stacks of layers, or, more generally, graphs of layers. These layers are parameterized by weights, which are the parameters learned during training. The knowledge of a model is stored in its weights, and the process of learning consists of finding good values for these weights.

Even though deep learning is just one among many approaches to machine learning, it isn't on an equal footing with the others. Deep learning is a breakout success. Here's why.

What makes deep learning special within the field of machine learning

In the span of only a few years, deep learning has achieved tremendous breakthroughs across a wide range of tasks that have been historically perceived as extremely difficult for computers, especially in the area of machine perception: extracting useful information from images, videos, sound, and more. Given sufficient training data (in particular, training data appropriately labeled by humans), it's possible to extract from perceptual data almost anything that a human could extract. Hence, it's sometimes said that deep learning has solved perception, although that's true only for a fairly narrow definition of perception.

Due to its unprecedented technical successes, deep learning has singlehandedly brought about the third and by far the largest AI summer: a period of intense interest, investment, and hype in the field of AI. As this book is being written, we're in the middle of it. Whether this period will end in the near future, and what happens after it ends, are topics of debate. One thing is certain: in stark contrast with previous AI summers, deep learning has provided enormous business value to a number of large technology companies, enabling human-level speech recognition, smart assistants, human-level
image classification, vastly improved machine translation, and more. The hype may (and likely will) recede, but the sustained economic and technological impact of deep learning will remain. In that sense, deep learning could be analogous to the internet: it may be overly hyped up for a few years, but in the longer term it will still be a major revolution that will transform our economy and our lives.

I'm particularly optimistic about deep learning because even if we were to make no further technological progress in the next decade, deploying existing algorithms to every applicable problem would be a game changer for most industries. Deep learning is nothing short of a revolution, and progress is currently happening at an incredibly fast rate, due to an exponential investment in resources and headcount. From where I stand, the future looks bright, although short-term expectations are somewhat overoptimistic; deploying deep learning to the full extent of its potential will take well over a decade.

How to think about deep learning

The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine-perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it."

In deep learning, everything is a vector: everything is a point in a geometric space. Model inputs (text, images, and so on) and targets are first vectorized: turned into an initial input vector space and target vector space. Each layer in a deep-learning model operates one simple geometric transformation on the data that goes through it. Together, the chain of layers in the model forms one complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parameterized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means the geometric morphing from inputs to outputs must be smooth and continuous, a significant constraint.

The entire process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep-learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.

(Richard Feynman, interview, The World from Another Point of View, Yorkshire Television.)
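To make this concrete, here is a minimal sketch of that idea in plain NumPy (toy data, made-up layer sizes, and a simple mean-squared-error objective, none of which come from the text): two chained affine-plus-ReLU transformations whose weights are nudged by hand-written gradient descent.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: map 4-dimensional input points to 1-dimensional targets.
x = rng.normal(size=(64, 4))
y = rng.normal(size=(64, 1))

# Two layers = two simple geometric transformations chained together.
w1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)

learning_rate = 0.01
for step in range(100):
    # Forward pass: affine transform -> ReLU -> affine transform.
    h_pre = x.dot(w1) + b1
    h = np.maximum(h_pre, 0.0)
    pred = h.dot(w2) + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: because every step is differentiable, gradients flow
    # from the loss back to every weight (this is what backpropagation does).
    grad_pred = 2.0 * (pred - y) / len(x)
    grad_w2 = h.T.dot(grad_pred)
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred.dot(w2.T) * (h_pre > 0)
    grad_w1 = x.T.dot(grad_h)
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent: iteratively update the weights to reduce the loss.
    w1 -= learning_rate * grad_w1
    b1 -= learning_rate * grad_b1
    w2 -= learning_rate * grad_w2
    b2 -= learning_rate * grad_b2

print('final loss:', loss)

Keras automates exactly this loop (the forward pass, the gradient computation, and the weight updates) for arbitrarily deep stacks of layers.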
That's the magic of deep learning: turning meaning into vectors, into geometric spaces, and then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.

The whole thing hinges on a single core idea: that meaning is derived from the pairwise relationship between things (between words in a language, between pixels in an image, and so on) and that these relationships can be captured by a distance function. But note that whether the brain implements meaning via geometric spaces is an entirely separate question. Vector spaces are efficient to work with from a computational standpoint, but different data structures for intelligence can easily be envisioned, in particular, graphs. Neural networks initially emerged from the idea of using graphs as a way to encode meaning, which is why they're named neural networks; the surrounding field of research used to be called connectionism. Nowadays the name neural network exists purely for historical reasons; it's an extremely misleading name because they're neither neural nor networks. In particular, neural networks have hardly anything to do with the brain. A more appropriate name would have been layered representations learning or hierarchical representations learning, or maybe even deep differentiable models or chained geometric transforms, to emphasize the fact that continuous geometric space manipulation is at their core.

Key enabling technologies

The technological revolution that's currently unfolding didn't start with any single breakthrough invention. Rather, like any other revolution, it's the product of a vast accumulation of enabling factors: slowly at first, and then suddenly. In the case of deep learning, we can point out the following key factors:

- Incremental algorithmic innovations, first spread over two decades (starting with backpropagation) and then happening increasingly faster as more research effort was poured into deep learning.
- The availability of large amounts of perceptual data, which is a requirement in order to realize that sufficiently large models trained on sufficiently large data are all we need. This is in turn a byproduct of the rise of the consumer internet and Moore's law applied to storage media.
- The availability of fast, highly parallel computation hardware at a low price, especially the GPUs produced by NVIDIA: first gaming GPUs and then chips designed from the ground up for deep learning. Early on, NVIDIA CEO Jensen Huang took note of the deep-learning boom and decided to bet the company's future on it.
- A complex stack of software layers that makes this computational power available to humans: the CUDA language, frameworks like TensorFlow that do automatic differentiation, and Keras, which makes deep learning accessible to most people.
In the future, deep learning will not only be used by specialists (researchers, graduate students, and engineers with an academic profile) but will also be a tool in the toolbox of every developer, much like web technology today. Everyone needs to build intelligent apps: just as every business today needs a website, every product will need to intelligently make sense of user-generated data. Bringing about this future will require us to build tools that make deep learning radically easy to use and accessible to anyone with basic coding abilities. Keras is the first major step in that direction.

The universal machine-learning workflow

Having access to an extremely powerful tool for creating models that map any input space to any target space is great, but the difficult part of the machine-learning workflow is often everything that comes before designing and training such models (and, for production models, what comes after, as well). Understanding the problem domain so as to be able to determine what to attempt to predict, given what data, and how to measure success, is a prerequisite for any successful application of machine learning, and it isn't something that advanced tools like Keras and TensorFlow can help you with. As a reminder, here's a quick summary of the typical machine-learning workflow as described earlier in this book:

1. Define the problem: What data is available, and what are you trying to predict? Will you need to collect more data or hire people to manually label a dataset?
2. Identify a way to reliably measure success on your goal. For simple tasks, this may be prediction accuracy, but in many cases it will require sophisticated domain-specific metrics.
3. Prepare the validation process that you'll use to evaluate your models. In particular, you should define a training set, a validation set, and a test set. The validation- and test-set labels shouldn't leak into the training data: for instance, with temporal prediction, the validation and test data should be posterior to the training data.
4. Vectorize the data by turning it into vectors and preprocessing it in a way that makes it more easily approachable by a neural network (normalization, and so on).
5. Develop a first model that beats a trivial common-sense baseline, thus demonstrating that machine learning can work on your problem. This may not always be the case!
6. Gradually refine your model architecture by tuning hyperparameters and adding regularization. Make changes based on performance on the validation data only, not the test data or the training data. Remember that you should get your model to overfit (thus identifying a model capacity level that's greater than you need) and only then begin to add regularization or downsize your model.
7. Be aware of validation-set overfitting when tuning hyperparameters: the fact that your hyperparameters may end up being overspecialized to the validation set. Avoiding this is the purpose of having a separate test set!

Key network architectures

The three families of network architectures that you should be familiar with are densely connected networks, convolutional networks, and recurrent networks. Each type of network is meant for a specific input modality: a network architecture (dense, convolutional, recurrent) encodes assumptions about the structure of the data, a hypothesis space within which the search for a good model will proceed. Whether a given architecture will work on a given problem depends entirely on the match between the structure of the data and the assumptions of the network architecture.

These different network types can easily be combined to achieve larger multimodal networks, much as you combine LEGO bricks. In a way, deep-learning layers are LEGO bricks for information processing. Here's a quick overview of the mapping between input modalities and appropriate network architectures:

- Vector data: densely connected network (Dense layers)
- Image data: 2D convnets
- Sound data (for example, waveform): either 1D convnets (preferred) or RNNs
- Text data: either 1D convnets (preferred) or RNNs
- Timeseries data: either RNNs (preferred) or 1D convnets
- Other types of sequence data: either RNNs or 1D convnets. Prefer RNNs if data ordering is strongly meaningful (for example, for timeseries, but not for text)
- Video data: either 3D convnets (if you need to capture motion effects) or a combination of a frame-level 2D convnet for feature extraction followed by either an RNN or a 1D convnet to process the resulting sequences
- Volumetric data: 3D convnets

Now, let's quickly review the specificities of each network architecture.

Densely connected networks

A densely connected network is a stack of Dense layers, meant to process vector data (batches of vectors). Such networks assume no specific structure in the input features: they're called densely connected because the units of a Dense layer are connected to every other unit. The layer attempts to map relationships between any two input features; this is unlike a 2D convolution layer, for instance, which only looks at local relationships.

Densely connected networks are most commonly used for categorical data (for example, where the input features are lists of attributes), such as the Boston housing price dataset used earlier in the book. They're also used as the final classification or regression stage of most networks. For instance, the convnets covered earlier typically end with one or two Dense layers, and so do the recurrent networks.
Remember: to perform binary classification, end your stack of layers with a Dense layer with a single unit and a sigmoid activation, and use binary_crossentropy as the loss. Your targets should be either 0 or 1.

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(32, activation='relu',
                       input_shape=(num_input_features,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

To perform single-label categorical classification (where each sample has exactly one class, no more), end your stack of layers with a Dense layer with a number of units equal to the number of classes, and a softmax activation. If your targets are one-hot encoded, use categorical_crossentropy as the loss; if they're integers, use sparse_categorical_crossentropy:

model = models.Sequential()
model.add(layers.Dense(32, activation='relu',
                       input_shape=(num_input_features,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

To perform multilabel categorical classification (where each sample can have several classes), end your stack of layers with a Dense layer with a number of units equal to the number of classes and a sigmoid activation, and use binary_crossentropy as the loss. Your targets should be k-hot encoded:

model = models.Sequential()
model.add(layers.Dense(32, activation='relu',
                       input_shape=(num_input_features,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(num_classes, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

To perform regression toward a vector of continuous values, end your stack of layers with a Dense layer with a number of units equal to the number of values you're trying to predict (often a single one, such as the price of a house), and no activation. Several losses can be used for regression, most commonly mean_squared_error (MSE) and mean_absolute_error (MAE):

model = models.Sequential()
model.add(layers.Dense(32, activation='relu',
                       input_shape=(num_input_features,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(num_values))
model.compile(optimizer='rmsprop', loss='mse')
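Any of these templates becomes a trained model as soon as you feed it arrays of the matching shape. As a quick, illustrative usage example (random data and an assumed feature count, not taken from the text), here is the binary-classification template fit end to end:

import numpy as np
from keras import models
from keras import layers

num_input_features = 20                                   # Assumed feature count for this example
samples = np.random.random((1000, num_input_features))    # 1,000 random input vectors
targets = np.random.randint(0, 2, size=(1000, 1))         # Random 0/1 labels

model = models.Sequential()
model.add(layers.Dense(32, activation='relu',
                       input_shape=(num_input_features,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])

model.fit(samples, targets, epochs=5, batch_size=32,
          validation_split=0.2)                           # Holds out 20% of the samples for validation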
Convnets

Convolution layers look at spatially local patterns by applying the same geometric transformation to different spatial locations (patches) in an input tensor. This results in representations that are translation invariant, making convolution layers highly data efficient and modular. This idea is applicable to spaces of any dimensionality: 1D (sequences), 2D (images), 3D (volumes), and so on. You can use the Conv1D layer to process sequences (especially text; it doesn't work as well on timeseries, which often don't follow the translation-invariance assumption), the Conv2D layer to process images, and the Conv3D layer to process volumes.

Convnets, or convolutional networks, consist of stacks of convolution and max-pooling layers. The pooling layers let you spatially downsample the data, which is required to keep feature maps to a reasonable size as the number of features grows, and to allow subsequent convolution layers to "see" a greater spatial extent of the inputs. Convnets are often ended with either a Flatten operation or a global pooling layer, turning spatial feature maps into vectors, followed by Dense layers to achieve classification or regression.

Note that it's highly likely that regular convolutions will soon be mostly (or completely) replaced by an equivalent but faster and representationally efficient alternative: the depthwise separable convolution (SeparableConv2D layer). This is true for 1D, 2D, and 3D inputs. When you're building a new network from scratch, using depthwise separable convolutions is definitely the way to go. The SeparableConv2D layer can be used as a drop-in replacement for Conv2D, resulting in a smaller, faster network that also performs better on its task.

Here's a typical image-classification network (categorical classification, in this case):

model = models.Sequential()
model.add(layers.SeparableConv2D(32, 3, activation='relu',
                                 input_shape=(height, width, channels)))
model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.MaxPooling2D(2))

model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.SeparableConv2D(128, 3, activation='relu'))
model.add(layers.MaxPooling2D(2))

model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.SeparableConv2D(128, 3, activation='relu'))
model.add(layers.GlobalAveragePooling2D())

model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

RNNs

Recurrent neural networks (RNNs) work by processing sequences of inputs one timestep at a time and maintaining a state throughout (a state is typically a vector or set of vectors,
a point in a geometric space of states). They should be used preferentially over 1D convnets in the case of sequences where patterns of interest aren't invariant by temporal translation (for instance, timeseries data where the recent past is more important than the distant past).

Three RNN layers are available in Keras: SimpleRNN, GRU, and LSTM. For most practical purposes, you should use either GRU or LSTM. LSTM is the more powerful of the two but is also more expensive; you can think of GRU as a simpler, cheaper alternative to it.

In order to stack multiple RNN layers on top of each other, each layer prior to the last layer in the stack should return the full sequence of its outputs (each input timestep will correspond to an output timestep); if you aren't stacking any further RNN layers, then it's common to return only the last output, which contains information about the entire sequence.

Following is a single RNN layer for binary classification of vector sequences:

model = models.Sequential()
model.add(layers.LSTM(32, input_shape=(num_timesteps, num_features)))
model.add(layers.Dense(num_classes, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

And this is a stacked RNN for binary classification of vector sequences:

model = models.Sequential()
model.add(layers.LSTM(32, return_sequences=True,
                      input_shape=(num_timesteps, num_features)))
model.add(layers.LSTM(32, return_sequences=True))
model.add(layers.LSTM(32))
model.add(layers.Dense(num_classes, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

The space of possibilities

What will you build with deep learning? Remember: building deep-learning models is like playing with LEGO bricks. Layers can be plugged together to map essentially anything to anything, given that you have appropriate training data available and that the mapping is achievable via a continuous geometric transformation of reasonable complexity. The space of possibilities is infinite. This section offers a few examples to inspire you to think beyond the basic classification and regression tasks that have traditionally been the bread and butter of machine learning.

I've sorted my suggested applications by input and output modalities. Note that quite a few of them stretch the limits of what is possible; although a model could be trained on all of these tasks, in some cases such a model probably wouldn't generalize far from its training data. The next two sections will address how these limitations could be lifted in the future.
Mapping vector data to vector data

- Predictive healthcare: mapping patient medical records to predictions of patient outcomes
- Behavioral targeting: mapping a set of website attributes with data on how long a user will spend on the website
- Product quality control: mapping a set of attributes relative to an instance of a manufactured product with the probability that the product will fail by next year

Mapping image data to vector data

- Doctor assistant: mapping slides of medical images with a prediction about the presence of a tumor
- Self-driving vehicle: mapping car dash-cam video frames to steering-wheel angle commands
- Board game AI: mapping Go and chess boards to the next player move
- Diet helper: mapping pictures of a dish to its calorie count
- Age prediction: mapping selfies to the age of the person

Mapping timeseries data to vector data

- Weather prediction: mapping timeseries of weather data in a grid of locations to weather data the following week at a specific location
- Brain-computer interfaces: mapping timeseries of magnetoencephalogram (MEG) data to computer commands
- Behavioral targeting: mapping timeseries of user interactions on a website to the probability that a user will buy something

Mapping text to text

- Smart reply: mapping emails to possible one-line replies
- Answering questions: mapping general-knowledge questions to answers
- Summarization: mapping a long article to a short summary of the article

Mapping images to text

- Captioning: mapping images to short captions describing the contents of the images

Mapping text to images

- Conditioned image generation: mapping a short text description to images matching the description
- Logo generation/selection: mapping the name and description of a company to the company's logo

Mapping images to images

- Super-resolution: mapping downsized images to higher-resolution versions of the same images
- Visual depth sensing: mapping images of indoor environments to maps of depth predictions
Mapping images and text to text

- Visual QA: mapping images and natural-language questions about the contents of images to natural-language answers

Mapping video and text to text

- Video QA: mapping short videos and natural-language questions about the contents of videos to natural-language answers

Almost anything is possible, but not quite anything. Let's see in the next section what we can't do with deep learning.
The limitations of deep learning

The space of applications that can be implemented with deep learning is nearly infinite. And yet, many applications are completely out of reach for current deep-learning techniques, even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands, even millions, of English-language descriptions of the features of a software product, written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep-learning model to read a product description and generate the appropriate codebase. That's just one example among many.

In general, anything that requires reasoning (like programming or applying the scientific method), long-term planning, and algorithmic data manipulation is out of reach for deep-learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.

This is because a deep-learning model is just a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y. A deep-learning model can be interpreted as a kind of program; but, inversely, most programs can't be expressed as deep-learning models. For most tasks, either there exists no corresponding deep-neural network that solves the task or, even if one exists, it may not be learnable: the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.

Scaling up current deep-learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It won't solve the more fundamental problems that deep-learning models are limited in what they can represent and that most of the programs you may wish to learn can't be expressed as a continuous geometric morphing of a data manifold.

The risk of anthropomorphizing machine-learning models

One real risk with contemporary AI is misinterpreting what deep-learning models do and overestimating their abilities. A fundamental feature of humans is our theory of mind: our tendency to project intentions, beliefs, and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy," in our minds. Applied to deep learning, this means that, for instance, when we're able to somewhat successfully train a model to generate captions to describe pictures, we're led to believe that the model "understands" the contents of the pictures and the captions it generates. Then we're surprised when any slight departure from the sort of images present in the training data causes the model to generate completely absurd captions (see the figure below).
Figure: Failure of an image-captioning system based on deep learning. (Generated caption: "The boy is holding a baseball bat.")

In particular, this is highlighted by adversarial examples, which are samples fed to a deep-learning network that are designed to trick the model into misclassifying them. You're already aware that, for instance, it's possible to do gradient ascent in input space to generate inputs that maximize the activation of some convnet filter; this is the basis of the filter-visualization technique introduced earlier in the book, as well as the DeepDream algorithm. Similarly, through gradient ascent, you can slightly modify an image in order to maximize the class prediction for a given class. By taking a picture of a panda and adding to it a "gibbon" gradient, we can get a neural network to classify the panda as a gibbon. This evidences both the brittleness of these models and the deep difference between their input-to-output mapping and our own human perception.

Figure: An adversarial example: imperceptible changes in an image can upend a model's classification of the image. (A panda image, plus a small "gibbon"-class gradient, yields an adversarial example that the model classifies as a gibbon.)
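As a rough sketch of how such an adversarial image can be produced with the book-era Keras API (TensorFlow 1.x backend), here is gradient ascent on a target-class score with respect to the input pixels. The choice of pretrained network, the class index, the step size, and the iteration count are illustrative assumptions, not taken from the text:

from keras.applications.vgg16 import VGG16
from keras import backend as K
import numpy as np

model = VGG16(weights='imagenet')          # Any pretrained ImageNet classifier would do here
target_class = 368                         # Illustrative ImageNet class index (intended to be "gibbon")

loss = model.output[0, target_class]       # Score of the target class for the first sample
grads = K.gradients(loss, model.input)[0]  # Gradient of that score with respect to the input pixels
fetch_grads = K.function([model.input], [grads])

def make_adversarial(img, step=0.5, iterations=20):
    # img: an array of shape (1, 224, 224, 3), already preprocessed for VGG16.
    x = np.copy(img)
    for _ in range(iterations):
        g = fetch_grads([x])[0]
        x += step * np.sign(g)             # Small signed step that nudges the pixels toward the target class
    return x

The resulting array looks essentially unchanged to a human eye, yet the classifier's prediction can flip to the target class, which is the panda-to-gibbon effect described above.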
In short, deep-learning models don't have any understanding of their input, at least not in a human sense. Our own understanding of images, sounds, and language is grounded in our sensorimotor experience as humans. Machine-learning models have no access to such experiences and thus can't understand their inputs in a human-relatable way. By annotating large numbers of training examples to feed into our models, we get them to learn a geometric transform that maps data to human concepts on a specific set of examples, but this mapping is a simplistic sketch of the original model in our minds, the one developed from our experience as embodied agents. It's like a dim image in a mirror (see the figure below).

Figure: Current machine-learning models: like a dim image in a mirror. (Embodied human experience of the real world gives rise to abstract concepts in the human mind; labeled data exemplifying these concepts is used to fit a machine-learning model that matches the training data but doesn't match the human mental model it came from, and may not always transfer well to the real world.)

As a machine-learning practitioner, always be mindful of this, and never fall into the trap of believing that neural networks understand the task they perform; they don't, at least not in a way that would make sense to us. They were trained on a different, far narrower task than the one we wanted to teach them: that of mapping training inputs to training targets, point by point. Show them anything that deviates from their training data, and they will break in absurd ways.

Local generalization vs. extreme generalization

There are fundamental differences between the straightforward geometric morphing from input to output that deep-learning models do, and the way humans think and learn. It isn't only the fact that humans learn by themselves from embodied experience instead of being presented with explicit training examples. In addition to the different learning processes, there's a basic difference in the nature of the underlying representations.

Humans are capable of far more than mapping immediate stimuli to immediate responses, as a deep network, or maybe an insect, would. We maintain complex, abstract models of our current situation, of ourselves, and of other people, and can use these models to anticipate different possible futures and perform long-term planning. We can merge together known concepts to represent something we've never experienced
before, like picturing a horse wearing jeans, for instance, or imagining what we'd do if we won the lottery. This ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition. I call it extreme generalization: an ability to adapt to novel, never-before-experienced situations using little data or even no new data at all.

This stands in sharp contrast with what deep nets do, which I call local generalization (see the figure below). The mapping from inputs to outputs performed by a deep net quickly stops making sense if new inputs differ even slightly from what the net saw at training time. Consider, for instance, the problem of learning the appropriate launch parameters to get a rocket to land on the moon. If you used a deep net for this task and trained it using supervised learning or reinforcement learning, you'd have to feed it thousands or even millions of launch trials: you'd need to expose it to a dense sampling of the input space, in order for it to learn a reliable mapping from input space to output space. In contrast, as humans we can use our power of abstraction to come up with physical models (rocket science) and derive an exact solution that will land the rocket on the moon in one or a few trials.

Similarly, if you developed a deep net controlling a human body, and you wanted it to learn to safely navigate a city without getting hit by cars, the net would have to die many thousands of times in various situations until it could infer that cars are dangerous, and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. On the other hand, humans are able to learn safe behaviors without having to die even once, again thanks to our power of abstract modeling of hypothetical situations.

Figure: Local generalization vs. extreme generalization. Starting from the same set of data points or experience, local generalization is the generalization power of pattern recognition, whereas extreme generalization is generalization power achieved via abstraction and reasoning.

In short, despite our progress on machine perception, we're still far from human-level AI. Our models can only perform local generalization, adapting to new situations that must be similar to past data, whereas human cognition is capable of
extreme generalization, quickly adapting to radically novel situations and planning for long-term future situations.

Wrapping up

Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it's still a long way from human-level AI.

To lift some of the limitations we have discussed and create AI that can compete with human brains, we need to move away from straightforward input-to-output mappings and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We said previously that machine-learning models can be defined as learnable programs; currently we can only learn programs that belong to a narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next section what the road ahead may look like.
The future of deep learning

This is a more speculative section aimed at opening horizons for people who want to join a research program or begin doing independent research. Given what we know of how deep nets work, their limitations, and the current state of the research landscape, can we predict where things are headed in the medium term? Following are some purely personal thoughts. Note that I don't have a crystal ball, so a lot of what I anticipate may fail to become reality. I'm sharing these predictions not because I expect them to be proven completely right in the future, but because they're interesting and actionable in the present.

At a high level, these are the main directions in which I see promise:

- Models closer to general-purpose computer programs, built on top of far richer primitives than the current differentiable layers. This is how we'll get to reasoning and abstraction, the lack of which is the fundamental weakness of current models.
- New forms of learning that make the previous point possible, allowing models to move away from differentiable transforms.
- Models that require less involvement from human engineers. It shouldn't be your job to tune knobs endlessly.
- Greater, systematic reuse of previously learned features and architectures, such as meta-learning systems using reusable and modular program subroutines.

Additionally, note that these considerations aren't specific to the sort of supervised learning that has been the bread and butter of deep learning so far; rather, they're applicable to any form of machine learning, including unsupervised, self-supervised, and reinforcement learning. It isn't fundamentally important where your labels come from or what your training loop looks like; these different branches of machine learning are different facets of the same construct. Let's dive in.

Models as programs

As noted in the previous section, a necessary transformational development that we can expect in the field of machine learning is a move away from models that perform purely pattern recognition and can only achieve local generalization, toward models capable of abstraction and reasoning that can achieve extreme generalization. Current AI programs that are capable of basic forms of reasoning are all hardcoded by human programmers: for instance, software that relies on search algorithms, graph manipulation, and formal logic. In DeepMind's AlphaGo, for example, most of the intelligence on display is designed and hardcoded by expert programmers (such as Monte Carlo tree search); learning from data happens only in specialized submodules (value networks and policy networks). But in the future, such AI systems may be fully learned, with no human involvement.

What path could make this happen? Consider a well-known type of network: RNNs. It's important to note that RNNs have slightly fewer limitations than feedforward networks. That's because RNNs are a bit more than mere geometric transformations:
they're geometric transformations repeatedly applied inside a for loop. The temporal for loop is itself hardcoded by human developers: it's a built-in assumption of the network. Naturally, RNNs are still extremely limited in what they can represent, primarily because each step they perform is a differentiable geometric transformation, and they carry information from step to step via points in a continuous geometric space (state vectors).

Now imagine a neural network that's augmented in a similar way with programming primitives; but instead of a single hardcoded for loop with hardcoded geometric memory, the network includes a large set of programming primitives that the model is free to manipulate to expand its processing function, such as if branches, while statements, variable creation, disk storage for long-term memory, sorting operators, advanced data structures (such as lists, graphs, and hash tables), and many more. The space of programs that such a network could represent would be far broader than what can be represented with current deep-learning models, and some of these programs could achieve superior generalization power.

We'll move away from having, on one hand, hardcoded algorithmic intelligence (handcrafted software) and, on the other hand, learned geometric intelligence (deep learning). Instead, we'll have a blend of formal algorithmic modules that provide reasoning and abstraction capabilities, and geometric modules that provide informal intuition and pattern-recognition capabilities. The entire system will be learned with little or no human involvement.

A related subfield of AI that I think may be about to take off in a big way is program synthesis, in particular neural program synthesis. Program synthesis consists of automatically generating simple programs by using a search algorithm (possibly genetic search, as in genetic programming) to explore a large space of possible programs. The search stops when a program is found that matches the required specifications, often provided as a set of input-output pairs. This is highly reminiscent of machine learning: given training data provided as input-output pairs, we find a program that matches inputs to outputs and can generalize to new inputs. The difference is that instead of learning parameter values in a hardcoded program (a neural network), we generate source code via a discrete search process.

I definitely expect this subfield to see a wave of renewed interest in the next few years. In particular, I expect the emergence of a crossover subfield between deep learning and program synthesis, where instead of generating programs in a general-purpose language, we'll generate neural networks (geometric data-processing flows) augmented with a rich set of algorithmic primitives, such as for loops and many others (see the figure below). This should be far more tractable and useful than directly generating source code, and it will dramatically expand the scope of problems that can be solved with machine learning: the space of programs that we can generate automatically, given appropriate training data. Contemporary RNNs can be seen as a prehistoric ancestor of such hybrid algorithmic-geometric models.
Figure: A learned program relying on both geometric primitives (pattern recognition, intuition) and algorithmic primitives (reasoning, search, memory): a modular task-level program, built from geometric and algorithmic subroutines, learned on the fly to solve a specific task from data and feedback, and producing actions.

Beyond backpropagation and differentiable layers

If machine-learning models become more like programs, then they will mostly no longer be differentiable. These programs will still use continuous geometric layers as subroutines, which will be differentiable, but the model as a whole won't be. As a result, using backpropagation to adjust weight values in a fixed, hardcoded network can't be the method of choice for training models in the future; at least, it can't be the entire story. We need to figure out how to train non-differentiable systems efficiently. Current approaches include genetic algorithms, evolution strategies, certain reinforcement-learning methods, and the alternating direction method of multipliers (ADMM). Naturally, gradient descent isn't going anywhere; gradient information will always be useful for optimizing differentiable parametric functions. But our models will become increasingly more ambitious than mere differentiable parametric functions, and thus their automatic development (the learning in machine learning) will require more than backpropagation.

In addition, backpropagation is end to end, which is a great thing for learning good chained transformations but is computationally inefficient because it doesn't fully take advantage of the modularity of deep networks. To make something more efficient, there's one universal recipe: introduce modularity and hierarchy. So we can make backpropagation more efficient by introducing decoupled training modules with a synchronization mechanism between them, organized in a hierarchical fashion. This strategy is somewhat reflected in DeepMind's recent work on synthetic gradients. I expect more along these lines in the near future. I can imagine a future where models that are globally non-differentiable (but feature differentiable parts) are trained, or grown, using an efficient search process that doesn't use gradients, whereas the differentiable parts are trained even faster by taking advantage of gradients, using a more efficient version of backpropagation.

Automated machine learning

In the future, model architectures will be learned rather than be handcrafted by engineer-artisans. Learning architectures goes hand in hand with the use of richer sets of primitives and program-like machine-learning models.
Currently, most of the job of a deep-learning engineer consists of munging data with Python scripts and then tuning the architecture and hyperparameters of a deep network at length to get a working model, or even to get a state-of-the-art model, if the engineer is that ambitious. Needless to say, that isn't an optimal setup. But AI can help. Unfortunately, the data-munging part is tough to automate, because it often requires domain knowledge as well as a clear, high-level understanding of what the engineer wants to achieve. Hyperparameter tuning, however, is a simple search procedure, and in that case we know what the engineer wants to achieve: it's defined by the loss function of the network being tuned. It's already common practice to set up basic AutoML systems that take care of most model knob tuning. I even set up my own, years ago, to win Kaggle competitions.

At the most basic level, such a system would tune the number of layers in a stack, their order, and the number of units or filters in each layer. This is commonly done with libraries such as Hyperopt, which we discussed earlier in the book. But we can also be far more ambitious and attempt to learn an appropriate architecture from scratch, with as few constraints as possible: for instance, via reinforcement learning or genetic algorithms.

Another important AutoML direction involves learning model architecture jointly with model weights. Because training a new model from scratch every time we try a slightly different architecture is tremendously inefficient, a truly powerful AutoML system would evolve architectures at the same time the features of the model were being tuned via backpropagation on the training data. Such approaches are beginning to emerge as I write these lines.

When this starts to happen, the jobs of machine-learning engineers won't disappear; rather, engineers will move up the value-creation chain. They will begin to put much more effort into crafting complex loss functions that truly reflect business goals and understanding how their models impact the digital ecosystems in which they're deployed (for example, the users who consume the model's predictions and generate the model's training data): problems that only the largest companies can afford to consider at present.

Lifelong learning and modular subroutine reuse

If models become more complex and are built on top of richer algorithmic primitives, then this increased complexity will require higher reuse between tasks, rather than training a new model from scratch every time we have a new task or a new dataset. Many datasets don't contain enough information for us to develop a new, complex model from scratch, and it will be necessary to use information from previously encountered datasets (much as you don't learn English from scratch every time you open a new book; that would be impossible). Training models from scratch on every new task is also inefficient, due to the large overlap between the current tasks and previously encountered tasks.
A remarkable observation has been made repeatedly in recent years: training the same model to do several loosely connected tasks at the same time results in a model that's better at each task. For instance, training the same neural machine-translation model to perform both English-to-German translation and French-to-Italian translation will result in a model that's better at each language pair. Similarly, training an image-classification model jointly with an image-segmentation model, sharing the same convolutional base, results in a model that's better at both tasks. This is fairly intuitive: there's always some information overlap between seemingly disconnected tasks, and a joint model has access to a greater amount of information about each individual task than a model trained on that specific task only.

Currently, when it comes to model reuse across tasks, we use pretrained weights for models that perform common functions, such as visual feature extraction. You saw this in action earlier in the book. In the future, I expect a generalized version of this to be commonplace: we'll use not only previously learned features (submodel weights) but also model architectures and training procedures. As models become more like programs, we'll begin to reuse program subroutines, like the functions and classes found in human programming languages.

Think of the process of software development today: once an engineer solves a specific problem (HTTP queries in Python, for instance), they package it as an abstract, reusable library. Engineers who face a similar problem in the future will be able to search for existing libraries, download one, and use it in their own project. In a similar way, in the future, meta-learning systems will be able to assemble new programs by sifting through a global library of high-level reusable blocks. When the system finds itself developing similar program subroutines for several different tasks, it can come up with an abstract, reusable version of the subroutine and store it in the global library (see the figure below). Such a process will implement abstraction: a necessary component for achieving extreme generalization. A subroutine that's useful across different tasks and domains can be said to abstract some aspect of problem solving. This definition of abstraction is similar to the notion of abstraction in software engineering. These subroutines can be either geometric (deep-learning modules with pretrained representations) or algorithmic (closer to the libraries that contemporary software engineers manipulate).
11,727
the future of deep learning global library of abstract subroutines geometric subroutine algorithmic subroutine algorithmic subroutine geometric subroutine algorithmic subroutine algorithmic subroutine geometric subroutine algorithmic subroutine algorithmic subroutine push reusable subroutines fetch relevant subroutines perpetual meta-learner capable of quickly growing task-level model across variety of tasks design choices task # data and feedback modular task-level program learned on the fly to solve specific task geometric subroutine task # task # data and feedback algorithmic subroutine task # actions geometric subroutine algorithmic subroutine figure met -learner capable of quickly developing ask-specific models using reusable primitives (bot algorit hmic and geomet ric) hus achieving ext reme generalizat ion the long-term vision in shorthere' my long-term vision for machine learningmodels will be more like programs and will have capabilities that go far beyond the continuous geometric transformations of the input data we currently work with these programs will arguably be much closer to the abstract mental models that humans maintain about their surroundings and themselvesand they will be capable of stronger generalization due to their rich algorithmic nature in particularmodels will blend algorithmic modules providing formal reasoningsearchand abstraction capabilities with geometric modules providing informal intuition and pattern-recognition capabilities alphago ( system that required lot of manual software engineering and human-made design decisionsprovides an early example of what such blend of symbolic and geometric ai could look like such models will be grown automatically rather than hardcoded by human engineersusing modular parts stored in global library of reusable subroutines-- library evolved by learning high-performing models on thousands of previous tasks and datasets as frequent problem-solving patterns are identified by the meta-learning systemthey will be turned into reusable subroutines--much like functions and classes in software engineering--and added to the global library this will achieve abstraction this global library and associated model-growing system will be able to achieve some form of human-like extreme generalizationgiven new task or situationlicensed to
11,728
conclusions the system will be able to assemble new working model appropriate for the task using very little datathanks to rich program-like primitives that generalize welland extensive experience with similar tasks in the same wayhumans can quickly learn to play complex new video game if they have experience with many previous gamesbecause the models derived from this previous experience are abstract and program-likerather than basic mapping between stimuli and action as suchthis perpetually learning model-growing system can be interpreted as an artificial general intelligence (agibut don' expect any singularitarian robot apocalypse to ensuethat' pure fantasycoming from long series of profound misunderstandings of both intelligence and technology such critiquehoweverdoesn' belong in this book licensed to
11,729
staying up to date in fast-moving field as final parting wordsi want to give you some pointers about how to keep learning and updating your knowledge and skills after you've turned the last page of this book the field of modern deep learningas we know it todayis only few years olddespite longslow prehistory stretching back decades with an exponential increase in financial resources and research headcount since the field as whole is now moving at frenetic pace what you've learned in this book won' stay relevant foreverand it isn' all you'll need for the rest of your career fortunatelythere are plenty of free online resources that you can use to stay up to date and expand your horizons here are few practice on real-world problems using kaggle one effective way to acquire real-world experience is to try your hand at machinelearning competitions on kaggle (through practice and actual coding--that' the philosophy of this bookand kaggle competitions are the natural continuation of this on kaggleyou'll find an array of constantly renewed data-science competitionsmany of which involve deep learningprepared by companies interested in obtaining novel solutions to some of their most challenging machine-learning problems fairly large monetary prizes are offered to top entrants most competitions are won using either the xgboost library (for shallow machine learningor keras (for deep learningso you'll fit right inby participating in few competitionsmaybe as part of teamyou'll become more familiar with the practical side of some of the advanced best practices described in this bookespecially hyperparameter tuningavoiding validation-set overfittingand model ensembling read about the latest developments on arxiv deep-learning researchin contrast with some other scientific fieldstakes places completely in the open papers are made publicly and freely accessible as soon as they're finalizedand lot of related software is open source arxiv (for physicsmathematicsand computer science research papers it has become the de facto way to stay up to date on the bleeding edge of machine learning and deep learning the large majority of deep-learning researchers upload any paper they write to arxiv shortly after completion this allows them to plant flag and claim specific finding without waiting for conference acceptance (which takes months)which is necessary given the fast pace of research and the intense competition in the field it also allows the field to move extremely fastall new findings are immediately available for all to see and to build on an important downside is that the sheer quantity of new papers posted every day on arxiv makes it impossible to even skim them alland the fact that they aren' peer reviewed makes it difficult to identify those that are both important and high quality licensed to
11,730
conclusions it' difficultand becoming increasingly more soto find the signal in the noise currentlythere isn' good solution to this problem but some tools can helpan auxiliary website called arxiv sanity preserver (recommendation engine for new papers and can help you keep track of new developments within specific narrow vertical of deep learning additionallyyou can use google scholar (favorite authors explore the keras ecosystem with about , users as of november and growing fastkeras has large ecosystem of tutorialsguidesand related open source projectsyour main reference for working with keras is the online documentation at fchollet/keras you can ask for help and join deep-learning discussions on the keras slack channelthe keras blogrelated to deep learning you can follow me on twitter@fchollet licensed to
11,731
final words this is the end of deep learning with pythoni hope you've learned thing or two about machine learningdeep learningkerasand maybe even cognition in general learning is lifelong journeyespecially in the field of aiwhere we have far more unknowns on our hands than certitudes so please go on learningquestioningand researching never stop because even given the progress made so farmost of the fundamental questions in ai remain unanswered many haven' even been properly asked yet licensed to
11,732
installing keras and its dependencies on ubuntu the process of setting up deep-learning workstation is fairly involved and consists of the following stepswhich this appendix will cover in detail install the python scientific suite--numpy and scipy--and make sure you have basic linear algebra subprogram (blaslibrary installed so your models run fast on cpu install two extras packages that come in handy when using kerashdf (for saving large neural-network filesand graphviz (for visualizing neuralnetwork architecturesmake sure your gpu can run deep-learning codeby installing cuda drivers and cudnn install backend for kerastensorflowcntkor theano install keras it may seem like daunting process in factthe only difficult part is setting up gpu support--otherwisethe entire process can be done with few commands and takes only couple of minutes we'll assume you have fresh installation of ubuntuwith an nvidia gpu available before you startmake sure you have pip installed and that your package manager is up to datesudo apt-get update sudo apt-get upgrade sudo apt-get install python-pip python-dev licensed to
11,733
pyt hon vs pyt hon by defaultubuntu uses python when it installs python packages such as pythonpip if you wish to use python insteadyou should use the python prefix instead of python for instancesudo apt-get install python -pip python -dev when you're installing packages using pipkeep in mind that by defaultit targets python to target python you should use pip sudo pip install tensorflow-gpu installing the python scientific suite if you use macwe recommend that you install the python scientific suite via anacondawhich you can get at www continuum io/downloads note that this won' include hdf and graphvizwhich you have to install manually following are the steps for manual installation of the python scientific suite on ubuntu install blas library (openblasin this case)to ensure that you can run fast tensor operations on your cpusudo apt-get install build-essential cmake git unzip pkg-config libopenblas-dev liblapack-dev install the python scientific suitenumpyscipy and matplotlib this is necessary in order to perform any kind of machine learning or scientific computing in pythonregardless of whether you're doing deep learningsudo apt-get install python-numpy python-scipy pythonmatplotlib python-yaml install hdf this libraryoriginally developed by nasastores large files of numeric data in an efficient binary format it will allow you to save your keras models to disk quickly and efficientlysudo apt-get install libhdf -serial-dev python- py install graphviz and pydot-ngtwo packages that will let you visualize keras models they aren' necessary to run kerasso you could skip this step and install these packages when you need them here are the commandssudo apt-get install graphviz sudo pip install pydot-ng install additional packages that are used in some of our code examplessudo apt-get install python-opencv licensed to
11,734
appendix installing keras and its dependencies on ubuntu setting up gpu support using gpu isn' strictly necessarybut it' strongly recommended all the code examples found in this book can be run on laptop cpubut you may sometimes have to wait for several hours for model to traininstead of mere minutes on good gpu if you don' have modern nvidia gpuyou can skip this step and go directly to section to use your nvidia gpu for deep learningyou need to install two thingscuda -- set of drivers for your gpu that allows it to run low-level programming language for parallel computing cudnn -- library of highly optimized primitives for deep learning when using cudnn and running on gpuyou can typically increase the training speed of your models by to tensorflow depends on particular versions of cuda and the cudnn library at the time of writingit uses cuda version and cudnn version please consult the tensorflow website for detailed instructions about which versions are currently recommendedwww tensorflow org/install/install_linux follow these steps download cuda for ubuntu (and other linux flavors)nvidia provides ready-to-use package that you can download from nvidia com/cuda-downloadswget /cuda-repo-ubuntu - _amd deb install cuda the easiest way to do so is to use ubuntu' apt on this package this will allow you to easily install updates via apt as they become availablesudo dpkg - cuda-repo-ubuntu - _amd deb sudo apt-key adv --fetch-keys / fa af pub sudo apt-get update sudo apt-get install cuda- - install cudnna register for free nvidia developer account (unfortunatelythis is necessary in order to gain access to the cudnn download)and download cudnn at linux flavors--we'll use the version for ubuntu note that if you're working with an ec installyou won' be able to download the cudnn archive directly to your instanceinsteaddownload it to your local machine and then upload it to your ec instance (via scpb install cudnnsudo dpkg - dpkg - libcudnn deb licensed to
11,735
Install TensorFlow

TensorFlow with or without GPU support can be installed from PyPI using pip. Here's the command without GPU support:

sudo pip install tensorflow

Here's the command to install TensorFlow with GPU support:

sudo pip install tensorflow-gpu

Installing Theano (optional)

Because you've already installed TensorFlow, you don't have to install Theano in order to run Keras code. But it can sometimes be useful to switch back and forth from TensorFlow to Theano when building Keras models. Theano can also be installed from PyPI:

sudo pip install theano

If you're using a GPU, then you should configure Theano to use your GPU. You can create a Theano configuration file with this command:

nano ~/.theanorc

Then, fill in the file with the following configuration:

[global]
floatx = float32
device = gpu

[nvcc]
fastmath = True

Installing Keras

You can install Keras from PyPI:

sudo pip install keras

Alternatively, you can install Keras from GitHub. Doing so will allow you to access the keras/examples folder, which contains many example scripts for you to learn from:

git clone https://github.com/fchollet/keras
cd keras
sudo python setup.py install

You can now try to run a Keras script, such as this MNIST example:

python examples/mnist_cnn.py

Note that running this example to completion may take a few minutes, so feel free to force-quit it (Ctrl-C) once you've verified that it's working normally. After you've run Keras at least once, the Keras configuration file can be found at ~/.keras/keras.json. You can edit it to select the backend that Keras runs on: tensorflow, theano, or cntk. Your configuration file should look like this:
11,736
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

While the Keras script examples/mnist_cnn.py is running, you can monitor GPU utilization in a different shell window:

watch -n 5 nvidia-smi -a --display=utilization

You're all set! Congratulations--you can now begin building deep-learning applications.
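Before moving on, a quick sanity check from Python can confirm which backend Keras picked up and whether TensorFlow sees the GPU. This snippet is a suggested check, not part of the original instructions; it assumes the tensorflow-gpu install described above.

import keras
from keras import backend as K

# Should print 'tensorflow' if the configuration file shown above is in effect
print(K.backend())

from tensorflow.python.client import device_lib
# Expect at least one device with device_type 'GPU' if CUDA and cuDNN are set up correctly
print([d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU'])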
11,737
running jupyter notebooks on an ec gpu instance this appendix provides step-by-step guide to running deep-learning jupyter notebooks on an aws gpu instance and editing the notebooks from anywhere in your browser this is the perfect setup for deep-learning research if you don' have gpu on your local machine the original (and up-to-dateversion of this guide can be found at what are jupyter notebookswhy run jupyter notebooks on aws gpusa jupyter notebook is web app that allows you to write and annotate python code interactively it' great way to experimentdo researchand share what you're working on many deep-learning applications are very computationally intensive and can take hours or even days when running on laptop' cpu cores running on gpu can speed up training and inference by considerable factor (often to timeswhen going from modern cpu to single modern gpubut you may not have access to gpu on your local machine running jupyter notebooks on aws gives you the same experience as running on your local machinewhile allowing you to use one or several gpus on aws and you only pay for what you usewhich can compare favorably to investing in your own gpu(sif you use deep learning only occasionally licensed to
11,738
appendix running jupyter notebooks on an ec gpu instance why would you not want to use jupyter on aws for deep learningaws gpu instances can quickly become expensive the one we suggest using costs $ per hour this is fine for occasional usebut if you're going to run experiments for several hours per day every daythen you're better off building your own deeplearning machine with titan or gtx ti in summaryuse the jupyter-on-ec setup if you don' have access to local gpu or if you don' want to deal with installing keras dependenciesin particular gpu drivers if you have access to local gpuwe recommend running your models locallyinstead in that caseuse the installation guide in appendix you'll need an active aws account some familiarity with aws ec will helpbut it isn' mandatory note setting up an aws gpu instance the following setup process will take to minutes navigate to the ec control panel at and click the launch instance link (see figure figure the ec cont rol panel select aws marketplace (see figure )and search for "deep learningin the search box scroll down until you find the ami named deep learning ami ubuntu version (see figure )select it figure the ec ami marketplace licensed to
11,739
figure the ec deep learning ami select the xlarge instance (see figure this instance type provides access to single gpu and costs $ per hour of usage (as of march figure the xlarge instance you can keep the default configuration for the steps configure instanceadd storageand add tagsbut you'll customize the configure security group step create custom tcp rule to allow port (see figure )this rule can be allowed either for your current public ip (such as that of your laptopor for any ip (such as / if the former isn' possible note that if you allow port for any ipthen literally anyone will be able to listen to that port on your instance (which is where you'll run ipython notebooksyou'll add password protection to the notebooks to mitigate the risk of random strangers modifying thembut that may be pretty weak protection if at all possibleyou should consider restricting access to specific ip but if your ip address changes constantlythen that isn' practical choice if you're going to leave access open to any ipthen remember not to leave sensitive data on the instance licensed to
11,740
figure appendix running jupyter notebooks on an ec gpu instance configure new security group note at the end of the launch processyou'll be asked if you want to create new connection keys or if you want to reuse existing keys if you've never used ec beforecreate new keys and download them to connect to your instanceselect it on the ec control panelclick the connect buttonand follow the instructions (see figure note that it may take few minutes for the instance to boot up if you can' connect at firstwait bit and try again figure connect ion instruct ions once you're logged in to the instance via sshcreate an ssl directory at the root of the instanceand cd to it (not mandatorybut cleaner)mkdir ssl cd ssl licensed to
11,741
setting up an aws gpu instance create new ssl certificate using openssland create cert key and cert pem files in the current ssl directoryopenssl req - -nodes -days -newkey rsa: -keyout "cert key-out "cert pem-batch configuring jupyter before you use jupyteryou need to touch up its default configuration follow these steps generate new jupyter config file (still on the remote instance)jupyter notebook --generate-config optionallyyou can generate jupyter password for your notebooks because your instance may be configured to be accessible from any ip (depending on the choice you made when configuring the security group)it' better to restrict access to jupyter via password to generate passwordopen an ipython shell (ipython commandand run the followingfrom ipython lib import passwd passwd(exit the passwd(command will ask you to enter and verify password after you doit will display hash of your password copy that hash--you'll need it soon it looks something like thissha : cf ec : edb fd eab aa note that this is hash of the word passwordwhich isn' password you should be using use vi (or your favorite available text editorto edit the jupyter config filevi ~jupyter/jupyter_notebook_config py the config file is python file with all lines commented out insert the following lines of python code at the beginning of the filepath to the private key you generated for the certificate path to the certificate you generated serves the notebooks locally gets the config object inline figure when using matplotlib get_config( notebookapp certfile '/home/ubuntu/ssl/cert pemc notebookapp keyfile '/home/ubuntu/ssl/cert keyc ipkernelapp pylab 'inlinec notebookapp ip '* notebookapp open_browser false notebookapp password 'sha : cf ec : edb fd eab aa don' open browser window by default when using notebooks password hash you generated earlier licensed to
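The openssl command and the configuration lines above are assembled below into a usable form. The certificate lifetime (365 days) and RSA key size (1024 bits) are assumed, commonly used values rather than ones recovered from the text, and the password entry is only a placeholder for the hash you generated.

openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout "cert.key" -out "cert.pem" -batch

# Lines to insert at the beginning of ~/.jupyter/jupyter_notebook_config.py
c = get_config()                                       # gets the config object
c.NotebookApp.certfile = u'/home/ubuntu/ssl/cert.pem'  # path to the certificate you generated
c.NotebookApp.keyfile = u'/home/ubuntu/ssl/cert.key'   # path to the private key for the certificate
c.IPKernelApp.pylab = 'inline'                         # inline figures when using matplotlib
c.NotebookApp.ip = '*'                                 # serves the notebooks on all interfaces
c.NotebookApp.open_browser = False                     # don't open a browser window by default
c.NotebookApp.password = 'sha1:your-password-hash'     # password hash you generated earlier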
11,742
In case you aren't accustomed to using vi, remember that you need to press I to begin inserting content. When you're finished, press Esc, enter :wq, and press Enter to quit vi and save your changes (:wq stands for write-quit).

Installing Keras

You're almost ready to start using Jupyter. But first, you need to update Keras. A version of Keras is preinstalled on the AMI, but it may not necessarily be up to date. On the remote instance, run this command:

sudo pip install keras --upgrade

Because you'll probably use Python 3 (the notebooks provided with this book use Python 3), you should also update Keras using pip3:

sudo pip3 install keras --upgrade

If there's an existing Keras configuration file on the instance (there shouldn't be, but the AMI may have changed since I wrote this), you should delete it, just in case. Keras will re-create a standard configuration file when it's launched for the first time. If the following code snippet returns an error saying that the file doesn't exist, you can ignore it:

rm -f ~/.keras/keras.json

Setting up local port forwarding

In a shell on your local machine (not the remote instance), start forwarding your local port 443 (the HTTPS port) to port 8888 of the remote instance (8888 is the default port the Jupyter notebook server listens on):

sudo ssh -i awsKeys.pem -L local_port:local_machine:remote_port remote_machine

In my case, it would look something like the following:

sudo ssh -i awsKeys.pem -L 443:127.0.0.1:8888 ubuntu@ec2-xx-xxx-xx-xx.compute-1.amazonaws.com

Using Jupyter from your local browser

On the remote instance, clone the GitHub repository containing the Jupyter notebooks associated with this book:

git clone https://github.com/fchollet/deep-learning-with-python-notebooks.git
cd deep-learning-with-python-notebooks

Start Jupyter Notebook by running this command, still on the remote instance:

jupyter notebook

Then, in your local browser, navigate to the local address you're forwarding to the remote notebook process; make sure you use HTTPS in the address, or you'll get an SSL error.
11,743
you should see the safety warning shown in figure this warning is due to the fact that the ssl certificate you generated isn' verified by trusted authority (obviously--you generated your ownclick advancedand proceed to navigate figure safet warning you can ignore you should be prompted to enter your jupyter password you'll then arrive at the jupyter dashboard (see figure figure the jupyt er dashboard choose new notebook to get started (see figure you can use the python version of your choice all setfigure licensed to create new notebook
11,744
11,745
symbols operator operator numerics tensors see scalars convolutions - poolingfor sequence data tensors see vectors tensors see matrices embeddings activation activation functions ad targeting add_loss method admm (alternating direction method of multipliers adversarial networks see also generative deep learninggenerative adversarial networks adversary network affine transformations amazon web services see aws amd analytical engine annotations application program interfaces see functional apis architecture of networks - convnets densely connected networks - recurrent neural networks - architecture patterns of models - batch normalization - depthwise separable convolution - residual connections - - arrow of time artificial intelligence - expectations for history of - arxiv preprint server - assembling datasets - augmented intelligence augmenting data - feature extraction with - feature extraction without - autoencoders see vaes (variational autoencodersautoml systems autonomous driving aws (amazon web servicesgpu instances running jupyter on setting up - using jupyter on licensed to babbagecharles backend enginekeras backpropagation algorithm - backward pass bag-of- -grams bag-of- -grams bag-of-words baidu batch axis batch normalization batchnormalization layer batch_size bengioyoshua bidirectional layers binary classifications - binary crossentropy black boxes blas (basic linear algebra subprograms border effects - broadcasting operations - browserslocalusing jupyter from - callbackswriting - cam (class activation map categorical encoding categorical_crossentropy function
11,746
cern channels axis channels-first convention channels-last convention character-level neural language model ciresandan class activationvisualizing heatmaps of - classes classification cloudrunning jobs in clustering cnns see convnets (convolutional neural networkscntk (microsoft cognitive toolkit compilation step concept vectorsfor editing images - conditioning data connect buttonec control panel connectionsresidual - content loss conv layer conv layer convnets (convolutional neural networks - combining with recurrent neural networks - overview - convolution operations - max-pooling operations - processing sequences with - training on small datasets - building networks - data preprocessing - relevance for small-data problems - using data augmentation - using pretrained convnets - feature extraction - fine-tuning - index visualizing convnet learning - convnet filters - heatmaps of class activation - intermediate activations - convnets filters convolution base - convolution operations - border effects - convolution strides paddling - convolution strides convolutions - depthwise separable - cortescorinna crossentropy cuda drivers cudnn library curvature data augmenting - feature extraction with - feature extraction without - batches of - generating sequence data heterogeneous homogenous learning representations from - missing preparing - for character-level lstm text generation for recurrent neural networks - preprocessing - - redundancy representations for neural networks - tensors data batches - examples of data tensors licensed to higher-dimensional tensors image data - key attributes of tensors - manipulating tensors in numpy matrices ( tensors - scalars ( tensors sequence data - timeseries data - vector data vectors ( tensors video data shuffling splitting tokenizing - transformations transforming vectorization data augmentation data distillation data points data representativeness data tensorsexamples of datasetsassembling - dcgans (deep convolutional gansoverview training - decision boundaries decision trees - deep convnets deep learning - accomplishments of - achievements of - democratization of enabling technologies - future of - - automated machine learning - lifelong learning - long-term vision - models as programs - modular subroutine reuse - geometric interpretation of - hardware and - investment in - limitations of - local generalization vs extreme generalization -
11,747
index deep learninglimitations of (continuedrisk of anthropomorphizing machine-learning models - overview - - possible uses of - reasons for interest in - see also generative deep learning deep learning amiec deepdream technique - overview implementing in keras - deepmindgoogle dense layers - dense sampling densely connected networks - depthwise separable convolution derivativesdefined - developer accountnvidia digital assistants dimension dimensionality directed acyclic graphs of layers - inception modules - residual connections - discriminator networks overview implementing - distance function dot operations - dot product downloading glove word embeddings raw text dropout layer dtype attribute - earlystopping callbacks eckdouglas editing imagesconcept vectors for - eigen element-wise operations - embedding layerslearning word embeddings with - engineering features - ensembling models - epochs epsilon evaluating models - evaluation protocolschoosing expert systems extreme generalizationlocal generalization vs - extreme inception feature engineering feature learning - feature maps features engineering - extracting - with data augmentation - without data augmentation - features axis feedforward networks feynmanrichard fill_mode filter visualizations filters overview convnetsvisualizing - fine-tuning - fit method fit_generator method flatten layer flickr float for loop forward pass freezing layers fully connected layers functional apiskeras - directed acyclic graphs of layers - layer weight sharing - models as layers - multi-input models - multi-output models - licensed to galyarin gans (generative adversarial networks gated recurrent unit layers see gru layers gatysleon gaussian distribution generalization - generative deep learning generating images with variational autoencoders - concept vectors for image editing - sampling from latent spaces of images - generating text with lstm - generating sequence data history of generative recurrent networks implementing characterlevel lstm text generation - sampling strategy - generative adversarial networks - adversarial networks discriminator networks - generator networks - schematic implementation of training dcgans - neural style transfer - content loss in keras - style loss - generative deep learningdeepdream - generative recurrent networkshistory of generator function generator networksimplementing - geometric interpretation of deep learning - of tensor operations - geometric space
11,748
glove (global vectors for word representationdownloading word embeddings loading embeddings in models goodfellowian gpus (graphics processing unitsinstalling on aws instanceson aws - overview selecting - supportsetting up on ubuntu - gradient boosting machines - gradient descent gradient propagation gradient-based optimization - backpropagation algorithm - derivativesdefined - gradients stochastic gradient descent - gradients gram matrix graphsdirected acyclic of layers - graphviz - gravesalex greedy sampling ground-truth gru (gated recurrent unitlayers - handwriting transcription hardware - hash collisions hdf heatmaps of class activationvisualizing - overview height_shift range heterogeneous data hidden layers hidden unit hierarchical representation learning index hintongeoffrey hochreitersepp hold-out validation - homogenous data horizontal_flip hsv (hue-saturation-valueformat hyperas library hyperopt hyperparameters optimizing - overview tuning - hyperplanes hypothesis space keras - openblas opencv python scientific suite on ubuntu tensorflow theano on ubuntu intel intermediate activationsvisualizing - investments in deep learning - ipython command joint feature learning jupyter notebooks configuring - overview running on aws gpu instances installing keras setting up aws gpu instances - setting up local port forwarding using from local browsers - using on aws idsia ilsvrc (imagenet large scale visual recognition challenge image classification image data - image segmentation image-classification task imagedatagenerator class imagenet class images editing concept vectors for - flipping generating with variational autoencoders - concept vectors for image editing - overview sampling from latent spaces of - inception blocks inception modules - include_top argument information bottlenecks information distillation pipeline information leaks initial state input data - input_shape argument input_tensor installing cuda cudnn licensed to nvidia kaggle platform overview practice on real-world problems using keras api - directed acyclic graphs of layers - exploring functional apis - implementing deepdream in - installing - layer weight sharing - models as layers - multi-input models - multi-output models - neural style transfer in - recurrent layers in - using callbacks -
11,749
keras framework - cntk developing with - running tensorflow theano keras library keras applications module keras callbacks module keras preprocessing image kernel methods - kernel trick -fold validationiterated with shuffling - kingmadiederik krizhevskyalex regularization regularization label lambda layer language models sampling from - training - last-layer activation latent spaces of imagessampling from - overview layer compatibility layered representations learning layers differentiable directed acyclic graphs of - inception modules - residual connections - freezing models as - overview - recurrent in keras - stacking - unfreezing weight sharing - layer-wise pretraining leakyrelu layer lecunyann lenet network lhc (large hadron collider lifelong learning - linear transformations local generalizationextreme generalization vs - local port forwardingsetting up logistic regression algorithm logreg (logistic regression logs argument lookback parameter lookback timesteps loss function loss plateau loss value lovelaceada lstm (long short-term memory - generating text with - generating sequence data history of generative recurrent networks implementing characterlevel text generation - sampling strategy - overview machine learning automated - basic approaches - branches of - reinforcement learning - self-supervised learning - supervised learning unsupervised learning data preprocessing - deep learning vs - evaluating models of - choosing evaluation protocols test sets - training sets - validation sets - feature engineering - feature learning - licensed to history of - decision trees - gradient boosting machines - kernel methods - neural networks - probabilistic modeling random forests - learning representations from data - modelsrisk of anthropomorphizing - overfitting and underfitting - adding dropout - adding weight regularization - reducing network size - workflow of - - assembling datasets - choosing evaluation protocol choosing measure of success defining problems - developing models - preparing data - regularizing models - tuning hyperparameters - see also non-machine learning mae (mean absolute error matplotlib library matrices ( tensors - maximum operation max-pooling operations - maxpooling layer maxpooling layer mean_squared_error memorization capacity metrics metricslogging microsoft cognitive toolkit see cntk mikolovtomas
11,750
mini-batch mini-batch sgd (mini-batch stochastic gradient descent minskymarvin mnist dataset model checkpointing model class model depth model plot model fit(function model fit_generator(function modelcheckpoint callbacks models architecture patterns - batch normalization - depthwise separable convolution - residual connections - - as layers - as programs - defining developing achieving statistical power - determining capacity ensembling - evaluating - hyperparameter optimization - language sampling from - training - loading glove embeddings in machine learningrisk of anthropomorphizing - multi-input - multi-output - regularizing - training - using keras callbacks - using tensorboard - modular subroutinesreusing - modulesinception - momentum moore' law index mse (mean squared error multiclass classifications - multihead networks multi-input models - multilabel classification multimodal inputs multi-output models - classes naive bayes algorithm naive_add national institute of standards and technology see nist ndim attribute nervana systems neural layers neural networks anatomy of - layers - loss functions models - optimizers binary classifications - breakthroughs in data preprocessing for - handling missing values value normalization - vectorization data representations for - tensors data batches - examples of data tensors higher-dimensional tensors image data - key attributes of tensors - manipulating tensors in numpy matrices ( tensors - scalars ( tensors sequence data - timeseries data - licensed to vector data vectors ( tensors video data gradient-based optimization - backpropagation algorithm - derivatives - gradients stochastic gradient descent - keras - cntk developing with - tensorflow theano multiclass classifications - regression - setting up workstations - gpus for deep learning - jupyter notebooks running jobs in cloud running keras tensor operations - broadcasting - dot - element-wise - geometric interpretation of - geometric interpretation of deep learning - reshaping - neural style transfer - content loss in keras - style loss - -grams nist (national institute of standards and technology non-linearity function non-machine learningbaselines - nonstationary problems normalizing batches - normalizing values - numpy arrays numpy librarymanipulating tensors in numpy matrix numpy tensors nvidia
11,751
movielens data set measuring rating disagreement us baby names - analyzing naming trends conclusions and the path ahead ipythonan interactive computing and development environment ipython basics tab completion introspection the %run command executing code from the clipboard keyboard shortcuts exceptions and tracebacks magic commands qt-based rich gui console matplotlib integration and pylab mode using the command history searching and reusing the command history input and output variables logging the input and output interacting with the operating system shell commands and aliases directory bookmark system software development tools interactive debugger timing code%time and %timeit basic profiling%prun and %run - profiling function line-by-line ipython html notebook tips for productive code development using ipython reloading module dependencies code design tips advanced ipython features making your own classes ipython-friendly profiles and configuration credits numpy basicsarrays and vectorized computation the numpy ndarraya multidimensional array object creating ndarrays data types for ndarrays iv table of contents www it-ebooks info
11,752
basic indexing and slicing boolean indexing fancy indexing transposing arrays and swapping axes universal functionsfast element-wise array functions data processing using arrays expressing conditional logic as array operations mathematical and statistical methods methods for boolean arrays sorting unique and other set logic file input and output with arrays storing arrays on disk in binary format saving and loading text files linear algebra random number generation examplerandom walks simulating many random walks at once getting started with pandas introduction to pandas data structures series dataframe index objects essential functionality reindexing dropping entries from an axis indexingselectionand filtering arithmetic and data alignment function application and mapping sorting and ranking axis indexes with duplicate values summarizing and computing descriptive statistics correlation and covariance unique valuesvalue countsand membership handling missing data filtering out missing data filling in missing data hierarchical indexing reordering and sorting levels summary statistics by level using dataframe' columns table of contents www it-ebooks info
11,753
integer indexing panel data data loadingstorageand file formats reading and writing data in text format reading text files in pieces writing data out to text format manually working with delimited formats json data xml and htmlweb scraping binary data formats using hdf format reading microsoft excel files interacting with html and web apis interacting with databases storing and loading data in mongodb data wranglingcleantransformmergereshape combining and merging data sets database-style dataframe merges merging on index concatenating along an axis combining data with overlap reshaping and pivoting reshaping with hierarchical indexing pivoting "longto "wideformat data transformation removing duplicates transforming data using function or mapping replacing values renaming axis indexes discretization and binning detecting and filtering outliers permutation and random sampling computing indicator/dummy variables string manipulation string object methods regular expressions vectorized string functions in pandas exampleusda food database vi table of contents www it-ebooks info
11,754
brief matplotlib api primer figures and subplots colorsmarkersand line styles tickslabelsand legends annotations and drawing on subplot saving plots to file matplotlib configuration plotting functions in pandas line plots bar plots histograms and density plots scatter plots plotting mapsvisualizing haiti earthquake crisis data python visualization tool ecosystem chaco mayavi other packages the future of visualization tools data aggregation and group operations groupby mechanics iterating over groups selecting column or subset of columns grouping with dicts and series grouping with functions grouping by index levels data aggregation column-wise and multiple function application returning aggregated data in "unindexedform group-wise operations and transformations applygeneral split-apply-combine quantile and bucket analysis examplefilling missing values with group-specific values examplerandom sampling and permutation examplegroup weighted average and correlation examplegroup-wise linear regression pivot tables and cross-tabulation cross-tabulationscrosstab example federal election commission database donation statistics by occupation and employer bucketing donation amounts donation statistics by state table of contents vii www it-ebooks info
11,755
date and time data types and tools converting between string and datetime time series basics indexingselectionsubsetting time series with duplicate indices date rangesfrequenciesand shifting generating date ranges frequencies and date offsets shifting (leading and laggingdata time zone handling localization and conversion operations with time zone-aware timestamp objects operations between different time zones periods and period arithmetic period frequency conversion quarterly period frequencies converting timestamps to periods (and backcreating periodindex from arrays resampling and frequency conversion downsampling upsampling and interpolation resampling with periods time series plotting moving window functions exponentially-weighted functions binary moving window functions user-defined moving window functions performance and memory usage notes financial and economic data applications data munging topics time series and cross-section alignment operations with time series of different frequencies time of day and "as ofdata selection splicing together data sources return indexes and cumulative returns group transforms and analysis group factor exposures decile and quartile analysis more example applications signal frontier analysis future contract rolling viii table of contents www it-ebooks info
11,756
advanced numpy ndarray object internals numpy dtype hierarchy advanced array manipulation reshaping arrays versus fortran order concatenating and splitting arrays repeating elementstile and repeat fancy indexing equivalentstake and put broadcasting broadcasting over other axes setting array values by broadcasting advanced ufunc usage ufunc instance methods custom ufuncs structured and record arrays nested dtypes and multidimensional fields why use structured arraysstructured array manipulationsnumpy lib recfunctions more about sorting indirect sortsargsort and lexsort alternate sort algorithms numpy searchsortedfinding elements in sorted array numpy matrix class advanced array input and output memory-mapped files hdf and other array storage options performance tips the importance of contiguous memory other speed optionscythonf pyc appendixpython language essentials index table of contents ix www it-ebooks info
11,757
11,758
the scientific python ecosystem of open source libraries has grown substantially over the last years by late had long felt that the lack of centralized learning resources for data analysis and statistical applications was stumbling block for new python programmers engaged in such work key projects for data analysis (especially numpyipythonmatplotliband pandashad also matured enough that book written about them would likely not go out-of-date very quickly thusi mustered the nerve to embark on this writing project this is the book that wish existed when started using python for data analysis in hope you find it useful and are able to apply these tools productively in your work conventions used in this book the following typographical conventions are used in this bookitalic indicates new termsurlsemail addressesfilenamesand file extensions constant width used for program listingsas well as within paragraphs to refer to program elements such as variable or function namesdatabasesdata typesenvironment variablesstatementsand keywords constant width bold shows commands or other text that should be typed literally by the user constant width italic shows text that should be replaced with user-supplied values or by values determined by context this icon signifies tipsuggestionor general note xi www it-ebooks info
11,759
using code examples this book is here to help you get your job done in generalyou may use the code in this book in your programs and documentation you do not need to contact us for permission unless you're reproducing significant portion of the code for examplewriting program that uses several chunks of code from this book does not require permission selling or distributing cd-rom of examples from 'reilly books does require permission answering question by citing this book and quoting example code does not require permission incorporating significant amount of example code from this book into your product' documentation does require permission we appreciatebut do not requireattribution an attribution usually includes the titleauthorpublisherand isbn for example"python for data analysis by william wesley mckinney ( 'reillycopyright william mckinney- - if you feel your use of code examples falls outside fair use or the permission given abovefeel free to contact us at permissions@oreilly com safari(rbooks online safari books online (www safaribooksonline comis an on-demand digital library that delivers expert content in both book and video form from the world' leading authors in technology and business technology professionalssoftware developersweb designersand business and creative professionals use safari books online as their primary resource for researchproblem solvinglearningand certification training safari books online offers range of product mixes and pricing programs for organizationsgovernment agenciesand individuals subscribers have access to thousands of bookstraining videosand prepublication manuscripts in one fully searchable database from publishers like 'reilly mediaprentice hall professionaladdison-wesley professionalmicrosoft presssamsquepeachpit pressfocal presscisco pressjohn wiley sonssyngressmorgan kaufmannibm redbookspacktadobe pressft pressapressmanningnew ridersmcgraw-hilljones bartlettcourse technologyand dozens more for more information about safari books onlineplease visit us online xii preface www it-ebooks info
11,760
please address comments and questions concerning this book to the publishero'reilly mediainc gravenstein highway north sebastopolca (in the united states or canada(international or local(faxwe have web page for this bookwhere we list errataexamplesand any additional information you can access this page at to comment or ask technical questions about this booksend email to bookquestions@oreilly com for more information about our bookscoursesconferencesand newssee our website at find us on facebookfollow us on twitterwatch us on youtubepreface xiii www it-ebooks info
11,761
11,762
preliminaries what is this book aboutthis book is concerned with the nuts and bolts of manipulatingprocessingcleaningand crunching data in python it is also practicalmodern introduction to scientific computing in pythontailored for data-intensive applications this is book about the parts of the python language and libraries you'll need to effectively solve broad set of data analysis problems this book is not an exposition on analytical methods using python as the implementation language when say "data"what am referring to exactlythe primary focus is on structured dataa deliberately vague term that encompasses many different common forms of datasuch as multidimensional arrays (matricestabular or spreadsheet-like data in which each column may be different type (stringnumericdateor otherwisethis includes most kinds of data commonly stored in relational databases or tabor comma-delimited text files multiple tables of data interrelated by key columns (what would be primary or foreign keys for sql userevenly or unevenly spaced time series this is by no means complete list even though it may not always be obviousa large percentage of data sets can be transformed into structured form that is more suitable for analysis and modeling if notit may be possible to extract features from data set into structured form as an examplea collection of news articles could be processed into word frequency table which could then be used to perform sentiment analysis most users of spreadsheet programs like microsoft excelperhaps the most widely used data analysis tool in the worldwill not be strangers to these kinds of data www it-ebooks info
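As a purely illustrative aside (the names and values below are invented, not taken from the book), two of these forms of structured data look like this when written down as plain Python objects, before any specialized libraries enter the picture:

# Tabular, spreadsheet-like data: each column holds values of a single type
table = {
    'name': ['alice', 'bob', 'carol'],
    'age': [34, 29, 41],
    'signup_date': ['2012-01-15', '2012-02-03', '2012-02-17'],
}

# An evenly spaced time series: (timestamp, value) pairs
time_series = [
    ('2012-01-01 00:00', 101.5),
    ('2012-01-01 01:00', 102.3),
    ('2012-01-01 02:00', 99.8),
]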
11,763
for many people (myself among them)the python language is easy to fall in love with since its first appearance in python has become one of the most popular dynamicprogramming languagesalong with perlrubyand others python and ruby have become especially popular in recent years for building websites using their numerous web frameworkslike rails (rubyand django (pythonsuch languages are often called scripting languages as they can be used to write quick-and-dirty small programsor scripts don' like the term "scripting languageas it carries connotation that they cannot be used for building mission-critical software among interpreted languages python is distinguished by its large and active scientific computing community adoption of python for scientific computing in both industry applications and academic research has increased significantly since the early for data analysis and interactiveexploratory computing and data visualizationpython will inevitably draw comparisons with the many other domain-specific open source and commercial programming languages and tools in wide usesuch as rmatlabsasstataand others in recent yearspython' improved library support (primarily pandashas made it strong alternative for data manipulation tasks combined with python' strength in general purpose programmingit is an excellent choice as single language for building data-centric applications python as glue part of python' success as scientific computing platform is the ease of integrating cc++and fortran code most modern computing environments share similar set of legacy fortran and libraries for doing linear algebraoptimizationintegrationfast fourier transformsand other such algorithms the same story has held true for many companies and national labs that have used python to glue together yearsworth of legacy software most programs consist of small portions of code where most of the time is spentwith large amounts of "glue codethat doesn' run often in many casesthe execution time of the glue code is insignificanteffort is most fruitfully invested in optimizing the computational bottleneckssometimes by moving the code to lower-level language like in the last few yearsthe cython project (preferred ways of both creating fast compiled extensions for python and also interfacing with and +code solving the "two-languageproblem in many organizationsit is common to researchprototypeand test new ideas using more domain-specific computing language like matlab or then later port those preliminaries www it-ebooks info
11,764
people are increasingly finding is that python is suitable language not only for doing research and prototyping but also building the production systemstoo believe that more and more companies will go down this path as there are often significant organizational benefits to having both scientists and technologists using the same set of programmatic tools why not pythonwhile python is an excellent environment for building computationally-intensive scientific applications and building most kinds of general purpose systemsthere are number of uses for which python may be less suitable as python is an interpreted programming languagein general most python code will run substantially slower than code written in compiled language like java or +as programmer time is typically more valuable than cpu timemany are happy to make this tradeoff howeverin an application with very low latency requirements (for examplea high frequency trading system)the time spent programming in lower-levellower-productivity language like +to achieve the maximum possible performance might be time well spent python is not an ideal language for highly concurrentmultithreaded applicationsparticularly applications with many cpu-bound threads the reason for this is that it has what is known as the global interpreter lock (gil) mechanism which prevents the interpreter from executing more than one python bytecode instruction at time the technical reasons for why the gil exists are beyond the scope of this bookbut as of this writing it does not seem likely that the gil will disappear anytime soon while it is true that in many big data processing applicationsa cluster of computers may be required to process data set in reasonable amount of timethere are still situations where single-processmultithreaded system is desirable this is not to say that python cannot execute truly multithreadedparallel codethat code just cannot be executed in single python process as an examplethe cython project features easy integration with openmpa framework for parallel computingin order to to parallelize loops and thus significantly speed up numerical algorithms essential python libraries for those who are less familiar with the scientific python ecosystem and the libraries used throughout the booki present the following overview of each library essential python libraries www it-ebooks info
11,765
NumPy, short for Numerical Python, is the foundational package for scientific computing in Python. The majority of this book will be based on NumPy and libraries built on top of NumPy. It provides, among other things:

- a fast and efficient multidimensional array object, ndarray
- functions for performing element-wise computations with arrays or mathematical operations between arrays
- tools for reading and writing array-based data sets to disk
- linear algebra operations, Fourier transform, and random number generation
- tools for integrating C, C++, and Fortran code with Python

Beyond the fast array-processing capabilities that NumPy adds to Python, one of its primary purposes with regard to data analysis is as the primary container for data to be passed between algorithms. For numerical data, NumPy arrays are a much more efficient way of storing and manipulating data than the other built-in Python data structures. Also, libraries written in a lower-level language, such as C or Fortran, can operate on the data stored in a NumPy array without copying any data.

pandas

pandas provides rich data structures and functions designed to make working with structured data fast, easy, and expressive. It is, as you will see, one of the critical ingredients enabling Python to be a powerful and productive data analysis environment. The primary object in pandas that will be used in this book is the DataFrame, a two-dimensional, tabular, column-oriented data structure with both row and column labels; for example, a small DataFrame of restaurant bills has columns such as total_bill, tip, sex, smoker, day, time, and size.

pandas combines the high-performance array-computing features of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases (such as SQL). It provides sophisticated indexing functionality to make it easy to reshape, slice and dice, perform aggregations, and select subsets of data. pandas is the primary tool that we will use in this book.
11,766
and tools well-suited for working with financial data in facti initially designed pandas as an ideal tool for financial data analysis applications for users of the language for statistical computingthe dataframe name will be familiaras the object was named after the similar data frame object they are not the samehoweverthe functionality provided by data frame in is essentially strict subset of that provided by the pandas dataframe while this is book about pythoni will occasionally draw comparisons with as it is one of the most widely-used open source data analysis environments and will be familiar to many readers the pandas name itself is derived from panel dataan econometrics term for multidimensional structured data setsand python data analysis itself matplotlib matplotlib is the most popular python library for producing plots and other data visualizations it was originally created by john hunter (jdhand is now maintained by large team of developers it is well-suited for creating plots suitable for publication it integrates well with ipython (see below)thus providing comfortable interactive environment for plotting and exploring data the plots are also interactiveyou can zoom in on section of the plot and pan around the plot using the toolbar in the plot window ipython ipython is the component in the standard scientific python toolset that ties everything together it provides robust and productive environment for interactive and exploratory computing it is an enhanced python shell designed to accelerate the writingtestingand debugging of python code it is particularly useful for interactively working with data and visualizing data with matplotlib ipython is usually involved with the majority of my python workincluding runningdebuggingand testing code aside from the standard terminal-based ipython shellthe project also provides mathematica-like html notebook for connecting to ipython through web browser (more on this latera qt framework-based gui console with inline plottingmultiline editingand syntax highlighting an infrastructure for interactive parallel and distributed computing will devote to ipython and how to get the most out of its features strongly recommend using it while working through this book essential python libraries www it-ebooks info
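A small, hedged sketch of these libraries working together; the column names echo the DataFrame example above, while the values and the plot are purely illustrative rather than taken from the book:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# A small labeled table, the kind of object pandas is built around
frame = pd.DataFrame({'total_bill': [18.50, 12.30, 21.00, 24.75],
                      'tip': [2.00, 1.75, 3.50, 3.10],
                      'day': ['Sun', 'Sun', 'Sat', 'Sat']})

# NumPy ufuncs operate directly on pandas columns
frame['log_bill'] = np.log(frame['total_bill'])

# Slice-and-dice and aggregate with labeled axes
frame['tip_pct'] = frame['tip'] / frame['total_bill']
print(frame.groupby('day')['tip_pct'].mean())

# matplotlib (here via the pandas plotting interface) for a quick visual check
frame['tip_pct'].plot(kind='bar')
plt.show()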
SciPy

SciPy is a collection of packages addressing a number of different standard problem domains in scientific computing. Here is a sampling of the packages included:

- scipy.integrate: numerical integration routines and differential equation solvers
- scipy.linalg: linear algebra routines and matrix decompositions extending beyond those provided in numpy.linalg
- scipy.optimize: function optimizers (minimizers) and root finding algorithms
- scipy.signal: signal processing tools
- scipy.sparse: sparse matrices and sparse linear system solvers
- scipy.special: wrapper around SPECFUN, a Fortran library implementing many common mathematical functions, such as the gamma function
- scipy.stats: standard continuous and discrete probability distributions (density functions, samplers, continuous distribution functions), various statistical tests, and more descriptive statistics
- scipy.weave: tool for using inline C++ code to accelerate array computations

Together NumPy and SciPy form a reasonably complete computational replacement for much of MATLAB along with some of its add-on toolboxes.

Installation and Setup

Since everyone uses Python for different applications, there is no single solution for setting up Python and required add-on packages. Many readers will not have a complete scientific Python environment suitable for following along with this book, so here I will give detailed instructions to get set up on each operating system. I recommend using one of the following base Python distributions:

- Enthought Python Distribution (EPD), a scientific-oriented Python distribution from Enthought. This includes EPDFree, a free base scientific distribution (with NumPy, SciPy, matplotlib, Chaco, and IPython), and EPD Full, a comprehensive suite of scientific packages across many domains. EPD Full is free for academic use but has an annual subscription for non-academic users.
- Python(x,y), a free scientific-oriented Python distribution for Windows.

I will be using EPDFree for the installation guides, though you are welcome to take another approach depending on your needs. At the time of this writing, EPD includes Python 2.7, though this might change at some point in the future. After installing, you will have the following packages installed and importable:
ipython notebook dependenciestornado and pyzmq these are included in epdfree pandas (version or higherat some point while reading you may wish to install one or more of the following packagesstatsmodelspytablespyqt (or equivalentlypyside)xlrdlxmlbasemappymongoand requests these are used in various examples installing these optional libraries is not necessaryand would would suggest waiting until you need them for exampleinstalling pyqt or pytables from source on os or linux can be rather arduous for nowit' most important to get up and running with the bare minimumepdfree and pandas for information on each python package and links to binary installers or other helpsee the python package index (pypiresource for finding new python packages to avoid confusion and to keep things simplei am avoiding discussion of more complex environment management tools like pip and virtualenv there are many excellent guides available for these tools on the internet some users may be interested in alternate python implementationssuch as ironpythonjythonor pypy to make use of the tools presented in this bookit is (currentlynecessary to use the standard -based python interpreterknown as cpython windows to get started on windowsdownload the epdfree installer from thought comwhich should be an msi installer named like epd_free--winx msi run the installer and accept the default installation location :\python if you had previously installed python in this locationyou may want to delete it manually first (or using add/remove programsnextyou need to verify that python has been successfully added to the system path and that there are no conflicts with any prior-installed python versions firstopen command prompt by going to the start menu and starting the command prompt applicationalso known as cmd exe try starting the python interpreter by typing python you should see message that matches the version of epdfree you installedc:\users\wes>python python |epd_free ( -bit)(defaultapr : : on win type "credits""demoor "enthoughtfor more information installation and setup www it-ebooks info
need to clean up your windows environment variables on windows you can start typing "environment variablesin the programs search field and select edit environ ment variables for your account on windows xpyou will have to go to control panel system advanced environment variables on the window that pops upyou are looking for the path variable it needs to contain the following two directory pathsseparated by semicolonsc:\python ; :\python \scripts if you installed other versions of pythonbe sure to delete any other python-related directories from both the system and user path variables after making path alternationyou have to restart the command prompt for the changes to take effect once you can launch python successfully from the command promptyou need to install pandas the easiest way is to download the appropriate binary installer from correctly by importing pandas and making simple matplotlib plotc:\users\wes>ipython --pylab python |epd_free ( -bit)type "copyright""creditsor "licensefor more information ipython -an enhanced interactive python -introduction and overview of ipython' features %quickref -quick reference help -python' own help system object-details about 'object'use 'object??for extra details welcome to pylaba matplotlib-based python environment [backendwxaggfor more informationtype 'help(pylab)in [ ]import pandas in [ ]plot(arange( )if successfulthere should be no error messages and plot window will appear you can also check that the ipython html notebook can be successfully run by typingipython notebook --pylab=inline if you use the ipython notebook application on windows and normally use internet exploreryou will likely need to install and run mozilla firefox or google chrome instead epdfree on windows contains only -bit executables if you want or need -bit setup on windowsusing epd full is the most painless way to accomplish that if you would rather install from scratch and not pay for an epd subscriptionchristoph gohlke at the university of californiairvinepublishes unofficial binary installers for preliminaries www it-ebooks info
apple os to get started on os xyou must first install xcodewhich includes apple' suite of software development tools the necessary component for our purposes is the gcc and +compiler suite the xcode installer can be found on the os install dvd that came with your computer or downloaded from apple directly once you've installed xcodelaunch the terminal (terminal appby navigating to applications utilities type gcc and press enter you should hopefully see something likegcc -apple-darwin -gcc-no input files now you need to install epdfree download the installer which should be disk image named something like epd_free--macosx- dmg double-click the dmg file to mount itthen double-click the mpkg file inside to run the installer when the installer runsit automatically appends the epdfree executable path to your bash_profile file this is located at /users/your_unamebash_profilesetting path for epd_freepath="/library/frameworks/python framework/versions/current/bin:${path}export path should you encounter any problems in the following stepsyou'll want to inspect your bash_profile and potentially add the above directory to your path nowit' time to install pandas execute this command in the terminalsudo easy_install pandas searching for pandas reading reading reading best matchpandas downloading processing pandaszip writing /tmp/easy_install- mix /pandas-/setup cfg running pandas-/setup py - bdist_egg --dist-dir /tmp/easy_install- mix pandas-/egg-dist-tmp-rhlg adding pandas to easy-install pth file installed /library/frameworks/python framework/versions/ /lib/python site-packages/pandas--py -macosx- - egg processing dependencies for pandas finished processing dependencies for pandas to verify everything is workinglaunch ipython in pylab mode and test importing pandas then making plot interactivelyinstallation and setup www it-ebooks info
: ~/virtualbox vms/windowsxp ipython python |epd_free ( -bit)(defaultapr : : type "copyright""creditsor "licensefor more information ipython -an enhanced interactive python -introduction and overview of ipython' features %quickref -quick reference help -python' own help system object-details about 'object'use 'object??for extra details welcome to pylaba matplotlib-based python environment [backendwxaggfor more informationtype 'help(pylab)in [ ]import pandas in [ ]plot(arange( )if this succeedsa plot window with straight line should pop up gnu/linux somebut not alllinux distributions include sufficiently up-to-date versions of all the required python packages and can be installed using the built-in package management tool like apt detail setup using epdfree as it' easily reproducible across distributions linux details will vary bit depending on your linux flavorbut here give details for debian-based gnu/linux systems like ubuntu and mint setup is similar to os with the exception of how epdfree is installed the installer is shell script that must be executed in the terminal depending on whether you have -bit or -bit systemyou will either need to install the ( -bitor ( -bitinstaller you will then have file named something similar to epd_free--rh - sh to install itexecute this script with bashbash epd_free--rh - sh after accepting the licenseyou will be presented with choice of where to put the epdfree files recommend installing the files in your home directorysay /home/wesmepd (substituting your own username for wesmonce the installer has finishedyou need to add epdfree' bin directory to your $path variable if you are using the bash shell (the default in ubuntufor example)this means adding the following path addition in your bashrcexport path=/home/wesm/epd/bin:$path obviouslysubstitute the installation directory you used for /home/wesm/epdafter doing this you can either start new terminal process or execute your bashrc again with source ~bashrc preliminaries www it-ebooks info
gccbut others may not on debian systemsyou can install gcc by executingsudo apt-get install gcc if you type gcc on the command line it should say something likegcc gccno input files nowtime to install pandaseasy_install pandas if you installed epdfree as rootyou may need to add sudo to the command and enter the sudo or root password to verify things are workingperform the same checks as in the os section python and python the python community is currently undergoing drawn-out transition from the python series of interpreters to the python series until the appearance of python all python code was backwards compatible the community decided that in order to move the language forwardcertain backwards incompatible changes were necessary am writing this book with python as its basisas the majority of the scientific python community has not yet transitioned to python the good news is thatwith few exceptionsyou should have no trouble following along with the book if you happen to be using python integrated development environments (ideswhen asked about my standard development environmenti almost always say "ipython plus text editori typically write program and iteratively test and debug each piece of it in ipython it is also useful to be able to play around with data interactively and visually verify that particular set of data manipulations are doing the right thing libraries like pandas and numpy are designed to be easy-to-use in the shell howeversome will still prefer to work in an ide instead of text editor they do provide many nice "code intelligencefeatures like completion or quickly pulling up the documentation associated with functions and classes here are some that you can exploreeclipse with pydev plugin python tools for visual studio (for windows userspycharm spyder komodo ide installation and setup www it-ebooks info
outside of an internet searchthe scientific python mailing lists are generally helpful and responsive to questions some ones to take look at arepydataa google group list for questions related to python for data analysis and pandas pystatsmodelsfor statsmodels or pandas-related questions numpy-discussionfor numpy-related questions scipy-userfor general scipy or scientific python questions deliberately did not post urls for these in case they change they can be easily located via internet search each year many conferences are held all over the world for python programmers pycon and europython are the two main general python conferences in the united states and europerespectively scipy and euroscipy are scientific-oriented python conferences where you will likely find many "birds of featherif you become more involved with using python for data analysis after reading this book navigating this book if you have never programmed in python beforeyou may actually want to start at the end of the bookwhere have placed condensed tutorial on python syntaxlanguage featuresand built-in data structures like tupleslistsand dicts these things are considered prerequisite knowledge for the remainder of the book the book starts by introducing you to the ipython environment nexti give short introduction to the key features of numpyleaving more advanced numpy use for another at the end of the book theni introduce pandas and devote the rest of the book to data analysis topics applying pandasnumpyand matplotlib (for visualizationi have structured the material in the most incremental way possiblethough there is occasionally some minor cross-over between data files and related material for each are hosted as git repository on githubi encourage you to download the data and use it to replicate the book' code examples and experiment with the tools presented in each will happily accept contributionsscriptsipython notebooksor any other materials you wish to contribute to the book' repository for all to enjoy preliminaries www it-ebooks info
most of the code examples in the book are shown with input and output as it would appear executed in the ipython shell in [ ]code out[ ]output at timesfor claritymultiple code examples will be shown side by side these should be read left to right and executed separately in [ ]code out[ ]output in [ ]code out[ ]output data for examples data sets for the examples in each are hosted in repository on githubhttp//github com/pydata/pydata-book you can download this data either by using the git revision control command-line program or by downloading zip file of the repository from the website have made every effort to ensure that it contains everything necessary to reproduce the examplesbut may have made some mistakes or omissions if soplease send me an -mailwesmckinn@gmail com import conventions the python community has adopted number of naming conventions for commonlyused modulesimport numpy as np import pandas as pd import matplotlib pyplot as plt this means that when you see np arangethis is reference to the arange function in numpy this is done as it' considered bad practice in python software development to import everything (from numpy import *from large package like numpy jargon 'll use some terms common both to programming and data science that you may not be familiar with thushere are some brief definitionsmunge/munging/wrangling describes the overall process of manipulating unstructured and/or messy data into structured or clean form the word has snuck its way into the jargon of many modern day data hackers munge rhymes with "lungenavigating this book www it-ebooks info
description of an algorithm or process that takes code-like form while likely not being actual valid source code syntactic sugar programming syntax which does not add new featuresbut makes something more convenient or easier to type acknowledgements it would have been difficult for me to write this book without the support of large number of people on the 'reilly staffi' very grateful for my editors meghan blanchette and julie steele who guided me through the process mike loukides also worked with me in the proposal stages and helped make the book reality received wealth of technical review from large cast of characters in particularmartin blais and hugh white were incredibly helpful in improving the book' examplesclarityand organization from cover to cover james longdrew conwayfernando perezbrian grangerthomas kluyveradam kleinjosh kleinchang sheand stefan van der walt each reviewed one or more providing pointed feedback from many different perspectives got many great ideas for examples and data sets from friends and colleagues in the data communityamong themmike dewarjeff hammerbacherjames johndrowkristian lumadam kleinhilary masonchang sheand ashley williams am of course indebted to the many leaders in the open source scientific python community who've built the foundation for my development work and gave encouragement while was writing this bookthe ipython core team (fernando perezbrian grangermin ragan-kellythomas kluyverand others)john hunterskipper seaboldtravis oliphantpeter wangeric jonesrobert kernjosef perktoldfrancesc altedchris fonnesbeckand too many others to mention several other people provided great deal of supportideasand encouragement along the waydrew conwaysean taylorgiuseppe paleologojared landerdavid epsteinjohn krowasjoshua bloomden pilsworthjohn myles-whiteand many others 've forgotten ' also like to thank number of people from my formative years firstmy former aqr colleagues who've cheered me on in my pandas work over the yearsalex reyfmanmichael wongtim sargenoktay kurbanovmatthew tschantzroni israelovmichael katzchris ugaprasad ramananted squareand hoon kim lastlymy academic advisors haynes miller (mitand mike west (dukeon the personal sidecasey dinkin provided invaluable day-to-day support during the writing processtolerating my highs and lows as hacked together the final draft on preliminaries www it-ebooks info
to always follow my dreams and to never settle for less.
Introductory Examples

This book teaches you the Python tools to work productively with data. While readers may have many different end goals for their work, the tasks required generally fall into a number of different broad groups:

- Interacting with the outside world: reading and writing with a variety of file formats and databases.
- Preparation: cleaning, munging, combining, normalizing, reshaping, slicing and dicing, and transforming data for analysis.
- Transformation: applying mathematical and statistical operations to groups of data sets to derive new data sets. For example, aggregating a large table by group variables.
- Modeling and computation: connecting your data to statistical models, machine learning algorithms, or other computational tools.
- Presentation: creating interactive or static graphical visualizations or textual summaries.

In this chapter I will show you a few data sets and some things we can do with them. These examples are just intended to pique your interest and thus will only be explained at a high level. Don't worry if you have no experience with any of these tools; they will be discussed in great detail throughout the rest of the book. In the code examples you'll see input and output prompts like In [...]:; these are from the IPython shell.

1.usa.gov data from bit.ly

The URL shortening service bit.ly partnered with the United States government website usa.gov to provide a feed of anonymous data gathered from users who shorten links ending with .gov or .mil. As of this writing, in addition to providing a live feed, hourly snapshots are available as downloadable text files.
web data known as jsonwhich stands for javascript object notation for exampleif we read just the first line of file you may see something like in [ ]path 'ch /usagov_bitly_data- txtin [ ]open(pathreadline(out[ ]'" ""mozilla\\/ (windows nt wow applewebkit\\/ (khtmllike geckochrome\\ safari\\/ "" ""us""nk" "tz""america\\/new_york""gr""ma"" "" qovh"" ""wflqtf"" ""orofrog""al""en-us,en; = ""hh"" usa gov"" ""http:\\/\\/www facebook com\\/ \\/ aqefzjsi\\/ usa gov\\/wflqtf"" ""http:\\/\\/www ncbi nlm nih gov\\/pubmed\\/ "" " "hc" "cy""danvers""ll" - }\npython has numerous built-in and rd party modules for converting json string into python dictionary object here 'll use the json module and its loads function invoked on each line in the sample file downloadedimport json path 'ch /usagov_bitly_data- txtrecords [json loads(linefor line in open(path)if you've never programmed in python beforethe last expression here is called list comprehensionwhich is concise way of applying an operation (like json loadsto collection of strings or other objects convenientlyiterating over an open file handle gives you sequence of its lines the resulting object records is now list of python dictsin [ ]records[ out[ ]{ ' ' 'mozilla/ (windows nt wow applewebkit/ (khtmllike geckochrome safari/ ' 'al' 'en-us,en; = ' ' ' 'us' 'cy' 'danvers' ' ' ' qovh' 'gr' 'ma' ' ' 'wflqtf' 'hc' 'hh' ' usa gov' ' ' 'orofrog' 'll'[ - ] 'nk' ' ' ' ' ' 'tz' 'america/new_york' ' ' ' introductory examples www it-ebooks info
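If list comprehensions are new to you, the one-line records expression above is equivalent to the explicit loop in the following sketch; the file name below is a placeholder, so substitute the path of the usa.gov sample file used in the text.

import json

# Placeholder path; use the downloaded usa.gov sample data file
path = 'usagov_bitly_data.txt'

# Same result as: records = [json.loads(line) for line in open(path)]
records = []
for line in open(path):
    records.append(json.loads(line))

# Each element is now a Python dict parsed from one line of JSON
print(records[0]['tz'])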
now easy to access individual values within records by passing string for the key you wish to accessin [ ]records[ ]['tz'out[ ] 'america/new_yorkthe here in front of the quotation stands for unicodea standard form of string encoding note that ipython shows the time zone string object representation here rather than its print equivalentin [ ]print records[ ]['tz'america/new_york counting time zones in pure python suppose we were interested in the most often-occurring time zones in the data set (the tz fieldthere are many ways we could do this firstlet' extract list of time zones again using list comprehensionin [ ]time_zones [rec['tz'for rec in recordskeyerror traceback (most recent call last/home/wesm/book_scripts/whettingin (---- time_zones [rec['tz'for rec in recordskeyerror'tzoopsturns out that not all of the records have time zone field this is easy to handle as we can add the check if 'tzin rec at the end of the list comprehensionin [ ]time_zones [rec['tz'for rec in records if 'tzin recin [ ]time_zones[: out[ ][ 'america/new_york' 'america/denver' 'america/new_york' 'america/sao_paulo' 'america/new_york' 'america/new_york' 'europe/warsaw' '' '' ''just looking at the first time zones we see that some of them are unknown (emptyyou can filter these out also but 'll leave them in for now nowto produce counts by time zone 'll show two approachesthe harder way (using just the python standard libraryand the easier way (using pandasone way to do the counting is to use dict to store counts while we iterate through the time zonesdef get_counts(sequence)counts { usa gov data from bit ly www it-ebooks info
if in countscounts[ + elsecounts[ return counts if you know bit more about the python standard libraryyou might prefer to write the same thing more brieflyfrom collections import defaultdict def get_counts (sequence)counts defaultdict(intvalues will initialize to for in sequencecounts[ + return counts put this logic in function just to make it more reusable to use it on the time zonesjust pass the time_zones listin [ ]counts get_counts(time_zonesin [ ]counts['america/new_york'out[ ] in [ ]len(time_zonesout[ ] if we wanted the top time zones and their countswe have to do little bit of dictionary acrobaticsdef top_counts(count_dictn= )value_key_pairs [(counttzfor tzcount in count_dict items()value_key_pairs sort(return value_key_pairs[- :we have thenin [ ]top_counts(countsout[ ][( 'america/sao_paulo')( 'europe/madrid')( 'pacific/honolulu')( 'asia/tokyo')( 'europe/london')( 'america/denver')( 'america/los_angeles')( 'america/chicago')( '')( 'america/new_york') introductory examples www it-ebooks info
which makes this task lot easierin [ ]from collections import counter in [ ]counts counter(time_zonesin [ ]counts most_common( out[ ][( 'america/new_york' )( '' )( 'america/chicago' )( 'america/los_angeles' )( 'america/denver' )( 'europe/london' )( 'asia/tokyo' )( 'pacific/honolulu' )( 'europe/madrid' )( 'america/sao_paulo' )counting time zones with pandas the main pandas data structure is the dataframewhich you can think of as representing table or spreadsheet of data creating dataframe from the original set of records is simplein [ ]from pandas import dataframeseries in [ ]import pandas as pd in [ ]frame dataframe(recordsin [ ]frame out[ ]int index entries to data columns_heartbeat_ non-null values non-null values al non-null values non-null values cy non-null values non-null values gr non-null values non-null values hc non-null values hh non-null values kw non-null values non-null values ll non-null values nk non-null values non-null values non-null values tz non-null values usa gov data from bit ly www it-ebooks info
non-null values dtypesfloat ( )object( in [ ]frame['tz'][: out[ ] america/new_york america/denver america/new_york america/sao_paulo america/new_york america/new_york europe/warsaw nametz the output shown for the frame is the summary viewshown for large dataframe objects the series object returned by frame['tz'has method value_counts that gives us what we're looking forin [ ]tz_counts frame['tz'value_counts(in [ ]tz_counts[: out[ ]america/new_york america/chicago america/los_angeles america/denver europe/london asia/tokyo pacific/honolulu europe/madrid america/sao_paulo thenwe might want to make plot of this data using plotting librarymatplotlib you can do bit of munging to fill in substitute value for unknown and missing time zone data in the records the fillna function can replace missing (navalues and unknown (empty stringsvalues can be replaced by boolean array indexingin [ ]clean_tz frame['tz'fillna('missing'in [ ]clean_tz[clean_tz ='''unknownin [ ]tz_counts clean_tz value_counts(in [ ]tz_counts[: out[ ]america/new_york unknown america/chicago america/los_angeles america/denver missing introductory examples www it-ebooks info
asia/tokyo pacific/honolulu europe/madrid making horizontal bar plot can be accomplished using the plot method on the counts objectsin [ ]tz_counts[: plot(kind='barh'rot= see figure - for the resulting figure we'll explore more tools for working with this kind of data for examplethe field contains information about the browserdeviceor application used to perform the url shorteningin [ ]frame[' '][ out[ ] 'googlemaps/rochesternyin [ ]frame[' '][ out[ ] 'mozilla/ (windows nt rv:gecko/ firefox/in [ ]frame[' '][ out[ ] 'mozilla/ (linuxuandroid en-uslg- / build/frg gapplewebkit/ ( figure - top time zones in the usa gov sample data parsing all of the interesting information in these "agentstrings may seem like daunting task luckilyonce you have mastered python' built-in string functions and regular expression capabilitiesit is really not so bad for examplewe could split off the first token in the string (corresponding roughly to the browser capabilityand make another summary of the user behaviorin [ ]results series([ split()[ for in frame dropna()]in [ ]results[: out[ ] mozilla/ googlemaps/rochesterny mozilla/ mozilla/ mozilla/ usa gov data from bit ly www it-ebooks info
out[ ]mozilla/ mozilla/ googlemaps/rochesterny opera/ test_internet_agent googleproducer mozilla/ blackberry nowsuppose you wanted to decompose the top time zones into windows and nonwindows users as simplificationlet' say that user is on windows if the string 'windowsis in the agent string since some of the agents are missingi'll exclude these from the datain [ ]cframe frame[frame notnull()we want to then compute value whether each row is windows or notin [ ]operating_system np where(cframe[' 'str contains('windows')'windows''not windows'in [ ]operating_system[: out[ ] windows not windows windows not windows windows namea thenyou can group the data by its time zone column and this new list of operating systemsin [ ]by_tz_os cframe groupby(['tz'operating_system]the group countsanalogous to the value_counts function abovecan be computed using size this result is then reshaped into table with unstackin [ ]agg_counts by_tz_os size(unstack(fillna( in [ ]agg_counts[: out[ ] tz africa/cairo africa/casablanca africa/ceuta africa/johannesburg africa/lusaka america/anchorage america/argentina/buenos_aires not windows windows introductory examples www it-ebooks info
america/argentina/mendoza finallylet' select the top overall time zones to do soi construct an indirect index array from the row counts in agg_countsuse to sort in ascending order in [ ]indexer agg_counts sum( argsort(in [ ]indexer[: out[ ]tz africa/cairo africa/casablanca africa/ceuta africa/johannesburg africa/lusaka america/anchorage america/argentina/buenos_aires america/argentina/cordoba america/argentina/mendoza then use take to select the rows in that orderthen slice off the last rowsin [ ]count_subset agg_counts take(indexer)[- :in [ ]count_subset out[ ] not windows tz america/sao_paulo europe/madrid pacific/honolulu asia/tokyo europe/london america/denver america/los_angeles america/chicago america/new_york windows thenas shown in the preceding code blockthis can be plotted in bar ploti'll make it stacked bar plot by passing stacked=true (see figure - in [ ]count_subset plot(kind='barh'stacked=truethe plot doesn' make it easy to see the relative percentage of windows users in the smaller groupsbut the rows can easily be normalized to sum to then plotted again (see figure - )in [ ]normed_subset count_subset div(count_subset sum( )axis= in [ ]normed_subset plot(kind='barh'stacked=true usa gov data from bit ly www it-ebooks info
figure - percentage windows and non-windows users in top-occurring time zones all of the methods employed here will be examined in great detail throughout the rest of the book movielens data set grouplens research ( introductory examples www it-ebooks info
demographic data about the users (agezip codegenderand occupationsuch data is often of interest in the development of recommendation systems based on machine learning algorithms while will not be exploring machine learning techniques in great detail in this booki will show you how to slice and dice data sets like these into the exact form you need the movielens data set contains million ratings collected from users on movies it' spread across tablesratingsuser informationand movie information after extracting the data from the zip fileeach table can be loaded into pandas dataframe object using pandas read_tableimport pandas as pd unames ['user_id''gender''age''occupation''zip'users pd read_table('ml- /users dat'sep='::'header=nonenames=unamesrnames ['user_id''movie_id''rating''timestamp'ratings pd read_table('ml- /ratings dat'sep='::'header=nonenames=rnamesmnames ['movie_id''title''genres'movies pd read_table('ml- /movies dat'sep='::'header=nonenames=mnamesyou can verify that everything succeeded by looking at the first few rows of each dataframe with python' slice syntaxin [ ]users[: out[ ]user_id gender age occupation in [ ]ratings[: out[ ]user_id movie_id rating in [ ]movies[: out[ ]movie_id zip timestamp title toy story ( jumanji ( grumpier old men ( waiting to exhale ( genres animation|children' |comedy adventure|children' |fantasy comedy|romance comedy|drama movielens data set www it-ebooks info
father of the bride part ii ( comedy in [ ]ratings out[ ]int index entries to data columnsuser_id non-null values movie_id non-null values rating non-null values timestamp non-null values dtypesint ( note that ages and occupations are coded as integers indicating groups described in the data set' readme file analyzing the data spread across three tables is not simple taskfor examplesuppose you wanted to compute mean ratings for particular movie by sex and age as you will seethis is much easier to do with all of the data merged together into single table using pandas' merge functionwe first merge ratings with users then merging that result with the movies data pandas infers which columns to use as the merge (or joinkeys based on overlapping namesin [ ]data pd merge(pd merge(ratingsusers)moviesin [ ]data out[ ]int index entries to data columnsuser_id non-null values movie_id non-null values rating non-null values timestamp non-null values gender non-null values age non-null values occupation non-null values zip non-null values title non-null values genres non-null values dtypesint ( )object( in [ ]data ix[ out[ ]user_id movie_id rating timestamp gender age occupation zip title toy story ( genres animation|children' |comedy name introductory examples www it-ebooks info
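Before going further, here is a toy sketch (with made-up values) of how pd.merge picks its join keys when none are specified: both frames share a user_id column, so pandas joins on it automatically.

import pandas as pd

ratings = pd.DataFrame({'user_id': [1, 1, 2],
                        'movie_id': [10, 20, 10],
                        'rating': [5, 3, 4]})
users = pd.DataFrame({'user_id': [1, 2], 'gender': ['F', 'M']})

merged = pd.merge(ratings, users)   # inner join on the overlapping 'user_id'
print(merged)

# Being explicit is equivalent here and often clearer:
print(pd.merge(ratings, users, on='user_id'))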
is straightforward once you build some familiarity with pandas to get mean movie ratings for each film grouped by genderwe can use the pivot_table methodin [ ]mean_ratings data pivot_table('rating'rows='title'cols='gender'aggfunc='mean'in [ ]mean_ratings[: out[ ]gender title $ , , duck ( 'night mother ( 'til there was you ( 'burbsthe ( and justice for all ( this produced another dataframe containing mean ratings with movie totals as row labels and gender as column labels firsti' going to filter down to movies that received at least ratings ( completely arbitrary number)to do thisi group the data by title and use size(to get series of group sizes for each titlein [ ]ratings_by_title data groupby('title'size(in [ ]ratings_by_title[: out[ ]title $ , , duck ( 'night mother ( 'til there was you ( 'burbsthe ( and justice for all ( - ( things hate about you ( dalmatians ( dalmatians ( angry men ( in [ ]active_titles ratings_by_title index[ratings_by_title > in [ ]active_titles out[ ]index(['burbsthe ( ) things hate about you ( ) dalmatians ( )young sherlock holmes ( )zero effect ( )existenz ( )]dtype=objectthe index of titles receiving at least ratings can then be used to select rows from mean_ratings abovein [ ]mean_ratings mean_ratings ix[active_titlesin [ ]mean_ratings out[ ]index entries'burbsthe ( to existenz ( movielens data set www it-ebooks info
non-null values non-null values dtypesfloat ( to see the top films among female viewerswe can sort by the column in descending orderin [ ]top_female_ratings mean_ratings sort_index(by=' 'ascending=falsein [ ]top_female_ratings[: out[ ]gender close shavea ( wrong trousersthe ( sunset blvd ( sunset boulevard( wallace gromitthe best of aardman animation ( schindler' list ( shawshank redemptionthe ( grand day outa ( to kill mockingbird ( creature comforts ( usual suspectsthe ( measuring rating disagreement suppose you wanted to find the movies that are most divisive between male and female viewers one way is to add column to mean_ratings containing the difference in meansthen sort by thatin [ ]mean_ratings['diff'mean_ratings[' 'mean_ratings[' 'sorting by 'diffgives us the movies with the greatest rating difference and which were preferred by womenin [ ]sorted_by_diff mean_ratings sort_index(by='diff'in [ ]sorted_by_diff[: out[ ]gender dirty dancing ( jumpinjack flash ( grease ( little women ( steel magnolias ( anastasia ( rocky horror picture showthe ( color purplethe ( age of innocencethe ( free willy ( french kiss ( little shop of horrorsthe ( guys and dolls ( mary poppins ( patch adams ( diff - - - - - - - - - - - - - - - introductory examples www it-ebooks info
preferred by men that women didn' rate as highlyreverse order of rowstake first rows in [ ]sorted_by_diff[::- ][: out[ ]gender goodthe bad and the uglythe ( kentucky fried moviethe ( dumb dumber ( longest daythe ( cable guythe ( evil dead ii (dead by dawn( hiddenthe ( rocky iii ( caddyshack ( for few dollars more ( porky' ( animal house ( exorcistthe ( fright night ( barb wire ( diff suppose instead you wanted the movies that elicited the most disagreement among viewersindependent of gender disagreement can be measured by the variance or standard deviation of the ratingsstandard deviation of rating grouped by title in [ ]rating_std_by_title data groupby('title')['rating'std(filter down to active_titles in [ ]rating_std_by_title rating_std_by_title ix[active_titlesorder series by value in descending order in [ ]rating_std_by_title order(ascending=false)[: out[ ]title dumb dumber ( blair witch projectthe ( natural born killers ( tank girl ( rocky horror picture showthe ( eyes wide shut ( evita ( billy madison ( fear and loathing in las vegas ( bicentennial man ( namerating you may have noticed that movie genres are given as pipe-separated (|string if you wanted to do some analysis by genremore work would be required to transform the genre information into more usable form will revisit this data later in the book to illustrate such transformation movielens data set www it-ebooks info
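A note if you are following along with a recent pandas release: the sorting methods used in this section, Series.order() and sort_index(by=...), come from the pandas versions current when this was written and have since been replaced by sort_values(). A minimal sketch of the modern equivalents, with made-up values:

import pandas as pd

s = pd.Series([0.3, 1.2, 0.7])
print(s.sort_values(ascending=False)[:2])   # was: s.order(ascending=False)

df = pd.DataFrame({'diff': [0.2, -0.1, 0.4]})
print(df.sort_values(by='diff'))            # was: df.sort_index(by='diff')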
the united states social security administration (ssahas made available data on the frequency of baby names from through the present hadley wickhaman author of several popular packageshas often made use of this data set in illustrating data manipulation in in [ ]names head( out[ ]name sex births mary anna emma elizabeth minnie margaret ida alice bertha sarah year there are many things you might want to do with the data setvisualize the proportion of babies given particular name (your ownor another nameover time determine the relative rank of name determine the most popular names in each year or the names with largest increases or decreases analyze trends in namesvowelsconsonantslengthoverall diversitychanges in spellingfirst and last letters analyze external sources of trendsbiblical namescelebritiesdemographic changes using the tools we've looked at so farmost of these kinds of analyses are very straightforwardso will walk you through many of them encourage you to download and explore the data yourself if you find an interesting pattern in the datai would love to hear about it as of this writingthe us social security administration makes available data filesone per yearcontaining the total number of births for each sex/name combination the raw archive of these files can be obtained herein the event that this page has been moved by the time you're reading thisit can most likely be located again by internet search after downloading the "national datafile names zip and unzipping ityou will have directory containing series of files like yob txt use the unix head command to look at the first lines of one of the files (on windowsyou can use the more command or open it in text editor) introductory examples www it-ebooks info
mary, , anna, , emma, , elizabeth, , minnie, , margaret, , ida, , alice, , bertha, , sarah, , as this is nicely comma-separated formit can be loaded into dataframe with pandas read_csvin [ ]import pandas as pd in [ ]names pd read_csv('names/yob txt'names=['name''sex''births']in [ ]names out[ ]int index entries to data columnsname non-null values sex non-null values births non-null values dtypesint ( )object( these files only contain names with at least occurrences in each yearso for simplicity' sake we can use the sum of the births column by sex as the total number of births in that yearin [ ]names groupby('sex'births sum(out[ ]sex namebirths since the data set is split into files by yearone of the first things to do is to assemble all of the data into single dataframe and further to add year field this is easy to do using pandas concat is the last available year right now years range( pieces [columns ['name''sex''births'for year in yearspath 'names/yob% txtyear frame pd read_csv(pathnames=columnsframe['year'year pieces append(frameus baby names - www it-ebooks info
names pd concat(piecesignore_index=truethere are couple things to note here firstremember that concat glues the dataframe objects together row-wise by default secondlyyou have to pass ignore_index=true because we're not interested in preserving the original row numbers returned from read_csv so we now have very large dataframe containing all of the names datanow the names dataframe looks likein [ ]names out[ ]int index entries to data columnsname non-null values sex non-null values births non-null values year non-null values dtypesint ( )object( with this data in handwe can already start aggregating the data at the year and sex level using groupby or pivot_tablesee figure - in [ ]total_births names pivot_table('births'rows='year'cols='sex'aggfunc=sumin [ ]total_births tail(out[ ]sex year in [ ]total_births plot(title='total births by sex and year'nextlet' insert column prop with the fraction of babies given each name relative to the total number of births prop value of would indicate that out of every babies was given particular name thuswe group the data by year and sexthen add the new column to each groupdef add_prop(group)integer division floors births group births astype(floatgroup['prop'births births sum(return group names names groupby(['year''sex']apply(add_prop introductory examples www it-ebooks info
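If the groupby/apply pattern above is unfamiliar, here is a toy sketch of the same group-wise normalization on made-up data; recent pandas versions may emit a warning about applying to the grouping columns, but the pattern is the same.

import pandas as pd

df = pd.DataFrame({'year': [2000, 2000, 2001, 2001],
                   'sex':  ['F', 'F', 'F', 'F'],
                   'births': [30, 70, 40, 60]})

def add_prop(group):
    # Within each (year, sex) group, divide by the group total
    births = group.births.astype(float)
    group['prop'] = births / births.sum()
    return group

result = df.groupby(['year', 'sex']).apply(add_prop)
print(result)   # prop is 0.3/0.7 within 2000 and 0.4/0.6 within 2001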
remember that because births is of integer typewe have to cast either the numerator or denominator to floating point to compute fraction (unless you are using python !the resulting complete data set now has the following columnsin [ ]names out[ ]int index entries to data columnsname non-null values sex non-null values births non-null values year non-null values prop non-null values dtypesfloat ( )int ( )object( when performing group operation like thisit' often valuable to do sanity checklike verifying that the prop column sums to within all the groups since this is floating point datause np allclose to check that the group sums are sufficiently close to (but perhaps not exactly equal to in [ ]np allclose(names groupby(['year''sex']prop sum() out[ ]true now that this is donei' going to extract subset of the data to facilitate further analysisthe top names for each sex/year combination this is yet another group operationdef get_top (group)return group sort_index(by='births'ascending=false)[: us baby names - www it-ebooks info
top grouped apply(get_top if you prefer do-it-yourself approachyou could also dopieces [for yeargroup in names groupby(['year''sex'])pieces append(group sort_index(by='births'ascending=false)[: ]top pd concat(piecesignore_index=truethe resulting data set is now quite bit smallerin [ ]top out[ ]int index entries to data columnsname non-null values sex non-null values births non-null values year non-null values prop non-null values dtypesfloat ( )int ( )object( we'll use this top , data set in the following investigations into the data analyzing naming trends with the full data set and top , data set in handwe can start analyzing various naming trends of interest splitting the top , names into the boy and girl portions is easy to do firstin [ ]boys top [top sex =' 'in [ ]girls top [top sex =' 'simple time serieslike the number of johns or marys for each year can be plotted but require bit of munging to be bit more useful let' form pivot table of the total number of births by year and namein [ ]total_births top pivot_table('births'rows='year'cols='name'aggfunc=sumnowthis can be plotted for handful of names using dataframe' plot methodin [ ]total_births out[ ]int index entries to columns entriesaaden to zuri dtypesfloat ( in [ ]subset total_births[['john''harry''mary''marilyn']in [ ]subset plot(subplots=truefigsize=( )grid=falsetitle="number of births per year" introductory examples www it-ebooks info
have grown out of favor with the american population but the story is actually more complicated than thatas will be explored in the next section figure - few boy and girl names over time measuring the increase in naming diversity one explanation for the decrease in plots above is that fewer parents are choosing common names for their children this hypothesis can be explored and confirmed in the data one measure is the proportion of births represented by the top most popular nameswhich aggregate and plot by year and sexin [ ]table top pivot_table('prop'rows='year'cols='sex'aggfunc=sumin [ ]table plot(title='sum of table prop by year and sex'yticks=np linspace( )xticks=range( )see figure - for this plot so you can see thatindeedthere appears to be increasing name diversity (decreasing total proportion in the top , another interesting metric is the number of distinct namestaken in order of popularity from highest to lowestin the top of births this number is bit more tricky to compute let' consider just the boy names from in [ ]df boys[boys year = in [ ]df out[ ]int index entries to data columnsus baby names - www it-ebooks info
non-null values sex non-null values births non-null values year non-null values prop non-null values dtypesfloat ( )int ( )object( figure - proportion of births represented in top names by sex after sorting prop in descending orderwe want to know how many of the most popular names it takes to reach you could write for loop to do thisbut vectorized numpy way is bit more clever taking the cumulative sumcumsumof prop then calling the method searchsorted returns the position in the cumulative sum at which would need to be inserted to keep it in sorted orderin [ ]prop_cumsum df sort_index(by='prop'ascending=falseprop cumsum(in [ ]prop_cumsum[: out[ ] in [ ]prop_cumsum searchsorted( out[ ] introductory examples www it-ebooks info
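To see exactly what the cumsum-plus-searchsorted idiom is doing, here is a toy sketch on a tiny array of proportions; the threshold is illustrative only.

import numpy as np

# Proportions already sorted in descending order
props = np.array([0.4, 0.3, 0.2, 0.1])
cumulative = props.cumsum()                  # [0.4, 0.7, 0.9, 1.0]

target = 0.5                                 # illustrative threshold
position = cumulative.searchsorted(target)   # index where target would be inserted
print(position)        # 1 -> the first two values (0.4 + 0.3) reach the target
print(position + 1)    # count of values needed, since positions are zero-indexed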