Most Popular Artists

The next information the reader may be interested in is: who are the most popular artists in the dataset? The code to plot this information is quite similar to the code given previously, so we will not include the exact code snippet. The resultant graph is shown in the figure that follows (Most Popular Artists).

We can read the plot to see that Coldplay is one of the most popular artists according to our dataset. A keen music enthusiast will notice that we don't have a lot of representation from the classic artists (like The Beatles), except for maybe Metallica and Radiohead. This underlines two key points: first, the data is mostly sourced from a generation that is not always listening to the classic artists online, and second, it is rare for an artist who is not of the present generation to still score high when it comes to online plays. Surprisingly, the only examples of such behavior in our dataset are Metallica and Radiohead; they have their origins in earlier decades but are still pretty popular when it comes to online play counts. Diverse music genres are nevertheless represented among the top artists, with popular rap artists like Eminem, alternative rock bands like Linkin Park and The Killers, and even pop/rock bands like Train and OneRepublic, besides classic rock or metal bands like Radiohead and Metallica.
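Although the exact snippet is not reproduced in the text, a minimal sketch of how such a plot could be produced, assuming the triplet_dataset_sub_song_merged dataframe and the artist_name and listen_count columns used elsewhere in this chapter, might look like this (aggregating by total listen count is an assumption; popularity could equally be measured by the number of distinct listeners):

In []:
    popular_artists = triplet_dataset_sub_song_merged \
        .groupby('artist_name')['listen_count'].sum() \
        .sort_values(ascending=False).head(20)
    popular_artists.plot(kind='bar', figsize=(12, 6))
    plt.xlabel('Artist')
    plt.ylabel('Total listen count')
    plt.title('Most Popular Artists')
    plt.show()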
Another slightly odd reading is that Coldplay is the most popular artist, but they don't have a candidate in the most popular songs list. This indirectly hints at an even distribution of play counts across all of their tracks. You can take it as an exercise to determine the song play distribution for each of the artists who appear in the most-popular-artists plot; this will give you a clue as to whether an artist holds skewed or uniform popularity. This idea can be further developed into a full-blown recommendation engine.

User vs. Songs Distribution

The last information that we can seek from our dataset is the distribution of song counts for users. This tells us how the number of songs that users listen to is distributed. We can use this information to create different categories of users and modify the recommendation engine on that basis. Users who listen to a very select number of songs can be served by simple recommendation engines, while users who provide us with lots of insight into their behavior are candidates for more complex recommendation engines.

Before we go on to plot that distribution, let's try to find some statistical information about it. The following code calculates the distribution and then shows summary statistics about it.

In []:
    user_song_count_distribution = triplet_dataset_sub_song_merged[['user', 'title']] \
        .groupby('user').count().reset_index().sort_values(by='title', ascending=False)
    user_song_count_distribution.title.describe()

Out[]:
    count    ...
    mean     ...
    std      ...
    min      ...
    max      ...
    Name: title, dtype: float64

This gives us some important information about how the song counts are distributed across the users. We see the average number of songs a user listens to, but some users have a far more voracious appetite for song diversification. Let's try to visualize this particular distribution. The code that follows plots the distribution of song counts across our dataset. We have intentionally kept the number of bins small, as that gives approximate information about the number of classes of users we have.

In []:
    x = user_song_count_distribution.title
    n, bins, patches = plt.hist(x, ..., facecolor='green', alpha=...)  # bin count and alpha not shown here
    plt.xlabel('Play Counts')
    plt.ylabel('Probability')
    plt.title(r'$\mathrm{Histogram\ of\ User\ Play\ Count\ Distribution}$')
    plt.grid(True)
    plt.show()

The distribution plot generated by the code is shown in the next figure. The image clearly shows that, although we have a huge variance between the minimum and maximum song counts, the bulk of the distribution's mass is concentrated in a narrow band of song counts.
Figure: Distribution of play counts for users

Given the nature of the data, we can perform a large variety of interesting visualizations, for example plotting how the top tracks of artists are played, the play count distribution on a yearly basis, and so on. But we believe by now you are sufficiently skilled in both the art of asking pertinent questions and answering them through visualizations, so we will conclude our exploratory data analysis and move on to the major focus of the chapter, which is the development of recommendation engines. Feel free to try out additional analysis and visualizations on this data and, if you find something cool, as always, feel free to send a pull request to the book's code repository.

Recommendation Engines

The work of a recommendation engine is succinctly captured in its name: all it needs to do is make recommendations. But we must admit that this description is deceptively simple. Recommendation engines are a way of modeling and rearranging information available about user preferences and then using that information to provide informed recommendations. The basis of a recommendation engine is always the recorded interaction between users and products. For example, a movie recommendation engine will be based on the ratings provided to different movies by users, a news article recommender will take into account the articles the user has read in the past, and so on.

This section uses the user-song play count dataset to uncover different ways in which we can recommend new tracks to different users. We will start with a very basic system and try to evolve linearly into a sophisticated recommendation system. Before we go into building those systems, we will examine their utility and the various types of recommendation engines.
Types of Recommendation Engines

The major area of distinction between different recommendation engines comes from the entity that they assume is the most important in the process of generating recommendations. There are different options for choosing the central entity, and that choice determines the type of recommendation engine we will develop.

User-based recommendation engines: In these types of recommendation engines, the user is the central entity. The algorithm looks for similarities among users and, on the basis of those similarities, comes up with the recommendations.

Content-based recommendation engines: On the other end of the recommendation engine spectrum, we have the content-based recommendation engine. Here the central entity is the content that we are trying to recommend; in our case, that entity is songs. These algorithms attempt to find features describing the content and then find similar content. Those similarities are then used to make recommendations to the end users.

Hybrid recommendation engines: These types of recommendation engines take into account both the features of the users and the features of the content to develop recommendations. They are also sometimes termed collaborative filtering recommendation engines, as they "collaborate" by using the similarities of content as well as of users. They are one of the most effective classes of recommendation engines, as they take the best features of both of the other classes.

Utility of Recommendation Engines

In the previous chapter, we discussed an important requirement of any organization: understanding the customer. This requirement is even more important for online businesses, which have almost no physical interaction with their customers. Recommendation engines provide wonderful opportunities for these organizations to not only understand their clientele but also to use that information to increase their revenues. Another important advantage of recommendation engines is that they have very limited potential downside: the worst thing a user can do is not pay attention to the recommendation made to him. An organization can easily integrate a crude recommendation engine into its interaction with users and then, on the basis of its performance, decide to develop a more sophisticated version.

Although unverified claims are often made about the impact of recommendation engines on the sales of major online service providers like Netflix, Amazon, YouTube, etc., an interesting insight into their effectiveness is provided by several papers. One such study claims that a good recommendation engine tends to increase sales volume considerably and also leads customers to discover more products, which in turn adds to a positive customer experience.

Before we start discussing the various recommendation engines, we would like to thank our friend and fellow data scientist, author, and course instructor Siraj Raval for helping us with a major chunk of the code used in this chapter pertaining to recommendation engines, as well as for sharing his codebase for developing recommendation engines (check out Siraj's GitHub). We will be modifying some of his code samples to develop the recommendation engines that we discuss in the subsequent sections. Interested readers can also check out Siraj's YouTube channel at www.youtube.com/c/sirajology, where he makes excellent videos on machine learning, deep learning, artificial intelligence, and other fun educational content.
Popularity-Based Recommendation Engine

The simplest recommendation engine is naturally the easiest to develop. This type of recommendation engine is very straightforward to build: the driving logic is that if some item is liked (or listened to) by a vast majority of our user base, then it is a good idea to recommend that item to users who have not interacted with it. The code to develop this kind of recommendation is extremely easy and is effectively just a summarization procedure. To develop these recommendations, we determine which songs in our dataset have the most users listening to them, and that set becomes our standard recommendation set for each user. The code that follows defines a function that does this summarization and returns the resultant dataframe.

In []:
    def create_popularity_recommendation(train_data, user_id, item_id):
        # get a count of user_ids for each unique song as the recommendation score
        train_data_grouped = train_data.groupby([item_id]).agg({user_id: 'count'}).reset_index()
        train_data_grouped.rename(columns={user_id: 'score'}, inplace=True)

        # sort the songs based upon recommendation score
        train_data_sort = train_data_grouped.sort_values(['score', item_id], ascending=[0, 1])

        # generate a recommendation rank based upon score
        train_data_sort['Rank'] = train_data_sort['score'].rank(ascending=0, method='first')

        # keep only the top rows as the standard recommendations (cutoff not shown here)
        popularity_recommendations = train_data_sort.head(...)
        return popularity_recommendations

In []:
    recommendations = create_popularity_recommendation(triplet_dataset_sub_song_merged, 'user', 'title')

In []:
    recommendations

We can use this function on our dataset to generate the top recommendations for each of our users. The output of our plain vanilla recommendation system is shown in the next figure. You can see that the recommendations are very similar to the list of the most popular songs that you saw in the last section, which is expected, as the logic behind both is the same; only the output is different.
Figure: Recommendations by the popularity recommendation engine

Item Similarity Based Recommendation Engine

In the last section, we witnessed one of the simplest recommendation engines. In this section, we deal with a slightly more complex solution. This recommendation engine is based on calculating similarities between a user's items and the other items in our dataset. Before we proceed further with the development effort, let's describe how we plan to calculate the "item-item" similarity, which is central to this recommendation engine. Usually, to define similarity among a set of items, we need a feature set on the basis of which the items can be described. In our case, that would mean features of the songs on the basis of which one song can be differentiated from another. Although we don't have ready access to these features (or do we?), we will define the similarity in terms of the users who listen to these songs. Confused? Consider this mathematical formula, which should give you a little more insight into the metric:

    similarity(i, j) = intersection(users_i, users_j) / union(users_i, users_j)

This similarity metric is known as the Jaccard index, and in our case we can use it to define the similarity between two songs. The basic idea is that if two songs are being listened to by a large fraction of common users out of the total listeners, the two songs can be said to be similar to each other. On the basis of this similarity metric, we can define the steps that the algorithm will take to recommend a song to a user.
1. Determine the songs listened to by the user.
2. Calculate the similarity of each song in the user's list to those in our dataset, using the similarity metric defined previously.
3. Determine the songs that are most similar to the songs already listened to by the user.
4. Select a subset of these songs as recommendations based on the similarity score.

As step 2 can become computation-intensive when we have a large number of songs, we will subset our data to the most popular songs to make the computation more feasible. Since we select the most popular songs, it is quite unlikely that we will miss out on any important recommendations.

In []:
    song_count_subset = song_count_df.head(n=...)  # keep only the top-N most popular songs
    user_subset = list(play_count_subset.user)
    song_subset = list(song_count_subset.song)
    triplet_dataset_sub_song_merged_sub = triplet_dataset_sub_song_merged[
        triplet_dataset_sub_song_merged.song.isin(song_subset)]

This code subsets our dataset to contain only the most popular songs. We then create our similarity based recommendation engine and generate recommendations for a random user. We leverage Siraj's recommenders module here for the item similarity based recommendation system.

In []:
    train_data, test_data = train_test_split(triplet_dataset_sub_song_merged_sub,
                                             test_size=..., random_state=...)
    is_model = Recommenders.item_similarity_recommender_py()
    is_model.create(train_data, 'user', 'title')
    user_id = list(train_data.user)[...]  # pick one user from the training data
    user_items = is_model.get_user_items(user_id)
    is_model.recommend(user_id)

The recommendations for this random user are shown in the next figure. Notice the stark difference from the popularity based recommendation engine; this particular person is almost guaranteed to not like the most popular songs in our dataset.
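Under the hood, the similarity computation is the Jaccard index defined earlier. The following is only a small illustrative sketch of that metric for two songs, not the recommenders module's actual implementation; jaccard_similarity is a hypothetical helper and it assumes the triplet_dataset_sub_song_merged_sub dataframe created above:

In []:
    def jaccard_similarity(song_a, song_b, df=triplet_dataset_sub_song_merged_sub):
        # listeners of each song as sets of user ids
        users_a = set(df[df.title == song_a].user)
        users_b = set(df[df.title == song_b].user)
        common = users_a & users_b
        union = users_a | users_b
        return len(common) / len(union) if union else 0.0

The recommender computes such similarities between every song in the user's list and every candidate song, and ranks candidates by their aggregate similarity score.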
Figure: Recommendations by the item similarity based recommendation engine

## Note  At the start of this section, we mentioned that we don't readily have access to song features that we could use to define similarity. As part of the Million Song Database, those features are in fact available for each of the songs in the dataset. You are encouraged to replace this implicit similarity, based on common users, with an explicit similarity based on features of the songs and see how the recommendations change.

Matrix Factorization Based Recommendation Engine

Matrix factorization based recommendation engines are probably the most used recommendation engines when it comes to production implementations. In this section, we give an intuition-based introduction to matrix factorization based recommendation engines. We avoid going into a heavily mathematical discussion, as from a practitioner's perspective the intent is to see how we can leverage this technique to get valuable recommendations from real data.

Matrix factorization refers to the identification of two or more matrices from an initial matrix, such that when these matrices are multiplied we get back the original matrix. Matrix factorization can be used to discover latent features between two different kinds of entities. What are these latent features? Let's discuss that for a moment before we get to the mathematical explanation.

Consider for a moment why you like a certain song: the answer may range from the soulful lyrics, to the catchy music, to it being melodious, and many more. We can try to explain a song in mathematical terms by measuring its beats, tempo, and other such features and then define similar features in terms of the user. For example, we can determine from a user's listening history that he likes songs with beats that are a bit on the higher side, and so on. Once we have consciously defined such "features," we can use them to find matches for a user based on some similarity criteria. But more often than not, the tough part in this process is defining these features, as we have no handbook for what makes a good feature; it is mostly based on domain experts and a bit of experimentation. But as you will see in this section, you can use matrix factorization to discover these latent features, and they seem to work great.
The starting point of any matrix factorization-based method is the utility matrix, as illustrated in the table below. The utility matrix is a matrix of user x item dimensions, in which each row represents a user and each column stands for an item.

Table: Example of a utility matrix (users as rows, items as columns; each cell holds a user's rating of an item, and most cells are empty because each user has rated only a few items)

Notice that we have a lot of missing values in the matrix; these are the items the user hasn't rated, either because he hasn't consumed them or because he doesn't want to. Even from such a sparse matrix we can often guess recommendations right away: if two users dislike the same items, it is likely that they may end up liking similar items too, so an item liked by one of them becomes a candidate recommendation for the other.

The process of matrix factorization means finding a low rank approximation of the utility matrix. We want to break down the utility matrix R into two low rank matrices so that we can recreate R by multiplying those two matrices. Mathematically,

    R = U * I

where R is our original rating matrix, U is our user matrix, and I is our item matrix. Assuming the process helps us identify K latent features, our aim is to find two matrices X and Y such that their product (matrix multiplication) approximates R:

    X: a |U| x K matrix (a matrix with dimensions num_users x factors)
    Y: a K x |P| matrix (a matrix with dimensions factors x num_movies)

Figure: Matrix factorization
We can also explain the concept of matrix factorization through the preceding figure: after factorization, we can regenerate the original matrix by multiplying the two matrices together. To make a recommendation to a user, we multiply the corresponding user's row from the first matrix by the item matrix and determine the items from that row with the maximum predicted ratings; those become our recommendations for the user. The first matrix represents the associations between the users and the latent features, while the second matrix takes care of the associations between the items (songs in our case) and the latent features. The figure depicts a typical matrix factorization operation for a movie recommender system, but the intent is to understand the methodology and extend it to build a music recommendation system in this scenario.

Matrix Factorization and Singular Value Decomposition

There are multiple algorithms available for determining the factorization of any matrix. We use one of the simplest, which is singular value decomposition, or SVD. We discussed the mathematics behind SVD earlier in the book; here, we explain how the decomposition provided by SVD can be used as matrix factorization. You may remember that the singular value decomposition of a matrix returns three matrices: U, S, and V. You can follow these steps to determine the factorization of a matrix using the output of an SVD function:

1. Factorize the matrix to obtain the U, S, and V matrices.
2. Reduce the matrix S to its first k components. (The function we are using only provides k dimensions to begin with, so we can skip this step.)
3. Compute the square root of the reduced matrix S_k to obtain the matrix S_k^(1/2).
4. Compute the two resultant matrices U * S_k^(1/2) and S_k^(1/2) * V, as these will serve as our two factorized matrices, as depicted in the matrix factorization figure.

We can then generate the prediction of user i for product j by taking the dot product of the ith row of the first matrix with the jth column of the second matrix. This gives us all the knowledge required to build a matrix factorization based recommendation engine for our data.

Building a Matrix Factorization Based Recommendation Engine

After this discussion of the mechanics of matrix factorization based recommendation engines, let's try to create such a recommendation engine on our data. The first thing we notice is that we have no concept of a "rating" in our data; all we have are the play counts of various songs. This is a well known problem in the case of recommendation engines and is called the "implicit feedback" problem. There are many ways to solve it, but we will look at a very simple and intuitive solution: we will replace the play count with a fractional play count. The logic is that this measures the strength of "likeness" for a song in the range [0, 1]. We can argue about better methods to address this problem, but this is an acceptable solution for our purposes. The following code completes the task.

In []:
    triplet_dataset_sub_song_merged_sum_df = triplet_dataset_sub_song_merged[['user', 'listen_count']] \
        .groupby('user').sum().reset_index()
    triplet_dataset_sub_song_merged_sum_df.rename(columns={'listen_count': 'total_listen_count'}, inplace=True)
    triplet_dataset_sub_song_merged = pd.merge(triplet_dataset_sub_song_merged,
                                               triplet_dataset_sub_song_merged_sum_df)
    triplet_dataset_sub_song_merged['fractional_play_count'] = \
        triplet_dataset_sub_song_merged['listen_count'] / triplet_dataset_sub_song_merged['total_listen_count']
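As a quick, purely illustrative sanity check on this implicit-feedback transformation (not part of the original listing), the fractional play counts for any user should sum to 1:

In []:
    triplet_dataset_sub_song_merged.groupby('user')['fractional_play_count'].sum().head()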
The modified dataframe is shown in the next figure (dataset with implicit feedback).

The next data transformation required is to convert our dataframe into a numpy matrix in the format of a utility matrix. We will convert the dataframe into a sparse matrix, as we have a lot of missing values and sparse matrices are suitable for representing such data. Since we can't use the song IDs and user IDs directly as matrix indices, we convert these identifiers into numerical indices and then use the transformed indices to create our sparse numpy matrix. The following code creates such a matrix.

In []:
    from scipy.sparse import coo_matrix

    small_set = triplet_dataset_sub_song_merged
    user_codes = small_set.user.drop_duplicates().reset_index()
    song_codes = small_set.song.drop_duplicates().reset_index()
    user_codes.rename(columns={'index': 'user_index'}, inplace=True)
    song_codes.rename(columns={'index': 'song_index'}, inplace=True)
    song_codes['so_index_value'] = list(song_codes.index)
    user_codes['us_index_value'] = list(user_codes.index)
    small_set = pd.merge(small_set, song_codes, how='left')
    small_set = pd.merge(small_set, user_codes, how='left')
    mat_candidate = small_set[['us_index_value', 'so_index_value', 'fractional_play_count']]
    data_array = mat_candidate.fractional_play_count.values
    row_array = mat_candidate.us_index_value.values
    col_array = mat_candidate.so_index_value.values
    data_sparse = coo_matrix((data_array, (row_array, col_array)), dtype=float)

In []:
    data_sparse
Out[]:
    <sparse matrix with stored elements in COOrdinate format>

Once we have converted our data into a sparse matrix, we use the svds function provided by the scipy library to break down our utility matrix into three different matrices. We can specify the number of latent factors we want to factorize our data with; you are encouraged to experiment with different numbers of latent factors and observe how the recommendations change as a result. The following code creates the decomposition of our matrix and predicts the recommendations for the same user as in the item similarity recommendation use case. We leverage our compute_svd() function to perform the SVD operation and then use compute_estimated_matrix() for the low rank matrix approximation after factorization. The detailed steps, with function implementations, are as always present in the Jupyter notebook.
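The actual implementations of compute_svd() and compute_estimated_matrix() live in the book's notebook. The following is only a minimal sketch of what such helpers might look like, following the SVD steps listed earlier and assuming scipy's sparse svds routine; the original functions handle additional details (such as masking songs the user has already played) that are omitted here:

In []:
    import numpy as np
    from scipy.sparse.linalg import svds

    def compute_svd_sketch(urm, K):
        # factorize the utility matrix and keep K latent factors
        U, s, Vt = svds(urm.tocsc(), K)
        # build the two factor matrices U * Sk^(1/2) and Sk^(1/2) * Vt
        sk_sqrt = np.diag(np.sqrt(s))
        return U.dot(sk_sqrt), sk_sqrt.dot(Vt)

    def estimate_ratings_sketch(left, right, user_index):
        # predicted scores for one user = that user's row times the item factor matrix
        return left[user_index, :].dot(right)

Ranking the resulting scores for a user (and dropping the songs that user has already played) yields the matrix factorization based recommendations.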
In []:
    K = ...  # number of latent factors

    # initialize the sample user-rating matrix
    urm = data_sparse
    max_pid = urm.shape[1]
    max_uid = urm.shape[0]

    # compute the SVD of the input user ratings matrix
    U, S, Vt = compute_svd(urm, K)
    uTest = [...]  # index of the test user

    # get the estimated ratings for the test user
    print("Predicted ratings:")
    uTest_recommended_items = compute_estimated_matrix(urm, U, S, Vt, uTest, K, True)
    for user in uTest:
        print("Recommendation for user with user id {}".format(user))
        rank_value = 1
        for i in uTest_recommended_items[user, :]:
            song_details = small_set[small_set.so_index_value == i] \
                .drop_duplicates('so_index_value')[['title', 'artist_name']]
            print("The number {} recommended song is {} BY {}".format(
                rank_value, list(song_details['title'])[0], list(song_details['artist_name'])[0]))
            rank_value += 1

The recommendations made by the matrix factorization based system are shown in the next figure. If you refer to the Jupyter notebook for the code, you will observe that the user is the same one for whom we performed the item similarity based recommendations earlier. Refer to the notebook for further details on how the different functions used here have been implemented. Note how the recommendations have changed from the previous two systems; it looks like this user might like listening to some Coldplay and Radiohead. Definitely interesting!

Figure: Recommendations using the matrix factorization based recommender

We used one of the simplest matrix factorization algorithms for our use case; you can try out other, more sophisticated implementations of the matrix factorization routine, which will lead to a different recommendation system. Another topic that we have glossed over a bit is the conversion of song play counts into a measure of "implicit feedback". The scheme we chose is acceptable but far from perfect, and there is a lot of literature that discusses how to handle this issue. We encourage you to find different ways of handling it and to experiment with various measures.
A Note on Recommendation Engine Libraries

You might have noticed that we did not use any readily available packages for building our recommendation system. As for every possible task, Python has multiple libraries available for building recommendation engines too, but we have refrained from using them because we wanted to give you a taste of what goes on behind a recommendation engine. Most of these libraries take the sparse matrix format as input data and allow you to develop recommendation engines on top of it. We encourage you to continue your experimentation with at least one of those libraries, as it will give you an idea of the different possible implementations of the same problem and the difference those implementations can make. Some libraries that you can use for such exploration include scikit-surprise, lightfm, crab, rec_sys, etc.

Summary

In this chapter, we learned about recommendation systems, which are an important and widely known machine learning application. We discovered a very popular data source that allowed us to peek inside a small section of online music listeners. We then started on the learning curve of building different types of recommendation engines. We began with a simple vanilla version based on the popularity of songs. After that, we upped the complexity quotient of our recommendation engines by developing an item similarity based recommendation engine; we urge you to extend that engine using the metadata provided as part of the Million Song Database. Finally, we concluded the chapter by building a recommendation engine that takes an entirely different point of view: we explored some basics of matrix factorization and learned how a very basic factorization method like singular value decomposition can be used for developing recommendations. We ended the chapter by mentioning the different libraries that you can use to develop sophisticated engines from the dataset that we created.

We would like to close this chapter by further underlining the importance of recommendation engines, especially in the context of online content delivery. An uncorroborated fact often associated with recommendation systems is that a large share of Netflix movie viewings come through recommendations. According to Wikipedia, Netflix has an annual revenue of several billion dollars; even if only half of that comes from movies, and even if the quoted share of viewings is smaller than claimed, a substantial portion of Netflix's revenue can be attributed to recommendation engines. Although we will never be able to verify these figures, they are definitely a strong argument for recommendation engines and the value they can generate.
Forecasting Stock and Commodity Prices

In the chapters so far, we covered a variety of concepts and solved diverse real-world problems. In this chapter, we will dive into forecasting/prediction use cases. Predictive analytics or modeling involves concepts from data mining, advanced statistics, machine learning, and so on to model historical data in order to forecast future events. Predictive modeling has use cases across domains such as financial services, healthcare, telecommunications, etc.

A number of techniques have been developed over the years to understand temporal data and model patterns in order to score future events and behaviors. Time series analysis forms the descriptive aspect of understanding such data, and that understanding helps in modeling and forecasting it. Traditional approaches like regression analysis and the Box-Jenkins methodologies have been deeply studied and applied over the years. More recently, improvements in computation and in machine learning algorithms have seen machine learning techniques, like neural networks or, to be more specific, deep learning, making headway in forecasting use cases with some amazing results.

This chapter discusses forecasting using stock and commodity price datasets. We will utilize traditional time series models as well as deep learning models, like recurrent neural networks, to forecast the prices. Through this chapter we will cover the following topics:

- A brief overview of time series analysis
- Forecasting commodity prices using traditional approaches like ARIMA
- Forecasting stock prices with newer deep learning approaches like RNNs and LSTMs

The code samples, Jupyter notebooks, and sample datasets for this chapter are available in the GitHub repository for this book, under the directory/folder for this chapter.

Time Series Data and Analysis

A time series is a sequence of observed values in a time-ordered manner, with the observations recorded at equally spaced time intervals. Time series data is available and utilized in the fields of statistics, economics, finance, weather modeling, pattern recognition, and many more.

Time series analysis is the study of the underlying structure and forces that produced the observations. It provides descriptive frameworks to analyze characteristics of the data and other meaningful statistics, and it also provides techniques to then utilize this understanding to fit a model to forecast, monitor, and control. There is another school of thought that separates the descriptive and modeling components of time series: here, time series analysis is concerned with only the descriptive analysis of time series data, to understand its various components and underlying structure, while the modeling aspect that utilizes a time series
for prediction/forecasting use cases is termed time series forecasting. In general, though, both schools of thought utilize the same set of tools and techniques; it is more about how the concepts are grouped for structured learning and application.

## Note  Time series and its analysis is a complete field of study on its own, and we merely discuss certain concepts and techniques for our use cases here. This is by no means a complete guide on time series and its analysis.

Time series can be analyzed both in the frequency and the time domain. Frequency domain analysis includes spectral and wavelet analysis techniques, while time domain analysis includes auto- and cross-correlation analysis. In this chapter, we primarily focus on time series forecasting in the time domain, with a brief discussion of the descriptive characteristics of time series data. Moreover, we concentrate on univariate time series analysis (time being an implicit variable).

To better understand the concepts related to time series, we utilize a sample dataset. The following snippet loads web site visit data with a daily frequency using pandas. You can refer to the Jupyter notebook notebook_getting_started_time_series.ipynb for the necessary code snippets and examples.

In []:
    import pandas as pd

    # load data
    input_df = pd.read_csv(r'website-traffic.csv')
    input_df['date_of_visit'] = pd.to_datetime(input_df.MonthDay.str.cat(input_df.Year.astype(str), sep=' '))

The snippet first creates a new attribute called date_of_visit using a combination of the day, month, and year values available in the base dataframe. Since the dataset is about web site visits per day, the variable of interest is the visit count for the day, with the time dimension, i.e. date_of_visit, being the implicit one. The output plot of visits per day is shown in the next figure.
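The plotting code itself is not reproduced in the text; a minimal sketch, assuming the input_df built above and matplotlib, could be:

In []:
    import matplotlib.pyplot as plt

    # visits per day over time
    input_df.plot(x='date_of_visit', y='visits', figsize=(12, 4), legend=False)
    plt.ylabel('Visits')
    plt.show()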
Figure: Web site visits per day

Time Series Components

The time series at hand is data related to web site visits per day for a given web site. As mentioned earlier, time series analysis deals with understanding the underlying structure and forces that result in the series as we see it. Let's now try to deconstruct the various components that make up this time series. A time series is said to be comprised of the following three major components:

- Seasonality: the periodic fluctuations in the observed data, for example weather patterns or sale patterns.
- Trend: the increasing or decreasing behavior of the series with time, for example population growth patterns.
- Residual: the remaining signal after removing the seasonality and trend signals. It can be further decomposed to remove the noise component as well.

It is interesting to note that most real-world time series have a combination of these components, or all of them. Yet it is mostly the noise that is always apparently present, with trend and seasonality being optional in certain cases. In the following snippet, we utilize statsmodels to decompose our web site visit time series into its three constituents and then plot them.

In []:
    from statsmodels.tsa.seasonal import seasonal_decompose

    # extract visits as a series from the dataframe
    ts_visits = pd.Series(input_df.visits.values,
                          index=pd.date_range(input_df.date_of_visit.min(),
                                              input_df.date_of_visit.max(),
                                              freq='D'))

    decompose = seasonal_decompose(ts_visits.interpolate(), freq=...)  # seasonal period
    decompose.plot()

We first create a pandas Series object, with additional care taken to set the frequency of the time series index. It is important to note that statsmodels has a number of time series modeling modules available, and they rely on the underlying data structures (such as pandas, numpy, etc.) to specify the frequency of the time series. In this case, since the data is at a daily level, we set the frequency of the ts_visits object to 'D', denoting daily frequency. We then simply use the seasonal_decompose() function from statsmodels to get the required constituents. The decomposed series is shown in the next figure.

Figure: Web site visit time series and its constituent signals

It is apparent from the figure that the time series at hand has both upward and downward trends: it shows a gradually increasing trend until October, after which it starts a downward movement. The series certainly has a monthly periodicity, or seasonality, to it. The remaining signal is what is marked as residual in the plot.
Smoothing Techniques

As discussed previously, preprocessing the raw data depends upon the data as well as the use case requirements, yet there are certain standard preprocessing techniques for each type of data. Unlike the datasets we have seen so far, where we consider each observation to be independent of other (past or future) observations, time series have an inherent dependency on historical observations. As seen from the decomposition of the web site visits series, there are multiple factors impacting each observation. It is an inherent property of time series data to have random variation in it, apart from its other constituents. To better understand, model, and utilize a time series for prediction related tasks, we usually perform a preprocessing step better termed smoothing. Smoothing helps reduce the effect of random variation and helps clearly reveal the seasonality, trend, and residual components of the series. There are various methods to smooth out a time series; they are broadly categorized as follows.

Moving Average

Instead of taking an average of the complete time series (which we do in the case of non-temporal data) to summarize it, a moving average makes use of a rolling windowed approach: we compute the mean of successive smaller windows of past data to smooth out the impact of random variation. The general formula for the moving average calculation is

    MA_t = (x_t + x_{t-1} + x_{t-2} + ... + x_{t-n+1}) / n

where MA_t is the moving average for time period t; x_t, x_{t-1}, and so on denote the observed values at the corresponding time periods; and n is the window size. For example, the following snippet calculates the moving average of visits with a window size of 3.

In []:
    # moving average
    input_df['moving_average'] = input_df['visits'].rolling(window=3, center=False).mean()

    print(input_df[['visits', 'moving_average']].head())

    plt.plot(input_df.visits, '-', color='black', alpha=...)
    plt.plot(input_df.moving_average, color=...)
    plt.title('Website Visit and Moving Average Smoothening')
    plt.legend()
    plt.show()

The moving average calculated using a window size of 3 has the following results. It should be pretty clear that, for a window size of 3, the first two observations do not have any moving average available, hence the NaN values.

Out[]:
       visits  moving_average
    0     ...             NaN
    1     ...             NaN
    ...
Figure: Smoothing using moving average

The plot in the figure shows the smoothed visits time series. The smoothed series captures the overall structure of the original series while reducing the random variation in it. You are encouraged to explore and experiment with different window sizes and compare the results. Depending on the use case and data at hand, we also try different variations of the moving average, like the centered moving average, the double moving average, and so on, apart from different window sizes. We will utilize some of these concepts when we deal with the actual use cases in the coming sections of the chapter.

Exponential Smoothing

Moving average based smoothing is effective, yet it is a pretty simple preprocessing technique. In the case of the moving average, all past observations in the window are given equal weight. Unlike that method, exponential smoothing techniques apply exponentially decreasing weights to older observations. In simple words, exponential smoothing methods give more weight to recent observations as compared to older ones. Depending on the level of smoothing required, there may be one or more smoothing parameters to set.
Exponential smoothing is also called the exponentially weighted moving average, or EWMA for short. Single exponential smoothing is one of the simplest methods to get started with. The general formula is given as

    E_t = alpha * y_{t-1} + (1 - alpha) * E_{t-1}

where E_t is the t-th smoothed observation, y_{t-1} is the actual observed value at the (t-1)-th instance, and alpha is the smoothing constant between 0 and 1. There are different methods to bootstrap the initial smoothed value (the time period from which smoothing begins); it can be done by setting it to the first observation, to an average of the first few time periods, and so on. Also, the value of alpha determines how much of the past is accounted for: a value closer to 1 dampens out the past observations quickly, while values closer to 0 dampen them out slowly.

The following snippet uses the pandas ewm() function to calculate the smoothed series for visits. The parameter halflife is used to calculate alpha in this case.

In []:
    input_df['ewma'] = input_df['visits'].ewm(halflife=..., ignore_na=False,
                                              min_periods=..., adjust=True).mean()

    plt.plot(input_df.visits, '-', color='black', alpha=...)
    plt.plot(input_df.ewma, color=...)
    plt.title('Website Visit and Exponential Smoothening')
    plt.legend()
    plt.show()

The plot in the next figure shows the EWMA smoothed series along with the original one.

Figure: Smoothing using EWMA
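As a side note, pandas derives the smoothing constant from the halflife parameter (alpha = 1 - exp(-ln 2 / halflife)). A small sketch of the equivalent manual recursion, using the common form S_t = alpha * y_t + (1 - alpha) * S_{t-1} (which corresponds to pandas' adjust=False behavior rather than the adjust=True call above) and an illustrative alpha, might look like this:

In []:
    import numpy as np

    def single_exp_smoothing(values, alpha):
        # bootstrap the first smoothed value with the first observation
        smoothed = [values[0]]
        for y in values[1:]:
            smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
        return np.array(smoothed)

    # e.g. smoothing the visits series with an illustrative alpha
    manual_ewma = single_exp_smoothing(input_df['visits'].values, alpha=0.2)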
In the coming sections, we will apply our understanding of time series, preprocessing techniques, and so on to solve stock and commodity price forecasting problems using different forecasting methods.

Forecasting Gold Price

Gold, the yellow shiny metal, has been the fancy of mankind for ages. From making jewelry to being used as an investment, gold covers a huge spectrum of use cases. Gold, like other metals, is also traded on commodity indexes across the world. For a better understanding of time series in a real-world scenario, we will work with gold prices collected historically and predict their future value. Let's begin by first formally stating the problem statement.

Problem Statement

Metals such as gold have been traded for years across the world. Prices of gold are determined and used for trading the metal on commodity exchanges on a daily basis, using a variety of factors. Using this daily price-level information only, our task is to predict the future price of gold.

Dataset

For any problem, first and foremost is the data. Stock and commodity exchanges do a wonderful job of storing and sharing daily pricing data. For the purpose of this use case, we will utilize gold pricing from Quandl, a platform for financial, economic, and alternative datasets. You can refer to the Jupyter notebook notebook_gold_forecast_arima.ipynb for the necessary code snippets and examples.

To access publicly shared datasets on Quandl, we can use the pandas-datareader library as well as quandl (the library from Quandl itself). For this use case, we will depend upon quandl; kindly install it using pip or conda. The following snippet shows a quick one-liner to get your hands on the gold pricing information.

In []:
    import quandl
    gold_df = quandl.get("BUNDESBANK/BBK01_WT5511", end_date=...)  # Bundesbank daily gold price series

The get() function takes the stock/commodity identifier as its first parameter, followed by the date until which we need the data. Note that not all datasets are public; for some of them, API access must be obtained.

Traditional Approaches

Time series analysis and forecasting have long been studied in detail, and there is a mature and extensive set of modeling techniques available. Out of the many, the following are a few of the most commonly used and explored techniques:

- Simple moving average and exponential smoothing based forecasting
- Holt's and Holt-Winters exponential smoothing based forecasting
- Box-Jenkins methodology (ARMA, ARIMA, S-ARIMA, etc.)
## Note  Causal or cross-sectional forecasting/modeling is where the target variable has a relationship with one or more predictor variables, as in regression models (see the earlier chapters on regression). Time series forecasting is about forecasting variables that change over time. Both these techniques are grouped under quantitative techniques.

As mentioned, there is quite a handful of techniques available, each a deep topic of research and study in itself. For the scope of this section, we will focus upon ARIMA models (from the Box-Jenkins methodology) to forecast gold prices. Before we move ahead and discuss ARIMA, let's look at a few key concepts.

Key Concepts

- Stationarity: One of the key assumptions behind the ARIMA models we will be discussing. Stationarity refers to the property that the mean, variance, and autocorrelation of a time series are time invariant; in other words, mean, variance, and autocorrelation do not change with time. For instance, a time series having an upward (or downward) trend is a clear indicator of non-stationarity, because its mean changes with time (see the web site visit data example in the previous section).

- Differencing: One of the methods of stationarizing a series. Though there can be other transformations, differencing is widely used to stabilize the mean of a time series. We simply compute the difference between consecutive observations to obtain a differenced series, and we can then apply different tests to confirm whether the resulting series is stationary or not. We can also perform second order differencing, seasonal differencing, and so on, depending on the time series at hand.

- Unit root tests: Statistical tests that help us understand whether a given series is stationary or not. The augmented Dickey-Fuller test begins with a null hypothesis of the series being non-stationary, while the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test has a null hypothesis that the series is stationary. We then perform a regression fit to reject or fail to reject the null hypothesis.

ARIMA

The Box-Jenkins methodology consists of a wide range of statistical models that are widely used to model time series for forecasting. In this section, we concentrate on one such model, called ARIMA. ARIMA stands for Auto Regressive Integrated Moving Average model. Sounds pretty complex, right? Let's look at the basics and constituents of this model and then build on our understanding to forecast gold prices.

- Auto regressive, or AR, modeling: a simple linear regression model in which the current observation is regressed upon one or more prior observations. The model is denoted as

      x_t = c + a_1 * x_{t-1} + a_2 * x_{t-2} + ... + a_p * x_{t-p} + e_t
  where x_t is the observation at time t, e_t is the noise term, and a_1 through a_p are the model coefficients. The dependency on prior values is denoted by p, the order of the AR model.

- Moving average, or MA, modeling: again essentially a linear regression model, one that models the impact of the noise/error terms from prior observations on the current one. The model is denoted as

      x_t = m + e_t + b_1 * e_{t-1} + ... + b_q * e_{t-q}

  where m is the series mean, the e_i are the noise terms, and q is the order of the model.

The AR and MA models were known long before the Box-Jenkins methodology was presented; the methodology's contribution was a systematic approach to identifying and applying these models for forecasting.

## Note  Box, Jenkins, and Reinsel presented this methodology in their book titled Time Series Analysis: Forecasting and Control. You are encouraged to go through it for a deeper understanding.

The ARIMA model is a logical progression of, and combination of, the two models: if we combine AR and MA with a differenced series, what we get is called an ARIMA(p, d, q) model, where:

- p is the order of autoregression
- q is the order of the moving average
- d is the order of differencing

Thus, for a stationary time series, ARIMA models combine the autoregressive and moving average concepts to model the behavior of a long running time series and help in forecasting. Let's now apply these concepts to model gold price forecasting.

Modeling

While describing the dataset, we extracted the gold price information using quandl. Let's first plot and see how this time series looks. The following snippet uses pandas to plot it.

In []:
    gold_df.plot(figsize=(...))
    plt.show()
The plot in the figure shows a general upward trend, with a sudden rise in the later years.

Figure: Gold prices over the years

Since stationarity is one of the primary assumptions of ARIMA models, we will utilize the augmented Dickey-Fuller test to check our series for stationarity. The following snippet helps us calculate the AD Fuller test statistics and plot the rolling characteristics of the series.

In []:
    # Dickey-Fuller test for stationarity
    def ad_fuller_test(ts):
        dftest = adfuller(ts, autolag='AIC')
        dfoutput = pd.Series(dftest[0:4],
                             index=['Test Statistic',
                                    'p-value',
                                    '#Lags Used',
                                    'Number of Observations Used'])
        for key, value in dftest[4].items():
            dfoutput['Critical Value (%s)' % key] = value
        print(dfoutput)

    # plot rolling stats for a time series
    def plot_rolling_stats(ts):
        rolling_mean = ts.rolling(window=..., center=False).mean()
        rolling_std = ts.rolling(window=..., center=False).std()

        # plot rolling statistics
        orig = plt.plot(ts, color='blue', label='Original')
        mean = plt.plot(rolling_mean, color='red', label='Rolling Mean')
        std = plt.plot(rolling_std, color='black', label='Rolling Std')
        plt.legend(loc='best')
        plt.title('Rolling Mean & Standard Deviation')
        plt.show(block=False)
If the test statistic of the AD Fuller test is less than the critical value(s), we reject the null hypothesis of non-stationarity. The AD Fuller test is available as part of the statsmodels library. Since it is quite evident that our original series of gold prices is non-stationary, we will perform a log transformation and see if we are able to obtain stationarity. The following snippet uses the rolling stats plot and the AD Fuller test to check this.

In []:
    log_series = np.log(gold_df.Value)

    ad_fuller_test(log_series)
    plot_rolling_stats(log_series)

The test statistic is greater than any of the critical values, hence we fail to reject the null hypothesis; i.e., the series is non-stationary even after the log transformation. The output and the plot in the next figure confirm this.

    Test Statistic                    ...
    p-value                           ...
    #Lags Used                        ...
    Number of Observations Used       ...
    Critical Value (1%)               ...
    Critical Value (5%)               ...
    Critical Value (10%)              ...
    dtype: float64

Figure: Rolling mean and standard deviation plot for the log transformed gold price series
The plot points to a time varying mean of the series, and hence non-stationarity. As discussed in the key concepts, differencing a series helps in achieving stationarity. In the following snippet, we prepare a first order differenced log series and perform the same tests.

In []:
    log_series_shift = log_series - log_series.shift()
    log_series_shift = log_series_shift[~np.isnan(log_series_shift)]

    ad_fuller_test(log_series_shift)
    plot_rolling_stats(log_series_shift)

The test statistic is now lower than even the most stringent critical value, thus we reject the null hypothesis of the AD Fuller test. The following are the test results.

    Test Statistic                    ...
    p-value                           ...
    #Lags Used                        ...
    Number of Observations Used       ...
    Critical Value (1%)               ...
    Critical Value (5%)               ...
    Critical Value (10%)              ...
    dtype: float64

Figure: Rolling mean and standard deviation plot for the log differenced gold price series

This exercise points us to the fact that we need to use the log differenced series for ARIMA to model the dataset at hand (see the figure). Yet we still need to figure out the orders of the autoregression and moving average components, i.e. p and q.
Building an ARIMA model requires more experience and intuition than many other models. Identifying the p, d, and q parameters of the model can be done using different methods, though arriving at the right set of numbers depends both on the requirements and on experience.

One of the commonly used methods is plotting the ACF and PACF to determine the p and q values. The ACF, or auto correlation function, plot and the PACF, or partial auto correlation function, plot help us narrow down the search space for p and q, with a few caveats: there are certain rules, or heuristics, developed over the years to best utilize these plots, and thus they do not guarantee the best possible values.

The ACF plot helps us understand the correlation of an observation with its lags (previous values). The ACF plot is used to determine the MA order, i.e. q: the lag at which the ACF drops off is the order of the MA model. Along the same lines, the PACF points toward the correlation between an observation and a specific lagged value, excluding the effect of other lags. The lag at which the PACF drops off points toward the order of the AR model, i.e. the p in ARIMA(p, d, q).

## Note  Further details on ACF and PACF are available in the NIST/SEMATECH e-Handbook of Statistical Methods.

Let's again utilize statsmodels to generate the ACF and PACF plots for our series and try to determine the p and q values. The following snippet uses the log differenced series to generate the required plots.

In []:
    fig = plt.figure(figsize=(...))
    ax1 = fig.add_subplot(211)
    fig = sm.graphics.tsa.plot_acf(log_series_shift.squeeze(), lags=..., ax=ax1)
    ax2 = fig.add_subplot(212)
    fig = sm.graphics.tsa.plot_pacf(log_series_shift, lags=..., ax=ax2)

The output plots in the next figure show a sudden drop at the very first lags for both the ACF and the PACF, thus pointing toward small values of q and p, respectively.
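Besides the plots, the raw autocorrelation values can also be inspected numerically; a small sketch, assuming statsmodels' acf and pacf helpers and an arbitrary number of lags, might be:

In []:
    from statsmodels.tsa.stattools import acf, pacf

    acf_vals = acf(log_series_shift, nlags=10)
    pacf_vals = pacf(log_series_shift, nlags=10)
    print(acf_vals[:5], pacf_vals[:5])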
Figure: ACF and PACF plots

The ACF and PACF plots also help us understand whether a series is stationary or not: if a series has gradually decreasing values for the ACF and PACF, it points toward non-stationarity in the series.

## Note  Identifying the p, d, and q values for any ARIMA model is as much science as it is art.

Another method of deriving the p, d, and q parameters is to perform a grid search of the parameter space. This is more in tune with the machine learning way of hyperparameter tuning. Though statsmodels does not provide such a utility (for obvious reasons), we can write our own utility to identify the best fitting model. Also, pretty much like any other machine learning/data science use case, we need to split our dataset into train and test sets; we utilize scikit-learn's TimeSeriesSplit utility to help us get proper training and testing sets.

We write a utility function, arima_gridsearch_cv(), to grid search and cross validate the results using the gold prices at hand. The function is available in the arima_utils.py module for reference. The following snippet performs a five-fold cross validation with auto-ARIMA to find the best fitting model.

In []:
    results_dict = arima_gridsearch_cv(gold_df.log_series, cv_splits=5)

Note that we are passing the log transformed series as input to the arima_gridsearch_cv() function. As we saw earlier, it was the log differenced series that helped us achieve stationarity, hence we use the log transformation as our starting point and fit ARIMA models with d set to 1. The function call generates detailed output for each train-test split (we have five of them lined up), with each iteration performing a grid search over p, d, and q. The next figure shows the output of the first iteration, where the training set included only a small number of observations.
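The arima_gridsearch_cv() implementation itself is not shown in the text. A minimal sketch of the core grid search idea, assuming a recent statsmodels (the ARIMA class under statsmodels.tsa.arima.model) and selection by AIC, might look like the following; the book's utility additionally handles the TimeSeriesSplit folds, plotting, and reporting:

In []:
    import itertools
    from statsmodels.tsa.arima.model import ARIMA

    def arima_grid_search_sketch(train_series, p_values=(0, 1, 2), d_values=(1,), q_values=(0, 1, 2)):
        best_aic, best_order, best_model = float('inf'), None, None
        for p, d, q in itertools.product(p_values, d_values, q_values):
            try:
                result = ARIMA(train_series, order=(p, d, q)).fit()
            except Exception:
                continue  # some (p, d, q) combinations may fail to converge
            if result.aic < best_aic:
                best_aic, best_order, best_model = result.aic, (p, d, q), result
        return best_order, best_aic, best_model

Running this per training fold and comparing the selected orders across folds mirrors the cross-validated grid search described above.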
Figure: Auto ARIMA output

Similar to our findings using the ACF-PACF plots, auto ARIMA suggests a low-order ARIMA model as the best fit, based upon the AIC criteria. Note that AIC, or the Akaike Information Criterion, measures goodness of fit and parsimony together. It is a relative metric and does not point toward the quality of a model in an absolute sense; i.e., if all the models being compared are poor, AIC will not be able to point that out. Thus, AIC should be used as a heuristic: a low value points toward a better fitting model. The summary generated by the best fitting ARIMA model is shown in the next figure.

Figure: Summary of the fitted ARIMA model

The summary is quite self-explanatory. The top section shows details about the training sample, AIC, and other metrics. The middle section describes the coefficients of the fitted model; for iteration 1, both the AR and MA coefficients are statistically significant. The forecast plot for iteration 1 with the fitted ARIMA model is shown in the next figure.
Figure: The forecast plot for the fitted ARIMA model

As evident from the plot, the model captures the overall upward trend, though it misses out on the sudden jump in values. Yet it seems to give a pretty nice idea of what can be achieved using this methodology. The arima_gridsearch_cv() function produces similar statistics and plots for the five different train-test splits. We observe that the low-order ARIMA model provides a decent enough fit, although we can define additional performance and error criteria to select a particular model.

In this case, we generated forecasts for time periods for which we already had data; this helps us visualize and understand how the model is performing, and is also called back testing. Out of sample forecasting is also supported by statsmodels through its forecast() method. Also, the forecast plot shows values in the transformed scale, i.e. the log scale; the inverse transformation can easily be applied to get the data back in its original form. You should also note that commodity prices are impacted by a whole lot of other factors, like global demand and economic conditions such as recessions. Hence, what we showcased here was in certain ways a naive modeling of a complex process; we would need more features and attributes to produce sophisticated forecasts.

Stock Price Prediction

Stocks and financial instrument trading is a lucrative proposition. Stock markets across the world facilitate such trades, and thus wealth exchanges hands. Stock prices move up and down all the time, and having the ability to predict their movement has immense potential to make one rich. Stock price prediction has kept people interested for a long time. There are hypotheses, like the efficient market hypothesis, which say that it is almost impossible to beat the market consistently, and there are others which disagree.

There are a number of known approaches, and ongoing research, to find the magic formula to make you rich. One of the traditional methods is time series forecasting, which we saw in the previous section. Fundamental analysis is another method, where numerous performance ratios are analyzed to assess a given stock. On the emerging front, there are neural networks, genetic algorithms, and ensembling techniques.

## Note  Stock price prediction (along with the gold price prediction in the previous section) is an attempt to explain concepts and techniques for modeling real-world data and use cases. It is by no means an extensive guide to algorithmic trading. Algorithmic trading is a complete field of study on its own, and you may explore it further; knowledge from this chapter alone would not be sufficient to perform trading of any sort, and that is beyond both the scope and intent of this book.
In this section, we learn how to apply recurrent neural networks (RNNs) to the problem of stock price prediction and understand the intricacies involved.

Problem Statement

Stock price prediction is the task of forecasting the future value of a given stock. Given the historical daily close price for the S&P 500 index, prepare and compare forecasting solutions. The S&P 500, or Standard and Poor's 500, index is an index comprising 500 stocks from different sectors of the US economy and is an indicator of US equities. Other such indices are the Dow, the NIFTY, the Nikkei, etc. For the purpose of understanding, we are utilizing the S&P 500 index; the concepts and techniques can be applied to other stocks as well.

Dataset

Similar to the gold price dataset, historical stock price information is also publicly available. For our current use case, we will utilize the pandas_datareader library to get the required S&P 500 index history using the Yahoo Finance databases. We will utilize the closing price information from the dataset, although other information, such as the opening price, adjusted closing price, etc., is also available. We prepare a utility function, get_raw_data(), to extract the required information as a pandas dataframe. The function takes the index ticker name as input; for the S&P 500 index, the ticker name is ^GSPC. The following snippet uses the utility function to get the required data.

In []:
    sp_df = get_raw_data('^GSPC')
    sp_close_series = sp_df.Close
    sp_close_series.plot()

The plot of the closing price is shown in the next figure.

Figure: The S&P 500 index
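get_raw_data() is a small wrapper the authors keep in their utilities module. A minimal sketch of what such a helper might look like, assuming pandas_datareader's Yahoo Finance source and an arbitrary start date (note that Yahoo's API has changed over time, so newer pandas_datareader versions may require workarounds):

In []:
    from datetime import datetime
    from pandas_datareader import data as pdr

    def get_raw_data_sketch(ticker, start=datetime(1990, 1, 1), end=datetime.today()):
        # daily OHLC data for the given ticker from Yahoo Finance
        return pdr.DataReader(ticker, 'yahoo', start, end)

    # usage: sp_df = get_raw_data_sketch('^GSPC')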
The plot shows the closing price information available, going back several decades up until recently. Kindly note that the same information is also available through Quandl (which we used in the previous section to get gold price information); you may use that source to get this data as well.

Recurrent Neural Networks: LSTM

Artificial neural networks are being employed to solve a number of use cases in a variety of domains. Recurrent neural networks are a class of neural networks with the capability of modeling sequential data. LSTMs, or Long Short-Term Memory networks, are an RNN architecture useful for modeling arbitrary intervals of information. RNNs, and particularly LSTMs, were discussed earlier in this book, where a practical use case explored analyzing textual data from movie reviews for sentiment analysis.

Figure: Basic structure of RNN and LSTM units (source: Christopher Olah's blog, colah.github.io)

As a quick refresher, the preceding figure points toward the general architecture of an RNN along with the internals of a typical LSTM unit. An LSTM comprises three major gates: the input, output, and forget gates. These gates work in tandem to learn and store long and short term sequence-related information. For more details, refer to the earlier coverage of advanced supervised deep learning models.
For stock price prediction, we will utilize LSTM units to implement an RNN model. RNNs are typically useful in sequence modeling applications, some of which are as follows:

- Sequence classification tasks, like sentiment analysis of a given corpus (covered in detail earlier in this book)
- Sequence tagging tasks, like POS tagging of a given sentence
- Sequence mapping tasks, like speech recognition

Unlike traditional forecasting approaches (like ARIMA), which require preprocessing of time series information to conform to stationarity and other assumptions, along with parameter identification (p, d, q for instance), neural networks (particularly RNNs) impose far fewer restrictions. Since stock price information is also time series data, we will explore the application of LSTMs to this use case and generate forecasts. There are a number of ways this problem can be modeled to forecast values; the following sections cover two such approaches.

Regression Modeling

We introduced regression modeling earlier in this book to analyze bike demand based on certain predictor variables. In essence, regression modeling refers to the process of investigating the relationship between dependent and independent variables. To model our current use case as a regression problem, we state that the stock price at timestamp t+1 (the dependent variable) is a function of the stock prices at timestamps t, t-1, t-2, ..., t-n, where n is the past window of stock prices. Now that we have a framework defined for how we would model our time series, we need to transform our time series data into a windowed form, as illustrated in the following figure.

Figure: Transformation of the stock price time series into windowed format

The windowed transformation outlined in the figure uses a window size of four: the value at time t+1 is forecast using the past four values. Since we have many years of data for the S&P 500 index, we apply this windowed transformation in a rolling fashion and create multiple such sequences. Thus, if we have a time series of length N and a window size of n, there would be N-n-1 total windows generated, as the toy sketch that follows illustrates.
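The following toy sketch is illustrative only (the book's actual utility function appears shortly); it builds input windows and one-step-ahead targets from a small stand-in series.

import numpy as np

series = np.arange(10, 20)            # stand-in for a closing price series
window = 4
X, y = [], []
for i in range(len(series) - window):
    X.append(series[i:i + window])    # the past `window` values
    y.append(series[i + window])      # the value to forecast
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)               # (6, 4) (6,)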
Figure: Rolling/sliding windows from the original time series

For the hands-on examples in this section, you can refer to the Jupyter notebook notebook_stock_prediction_regression_modeling_lstm.ipynb for the necessary code snippets and examples.

For using LSTMs to model our time series, we need to apply one more level of transformation to be able to input our data. LSTMs accept tensors as input, so we transform each of the windows (or sequences) into the (N, W, F) format. Here, N is the number of samples or windows from the original time series, W is the size of each window or the number of historical time steps, and F is the number of features per time step. In our case, as we are only using the closing price, F is equal to 1, while N and W are configurable. The following function performs the windowing and tensor transformations using pandas and numpy.

def get_reg_train_test(timeseries, sequence_length=51,
                       train_size=0.9, roll_mean_window=5,
                       normalize=True, scale=False):
    # smoothen out series
    if roll_mean_window:
        timeseries = timeseries.rolling(roll_mean_window).mean().dropna()

    # create windows
    result = []
    for index in range(len(timeseries) - sequence_length):
        result.append(timeseries[index: index + sequence_length])

    # normalize data as variation of 0th index
    if normalize:
        normalised_data = []
        for window in result:
            normalised_window = [((float(p) / float(window[0])) - 1) for p in window]
            normalised_data.append(normalised_window)
        result = normalised_data

    # identify train-test splits
    result = np.array(result)
    row = round(train_size * result.shape[0])

    # split train and test sets
    train = result[:int(row), :]
    test = result[int(row):, :]
    # scale data in 0-1 range
    scaler = None
    if scale:
        scaler = MinMaxScaler(feature_range=(0, 1))
        train = scaler.fit_transform(train)
        test = scaler.transform(test)

    # split independent and dependent variables
    x_train = train[:, :-1]
    y_train = train[:, -1]
    x_test = test[:, :-1]
    y_test = test[:, -1]

    # transforms for LSTM input
    x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
    x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))

    return x_train, y_train, x_test, y_test, scaler

The function get_reg_train_test() also performs a number of other optional preprocessing steps. It allows us to smoothen the time series using a rolling mean before the windowing is applied. We can also normalize the data, as well as scale it, based on requirements. Neural networks are sensitive to input values, and it is generally advised to scale inputs before training the network. For this use case, we will utilize normalization of the time series wherein, for each window, every time step is the percentage change from the first value in that window (we could also use scaling, or both, and repeat the process).

For our case, we begin with a window size of six days (you can experiment with smaller or larger windows and observe the difference). The following snippet uses the get_reg_train_test() function with normalization set to True.

In [ ]: window = 6
        pred_length = int(window / 2)
        x_train, y_train, x_test, y_test, scaler = get_reg_train_test(sp_close_series,
                                                                      sequence_length=window + 1,
                                                                      roll_mean_window=None,
                                                                      normalize=True,
                                                                      scale=False)

This snippet creates a seven-day window that is comprised of six days of historical data (x_train) and a one-day forecast (y_train). The shapes of the train and test variables are as follows.

In [ ]: print("x_train shape={}".format(x_train.shape))
        print("y_train shape={}".format(y_train.shape))
        print("x_test shape={}".format(x_test.shape))
        print("y_test shape={}".format(y_test.shape))
x_train shape=(..., 6, 1)
y_train shape=(...,)
x_test shape=(..., 6, 1)
y_test shape=(...,)

The x_train and x_test tensors conform to the (N, W, F) format we discussed earlier, which is required as input to our RNN. Each sequence in the training set has six time steps and one value to forecast; the test set is shaped similarly. Now that we have our datasets preprocessed and ready, we build an RNN network using Keras. The Keras framework provides high-level abstractions to work with neural networks over Theano and TensorFlow backends. The following snippet showcases the model prepared using the get_reg_model() function.

In [ ]: lstm_model = get_reg_model(layer_units=[..., ...],
                                   window_size=window)

The generated LSTM model architecture has two hidden LSTM layers stacked over each other (with the unit counts passed through layer_units). The output layer is a dense layer with a linear activation function, and we use mean squared error as the loss function to optimize upon. Since we are stacking LSTM layers, we need to set return_sequences to True in order for the subsequent layer to get the required values. As is evident, Keras abstracts most of the heavy lifting and makes it pretty intuitive to build even complex architectures with just a few lines of code.

The next step is to train our LSTM network. We train with early stopping, a small batch size, and a validation split; the following snippet uses the fit() function to train the model.

In [ ]: # use early stopping to avoid overfitting
        callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss',
                                                   patience=2,
                                                   verbose=0)]
        lstm_model.fit(x_train, y_train,
                       epochs=..., batch_size=..., verbose=1,
                       validation_split=..., callbacks=callbacks)

The model generates information regarding training and validation loss for every epoch it runs. The early stopping callback enables us to stop the training if no further improvement is observed for two consecutive epochs. We start with a small batch size; you may experiment with larger batch sizes and observe the difference.

Once the model is fit, the next step is to forecast using the predict() function. Since we have modeled this as a regression problem with a fixed window size, we generate forecasts for every sequence. To do so, we write another utility function called predict_reg_multiple(). This function takes the LSTM model, the windowed dataset, and the window and prediction lengths as input parameters to return a list of predictions for every input window. The predict_reg_multiple() function works as follows:

- For every sequence in the list of windowed sequences, repeat steps a-c:
  a. Use Keras' predict() function to generate one output value.
  b. Append this output value to the end of the input sequence and remove the first value to maintain the window size.
  c. Repeat this process (steps a and b) until the required prediction length is achieved.

The function thus utilizes predicted values to forecast subsequent ones. Sketches of both utilities follow for reference.
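First, the get_reg_model() utility used above lives in lstm_utils.py; a minimal sketch consistent with the stacked-LSTM architecture just described is shown here. The optimizer shown is an assumption for illustration only.

from keras.models import Sequential
from keras.layers import LSTM, Dense

def get_reg_model(layer_units, window_size):
    model = Sequential()
    # the first LSTM layer returns sequences so the stacked LSTM can consume them
    model.add(LSTM(layer_units[0],
                   input_shape=(window_size, 1),
                   return_sequences=True))
    model.add(LSTM(layer_units[1], return_sequences=False))
    # single linear output for the one-step-ahead forecast
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mean_squared_error', optimizer='rmsprop')
    return model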
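Likewise, the following sketch is one plausible implementation of the predict_reg_multiple() steps outlined above (stepping through the test windows in blocks of prediction_len forecasts); the exact implementation is in lstm_utils.py.

import numpy as np

def predict_reg_multiple(model, windowed_data, window_size, prediction_len):
    prediction_seqs = []
    # step through the test windows, one block of `prediction_len` forecasts at a time
    for i in range(int(len(windowed_data) / prediction_len)):
        curr_window = windowed_data[i * prediction_len]
        predicted = []
        for j in range(prediction_len):
            # (a) forecast one value from the current window
            pred = model.predict(curr_window[np.newaxis, :, :])[0, 0]
            predicted.append(pred)
            # (b) append the forecast and drop the oldest value to keep the window size fixed
            curr_window = np.insert(curr_window[1:], window_size - 1, pred, axis=0)
        prediction_seqs.append(predicted)
    return prediction_seqs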
The actual function is available in the script lstm_utils.py. The following snippet uses the predict_reg_multiple() function to get predictions on the test set.

In [ ]: test_pred_seqs = predict_reg_multiple(lstm_model,
                                              x_test,
                                              window_size=window,
                                              prediction_len=pred_length)

To analyze the performance, we will calculate the RMSE for the fitted sequence. We use sklearn's metrics module for the same. The following snippet calculates the RMSE score.

In [ ]: test_rmse = math.sqrt(mean_squared_error(y_test[1:],
                                                 np.array(test_pred_seqs).flatten()))
        print('Test Score: %.2f RMSE' % (test_rmse))

Test Score: ... RMSE

The output reports the RMSE on the normalized scale. As an exercise, you may compare the RMSE obtained with different window sizes and prediction lengths and observe the overall model performance. To visualize our predictions, we plot the predictions against the normalized testing data. We use the function plot_reg_results(), which is also available for reference in lstm_utils.py. The following snippet generates the required plot using the same function.

In [ ]: plot_reg_results(test_pred_seqs, y_test, prediction_len=pred_length)

Figure: Forecast plot with LSTM, with a window size of six and a prediction length of three

In the plot, the gray line is the original/true test data (normalized) and the black lines denote the predicted/forecast values in three-day periods. The dotted line is used to explain the overall flow of the predicted series. As is evident, the forecasts are off the mark to some extent from the actual data trends, yet they seem to have some similarity to the actual data.

Before we conclude this section, there are a few important points to keep in mind. LSTMs are vastly powerful units with memory to store and use past information. In our current scenario, we utilized a windowed approach with a stacked LSTM architecture (two LSTM layers in our model). This is also termed a many-to-one architecture, where multiple input values are used to generate a single output. Another important point here is that the window size, along with other hyperparameters of the network (like epochs, batch size, LSTM units, etc.), has an impact on the final results (this is left as an exercise for you to explore). Thus, we should be careful before deploying such models in production.
Sequence Modeling

In the previous section, we modeled our time series like a regression use case. In essence, that formulation, though it utilized a past window to forecast, did not use the time step information as a sequence. In this section, we will solve the same stock prediction problem using LSTMs by modeling it as a sequence. For the hands-on examples in this section, you can refer to the Jupyter notebook notebook_stock_prediction_sequence_modeling_lstm.ipynb for the necessary code snippets and examples.

Recurrent neural networks are naturally suited for sequence modeling tasks like machine translation, speech recognition, and so on. RNNs utilize memory (unlike normal feed forward neural networks) to keep track of context and utilize it to generate outputs. In general, feed forward neural networks assume inputs are independent of each other. This independence may not hold in many scenarios (such as time series data). RNNs apply the same transformations to each element of the sequence, with outcomes being dependent upon previous values.

In the case of our stock price time series, we would like to model it now as a sequence where the value at each time step is a function of previous values. Unlike the regression-like modeling, here we do not divide the time series into windows of fixed sizes; rather, we utilize the LSTMs to learn from the data and determine which past values to utilize for forecasting. To do so, we need to perform certain tweaks to the way we processed our data in the previous case and also to how we built our RNN.

In the previous section, we utilized the (N, W, F) format as input. In the current setting, the format remains the same with the following changes:

- N (number of sequences): This will be set to 1, since we are dealing with only one stock's price information.
- W (length of sequence): This will be set to the total number of days' worth of price information we have with us. Here we use the whole series as one big sequence.
- F (features per timestamp): This is again 1, as we are only dealing with the closing stock value per timestamp.

Talking about the output: in the previous section we had one output for every window/sequence in consideration. While modeling our data as a sequence/time series, we expect our output to be a sequence as well. Thus, the output is also a tensor following the same format as the input tensor.

We write a small utility function get_seq_train_test() to help us scale and generate train and test datasets out of our time series (the split fraction is controlled by train_percent). We then use numpy to reshape our time series into tensors. The following snippet utilizes the get_seq_train_test() function to do the same.

In [ ]: train, test, scaler = get_seq_train_test(sp_close_series,
                                                 scaling=True,
                                                 train_size=train_percent)

        train = np.reshape(train, (1, train.shape[0], 1))
        test = np.reshape(test, (1, test.shape[0], 1))

        train_x = train[:, :-1, :]
        train_y = train[:, 1:, :]

        test_x = test[:, :-1, :]
        test_y = test[:, 1:, :]

        print("Data Split Complete")
        print("train_x shape={}".format(train_x.shape))
print("train_y shape={}format(train_y shape)print("test_x shape={}format(test_x shape)print("test_y shape={}format(test_y shape)data split complete train_x shape=( train_y shape=( test_x shape=( test_y shape=( having prepared the datasetslet we'll now move onto setting up the rnn network since we are planning on generating sequence as output as opposed to single output in the previous casewe need to tweak our network architecture the requirement in this case is to apply similar transformations/processing for every time step and be able to get output for every input timestamp rather than waiting for the whole sequence to be processed to enable such scenarioskeras provides wrapper over dense layers called timedistributed this wrapper applies the same task to every time step and provides hooks to get output after each such time step we use timedistributed wrapper over dense layer to get output from each of the time steps being processed the following snippet showcases get_seq_model(function to generate the required model def get_seq_model(hidden_units= ,input_shape=( , ),verbose=false)create and fit the lstm network model sequential(input shape timesteps*features model add(lstm(input_shape=input_shapeunits hidden_unitsreturn_sequences=true )timedistributeddense uses the processing for all time steps model add(timedistributed(dense( ))start time time(model compile(loss="mse"optimizer="rmsprop"if verboseprint("compilation time "time time(startprint(model summary()return model this function returns single hidden layer rnn network with four lstm units and timedistributed dense output layer we again use mean squared error as our loss function ##note timedistributed is powerful yet tricky utility available through keras you may explore more on this at com/questions/ /the-difference-between-dense-and-timedistributeddense-of-keras
We now have our dataset preprocessed and split into train and test, along with a model object from the function get_seq_model(). The next step is to simply train the model using the fit() function. While modeling stock price information as a sequence, we are assuming the whole time series to be one big sequence. Hence, while training the model, we set the batch size to 1, as there is only one stock to train on in this case. The following snippet gets the model object and then trains it using the fit() function.

In [ ]: # get the model
        seq_lstm_model = get_seq_model(input_shape=(train_x.shape[1], 1),
                                       verbose=verbose)

        # train the model
        seq_lstm_model.fit(train_x, train_y,
                           epochs=..., batch_size=1, verbose=1)

This snippet returns a model object along with its summary. We also see the output of each of the epochs while the model trains on the training data.

Figure: RNN summary

The summary shows the total number of parameters the RNN tries to learn, a surprisingly small number. We urge you to explore the summary of the model prepared in the previous section; the results should surprise most (hint: this model has far fewer parameters to learn!). This summary also points toward an important fact: the shape of the first LSTM layer. It clearly shows that the model expects the inputs to adhere to this shape (the shape of the training dataset) for training as well as predicting. Since our test dataset is smaller, we need some way to match the required shape.

While modeling sequences with RNNs, it is common practice to pad sequences in order to match a given shape. Usually, in cases where there are multiple sequences to train upon (for example, text generation), the size of the longest sequence is used and the shorter ones are padded to match it. We do so only for programmatic reasons and discard the padded values otherwise (see Keras masking for more on this). The padding utility is available from the keras.preprocessing.sequence module. The following snippet pads the test dataset with 0s after the actual data (you can choose between pre-pad and post-pad) and then uses the padded sequence to predict/forecast. We also calculate and print the RMSE score of the forecast.

In [ ]: # pad input sequence
        testPredict = pad_sequences(test_x,
                                    maxlen=train_x.shape[1],
                                    padding='post',
                                    dtype='float64')
In [ ]: # forecast values
        testPredict = seq_lstm_model.predict(testPredict)

        # evaluate performance
        testScore = math.sqrt(mean_squared_error(test_y[0],
                                                 testPredict[0][:test_x.shape[1]]))
        print('Test Score: %.2f RMSE' % (testScore))

Test Score: ... RMSE

We can perform the same steps on the training set as well and check the performance. While generating the train and test datasets, the function get_seq_train_test() also returned the scaler object. We next use this scaler object to perform an inverse transformation to get the prediction values back in the original scale. The following snippet performs the inverse transformation and then plots the series.

In [ ]: # inverse transformation
        trainPredict = scaler.inverse_transform(trainPredict.reshape(trainPredict.shape[1]))
        testPredict = scaler.inverse_transform(testPredict.reshape(testPredict.shape[1]))

        train_size = len(trainPredict) + 1

        # plot the true and forecasted values
        plt.plot(sp_close_series.index,
                 sp_close_series.values, c='black',
                 alpha=0.3, label='True Data')

        plt.plot(sp_close_series.index[1:train_size],
                 trainPredict,
                 label='Training Fit', c='g')

        plt.plot(sp_close_series.index[train_size + 1:],
                 testPredict[:test_x.shape[1]],
                 label='Forecast')

        plt.title('Forecast Plot')
        plt.legend()
        plt.show()

Figure: Forecast for the S&P 500 using LSTM based sequence modeling
The forecast plot shows a promising picture. We can see that the training fit is nearly perfect, which is kind of expected. The testing performance, or the forecast, also shows decent performance. Even though the forecast deviates from the actual values in places, the overall performance, both in terms of RMSE and the fit, seems to have worked.

Through the use of the TimeDistributed layer wrapper, we achieved the goal of modeling this data as a true time series. The model not only had better performance in terms of overall fit, it required far less feature engineering and a much simpler model (in terms of the number of training parameters). In this model, we also truly utilized the power of LSTMs by allowing the network to learn and figure out what and how much of the past information impacts the forecast (as compared to the regression modeling case, where we had restricted the window sizes).

Two important points before we conclude this section. First, both models have their own advantages and disadvantages. The aim of this section was to chalk out potential ways of modeling a given problem; the actual usage mostly depends upon the requirements of the use case. Secondly, and more importantly, either of the models is for learning/demonstration purposes. Actual stock price forecasting requires far more rigor and knowledge; we just scraped the tip of the iceberg.

Upcoming Techniques: Prophet

The data science landscape is ever evolving, and new algorithms, tweaks, and tools are coming up at a rapid pace. One such tool is called Prophet. This is a framework, open sourced by Facebook's data science team, for analyzing and forecasting time series. Prophet uses an additive model that can work with trending and seasonal data. The aim of this tool is to enable forecasting at scale. It is still in beta, yet has some really useful features. More on the tool is available on its project web site and in the accompanying paper. The installation steps are outlined on the web site and are straightforward through pip and conda. Prophet also uses scikit-style APIs of fit() and predict(), with additional utilities to better handle time series data. For the hands-on examples in this section, you can refer to the Jupyter notebook notebook_stock_prediction_fbprophet.ipynb for the necessary code snippets and examples.

Note: Prophet is still in beta and is undergoing changes. Also, its installation on the Windows platform is known to cause issues; kindly use conda install (steps mentioned on the web site) with the Anaconda distribution to avoid issues.

Since we already have the S&P 500 index price information available in a dataframe/series, we now test how we can use this tool to forecast. We begin by converting the time series index into a column of its own (simply how Prophet expects the data), followed by splitting the series into training and testing sets. The following snippet performs the required actions.

In [ ]: # reset index to get date_time as a column
        prophet_df = sp_df.reset_index()

        # prepare the required dataframe
        prophet_df.rename(columns={'index': 'ds', 'Close': 'y'}, inplace=True)
        prophet_df = prophet_df[['ds', 'y']]
In [ ]: # prepare train and test sets
        train_size = int(prophet_df.shape[0] * ...)
        train_df = prophet_df.ix[:train_size]
        test_df = prophet_df.ix[train_size + 1:]

Once we have the datasets prepared, we create an object of the Prophet class and simply fit the model using the fit() function. Kindly note that the model expects the time series value to be in a column named 'y' and the timestamp in a column named 'ds'. To make forecasts, Prophet requires the set of dates for which we need to forecast. For this, it provides a clean utility called make_future_dataframe(), which takes the number of days required for the forecast as input. The following snippet uses this dataframe to forecast values.

In [ ]: # prepare a future dataframe
        test_dates = pro_model.make_future_dataframe(periods=test_df.shape[0])

        # forecast values
        forecast_df = pro_model.predict(test_dates)

The output from the predict() function is a dataframe that includes both in-sample predictions as well as forecasted values. The dataframe also includes the confidence interval values. All of this can be easily plotted using the plot() function of the model object. The following snippet plots the forecasted values against the original time series along with the confidence intervals.

In [ ]: # plot against true data
        plt.plot(forecast_df.yhat, c='r', label='Forecast')
        plt.plot(forecast_df.yhat_lower.iloc[train_size + 1:],
                 linestyle='--', c='b', alpha=0.3,
                 label='Confidence Interval')
        plt.plot(forecast_df.yhat_upper.iloc[train_size + 1:],
                 linestyle='--', c='b', alpha=0.3,
                 label='Confidence Interval')
        plt.plot(prophet_df.y, c='g', label='True Data')
        plt.legend()
        plt.title('Prophet Model Forecast Against True Data')
        plt.show()

Figure: Forecasts from Prophet against true/observed values
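The snippets above use a fitted model object named pro_model without showing its creation; a minimal sketch of that step, assuming the fbprophet package and the 'ds'/'y' columns prepared earlier, is shown here.

In [ ]: from fbprophet import Prophet

        # create the model object and fit it on the training dataframe
        pro_model = Prophet()
        pro_model.fit(train_df)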
The model's forecasts are a bit off the mark (see the preceding figure), but the exercise clearly demonstrates the possibilities. The forecast dataframe provides even more details about seasonality, weekly trends, and so on; you are encouraged to explore this further. Prophet is based upon Stan. Stan is a statistical modeling language/framework that provides algorithms exposed through interfaces for all major languages, including Python. You may explore more on Stan on its project web site.

Summary

This chapter introduced the concepts of time series forecasting and analysis using stock and commodity price information. Through this chapter, we covered the basic components of a time series along with common techniques for preprocessing such data. We then worked on the gold price prediction use case, which utilized the quandl library to get daily gold price information. We then discussed traditional time series analysis techniques and introduced key concepts related to the Box-Jenkins methodology and ARIMA in particular. We also discussed techniques for identifying non-stationary time series and transforming them into stationary ones using the AD Fuller test and ACF and PACF plots. We modeled the gold price information using ARIMA based on statsmodels APIs while developing some key utility functions like auto_arima() and arima_gridsearch_cv(). Key insights and caveats were also discussed.

The next section of the chapter introduced the stock price prediction use case. Here, we utilized pandas_datareader to get S&P 500 daily closing price information. To solve this use case, we primarily utilized RNN based models. We provided two alternative perspectives on formulating the forecasting problem, both using LSTMs. The first formulation closely imitated the regression concepts discussed earlier; a two-layer stacked LSTM network was used to forecast stock price information. The second perspective utilized the TimeDistributed layer wrapper from Keras to enable sequence modeling of the stock price information. Various utilities and key concepts were discussed while working on the use case. Finally, an upcoming tool (still in beta), Prophet from Facebook, was discussed. The tool is made available by Facebook's data science team to perform forecasting at scale. We utilized the framework to quickly evaluate its performance on the same stock price information and shared the results. A multitude of techniques and concepts were introduced in this chapter, along with the intuition on how to formulate certain time series problems. Stay tuned for some more exciting use cases in the next chapter.
Deep Learning for Computer Vision

Deep learning is not just a keyword abuzz in industry and academics; it has thrown wide open a whole new field of possibilities. Deep learning models are being employed in all sorts of use cases and domains, some of which we saw in the previous chapters. Deep neural networks have tremendous potential to learn complex non-linear functions, patterns, and representations. Their power is driving research in multiple fields, including computer vision, audio-visual analysis, chatbots, and natural language understanding, to name a few.

In this chapter, we touch on some of the advanced areas in the field of computer vision, which have recently come into prominence with the advent of deep learning. This includes real-world applications like image categorization and classification, and the very popular concept of image artistic style transfer. Computer vision is all about the art and science of making machines understand high-level, useful patterns and representations from images and videos, so that they are able to make intelligent decisions similar to what a human would do upon observing their surroundings. Building on core concepts like convolutional neural networks and transfer learning, this chapter provides you with a glimpse into the forefront of deep learning research with several real-world case studies from computer vision.

This chapter discusses convolutional neural networks through the task of image classification, using publicly available datasets like CIFAR, ImageNet, and MNIST. We will utilize our understanding of CNNs to then take on the task of style transfer and understand how neural networks can be used to learn high-level features. Through this chapter, we cover the following topics in detail:

- A brief overview of convolutional neural networks
- Image classification using CNNs from scratch
- Transfer learning: image classification using pretrained models
- Neural style transfer using CNNs

The code samples, Jupyter notebooks, and sample datasets for this chapter are available in the GitHub repository for this book, under the directory/folder for this chapter.

Convolutional Neural Networks

Convolutional neural networks (CNNs) are similar to the general neural networks we have discussed over the course of this book. The additional explicit assumption of the input being an image (a tensor) is what makes CNNs optimized and different from the usual neural networks. This explicit assumption is what allows us to design deep CNNs while keeping the number of trainable parameters in check (in comparison to general neural networks).
We touched upon the concepts of CNNs earlier in this book (in the sections on deep learning and on feature engineering on image data). However, as a quick refresher, the following are the key concepts worth reiterating:

- Convolutional layer: This is the key differentiating component of a CNN as compared to other neural networks. A convolutional layer, or conv layer, is a set of learnable filters. These filters help capture spatial features. They are usually small (along the width and height) but cover the full depth (color range) of the image. During the forward pass, we slide the filter across the width and the height of the image while computing the dot product between the filter attributes and the input at any position. The output is a two-dimensional activation map from each filter, which are then stacked to get the final output.
- Pooling layer: These are basically down-sampling layers used to reduce spatial size and the number of parameters. These layers also help in controlling overfitting. Pooling layers are inserted in between conv layers. Pooling layers can perform down sampling using functions such as max, average, L2-norm, and so on.
- Fully connected layer: Also known as the FC layer, these are similar to fully connected layers in general neural networks. They have full connections to all neurons in the previous layer. This layer helps perform the tasks of classification.
- Parameter sharing: The unique thing about CNNs, apart from the conv layer, is parameter sharing. Conv layers use the same set of weights across the filters, thus reducing the overall number of parameters required.

A typical CNN architecture with all the components is depicted in the following figure, which shows the LeNet CNN model (source: deeplearning.net). A minimal code sketch wiring these building blocks together follows the figure.

Figure: LeNet CNN model (source: deeplearning.net)

CNNs have been studied in-depth and are being constantly improved and experimented with. For an in-depth understanding of CNNs, refer to courses such as the one from Stanford, available at https://cs231n.github.io/convolutional-networks/.
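The sketch below is purely illustrative (it is not the classifier we build in the next section): it wires the components above into a small LeNet-style Keras model, assuming 32 x 32 RGB inputs and 10 output classes.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

lenet_like = Sequential()
lenet_like.add(Conv2D(6, kernel_size=(5, 5), activation='relu',
                      input_shape=(32, 32, 3)))      # convolutional layer
lenet_like.add(MaxPooling2D(pool_size=(2, 2)))       # pooling (down-sampling) layer
lenet_like.add(Conv2D(16, kernel_size=(5, 5), activation='relu'))
lenet_like.add(MaxPooling2D(pool_size=(2, 2)))
lenet_like.add(Flatten())
lenet_like.add(Dense(120, activation='relu'))        # fully connected layers
lenet_like.add(Dense(10, activation='softmax'))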
Image Classification with CNNs

Convolutional neural networks are prime examples of the potential and power of neural networks to learn detailed feature representations and patterns from images and perform complex tasks, ranging from object recognition to image classification and many more. CNNs have gone through tremendous research, and advancements have led to more complex and powerful architectures like VGG-16, VGG-19, Inception, and many more interesting models. We begin by getting some hands-on experience with CNNs by working on an image classification problem. We shared an example of CNN based classification earlier in this book through the notebook Bonus - Classifying handwritten digits using Deep CNNs.ipynb, which classifies and predicts human handwritten digits by leveraging CNN based deep learning. In case you haven't gone through it, do not worry, as we go through a detailed example here. For our deep learning needs, we will be utilizing the Keras framework with the TensorFlow backend, similar to what we used previously.

Problem Statement

Given a set of images containing real-world objects, it is fairly easy for humans to recognize them. Our task here is to build a multiclass (10 classes or categories) image classifier that can identify the correct class label of a given image. For this task, we will be utilizing the CIFAR10 dataset.

Dataset

The CIFAR10 dataset is a collection of tiny labeled images spanning 10 different classes. The dataset was collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton, and is available at https://www.cs.toronto.edu/~kriz/cifar.html. It contains tiny 32 x 32 images, with 50,000 training and 10,000 test samples. Each image can fall into one and only one of the following classes:

- Automobile
- Airplane
- Bird
- Cat
- Deer
- Dog
- Frog
- Horse
- Ship
- Truck

Each class is mutually exclusive. There is another, larger version of the dataset called the CIFAR100. For the purpose of this section, we will consider the CIFAR10 dataset. We will be accessing the CIFAR10 dataset through the keras.datasets module, which downloads the required files if they are not already present.
CNN Based Deep Learning Classifier from Scratch

Similar to any machine learning algorithm, neural networks require the input data to be of a certain shape, size, and type. So, before we reach the modeling step, the first thing is to preprocess the data itself. The following snippet gets the dataset and then performs one hot encoding of the labels. Remember, there are 10 classes to work with, so we are dealing with a multi-class classification problem.

In [ ]: import keras
        from keras.datasets import cifar10

        num_classes = 10

        (x_train, y_train), (x_test, y_test) = cifar10.load_data()

        # convert class vectors to binary class matrices
        y_train = keras.utils.to_categorical(y_train, num_classes)
        y_test = keras.utils.to_categorical(y_test, num_classes)

The dataset, if not already present locally, is downloaded automatically. The following are the shapes of the objects obtained.

In [ ]: print('x_train shape:', x_train.shape)
        print(x_train.shape[0], 'train samples')
        print(x_test.shape[0], 'test samples')

x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples

Now that we have our training and test datasets, the next step is to build the CNN model. Since we have two-dimensional images (the third dimension is the channel information), we will be using Conv2D layers. As discussed in the previous section, CNNs use a combination of convolutional layers and pooling layers, followed by a fully connected end, to identify/classify the data. The model architecture is built as follows (refer to the chapter notebook for the exact filter counts and layer sizes).

In [ ]: model = Sequential()
        model.add(Conv2D(..., kernel_size=(3, 3),
                         activation='relu',
                         input_shape=input_shape))
        model.add(Conv2D(..., (3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(...))
        model.add(Flatten())
        model.add(Dense(..., activation='relu'))
        model.add(Dropout(...))
        model.add(Dense(num_classes, activation='softmax'))

It starts off with a convolutional layer with a set of filters and the rectified linear unit (ReLU) activation function. The input shape resembles each image's size, i.e., 32 x 32 x 3 (a color image has three channels: RGB). This is followed by another convolutional layer and a max-pooling layer. Finally, we have the fully connected dense layers; since we have 10 classes to choose from, the final output layer has a softmax activation.
The next step involves compiling the model. We use categorical_crossentropy as our loss function, since we are dealing with multiple classes. Besides this, we use the Adadelta optimizer and then train the classifier on the training data. The following snippet showcases the same.

In [ ]: model.compile(loss=keras.losses.categorical_crossentropy,
                      optimizer=keras.optimizers.Adadelta(),
                      metrics=['accuracy'])

        model.fit(x_train, y_train,
                  batch_size=batch_size,
                  epochs=epochs,
                  verbose=1)

Epoch 1/...
50000/50000 [==============================] - loss: ... - acc: ...
...

From the preceding output, it is clear that we trained the model for several epochs. Training on a CPU takes a while per epoch; the performance improves manifold when done using a GPU. We can see the training accuracy based on the last epoch. We will now evaluate the testing performance; this is checked using the evaluate function of the model object, which reports the test loss and test accuracy. We can see that our very simple CNN based deep learning model achieved a reasonable accuracy, given the fact that we have built a very simple model and that we haven't done much preprocessing or model tuning. You are encouraged to try different CNN architectures and experiment with hyperparameter tuning to see how the results can be improved.

The initial few conv layers of the model work toward feature extraction, while the last couple of layers (fully connected) help in classifying the data. Thus, it would be interesting to see how the image data is manipulated by the conv-net we just created. Luckily, Keras provides hooks to extract information at intermediate steps in the model. These depict how various regions of the image activate the conv layers and how the corresponding feature representations and patterns are extracted.
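The evaluate() call referred to above is not reproduced in the text; it might look like the following minimal sketch (the exact metric values are available in the chapter notebook).

In [ ]: score = model.evaluate(x_test, y_test, verbose=0)
        print('Test loss:', score[0])
        print('Test accuracy:', score[1])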
Figure: Sample image from the CIFAR10 dataset

A sample flow of how an image is viewed by the CNN is explained in the notebook notebook_cnn_cifar10_classifier.ipynb, which contains the rest of the code discussed in this section. The preceding figure shows an image from the test dataset; it looks like a ship, and the model correctly identifies it as well, as depicted in this snippet.

In [ ]: # actual image id
        img_idx = ...

        # actual image label
        y_test[img_idx]

Out [ ]: array([...])

In [ ]: # predict label with our model
        test_image = np.expand_dims(x_test[img_idx], axis=0)
        model.predict_classes(test_image, batch_size=1)

Out [ ]: array([...], dtype=int64)

You can extract and view the activation maps of the image, based on what representations are learned and extracted by the conv layers, using the get_activations() and display_activations() functions in the notebook. The following figure shows the activations of the initial conv layers of the CNN model we just built.

Figure: Sample image through a CNN layer
We also recommend you go through the section "Automated Feature Engineering with Deep Learning" earlier in this book to learn more about extracting feature representations from images using convolutional layers.

CNN Based Deep Learning Classifier with Pretrained Models

Building a classifier from scratch has its own set of pros and cons. Yet, a more refined approach is to leverage pre-trained models trained over large, complex datasets. There are many famous CNN architectures, like LeNet, ResNet, VGG-16, VGG-19, and so on. These models have deep and complex architectures that have been fine-tuned and trained over diverse, large datasets. Hence, these models have been proven to have amazing performance on complex object recognition tasks.

Since obtaining large labeled datasets and training highly complex and deep neural networks is a time-consuming task (training a complex CNN like VGG could take a few weeks, even using GPUs), in practice we utilize a concept formally termed transfer learning. This concept of transfer learning helps us leverage existing models for our tasks. The core idea is to leverage the learning that the model acquired from being trained over a large dataset, and then transfer this learning by re-using the same model to extract feature representations from new images. There are several strategies for performing transfer learning, some of which are mentioned as follows:

- Pre-trained model as feature extractor: The pre-trained model is used to extract features for our dataset. We build a fully connected classifier on top of these features. In this case, we only need to train the fully connected classifier, which does not take much time.
- Fine-tuning pre-trained models: It is possible to fine-tune an existing pre-trained model by fixing some of the layers and allowing others to learn/update weights, apart from the fully connected layers. Usually, it is observed that initial layers capture generic features while the deeper ones become more specific in terms of feature extraction. Thus, depending upon the requirements, we fix certain layers and fine-tune the rest.

In this section, we see an example where we utilize a pre-trained conv-network as a feature extractor, build a fully connected layer based classifier on top of it, and train the model. We will not train the feature extraction layers, and hence leverage the principles of transfer learning by using the pre-trained conv layers for feature extraction.

The VGG-19 model from the Visual Geometry Group of Oxford University is one state-of-the-art convolutional neural network. This has been shown to perform extremely well on various benchmarks and competitions. VGG-19 is a 19-layer conv-net trained on the ImageNet dataset. ImageNet is a visual database of millions of hand-annotated images spanning thousands of categories. This model has been widely studied and used in tasks such as transfer learning.

Note: More details on this and other research by the VGG group are available at http://www.robots.ox.ac.uk/~vgg/research/very_deep/.

This pretrained model is available through the keras.applications module. As mentioned, we will utilize VGG-19 to act as a feature extractor to help us build a classifier on the CIFAR10 dataset. Since we will be using VGG-19 for feature extraction, we do not need the top (or fully connected) layers of this model. Keras makes this as simple as setting a single flag value to False. The following snippet loads the VGG-19 model architecture, consisting of the conv layers, and leaves out the fully connected layers.
In [ ]: from keras import applications
        vgg_model = applications.VGG19(include_top=False, weights='imagenet')

Now that the pre-trained model is available, we will utilize it to extract features from our training dataset. Remember, VGG-19 is trained upon ImageNet, while we will be using CIFAR10 to build a classifier. Since ImageNet contains millions of images spanning thousands of categories, it is safe to assume that CIFAR10's categories would be a subset here. Before moving on to feature extraction using the VGG-19 model, it would be a good idea to check out the model's architecture.

In [ ]: vgg_model.summary()

Layer (type)                 Output Shape              Param #
================================================================
input_1 (InputLayer)         (None, None, None, 3)     0
block1_conv1 (Conv2D)        (None, None, None, 64)    1792
block1_conv2 (Conv2D)        (None, None, None, 64)    36928
block1_pool (MaxPooling2D)   (None, None, None, 64)    0
block2_conv1 (Conv2D)        (None, None, None, 128)   73856
block2_conv2 (Conv2D)        (None, None, None, 128)   147584
block2_pool (MaxPooling2D)   (None, None, None, 128)   0
block3_conv1 (Conv2D)        (None, None, None, 256)   295168
block3_conv2 (Conv2D)        (None, None, None, 256)   590080
block3_conv3 (Conv2D)        (None, None, None, 256)   590080
block3_conv4 (Conv2D)        (None, None, None, 256)   590080
block3_pool (MaxPooling2D)   (None, None, None, 256)   0
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808
block4_conv4 (Conv2D)        (None, None, None, 512)   2359808
block4_pool (MaxPooling2D)   (None, None, None, 512)   0
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808
block5_conv4 (Conv2D)        (None, None, None, 512)   2359808
block5_pool (MaxPooling2D)   (None, None, None, 512)   0
================================================================
Total params: 20,024,384
Trainable params: 20,024,384
Non-trainable params: 0

From the preceding output, you can see that the architecture is huge, with a lot of layers. The following figure depicts the same in an easier-to-understand visual showing all the layers. Remember that we do not use the fully connected layers depicted at the extreme right of the figure. We recommend checking out the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group, Department of Engineering Science, University of Oxford, available at https://arxiv.org/abs/1409.1556.

Figure: Visual depiction of the VGG architecture

Loading the CIFAR10 training and test datasets is the same as discussed in the previous section, and we perform a similar one hot encoding of the labels as well. Since the VGG model has been loaded without the final fully connected layers, the predict() function of the model helps us get the extracted features on our dataset. The following snippet extracts the features for both training and test datasets.

In [ ]: bottleneck_features_train = vgg_model.predict(x_train, verbose=0)
        bottleneck_features_test = vgg_model.predict(x_test, verbose=0)
These features are widely known as bottleneck features due to the fact that there is an overall decrease in the volume of the input data points. It is worth exploring the model summary to understand how the VGG model transforms the data. The output of this stage (the bottleneck features) is used as input to the classifier we are going to build next. The following snippet builds a simple fully connected two-layer classifier.

In [ ]: clf_model = Sequential()
        clf_model.add(Flatten(input_shape=bottleneck_features_train.shape[1:]))
        clf_model.add(Dense(..., activation='relu'))
        clf_model.add(Dropout(...))
        clf_model.add(Dense(..., activation='relu'))
        clf_model.add(Dropout(...))
        clf_model.add(Dense(num_classes, activation='softmax'))

        clf_model.compile(loss=keras.losses.categorical_crossentropy,
                          optimizer=keras.optimizers.Adadelta(),
                          metrics=['accuracy'])

The model's input layer matches the dimensions of the bottleneck features (for obvious reasons). As with the CNN model we built from scratch, this model also has a dense output layer with softmax activation. Training this model, as opposed to the complete VGG network, is fairly simple and fast, as depicted in the following snippet.

In [ ]: clf_model.fit(bottleneck_features_train, y_train,
                      batch_size=batch_size,
                      epochs=epochs,
                      verbose=1)

Epoch 1/...
50000/50000 [==============================] - loss: ... - acc: ...
...

We can add hooks to stop the training early based on early stopping criteria and so on, but for now we keep things simple. The complete code for this section is available in the notebook notebook_pretrained_cnn_cifar10_classifier.ipynb. Overall, we achieve a good accuracy on the training dataset and a reasonable accuracy on the test dataset. Now you'll see the performance of this classifier, built on top of a pre-trained model, on the test dataset. The following snippet showcases a utility function that takes the index number of an image in the test dataset as input and compares the actual and predicted labels.

def predict_label(img_idx, show_proba=True):
    plt.imshow(x_test[img_idx], aspect='auto')
    plt.title("Image to be Labeled")
    plt.show()
print("actual class:{}format(np nonzero(y_test[img_idx])[ ][ ])test_image =np expand_dims(x_test[img_idx]axis= bf vgg_model predict(test_image,verbose= pred_label clf_model predict_classes(bf,batch_size= ,verbose= print("predicted class:{}format(pred_label[ ])if show_probaprint("predicted probabilities"print(clf_model predict_proba(bf)the following is the output of the predict_labelfunction when tested against couple of images from the test dataset as depicted in figure - we correctly predicted the images belong to class label (dogand (truck)figure - predicted labels from pre-trained cnn based classifier this section demonstrated the power and advantages of transfer learning instead of spending time reinventing the wheelwith few lines of codewe were able to leverage state of the art neural network for our classification task the concept of transfer learning is what forms the basis of neural style transferwhich we will discuss in the next section artistic style transfer with cnns paintings (or for that matter any form of artrequire special skill which few have mastered paintings present complex interplay of content and style photographs on the other hand are combination of perspectives and light when the two are combinedthe results are spectacular and surprising one such example is shown in figure -
Figure: Left, the original photograph depicting the Neckarfront in Tubingen, Germany; right, the painting (inset: The Starry Night by Vincent van Gogh) that provided the style for the respective generated image. Source: A Neural Algorithm of Artistic Style, Gatys et al. (arXiv:1508.06576)

The results in the figure showcase how a painting's (Van Gogh's The Starry Night) style has been transferred to a photograph of the Neckarfront. At first glance, the process seems to have picked up the content from the photograph, and the style, colors, and stroke patterns from the painting, to generate the final outcome. The results are amazing, but what is more surprising is: how was it done?

The figure showcases a process termed artistic style transfer. The process is an outcome of research by Gatys et al. and is presented in their paper A Neural Algorithm of Artistic Style. In this section, we discuss the intricacies of this paper from an implementation point of view and see how we can perform this technique ourselves.

Note: Prisma is an app that transforms photos into works of art using techniques of artistic style transfer based on convolutional neural networks. More about the app is available on its web site.

Background

Formally, neural style transfer is the process of applying the "style" of a reference image to a specific target image such that, in the process, the original "content" of the target image remains unchanged. Here, style is defined as the colors, patterns, and textures present in the reference image, while content is defined as the overall structure and higher-level components of the image. The main objective, then, is to retain the content of the original target image while superimposing or adopting the style of the reference image on the target image.

To define this concept mathematically, consider three images: the original content (denoted as C), the reference style (denoted as S), and the generated image (denoted as G). We need a way to measure how different images C and G are in terms of their content: a function that tends toward 0 when C and G are similar in content and grows as they differ. This can be concisely stated in terms of a loss function as

    L_content = distance(C, G)

where distance is a norm function such as the L2 norm. Along the same lines, we can define another function that captures how different images S and G are in terms of their style. In other words, this can be stated as follows:

    L_style = distance(S, G)
Thus, for the overall process of neural style transfer, we have an overall loss function, which can be defined as a combination of the content and style loss functions:

    L_style_transfer = argmin_G (α L_content(C, G) + β L_style(S, G))

where α and β are weights used to control the impact of the content and style components on the overall loss. The loss function we will try to minimize actually consists of three parts, namely the content loss, the style loss, and the total variation loss, which we will be talking about later.

The beauty of deep learning is that, by leveraging architectures like deep convolutional neural networks (CNNs), we can mathematically define the above-mentioned style and content functions. We will be using principles of transfer learning in building our system for neural style transfer. We introduced the concept of transfer learning using a pre-trained deep CNN model; we will be leveraging the same kind of pre-trained model for the task of neural style transfer. The main steps are outlined as follows:

- Leverage the pre-trained VGG model to compute layer activations for the style, content, and generated images.
- Use these activations to define the specific loss functions mentioned earlier.
- Finally, use gradient descent to minimize the overall loss.

We recommend you follow along with this section using the notebook titled Neural Style Transfer.ipynb, which contains step-by-step details of the style transfer process. We would also like to give special mention and thanks to Francois Chollet as well as Harish Narayanan for providing some excellent resources on style transfer; details on the same will be mentioned later. We also recommend you check out the following papers (detailed links shared later on):

- A Neural Algorithm of Artistic Style, by Leon Gatys, Alexander Ecker, and Matthias Bethge
- Perceptual Losses for Real-Time Style Transfer and Super-Resolution, by Justin Johnson, Alexandre Alahi, and Li Fei-Fei

Preprocessing

The first and foremost step toward implementing such a network is to preprocess the data, or images in this case. The following are quick utilities to preprocess images for size and channel adjustments.

import numpy as np
from keras.applications import vgg19
from keras.preprocessing.image import load_img, img_to_array

def preprocess_image(image_path, height=None, width=None):
    height = ... if not height else height
    width = width if width else int(width * height / height)
    img = load_img(image_path, target_size=(height, width))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg19.preprocess_input(img)
    return img

def deprocess_image(x):
    # remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68

    # 'BGR' -> 'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

As we will be writing custom loss functions and manipulation routines, we need to define certain placeholders. Keras is a high-level library that utilizes tensor manipulation backends (like TensorFlow, Theano, and CNTK) to perform the heavy lifting; thus, these placeholders provide high-level abstractions to work with the underlying tensor objects. The following snippet prepares placeholders for the style, content, and generated images, along with the input tensor for the neural network.

In [ ]: # this is the path to the image you want to transform
        target_img = 'data/city_road.jpg'
        # this is the path to the style image
        reference_style_img = 'data/style.png'

        width, height = load_img(target_img).size
        img_height = ...
        img_width = int(width * img_height / height)

        target_image = K.constant(preprocess_image(target_img,
                                                   height=img_height,
                                                   width=img_width))
        style_image = K.constant(preprocess_image(reference_style_img,
                                                  height=img_height,
                                                  width=img_width))

        # placeholder for our generated image
        generated_image = K.placeholder((1, img_height, img_width, 3))

        # combine the images into a single batch
        input_tensor = K.concatenate([target_image,
                                      style_image,
                                      generated_image], axis=0)

We will load the pre-trained VGG-19 model as we did in the previous section, i.e., without the top fully connected layers. The only difference here is that we will be providing the model constructor the size dimensions of the input tensor. The following snippet fetches the pretrained model.

In [ ]: model = vgg19.VGG19(input_tensor=input_tensor,
                            weights='imagenet',
                            include_top=False)

You may use the summary() function to understand the architecture of the pre-trained model.
Loss Functions

As discussed in the Background subsection, the problem of neural style transfer revolves around loss functions for content and style. In this subsection, we discuss and define the required loss functions.

Content Loss

In any CNN-based model, activations from the top layers contain more global and abstract information (high-level structures like a face), while bottom layers contain local information (low-level structures like eyes, nose, edges, and corners) about the image. We want to leverage the top layers of the CNN to capture the right representations for the content of an image. Hence, for the content loss, considering we will be using the pretrained VGG-19 CNN, we can define our loss function as the L2 norm (scaled and squared Euclidean distance) between the activations of a top layer (giving feature representations) computed over the target image and the activations of the same layer computed over the generated image. Assuming we usually get feature representations relevant to the content of images from the top layers of a CNN, the generated image is expected to look similar to the base target image. The following snippet showcases the function to compute the content loss.

def content_loss(base, combination):
    return K.sum(K.square(combination - base))

Style Loss

The original paper on neural style transfer, A Neural Algorithm of Artistic Style by Gatys et al., leverages multiple convolutional layers in the CNN (instead of one) to extract meaningful patterns and representations, capturing information pertaining to appearance or style from the reference style image across all spatial scales, irrespective of the image content. Staying true to the original paper, we will be leveraging the Gram matrix, computing it over the feature representations generated by the convolution layers. The Gram matrix computes the inner product between the feature maps produced in any given conv layer. The inner product terms are proportional to the covariances of the corresponding feature sets and hence capture patterns of correlation between the features of a layer that tend to activate together. These feature correlations help capture relevant aggregate statistics of the patterns of a particular spatial scale, which correspond to the style, texture, and appearance, and not to the components and objects present in an image.

The style loss is thus defined as the scaled and squared Frobenius norm of the difference between the Gram matrices of the reference style and the generated images. Minimizing this loss helps ensure that the textures found at different spatial scales in the reference style image will be similar in the generated image. The following snippet defines a style loss function based on the Gram matrix calculation.

def style_loss(style, combination, height, width):

    def build_gram_matrix(x):
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
        gram_matrix = K.dot(features, K.transpose(features))
        return gram_matrix

    S = build_gram_matrix(style)
    C = build_gram_matrix(combination)
    channels = 3
    size = height * width
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
Total Variation Loss

It was observed that optimizing to reduce only the style and content losses led to highly pixelated and noisy outputs. To address this, the total variation loss was introduced. The total variation loss is analogous to a regularization loss: it is introduced to ensure spatial continuity and smoothness in the generated image and to avoid noisy, overly pixelated results. It is defined in the following function.

def total_variation_loss(x):
    a = K.square(
        x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :])
    b = K.square(
        x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :])
    # the exponent value was lost in extraction; 1.25 is the conventional choice
    return K.sum(K.pow(a + b, 1.25))
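As a standalone illustration of why a total variation term smooths the result (this sketch is not part of the book's code), the following NumPy snippet compares the total variation of a smooth gradient image against that of pure noise; the noisy image scores far higher, which is exactly what the loss penalizes.

import numpy as np

def total_variation(img):
    # sum of squared differences between neighboring pixels,
    # along both the vertical and horizontal directions
    dh = img[1:, :] - img[:-1, :]
    dw = img[:, 1:] - img[:, :-1]
    return np.sum(dh ** 2) + np.sum(dw ** 2)

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth horizontal gradient
noisy = rng.random((64, 64))                        # uncorrelated noise

print(total_variation(smooth))  # small value
print(total_variation(noisy))   # much larger value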
Overall Loss Function

Having defined the components of the overall loss function for neural style transfer, the next step is to piece together these building blocks. Since content and style information is captured by the CNN at different depths in the network, we need to apply and calculate each type of loss at the appropriate layers. Utilizing insights and research by Gatys et al. and Johnson et al. in their respective papers, we define the following utility to identify the content and style layers from the VGG model. Even though Johnson et al. leverage the VGG-16 model for faster and better performance, we constrain ourselves to a single VGG model for ease of understanding and consistency across runs.

# define a function to set layers based on the source paper followed
# (the specific layer indices were lost in extraction; the names below are the
#  configurations commonly attributed to each paper and should be treated as
#  assumptions)
def set_cnn_layers(source='gatys'):
    if source == 'gatys':
        # config from Gatys et al.
        content_layer = 'block5_conv2'
        style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                        'block4_conv1', 'block5_conv1']
    elif source == 'johnson':
        # config from Johnson et al.
        content_layer = 'block2_conv2'
        style_layers = ['block1_conv2', 'block2_conv2', 'block3_conv3',
                        'block4_conv3', 'block5_conv3']
    else:
        # use the Gatys config as the default anyway
        content_layer = 'block5_conv2'
        style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                        'block4_conv1', 'block5_conv1']
    return content_layer, style_layers

The following snippet then applies the overall loss function based on the layers selected by the set_cnn_layers() function for content and style.

In [ ]:
# weights for the weighted average loss function
# (the original weight values were lost in extraction; the values below are
#  typical choices and should be treated as assumptions)
content_weight = 0.025
style_weight = 1.0
total_variation_weight = 1e-4

# set the source research paper followed and set the content and style layers
source_paper = 'gatys'
content_layer, style_layers = set_cnn_layers(source=source_paper)

## build the weighted loss function

# initialize total loss
loss = K.variable(0.)

# add content loss
# (`layers` maps layer names to their output tensors; it is assumed to have been
#  built earlier, e.g. as dict([(layer.name, layer.output) for layer in model.layers]))
layer_features = layers[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features,
                                      combination_features)

# add style loss
for layer_name in style_layers:
    layer_features = layers[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features,
                    height=img_height, width=img_width)
    loss += (style_weight / len(style_layers)) * sl

# add total variation loss
loss += total_variation_weight * total_variation_loss(generated_image)

Custom Optimizer

The objective is to iteratively minimize the overall loss with the help of an optimization algorithm. In the paper by Gatys et al., optimization was done using the L-BFGS algorithm, a quasi-Newton method that is popularly used for solving non-linear optimization and parameter estimation problems. This method usually converges faster than standard gradient descent. SciPy has an implementation available in scipy.optimize.fmin_l_bfgs_b(); however, its limitations include the fact that it is applicable only to flat vectors, unlike the image matrices we are dealing with, and the fact that the values of the loss function and the gradients need to be passed as two separate functions.
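Before wiring this optimizer into the style transfer pipeline, it may help to see the bare calling convention of scipy.optimize.fmin_l_bfgs_b on a toy problem. This sketch is purely illustrative and independent of the book's code, but it shows the two separate callables (loss and gradient) and the flat-vector requirement mentioned above.

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# minimize a simple quadratic f(x) = sum((x - 3)^2):
# fmin_l_bfgs_b works on flat vectors and takes the loss and the gradient
# as two separate callables
def loss(x):
    return np.sum((x - 3.0) ** 2)

def grads(x):
    return 2.0 * (x - 3.0)

x0 = np.zeros(5)
x_min, min_val, info = fmin_l_bfgs_b(loss, x0, fprime=grads, maxfun=50)
print(x_min)    # all entries close to 3.0
print(min_val)  # close to 0.0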
We build an Evaluator class, based on patterns followed by Keras creator Francois Chollet, to compute both the loss and gradient values in one pass instead of as independent and separate computations. It returns the loss value when called the first time and caches the gradients for the next call; this is more efficient than computing both independently. The following snippet defines the Evaluator class.

class Evaluator(object):

    def __init__(self, height=None, width=None):
        self.loss_value = None
        self.grads_values = None
        self.height = height
        self.width = width

    def loss(self, x):
        assert self.loss_value is None
        x = x.reshape((1, self.height, self.width, 3))
        outs = fetch_loss_and_grads([x])
        loss_value = outs[0]
        grad_values = outs[1].flatten().astype('float64')
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

The loss and gradients are retrieved as follows. The snippet also creates an object of the Evaluator class.

In [ ]:
# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, generated_image)[0]

# function to fetch the values of the current loss and the current gradients
fetch_loss_and_grads = K.function([generated_image], [loss, grads])

# evaluator object
evaluator = Evaluator(height=img_height, width=img_width)

Style Transfer in Action

The final piece of the puzzle is to use all the building blocks and see the style transfer in action. The art/style and content images are available in the data directory for reference. The following snippet outlines how the loss and gradients are evaluated. We also write back outputs at regular intervals (every few iterations) to later understand how the process of neural style transfer transforms the images in consideration.

In [ ]:
import time  # fmin_l_bfgs_b and imsave are assumed to have been imported earlier (SciPy)

result_prefix = 'style_transfer_result_' + target_img.split('.')[0]
result_prefix = result_prefix + '_' + source_paper
iterations = 20

# Run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss.
# This is our initial state: the target image.
# Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat vectors.
x = preprocess_image(target_img, height=img_height, width=img_width)
x = x.flatten()

for i in range(iterations):
    print('Start of iteration', (i + 1))
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x,
                                     fprime=evaluator.grads,
                                     maxfun=20)  # maxfun value assumed; original lost in extraction
    print('Current loss value:', min_val)
    if (i + 1) % 5 == 0 or i == 0:
        # save current generated image only every 5 iterations
        img = x.copy().reshape((img_height, img_width, 3))
        img = deprocess_image(img)
        fname = result_prefix + '_at_iteration_%d.png' % (i + 1)
        imsave(fname, img)
        print('Image saved as', fname)
    end_time = time.time()
    print('Iteration %d completed in %ds' % (i + 1, end_time - start_time))

It must be pretty evident by now that neural style transfer is a computationally expensive task. For the set of images in consideration, each iteration took a substantial amount of time on an Intel CPU with limited RAM, and running multiple networks together slows each iteration down considerably. You may observe speedups if the same is done using GPUs. The following is the output of some of the iterations; we print the loss and the time taken for each iteration, and save the generated image every five iterations.

Start of iteration 1
Current loss value: ...
Image saved as style_transfer_result_city_road_gatys_at_iteration_1.png
Iteration 1 completed in ...s
Start of iteration 2
Current loss value: ...
Iteration 2 completed in ...s
...
Start of iteration 5
Current loss value: ...
Image saved as style_transfer_result_city_road_gatys_at_iteration_5.png
Iteration 5 completed in ...s
...
Start of iteration 10
Current loss value: ...
Image saved as style_transfer_result_city_road_gatys_at_iteration_10.png
Iteration 10 completed in ...s

Now you'll see how neural style transfer has worked out for the images in consideration. Remember that we saved checkpoint outputs after certain iterations for every pair of style and content images.
Note: The style we use for our first image, depicted in the figure below, is named Edtaonisl. This is a masterpiece by Francis Picabia; through this oil painting, Francis Picabia pioneered a new visual language. More details about this painting are available online.

We utilize the matplotlib and skimage libraries to load the images and understand the style transfer magic. The following snippet loads the city road photograph as our content image and the Edtaonisl painting as our style image.

In [ ]:
from skimage import io
from glob import glob
from matplotlib import pyplot as plt

cr_content_image = io.imread('results/city road/city_road.jpg')
cr_style_image = io.imread('results/city road/style.png')

# figure size reconstructed; the original constants were lost in extraction
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(cr_content_image)
t1 = ax1.set_title('City Road Image')
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(cr_style_image)
t2 = ax2.set_title('Edtaonisl Style')

Figure: The city road image as content and the Edtaonisl painting as the style image for neural style transfer

The following snippet loads and displays the generated (style-transferred) images as observed after the first, tenth, and twentieth iterations.
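The plotting code that follows assumes the checkpointed outputs have already been read into cr_iter1, cr_iter10, and cr_iter20. This extract does not show how they are loaded, so the snippet below is only a plausible sketch, with file names inferred from the result_prefix pattern used during optimization; treat the exact paths as assumptions.

# hypothetical loading of the checkpointed outputs saved during optimization
cr_iter1 = io.imread('results/city road/style_transfer_result_city_road_gatys_at_iteration_1.png')
cr_iter10 = io.imread('results/city road/style_transfer_result_city_road_gatys_at_iteration_10.png')
cr_iter20 = io.imread('results/city road/style_transfer_result_city_road_gatys_at_iteration_20.png')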
In [ ]:
fig = plt.figure(figsize=(18, 6))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(cr_iter1)
ax1.set_title('Iteration 1')
ax2 = fig.add_subplot(1, 3, 2)
ax2.imshow(cr_iter10)
ax2.set_title('Iteration 10')
ax3 = fig.add_subplot(1, 3, 3)
ax3.imshow(cr_iter20)
ax3.set_title('Iteration 20')
t = fig.suptitle('City Road Image after Style Transfer')

Figure: The city road image style transfer at the first, tenth, and twentieth iterations

The results depicted in the preceding figure sure seem pleasant and amazing. It is quite apparent how the generated image in the initial iterations resembles the structure of the content and, as the iterations progress, the style starts influencing the texture, color, strokes, and so on, more and more.

Note: The style used in our next example, depicted in the following figure, is the famous painting named The Great Wave by Katsushika Hokusai, completed in the early 1830s. It is amazing to see the styles of such talented artists being transferred to everyday photographs. More on this artwork is available at metmuseum.org/art/collection/search/.

Figure: The Italy street image as content and the wave-style painting as the style image for neural style transfer

We experimented with a few more sets of images, and the results truly were surprising and pleasant to look at. The output from neural style transfer for an image depicting an Italian street (shown in the preceding figure) is shown in the next figure at different iterations.
Figure: The Italian street image style transfer at the first, tenth, and twentieth iterations

The results depicted in the preceding figure are definitely a pleasure to look at and give the feeling of an entire city underwater! We encourage you to use images of your own with this same framework. Also feel free to experiment with leveraging different convolution layers for the style and content feature representations, as mentioned in Gatys et al. and Johnson et al.

Note: The concept and details of neural style transfer were introduced and explained by Gatys et al. and Johnson et al. in their respective papers. Special mention also goes to Francois Chollet, as well as Harish Narayanan's excellent blog, for detailed step-by-step guides on neural style transfer.

Summary

This chapter presented topics from the very forefront of the machine learning landscape. Through this chapter, we utilized our learnings about machine learning in general, and deep learning in particular, to understand the concepts of image classification, transfer learning, and style transfer. The chapter started off with a quick brush-up of concepts related to convolutional neural networks and how they are optimized architectures for handling image-related data. We then worked toward developing image classifiers: the first classifier was developed from scratch, and with the help of Keras we were able to achieve decent results; the second classifier utilized a pre-trained VGG deep CNN model as an image feature extractor. The pre-trained model based classifier helped us understand the concept of transfer learning and how it is beneficial.

The closing section of the chapter introduced the advanced topic of neural style transfer. The main highlight of style transfer is the process of applying the style of a reference image to a specific target image such that, in the process, the original content of the target image remains unchanged. This process utilizes the potential of CNNs to understand image features at different granularities, along with transfer learning. Based on our understanding of these concepts and the research work by Gatys et al. and Johnson et al., we provided a step-by-step guide to implementing a system for neural style transfer. We concluded the section by presenting some amazing results from the process of neural style transfer.

Deep learning is opening new doors every day. Its application to different domains and problems is showcasing its potential to solve previously intractable problems. Machine learning is an ever-evolving and very involved field. Through this book, we traveled from the basics of machine learning frameworks and the Python ecosystem to different algorithms and concepts. We then covered multiple use cases showcasing different scenarios and the ways a problem can be solved using the tools from the machine learning toolbox. The universe of machine learning is expanding at breakneck speed; our attempt here was to get you started on the right track on this wonderful journey.
advanced supervised deep learning models dense layer embedding layer lstm-based classification model lstm cell architecture data flow model performance metricslstm most_common(countfunction norm_train_reviews and norm_test_reviews pad_index parametersembedding layer rnn and lstm unitsstructure text sentiment class labels tokenized_train corpus word embeddings generation afinn lexicon - algorithmic trading annotations anomaly detection applied computer sciencesee practical computer science area under curve (auc) array elements advanced indexing basic indexing and slicing - boolean indexing integer array indexing linear algebra - operations - artificial intelligence (aidefined major facets nlp objectives text analytics artificial neural networks (anns) - artistic style transfercnns in action - background - custom optimizer - loss functions content loss overall loss function - style loss total variation loss preprocessing - association rule-mining method autoencoders auto feature generation auto regressive integrated moving average (arimamodel - axis controls adjust axis log scale tick range - -axis backpropagation algorithm backward elimination bagging methods bag of -grams model bag of words model numeric vector visual - bar plots - batch learning methods bayes theorem bias and variance generalization error overfitting tradeoff - underfitting bike sharing dataset eda correlations - distribution and trends (cdipanjan sarkarraghav bali and tushar sharma sarkar et al practical machine learning with python
bike sharing dataset (cont outliers - preprocessing - linear regression - modeling (see modelingbike sharing datasetproblem statement regression analysis assumptions cross validation normality test - residual analysis -squared types binary classification model bin-counting scheme bing liu' lexicon binning image intensity distribution boosting methods botsee web crawler box-cox transform - box plots - building machine intelligence calinski-harabaz index - candidate model canny edge detector categorical data encoding features bin-counting scheme dummy coding scheme effect coding scheme one hot encoding scheme feature hashing scheme nominal - ordinal - categorical variables channel pixels - chi-square test - classification models binary classification confusion matrix accuracy score precision - structure - test dataset handwritten digit - multi-class classification output formats clustering methods clustering models calinski-harabaz index - completeness density based distance between data points evaluation hierarchical - homogeneity partition based - sc -measure clustering strategy data cleaning cluster analysis - customerid field data preprocessing - frequency and monetary value - -means clustering - recency - separate transactionsgeographical region rfm modelcustomer value clustering vs customer segmentation comma separated values (csvdataframe - dict pandas reader function sample file computationtheory of computer science (csalgorithms code data structures defined practical programming languages theoretical conditional probability confusion matrix accuracy score precision - structure - test dataset content-based recommendation engines convolutional neural networks (cnnsarchitecture artistic style transfer (see artistic style transfercnnscomponents - feature map visualizations - image classification dataset deep learning classifierpretrained models - deep learning classifierscratch - problem statement
two-layer pooling stride visualizing cross industry standard process for data mining (crisp-dmprocess model assessment stage attribute generation building machine intelligence business context and requirements business problem data collection data description data integration data mining lifecycle - data mining problem data preparation data quality analysis data understanding data wrangling deployment eda evaluation phase ml pipelines data preparation data processing and wrangling data retrieval deployment and monitoring feature extraction and engineering feature scaling and selection model evaluation and tuning modeling standard supervised unsupervised model assessment project plan training model tuned models cross sellingassociation rule-mining dependencies eda - - fp growth - - market basket analysis - orange table data structure transaction set cross validation (cvk-fold model building and tuning - single data point curse of dimensionality customer segmentation clustering strategy (see clustering strategyobjectives customer understanding higher revenue latent customer segments optimal product placement target marketing strategies clustering eda custom optimizer darraysee matrix data collection csv file - defined - html - json - sql web scraping (see web scrapingdata description categorical defined numeric text data-driven decisions data mining problem data mungingsee data wrangling data science - datasets data structures data summarization agg(function groupby(function quantity_purchased user_class data visualization defined matplotlib annotations axis controls - figure and subplots - global parameters graph legend - plot formatting - pylab pyplot pandas bar plots - box plots - histograms - line charts - pie charts - scatter plots - data wrangling defined downstream steps
data wrangling (cont product purchase transactions dataset attributes/features/properties - categorical data - duplicates filtering data missing values - normalization process string data transformations - typecasting date-based features decision tree decision tree based regression algorithms hyperparameters - node splitting - stopping criteria testing training - decision tree regressor - deep learning ann architectures autoencoder backpropagation characteristics cnn - comparing learning pipelines comparison of machine learning and distributed representational hierarchical layered representation keras lstms mlp model building process neural network - power - representational learning rnn tensorflow packages theano packages - deep neural network (dnn) density based clustering models deployment model custom development persistence model service descriptive statistics distributed machine learning community (dmlc) document object model (domparser dummy coding scheme edasee exploratory data analysis (edaeffect coding scheme efficient market hypothesis eigen decomposition - embedded methods ensemble model euler' number exploratory data analysis (eda) - correlations - data enhancing - distribution and trends loading and trimming data - outliers - preprocessing - visual analysis popular artist popular songs - user vs songs distribution - extensible markup language (xmlannotated with key components attributes content dom parser element elementtree parser - sax parser tag false positive rate (fpr) feature engineering business and domain business problem data anddatasets data types definitions evaluating model feature model accuracy models predictive models raw data representation of data unseen data feature extraction methods feature hashing scheme feature scaling jupyter notebook min-max - online videos
robust standardized feature selection methods figure module axes objects cosine curve - sine curve - filter methods fine-tuning pre-trained models fixed-width binning - forecasting gold price dataset modeling acf and pacf plots arima( , , ) arima_grid_search_cv() forecast(method mean and standard deviation plot - plotgold prices stationarity statsmodel library test statistic problem statement traditional approaches arima model - differencing stationarity unit root tests generalized linear models gini impurity/index global parameters global vectors for word representation (glove) gradient boosting machines (gbmmodel graph legend - grayscale image pixels grid search breast cancer dataset svm gridsearchcv(method hacking skills hierarchical clustering model - histogram of oriented gradients (hog) histogramsprice distribution - hybrid-recommendation engines hyperparameters decision tree definition grid search breast cancer dataset svm randomized search - hyper text markup language (html) - hypothesis models image data binning image intensity distribution canny edge detector channel pixels - exif data grayscale image pixels hog image aggregation statistics raw image - surf two-layer cnn feature map visualizations - pooling stride visualizing vbow - inferential statistics information gain instance based learning methods internet movie database (imdb) interpretation model decision trees logistic regression model data point with no cancer - features malignant cancer worst area feature - skater - inter-quartile range (iqr) java script object notation (jsondict nested attributes object structure pandas - sample file -fold cross validation - -means algorithm - -means clustering model knowledge discovery of databases (kdd)
preface getting started installing enthought canopy giving the installation test run if you occasionally get problems opening your ipnyb files using and understanding ipython (jupyternotebooks python basics part understanding python code importing modules data structures experimenting with lists pre colon post colon negative syntax adding list to list the append function complex data structures dereferencing single element the sort function reverse sort tuples dereferencing an element list of tuples dictionaries iterating through entries python basics part functions in python lambda functions functional programming understanding boolean expressions the if statement the if-else loop looping the while loop exploring activity running python scripts more options than just the ipython/jupyter notebook running python scripts in command prompt
summary statistics and probability refresherand python practice types of data numerical data discrete data continuous data categorical data ordinal data meanmedianand mode mean median the factor of outliers mode using meanmedianand mode in python calculating mean using the numpy package visualizing data using matplotlib calculating median using the numpy package analyzing the effect of outliers calculating mode using the scipy package some exercises standard deviation and variance variance measuring variance standard deviation identifying outliers with standard deviation population variance versus sample variance the mathematical explanation analyzing standard deviation and variance on histogram using python to compute standard deviation and variance try it yourself probability density function and probability mass function the probability density function and probability mass functions probability density functions probability mass functions types of data distributions uniform distribution normal or gaussian distribution the exponential probability distribution or power law binomial probability mass function poisson probability mass function ii
percentiles quartiles computing percentiles in python moments computing moments in python summary matplotlib and advanced probability concepts crash course in matplotlib generating multiple plots on one graph saving graphs as images adjusting the axes adding grid changing line types and colors labeling axes and adding legend fun example generating pie charts generating bar charts generating scatter plots generating histograms generating box-and-whisker plots try it yourself covariance and correlation defining the concepts measuring covariance correlation computing covariance and correlation in python computing correlation the hard way computing correlation the numpy way correlation activity conditional probability conditional probability exercises in python conditional probability assignment my assignment solution bayestheorem summary predictive models linear regression the ordinary least squares technique iii
the co-efficient of determination or -squared computing -squared interpreting -squared computing linear regression and -squared using python activity for linear regression polynomial regression implementing polynomial regression using numpy computing the -squared error activity for polynomial regression multivariate regression and predicting car prices multivariate regression using python activity for multivariate regression multi-level models summary machine learning with python machine learning and train/test unsupervised learning supervised learning evaluating supervised learning -fold cross validation using train/test to prevent overfitting of polynomial regression activity bayesian methods concepts implementing spam classifier with naive bayes activity -means clustering limitations to -means clustering clustering people based on income and age activity measuring entropy decision trees concepts decision tree example walking through decision tree random forests technique decision trees predicting hiring decisions using python ensemble learning using random forest activity ensemble learning support vector machine overview iv
activity summary recommender systems what are recommender systemsuser-based collaborative filtering limitations of user-based collaborative filtering item-based collaborative filtering understanding item-based collaborative filtering how item-based collaborative filtering workscollaborative filtering using python finding movie similarities understanding the code the corrwith function improving the results of movie similarities making movie recommendations to people understanding movie recommendations with an example using the groupby command to combine rows removing entries with the drop command improving the recommendation results summary more data mining and machine learning techniques -nearest neighbors concepts using knn to predict rating for movie activity dimensionality reduction and principal component analysis dimensionality reduction principal component analysis pca example with the iris dataset activity data warehousing overview etl versus elt reinforcement learning -learning the exploration problem the simple approach the better way fancy words markov decision process dynamic programming [
dealing with real-world data bias/variance trade-off -fold cross-validation to avoid overfitting example of -fold cross-validation using scikit-learn data cleaning and normalisation cleaning web log data applying regular expression on the web log modification one filtering the request field modification two filtering post requests modification three checking the user agents modification four applying website-specific filters activity for web log data normalizing numerical data detecting outliers dealing with outliers activity for outliers summary apache spark machine learning on big data installing spark installing spark on windows installing spark on other operating systems installing the java development kit installing spark spark introduction it' scalable it' fast it' young it' not difficult components of spark python versus scala for spark spark and resilient distributed datasets (rddthe sparkcontext object creating rdds filtering the activity of spiders/robots creating an rdd using python list loading an rdd from text file more ways to create rdds rdd operations vi
using map(actions introducing mllib some mllib capabilities special mllib data types the vector data type labeledpoint data type rating data type decision trees in spark with mllib exploring decision trees code creating the sparkcontext importing and cleaning our data creating test candidate and building our decision tree running the script -means clustering in spark within set sum of squared errors (wssserunning the code tf-idf tf-idf in practice using tfidf searching wikipedia with spark mllib import statements creating the initial rdd creating and transforming hashingtf object computing the tf-idf score using the wikipedia search engine algorithm running the algorithm using the spark dataframe api for mllib how spark mllib works implementing linear regression summary testing and experimental design / testing concepts / tests measuring conversion for / testing how to attribute conversions variance is your enemy -test and -value the -statistic or -test the -value vii
running / test on some experimental data when there' no real difference between the two groups does the sample size make differencesample size increased to six-digits sample size increased seven-digits / testing determining how long to run an experiment for / test gotchas novelty effects seasonal effects selection bias auditing selection bias issues data pollution attribution errors summary index viii
being data scientist in the tech industry is one of the most rewarding careers on the planet today went and studied actual job descriptions for data scientist roles at tech companies and distilled those requirements down into the topics that you'll see in this course hands-on data science and python machine learning is really comprehensive we'll start with crash course on python and do review of some basic statistics and probabilitybut then we're going to dive right into over topics in data mining and machine learning that includes things such as bayestheoremclusteringdecision treesregression analysisexperimental designwe'll look at them all some of these topics are really fun we're going to develop an actual movie recommendation system using actual user movie rating data we're going to create search engine that actually works for wikipedia data we're going to build spam classifier that can correctly classify spam and nonspam emails in your email accountand we also have whole section on scaling this work up to cluster that runs on big data using apache spark if you're software developer or programmer looking to transition into career in data sciencethis course will teach you the hottest skills without all the mathematical notation and pretense that comes along with these topics we're just going to explain these concepts and show you some python code that actually works that you can dive in and mess around with to make those concepts sink homeand if you're working as data analyst in the finance industrythis course can also teach you to make the transition into the tech industry all you need is some prior experience in programming or scripting and you should be good to go the general format of this book is 'll start with each conceptexplaining it in bunch of sections and graphical examples will introduce you to some of the notations and fancy terminologies that data scientists like to use so you can talk the same languagebut the concepts themselves are generally pretty simple after thati'll throw you into some actual python code that actually works that we can run and mess around withand that will show you how to actually apply these ideas to actual data these are going to be presented as ipython notebook filesand that' format where can intermix code and notes surrounding the code that explain what' going on in the concepts you can take these notebook files with you after going through this book and use that as handy-quick reference later on in your careerand at the end of each concepti'll encourage you to actually dive into that python codemake some modificationsmess around with itand just gain more familiarity by getting hands-on and actually making some modificationsand seeing the effects they have
Who this book is for

If you are a budding data scientist or a data analyst who wants to analyze and gain actionable insights from data using Python, this book is for you. Programmers with some experience in Python who want to enter the lucrative world of data science will also find this book to be very useful.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We can measure that using the *_score function from sklearn.metrics."

A block of code is set as follows:

import numpy as np
import pandas as pd
from sklearn import tree

input_file = ":/spark/DataScience/PastHires.csv"  # drive-letter prefix not recoverable from the extracted text
df = pd.read_csv(input_file, header=0)

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

import numpy as np
import pandas as pd
from sklearn import tree

input_file = ":/spark/DataScience/PastHires.csv"  # drive-letter prefix not recoverable from the extracted text
df = pd.read_csv(input_file, header=0)

Any command-line input or output is written as follows:

spark-submit SparkKMeans.py
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "On Windows, you'll need to open up the Start menu and go to Windows System | Control Panel to open up Control Panel."

Warnings or important notes appear like this.

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book, what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply email feedback@packtpub.com, and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:

1. Log in or register to our website using your email address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Data-Science-and-Python-Machine-Learning. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/HandsOnDataScienceandPythonMachineLearning_ColorImages.pdf.
Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at copyright@packtpub.com with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at questions@packtpub.com, and we will do our best to address the problem.
Getting Started

Since there's going to be code associated with this book, and sample data that you need to get as well, let me first show you where to get that and then we'll be good to go. We need to get some setup out of the way first. First things first, let's get the code and the data that you need for this book so you can play along and actually have some code to mess around with. The easiest way to do that is by going right into this chapter.

In this chapter, we will first install and get ready a working Python environment:
Installing Enthought Canopy
Installing Python libraries
How to work with the IPython/Jupyter Notebook
How to use, read, and run the code files for this book

Then we'll dive into a crash course on understanding Python code:
Python basics - part 1: understanding Python code, importing modules, experimenting with lists, tuples
Python basics - part 2: running Python scripts

You'll have everything you need for an amazing journey into data science with Python, once we've set up your environment and familiarized you with Python in this chapter.
Installing Enthought Canopy

Let's dive right in and get what you need installed to actually develop Python code for data science on your desktop. I'm going to walk you through installing a package called Enthought Canopy, which has both the development environment and all the Python packages you need pre-installed. It makes life really easy, but if you already know Python you might have an existing Python environment on your PC, and if you want to keep using it, maybe you can. The most important things are that your Python environment is recent enough, that it supports Jupyter Notebooks (because that's what we're going to use in this course), and that you have the key packages you need for this book installed on your environment. I'll explain exactly how to achieve a full installation in a few simple steps; it's going to be very easy.

Let's first overview those key packages, most of which Canopy will be installing for us automatically. Canopy will install Python for us, and some further packages we need, including scikit-learn, xlrd, and statsmodels. We'll need to manually use the pip command to install a package called pydotplus, and that will be it. It's very easy with Canopy. Once the following installation steps are complete, we'll have everything we need to actually get up and running, and so we'll open up a little sample file and do some data science for real.

Now let's get you set up with everything you need to get started as quickly as possible. The first thing you will need is a development environment, called an IDE, for Python code. What we're going to use for this book is Enthought Canopy. It's a scientific computing environment, and it's going to work well with this book.
To get Canopy installed, just go to www.enthought.com and click on DOWNLOADS: Canopy:
enthought canopy is freefor the canopy express edition which is what you want for this book you must then select your operating system and architecture for methat' windows -bitbut you'll want to click on corresponding download button for your operating system and with the python option we don' have to give them any personal information at this step there' pretty standard windows installerso just let that download[
after that' downloaded we go ahead and open up the canopy installerand run ityou might want to read the license before you agree to itthat' up to youand then just wait for the installation to complete once you hit the finish button at the end of the install processallow it to launch canopy automatically you'll see that canopy then sets up the python environment by itselfwhich is greatbut this will take minute or two once the installer is done setting up your python environmentyou should get screen that looks like the one below it says welcome to canopy and bunch of big friendly buttons
The beautiful thing is that pretty much everything you need for this book comes pre-installed with Enthought Canopy; that's why I recommend using it. There is just one last thing we need to set up, so go ahead and click the Editor button there on the Canopy Welcome screen. You'll then see the editor screen come up, and if you click down in the window at the bottom, I want you to just type in:

pip install pydotplus

Here's how that's going to look on your screen as you type the above line in at the bottom of the Canopy editor window; don't forget to press the Return button, of course. Once you hit the Return button, this will install that one extra module that we need for later on in the book, when we get to talking about decision trees and rendering decision trees. Once it has finished installing pydotplus, it should come back and say it's successfully installed and, voila, you have everything you need now to get started! The installation is done at this point, but let's just take a few more steps to confirm our installation is running nicely.

Giving the installation a test run

Let's now give your installation a test run. The first thing to do is actually to entirely close the Canopy window. This is because we're not actually going to be editing and using our code within this Canopy editor; instead we're going to be using something called an IPython Notebook, which is also now known as the Jupyter Notebook.
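If you'd like an extra sanity check before moving on, you can paste the following into the Canopy editor or a notebook cell to confirm that the key libraries import cleanly. This snippet is not part of the book; the package list simply mirrors the ones named earlier in this chapter.

# quick import check for the packages this book relies on
import sklearn
import xlrd
import statsmodels
import pydotplus

print('scikit-learn version:', sklearn.__version__)
print('All key packages imported successfully.')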
Let me show you how that works. If you now open a window in your operating system to view the accompanying book files that you downloaded, as described in the Preface of this book, it should look something like this, with the set of .ipynb code files you downloaded for this book:
Now go down to the Outliers file in the list; that's the Outliers.ipynb file. Double-click it, and what should happen is that it's going to start up Canopy first and then it's going to kick off your web browser. This is because IPython/Jupyter Notebooks actually live within your web browser. There can be a small pause at first, and it can be a little bit confusing the first time, but you'll soon get used to the idea. You should soon see Canopy come up, and for me my default web browser, Chrome, comes up. You should see the following Jupyter Notebook page, since we double-clicked on the Outliers.ipynb file. If you see this screen, it means that everything's working great in your installation and you're all set for the journey across the rest of this book.
If you occasionally get problems opening your .ipynb files

Just occasionally, I've noticed that things can go a little bit wrong when you double-click on a .ipynb file. Don't panic! Just sometimes, Canopy can get a little bit flaky, and you might see a screen that is looking for some password or token, or you might occasionally see a screen that says it can't connect at all. Don't panic if either of those things happen to you; they are just random quirks. Sometimes things just don't start up in the right order or they don't start up in time on your PC, and that's okay. All you have to do is go back and try to open that file a second time. Sometimes it takes two or three tries to actually get it loaded up properly, but if you do it a couple of times it should pop up eventually, and a Jupyter Notebook screen like the one we saw previously about dealing with outliers is what you should see.

Using and understanding IPython (Jupyter) Notebooks

Congratulations on your installation! Let's now explore using Jupyter Notebooks, which is also known as IPython Notebook. These days, the more modern name is the Jupyter Notebook, but a lot of people still call it an IPython Notebook, and I consider the names interchangeable for working developers as a result. I do also find that the name IPython Notebooks helps me remember the notebook file name suffix, which is .ipynb, as you'll get to know very well in this book! Okay, so now let's take it right from the top again with our first exploration of the IPython/Jupyter Notebook. If you haven't yet done so, please navigate to the DataScience folder where we have downloaded all the materials for this book, and if you didn't do so during the preceding installation section, please now double-click and open up the Outliers.ipynb file.
Now, what's going to happen when we double-click on this IPython .ipynb file is that first of all it's going to spark up Canopy, if it's not sparked up already, and then it's going to launch a web browser. This is how the full Outliers notebook webpage looks within my browser:
As you can see here, notebooks are structured in such a way that I can intersperse my little notes and commentary about what you're seeing within the actual code itself, and you can actually run this code within your web browser. So, it's a very handy format for me to give you a sort of little reference that you can use later on in life to go and remind yourself how the algorithms we're going to talk about work, and actually experiment with them and play with them yourself.

The way that the IPython/Jupyter Notebook files work is that they actually run from within your browser, like a webpage, but they're backed by the Python engine that you installed. So you should be seeing a screen similar to the one shown in the previous screenshot. You'll notice, as you scroll down the notebook in your browser, that there are code blocks. They're easy to spot because they contain our actual code. Please find the code box for this code in the Outliers notebook, quite near the top:

%matplotlib inline
import numpy as np

# the numeric arguments below were lost in extraction; representative values
# are shown (a normal income distribution plus one billionaire-sized outlier)
incomes = np.random.normal(27000, 15000, 10000)
incomes = np.append(incomes, [1000000000])

import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()

Let's take a quick look at this code while we're here. We are setting up a little income distribution in this code. We're simulating the distribution of income in a population of people, and to illustrate the effect that an outlier can have on that distribution, we're simulating Donald Trump entering the mix and messing up the mean value of the income distribution. By the way, I'm not making a political statement; this was all done before Trump became a politician. So you know, full disclosure there.

We can select any code block in the notebook by clicking on it. So if you now click in the code block that contains the code we just looked at above, we can then hit the Run button at the top to run it. Here's the area at the top of the screen where you'll find the Run button:
Hitting the Run button with the code block selected will cause this graph to be regenerated. Similarly, we can click on the next code block a little further down; you'll spot the one which has the following single line of code:

incomes.mean()

If you select the code block containing this line, and hit the Run button to run the code, you'll see the output below it, which ends up being a very large value because of the effect of that outlier, something like this:
Let's keep going and have some fun. In the next code block down, you'll see the following code, which tries to detect outliers like Donald Trump and remove them from the dataset:

def reject_outliers(data):
    u = np.median(data)
    s = np.std(data)
    # keep only the values that lie reasonably close to the median;
    # the exact filter expression and histogram bin count were lost in
    # extraction, so conventional choices are shown here
    filtered = [e for e in data if (u - 2 * s < e < u + 2 * s)]
    return filtered

filtered = reject_outliers(incomes)

plt.hist(filtered, 50)
plt.show()

So select the corresponding code block in the notebook, and press the Run button again. When you do that, you'll see this graph instead:

Now we see a much better histogram that represents the more typical American, now that we've taken out our outlier that was messing things up.

So, at this point, you have everything you need to get started in this course. We have all the data you need, all the scripts, and the development environment for Python and Python notebooks. So, let's rock and roll. Up next we're going to do a little crash course on Python itself, and even if you're familiar with Python, it might be a good little refresher, so you might want to go through it regardless. Let's dive in and learn Python!
python basics part if you already know pythonyou can probably skip the next two sections howeverif you need refresheror if you haven' done python beforeyou'll want to go through these there are few quirky things about the python scripting language that you need to knowso let' dive in and just jump into the pool and learn some python by writing some actual code like said beforein the requirements for this bookyou should have some sort of programming background to be successful in this book you've coded in some sort of languageeven if it' scripting languagejavascripti don' care whether it is ++javaor somethingbut if you're new to pythoni' going to give you little bit of crash course here ' just going to dive right in and go right into some examples in this section there are few quirks about python that are little bit different than other languages you might have seenso just want to walk through what' different about python from other scripting languages you may have worked withand the best way to do that is by looking at some real examples let' dive right in and look at some python code