Let's add a ratings.head() command and then run those cells. What we end up with is something like the following table, and that was pretty quick: a new DataFrame that contains the user_id and rating for each movie that a user rated, together with both the movie_id and the title that we can actually read. The way to read each row is: a given user_id rated the movie Toy Story (1995) and gave it some number of stars, and so on and so forth. If we kept looking at more and more of this DataFrame, we'd see different ratings for different movies as we go through it.

Now the real magic of pandas comes in. What we really want is to look at relationships between movies based on all the users that watched each pair of movies, so we need, at the end, a matrix of every movie and every user, and all the ratings that every user gave to every movie. The pivot_table command in pandas can do that for us; it can construct a new table from a given DataFrame pretty much any way that you want. For this, we can use the following code:

movieRatings = ratings.pivot_table(index=['user_id'], columns=['title'], values='rating')
movieRatings.head()

What we're saying with this code is: take our ratings DataFrame and create a new DataFrame called movieRatings, where the index is the user IDs (so we'll have a row for every user_id), every column is a movie title (so we'll have a column for every title we encounter in that DataFrame), and each cell contains the rating value, if it exists. So, let's go ahead and run it.
And we end up with a new DataFrame that looks like the following table. It's kind of amazing how pandas just put that all together for us. Now, you'll see some NaN values, which stands for "not a number", and it's just how pandas indicates a missing value. The way to interpret this is: where a cell is NaN, that user_id did not watch that movie; where a user did watch, say, 101 Dalmatians (1996), there's a star rating in the cell. Another user might have watched and rated 12 Angry Men (1957) but not 2 Days in the Valley, for example.

So what we end up with here is a sparse matrix, basically, that contains every user and every movie, and at every intersection where a user rated a movie there's a rating value. You can see now that we can very easily extract a vector of every movie that a given user watched, and we can also extract a vector of every user that rated a given movie, which is what we want. That's useful for both user-based and item-based collaborative filtering, right? If I wanted to find relationships between users, I could look at correlations between these user rows, but if I want to find correlations between movies, for item-based collaborative filtering, I can look at correlations between columns based on user behavior. This is where flipping things on their head for user-based versus item-based similarities comes into play.
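If you want to poke at that matrix directly before moving on, here is a minimal sketch, not from the book, assuming the movieRatings pivot table built above; the particular user_id and movie titles are just illustrative values from this dataset:

# A row is one user's rating vector across every movie;
# a column is one movie's rating vector across every user.
userVector = movieRatings.loc[1]                       # ratings given by user_id 1 (illustrative)
movieVector = movieRatings['Star Wars (1977)']         # ratings received by one movie
print(userVector.dropna().head())
print(movieVector.dropna().head())
# Item-based similarity ultimately boils down to correlating two of these columns:
print(movieVector.corr(movieRatings['Return of the Jedi (1983)']))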
Now, we're going with item-based collaborative filtering, so we want to extract columns. To do this, let's run the following code:

starWarsRatings = movieRatings['Star Wars (1977)']
starWarsRatings.head()

With the help of that, let's go ahead and extract all the users who rated Star Wars (1977). We can see most people have, in fact, watched and rated Star Wars, and everyone liked it, at least in this little sample that we took from the head of the DataFrame. So we end up with a resulting set of user IDs and their ratings for Star Wars. Some users did not rate Star Wars, so we have NaN values indicating missing data there, but that's okay: we want to make sure that we preserve those missing values so we can directly compare columns from different movies.

The corrwith function

So, how do we do that? Well, pandas keeps making it easy for us, and has a corrwith function that you can see in the following code that we can use:

similarMovies = movieRatings.corrwith(starWarsRatings)
similarMovies = similarMovies.dropna()
df = pd.DataFrame(similarMovies)
df.head(10)
That code will go ahead and correlate the given column with every other column in the DataFrame, compute the correlation scores, and give that back to us. So what we're doing here is using corrwith on the entire movieRatings DataFrame (that entire matrix of user movie ratings), correlating it with just the starWarsRatings column, and then dropping all of the missing results with dropna. That just leaves us with items that had a correlation, where there was more than one person that viewed it, and we create a new DataFrame based on those results and then display the top results. So again, just to recap: we build the correlation score between Star Wars and every other movie, drop all the NaN values so that we only have movie similarities that actually exist (where more than one person rated it), construct a new DataFrame from the results, and look at the top results. And here we are with the results shown in the following screenshot:
We ended up with this result of correlation scores between Star Wars and each individual movie, and we can see, for example, a surprisingly high correlation score with the movie 'Til There Was You (1997), a negative correlation with another movie, and a very weak correlation with 101 Dalmatians (1996). Now, all we should have to do is sort this by similarity score, and we should have the top movie similarities for Star Wars, right? Let's go ahead and do that:

similarMovies.sort_values(ascending=False)

Just call sort_values on the result (again, pandas makes it really easy), and we can say ascending=False to actually get it sorted in reverse order by correlation score. So, let's do that.

Okay, so Star Wars (1977) came out pretty close to the top, because it is similar to itself, but what's all this other stuff? What the heck? We can see in the preceding output some movies such as Full Speed, Man of the Year (1995), and The Outlaw. These are all, you know, fairly obscure movies that most of us have never even heard of, and yet they have perfect correlations with Star Wars. That's kind of weird! So, obviously we're doing something wrong here. What could it be?
Well, it turns out there's a perfectly reasonable explanation, and this is a good lesson in why you always need to examine your results when you're done with any sort of data science task. Question the results, because often there's something you missed: there might be something you need to clean in your data, there might be something you did wrong. But you should also always look skeptically at your results and not just take them on faith, okay? If you do, you're going to get into trouble, because if I were to actually present these as recommendations to people who liked Star Wars, I would get fired. Don't get fired: pay attention to your results! So, let's dive into what went wrong in our next section.

Improving the results of movie similarities

Let's figure out what went wrong with our movie similarities there. We went through all this exciting work to compute correlation scores between movies based on their user rating vectors, and the results we got kind of sucked. So, just to remind you: we looked for movies that are similar to Star Wars using that technique, and we ended up with a bunch of weird recommendations at the top that had a perfect correlation. And most of them were very obscure movies. So, what do you think might be going on there? Well, one thing that might make sense is, let's say we have a lot of people watch Star Wars and some other obscure film. We'd end up with a good correlation between these two movies because they're tied together by Star Wars, but at the end of the day, do we really want to base our recommendations on the behavior of one or two people that watch some obscure movie? Probably not! I mean, yes, the two people in the world, or whatever it is, that watched the movie Full Speed and both liked it in addition to Star Wars, maybe that is a good recommendation for them, but it's probably not a good recommendation for the rest of the world. We need to have some sort of confidence level in our similarities by enforcing a minimum boundary of how many people watched a given movie. We can't make a judgment that a given movie is good just based on the behavior of one or two people.

So, let's try to put that insight into action using the following code:

import numpy as np
movieStats = ratings.groupby('title').agg({'rating': [np.size, np.mean]})
movieStats.head()

What we're going to do is try to identify the movies that weren't actually rated by many people; we'll just throw them out and see what we get. To do that, we take our original ratings DataFrame and say groupby('title'); again, pandas has all sorts of magic in it. This will basically construct a new DataFrame that aggregates together all the rows for a given title into one row.
We can say that we want to aggregate specifically on the rating column, and we want to show both the size (the number of ratings for each movie) and the mean (the average rating for that movie). When we do that, we end up with something like the following. This is telling us, for example, that 101 Dalmatians (1996) was rated by a certain number of people, and their average rating wasn't that great a score, really. If we just eyeball this data, we can say: okay, well, movies that I consider obscure had only a handful of ratings, but 101 Dalmatians, I've heard of that; 12 Angry Men, I've heard of that. It seems like there's a natural cutoff value at around 100 ratings, where maybe that's the magic value where things start to make sense.

Let's go ahead and get rid of movies rated by fewer than 100 people, and yes, you know, I'm kind of doing this intuitively at this point. As we'll talk about later, there are more principled ways of doing this, where you could actually experiment and do train/test experiments on different threshold values to find the one that actually performs the best. But initially, let's just use our common sense and filter out movies that were rated by fewer than 100 people. Again, pandas makes that really easy to do. Let's figure it out with the following example:

popularMovies = movieStats['rating']['size'] >= 100
movieStats[popularMovies].sort_values([('rating', 'mean')], ascending=False)[:15]
We can just construct popularMovies by looking at movieStats and only taking rows where the rating size is greater than or equal to 100. I'm then going to sort that by mean rating, just for fun, to see the top-rated, widely watched movies.

What we have here is a list of movies that were rated by more than 100 people, sorted by their average rating score, and this in itself is a recommender system. These are highly rated, popular movies. A Close Shave (1995), apparently, was a really good movie; a lot of people watched it and they really liked it. So again, this is a very old dataset, from the late 1990s, so even though you might not be familiar with the film A Close Shave, it might be worth going back and rediscovering it; add it to your Netflix! Schindler's List: not a big surprise there, that comes up at the top of most top-movies lists. The Wrong Trousers, another example of an obscure film that apparently was really good and was also pretty popular. So, some interesting discoveries there already, just by doing that.
Things look a little bit better now, so let's go ahead and make our new DataFrame of Star Wars recommendations, movies similar to Star Wars, where we only base it on movies that appear in this new DataFrame. We're going to use the join operation to join our original similarMovies DataFrame to this new DataFrame of only movies that have more than 100 ratings, okay?

df = movieStats[popularMovies].join(pd.DataFrame(similarMovies, columns=['similarity']))
df.head()

In this code, we create a new DataFrame based on similarMovies, where we extract the similarity column, join that with our movieStats DataFrame (which is our popularMovies DataFrame), and we look at the combined results. And there we go with that output: now we have, restricted only to movies that are rated by more than 100 people, the similarity score to Star Wars. So now all we need to do is sort that, using the following code:

df.sort_values(['similarity'], ascending=False)[:15]
Here, we're reverse sorting it, and we're just going to take a look at the first 15 results. If you run that now, you should see the following. This is starting to look a little bit better: Star Wars (1977) comes out on top because it's similar to itself, The Empire Strikes Back is number 2, Return of the Jedi is number 3, Raiders of the Lost Ark is number 4. You know, it's still not perfect, but these make a lot more sense, right? You would expect the three Star Wars films from the original trilogy to be similar to each other (this data goes back to before the next three films), and Raiders of the Lost Ark is also a very similar movie to Star Wars in style, and it comes out as number 4. So I'm starting to feel a little bit better about these results. There's still room for improvement, but hey, we got some results that make sense, whoo-hoo!
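Before moving on, here is a hedged sketch (not the book's code) of one way you could compare a few candidate minimum-rating cutoffs side by side; it assumes the movieStats and similarMovies objects built above and sidesteps the multi-level join by filtering on the index instead:

sims = pd.DataFrame(similarMovies, columns=['similarity'])
for cutoff in (50, 100, 200):
    popular = movieStats['rating']['size'] >= cutoff        # movies with at least `cutoff` ratings
    candidates = sims[sims.index.isin(movieStats[popular].index)]
    print(cutoff)
    print(candidates.sort_values('similarity', ascending=False)[:5])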
Now, ideally, we'd also filter out Star Wars itself; you don't want to be looking at similarities to the movie you started from, but we'll worry about that later. So, if you want to play with this a little bit more: like I said, 100 was a somewhat arbitrary cutoff for the minimum number of ratings. If you do want to experiment with different cutoff values, I encourage you to go back and do so, and see what that does to the results. You can see in the preceding table that the results we really like actually had much more than 100 ratings in common. We end up with Austin Powers: International Man of Mystery coming in there pretty high with relatively few ratings, so maybe 100 isn't high enough. Pinocchio snuck in too, and it's not very similar to Star Wars, so you might want to consider an even higher threshold there and see what it does.

Please keep in mind, too, that this is a very small, limited dataset that we used for experimentation purposes, and it's based on very old data, so you're only going to see older movies. Interpreting these results intuitively might be a little bit challenging as a result, but they're not bad results. Now let's move on and actually do full-blown item-based collaborative filtering where we recommend movies to people using a more complete system. We'll do that next.

Making movie recommendations to people

Okay, let's actually build a full-blown recommender system that can look at all the behavior information of everybody in the system, and what movies they rated, and use that to produce the best recommendation movies for any given user in our dataset. It's kind of amazing, and you'll be surprised how simple it is. Let's go!

Let's begin using the ItemBasedCF.ipynb file and start off by importing the MovieLens dataset that we have. Again, we're using a subset of it that just contains 100,000 ratings for now. But there are larger datasets you can get from grouplens.org, up to millions of ratings, if you're so inclined. Keep in mind, though, when you start to deal with that really big data, you're going to be pushing the limits of what you can do on a single machine and what pandas can handle. Without further ado, here's the first block of code:

import pandas as pd

r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('sundog-consult/packt/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))

m_cols = ['movie_id', 'title']
movies = pd.read_csv('sundog-consult/packt/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2))
ratings = pd.merge(movies, ratings)
ratings.head()

Just like earlier, we're going to import the u.data file that contains all the individual ratings for every user and what movie they rated, and then we're going to tie that together with the movie titles so we don't have to work with numerical movie IDs. Go ahead and hit the run cell button, and we end up with the following DataFrame. The way to read this is, for example: a given user_id rated Toy Story (1995) some number of stars, another user_id rated Toy Story some number of stars, and so on; this will contain every rating, for every user, for every movie. And again, just like earlier, we use the wonderful pivot_table command in pandas to construct a new DataFrame based on that information:

userRatings = ratings.pivot_table(index=['user_id'], columns=['title'], values='rating')
userRatings.head()
Here, each row is a user_id, the columns are made up of all the unique movie titles in my dataset, and each cell contains a rating. What we end up with is the incredibly useful matrix shown in the preceding output, which contains users for every row and movies for every column, so we have basically every user rating for every movie in this matrix. One user_id, for example, has a star rating filled in for 101 Dalmatians (1996), and again, all these NaN values represent missing data; a NaN just indicates that a given user_id did not rate that movie.

Again, it's a very useful matrix to have. If we were doing user-based collaborative filtering, we could compute correlations between each individual user's rating vector to find similar users. Since we're doing item-based collaborative filtering, we're more interested in relationships between the columns. So, for example, we want a correlation score between any two columns, which will give us a correlation score for a given movie pair. How do we do that? It turns out that pandas makes that incredibly easy to do as well. It has a built-in corr function that will actually compute the correlation score for every column pair found in the entire matrix; it's almost like they were thinking of us.

corrMatrix = userRatings.corr()
corrMatrix.head()
Let's go ahead and run the preceding code. It's a fairly computationally expensive thing to do, so it will take a moment to actually come back with a result. But there we have it! So, what do we have in the preceding output? We have a new DataFrame where every movie is on the rows and also in the columns. So we can look at the intersection of any two given movies and find their correlation score to each other, based on the userRatings data that we had up here originally. How cool is that? For example, the movie 101 Dalmatians (1996) is perfectly correlated with itself, of course, because it has an identical user rating vector. But if you look at 101 Dalmatians's relationship to the movie 12 Angry Men (1957), it's a much lower correlation score, because those movies are rather dissimilar. Makes sense, right?

I have this wonderful matrix now that will give me the similarity score of any two movies to each other. It's kind of amazing, and very useful for what we're going to be doing. Now, just like earlier, we have to deal with spurious results, so I don't want to be looking at relationships that are based on a small amount of behavior information. It turns out that the pandas corr function actually has a few parameters you can give it. One is the actual correlation score method that you want to use, so I'm going to say use the pearson correlation:

corrMatrix = userRatings.corr(method='pearson', min_periods=100)
corrMatrix.head()
You'll notice that it also has a min_periods parameter you can give it, and that basically says: I only want you to consider correlation scores that are backed up by at least, in this example, 100 people that rated both movies. Running that will get rid of the spurious relationships that are based on just a handful of people. The following is the matrix that we get after running the code. It's a little bit different to what we did in the item-similarities exercise, where we just threw out any movie that was rated by fewer than 100 people. What we're doing here is throwing out movie similarities where fewer than 100 people rated both of those movies, okay? So, you can see in the preceding matrix that we have a lot more NaN values.

In fact, even movies that are similar to themselves get thrown out; so, for example, the first movie in the matrix was, presumably, watched by fewer than 100 people, so it just gets tossed entirely. The movie 101 Dalmatians (1996), however, survives with a correlation score of 1 to itself, and there are actually no movies in this little sample of the dataset that are different from each other and had 100 people in common who watched both. But there are enough movies that survive to get meaningful results.
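If you're curious how aggressive a given min_periods value is, here is a hedged sketch, not from the book, that counts how many movie pairs keep a correlation score for a few candidate thresholds; it assumes the userRatings pivot table from above and is slow, since it recomputes the full correlation matrix each time:

for mp in (20, 50, 100, 200):
    cm = userRatings.corr(method='pearson', min_periods=mp)   # expensive, done once per threshold
    surviving_pairs = int(cm.notna().sum().sum())             # cells that kept a score
    print(mp, surviving_pairs)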
Understanding movie recommendations with an example

So, what do we do with this data? Well, what we want to do is recommend movies for people. The way we do that is we look at all the ratings for a given person, find movies similar to the stuff that they rated, and those become candidates for recommendations to that person.

Let's start by creating a fake person to create recommendations for. I've actually already added a fake user by hand, ID number 0, to the MovieLens dataset that we're processing. You can see that user with the following code:

myRatings = userRatings.loc[0].dropna()
myRatings

This gives the following output, which kind of represents someone like me, who loved Star Wars and The Empire Strikes Back but hated the movie Gone with the Wind. So this represents someone who really loves Star Wars, but does not like old-style romantic dramas, okay? I gave a 5-star rating to The Empire Strikes Back (1980) and Star Wars (1977), and a 1-star rating to Gone with the Wind (1939). So, I'm going to try to find recommendations for this fictitious user.

How do I do that? Well, let's start by creating a series called simCandidates, and I'm going to go through every movie that I rated:

simCandidates = pd.Series()
for i in range(0, len(myRatings.index)):
    print("Adding sims for " + myRatings.index[i] + "...")
    # Retrieve similar movies to this one that I rated
    sims = corrMatrix[myRatings.index[i]].dropna()
    # Now scale its similarity by how well I rated this movie
    sims = sims.map(lambda x: x * myRatings[i])
    # Add the score to the list of similarity candidates
    simCandidates = simCandidates.append(sims)
# Glance at our results so far:
print("sorting...")
simCandidates.sort_values(inplace=True, ascending=False)
print(simCandidates.head(10))

For i in the range of 0 through the number of ratings that I have in myRatings, I am going to add up similar movies to the ones that I rated. So I take that corrMatrix DataFrame, that magical one that has all of the movie similarities, pull out the column for the movie I rated, drop any missing values, and then scale the resulting correlation scores by how well I rated that movie. The idea here is that I'm going to go through all the similarities for The Empire Strikes Back, for example, and scale them all by 5, because I really liked The Empire Strikes Back. But when I go through and get the similarities for Gone with the Wind, I'm only going to scale those by 1, because I did not like Gone with the Wind. So this gives more strength to movies that are similar to movies that I liked, and less strength to movies that are similar to movies that I did not like, okay?

So, I just go through and build up this list of similarity candidates (recommendation candidates, if you will), sort the results, and print them out. Let's see what we get. Hey, those don't look too bad, right? Obviously The Empire Strikes Back and Star Wars come out near the top, because I like those movies explicitly; I already watched them and rated them. But bubbling up to the top of the list is Return of the Jedi, which we would expect, and Raiders of the Lost Ark.
Let's refine these results a little bit more. We're seeing that we're getting duplicate values back: if a movie was similar to more than one movie that I rated, it will come back more than once in the results, so we want to combine those together. If I do in fact have the same movie showing up, maybe it should get added up into a combined, stronger recommendation score. Return of the Jedi, for example, was similar to both Star Wars and The Empire Strikes Back. How would we do that?

Using the groupby command to combine rows

We'll go ahead and explore that. We're going to use the groupby command again to group together all of the rows that are for the same movie. Next, we will sum up their correlation scores and look at the results:

simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace=True, ascending=False)
simCandidates.head(10)

The following is the result. Hey, this is looking really good! Return of the Jedi comes out way on top, as it should, with the highest score, Raiders of the Lost Ark is a close second, and then we start to get to Indiana Jones and the Last Crusade and some more movies: The Bridge on the River Kwai, Back to the Future, The Sting. These are all movies that I would actually enjoy watching! You know, I actually do like old-school Disney movies too, so Cinderella isn't as crazy as it might seem. The last thing we need to do is filter out the movies that I've already rated, because it doesn't make sense to recommend movies you've already seen.
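As a tiny aside on what that groupby/sum step is doing, here is a toy illustration, not from the book, with made-up partial scores for one duplicated title:

import pandas as pd

s = pd.Series([2.3, 1.9], index=['Return of the Jedi (1983)', 'Return of the Jedi (1983)'])
print(s.groupby(s.index).sum())   # the two partial scores collapse into one combined total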
Removing entries with the drop command

So, I can quickly drop any rows that happen to be in my original ratings series using the following code:

filteredSims = simCandidates.drop(myRatings.index)
filteredSims.head(10)

Running that will let me see the final top results, and there we have it: Return of the Jedi, Raiders of the Lost Ark, Indiana Jones and the Last Crusade, all top results for my fictitious user, and they all make sense. I'm seeing a few family-friendly films, you know, Cinderella, The Wizard of Oz, Dumbo, creeping in, probably based on the presence of Gone with the Wind in there; even though it was weighted downward, it's still in there, and still being counted. And there we have our results.

So there you have it, pretty cool! We have actually generated recommendations for a given user, and we could do that for any user in our entire DataFrame. So, go ahead and play with that if you want to. I also want to talk about how you can get your hands dirty a little bit more and play with these results, to try to improve upon them. There's a bit of an art to this; you need to keep iterating and trying different ideas and different techniques until you get better and better results, and you can do this pretty much forever. I mean, I made a whole career out of it! So, I don't expect you to spend years trying to refine this like I did, but there are some simple things you can do, so let's talk about them.
Improving the recommendation results

As an exercise, I want to challenge you to go and make those recommendations even better. So, let's talk about some ideas I have, and maybe you'll have some of your own too, that you can actually try out and experiment with; get your hands dirty and try to make better movie recommendations.

Okay, there's a lot of room for improvement still in these recommendation results. There are a lot of decisions we made about how to weigh different recommendation results based on your rating of the item that it came from, or what threshold you want to pick for the minimum number of people that rated two given movies. So, there are a lot of things you can tweak, a lot of different algorithms you can try, and you can have a lot of fun trying to make better movie recommendations out of the system. So, if you're feeling up to it, I'm challenging you to go and do just that! Here are some ideas on how you might actually try to improve upon the results.

First, you can just go ahead and play with the ItemBasedCF.ipynb file and tinker with it. For example, we saw that the corr method actually had some parameters for the correlation computation; we used Pearson in our example, but there are other ones you can look up and try out, and see what they do to your results. We used a minimum period value of 100; maybe that's too high, maybe it's too low, we just kind of picked it arbitrarily. What happens if you play with that value? If you were to lower it, for example, I would expect you to see some new movies that maybe you've never heard of, but which might still be a good recommendation for that person. Or, if you were to raise it higher, you would see, you know, nothing but blockbusters.

Sometimes you have to think about what result you want out of a recommender system. Is there a good balance to be had between showing people movies that they've heard of and movies that they haven't heard of? How important is discovery of new movies to these people, versus having confidence in the recommender system by seeing a lot of movies that they have heard of? So again, there's sort of an art to that.

We can also improve upon the fact that we saw a lot of movies in the results that were similar to Gone with the Wind, even though I didn't like Gone with the Wind. We weighted those results lower than similarities to movies that I enjoyed, but maybe those movies should actually be penalized. If I hated Gone with the Wind that much, maybe similarities to Gone with the Wind, like The Wizard of Oz, should actually be penalized and, you know, lowered in their score instead of raised at all.
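Here is a hedged sketch of that "penalize what I disliked" idea, not the book's code: instead of scaling by the raw 1-5 rating, center the rating around 3 so that ratings below 3 subtract from a candidate's score. It assumes the corrMatrix and myRatings objects from above, and the exact title string is assumed to match this dataset:

sims = corrMatrix['Gone with the Wind (1939)'].dropna()    # title assumed to exist in the matrix
weight = myRatings['Gone with the Wind (1939)'] - 3.0      # a 1-star rating becomes a weight of -2
penalized = sims.map(lambda x: x * weight)
print(penalized.sort_values().head())                      # most heavily penalized candidates first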
That's another simple modification you can make and play around with. There are probably some outliers in our user rating dataset, too: what if I were to throw away people that rated some ridiculous number of movies? Maybe they're skewing everything. You could actually try to identify those users and throw them out, as another idea.

And, if you really want a big project, if you really want to sink your teeth into this stuff, you could actually evaluate the results of this recommender engine by using the techniques of train/test. So, what if, instead of having an arbitrary recommendation score that sums up the correlation scores of each individual movie, I actually scaled that down to a predicted rating for each given movie? If the output of my recommender system were a movie and my predicted rating for that movie, in a train/test system I could actually try to figure out how well I predict movies that the user has in fact watched and rated before, okay? So, I could set aside some of the ratings data and see how well my recommender system is able to predict the user's ratings for those movies. That would be a quantitative and principled way to measure the error of this recommender engine.

But again, there's a little bit more of an art than a science to this. Even though the Netflix Prize actually used that sort of error metric (root-mean-square error is what they used in particular), is that really a measure of a good recommender system? Basically, you're measuring the ability of your recommender system to predict the ratings of movies that a person already watched. But isn't the purpose of a recommender engine to recommend movies that a person hasn't watched, that they might enjoy? Those are two different things. So, unfortunately, it's not very easy to measure the thing you really want to be measuring. Sometimes you do kind of have to go with your gut instinct, and the right way to measure the results of a recommender engine is to measure the results that you're trying to promote through it. Maybe I'm trying to get people to watch more movies, or rate new movies more highly, or buy more stuff. Running actual controlled experiments on a real website would be the right way to optimize for that, as opposed to using train/test. So, you know, I went into a little bit more detail there than I probably should have, but the lesson is: you can't always think about these things in black and white. Sometimes you can't really measure things directly and quantitatively, and you have to use a little bit of common sense; this is an example of that.
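For the train/test idea described above, here is a minimal hedged sketch of the error metric itself, not the book's code; predicted and actual are hypothetical ratings for held-out (user, movie) pairs that you would produce with your own scheme:

import numpy as np

def rmse(predicted, actual):
    # Root-mean-square error between predicted and actual held-out ratings
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

print(rmse([3.5, 4.0, 2.0], [4.0, 4.0, 3.0]))   # toy example values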
Anyway, those are some ideas on how to go back and improve upon the results of this recommender engine that we wrote. So, please feel free to tinker around with it, see if you can improve upon it however you wish, and have some fun with it. This is actually a very interesting part of the book, so I hope you enjoy it!

Summary

So, go give it a try! See if you can improve on our initial results there. There are some simple ideas there to try to make those recommendations better, and some much more complicated ones too. Now, there's no right or wrong answer; I'm not going to ask you to turn in your work, and I'm not going to review your work. You know, you decide to play around with it, get some familiarity with it, experiment, and see what results you get. That's the whole point: just to get you more familiar with using Python for this sort of thing, and more familiar with the concepts behind item-based collaborative filtering.

We've looked at different recommender systems in this chapter; we ruled out a user-based collaborative filtering system and dove straight into an item-based system. We then used various functions from pandas to generate and refine our results, and I hope you've seen the power of pandas here. In the next chapter, we'll take a look at more advanced data mining and machine learning techniques, including k-nearest neighbors. I look forward to explaining those to you and seeing how they can be useful.
More Data Mining and Machine Learning Techniques

In this chapter, we talk about a few more data mining and machine learning techniques. We will talk about a really simple technique called k-nearest neighbors (KNN). We'll then use KNN to predict a rating for a movie. After that, we'll go on to talk about dimensionality reduction and principal component analysis. We'll also look at an example of PCA where we will reduce 4D data to two dimensions while still preserving its variance. We'll then walk through the concept of data warehousing and see the advantages of the newer ELT process over the ETL process. We'll learn the fun concept of reinforcement learning and see the technique used behind the intelligent Pac-Man agent of the Pac-Man game. Lastly, we'll see some fancy terminology used for reinforcement learning.

We'll cover the following topics:
The concept of k-nearest neighbors
Implementation of KNN to predict the rating of a movie
Dimensionality reduction and principal component analysis
Example of PCA with the Iris dataset
Data warehousing and ETL versus ELT
What is reinforcement learning
The working behind the intelligent Pac-Man game
Some fancy words used for reinforcement learning
K-nearest neighbors - concepts

Let's talk about a few data mining and machine learning techniques that employers expect you to know about. We'll start with a really simple one called k-nearest neighbors, KNN for short. You're going to be surprised at just how simple a good supervised machine learning technique can be. Let's take a look!

KNN sounds fancy, but it's actually one of the simplest techniques out there. Let's say you have a scatter plot and you can compute the distance between any two points on that scatter plot. Let's assume that you have a bunch of data that you've already classified, that you can train the system from. If I have a new data point, all I do is look at the k nearest neighbors based on that distance metric and let them all vote on the classification of that new point.

Let's imagine that the following scatter plot is plotting movies. The squares represent science fiction movies, and the triangles represent drama movies. We'll say that this is plotting ratings versus popularity, or anything else you can dream up. Here, we have some sort of distance that we can compute, based on rating and popularity, between any two points on the scatter plot.

Let's say a new point comes in, a new movie that we don't know the genre for. What we could do is set K to 3 and take the 3 nearest neighbors to this point on the scatter plot; they can all then vote on the classification of the new point/movie. You can see that if I take the three nearest neighbors (K=3), I have 2 drama movies and 1 science fiction movie. I would then let them all vote, and we would choose the classification of drama for this new point based on those 3 nearest neighbors. Now, if I were to expand this circle to include 5 nearest neighbors, that is K=5, I get a different answer. In that case, I pick up 3 science fiction and 2 drama movies. If I let them all vote, I would end up with a classification of science fiction for the new movie instead.
Our choice of K can be very important. You want to make sure it's small enough that you don't go too far and start picking up irrelevant neighbors, but it has to be big enough to enclose enough data points to get a meaningful sample. So, often you'll have to use train/test or a similar technique to actually determine what the right value of K is for a given dataset. But, at the end of the day, you have to just start with your intuition and work from there.

That's all there is to it; it's just that simple. So, it is a very simple technique. All you're doing is literally taking the k nearest neighbors on a scatter plot and letting them all vote on a classification. It does qualify as supervised learning because it is using the training data of a set of known points, that is, known classifications, to inform the classification of a new point.

But let's do something a little bit more complicated with it and actually play around with movies, just based on their metadata. Let's see if we can figure out the nearest neighbors of a movie based on just the intrinsic values of those movies, for example, the ratings for it and the genre information for it. In theory, we could recreate something similar to "customers who watched this item also watched" (the above image is a screenshot from Amazon) just using k-nearest neighbors. And I can take it one step further: once I identify the movies that are similar to a given movie based on the k-nearest neighbors algorithm, I can let them all vote on a predicted rating for that movie.

That's what we're going to do in our next example. So you now have the concepts of KNN, k-nearest neighbors. Let's go ahead and apply that to an example of actually finding movies that are similar to each other and using those nearest-neighbor movies to predict the rating for another movie we haven't seen before.
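As a bare-bones illustration of the voting idea just described (everything here is made up for illustration and is not the notebook's code), here is a tiny classifier over a handful of hypothetical (rating, popularity) points; it reproduces the flip described above, where K=3 votes drama and K=5 votes science fiction:

import math
from collections import Counter

# Each point is ((x, y), genre); the coordinates are made-up rating/popularity values.
points = [((1.0, 1.0), 'drama'), ((1.2, 0.9), 'drama'),
          ((3.0, 3.2), 'scifi'), ((3.1, 2.9), 'scifi'), ((2.8, 3.0), 'scifi')]

def classify(new_point, k):
    # Sort the known points by Euclidean distance to the new point,
    # take the k closest, and let them vote on the classification.
    nearest = sorted(points, key=lambda p: math.dist(p[0], new_point))[:k]
    votes = Counter(genre for _, genre in nearest)
    return votes.most_common(1)[0][0]

print(classify((2.0, 2.0), 3))   # drama wins the vote 2 to 1
print(classify((2.0, 2.0), 5))   # scifi wins the vote 3 to 2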
Using KNN to predict a rating for a movie

Alright, we're going to take the simple idea of KNN and apply it to a more complicated problem: predicting the rating of a movie given just its genre and rating information. So, let's dive in and actually try to predict movie ratings based on the KNN algorithm and see where we get. If you want to follow along, go ahead and open up the KNN.ipynb file and you can play along with me.

What we're going to do is define a distance metric between movies just based on their metadata. By metadata I just mean information that is intrinsic to the movie, that is, the information associated with the movie. Specifically, we're going to look at the genre classifications of the movie. Every movie in our MovieLens dataset has additional information on what genres it belongs to. A movie can belong to more than one genre, a genre being something like science fiction, or drama, or comedy, or animation. We will also look at the overall popularity of the movie, given by the number of people who rated it, and we also know the average rating of each movie. I can combine all this information together to create a metric of distance between two movies, just based on rating information and genre information. Let's see what we get.

We'll use pandas again to make life simple, and if you are following along, again make sure to change the path to the MovieLens dataset to wherever you installed it, which will almost certainly not be what is in this Python notebook. Please go ahead and change that if you want to follow along. As before, we're just going to import the actual ratings data file itself, which is u.data, using the read_csv function in pandas. We're going to tell it that it actually has a tab delimiter and not a comma. We're going to import the first 3 columns, which represent the user_id, movie_id, and rating, for every individual movie rating in our dataset:

import pandas as pd

r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv(r'\DataScience\ml-100k\u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
If we go ahead and run that and look at the top of it, we can see that it's working; here's how the output should look. We end up with a DataFrame that has user_id, movie_id, and rating; for example, user_id 0 rated movie_id 50, which I believe is Star Wars, 5 stars, and so on and so forth.

The next thing we have to figure out is aggregate information about the ratings for each movie. We use the groupby function in pandas to group everything by movie_id. We're going to combine together all the ratings for each individual movie, and we're going to output the number of ratings and the average rating score, that is the mean, for each movie:

import numpy as np

movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()

Let's go ahead and do that. It comes back pretty quickly; here's how the output looks.
This gives us another DataFrame that tells us, for example, how many ratings each movie_id had (which is a measure of its popularity, that is, how many people actually watched it and rated it), and the mean review score for each movie. So quite a few people watched the movies near the top of the output, and they gave them reasonably good average reviews.

Now, the raw number of ratings isn't that useful to us. I mean, I don't know whether a given raw count means a movie is popular or not. So, to normalize that, what we're going to do is measure each count against the maximum and minimum number of ratings across all movies. We can do that using a lambda function; we can apply a function to an entire DataFrame this way. What we're going to do is use the np.min and np.max functions to find the maximum number of ratings and the minimum number of ratings found in the entire dataset. So, we'll take the most popular movie and the least popular movie, find the range there, and normalize everything against that range:

movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()

What this gives us, when we run it, is a measure of popularity for each movie, on a scale of 0 to 1. A score of 0 here would mean that nobody watched it, it's the least popular movie, and a score of 1 would mean that everybody watched it, it's the most popular movie, or more specifically, the movie that the most people watched. So, we have a measure of movie popularity now that we can use for our distance metric.
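Just to make the formula in that lambda concrete, here is a toy illustration with made-up rating counts, not from the book; it is the same min-max normalization, (x - min) / (max - min), written out on a plain array:

import numpy as np

counts = np.array([1, 50, 583])                                   # made-up rating counts
print((counts - counts.min()) / (counts.max() - counts.min()))    # [0.0, ~0.084, 1.0]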
Next, let's extract some general information. It turns out that there is a u.item file that not only contains the movie names, but also all the genres that each movie belongs to:

movieDict = {}
with open(r'\DataScience\ml-100k\u.item') as f:
    temp = ''
    for line in f:
        fields = line.rstrip('\n').split('|')
        movieID = int(fields[0])
        name = fields[1]
        genres = fields[5:25]
        genres = list(map(int, genres))
        movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))

The code above will go through each line of u.item. We're doing this the hard way; we're not using any pandas functions, we're just using straight-up Python this time. Again, make sure you change that path to wherever you installed this information.

We open our u.item file, and then iterate through every line in the file, one at a time. We strip out the newline at the end and split the line on the pipe delimiters in this file. Then we extract the movieID, the movie name, and all of the individual genre fields. So basically there's a bunch of 0s and 1s in different fields in this source data, where each one of those fields represents a given genre. We then construct a Python dictionary that maps movie IDs to their names and genres, and then we also fold back in our rating information. So we will have the name, the genres, the popularity on a scale of 0 to 1, and the average rating. That's what this little snippet of code does. Let's run that! And, just to see what we end up with, we can extract the value for movie_id 1:

movieDict[1]

Following is the output of the preceding code. Entry 1 in our dictionary happens to be Toy Story, an old Pixar film from 1995. You've probably heard of it. Next is a list of all the genres, where a 0 indicates it is not part of that genre, and a 1 indicates it is part of that genre. There is a data file in the MovieLens dataset that will tell you what each of these genre fields actually corresponds to.
For our purposes, that's not actually important, right? We're just trying to measure distance between movies based on their genres, so all that matters mathematically is how similar one movie's vector of genres is to another's, okay? The actual genres themselves are not important; we just want to see how same or different two movies are in their genre classifications. So we have that genre list, we have the popularity score that we computed, and we have the mean or average rating for Toy Story. Okay, let's go ahead and figure out how to combine all this information together into a distance metric, so we can find the k-nearest neighbors for Toy Story, for example.

I've rather arbitrarily put together this ComputeDistance function, which takes two movies and computes a distance score between the two. We're going to base this, first of all, on the similarity between the two genre vectors, using a cosine similarity metric. Like I said, we're just going to take the list of genres for each movie and see how similar they are to each other; again, a 0 indicates it's not part of that genre, a 1 indicates it is. We will then compare the popularity scores and just take the raw difference, the absolute value of the difference between those two popularity scores, and use that toward the distance metric as well. Then, we will use that information alone to define the distance between two movies. So, for example, if we compute the distance between movie IDs 2 and 4, this function returns a distance based only on the popularity of those movies and on their genres.

Now, imagine a scatter plot, if you will, like we saw back in our example from the previous sections, where one axis might be a measure of genre similarity, based on the cosine metric, and the other axis might be popularity, okay? We're just finding the distance between these two things:

from scipy import spatial

def ComputeDistance(a, b):
    genresA = a[1]
    genresB = b[1]
    genreDistance = spatial.distance.cosine(genresA, genresB)
    popularityA = a[2]
    popularityB = b[2]
    popularityDistance = abs(popularityA - popularityB)
    return genreDistance + popularityDistance

ComputeDistance(movieDict[2], movieDict[4])
For this example, where we're computing the distance between movies 2 and 4 using our distance metric, we end up with a fairly high score. Remember, a far distance means it's not similar, right? We want the nearest neighbors, with the smallest distance. So a high score is telling me that these movies really aren't similar. Let's do a quick sanity check and see what these movies really are:

print(movieDict[2])
print(movieDict[4])

It turns out they're the movies GoldenEye and Get Shorty, which are pretty darn different movies; you know, you have a James Bond action-adventure and a comedy movie. Not very similar at all! They're actually comparable in terms of popularity, but the genre difference did it in. Okay, so let's put it all together!

Next, we're going to write a little bit of code to take some given movieID and find the KNN. All we have to do is compute the distance between Toy Story and all the other movies in our movie dictionary, and sort the results based on their distance score. That's what the following little snippet of code does. If you want to take a moment to wrap your head around it, it's fairly straightforward. We have a little getNeighbors function that will take the movie that we're interested in and the K neighbors that we want to find. It iterates through every movie that we have; if it's a different movie than the one we're looking at, it computes that distance score from before, appends it to the list of results that we have, and sorts that result. Then we pluck off the top K results.
In this example, we're going to set K to 10 to find the 10 nearest neighbors. We will find the 10 nearest neighbors using getNeighbors, and then we will iterate through all these 10 nearest neighbors and compute the average rating from each neighbor. That average rating will inform our rating prediction for the movie in question.

As a side effect, we also get the 10 nearest neighbors based on our distance function, which we could call similar movies. So, that information itself is useful. Going back to that "customers who watched also watched" example, if you wanted to do a similar feature that was just based on this distance metric and not actual behavior data, this might be a reasonable place to start, right?

import operator

def getNeighbors(movieID, K):
    distances = []
    for movie in movieDict:
        if (movie != movieID):
            dist = ComputeDistance(movieDict[movieID], movieDict[movie])
            distances.append((movie, dist))
    distances.sort(key=operator.itemgetter(1))
    neighbors = []
    for x in range(K):
        neighbors.append(distances[x][0])
    return neighbors

K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
    avgRating += movieDict[neighbor][3]
    print(movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))

avgRating /= float(K)
So, let's go ahead and run this and see what we end up with. The output of the preceding code is as follows. The results aren't that unreasonable. We are using as an example the movie Toy Story, which is movieID 1, and what we came back with for the top 10 nearest neighbors is a pretty good selection of comedy and children's movies. So, given that Toy Story is a popular comedy and children's movie, we got a bunch of other popular comedy and children's movies; so, it seems to work! We didn't have to use a bunch of fancy collaborative filtering algorithms, and these results aren't that bad.

Next, let's use KNN to predict the rating, where we're thinking of the rating as the classification in this example:

avgRating

Following is the output of the preceding code. We end up with a predicted rating that actually isn't all that different from the actual mean rating for that movie. So, not great, but not too bad either! I mean, it actually works surprisingly well, given how simple this algorithm is!

Activity

Most of the complexity in this example was just in determining our distance metric, and you know, we intentionally got a little bit fancy there just to keep it interesting, but you can do anything else you want to. So, if you want to fiddle around with this, I definitely encourage you to do so. Our choice of 10 for K was completely out of thin air; I just made that up. How does this respond to different K values? Do you get better results with a higher value of K, or with a lower value of K? Does it matter?
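If you want a quick starting point for that experiment, here is a hedged sketch, not the notebook's code, that reuses the getNeighbors, movieDict, and movieProperties objects from above to compare the prediction for a few K values against Toy Story's actual mean rating:

actual = movieProperties.loc[1].rating.get('mean')     # Toy Story's real mean rating
for K in (3, 5, 10, 20, 40):
    neighbors = getNeighbors(1, K)
    predicted = sum(movieDict[n][3] for n in neighbors) / float(K)
    print(K, predicted, abs(predicted - actual))       # prediction and its absolute error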
If you really want to do a more involved exercise, you can try to apply train/test to this, to actually find the value of K that most optimally predicts the rating of a given movie based on KNN. And you can use different distance metrics; I kind of made that one up too! So, play around with the distance metric; maybe you can use different sources of information, or weigh things differently. It might be an interesting thing to do. Maybe popularity isn't really as important as the genre information, or maybe it's the other way around. See what impact that has on your results too. So, go ahead and mess with these algorithms, mess with the code, run with it, and see what you can get! And, if you do find a significant way of improving on this, share that with your classmates.

That is KNN in action! A very simple concept, but it can actually be pretty powerful. So, there you have it: similar movies just based on the genre and popularity and nothing else, and it works out surprisingly well. And we used the concept of KNN to actually use those nearest neighbors to predict a rating for a new movie, and that worked out pretty well too. So, that's KNN in action: a very simple technique, but often it works out pretty darn good!

Dimensionality reduction and principal component analysis

Alright, time to get all trippy! We're going to be talking about higher dimensions and dimensionality reduction. Sounds scary! There is some fancy math involved, but conceptually it's not as hard to grasp as you might think. So, let's talk about dimensionality reduction and principal component analysis next. Very dramatic sounding! Usually when people talk about this, they're talking about a technique called principal component analysis, or PCA, and a specific technique called singular value decomposition, or SVD. So PCA and SVD are the topics of this section. Let's dive into it!

Dimensionality reduction

So, what is the curse of dimensionality? Well, a lot of problems can be thought of as having many different dimensions. For example, when we were doing movie recommendations, we had attributes of various movies, and every individual movie could be thought of as its own dimension in that data space.
If you have a lot of movies, that's a lot of dimensions, and you can't really wrap your head around more than 3, because that's what we grew up to evolve within. You might have some sort of data that has many different features that you care about. You know, in a moment we'll look at an example of flowers that we want to classify, and that classification is based on 4 different measurements of the flowers. Those 4 different features, those 4 different measurements, represent 4 dimensions, which, again, is very hard to visualize.

For this reason, dimensionality reduction techniques exist to find a way to reduce higher dimensional information into lower dimensional information. Not only can that make it easier to look at and classify things, but it can also be useful for things like compressing data. So, by preserving the maximum amount of variance while we reduce the number of dimensions, we're more compactly representing a dataset. A very common application of dimensionality reduction is not just visualization, but also compression and feature extraction. We'll talk about that a little bit more in a moment.

A very simple example of dimensionality reduction can be thought of as k-means clustering. So, for example, we might start off with many points that represent many different dimensions in a dataset, but ultimately we can boil that down to k different centroids and your distance to those centroids. That's one way of boiling data down to a lower dimensional representation.

Principal component analysis

Usually, when people talk about dimensionality reduction, they're talking about a technique called principal component analysis. This is a much fancier technique; it gets into some pretty involved mathematics. But, at a high level, all you need to know is that it takes a higher dimensional data space and finds planes within that data space, in higher dimensions.
These higher dimensional planes are called hyperplanes, and they are defined by things called eigenvectors. You take as many planes as you want dimensions in the end, project the data onto those hyperplanes, and those become the new axes in your lower dimensional data space. You know, unless you're familiar with higher dimensional math and you've thought about it before, it's going to be hard to wrap your head around! But, at the end of the day, it means we're choosing planes in a higher dimensional space that still preserve the most variance in our data, and projecting the data onto those planes to bring it into a lower dimensional space, okay?

You don't really have to understand all the math to use it; the important point is that it's a very principled way of reducing a dataset down to a lower dimensional space while still preserving the variance within it. We talked about image compression as one application of this. So, you know, if I want to reduce the dimensionality in an image, I could use PCA to boil it down to its essence.
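To make that projection idea a bit more concrete, here is a bare-bones NumPy sketch, not from the book and using made-up data: we center some 3D points, take the top-2 eigenvectors of their covariance matrix (the principal components), and project onto them to get a 2D representation.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # made-up 3D data
X = X - X.mean(axis=0)                        # center the data
cov = np.cov(X, rowvar=False)                 # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]          # top-2 eigenvectors define the 2D hyperplane
X_2d = X @ components                         # project the data onto that hyperplane
print(X_2d.shape)                             # (100, 2)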
Facial recognition is another example. So, if I have a dataset of faces, maybe each face represents a third dimension of 2D images, and I want to boil that down: SVD and principal component analysis can be a way to identify the features that really count in a face. It might end up focusing more on the eyes and the mouth, for example, those important features that are necessary for preserving the variance within that dataset. So, it can produce some very interesting and very useful results that just emerge naturally out of the data, which is kind of cool!

To make it real, we're going to use a simpler example, using what's called the Iris dataset. This is a dataset that's included with scikit-learn. It's used pretty commonly in examples, and here's the idea behind it. An iris actually has 2 different kinds of petals on its flower. One's called the petal, which is the flower petal you're familiar with, and it also has something called a sepal, which is kind of a supportive lower set of petals on the flower.

We can take a bunch of different species of iris and measure the petal length and width, and the sepal length and width. So, together, the length and width of the petal and the length and width of the sepal are 4 different measurements that correspond to 4 different dimensions in our dataset. I want to use that to classify what species an iris might belong to. Now, PCA will let us visualize this in 2 dimensions instead of 4, while still preserving the variance in that dataset. Let's see how well that works and actually write some Python code to make PCA happen on the Iris dataset.

So, those were the concepts of dimensionality reduction, principal component analysis, and singular value decomposition. All big fancy words, and yeah, it is kind of a fancy thing. You know, we're dealing with reducing higher dimensional spaces down to smaller dimensional spaces in a way that preserves their variance. Fortunately, scikit-learn makes this extremely easy to do; a few lines of code is all you need to actually apply PCA. So let's make that happen!

A PCA example with the Iris dataset

Let's apply principal component analysis to the Iris dataset. This is a 4D dataset that we're going to reduce down to 2 dimensions. We're going to see that we can actually still preserve most of the information in that dataset, even by throwing away half of the dimensions! It's pretty cool stuff, and it's pretty simple too. Let's dive in, do some principal component analysis, and cure the curse of dimensionality. Go ahead and open up the PCA.ipynb file.
It's actually very easy to do using scikit-learn, as usual! Again, PCA is a dimensionality reduction technique. It sounds very science-fictiony, all this talk of higher dimensions, but just to make it more concrete and real again, a common application is image compression. You can think of a black and white picture as 3 dimensions, where you have width as your x-axis, height as your y-axis, and each individual cell has some brightness value on a scale of 0 to 1, that is, black or white or some value in between. So, that would be 2D data: you have 2 spatial dimensions, and then a brightness and intensity dimension on top of that. If you were to distill that down to, say, 2 dimensions alone, that would be a compressed image, and if you were to do that with a technique that preserved the variance in that image as well as possible, you could still reconstruct the image without a whole lot of loss, in theory. So, that's dimensionality reduction distilled down to a practical example.

Now, we're going to use a different example here, using the Iris dataset, and scikit-learn includes it. All it is is a dataset of various iris flower measurements and the species classification for each iris in that dataset. It also has, like I said before, the length and width measurements of both the petal and the sepal for each iris specimen. So, between the length and width of the petal and the length and width of the sepal, we have 4 dimensions of feature data in our dataset.

We want to distill that down to something we can actually look at and understand, because your mind doesn't deal with 4 dimensions very well, but you can look at 2 dimensions on a piece of paper pretty easily. Let's go ahead and load it up:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import pylab as pl
from itertools import cycle

iris = load_iris()

numSamples, numFeatures = iris.data.shape
print(numSamples)
print(numFeatures)
print(list(iris.target_names))
There's a handy dandy load_iris() function built into scikit-learn that will just load that up for you with no additional work, so you can just focus on the interesting part. Let's take a look at what that dataset looks like; the output of the preceding code is as follows:

150
4
['setosa', 'versicolor', 'virginica']

You can see that we are extracting the shape of that dataset, which means how many data points we have in it, that is 150, and how many features, or how many dimensions that dataset has, and that is 4. So, we have 150 Iris specimens in our dataset, with 4 dimensions of information. Again, that is the length and width of the sepal, and the length and width of the petal, for a total of 4 features, which we can think of as 4 dimensions.

We can also print out the list of target names in this dataset, which are the classifications, and we can see that each Iris belongs to one of three different species: Setosa, Versicolor, or Virginica. That's the data that we're working with: 150 Iris specimens, classified into one of 3 species, and we have 4 features associated with each Iris.

Let's look at how easy PCA is. Even though it's a very complicated technique under the hood, doing it is just a few lines of code. We're going to assign the entire Iris dataset and we're going to call it X. We will then create a PCA model, and we're going to keep n_components=2, because we want 2 dimensions, that is, we're going to go from 4D down to 2D.

We're going to use whiten=True, which means that we're going to normalize all the data, and make sure that everything is nice and comparable. Normally you will want to do that to get good results. Then, we will fit the PCA model to our Iris dataset X. We can use that model then, to transform that dataset down to 2 dimensions. Let's go ahead and run that. It happened pretty quickly!

X = iris.data
pca = PCA(n_components=2, whiten=True).fit(X)
X_pca = pca.transform(X)

Please think about what just happened there. We actually created a PCA model to reduce 4 dimensions down to 2, and it did that by choosing two 4D vectors, to create hyperplanes around, to project that 4D data down to 2 dimensions. You can actually see what those 4D vectors are, those eigenvectors, by printing out the actual components of PCA. So, PCA stands for Principal Component Analysis, and those principal components are the eigenvectors that we chose to define our planes about:

print(pca.components_)
Output to the preceding code is as follows:

You can actually look at those values. They're not going to mean a lot to you, because you can't really picture 4 dimensions anyway, but we did that just so you can see that it's actually doing something with principal components. So, let's evaluate our results:

print(pca.explained_variance_ratio_)
print(sum(pca.explained_variance_ratio_))

The PCA model gives us back something called explained_variance_ratio. Basically, that tells you how much of the variance in the original 4D data was preserved as I reduced it down to 2 dimensions. So, let's go ahead and take a look at that:

What it gives you back is actually a list of 2 items for the 2 dimensions that we preserved. This is telling me that in the first dimension I can actually preserve about 92% of the variance in the data, and the second dimension only gave me an additional 5% of variance. If I sum it together, in these 2 dimensions that I projected my data down into, I still preserved over 97% of the variance in the source data. We can see that 4 dimensions weren't really necessary to capture all the information in this dataset, which is pretty interesting. It's pretty cool stuff if you think about it!

Why do you think that might be? Well, maybe the overall size of the flower has some relationship to the species at its center. Maybe it's the ratio of length to width for the petal and the sepal. You know, some of these things probably move together in concert with each other for a given species, or for a given overall size of a flower. So, perhaps there are relationships between these 4 dimensions that PCA is extracting on its own. It's pretty cool, and pretty powerful stuff. Let's go ahead and visualize this.

The whole point of reducing this down to 2 dimensions was so that we could make a nice little 2D scatter plot of it, at least that's our objective for this little example here. So, we're going to do a little bit of Matplotlib magic here to do that. There is some sort of fancy stuff going on here that I should at least mention. So, what we're going to do is create a list of colors: red, green and blue. We're going to create a list of target IDs, so that the values 0, 1, and 2 map to the different Iris species that we have.
What we're going to do is zip all this up with the actual names of each species. The for loop will iterate through the 3 different Iris species, and as it does that, we're going to have the index for that species, a color associated with it, and the actual human-readable name for that species. We'll take one species at a time and plot it on our scatter plot just for that species, with a given color and the given label. We will then add in our legend and show the results:

colors = cycle('rgb')
target_ids = range(len(iris.target_names))
pl.figure()
for i, c, label in zip(target_ids, colors, iris.target_names):
    pl.scatter(X_pca[iris.target == i, 0], X_pca[iris.target == i, 1],
        c=c, label=label)
pl.legend()
pl.show()

The following is what we end up with:

That is our 4D Iris data projected down to 2 dimensions. Pretty interesting stuff! You can see it still clusters together pretty nicely. You know, you have all the Virginicas sitting together, all the Versicolors sitting in the middle, and the Setosas way off on the left side. It's really hard to imagine what these actual values represent. But, the important point is, we've projected 4D data down to 2D, and in such a way that we still preserve the variance. We can still see clear delineations between these 3 species. A little bit of intermingling going on in there, it's not perfect, you know. But by and large, it was pretty effective.
Activity

As you recall from explained_variance_ratio, we actually captured most of the variance in a single dimension. Maybe the overall size of the flower is all that really matters in classifying it, and you can specify that with one feature. So, go ahead and modify the results if you are feeling up to it. See if you can get away with even fewer dimensions, say 1 dimension instead of 2. So, go change that n_components parameter to 1, and see what kind of variance ratio you get.

What happens? Does it make sense? Play around with it, get some familiarity with it. That is dimensionality reduction, principal component analysis, and singular value decomposition all in action. Very, very fancy terms, and you know, to be fair it is some pretty fancy math under the hood. But as you can see, it's a very powerful technique, and with scikit-learn, it's not hard to apply. So, keep that in your tool chest.

And there you have it! A 4D dataset of flower information boiled down to 2 dimensions that we can both easily visualize, and also still see clear delineations between the classifications that we're interested in. So, PCA works really well in this example. Again, it's a useful tool for things like compression, or feature extraction, or facial recognition as well. So, keep that in your toolbox.

Data warehousing overview

Next, we're going to talk a little bit about data warehousing. This is a field that's really been upended recently by the advent of Hadoop, and some big data techniques and cloud computing. So, a lot of big buzz words there, but concepts that are important for you to understand.

Let's dive in and explore these concepts! Let's talk about ELT and ETL, and data warehousing in general. This is more of a concept, as opposed to a specific practical technique, so we're going to talk about it conceptually. But, it is something that's likely to come up in the setting of a job interview. So, let's make sure you understand these concepts.

We'll start by talking about data warehousing in general. What is a data warehouse? Well, it's basically a giant database that contains information from many different sources and ties them together for you. For example, maybe you work at a big ecommerce company and they might have an ordering system that feeds information about the stuff people bought into your data warehouse.
16,542 | you might also have information from web server logs that get ingested into the data warehouse this would allow you to tie together browsing information on the website with what people ultimately ordered for example maybe you could also tie in information from your customer service systemsand measure if there' relationship between browsing behavior and how happy the customers are at the end of the day data warehouse has the challenge of taking data from many different sourcestransforming them into some sort of schema that allows us to query these different data sources simultaneouslyand it lets us make insightsthrough data analysis solarge corporations and organizations have this sort of thing pretty commonly we're going into the concept of big data here you can have giant oracle databasefor examplethat contains all this stuff and maybe it' partitioned in some wayand replicated and it has all sorts of complexity you can just query that through sqlstructured query languageorthrough graphical toolslike tableau which is very popular one these days that' what data analyst doesthey query large datasets using stuff like tableau that' kind of the difference between data analyst and data scientist you might be actually writing code to perform more advanced techniques on data that border on aias opposed to just using tools to extract graphs and relationships out of data warehouse it' very complicated problem at amazonwe had an entire department for data warehousing that took care of this stuff full timeand they never had enough peoplei can tell you thatit' big jobyou knowthere are lot of challenges in doing data warehousing one is data normalizationsoyou have to figure out how do all the fields in these different data sources actually relate to each otherhow do actually make sure that column in one data source is comparable to column from another data source and has the same set of dataat the same scaleusing the same terminologyhow do deal with missing datahow do deal with corrupt data or data from outliersor from robots and things like thatthese are all very big challenges maintaining those data feeds is also very big problem lot can go wrong when you're importing all this information into your data warehouseespecially when you have very large transformation that needs to happen to take the raw datasaved from web logsinto an actual structured database table that can be imported into your data warehouse scaling also can get tricky when you're dealing with monolithic data warehouse eventuallyyour data will get so large that those transformations themselves start to become problem this starts to get into the whole topic of elt versus etl thing |
16,543 | etl versus elt let' first talk about etl what does that stand forit stands for extracttransformand load and that' sort of the conventional way of doing data warehousing basicallyfirst you extract the data that you want from the operational systems that you want sofor examplei might extract all of the web logs from our web servers each day then need to transform all that information into an actual structured database table that can import into my data warehouse this transformation stage might go through every line of those web server logstransform that into an actual tablewhere ' plucking out from each web log line things like session idwhat page they looked atwhat time it waswhat the referrer was and things like thatand can organize that into tabular structure that can then load into the data warehouse itselfas an actual table in database soas data becomes larger and largerthat transformation step can become real problem think about how much processing work is required to go through all of the web logs on googleor amazonor any large websiteand transform that into something database can ingest that itself becomes scalability challenge and something that can introduce stability problems through the entire data warehouse pipeline that' where the concept of elt comes inand it kind of flips everything on its head it says"wellwhat if we don' use huge oracle instancewhat if instead we use some of these newer techniques that allow us to have more distributed database over hadoop cluster that lets us take the power of these distributed databases like hiveor sparkor mapreduceand use that to actually do the transformation after it' been loadedthe idea here is we're going to extract the information we want as we did beforesay from set of web server logs but thenwe're going to load that straight in to our data repositoryand we're going to use the power of the repository itself to actually do the transformation in place sothe idea here isinstead of doing an offline process to transform my web logsas an exampleinto structured formati' just going to suck those in as raw text files and go through them one line at timeusing the power of something like hadoopto actually transform those into more structured format that can then query across my entire data warehouse solution |
16,544 | things like hive let you host massive database on hadoop cluster there' things like spark sql that let you also do queries in very sql-like data warehouse-like manneron data warehouse that is actually distributed on hadoop cluster there are also distributed nosql data stores that can be queried using spark and mapreduce the idea is that instead of using monolithic database for data warehouseyou're instead using something built on top of hadoopor some sort of clusterthat can actually not only scale up the processing and querying of that databut also scale the transformation of that data as well once againyou first extract your raw databut then we're going to load it into the data warehouse system itself as is andthen use the power of the data warehousewhich might be built on hadoopto do that transformation as the third step then can query things together soit' very big projectvery big topic you knowdata warehousing is an entire discipline in and of itself we're going to talk about spark some more in this book very soonwhich is one way of handling this thing there' something called spark sql in particular that' relevant the overall concept here is that if you move from monolithic database built on oracle or mysql to one of these more modern distributed databases built on top of hadoopyou can take that transform stage and actually do that after you've loaded in the raw dataas opposed to before that can end up being simpler and more scalableand taking advantage of the power of large computing clusters that are available today that' etl versus eltthe legacy way of doing it with lot of clusters all over the place in cloud-based computing versus way that makes sense todaywhen we do have large clouds of computing available to us for transforming large datasets that' the concept etl is kind of the old school way of doing ityou transform bunch of data offline before importing it in and loading it into giant data warehousemonolithic database but with today' techniqueswith cloud-based databasesand hadoopand hiveand sparkand mapreduceyou can actually do it little bit more efficiently and take the power of cluster to actually do that transformation step after you've loaded the raw data into your data warehouse this is really changing the field and it' important that you know about it againthere' lot more to learn on the subjectso encourage you to explore more on this topic butthat' the basic conceptand now you know what people are talking about when they talk about etl versus elt |
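To make the ELT idea a little more concrete, here is a minimal PySpark sketch of the "load first, transform in place" pattern. This is not from the book's examples; the file path, the log layout, and the view name are all hypothetical, but it shows the shape of the approach: the raw text goes straight onto the cluster, and the structuring happens afterwards using the cluster's own horsepower.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Extract + Load: pull the raw, unstructured web logs straight into the cluster.
# The path here is hypothetical.
raw_logs = spark.read.text("hdfs:///raw/access_logs/2021-01-01/")

# Transform (in place, on the cluster): parse each line into structured columns.
# This assumes a simple space-separated log layout, purely for illustration.
parsed = raw_logs.select(
    F.split(F.col("value"), " ").getItem(0).alias("host"),
    F.split(F.col("value"), " ").getItem(3).alias("timestamp"),
    F.split(F.col("value"), " ").getItem(6).alias("url"),
)

# Register the structured view so it can be queried with Spark SQL,
# data-warehouse style, without ever running an offline transform job.
parsed.createOrReplaceTempView("page_views")
spark.sql("""
    SELECT url, COUNT(*) AS hits
    FROM page_views
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10
""").show()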
16,545 | reinforcement learning our next topic' fun onereinforcement learning we can actually use this idea with an example of pac-man we can actually create little intelligent pac-man agent that can play the game pac-man really well on its own you'll be surprised how simple the technique is for building up the smarts behind this intelligent pac-man let' take looksothe idea behind reinforcement learning is that you have some sort of agentin this case pac-manthat explores some sort of spaceand in our example that space will be the maze that pac-man is in as it goesit learns the value of different state changes within different conditions for examplein the preceding imagethe state of pac-man might be defined by the fact that it has ghost to the southand wall to the westand empty spaces to the north and eastand that might define the current state of pac-man the state changes it can take would be to move in given direction can then learn the value of going in certain direction sofor exampleif were to move northnothing would really happenthere' no real reward associated with that butif were to move south would be destroyed by the ghostand that would be negative value as go and explore the entire spacei can build up set of all the possible states that pacman can be inand the values associated with moving in given direction in each one of those statesand that' reinforcement learning and as it explores the whole spaceit refines these reward values for given stateand it can then use those stored reward values to choose the best decision to make given current set of conditions in addition to pac-manthere' also game called cat mouse that is an example that' used commonly that we'll look at later |
The benefit of this technique is that once you've explored the entire set of possible states that your agent can be in, you can very quickly have very good performance when you run different iterations of this. So, you know, you can basically make an intelligent Pac-Man by running reinforcement learning and letting it explore the values of different decisions it can make in different states and then storing that information, to very quickly make the right decision given a future state that it sees in an unknown set of conditions.

Q-learning

So, a very specific implementation of reinforcement learning is called Q-learning, and this formalizes what we just talked about a little bit more. So again, you start with a set of environmental states of the agent (Is there a ghost next to me? Is there a power pill in front of me? Things like that.); we're going to call that s. I have a set of possible actions that I can take in those states; we're going to call that set of actions a. In the case of Pac-Man, those possible actions are move up, down, left, or right. Then we have a value for each state/action pair that we'll call Q; that's why we call it Q-learning. So, for each state, a given set of conditions surrounding Pac-Man, a given action will have a value Q. So, moving up might have a given value Q, moving down might have a negative Q value if it means encountering a ghost, for example.

So, we start off with a Q value of 0 for every possible state that Pac-Man could be in. And, as Pac-Man explores a maze, as bad things happen to Pac-Man, we reduce the Q value for the state that Pac-Man was in at the time. So, if Pac-Man ends up getting eaten by a ghost, we penalize whatever he did in that current state. As good things happen to Pac-Man, as he eats a power pill, or eats a ghost, we'll increase the Q value for that action, for the state that he was in. Then, what we can do is use those Q values to inform Pac-Man's future choices, and sort of build a little intelligent agent that can perform optimally, and make a perfect little Pac-Man. From the same image of Pac-Man that we saw just above, we can further define the current state of Pac-Man by defining that he has a wall to the West, empty space to the North and East, and a ghost to the South.
We can look at the actions he can take: he can't actually move left at all, but he can move up, down, or right, and we can assign a value to all those actions. By going up or right, nothing really happens at all, there's no power pill or dots to consume. But if he goes down, that's definitely a negative value. We can say for the state given by the current conditions that Pac-Man is surrounded by, moving down would be a really bad choice; there should be a negative Q value for that. Moving left just can't be done at all. Moving up or right, or staying neutral, the Q value would remain 0 for those action choices for that given state.

Now, you can also look ahead a little bit, to make an even more intelligent agent. So, I'm actually two steps away from getting a power pill here. So, as Pac-Man were to explore this state, if I were to hit the case of eating that power pill on the next state, I could actually factor that into the Q value for the previous state. If you just have some sort of discount factor, based on how far away you are in time, how many steps away you are, you can factor that all in together. So, that's a way of actually building in a little bit of memory into the system. You can "look ahead" more than one step by using a discount factor when computing Q (here s is the previous state, s' is the current state):

Q(s,a) += discount * (reward(s,a) + max(Q(s')) - Q(s,a))

So, the Q value that I experience when I consume that power pill might actually give a boost to the previous Q values that I encountered along the way. So, that's a way to make Q-learning even better.

The exploration problem

One problem that we have in reinforcement learning is the exploration problem. How do I make sure that I efficiently cover all the different states and actions within those states during the exploration phase?

The simple approach

One simple approach is to always choose the action for a given state with the highest Q value that I've computed so far, and if there's a tie, just choose at random. So, initially all of my Q values might be 0, and I'll just pick actions at random at first. As I start to gain information about better Q values for given actions and given states, I'll start to use those as I go. But, that ends up being pretty inefficient, and I can actually miss a lot of paths that way if I just tie myself into this rigid algorithm of always choosing the best Q value that I've computed thus far.
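To make that update rule and the simple greedy approach a bit more concrete, here is a minimal Python sketch. It is not code from this book's notebooks: the q_table dictionary, the state names, the reward values, and the separate learning_rate are all hypothetical illustration (the formula in the text folds the learning rate and discount together; this sketch uses the more common form with both), but it shows how the table of Q values gets refined as the agent explores.

import random

# Hypothetical Q table: maps (state, action) pairs to learned Q values.
q_table = {}
actions = ['up', 'down', 'left', 'right']

def get_q(state, action):
    # Unseen state/action pairs start out with a Q value of 0.
    return q_table.get((state, action), 0.0)

def update_q(prev_state, action, reward, current_state,
             discount=0.9, learning_rate=0.5):
    # Nudge Q(s, a) toward the observed reward plus the discounted
    # best Q value of the state we ended up in.
    best_next = max(get_q(current_state, a) for a in actions)
    old_q = get_q(prev_state, action)
    q_table[(prev_state, action)] = old_q + learning_rate * (
        reward + discount * best_next - old_q)

def choose_action(state):
    # The "simple approach": pick the highest-valued action, break ties randomly.
    best = max(get_q(state, a) for a in actions)
    return random.choice([a for a in actions if get_q(state, a) == best])

# One hypothetical experience: moving down next to a ghost gets penalized.
update_q(prev_state='ghost_south_wall_west', action='down',
         reward=-100, current_state='eaten')
print(choose_action('ghost_south_wall_west'))

In a real maze you would run many of these updates over many exploration runs before trusting the table of Q values.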
16,548 | the better way soa better way is to introduce little bit of random variation into my actions as ' exploring sowe call that an epsilon term sosuppose we have some valuethat roll the dicei have random number if it ends up being less than this epsilon valuei don' actually follow the highest valuei don' do the thing that makes sensei just take path at random to try it outand see what happens that actually lets me explore much wider range of possibilitiesa much wider range of actionsfor wider range of states more efficiently during that exploration stage sowhat we just did can be described in very fancy mathematical termsbut you know conceptually it' pretty simple fancy words explore some set of actions that can take for given set of statesi use that to inform the rewards associated with given action for given set of statesand after that exploration is done can use that informationthose valuesto intelligently navigate through an entirely new maze for example this can also be called markov decision process so againa lot of data science is just assigning fancyintimidating names to simple conceptsand there' ton of that in reinforcement learning markov decision process soif you look up the definition of markov decision processesit is " mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of decision makerdecision makingwhat action do we take given set of possibilities for given statein situations where outcomes are partly randomhmmkind of like our random exploration there partly under the control of decision makerthe decision maker is our values that we computed |
So, MDPs, Markov decision processes, are a fancy way of describing our exploration algorithm that we just described for reinforcement learning. The notation is even similar: states are still described as s, and s' is the next state that we encounter. We have state transition functions that are defined as Pa(s, s') for a given state of s and s'. We have our Q values, which are basically represented as a reward function, an Ra(s, s') value for a given s and s'. So, moving from one state to another has a given reward associated with it, and moving from one state to another is defined by a state transition function:

States are still described as s and s'
State transition functions are described as Pa(s, s')
Our Q values are described as a reward function Ra(s, s')

So again, describing what we just did, only with mathematical notation, and a fancier sounding word, Markov decision processes. And, if you want to sound even smarter, you can also call a Markov decision process by another name: a discrete time stochastic control process. That sounds intelligent! But the concept itself is the same thing that we just described.

Dynamic programming

So, even more fancy words: dynamic programming can be used to describe what we just did as well. Wow! That sounds like artificial intelligence, computers programming themselves, Terminator, Skynet stuff. But no, it's just what we just did. If you look up the definition of dynamic programming, it is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions, ideally using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space.
16,550 | storing their solutionsthose solutions being the values that associated with each possible action at each state ideallyusing memory-based data structurewellof course need to store those values and associate them with states somehowrightthe next time the same subproblem occursthe next time pac-man is in given state that have set of values for instead of recomputing its solutionone simply looks up the previously computed solutionthe value already have from the exploration stage thereby saving computation time at the expense of modest expenditure in storage spacethat' exactly what we just did with reinforcement learning we have complicated exploration phase that finds the optimal rewards associated with each action for given state once we have that table of the right action to take for given statewe can very quickly use that to make our pac-man move in an optimal manner in whole new maze that he hasn' seen before soreinforcement learning is also form of dynamic programming wowto recapyou can make an intelligent pac-man agent by just having it semi-randomly explore different choices of movement given different conditionswhere those choices are actions and those conditions are states we keep track of the reward or penalty associated with each action or state as we goand we can actually discountgoing back multiple steps if you want to make it even better then we store those values that we end up associating with each stateand we can use that to inform its future choices so we can go into whole new mazeand have really smart pac-man that can avoid the ghosts and eat them up pretty effectivelyall on its own it' pretty simple conceptvery powerful though you can also say that you understand bunch of fancy terms because it' all called the same thing -learningreinforcement learningmarkov decision processesdynamic programmingall tied up in the same concept don' knowi think it' pretty cool that you can actually make sort of an artificially intelligent pac-man through such simple techniqueand it really does workif you want to go look at it in more detailfollowing are few examples you can look at that have one actual source code you can look atand potentially play withpython markov decision process toolboxiuuq qznequppmcpysfbeuifepdtpshfombuftubqjneqiunm |
16,551 | there is python markov decision process toolbox that wraps it up in all that terminology we talked about there' an example you can look ata working example of the cat and mouse gamewhich is similar andthere is actually pac-man example you can look at online as wellthat ties in more directly with what we were talking about feel free to explore these linksand learn even more about it and so that' reinforcement learning more generallyit' useful technique for building an agent that can navigate its way through possible different set of states that have set of actions that can be associated with each state sowe've talked about it mostly in the context of maze game butyou can think more broadlyand you know whenever you have situation where you need to predict behavior of something given set of current conditions and set of actions it can take reinforcement learning and -learning might be way of doing it sokeep that in mindsummary in this we saw one of the simplest techniques of machine learning called -nearest neighbors we also looked at an example of knn which predicts the rating for movie we analysed the concepts of dimensionality reduction and principal component analysis and saw an example of pcawhich reduced data to two dimensions while still preserving its variance nextwe learned the concept of data warehousing and saw how using the elt process instead of etl makes more sense today we walked through the concept of reinforcement learning and saw how it is used behind the pac-man game finallywe saw some fancy words used for reinforcement learning ( -learningmarkov decision processand dynamic learningin the next we'll see how to deal with real-world data |
16,552 | dealing with real-world data in this we're going to talk about the challenges of dealing with real-world dataand some of the quirks you might run into the starts by talking about the bias-variance trade-offwhich is kind of more principled way of talking about the different ways you might overfit and underfit dataand how it all interrelates with each other we then talk about the -fold cross-validation techniquewhich is an important tool in your chest to combat overfittingand look at how to implement it using python nextwe analyze the importance of cleaning your data and normalizing it before actually applying any algorithms on it we see an example to determine the most popular pages on website which will demonstrate the importance of cleaning data the also covers the importance of remembering to normalize numerical data finallywe look at how to detect outliers and deal with them specificallythis covers the following topicsanalyzing the bias/variance trade-off the concept of -fold cross-validation and its implementation the importance of cleaning and normalizing data an example to determine the popular pages of website normalizing numerical data detecting outliers and dealing with them |
16,553 | bias/variance trade-off one of the basic challenges that we face when dealing with real-world data is overfitting versus underfitting your regressions to that dataor your modelsor your predictions when we talk about underfitting and overfittingwe can often talk about that in the context of bias and varianceand the bias-variance trade-off solet' talk about what that means so conceptuallybias and variance are pretty simple bias is just how far off you are from the correct valuesthat ishow good are your predictions overall in predicting the right overall value if you take the mean of all your predictionsare they more or less on the right spotor are your errors all consistently skewed in one direction or anotherif sothen your predictions are biased in certain direction variance is just measure of how spread outhow scattered your predictions are soif your predictions are all over the placethen that' high variance butif they're very tightly focused on what the correct values areor even an incorrect value in the case of high biasthen your variance is small let' look at some examples let' imagine that the following dartboard represents bunch of predictions we're making where the real value we're trying to predict is in the center of the bullseyestarting with the dartboard in the upper left-hand corneryou can see that our points are all scattered about the center so overallyou know the mean error comes out to be pretty close to reality our bias is actually very lowbecause our predictions are all around the same correct point howeverwe have very high variancebecause these points are scattered about all over the place sothis is an example of low bias and high variance |
16,554 | if we move on to the dartboard in the upper right cornerwe see that our points are all consistently skewed from where they should beto the northwest so this is an example of high bias in our predictionswhere they're consistently off by certain amount we have low variance because they're all clustered tightly around the wrong spotbut at least they're close togetherso we're being consistent in our predictions that' low variance butthe bias is high so againthis is high biaslow variance in the dartboard in the lower left corneryou can see that our predictions are scattered around the wrong mean point sowe have high biaseverything is skewed to some place where it shouldn' be but our variance is also high sothis is kind of the worst of both worlds herewe have high bias and high variance in this example finallyin wonderful perfect worldyou would have an example like the lower right dartboardwhere we have low biaswhere everything is centered around where it should beand low variancewhere things are all clustered pretty tightly around where they should be soin perfect world that' what you end up with in realityyou often need to choose between bias and variance it comes down to over fitting vs underfitting your data let' take look at the following example it' little bit of different way of thinking of bias and variance soin the left graphwe have straight lineand you can think of that as having very low variancerelative to these observations sothere' not lot of variance in this linethat isthere is low variance but the biasthe error from each individual pointis actually high |
16,555 | nowcontrast that to the overfitted data in the graph at the rightwhere we've kind of gone out of our way to fit the observations the line has high variancebut low biasbecause each individual point is pretty close to where it should be sothis is an example of where we traded off variance for bias at the end of the dayyou're not out to just reduce bias or just reduce varianceyou want to reduce error that' what really mattersand it turns out you can express error as function of bias and variancelooking at thiserror is equal to bias squared plus variance sothese things both contribute to the overall errorwith bias actually contributing more but keep in mindit' error you really want to minimizenot the bias or the variance specificallyand that an overly complex model will probably end up having high variance and low biaswhereas too simple model will have low variance and high bias howeverthey could both end up having similar error terms at the end of the day you just have to find the right happy medium of these two things when you're trying to fit your data we'll talk about some more principled ways of actually avoiding overfitting in our forthcoming sections butit' just the concept of bias and variance that want to get acrossbecause people do talk about it and you're going to be expected to know what means now let' tie that back to some earlier concepts in this book for examplein -nearest neighbors if we increase the value of kwe start to spread out our neighborhood that were averaging across to larger area that has the effect of decreasing variance because we're kind of smoothing things out over larger spacebut it might increase our bias because we'll be picking up larger population that may be less and less relevant to the point we started from by smoothing out knn over larger number of neighborswe can decrease the variance because we're smoothing things out over more values butwe might be introducing bias because we're introducing more and more points that are less than less related to the point we started with decision trees is another example we know that single decision tree is prone to overfittingso that might imply that it has high variance butrandom forests seek to trade off some of that variance for bias reductionand it does that by having multiple trees that are randomly variant and averages all their solutions together it' like when we average things out by increasing in knnwe can average out the results of decision tree by using more than one decision tree using random forests similar idea |
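If it helps to see that error decomposition with actual numbers, here is a tiny NumPy sketch. It is not from the book's notebooks; the true value and the predictions are made-up, but it shows that for a fixed true value, the mean squared error of a set of predictions is exactly their squared bias plus their variance.

import numpy as np

true_value = 10.0

# Hypothetical predictions from some model, just for illustration.
predictions = np.array([11.2, 9.8, 10.5, 12.1, 8.9, 10.9])

bias = predictions.mean() - true_value          # how far off we are on average
variance = predictions.var()                    # how spread out the predictions are
mse = np.mean((predictions - true_value) ** 2)  # overall error

print("bias^2 + variance =", bias ** 2 + variance)
print("mean squared error =", mse)  # matches bias^2 + variance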
This is the bias-variance trade-off. You know, the decision you have to make between how overall accurate your values are, and how spread out they are or how tightly clustered they are. That's the bias-variance trade-off, and they both contribute to the overall error, which is the thing you really care about minimizing. So, keep those terms in mind!

K-fold cross-validation to avoid overfitting

Earlier in the book, we talked about train and test as a good way of preventing overfitting and actually measuring how well your model can perform on data it's never seen before. We can take that to the next level with a technique called k-fold cross-validation. So, let's talk about this powerful tool in your arsenal for fighting overfitting, k-fold cross-validation, and learn how that works.

To recall from train/test, the idea was that we split all of our data that we're building a machine learning model based off of into two segments: a training dataset, and a test dataset. The idea is that we train our model only using the data in our training dataset, and then we evaluate its performance using the data that we reserved for our test dataset. That prevents us from overfitting to the data that we have, because we're testing the model against data that it's never seen before.

However, train/test still has its limitations: you could still end up overfitting to your specific train/test split. Maybe your training dataset isn't really representative of the entire dataset, and too much stuff ended up in your training dataset that skews things. So, that's where k-fold cross-validation comes in; it takes train/test and kicks it up a notch.

The idea, although it sounds complicated, is fairly simple: instead of dividing our data into two buckets, one for training and one for testing, we divide it into K buckets. We reserve one of those buckets for testing purposes, for evaluating the results of our model. We train our model against the remaining K-1 buckets that we have, and then we take our test dataset and use that to evaluate how well our model did amongst all of those different training datasets. We average those resulting error metrics, that is, those r-squared values, together to get a final error metric from k-fold cross-validation.
16,557 | that' all it is it is more robust way of doing train/testand that' one way of doing it nowyou might think to yourself wellwhat if ' overfitting to that one test dataset that reservedi' still using the same test dataset for every one of those training datasets what if that test dataset isn' really representative of things eitherthere are variations of -fold cross-validation that will randomize that as well soyou could randomly pick what the training dataset is as well each time aroundand just keep randomly assigning things to different buckets and measuring the results but usuallywhen people talk about -fold cross-validationthey're talking about this specific technique where you reserve one bucket for testingand the remaining buckets for trainingand you evaluate all of your training datasets against the test dataset when you build model for each one example of -fold cross-validation using scikitlearn fortunatelyscikit-learn makes this really easy to doand it' even easier than doing normal train/testit' extremely simple to do -fold cross-validationso you may as well just do it nowthe way this all works in practice is you will have model that you're trying to tuneand you will have different variations of that modeldifferent parameters you might want to tweak on itrightlikefor examplethe degree of polynomial for polynomial fit sothe idea is to try different values of your modeldifferent variationsmeasure them all using -fold crossvalidationand find the one that minimizes error against your test dataset that' kind of your sweet spot there in practiceyou want to use -fold cross-validation to measure the accuracy of your model against test datasetand just keep refining that modelkeep trying different values within itkeep trying different variations of that model or maybe even different models entirelyuntil you find the technique that reduces error the mostusing kfold cross validation let' go dive into an example and see how it works we're going to apply this to our iris dataset againrevisiting svcand we'll play with -fold cross-validation and see how simple it is let' actually put -fold cross-validation and train/test into practice here using some real python code you'll see it' actually very easy to usewhich is good thing because this is technique you should be using to measure the accuracythe effectiveness of your models in supervised learning |
Please go ahead and open up the KFoldCrossValidation.ipynb file and follow along if you will. We're going to look at the Iris dataset again; remember we introduced this when we talked about dimensionality reduction.

Just to refresh your memory, the Iris dataset contains a set of 150 Iris flower measurements, where each flower has a length and width of its petal, and a length and width of its sepal. We also know which one of 3 different species of Iris each flower belongs to. The challenge here is to create a model that can successfully predict the species of an Iris flower, just given the length and width of its petal and sepal. So, let's go ahead and do that.

We're going to use the SVC model. If you remember back again, that's just a way of classifying data that's pretty robust. There's a section on that if you need to go and refresh your memory:

import numpy as np
from sklearn import cross_validation
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()

# Split the iris data into train/test data sets with 40% reserved for testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data,
    iris.target, test_size=0.4, random_state=0)

# Build an SVC model for predicting iris classifications using training data
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)

# Now measure its performance with the test data
clf.score(X_test, y_test)

What we do is use the cross_validation library from scikit-learn, and we start by just doing a conventional train test split, just a single train/test split, and see how that will work.

To do that we have a train_test_split() function that makes it pretty easy. So, the way this works is we feed into train_test_split() a set of feature data. iris.data just contains all the actual measurements of each flower. iris.target is basically the thing we're trying to predict.
In this case, it contains all the species for each flower. test_size=0.4 says what percentage we want to train versus test. So, 0.4 means we're going to extract 40% of that data randomly for testing purposes, and use 60% for training purposes.

What this gives us back is 4 datasets, basically: a training dataset and a test dataset for both the feature data and the target data. So, X_train ends up containing 60% of our Iris measurements, and X_test contains the 40% of the measurements used for testing the results of our model. y_train and y_test contain the actual species for each one of those segments.

Then after that we go ahead and build an SVC model for predicting Iris species given their measurements, and we build that only using the training data. We fit this SVC model, using a linear kernel, using only the training feature data, and the training species data, that is, target data. We call that model clf.

Then, we call the score() function on clf to just measure its performance against our test dataset. So, we score this model against the test data we reserved for the Iris measurements, and the test Iris species, and see how well it does:

It turns out it does really well! Over 96% of the time, our model is able to correctly predict the species of an Iris that it had never seen before, just based on the measurements of that Iris. So that's pretty cool!

But, this is a fairly small dataset, about 150 flowers if I remember right. So, we're only using 60% of 150 flowers for training and only 40% of 150 flowers for testing. These are still fairly small numbers, so we could still be overfitting to our specific train/test split that we made. So, let's use k-fold cross-validation to protect against that. It turns out that using k-fold cross-validation, even though it's a more robust technique, is actually even easier to use than train/test. So, that's pretty cool! So, let's see how that works:

# We give cross_val_score a model, the entire dataset and its "real" values, and the number of folds:
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)

# Print the accuracy for each fold:
print(scores)

# And the mean accuracy of all 5 folds:
print(scores.mean())
We have a model already, the SVC model that we defined for this prediction, and all you need to do is call cross_val_score() on the cross_validation package. So, you pass in to this function a model of a given type (clf), the entire dataset that you have of all of the measurements, that is, all of my feature data (iris.data), and all of my target data (all of the species), iris.target.

I want cv=5, which means it's actually going to use 5 different training datasets while reserving 1 for testing. Basically, it's going to run it 5 times, and that's all we need to do. That will automatically evaluate our model against the entire dataset, split up five different ways, and give us back the individual results.

If we print back the output of that, it gives us back a list of the actual error metric from each one of those iterations, that is, each one of those folds. We can average those together to get an overall error metric based on k-fold cross-validation:

When we do this over 5 folds, we can see that our results are even better than we thought! 98% accuracy. So that's pretty cool! In fact, in a couple of the runs we had perfect accuracy. So that's pretty amazing stuff.

Now let's see if we can do even better. We used a linear kernel before; what if we used a polynomial kernel and got even fancier? Will that be overfitting or will it actually better fit the data that we have? That kind of depends on whether there's actually a linear relationship or a polynomial relationship between these petal measurements and the actual species or not. So, let's try that out:

clf = svm.SVC(kernel='poly', C=1).fit(X_train, y_train)
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)
print(scores.mean())

We'll just run this all again, using the same technique. But this time, we're using a polynomial kernel. We'll fit that to our training dataset, and it doesn't really matter where you fit to in this case, because cross_val_score() will just keep re-running it for you.
16,561 | it turns out that when we use polynomial fitwe end up with an overall score that' even lower than our original run sothis tells us that the polynomial kernel is probably overfitting when we use -fold cross-validation it reveals an actual lower score than with our linear kernel the important point here is that if we had just used single train/test splitwe wouldn' have realized that we were overfitting we would have actually gotten the same result if we just did single train/test split here as we did on the linear kernel sowe might inadvertently be overfitting our data thereand not have even known it had we not use kfold cross-validation sothis is good example of where -fold comes to the rescueand warns you of overfittingwhere single train/test split might not have caught that sokeep that in your tool chest if you want to play around with this some morego ahead and try different degrees soyou can actually specify different number of degrees the default is degrees for the polynomial kernelbut you can try different oneyou can try two does that do betterif you go down to onethat degrades basically to linear kernelrightsomaybe there is still polynomial relationship and maybe it' only second degree polynomial try it out and see what you get back that' -fold cross-validation as you can seeit' very easy to use thanks to scikit-learn it' an important way to measure how good your model is in very robust manner data cleaning and normalisation nowthis is one of the simplestbut yet it might be the most important section in this whole book we're going to talk about cleaning your input datawhich you're going to spend lot of your time doing how well you clean your input data and understand your raw input data is going to have huge impact on the quality of your results maybe even more so than what model you choose or how well you tune your models sopay attentionthis is important stuffcleaning your raw input data is often the most importantand timeconsumingpart of your job as data scientist |
16,562 | let' talk about an inconvenient truth of data scienceand that' that you spend most of your time actually just cleaning and preparing your dataand actually relatively little of it analyzing it and trying out new algorithms it' not quite as glamorous as people might make it out to be all the time butthis is an extremely important thing to pay attention to there are lot of different things that you might find in raw data data that comes in to youjust raw datais going to be very dirtyit' going to be polluted in many different ways if you don' deal with it it' going to skew your resultsand it will ultimately end up in your business making the wrong decisions if it comes back that you made mistake where you ingested bunch of bad data and didn' account for itdidn' clean that data upand what you told your business was to do something based on those results that later turn out to be completely wrongyou're going to be in lot of troublesopay attentionthere are lot of different kinds of problems and data that you need to watch out foroutliersso maybe you have people that are behaving kind of strangely in your dataand when you dig into themthey turn out to be data you shouldn' be looking at the in first place good example would be if you're looking at web log dataand you see one session id that keeps coming back overand overand over againand it keeps doing something at ridiculous rate that human could never do what you're probably seeing there is robota script that' being run somewhere to actually scrape your website it might even be some sort of malicious attack but at any rateyou don' want that behavior data informing your models that are meant to predict the behavior of real human beings using your website sowatching for outliers is one way to identify types of data that you might want to strip out of your model when you're building it missing datawhat do you do when data' just not theregoing back to the example of web logyou might have referrer in that lineor you might not what do you do if it' not theredo you create new classification for missingor not specifiedor do you throw that line out entirelyyou have to think about what the right thing to do is there malicious datathere might be people trying to game your systemthere might be people trying to cheat the systemand you don' want those people getting away with it let' say you're making recommender system there could be people out there trying to fabricate behavior data in order to promote their new itemrightsoyou need to be on the lookout for that sort of thingand make sure that you're identifying the shilling attacksor other types of attacks on your input dataand filtering them out from results and don' let them win |
16,563 | erroneous datawhat if there' software bug somewhere in some system that' just writing out the wrong values in some set of situationsit can happen unfortunatelythere' no good way for you to know about that butif you see data that just looks fishy or the results don' make sense to youdigging in deeply enough can sometimes uncover an underlying bug that' causing the wrong data to be written in the first place maybe things aren' being combined properly at some point maybe sessions aren' being held throughout the entire session people might be dropping their session id and getting new session ids as they go through websitefor example irrelevant dataa very simple one here maybe you're only interested in data from new york city peopleor something for some reason in that case all the data from people from the rest of the world is irrelevant to what you're trying to find out the first thing you want to do is just throw all that data that away and restrict your datawhittle it down to the data that you actually care about inconsistent datathis is huge problem for examplein addressespeople can write the same address in many different waysthey might abbreviate street or they might not abbreviate streetthey might not put street at the end of the street name at all they might combine lines together in different waysthey might spell things differentlythey might use zip code in the us or zip plus code in the usthey might have country on itthey might not have country on it you need to somehow figure out what are the variations that you see and how can you normalize them all together maybe ' looking at data about movies movie might have different names in different countriesor book might have different names in different countriesbut they mean the same thing soyou need to look out for these things where you need to normalize your datawhere the same data can be represented in many different waysand you need to combine them together in order to get the correct results formattingthis can also be an issuethings can be inconsistently formatted take the example of datesin the us we always do monthdayyear (mm/dd/yy)but in other countries they might do daymonthyear (dd/mm/yy)who knows you need to be aware of these formatting differences maybe phone numbers have parentheses around the area codemaybe they don'tmaybe they have dashes between each section of the numbersmaybe they don'tmaybe social security numbers have dashesmaybe they don' these are all things that you need to watch out forand you need to make sure that variations in formatting don' get treated as different entitiesor different classifications during your processing |
16,564 | sothere are lots of things to watch out forand the previous list names just the main ones to be aware of remembergarbage ingarbage out your model is only as good as the data that you give to itand this is extremelyextremely trueyou can have very simple model that performs very well if you give it large amount of clean dataand it could actually outperform complex model on more dirty dataset thereforemaking sure that you have enough dataand high-quality data is often most of the battle you' be surprised how simple some of the most successful algorithms used in the real world are they're only successful by virtue of the quality of the data going into itand the amount of data going into it you don' always need fancy techniques to get good results oftenthe quality and quantity of your data counts just as much as anything else always question your resultsyou don' want to go back and look for anomalies in your input data only when you get result that you don' like that will introduce an unintentional bias into your results where you're letting results that you likeor expectgo through unquestionedrightyou want to question things all the time to make sure that you're always looking out for these things because even if you find result you likeif it turns out to be wrongit' still wrongit' still going to be informing your company in the wrong direction that could come back to bite you later on as an examplei have website called no-hate news it' non-profitso ' not trying to make any money by telling you about it let' say just want to find the most popular pages on this website that own that sounds like pretty simple problemdoesn' iti should just be able to go through my web logsand count up how many hits each page hasand sort themrighthow hard can it be?wellturns out it' really hardsolet' dive into this example and see why it' difficultand see some examples of real-world data cleanup that has to happen cleaning web log data we're going to show the importance of cleaning your data have some web log data from little website that own we are just going to try to find the top viewed pages on that website sounds pretty simplebut as you'll seeit' actually quite challengingsoif you want to follow alongthe pq bhftjqzoc is the notebook that we're working from here let' start |
I actually have an access log that I took from my actual website. It's a real HTTP access log from Apache and is included in your book materials. So, if you do want to play along here, make sure you update the path to point the access log to wherever you saved the book materials:

logPath = ":\\sundog-consult\\Packt\\DataScience\\access_log.txt"

Applying a regular expression on the web log

So, I went and got the following little snippet of code off of the Internet that will parse an Apache access log line into a bunch of fields:

format_pat = re.compile(
    r"(?P<host>[\d\.]+)\s"
    r"(?P<identity>\S*)\s"
    r"(?P<user>\S*)\s"
    r"\[(?P<time>.*?)\]\s"
    r'"(?P<request>.*?)"\s'
    r"(?P<status>\d+)\s"
    r"(?P<bytes>\S*)\s"
    r'"(?P<referer>.*?)"\s'
    r'"(?P<user_agent>.*?)"\s*'
)

This code contains things like the host, the user, the time, the actual page request, the status, the referrer, and user_agent (meaning which browser actually was used to view this page). It builds up what's called a regular expression, and we're using the re library to use it. That's basically a very powerful language for doing pattern matching on a large string. So, we can actually apply this regular expression to each line of our access log, and automatically group the bits of information in that access log line into these different fields. Let's go ahead and run this.

The obvious thing to do here: let's just whip up a little script that counts up each URL that we encounter that was requested, and keeps count of how many times it was requested. Then we can sort that list and get our top pages, right? Sounds simple enough!

So, we're going to construct a little Python dictionary called URLCounts. We're going to open up our log file, and for each line, we're going to apply our regular expression. If it actually comes back with a successful match for the pattern that we're trying to match, we'll say, okay, this looks like a decent line in our access log.
Let's extract the request field out of it, which is the actual HTTP request: the page which is actually being requested by the browser. We're going to split that up into its three components: it consists of an action, like GET or POST; the actual URL being requested; and the protocol being used. Given that information split out, we can then just see if that URL already exists in my dictionary. If so, I will increment the count of how many times that URL has been encountered by 1; otherwise, I'll introduce a new dictionary entry for that URL and initialize it to the value of 1. I do that for every line in the log, sort the results in reverse order, numerically, and print them out:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            (action, URL, protocol) = request.split()
            if URLCounts.has_key(URL):
                URLCounts[URL] = URLCounts[URL] + 1
            else:
                URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))

So, let's go ahead and run that.

Oops! We end up with this big old error here. It's telling us that we need more than 1 value to unpack. So apparently, we're getting some request fields that don't contain an action, a URL, and a protocol, and that contain something else instead.
Let's see what's going on there! So, if we print out all the requests that don't contain three items, we'll see what's actually showing up. So, what we're going to do here is write a similar little snippet of code, but we're going to actually do that split on the request field, and print out the cases where we don't get the expected three fields:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) != 3):
                print(fields)

Let's see what's actually in there. So, we have a bunch of empty fields. That's our first problem. But then we have a first field that's full of just garbage. Who knows where that came from, but it's clearly erroneous data. Okay, fine, let's modify our script.
Modification one: filtering the request field

We'll actually just throw out any lines that don't have the expected three fields in the request. That seems like a legitimate thing to do, because this does in fact have completely useless data inside of it; it's not like we're missing out on anything here by doing that. So, we'll modify our script to do that. We've introduced an if (len(fields) == 3) check before it actually tries to process the URL. We'll run that:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) == 3):
                URL = fields[1]
                if URLCounts.has_key(URL):
                    URLCounts[URL] = URLCounts[URL] + 1
                else:
                    URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
Hey, we got a result! But this doesn't really look like the top pages on my website. Remember, this is a news site. So, we're getting a bunch of hits on PHP files, that is, scripts rather than web pages. What's going on there? Our top result is this xmlrpc.php script, and then wp_login.php, followed by the homepage. So, not very useful. Then there is robots.txt, then a bunch of XML files.

You know, when I looked into this later on, it turned out that my site was actually under a malicious attack; someone was trying to break into it. This xmlrpc.php script was the way they were trying to guess at my passwords, and they were trying to log in using the login script. Fortunately, I shut them down before they could actually get through to this website.

This was an example of malicious data being introduced into my data stream that I have to filter out. So, by looking at that, we can see that not only was that malicious attack hitting PHP files, but it was also trying to execute stuff. It wasn't just doing a GET request, it was doing a POST request on the script to actually try to execute code on my website.
Modification two: filtering post requests

Now, I know that the data I care about, in the spirit of the thing I'm trying to figure out, is people getting web pages from my website. So, a legitimate thing for me to do is to filter out anything that's not a GET request out of these logs. So, let's do that next. We're going to check again if we have three fields in our request field, and then we're also going to check if the action is GET. If it's not, we're just going to ignore that line entirely:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) == 3):
                (action, URL, protocol) = fields
                if (action == 'GET'):
                    if URLCounts.has_key(URL):
                        URLCounts[URL] = URLCounts[URL] + 1
                    else:
                        URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
We should be getting closer to what we want now. The following is the output of the preceding code.

Yeah, this is starting to look more reasonable. But, it still doesn't really pass a sanity check. This is a news website; people go to it to read news. Are they really reading my little blog on it that just has a couple of articles? I don't think so! That seems a little bit fishy. So, let's dive in a little bit, and see who's actually looking at those blog pages. If you were to actually go into that file and examine it by hand, you would see that a lot of these blog requests don't actually have any user agent on them. They just have a user agent of "-", which is highly unusual: if a real human being with a real browser was trying to get this page, it would say something like Mozilla, or Internet Explorer, or Chrome, or something like that. So, it seems that these requests are coming from some sort of scraper. Again, potentially malicious traffic that's not identifying who it is.
Modification three: checking the user agents

Maybe we should be looking at the UserAgents too, to see if these are actual humans making requests, or not. Let's go ahead and print out all the different UserAgents that we're encountering. So, in the same spirit as the code that summed up the different URLs we were seeing, we can look at all the different UserAgents that we were seeing, and sort them by the most popular user_agent strings in this log:

UserAgents = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if UserAgents.has_key(agent):
                UserAgents[agent] = UserAgents[agent] + 1
            else:
                UserAgents[agent] = 1

results = sorted(UserAgents, key=lambda i: int(UserAgents[i]), reverse=True)

for result in results:
    print(result + ": " + str(UserAgents[result]))
We get the following result.

You can see most of it looks legitimate. So, if it's a scraper, and in this case it actually was a malicious attack, they were actually pretending to be a legitimate browser. But this dash user_agent shows up a lot too. So, I don't know what that is, but I know that it isn't an actual browser.

The other thing I'm seeing is a bunch of traffic from spiders, from web crawlers. So, there is Baidu, which is a search engine in China; there is Googlebot just crawling the page. I think I saw Yandex in there too, a Russian search engine. So, our data is being polluted by a lot of crawlers that are just trying to mine our website for search engine purposes. Again, that traffic shouldn't count toward the intended purpose of my analysis, of seeing what pages actual human beings are looking at on my website. These are all automated scripts.
Filtering the activity of spiders/robots

Alright, so this gets a little bit tricky. There's no real good way of identifying spiders or robots just based on the user string alone. But we can at least take a legitimate crack at it, and filter out anything that has the word "bot" in it, or anything from my caching plugin that might be requesting pages in advance as well. We'll also strip out our friend, the single dash. So, we will once again refine our script to, in addition to everything else, strip out any UserAgents that look fishy:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if (not('bot' in agent or 'spider' in agent or
                    'Bot' in agent or 'Spider' in agent or
                    'W3 Total Cache' in agent or agent == '-')):
                request = access['request']
                fields = request.split()
                if (len(fields) == 3):
                    (action, URL, protocol) = fields
                    if (action == 'GET'):
                        if URLCounts.has_key(URL):
                            URLCounts[URL] = URLCounts[URL] + 1
                        else:
                            URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))

What do we get?

Alright, so here we go! This is starting to look more reasonable for the first two entries: the homepage is most popular, which would be expected. Orlando headlines is also popular, because I use this website more than anybody else, and I live in Orlando. But after that, we get a bunch of stuff that aren't web pages at all: a bunch of scripts, a bunch of CSS files. Those aren't web pages.
Modification four: applying website-specific filters

I can just apply some knowledge about my site, where I happen to know that all the legitimate pages on my site end with a slash in their URL. So, let's go ahead and modify this again, to strip out anything that doesn't end with a slash:

URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if (not('bot' in agent or 'spider' in agent or
                    'Bot' in agent or 'Spider' in agent or
                    'W3 Total Cache' in agent or agent == '-')):
                request = access['request']
                fields = request.split()
                if (len(fields) == 3):
                    (action, URL, protocol) = fields
                    if (URL.endswith("/")):
                        if (action == 'GET'):
                            if URLCounts.has_key(URL):
                                URLCounts[URL] = URLCounts[URL] + 1
                            else:
                                URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
Let's run that!

Finally, we're getting some results that seem to make sense! So, it looks like the top page requested from actual human beings on my little No-Hate News site is the homepage, followed by orlando-headlines, followed by world news, followed by the comics, then the weather, and the about screen. So, this is starting to look more legitimate.

If you were to dig even deeper, though, you'd see that there are still problems with this analysis. For example, those feed pages are still coming from robots just trying to get RSS data from my website. So, this is a great parable in how a seemingly simple analysis requires a huge amount of pre-processing and cleaning of the source data before you get results that make any sense.

Again, make sure the things you're doing to clean your data along the way are principled, and that you're not just cherry-picking problems that don't match your preconceived notions. So, always question your results, always look at your source data, and look for weird things that are in it.
Activity for web log data

Alright, if you want to mess with this some more, you can. Solve that feed problem! Go ahead and strip out things that include "feed", because we know that's not a real web page, just to get some familiarity with the code. Or, go look at the log a little bit more closely, and gain some understanding as to where those feed pages are actually coming from. Maybe there's an even better and more robust way of identifying that traffic as a larger class. So, feel free to mess around with that. But I hope you learned your lesson: data cleaning is hugely important, and it's going to take a lot of your time!

So, it's pretty surprising how hard it was to get some reasonable results on a simple question like "What are the top viewed pages on my website?" You can imagine, if that much work had to go into cleaning the data for such a simple problem, think about all the nuanced ways that dirty data might actually impact the results of more complex problems, and more complex algorithms. It's very important to understand your source data: look at it, look at a representative sample of it, and make sure you understand what's coming into your system. Always question your results, and tie them back to the original source data to see where questionable results are coming from.

Normalizing numerical data

This is a very quick section: I just want to remind you about the importance of normalizing your data, making sure that your various input feature data is on the same scale, and is comparable. Sometimes it matters, and sometimes it doesn't. But you just have to be cognizant of when it does. Just keep that in the back of your head, because sometimes it will affect the quality of your results if you don't.

So, sometimes models will be based on several different numerical attributes. If you remember multivariate models, we might have different attributes of a car that we're looking at, and they might not be directly comparable measurements. Or, for example, if we're looking at relationships between ages and incomes, ages might range from 0 to 100, but incomes in dollars might range from 0 to billions, and depending on the currency it could be an even larger range! Some models are okay with that.
If you're doing a regression, usually that's not a big deal. But other models don't perform so well unless those values are scaled down first to a common scale. If you're not careful, you can end up with some attributes counting more than others. Maybe the income would end up counting much more than the age, if you were trying to treat those two values as comparable values in your model.

This can also introduce a bias in the attributes, which can also be a problem. Maybe one set of your data is skewed; you know, sometimes you need to normalize things against the actual range seen for that set of values, and not just to a 0-to-whatever-the-maximum-is scale. There's no set rule as to when you should and shouldn't do this sort of normalization. All I can say is: always read the documentation for whatever technique you're using.

So, for example, in scikit-learn their PCA implementation has a whiten option that will automatically normalize your data for you. You should probably use that. It also has some preprocessing modules available that will normalize and scale things for you automatically as well.

Be aware too of textual data that should actually be represented numerically, or ordinally. If you have yes or no data, you might need to convert that to 1 or 0, and do that in a consistent manner. So again, just read the documentation. Most techniques do work fine with raw, unnormalized data, but before you start using a new technique for the first time, just read the documentation and understand whether or not the inputs should be scaled, normalized, or whitened first. If so, scikit-learn will probably make it very easy for you to do; you just have to remember to do it!

Don't forget to rescale your results when you're done, if you are scaling the input data. If you want to be able to interpret the results you get, sometimes you need to scale them back up to their original range after you're done. If you are scaling things, and maybe even biasing them by a certain amount before you input them into a model, make sure that you unscale them and unbias them before you actually present those results to somebody, or else they won't make any sense. And just a little reminder, a little bit of a parable if you will: always check to see if you should normalize or whiten your data before you pass it into a given model.

There is no exercise associated with this section; it's just something I want you to remember. I'm just trying to drive the point home. Some algorithms require whitening, or normalization; some don't. So, always read the documentation! If you do need to normalize the data going into an algorithm, it will usually tell you so, and it will make it very easy to do. Please just be aware of that!
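As a concrete illustration of the preprocessing helpers mentioned above, here is a minimal sketch using scikit-learn; the feature values are made up, and this is not code from the book's notebooks:

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Two made-up features on very different scales: age in years, income in dollars.
X = np.array([[25.0,  40000.0],
              [35.0,  95000.0],
              [60.0, 250000.0]])

# StandardScaler rescales each column to zero mean and unit variance...
print(StandardScaler().fit_transform(X))

# ...while MinMaxScaler squeezes each column into the 0 to 1 range instead.
print(MinMaxScaler().fit_transform(X))

Either way, remember to apply the same fitted scaler to any new data, and to transform results back to the original units if you need to interpret them.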
Detecting outliers

A common problem with real-world data is outliers. You'll always have some strange users, or some strange agents, polluting your data, acting abnormally and atypically compared to the typical user. They might be legitimate outliers; they might be caused by real people and not by some sort of malicious traffic or fake data. So sometimes it's appropriate to remove them, and sometimes it isn't. Make sure you make that decision responsibly! So, let's dive into some examples of dealing with outliers.

For example, if I'm doing collaborative filtering, and I'm trying to make movie recommendations or something like that, you might have a few power users that have watched every movie ever made, and rated every movie ever made. They could end up having an inordinate influence on the recommendations for everybody else. You don't really want a handful of people to have that much power in your system. So, that might be an example where it would be a legitimate thing to filter out an outlier, and identify them by how many ratings they've actually put into the system. Or, maybe an outlier would be someone who doesn't have enough ratings.

We might be looking at web log data, like we saw in our example earlier when we were doing data cleaning; outliers could be telling you that there's something very wrong with your data to begin with. It could be malicious traffic, it could be bots, or other agents that should be discarded because they don't represent the actual human beings that you're trying to model.

If someone really wanted the mean average income in the United States (and not the median), you shouldn't just throw out Donald Trump because you don't like him. You know, the fact is, his billions of dollars are going to push that mean amount up, even if it doesn't budge the median. So, don't fudge your numbers by throwing out outliers; but do throw out outliers if they're not consistent with what you're trying to model in the first place.

Now, how do we identify outliers? Well, remember our old friend the standard deviation? We covered that very early in this book. It's a very useful tool for detecting outliers. You can, in a very principled manner, compute the standard deviation of a dataset that should have a more or less normal distribution. If you see a data point that's outside of one or two standard deviations, there you have an outlier. Remember, we talked earlier about box and whisker diagrams too, and those also have a built-in way of detecting and visualizing outliers. Those diagrams define outliers as lying more than 1.5 times the interquartile range beyond the quartiles.
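To make that box-and-whisker rule concrete, here is a minimal sketch of the usual 1.5 x IQR test; the numbers are made up for illustration:

import numpy as np

data = np.array([10, 12, 11, 13, 12, 14, 11, 500])  # 500 is an obvious outlier

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Anything more than 1.5 IQRs beyond the quartiles gets flagged, which is
# the same rule box-and-whisker plots use to draw their outlier points.
outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]
print(outliers)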
What multiple do you choose? Well, you kind of have to use common sense; there's no hard and fast rule as to what is an outlier. You have to look at your data and kind of eyeball it: look at the distribution, look at the histogram. See if there are actual things that stick out to you as obvious outliers, and understand what they are before you just throw them away.

Dealing with outliers

So, let's take some example code, and see how you might handle outliers in practice. Let's mess around with some outliers. It's a pretty simple section; a little bit of review, actually. If you want to follow along, we're in Outliers.ipynb. So, go ahead and open that up if you'd like:

import numpy as np

incomes = np.random.normal(27000, 15000, 10000)
incomes = np.append(incomes, [1000000000])

import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()

We did something very similar, very early in the book, where we created a fake histogram of income distribution in the United States. What we're going to do is start off with a normal distribution of incomes that has a mean of $27,000 per year, with a standard deviation of 15,000. I'm going to create 10,000 fake Americans that have an income in that distribution. This is totally made-up data, by the way, although it's not that far off from reality.

Then, I'm going to stick in an outlier; call it Donald Trump, who has a billion dollars. We're going to stick this guy in at the end of our dataset. So, we have a normally distributed dataset around $27,000, and then we're going to stick in Donald Trump at the end.
We'll go ahead and plot that as a histogram.

Wow, that's not very helpful! We have the entire normal distribution of everyone else in the country squeezed into one bucket of the histogram. On the other hand, we have Donald Trump out at the right side screwing up the whole thing at a billion dollars.

The other problem is that if I'm trying to answer the question "How much money does the typical American make?", and I take the mean to try and figure that out, it's not going to be a very good, useful number:

incomes.mean()

The output of the preceding code shows that Donald Trump has pushed that number up all by himself, to roughly $127,000, when I know that the real mean of my normally distributed data, excluding Donald Trump, is only around $27,000. So, the right thing to do there would be to use the median instead of the mean.
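If you want to see that for yourself, a couple of lines will do it; this assumes the incomes array from the previous cell is still in scope:

# The single billionaire drags the mean up to roughly $127,000, but barely
# moves the median, which only cares about the middle of the sorted data.
print(np.mean(incomes))
print(np.median(incomes))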
But, let's say we had to use the mean for some reason, and the right way to deal with this would be to exclude outliers like Donald Trump. So, we need to figure out how we identify these people. Well, you could just pick some arbitrary cutoff and say, "I'm going to throw out all the billionaires", but that's not a very principled thing to do. Where did a billion come from? It's just some accident of how we count numbers. So, a better thing to do would be to actually measure the standard deviation of your dataset, and identify outliers as being some multiple of a standard deviation away from the mean.

So, the following is a little function that I wrote that does just that. It's called reject_outliers():

def reject_outliers(data):
    u = np.median(data)
    s = np.std(data)
    filtered = [e for e in data if (u - 2 * s < e < u + 2 * s)]
    return filtered

filtered = reject_outliers(incomes)

plt.hist(filtered, 50)
plt.show()

It takes in a list of data and finds the median. It also finds the standard deviation of that dataset. So, I filter on that, so I only preserve data points that are within two standard deviations of the median for my data. So, I can use this handy-dandy reject_outliers() function on my income data to actually strip out weird outliers automatically.
Sure enough, it works! I get a much prettier graph now that excludes Donald Trump and focuses in on the more typical dataset here in the center. So, pretty cool stuff! That's one example of identifying outliers, and automatically removing them, or dealing with them however you see fit. Remember, always do this in a principled manner. Don't just throw out outliers because they're inconvenient. Understand where they're coming from, and how they actually affect the thing you're trying to measure in spirit.

By the way, our mean is also much more meaningful now; much closer to the $27,000 that it should be, now that we've gotten rid of that outlier.

Activity for outliers

So, if you want to play around with this, just fiddle around with it like I normally ask you to do. Try different multiples of the standard deviation, try adding in more outliers, try adding in outliers that aren't quite as outlier-ish as Donald Trump. You know, just fabricate some extra fake data there and play around with it; see if you can identify those people successfully.

So there you have it! Outliers: a pretty simple concept. That's an example of identifying outliers by looking at standard deviations, and just deciding on the number of standard deviations from the mean or median that you care about. Median is probably a better choice, actually, given that the outliers might be skewing the mean in and of themselves, right? So, by using the standard deviation, that's a good way of identifying outliers in a more principled manner than just picking some arbitrary cutoff. Again, you need to decide what the right thing to do is with those outliers. What are you actually trying to measure? Is it appropriate to actually discard them or not? So, keep that in your head!

Summary

In this chapter, we talked about the importance of striking a balance between bias and variance and minimizing error. Next, we saw the concept of k-fold cross-validation and how to implement it in Python to prevent overfitting. We learned the importance of cleaning data and normalizing it before processing it. We then saw an example of determining the popular pages of a website. In the next chapter, Apache Spark: Machine Learning on Big Data, we'll learn about machine learning on big data using Apache Spark.
Apache Spark: Machine Learning on Big Data

So far in this book we've talked about a lot of general data mining and machine learning techniques that you can use in your data science career, but they've all been running on your desktop. As such, you can only run as much data as a single machine can process using technologies such as Python and scikit-learn.

Now, everyone talks about big data, and odds are you might be working for a company that does in fact have big data to process. Big data meaning that you can't actually control it all, you can't actually wrangle it all on just one system. You need to actually compute it using the resources of an entire cloud, a cluster of computing resources. And that's where Apache Spark comes in. Apache Spark is a very powerful tool for managing big data, and doing machine learning on large datasets. By the end of the chapter, you will have an in-depth knowledge of the following topics:

Installing and working with Spark
Resilient Distributed Datasets (RDDs)
The MLlib (Machine Learning Library)
Decision trees in Spark
K-means clustering in Spark
Installing Spark

In this section, I'm going to get you set up using Apache Spark, and show you some examples of actually using Apache Spark to solve some of the same problems that we solved using a single computer earlier in this book. The first thing we need to do is get Spark set up on your computer. So, we're going to walk you through how to do that in the next couple of sections. It's pretty straightforward stuff, but there are a few gotchas. So, don't just skip these sections; there are a few things you need to pay special attention to, to get Spark running successfully, especially on a Windows system. Let's get Apache Spark set up on your system, so you can actually dive in and start playing around with it.

We're going to be running this just on your own desktop for now. But the same programs that we're going to write in this chapter could be run on an actual Hadoop cluster. So, you can take these scripts that we're writing and running locally on your desktop in Spark standalone mode, and actually run them from the master node of an actual Hadoop cluster, then let them scale up to the entire power of a Hadoop cluster and process massive datasets that way. Even though we're going to set things up to run locally on your own computer, keep in mind that these same concepts will scale up to running on a cluster as well.

Installing Spark on Windows

Getting Spark installed on Windows involves several steps that we'll walk you through here. I'm just going to assume that you're on Windows, because most people use this book at home. We'll talk a little bit about dealing with other operating systems in a moment. If you're already familiar with installing stuff and dealing with environment variables on your computer, then you can just take the following little cheat sheet and go off and do it. If you're not so familiar with Windows internals, I will walk you through it one step at a time in the upcoming sections. Here are the quick steps for those Windows pros:

Install a JDK: You need to first install a JDK, that's a Java Development Kit. You can just go to Sun's website and download that and install it if you need to. We need the JDK because, even though we're going to be developing in Python during this course, that gets translated under the hood to Scala code, which is what Spark is developed in natively. And Scala, in turn, runs on top of the Java interpreter. So, in order to run Python code, you need a Scala system, which will be installed by default as part of Spark. Also, we need Java, or more specifically Java's interpreter, to actually run that Scala code. It's like a technology layer cake.

Install Python: Obviously you're going to need Python, but if you've gotten to this point in the book, you should already have a Python environment set up, hopefully with Enthought Canopy. So, we can skip this step.
Install a prebuilt version of Spark for Hadoop: Fortunately, the Apache website makes available prebuilt versions of Spark that will just run out of the box, precompiled for the latest Hadoop version. You don't have to build anything; you can just download that to your computer, stick it in the right place, and be good to go for the most part.

Create a conf/log4j.properties file: We have a few configuration things to take care of. One thing we want to do is adjust our warning level so we don't get a bunch of warning spam when we run our jobs. We'll walk through how to do that. Basically, you need to rename one of the properties files, and then adjust the error setting within it.

Add a SPARK_HOME environment variable: Next, we need to set up some environment variables to make sure that you can actually run Spark from any path that you might be in. We're going to add a SPARK_HOME environment variable pointing to where you installed Spark, and then we will add %SPARK_HOME%\bin to your system path, so that when you run spark-submit, or pyspark, or whatever Spark command you need, Windows will know where to find it.

Set a HADOOP_HOME variable: On Windows there's one more thing we need to do: we need to set a HADOOP_HOME variable as well, because Spark is going to expect to find one little bit of Hadoop, even if you're not using Hadoop on your standalone system.

Install winutils.exe: Finally, we need to install a file called winutils.exe. There's a link to winutils.exe within the resources for this book, so you can get it there.

If you want to walk through the steps in more detail, you can refer to the upcoming sections.

Installing Spark on other operating systems

A quick note on installing Spark on other operating systems: the same steps will basically apply on them too. The main difference is going to be in how you set environment variables on your system, in such a way that they will automatically be applied whenever you log in. That's going to vary from OS to OS. macOS does it differently from various flavors of Linux, so you're going to have to be at least a little bit familiar with using a Unix terminal command prompt, and with how to manipulate your environment to do that. But most macOS or Linux users who are doing development already have those fundamentals under their belt. And of course, you're not going to need winutils.exe if you're not on Windows. So, those are the main differences for installing on different OSes.
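Whichever operating system you are on, once those variables are set, a quick sanity check from Python (run from a freshly opened terminal or notebook, so the new values are picked up) might look like this; it is not part of the official setup, just a convenient check:

import os

# Both of these should print the paths you configured, not None.
print(os.environ.get("SPARK_HOME"))
print(os.environ.get("HADOOP_HOME"))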
Installing the Java Development Kit

For installing the Java Development Kit, go back to the browser, open a new tab, and just search for jdk (short for Java Development Kit). This will bring you to the Oracle site, from where you can download Java.
On the Oracle website, click on JDK DOWNLOAD. Now, click on Accept License Agreement, and then you can select the download option for your operating system.
For me, that's going to be Windows 64-bit, and then I wait for all those megabytes of goodness to download.

Once the download is finished, locate the installer and start it running. Note that we can't just accept the default settings in the installer on Windows here. So, this is a Windows-specific workaround, but as of the writing of this book, the current version of Spark has an issue with Java on Windows. The issue is that if you've installed Java to a path that has a space in it, it doesn't work, so we need to make sure that Java is installed to a path that does not have a space in it. This means that you can't skip this step even if you have Java installed already, so let me show you how to do that. On the installer, click on Next, and you will see, as in the following screen, that it wants to install by default to the C:\Program Files\Java\jdk path, whatever the version is:
The space in the Program Files path is going to cause trouble, so let's click on the Change... button and install to c:\jdk, a nice simple path, easy to remember, and with no spaces in it! Now, it also wants to install the Java Runtime Environment, so just to be safe, I'm also going to install that to a path with no spaces.
At the second step of the JDK installation, we should have this showing on our screen:
I will change that destination folder as well, and we will make a new folder called c:\jre for that.

Alright, successfully installed. Woohoo!

Now, you'll need to remember the path that we installed the JDK into, which in our case was c:\jdk. We still have a few more steps to go here. Next, we need to install Spark itself.
Installing Spark

Let's get back to a new browser tab here; head to spark.apache.org, and click on the Download Spark button.

Now, we used a specific Spark release for this book, but that release or anything newer should work just fine.
Make sure you get a prebuilt version, and select the Direct Download option, so all of those defaults are perfectly fine. Go ahead and click on the download link in the numbered instructions to download that package.

Now, it downloads a TGZ (tar in GZip) file, which you might not be familiar with. Windows is kind of an afterthought with Spark, quite honestly, because on Windows you're not going to have a built-in utility for actually decompressing TGZ files. This means that you might need to install one, if you don't have one already. The one I use is called WinRAR, and you can pick that up from www.rarlab.com. Go to the Downloads page if you need it, and download the installer for WinRAR 32-bit or 64-bit, depending on your operating system. Install WinRAR as normal, and that will allow you to actually decompress TGZ files on Windows.
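If you would rather not install a third-party tool, Python's built-in tarfile module can also unpack a .tgz archive. Here is a minimal sketch, where the file name is just a placeholder for whatever you actually downloaded:

import tarfile

# Extract the downloaded Spark archive into the current directory.
with tarfile.open("spark-x.y.z-bin-hadoopX.Y.tgz", "r:gz") as archive:
    archive.extractall()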
So, let's go ahead and decompress the TGZ file. I'm going to open up my Downloads folder to find the Spark archive that we downloaded, and let's go ahead and right-click on that archive and extract it to a folder of my choosing; I'm just going to put it in my Downloads folder for now. Again, WinRAR is doing this for me at this point:
So, I should now have a folder in my Downloads folder associated with that package. Let's open that up, and there is Spark itself. You should see something like the folder content shown below. So, you need to install it in some place that you can remember.

You don't want to leave it in your Downloads folder, obviously, so let's go ahead and open up a new File Explorer window. I go to my C drive and create a new folder, and let's just call it spark. So, my Spark installation is going to live in C:\spark. Again, nice and easy to remember. Open that folder. Now, I go back to my downloaded Spark folder and use Ctrl + A to select everything in the Spark distribution, Ctrl + C to copy it, and then go back to C:\spark, where I want to put it, and Ctrl + V to paste it in:
Remembering to paste the contents of the Spark distribution folder, not the distribution folder itself, is very important. So, what I should have now is my C drive with a spark folder that contains all of the files and folders from the Spark distribution.

Well, there are still a few things we need to configure. So, while we're in C:\spark, let's open up the conf folder, and in order to make sure that we don't get spammed to death by log messages, we're going to change the logging level setting here. So, to do that, right-click on the log4j.properties.template file and select Rename: