Again, doing that from first principles: to get the correlation between two sets of attributes, we compute the standard deviation of each, then compute the covariance between the two, and divide by the product of those standard deviations. That gives us the correlation value, which is normalized to the range -1 to 1. Here we end up with a negative value, which tells us there is some correlation between these two things in the negative direction. It's not a perfect line, that would be -1, but there's something interesting going on there. A -1 correlation coefficient means perfect negative correlation, 0 means no correlation, and 1 means perfect positive correlation.

Computing correlation - the NumPy way
Now, NumPy can actually compute correlation for you, using the corrcoef function. Let's look at the following code:

np.corrcoef(pageSpeeds, purchaseAmount)

This single line gives the following output: a 2 x 2 array of correlation values. So, if we wanted to do this the easy way, we could just use np.corrcoef(pageSpeeds, purchaseAmount), and what that gives you back is an array containing the correlation between every possible combination of the sets of data that you pass in. The way to read the output is: the 1s imply there is perfect correlation when comparing pageSpeeds to itself and purchaseAmount to itself, which is expected. But when you start comparing pageSpeeds to purchaseAmount, or purchaseAmount to pageSpeeds, you end up with the same negative value, which is roughly what we got when we did it the hard way. There will be little precision errors, but they're not really important.
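To make the "hard way" described above concrete, here is a minimal sketch of computing correlation from the covariance and the two standard deviations. The data here is fabricated on the spot just so the snippet stands alone; in the notebook you would use the pageSpeeds and purchaseAmount arrays you already have:

import numpy as np

np.random.seed(0)

def correlation(x, y):
    # covariance of x and y, divided by the product of their standard deviations
    xdev = x - x.mean()
    ydev = y - y.mean()
    covariance = (xdev * ydev).sum() / (len(x) - 1)
    return covariance / (x.std(ddof=1) * y.std(ddof=1))

# loosely related fake data, just for illustration
pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = np.random.normal(50.0, 10.0, 1000) - pageSpeeds * 3

print(correlation(pageSpeeds, purchaseAmount))   # the hard way
print(np.corrcoef(pageSpeeds, purchaseAmount))   # NumPy's 2 x 2 matrix, for comparison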
Now, we could force a perfect correlation by fabricating a totally linear relationship, so let's take a look at an example of that:

purchaseAmount = 100 - pageSpeeds * 3

scatter(pageSpeeds, purchaseAmount)

correlation(pageSpeeds, purchaseAmount)

And again, here we would expect the correlation to come out to -1 for a perfect negative correlation, and in fact, that's what we end up with. Again, a reminder: correlation does not imply causality. Just because people might spend more if they have faster page speeds, maybe that just means that they can afford a better internet connection. Maybe that doesn't mean that there's actually causation between how fast your pages render and how much people spend, but it tells you there's an interesting relationship that's worth investigating more. You cannot say anything about causality without running an experiment, but correlation can tell you what experiments you might want to run.
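One more NumPy helper before we move on: np.cov computes the covariance matrix directly, and it's the function the activity below asks you to experiment with. As a hedged sketch of how it relates to what we just computed (again assuming the pageSpeeds and purchaseAmount arrays from the example above are in scope):

import numpy as np

covMatrix = np.cov(pageSpeeds, purchaseAmount)   # 2 x 2: variances on the diagonal, covariance off it
covariance = covMatrix[0, 1]
correlation = covariance / np.sqrt(covMatrix[0, 0] * covMatrix[1, 1])
print(correlation)   # should match the off-diagonal entry of np.corrcoef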
Correlation activity
So get your hands dirty, roll up your sleeves: I want you to use the numpy.cov function. That's actually a way to get NumPy to compute covariance for you (the short np.cov sketch just above shows the basic call). We saw how to compute correlation using the corrcoef function, so go back and rerun these examples just using the numpy.cov function and see if you get the same results or not. It should be pretty darn close. So, instead of doing it the hard way with the covariance function that I wrote from scratch, just use NumPy and see if you can get the same results. Again, the point of this exercise is to get you familiar with using NumPy and applying it to actual data. So have at it, see where you get.

And there you have it: covariance and correlation, both in theory and in practice. A very useful technique to have, so definitely remember this section. Let's move on.

Conditional probability
Next, we're going to talk about conditional probability. It's a very simple concept: it's trying to figure out the probability of something happening given that something else occurred. Although it sounds simple, it can actually be very difficult to wrap your head around some of the nuances of it. So get an extra cup of coffee, make sure your thinking cap's on, and if you're ready for some more challenging concepts, let's do this.

Conditional probability is a way to measure the relationship between two things happening. Let's say I want to find the probability of an event happening given that another event already happened. Conditional probability gives you the tools to figure that out.

What I'm trying to find out with conditional probability is whether I have two events that depend on each other. That is, what's the probability that both will occur? In mathematical notation, P(A,B) represents the probability of both A and B occurring independent of each other. That is, what's the probability of both of these things happening irrespective of everything else. Whereas the notation P(B|A) is read as the probability of B given A. So, what is the probability of B given that event A has already occurred? It's a little bit different, and these things are related like this:
The probability of B given A is equal to the probability of A and B occurring together over the probability of A alone occurring:

P(B|A) = P(A,B) / P(A)

So this teases out the probability of B being dependent on the probability of A.

It'll make more sense with an example here, so bear with me. Let's say that I give you, my readers, two tests, and 60% of you pass both tests. Now the first test was easier; 80% of you passed that one. I can use this information to figure out what percentage of readers who passed the first test also passed the second. So here's a real example of the difference between the probability of B given A and the probability of A and B.

I'm going to represent A as the probability of passing the first test, and B as the probability of passing the second test. What I'm looking for is the probability of passing the second test given that you passed the first, that is, P(B|A). So the probability of passing the second test given that you passed the first is equal to the probability of passing both tests, P(A,B) (I know that 60% of you passed both tests irrespective of each other), divided by the probability of passing the first test, P(A), which is 80%. It works out to 0.6/0.8 = 0.75: 60% passed both tests, 80% passed the first test, therefore the probability of passing the second given that you passed the first works out to 75%.

OK, it's a little bit tough to wrap your head around this concept. It took me a little while to really internalize the difference between the probability of something given something and the probability of two things happening irrespective of each other. Make sure you internalize this example and how it's really working before you move on.

Conditional probability exercises in Python
Alright, let's move on and do another, more complicated example using some real Python code. We can then see how we might actually implement these ideas using Python.

Let's put conditional probability into action here and use some of these ideas to figure out if there's a relationship between age and buying stuff, using some fabricated data. Go ahead and open up the ConditionalProbabilityExercise.ipynb here and follow along with me if you like.
What I'm going to do is write a little bit of Python code that creates some fake data:

from numpy import random
random.seed(0)

totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
    ageDecade = random.choice([20, 30, 40, 50, 60, 70])
    purchaseProbability = float(ageDecade) / 100.0
    totals[ageDecade] += 1
    if (random.random() < purchaseProbability):
        totalPurchases += 1
        purchases[ageDecade] += 1

What I'm going to do is take 100,000 virtual people and randomly assign them to an age bracket. They can be in their 20s, their 30s, their 40s, their 50s, their 60s, or their 70s. I'm also going to assign them a number of things that they bought during some period of time, and I'm going to weight the probability of purchasing something based on their age.

What this code ends up doing is randomly assigning each person to an age group using the random.choice function from NumPy. Then I'm going to assign a probability of purchasing something, and I have weighted it such that younger people are less likely to buy stuff than older people. I'm going to go through 100,000 people and add everything up as I go, and what I end up with are two Python dictionaries: one that gives me the total number of people in each age group, and another that gives me the total number of things bought within each age group. I'm also going to keep track of the total number of things bought overall. Let's go ahead and run that code. If you want to take a second to work through that code in your head and figure out how it works, you've got the IPython Notebook, so you can go back into that later too. Let's take a look at what we ended up with.
Our totals dictionary is telling us how many people are in each age bracket, and it's pretty evenly distributed, just like we expected. The amount purchased by each age group is in fact increasing by age, so 20-year-olds only bought about 3,000 things and 70-year-olds bought about 12,000 things, and overall the entire population bought about 45,000 things.

Let's use this data to play around with the ideas of conditional probability. Let's first figure out the probability of buying something given that you're in your 30s. The notation for that will be P(E|F), if we're calling purchase E, and F the event that you're in your 30s.

Now we have this fancy equation that gave you a way of computing P(E|F) given P(E,F) and P(E), but we don't need that. You don't just blindly apply equations whenever you see something. You have to think about your data intuitively. What is it telling us? I want to figure out the probability of purchasing something given that you're in your 30s. Well, I have all the data I need to compute that directly:

PEF = float(purchases[30]) / float(totals[30])

I have how much stuff 30-year-olds purchased in the purchases[30] bucket, and I know how many 30-year-olds there are. So I can just divide those two numbers to get the ratio of 30-year-old purchases over the number of 30-year-olds. I can then output that using the print command:

print("P(purchase | 30s): " + str(PEF))

I end up with a probability of purchasing something given that you're in your 30s of about 30%.

Note that if you're using Python 2, the print command doesn't have the surrounding brackets, so it would be:

print "P(purchase | 30s): " + str(PEF)

If I want to find P(F), that's just the probability of being in your 30s overall. I can take the total number of 30-year-olds divided by the number of people in my dataset, which is 100,000:

PF = float(totals[30]) / 100000.0
print("P(30's): " + str(PF))

Again, remove those brackets around the print statement if you're using Python 2. That should give you a value of roughly 0.166, about one in six, which makes sense given that there are six age brackets.
I know the probability of being in your 30s is about 16.6%. We'll now find out P(E), which just represents the overall probability of buying something, irrespective of your age:

PE = float(totalPurchases) / 100000.0
print("P(Purchase): " + str(PE))

That works out to be, in this example, about 45%. I can just take the total number of things purchased by everybody, regardless of age, and divide it by the total number of people to get the overall probability of purchase.

Alright, so what do I have here? I have the probability of purchasing something given that you're in your 30s being about 30%, and then I have the probability of purchasing something overall at about 45%. Now, if E and F were independent, if age didn't matter, then I would expect P(E|F) to be about the same as P(E). I would expect the probability of buying something given that you're in your 30s to be about the same as the overall probability of buying something, but they're not, right? And because they're different, that tells me that they are in fact dependent, somehow. So that's a little way of using conditional probability to tease out these dependencies in the data.

Let's do some more notation stuff here. If you see something like P(E)P(F) together, that means multiply these probabilities together. I can just take the overall probability of purchase multiplied by the overall probability of being in your 30s:

print("P(30's)P(Purchase): " + str(PE * PF))

That worked out to about 7.5%. From the way probabilities work, I know that multiplying the two individual probabilities only gives you the probability of both things happening together if the two events are independent. So P(E,F) would be the same thing as P(E)P(F) only if E and F didn't depend on each other. We can compute the actual P(E,F) directly from the data:

print("P(30's, Purchase): " + str(float(purchases[30]) / 100000.0))
Now, because of the random distribution of the data, none of this works out to be exact. We're talking about probabilities here, remember. P(E)P(F) comes out at about 7.5% and P(E,F) at about 5%; they're in the same general ballpark, but they aren't equal, and since we already know E and F are dependent in this data, we wouldn't expect them to match. Note that P(E,F) is also different again from P(E|F): the probability of both being in your 30s and buying something is different than the probability of buying something given that you're in your 30s.

Now let's just do a little sanity check here. We can check the equation that we saw in the conditional probability section earlier, which said that the probability of buying something given that you're in your 30s is the same as the probability of being in your 30s and buying something over the probability of being in your 30s. That is, we check if P(E|F) = P(E,F)/P(F):

(float(purchases[30]) / 100000.0) / PF

This gives us back roughly 0.3. Sure enough, it does work out: if I take the probability of being in your 30s and buying something over the probability of being in your 30s, I end up with about 30%, which is pretty much what we came up with originally for P(E|F). So the equation works. Yay!

Alright, it's tough to wrap your head around some of this stuff. It's a little bit confusing, I know, but if you need to, go through this again, study it, and make sure you understand what's going on here. I've tried to put in enough examples here to illustrate different combinations of thinking about this stuff. Once you've got it internalized, I'm going to challenge you to actually do a little bit of work yourself here.

Conditional probability assignment
What I want you to do is modify the following Python code, which was used in the preceding section:

from numpy import random
random.seed(0)

totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
    ageDecade = random.choice([20, 30, 40, 50, 60, 70])
    purchaseProbability = float(ageDecade) / 100.0
    totals[ageDecade] += 1
    if (random.random() < purchaseProbability):
        totalPurchases += 1
        purchases[ageDecade] += 1

Modify it to actually not have a dependency between purchases and age. Make that an evenly distributed chance as well. See what that does to your results. Do you end up with a very different conditional probability of being in your 30s and purchasing something versus the overall probability of purchasing something? What does that tell you about your data and the relationship between those two different attributes? Go ahead and try that, and make sure you can actually get some results from this data and understand what's going on. I'll run through my own solution to that exercise in just a minute.

So that's conditional probability, both in theory and in practice. You can see there are a lot of little nuances to it and a lot of confusing notation. Go back and go through this section again if you need to wrap your head around it. I gave you a homework assignment, so go off and do that now. See if you can actually modify my code in that IPython Notebook to produce a constant probability of purchase for those different age groups. Come back and we'll take a look at how I solved that problem and what my results were.

My assignment solution
Did you do your homework? I hope so. Let's take a look at my solution to the problem of seeing how conditional probability tells us whether there's a relationship between age and purchase probability in a fake dataset.

To remind you, what we were trying to do was remove the dependency between age and probability of purchasing, and see if we could actually reflect that in our conditional probability values. Here's what I've got:

from numpy import random
random.seed(0)

totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
    ageDecade = random.choice([20, 30, 40, 50, 60, 70])
    purchaseProbability = 0.4
    totals[ageDecade] += 1
    if (random.random() < purchaseProbability):
        totalPurchases += 1
        purchases[ageDecade] += 1
What I've done here is I've taken the original snippet of code for creating our dictionary of age groups and how much was purchased by each age group for a set of 100,000 random people. Instead of making purchase probability dependent on age, I've made it a constant probability of 40%. Now we just have people randomly being assigned to an age group, and they all have the same probability of buying something. Let's go ahead and run that.

Now this time, if I compute P(E|F), that is, the probability of buying something given that you're in your 30s, I come up with about 40%:

PEF = float(purchases[30]) / float(totals[30])
print("P(purchase | 30s): " + str(PEF))

If I compare that to the overall probability of purchasing, that too is about 40%:

PE = float(totalPurchases) / 100000.0
print("P(Purchase): " + str(PE))

I can see here that the probability of purchasing something given that you're in your 30s is about the same as the probability of purchasing something irrespective of your age (that is, P(E|F) is pretty close to P(E)). That suggests that there's no real relationship between those two things, and in fact, I know there isn't from this data.

Now in practice, you could just be seeing random chance, so you'd want to look at more than one age group. You'd want to look at more than one data point to see if there really is a relationship or not, but this is an indication that there's no relationship between age and probability of purchase in this sample data that we modified.

So, that's conditional probability in action. Hopefully your solution was fairly close and had similar results. If not, go back and study my solution. It's right there in the data files for this book, ConditionalProbabilitySolution.ipynb, if you need to open it up and study it and play around with it. Obviously, the random nature of the data will make your results a little bit different, and will depend on what choice you made for the overall purchase probability, but that's the idea.

And with that behind us, let's move on to Bayes' theorem.
Bayes' theorem
Now that you understand conditional probability, you can understand how to apply Bayes' theorem, which is based on conditional probability. It's a very important concept, especially if you're going into the medical field, but it is broadly applicable too, and you'll see why in a minute.

You'll hear about this a lot, but not many people really understand what it means or its significance. It can tell you very quantitatively when people are misleading you with statistics, so let's see how that works.

First, let's talk about Bayes' theorem at a high level. Bayes' theorem is simply this: the probability of A given B is equal to the probability of A times the probability of B given A, over the probability of B:

P(A|B) = P(A) P(B|A) / P(B)

So you can substitute A and B with whatever you want. The key insight is that the probability of something that depends on B depends very much on the base probability of B and A. People ignore this all the time.

One common example is drug testing. We might ask: what's the probability of being an actual user of a drug given that you tested positive for it? The reason Bayes' theorem is important is that it calls out that this very much depends on both the probability of A and the probability of B. The probability of being a drug user given that you tested positive depends very much on the base overall probability of being a drug user and the overall probability of testing positive. The probability of a drug test being accurate depends a lot on the overall probability of being a drug user in the population, not just the accuracy of the test.

It also means that the probability of B given A is not the same thing as the probability of A given B. That is, the probability of being a drug user given that you tested positive can be very different from the probability of testing positive given that you're a drug user. You can see where this is going. That is a very real problem, where diagnostic tests in medicine or drug tests yield a lot of false positives. You can still say that the probability of a test detecting a user is very high, but that doesn't necessarily mean that the probability of being a user given that you tested positive is high. Those are two different things, and Bayes' theorem allows you to quantify that difference.
Let's nail that example home a little bit more. Again, a drug test is a common example of applying Bayes' theorem to prove a point. Even a highly accurate drug test can produce more false positives than true positives. So in our example here, we're going to come up with a drug test that can accurately identify users of a drug 99% of the time, and accurately has a negative result for 99% of non-users, but only 0.3% of the overall population actually uses the drug in question. So we have a very small probability of actually being a user of the drug. What seems like a very high accuracy of 99% isn't actually high enough, right? We can work out the math as follows:

Event A = is a user of the drug
Event B = tested positively for the drug

So let event A mean that you're a user of some drug, and event B the event that you tested positively for the drug using this drug test.

We need to work out the probability of testing positively overall. We can work that out by taking the sum of the probability of testing positive if you are a user and the probability of testing positive if you're not a user. So, P(B) works out to (0.003 * 0.99) + (0.997 * 0.01), which is about 0.013 in this example. So we have a probability of B, the probability of testing positively for the drug overall without knowing anything else about you, of about 1.3%.

Let's do the math and calculate the probability of being a user of the drug given that you tested positively. The probability of being a drug user given a positive test result works out as the probability of being a user of the drug overall, P(A), which is 0.3% (you know that 0.3% of the population is a drug user), multiplied by P(B|A), that is, the probability of testing positively given that you're a user, divided by the probability of testing positively overall, which is about 1.3%:

P(A|B) = P(A) P(B|A) / P(B) = (0.003 * 0.99) / 0.013

Again, this test has what sounds like a very high accuracy of 99%. We have 0.3% of the population which uses the drug, multiplied by the accuracy of 99%, divided by the probability of testing positively overall, which is about 1.3%. So the probability of being an actual user of this drug given that you tested positive for it is only about 23%. So even though this drug test is accurate 99% of the time, it's still providing a false result in most of the cases where you're testing positive.
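To check that arithmetic, here's a tiny sketch in Python using the figures from the example above:

# figures from the drug test example above
p_user = 0.003              # P(A): 0.3% of the population uses the drug
p_pos_given_user = 0.99     # P(B|A): the test catches users 99% of the time
p_pos_given_nonuser = 0.01  # false positive rate for non-users (1 - 0.99)

# P(B): overall probability of testing positive
p_pos = p_user * p_pos_given_user + (1 - p_user) * p_pos_given_nonuser

# Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)
p_user_given_pos = p_user * p_pos_given_user / p_pos
print(p_user_given_pos)   # roughly 0.23: only about a quarter of positives are real users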
Even though P(B|A) is high (99%), it doesn't mean P(A|B) is high. People overlook this all the time, so if there's one lesson to be learned from Bayes' theorem, it is to always take these sorts of things with a grain of salt. Apply Bayes' theorem to these actual problems and you'll often find that what sounds like a high accuracy rate can actually be yielding very misleading results if you're dealing with a low overall incidence of a given problem. We see the same thing in cancer screening and other sorts of medical screening as well. That's a very real problem; there are a lot of people getting very, very real and very unnecessary surgery as a result of not understanding Bayes' theorem. If you're going into the medical profession with big data, please, please, please remember this theorem.

So that's Bayes' theorem. Always remember that the probability of something given something else is not the same thing as the other way around, and it actually depends a lot on the base probabilities of both of the two things that you're measuring. It's a very important thing to keep in mind, and always look at your results with that in mind. Bayes' theorem gives you the tools to quantify that effect. I hope it proves useful.

Summary
In this chapter, we talked about plotting and graphing your data and how to make your graphs look pretty using the matplotlib library in Python. We also walked through the concepts of covariance and correlation. We looked at some examples and figured out covariance and correlation using Python. We analyzed the concept of conditional probability and saw some examples to understand it better. Finally, we saw Bayes' theorem and its importance, especially in the medical field.

In the next chapter, we'll talk about predictive models.
Predictive Models
In this chapter, we're going to look at what predictive modeling is and how it uses statistics to predict outcomes from existing data. We'll cover real-world examples to understand the concepts better. We'll see what regression analysis means and analyze some of its forms in detail. We'll also look at an example which predicts the price of a car for us. These are the topics that we'll cover in this chapter:

Linear regression and how to implement it in Python
Polynomial regression, its application and examples
Multivariate regression and how to implement it in Python
An example we'll build that predicts the price of a car using Python
The concept of multi-level models and some things to know about them

Linear regression
Let's talk about regression analysis, a very popular topic in data science and statistics. It's all about trying to fit a curve, or some sort of function, to a set of observations and then using that function to predict new values that you haven't seen yet. That's all there is to linear regression!
So, linear regression is fitting a straight line to a set of observations. For example, let's say that I have a bunch of people that I measured, and the two features that I measured of these people are their weight and their height. I'm showing the weight on the x-axis and the height on the y-axis, and I can plot all these data points, as in the people's weight versus their height, and I can say, "Hmm, that looks like a linear relationship, doesn't it? Maybe I can fit a straight line to it and use that to predict new values," and that's what linear regression does.

In this example, I end up with a slope and a y-intercept that define a straight line (the equation of a straight line is y = mx + b, where m is the slope and b is the y-intercept). Given a slope and y-intercept that fits the data I have best, I can use that line to predict new values. The weights that I observed only went up to a certain point. What if I had someone who weighed more than anyone in my data? Well, I could use that line to figure out where the height would be for that heavier weight, based on this previous data.

I don't know why they call it regression. Regression kind of implies that you're doing something backwards. I guess you can think of it in terms of you're creating a line to predict new values based on observations you made in the past, backwards in time, but it seems like a little bit of a stretch. It's just a confusing term, quite honestly, and one way that we kind of obscure what we do with very simple concepts using very fancy terminology. All it is, is fitting a straight line to a set of data points.
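To make the prediction step concrete: once you have a slope and an intercept, predicting a new value is just plugging into y = mx + b. A minimal sketch with made-up numbers (the slope, intercept, and units here are hypothetical, not from any real fit):

# hypothetical slope and intercept from an imaginary weight-versus-height fit
m = 0.4     # made-up slope: centimeters of height per kilogram of weight
b = 130.0   # made-up y-intercept, in centimeters

def predict_height(weight_kg):
    # the slope-intercept form of a line: y = m*x + b
    return m * weight_kg + b

print(predict_height(120))   # predict for a weight heavier than anything we observed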
The ordinary least squares technique
How does linear regression work? Well, internally it uses a technique called ordinary least squares; it's also known as OLS. You might see that term tossed around as well. The way it works is it tries to minimize the squared error between each point and the line, where the error is just the distance between each point and the line that you have.

So, we sum up all the squares of those errors, which sounds a lot like when we computed variance, right, except that instead of being relative to the mean, it's relative to the line that we're defining. We can measure the variance of the data points from that line, and by minimizing that variance, we can find the line that fits it the best.

Now you'll never have to actually do this yourself the hard way, but if you did have to for some reason, or if you're just curious about what happens under the hood, I'll now describe the overall algorithm for you and how you would actually go about computing the slope and y-intercept yourself the hard way, if you need to one day. It's really not that complicated.

Remember the slope-intercept equation of a line? It is y = mx + b. The slope just turns out to be the correlation between the two variables times the standard deviation in y, divided by the standard deviation in x. It might seem a little bit weird that standard deviation just kind of creeps into the math naturally there, but remember that correlation had standard deviation baked into it as well, so it's not too surprising that you have to reintroduce that term.
The intercept can then be computed as the mean of the y values minus the slope times the mean of the x values. Again, even though that's really not that difficult, Python will do it all for you, but the point is that these aren't complicated things to run. They can actually be done very efficiently.

Remember that least squares minimizes the sum of squared errors from each point to the line. Another way of thinking about linear regression is that you're defining a line that represents the maximum likelihood of an observation lying there; that is, the maximum probability of the y value being something for a given x value.

People sometimes call linear regression maximum likelihood estimation, and it's just another example of people giving a fancy name to something that's very simple. So if you hear someone talk about maximum likelihood estimation, they're really talking about regression. They're just trying to sound really smart. But now you know that term too, so you too can sound smart.

The gradient descent technique
There is more than one way to do linear regression. We've talked about ordinary least squares as being a simple way of fitting a line to a set of data, but there are other techniques as well, gradient descent being one of them, and it works best in three-dimensional data. So, it tries to follow the contours of the data for you. It's very fancy and obviously a little bit more computationally expensive, but Python does make it easy for you to try it out if you want to compare it to ordinary least squares.

Using the gradient descent technique can make sense when dealing with 3D data. Usually, though, least squares is a perfectly good choice for doing linear regression, and it's always a legitimate thing to do, but if you do run into gradient descent, you will know that it is just an alternate way of doing linear regression, and it's usually seen in higher dimensional data.
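Before we move on to r-squared, here's a minimal sketch of those "hard way" formulas: the slope from the correlation and the two standard deviations, and the intercept from the two means. The x and y arrays here are fabricated just so the snippet stands alone:

import numpy as np

def ols_line(x, y):
    # slope = correlation(x, y) * (std of y) / (std of x)
    r = np.corrcoef(x, y)[0, 1]
    slope = r * y.std() / x.std()
    # intercept = mean(y) - slope * mean(x)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# quick check on some fabricated, roughly linear data
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(0, 0.5, 50)
print(ols_line(x, y))   # should come back close to (2.0, 1.0)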
The co-efficient of determination or r-squared
So how do I know how good my regression is? How well does my line fit my data? That's where r-squared comes in, and r-squared is also known as the coefficient of determination. Again, someone trying to sound smart might call it that, but usually it's called r-squared.

It is the fraction of the total variation in y that is captured by your model. So how well does your line follow that variation that's happening? Are we getting an equal amount of variance on either side of the line or not? That's what r-squared is measuring.

Computing r-squared
To actually compute the value, take 1 minus the sum of the squared errors over the sum of the squared variations from the mean. So, it's not very difficult to compute, but again, Python will give you functions that will just compute that for you, so you'll never have to actually do that math yourself.

Interpreting r-squared
For r-squared, you will get a value that ranges from 0 to 1. Now 0 means your fit is terrible. It doesn't capture any of the variance in your data. While 1 is a perfect fit, where all of the variance in your data gets captured by the line, and all of the variance you see on either side of your line should be the same in that case. So 0 is bad, and 1 is good. That's all you really need to know. Something in between is something in between. A low r-squared value means it's a poor fit; a high r-squared value means it's a good fit.

As you'll see in the coming sections, there's more than one way to do regression. Linear regression is one of them. It's a very simple technique, but there are other techniques as well, and you can use r-squared as a quantitative measure of how good a given regression is for a set of data points, and then use that to choose the model that best fits your data.
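Here's a minimal sketch of that computation, in case you ever do want to do it by hand. It assumes y is an array of observed values and predictions is an array of your model's predicted values (both names are mine, just for illustration):

import numpy as np

def r_squared(y, predictions):
    # 1 - (sum of squared errors) / (sum of squared variation from the mean)
    sum_squared_errors = np.sum((y - predictions) ** 2)
    sum_squared_variation = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - sum_squared_errors / sum_squared_variation

# this should agree with sklearn's r2_score, or with r_value ** 2 from scipy's linregress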
Computing linear regression and r-squared using Python
Let's now play with linear regression and actually compute some linear regression and r-squared. We can start by creating a little bit of Python code here that generates some random-ish data that is in fact linearly correlated.

In this example I'm going to fake some data about page rendering speeds and how much people purchase, just like the previous example. We're going to fabricate a linear relationship between the amount of time it takes for a website to load and the amount of money people spend on that website:

%matplotlib inline
import numpy as np
from pylab import *

pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = 100 - (pageSpeeds + np.random.normal(0, 0.1, 1000)) * 3

scatter(pageSpeeds, purchaseAmount)

All I've done here is I've made a random, normal distribution of page speeds centered around 3 seconds with a standard deviation of 1 second. I've made the purchase amount a linear function of that. So, I'm making it 100 minus the page speeds plus some normal random distribution around it, times 3. And if we scatter that, we can see that the data ends up looking like this:
You can see just by eyeballing it that there's definitely a linear relationship going on there, and that's because we did hardcode a real linear relationship in our source data.

Now let's see if we can tease that out and find the best fit line using ordinary least squares. We talked about how to do ordinary least squares and linear regression, but you don't have to do any of that math yourself, because the SciPy package has a stats package that you can import:

from scipy import stats

slope, intercept, r_value, p_value, std_err = stats.linregress(pageSpeeds, purchaseAmount)

You can import stats from scipy, and then you can just call stats.linregress() on your two features. So, we have a list of page speeds (pageSpeeds) and a corresponding list of purchase amounts (purchaseAmount). The linregress() function will give us back a bunch of stuff, including the slope and the intercept, which is what I need to define my best fit line. It also gives us the r_value, from which we can get r-squared to measure the quality of that fit, and a couple of things that we'll talk about later on. For now, we just need the slope, intercept, and r_value, so let's go ahead and run these. We'll begin by finding the linear regression best fit:

r_value ** 2

This is what your output should look like: the r-squared value of the line that we got back is very close to 1. That means we have a really good fit, which isn't too surprising because we made sure there was a real linear relationship in this data. Even though there is some variance around that line, our line captures that variance. We have roughly the same amount of variance on either side of the line, which is a good thing. It tells us that we do have a linear relationship and our model is a good fit for the data that we have.

Let's plot that line:

import matplotlib.pyplot as plt

def predict(x):
    return slope * x + intercept

fitLine = predict(pageSpeeds)

plt.scatter(pageSpeeds, purchaseAmount)
plt.plot(pageSpeeds, fitLine)
plt.show()
The following is the output of the preceding code.

This little bit of code will create a function to draw the best fit line alongside the data. There's a little bit more Matplotlib magic going on here. We're going to make a fitLine list, and we're going to use the predict() function we wrote to take the pageSpeeds, which is our x-axis, and create the y values from that. So instead of taking the observations for amount spent, we're going to find the predicted ones, just using the slope times x plus the intercept that we got back from the linregress() call above. Essentially here, we're going to do a scatter plot like we did before to show the raw data points, which are the observations. Then we're also going to call plot on that same pyplot instance using the fitLine that we created from the line equation that we got back, and show them both together. When we do that, it looks like the following graph:
You can see that our line is in fact a great fit for our data: it goes right smack down the middle, and all you need to predict new values is this predict function. Given a new, previously unseen page speed, we could predict the amount spent just using the slope times the page speed plus the intercept. That's all there is to it, and I think it's great!

Activity for linear regression
Time now to get your hands dirty. Try increasing the random variation in the test data and see if that has any impact. Remember, r-squared is a measure of the fit, of how much of the variance we capture, so the amount of variance matters. Why don't you see if it actually makes a difference or not? (One way of going about this is sketched at the end of this section.)

That's linear regression, a pretty simple concept. All we're doing is fitting a straight line to a set of observations, and then we can use that line to make predictions of new values. That's all there is to it. But why limit yourself to a line? There are other types of regression we can do that are more complex. We'll explore these next.
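One way to attack that activity, as a hedged sketch: regenerate the fake data with a larger noise term and re-run linregress to see what happens to r-squared. It assumes the data was generated the way we did above, and the noise scale of 1.0 is just an arbitrary choice to contrast with the original 0.1:

import numpy as np
from scipy import stats

pageSpeeds = np.random.normal(3.0, 1.0, 1000)
# same linear relationship, but with a lot more random variation mixed in
purchaseAmount = 100 - (pageSpeeds + np.random.normal(0, 1.0, 1000)) * 3

slope, intercept, r_value, p_value, std_err = stats.linregress(pageSpeeds, purchaseAmount)
print(r_value ** 2)   # expect a noticeably lower r-squared than before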
Polynomial regression
We've talked about linear regression, where we fit a straight line to a set of observations. Polynomial regression is our next topic, and that's using higher order polynomials to fit your data. Sometimes your data might not really be appropriate for a straight line. That's where polynomial regression comes in.

Polynomial regression is a more general case of regression. So why limit yourself to a straight line? Maybe your data doesn't actually have a linear relationship, or maybe there's some sort of curve to it, right? That happens pretty frequently.

Not all relationships are linear, and linear regression is just one example of a whole class of regressions that we can do. If you remember, the linear regression line that we ended up with was of the form y = mx + b, where we got back the values m and b from our linear regression analysis, from ordinary least squares or whatever method you choose. Now this is just a first order, or first-degree, polynomial. The order or the degree is the highest power of x that you see. So that's the first-order polynomial.

Now if we wanted, we could also use a second-order polynomial, which would look like y = ax^2 + bx + c. If we were doing a regression using a second-order polynomial, we would get back values for a, b, and c. Or we could do a third-order polynomial that has the form ax^3 + bx^2 + cx + d. The higher the orders get, the more complex the curves you can represent. So, the more powers of x you have blended together, the more complicated shapes and relationships you can get.

But more degrees aren't always better. Usually there's some natural relationship in your data that isn't really all that complicated, and if you find yourself throwing very large degrees at fitting your data, you might be overfitting!

Beware of overfitting:
Don't use more degrees than you need
Visualize your data first to see how complex a curve there might really be
Visualize the fit and check if your curve is going out of its way to accommodate outliers
A high r-squared simply means your curve fits your training data well; it may or may not be a good predictor
If you have data that's kind of all over the place and has a lot of variance, you can go crazy and create a line that just goes up and down to try to fit that data as closely as it can, but in fact that doesn't represent the intrinsic relationship of that data. It doesn't do a good job of predicting new values.

So always start by just visualizing your data and thinking about how complicated the curve really needs to be. Now you can use r-squared to measure how good your fit is, but remember, that's just measuring how well this curve fits your training data, that is, the data that you're using to actually make your predictions based off of. It doesn't measure your ability to predict accurately going forward.

Later, we'll talk about some techniques for preventing overfitting called train/test, but for now you're just going to have to eyeball it to make sure that you're not overfitting and throwing more degrees at a function than you need to. This will make more sense when we explore an example, so let's do that next.

Implementing polynomial regression using NumPy
Fortunately, NumPy has a polyfit function that makes it super easy to play with this and experiment with different results, so let's go take a look. Time for fun with polynomial regression. I really do think it's fun, by the way. It's kind of cool seeing all that high school math actually coming into some practical application. Go ahead and open the PolynomialRegression.ipynb and let's have some fun.

Let's create a new relationship between our page speeds and our purchase amount fake data, and this time we're going to create a more complex relationship that's not linear. We're going to take the page speed and make the purchase amount some function of the division by page speed:

%matplotlib inline
from pylab import *
import numpy as np

np.random.seed(2)
pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = np.random.normal(50.0, 10.0, 1000) / pageSpeeds

scatter(pageSpeeds, purchaseAmount)
If we do a scatter plot, we end up with the following.

By the way, if you're wondering what the np.random.seed line does: it creates a random seed value, and it means that when we do subsequent random operations, they will be deterministic. By doing this we can make sure that, every time we run this bit of code, we end up with the same exact results. That's going to be important later on, because I'm going to suggest that you come back and actually try different fits to this data to compare the fits that you get. So, it's important that you're starting with the same initial set of points.

You can see that that's not really a linear relationship. We could try to fit a line to it and it would be okay for a lot of the data, maybe down at the right side of the graph, but not so much towards the left. We really have more of an exponential curve.

Now it just happens that NumPy has a polyfit() function that allows you to fit any degree polynomial you want to this data. So, for example, we could say our x-axis is an array of the page speeds (pageSpeeds) that we have, and our y-axis is an array of the purchase amounts (purchaseAmount) that we have. We can then just call np.polyfit(x, y, 4), meaning that we want a fourth degree polynomial fit to this data:

x = np.array(pageSpeeds)
y = np.array(purchaseAmount)

p4 = np.poly1d(np.polyfit(x, y, 4))
Let's go ahead and run that. It runs pretty quickly, and we can then plot it. So, we're going to create a little graph here that plots our scatter plot of original points versus our predicted points:

import matplotlib.pyplot as plt

xp = np.linspace(0, 7, 100)
plt.scatter(x, y)
plt.plot(xp, p4(xp))
plt.show()

The output looks like the following graph.

At this point, it looks like a reasonably good fit. What you want to ask yourself, though, is, "Am I overfitting? Does my curve look like it's actually going out of its way to accommodate outliers?" I find that that's not really happening. I don't really see a whole lot of craziness going on.

If I had a really high order polynomial, it might swoop up at the top to catch that one outlier and then swoop downwards to catch the outliers there, get a little bit more stable through where we have a lot of density, and maybe then potentially go all over the place trying to fit the last set of outliers at the end. If you see that sort of nonsense, you know you have too many orders, too many degrees in your polynomial, and you should probably bring it back down, because, although it fits the data that you observed, it's not going to be useful for predicting data you haven't seen.
Imagine I have some curve that swoops way up and then back down again to fit outliers. My prediction for something in between there isn't going to be accurate. The curve really should be in the middle. Later in this book we'll talk about the main ways of detecting such overfitting, but for now, please just observe it and know we'll go deeper later.

Computing the r-squared error
Now we can measure the r-squared error by taking the y values and the predicted values (p4(x)) in the r2_score() function that we have in sklearn.metrics:

from sklearn.metrics import r2_score

r2 = r2_score(y, p4(x))
print(r2)

The output is as follows: our code compares a set of observations to a set of predictions and computes r-squared for you, and with just one line of code! Our r-squared for this turns out to be reasonably high, which isn't too bad. Remember, zero is bad and one is good; this is pretty close to one, not perfect, and intuitively, that makes sense. You can see that our curve is pretty good in the middle section of the data, but not so good out at the extreme left and not so good down at the extreme right. So, that sounds about right.

Activity for polynomial regression
I recommend that you get down and dirty with this stuff. Try different orders of polynomials. Go back up to where we ran the polyfit() function and try different values there besides 4. You can use 1, and that would go back to a linear regression, or you could try some really high degree, and maybe you'll start to see overfitting. So see what effect that has. You're going to want to change that value. For example, let's go to a third-degree polynomial:

x = np.array(pageSpeeds)
y = np.array(purchaseAmount)

p4 = np.poly1d(np.polyfit(x, y, 3))
Just keep hitting run to go through each step, and you can see the effect: our third-degree polynomial is definitely not as good a fit as the fourth-degree polynomial. If you actually measure the r-squared error, it would turn out worse, quantitatively; but if I go too high, you might start to see overfitting. So just have some fun with that, play around with different values, and get a sense of what different orders of polynomials do to your regression. Go get your hands dirty and try to learn something.

So that's polynomial regression. Again, you need to make sure that you don't throw more degrees at the problem than you need to. Use just the right amount to find what looks like an intuitive fit to your data. Too many can lead to overfitting, while too few can lead to a poor fit. So you can use both your eyeballs, for now, and the r-squared metric to figure out the right number of degrees for your data. Let's move on.

Multivariate regression and predicting car prices
What happens, then, if we're trying to predict some value that is based on more than one other attribute? Let's say that the height of people not only depends on their weight, but also on their genetics or some other things that might factor into it. Well, that's where multivariate analysis comes in. You can actually build regression models that take more than one factor into account at once. It's actually pretty easy to do with Python.
Let's talk about multivariate regression, which is a little bit more complicated. The idea of multivariate regression is this: what if there's more than one factor that influences the thing you're trying to predict?

In our previous examples, we looked at linear regression. We talked about predicting people's heights based on their weight, for example. We assumed that the weight was the only thing that influenced their height, but maybe there are other factors too. We also looked at the effect of page speed on purchase amounts. Maybe there's more that influences purchase amounts than just page speed, and we want to find how these different factors all combine together to influence that value. So that's where multivariate regression comes in.

The example we're going to look at now is as follows. Let's say that you're trying to predict the price that a car will sell for. It might be based on many different features of that car, such as the body style, the brand, the mileage; who knows, maybe even on how good the tires are. Some of those features are going to be more important than others toward predicting the price of a car, but you want to take all of them into account at once.

So our way forward here is still going to use the least-squares approach to fit a model to your set of observations. The difference is that we're going to have a bunch of coefficients, one for each different feature that you have.

So, for example, the price model that we end up with might be a linear relationship of alpha, some constant, kind of like your y-intercept was, plus some coefficient of the mileage, plus some coefficient of the age, plus some coefficient of how many doors it has. Once you end up with those coefficients from the least squares analysis, we can use that information to figure out, well, how important is each of these features to my model? So, if I end up with a very small coefficient for something like the number of doors, that implies that the number of doors isn't that important, and maybe I should just remove it from my model entirely to keep it simpler.

This is something that I really should say more often in this book: you always want to do the simplest thing that works in data science. Don't overcomplicate things, because it's usually the simple models that work the best. If you can find just the right amount of complexity, but no more, that's usually the right model to go with. Anyway, those coefficients give you a way of saying, "Hey, some of these things are more important than others. Maybe I can discard some of these factors."
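To make the shape of that model concrete, here's a hedged sketch; the coefficients are completely made up for illustration, but they're the kind of values a least squares fit would hand back to you:

# price = alpha + b1 * mileage + b2 * age + b3 * doors   (all values hypothetical)
alpha = 20000.0   # baseline price
b1 = -0.05        # price drops a little with every extra mile
b2 = -800.0       # price drops with each year of age
b3 = 50.0         # a tiny coefficient like this hints the feature barely matters

def predict_price(mileage, age, doors):
    return alpha + b1 * mileage + b2 * age + b3 * doors

print(predict_price(mileage=40000, age=3, doors=4))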
Now, we can still measure the quality of a fit with multivariate regression using r-squared. It works the same way, although one thing you need to assume when you're doing multivariate regression is that the factors themselves are not dependent on each other... and that's not always true. So sometimes you need to keep that little caveat in the back of your head. For example, in this model we're going to assume that mileage and age of the car are not related; but in fact, they're probably pretty tightly related! This is a limitation of the technique, and it might not be capturing an effect at all.

Multivariate regression using Python
Fortunately, there's a statsmodels package available for Python that makes doing multivariate regression pretty easy. Let's just dive in and see how it works. Let's do some multivariate regression using Python. We're going to use some real data here about car values from the Kelley Blue Book:

import pandas as pd

df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')

We're going to introduce a new package here called pandas, which lets us deal with tabular data really easily. It lets us read in tables of data and rearrange them, modify them, and slice them and dice them in different ways. We're going to be using that a lot going forward.

We're going to import pandas as pd, and pd has a read_excel() function that we can use to go ahead and read a Microsoft Excel spreadsheet from the web through HTTP. So, pretty awesome capabilities of pandas there.

I've gone ahead and hosted that file for you on my own domain, and if we run that, it will load it into what's called a DataFrame object that we're referring to as df. Now I can call head() on this DataFrame to just show the first few lines of it:

df.head()
The following is the output of the preceding code. The actual dataset is much larger; this is just the first few samples. So, this is real data of mileage, make, model, trim, type, doors, cruise, sound and leather.

OK, now we're going to use pandas to split that up into the features that we care about. We're going to create a model that tries to predict the price just based on the mileage, the model, and the number of doors, and nothing else:

import statsmodels.api as sm

df['Model_ord'] = pd.Categorical(df.Model).codes
X = df[['Mileage', 'Model_ord', 'Doors']]
y = df[['Price']]

X1 = sm.add_constant(X)
est = sm.OLS(y, X1).fit()

est.summary()

Now the problem that I run into is that the Model is text, like Century for Buick, and as you recall, everything needs to be a number when I'm doing this sort of analysis. In the code, I use the Categorical() function in pandas to convert the set of model names that it sees in the DataFrame into a set of numbers; that is, a set of codes. I'm going to say my input for this model on the x-axis is mileage (Mileage), model converted to an ordinal value (Model_ord), and the number of doors (Doors). What I'm trying to predict on the y-axis is the price (Price).
The next two lines of the code just create a model that I'm calling est, which uses ordinary least squares, OLS, and fits it using the columns that I give it: Mileage, Model_ord, and Doors. Then I can use the summary() call to print out what my model looks like.

You can see here that the r-squared is pretty low. It's not that good of a model, really, but we can get some insight into what the various errors are, and interestingly, the lowest standard error is associated with the mileage.
Now, I have said before that the coefficient is a way of determining which items matter, and that's only true, though, if your input data is normalized. That is, if everything is on the same scale of 0 to 1. If it's not, then these coefficients are kind of compensating for the scale of the data that they're seeing. If you're not dealing with normalized data, as in this case, it's more useful to look at the standard errors. In this case, we can see that the mileage is actually the biggest factor of this particular model.

Could we have figured that out earlier? Well, we could have just done a little bit of slicing and dicing to figure out that the number of doors doesn't actually influence the price much at all. Let's run the following little line:

y.groupby(df.Doors).mean()

A little bit of pandas syntax there. It's pretty cool that you can do it in Python in one line of code! That will print out a new DataFrame that shows the mean price for the given number of doors.

I can see the average two-door car actually sells for more than the average four-door car. If anything, there's a negative correlation between number of doors and price, which is a little bit surprising. This is a small dataset, though, so we can't read a whole lot of meaning into it, of course.

Activity for multivariate regression
As an activity, please mess around with the fake input data however you want. You can download the data and mess around with the spreadsheet. Read it from your local hard drive instead of from HTTP, and see what kind of differences you can have. Maybe you can fabricate a dataset that has different behavior and has a better model that fits it. Maybe you can make a wiser choice of features to base your model off of. So, feel free to mess around with that, and let's move on.

There you have it: multivariate analysis and an example of it running. Just as important as the concept of multivariate analysis, which we explored, was some of the stuff that we did in that Python notebook. So, you might want to go back there and study exactly what's going on.
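One more note on the normalization caveat above: if you do want coefficients you can compare directly, a common approach (this is my own hedged sketch, not from the book's notebook) is to standardize each feature before fitting, for example:

import statsmodels.api as sm

# scale each column to zero mean and unit variance so the coefficients are comparable
X_scaled = (X - X.mean()) / X.std()   # X is the Mileage / Model_ord / Doors DataFrame from above
X_scaled = sm.add_constant(X_scaled)
est_scaled = sm.OLS(y, X_scaled).fit()
print(est_scaled.summary())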
We introduced pandas and the way to work with pandas and DataFrame objects. pandas is a very powerful tool. We'll use it more in future sections, but make sure you're starting to take notice of these things, because they are going to be important techniques in your Python skills for managing large amounts of data and organizing your data.

Multi-level models
It makes sense now to talk about multi-level models. This is definitely an advanced topic, and I'm not going to get into a whole lot of detail here. My objective right now is to introduce the concept of multi-level models to you, and let you understand some of the challenges and how to think about them when you're putting them together. That's it.

The concept of multi-level models is that some effects happen at various levels in a hierarchy. For example, your health. Your health might depend on how healthy your individual cells are, and those cells might be a function of how healthy the organs that they're inside are, and the health of your organs might depend on the health of you as a whole. Your health might depend in part on your family's health and the environment your family gives you. And your family's health in turn might depend on some factors of the city that you live in: how much crime is there, how much stress is there, how much pollution is there. And even beyond that, it might depend on factors in the entire world that we live in. Maybe just the state of medical technology in the world is a factor, right?

Another example: your wealth. How much money do you make? Well, that's a factor of your individual hard work, but it's also a factor of the work that your parents did, how much money they were able to invest into your education and the environment that you grew up in, and in turn, how about your grandparents? What sort of environment were they able to create and what sort of education were they able to offer for your parents, which in turn influenced the resources they had available for your own education and upbringing?

These are all examples of multi-level models, where there is a hierarchy of effects that influence each other at larger and larger scales. Now the challenge of multi-level models is to try to figure out, "Well, how do I model these interdependencies? How do I model all these different effects and how they affect each other?"
The challenge here is to identify the factors in each level that actually affect the thing you're trying to predict. If I'm trying to predict overall SAT scores, for example, I know that depends in part on the individual child that's taking the test, but what is it about the child that matters? Well, it might be the genetics, it might be their individual health, the individual brain size that they have. You can think of any number of factors that affect the individual that might affect their SAT score. And then if you go up another level, look at their home environment, look at their family. What is it about their families that might affect their SAT scores? How much education were they able to offer? Are the parents able to actually tutor the children in the topics that are on the SAT? These are all factors at that second level that might be important. What about their neighborhood? The crime rate of the neighborhood might be important. The facilities they have for teenagers and keeping them off the streets, things like that.

The point is you want to keep looking at these higher levels, but at each level identify the factors that impact the thing you're trying to predict. I can keep going up to the quality of the teachers in their school, the funding of the school district, the education policies at the state level. You can see there are different factors at different levels that all feed into this thing you're trying to predict, and some of these factors might exist at more than one level. Crime rate, for example, exists at the local and state levels. You need to figure out how those all interplay with each other as well when you're doing multi-level modeling.

As you can imagine, this gets very hard and very complicated very quickly. It is really way beyond the scope of this book, or of any introductory book in data science. This is hard stuff. There are entire thick books about it; you could do an entire book about it that would be a very advanced topic.

So why am I even mentioning multi-level models? It is because I've seen it mentioned on job descriptions, in a couple of cases, as something that they want you to know about. I've never had to use it in practice, but I think the important thing from the standpoint of getting a career in data science is that you at least are familiar with the concept, and you know what it means and some of the challenges involved in creating a multi-level model. I hope I've given you those concepts. With that, we can move on to the next section.
there you have the concepts of multi-level models it' very advanced topicbut you need to understand what the concept isat leastand the concept itself is pretty simple you just are looking at the effects at different levelsdifferent hierarchies when you're trying to make prediction so maybe there are different layers of effects that have impacts on each otherand those different layers might have factors that interrelate with each other as well multilevel modeling tries to take account of all those different hierarchies and factors and how they interplay with each other rest assured that' all you need to know for now summary in this we talked about regression analysiswhich is trying to fit curve to set of training data and then using it to predict new values we saw its different forms we looked at the concept of linear regression and its implementation in python we learned what polynomial regression isthat isusing higher degree polynomials to create bettercomplex curves for multi-dimensional data we also saw its implementation in python we then talked about multivariate regressionwhich is little bit more complicated we saw how it is used when there are multiple factors affecting the data that we're predicting we looked at an interesting examplewhich predicts the price of car using python and very powerful toolpandas finallywe looked at the concept of multi-level models we understood some of the challenges and how to think about them when you're putting them together in the next we'll learn some machine learning techniques using python
Machine Learning with Python

In this chapter, we get into machine learning and how to actually implement machine learning models in Python.

We'll examine what supervised and unsupervised learning mean, and how they're different from each other. We'll see techniques to prevent overfitting, and then look at an interesting example where we implement a spam classifier. We'll analyze what k-means clustering is along the way, with a working example that clusters people based on their income and age using scikit-learn!

We'll also cover a really interesting application of machine learning called decision trees, and we'll build a working example in Python that predicts hiring decisions in a company. Finally, we'll walk through the fascinating concepts of ensemble learning and SVMs, which are some of my favorite machine learning areas!

More specifically, we'll cover the following topics:

Supervised and unsupervised learning
Avoiding overfitting by using train/test
Bayesian methods
Implementation of an e-mail spam classifier with Naive Bayes
Concept of k-means clustering
Example of clustering in Python
Entropy and how to measure it
Concept of decision trees and its example in Python
What is ensemble learning
Support vector machine (SVM) and its example using scikit-learn
Machine learning and train/test

So what is machine learning? Well, if you look it up on Wikipedia or wherever, it'll say that it is algorithms that can learn from observational data and can make predictions based on it. It sounds really fancy, right? Like artificial intelligence stuff, like you have a throbbing brain inside of your computer. But in reality, these techniques are usually very simple.

We've already looked at regressions, where we took a set of observational data, we fitted a line to it, and we used that line to make predictions. So by our new definition, that was machine learning! And your brain works that way too.

Another fundamental concept in machine learning is something called train/test, which lets us very cleverly evaluate how good a machine learning model we've made. As we look now at unsupervised and supervised learning, you'll see why train/test is so important to machine learning.

Unsupervised learning

Let's talk in detail now about two different types of machine learning: supervised and unsupervised learning. Sometimes there can be kind of a blurry line between the two, but the basic definition of unsupervised learning is that you're not giving your model any answers to learn from. You're just presenting it with a group of data, and your machine learning algorithm tries to make sense out of it given no additional information. Let's say I give it a bunch of different objects, like these balls and cubes and sets of dice and what not. Let's then say I have some algorithm that will cluster these objects into things that are similar to each other based on some similarity metric.
Now, I haven't told the machine learning algorithm, ahead of time, what categories certain objects belong to. I don't have a cheat sheet that it can learn from, where I have a set of existing objects and my correct categorization of them. The machine learning algorithm must infer those categories on its own. This is an example of unsupervised learning, where I don't have a set of answers that I'm getting it to learn from. I'm just trying to let the algorithm gather its own answers based on the data presented to it alone.

The problem with this is that we don't necessarily know what the algorithm will come up with! If I gave it that bunch of objects shown in the preceding image, is it going to group things into things that are round, things that are large versus small, things that are red versus blue? I don't know. It's going to depend primarily on the metric I give it for similarity between items. But sometimes you'll find clusters that are surprising, and that emerged that you didn't expect to see.

So that's really the point of unsupervised learning: if you don't know what you're looking for, it can be a powerful tool for discovering classifications that you didn't even know were there. We call this a latent variable - some property of your data that you didn't even know was there originally can be teased out by unsupervised learning.

Let's take another example around unsupervised learning. Say I was clustering people instead of balls and dice. I'm writing a dating site and I want to see what sorts of people tend to cluster together. There are some attributes that people tend to cluster around, which decide whether they tend to like each other and date each other, for example. Now, you might find that the clusters that emerge don't conform to your predisposed stereotypes. Maybe it's not about college students versus middle-aged people, or people who are divorced and whatnot, or their religious beliefs. Maybe if you look at the clusters that actually emerged from that analysis, you'll learn something new about your users and actually figure out that there's something more important than any of those existing features of your people that really counts toward deciding whether they like each other. So that's an example of unsupervised learning providing useful results.

Another example could be clustering movies based on their properties. If you were to run clustering on a set of movies from IMDb or something, maybe the results would surprise you. Perhaps it's not just about the genre of the movie. Maybe there are other properties, like the age of the movie, or the running length, or what country it was released in, that are more important. You just never know. Or we could analyze the text of product descriptions and try to find the terms that carry the most meaning for a certain category. Again, we might not necessarily know ahead of time what terms, or what words, are most indicative of a product being in a certain category; but through unsupervised learning, we can tease out that latent information.
Supervised learning

Now, in contrast, supervised learning is a case where we have a set of answers that the model can learn from. We give it a set of training data that the model learns from. It can then infer relationships between the features and the categories that we want, and apply that to unseen new values - and predict information about them.

Going back to our earlier example, where we were trying to predict car prices based on the attributes of those cars: that's an example where we are training our model using actual answers. I have a set of known cars and the actual prices that they sold for. I train the model on that set of complete answers, and then I can create a model that I'm able to use to predict the prices of new cars that I haven't seen before. That's an example of supervised learning, where you're giving it a set of answers to learn from. You've already assigned categories or some organizing criteria to a set of data, and your algorithm then uses that criteria to build a model from which it can predict new values.

Evaluating supervised learning

So how do you evaluate supervised learning? Well, the beautiful thing about supervised learning is that we can use a trick called train/test. The idea here is to split the observational data I want my model to learn from into two groups: a training set and a testing set. So when I train and build my model based on the data that I have, I only do that with part of my data that I'm calling my training set, and I reserve another part of my data that I'm going to use for testing purposes.

I can build my model using a subset of my data for training data, and then I'm in a position to evaluate the model that comes out of that, and see if it can successfully predict the correct answers for my testing data.

So you see what I did there? I have a set of data where I already have the answers that I can train my model from, but I'm going to withhold a portion of that data and actually use that to test the model that was generated using the training set! That gives me a very concrete way to test how good my model is on unseen data, because I actually have a bit of data that I set aside that I can test it with.

You can then measure quantitatively how well it did using r-squared or some other metric, like root-mean-square error, for example. You can use that to test one model versus another and see what the best model is for a given problem. You can tune the parameters of that model and use train/test to maximize the accuracy of that model on your testing data. So this is a great way to prevent overfitting.
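Here is a minimal sketch of that whole workflow in code. It assumes scikit-learn's train_test_split helper (from sklearn.model_selection in recent versions) and uses some made-up linear data - the book builds the same split by hand in the next section, so treat this as an illustration rather than the book's own example:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

np.random.seed(0)
X = np.random.normal(3.0, 1.0, (100, 1))                     # one made-up feature
y = 50.0 - 8.0 * X[:, 0] + np.random.normal(0, 5.0, 100)     # a noisy, made-up linear target

# Hold out 20% of the data, sampled at random, for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)    # train only on the training set
print(r2_score(y_test, model.predict(X_test)))      # evaluate only on the unseen test set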
There are some caveats to supervised learning. You need to make sure that both your training and test datasets are large enough to actually be representative of your data. You also need to make sure that you're catching all the different categories and outliers that you care about, in both training and testing, to get a good measure of its success and to build a good model.

You have to make sure that you've selected from those datasets randomly, and that you're not just carving your dataset in two and saying everything left of here is training and everything right of here is testing. You want to sample that randomly, because there could be some pattern sequentially in your data that you don't know about.

Now, if your model is overfitting, and just going out of its way to accept outliers in your training data, then that's going to be revealed when you put it against an unseen set of testing data. This is because all those gyrations for outliers won't help with the outliers that it hasn't seen before.

Let's be clear here that train/test is not perfect, and it is possible to get misleading results from it. Maybe your sample sizes are too small, like we already talked about, or maybe, just due to random chance, your training data and your test data look remarkably similar - they actually do have a similar set of outliers - and you can still be overfitting. As you can see in the following example, it really can happen.
K-fold cross-validation

Now there is a way around this problem, called k-fold cross-validation, and we'll look at an example of it later in the book, but the basic concept is that you train/test many times. So you actually split your data not into just one training set and one test set, but into multiple randomly assigned segments - k segments. That's where the k comes from. You reserve one of those segments as your test data, and then you start training your models on the remaining segments and measure their performance against your test dataset. Then you take the average performance from each of those training sets' models - that is, their r-squared scores.

This way, you're actually training on different slices of your data, measuring them against the same test set, and if you have a model that's overfitting to a particular segment of your training data, it will get averaged out by the other ones that are contributing to the k-fold cross-validation.

Here are the k-fold cross-validation steps:

1. Split your data into k randomly-assigned segments
2. Reserve one segment as your test data
3. Train on each of the remaining k-1 segments and measure their performance against the test set
4. Take the average of the k-1 r-squared scores

This will make more sense later in the book; right now I would just like for you to know that this tool exists for actually making train/test even more robust than it already is. So let's go and actually play with some data and evaluate it using train/test next - but first, here's a quick look at what k-fold cross-validation looks like in code.
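This is only a minimal sketch, not the book's later example. It assumes scikit-learn's cross_val_score helper (found in sklearn.model_selection in recent versions) and uses made-up data:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

np.random.seed(1)
X = np.random.normal(3.0, 1.0, (100, 1))                     # made-up feature
y = 100.0 - 10.0 * X[:, 0] + np.random.normal(0, 3.0, 100)   # made-up noisy target

model = LinearRegression()
# cv=5 means 5 folds: train on 4/5 of the data, test on the remaining 1/5, five times over
scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print(scores)          # the r-squared score from each fold
print(scores.mean())   # the average across all folds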
Using train/test to prevent overfitting of a polynomial regression

Let's put train/test into action. You might remember that regression can be thought of as a form of supervised machine learning. Let's take a polynomial regression, which we covered earlier, and use train/test to try to find the right degree polynomial to fit a given set of data.

Just like in our previous example, we're going to set up a little fake dataset of randomly generated page speeds and purchase amounts, and I'm going to create a quirky little relationship between them that's exponential in nature.

%matplotlib inline
import numpy as np
from pylab import *

np.random.seed(2)

pageSpeeds = np.random.normal(3.0, 1.0, 100)
purchaseAmount = np.random.normal(50.0, 10.0, 100) / pageSpeeds

scatter(pageSpeeds, purchaseAmount)

Let's go ahead and generate that data. We'll use a normal distribution of random data for both page speeds and purchase amounts, using the relationship shown in the preceding code.

Next, we'll split that data. We'll take 80% of our data, and we're going to reserve that for our training data. So only 80 of these points are going to be used for training the model, and then we're going to reserve the other 20 for testing that model against unseen data.
We'll use Python's syntax for slicing lists to do the split. The first 80 points are going to go to the training set, and the last 20 - everything after 80 - is going to go to the test set. You may remember this from our Python basics earlier on, where we covered the syntax to do this, and we'll do the same thing for the purchase amounts:

trainX = pageSpeeds[:80]
testX = pageSpeeds[80:]

trainY = purchaseAmount[:80]
testY = purchaseAmount[80:]

Now, in our earlier sections, I've said that you shouldn't just slice your dataset in two like this, but that you should randomly sample it for training and testing. In this case though, it works out, because my original data was randomly generated anyway, so there's really no rhyme or reason to where things fell. But in real-world data you'll want to shuffle that data before you split it.

We'll look now at a handy method that you can use for that purpose of shuffling your data. Also, if you're using the pandas package, there are some handy functions in there for making training and test datasets automatically for you. But we're going to do it using a Python list here. So let's visualize the training dataset that we ended up with. We'll do a scatter plot of our training page speeds and purchase amounts.

scatter(trainX, trainY)

This is what your output should now look like:
Basically, the 80 points that were selected at random from the original complete dataset have been plotted. It has basically the same shape, so that's a good thing - it's representative of our data. That's important!

Now let's plot the remaining 20 points that we reserved as test data.

scatter(testX, testY)

Here, we see our remaining 20 points for testing also have the same general shape as our original data, so I think that's a representative test set too. It's a little bit smaller than you would like to see in the real world, for sure. You'd probably get a little bit better result if you had 1,000 points instead of 100, for example, to choose from, and reserved 200 instead of 20.

Now we're going to try to fit an 8th-degree polynomial to this data, and we'll just pick that number at random because I know it's a really high order and is probably overfitting.

Let's go ahead and fit our 8th-degree polynomial using np.poly1d(np.polyfit(x, y, 8)), where x is an array of the training data only, and y is an array of the training data only. We are finding our model using only those 80 points that we reserved for training. Now we have this p4 function that results, which we can use to predict new values:

x = np.array(trainX)
y = np.array(trainY)

p4 = np.poly1d(np.polyfit(x, y, 8))
Now we'll plot the polynomial this came up with against the training data. We can scatter our original data for the training dataset, and then we can plot our predicted values against it:

import matplotlib.pyplot as plt

xp = np.linspace(0, 7, 100)
axes = plt.axes()
axes.set_xlim([0, 7])
axes.set_ylim([0, 200])
plt.scatter(x, y)
plt.plot(xp, p4(xp), c='r')
plt.show()

You can see in the resulting graph that it looks like a pretty good fit, but you know that clearly it's doing some overfitting. What's this craziness out at the right? I'm pretty sure our real data, if we had it out there, wouldn't be as crazy high as this function would imply. So this is a great example of overfitting your data. It fits the data you gave it very well, but it would do a terrible job of predicting new values beyond the point where the graph goes crazy high on the right. So let's try to tease that out. Let's give it our test dataset:

testx = np.array(testX)
testy = np.array(testY)

axes = plt.axes()
axes.set_xlim([0, 7])
axes.set_ylim([0, 200])
plt.scatter(testx, testy)
plt.plot(xp, p4(xp), c='r')
plt.show()

Indeed, if we plot our test data against that same function, well, it doesn't actually look that bad. We got lucky and none of our test points are actually out where the curve blows up, but you can see that it's a reasonable fit, though far from perfect. And in fact, if you actually measure the r-squared score, it's worse than you might think. We can measure that using the r2_score function from sklearn.metrics. We just give it our original data and our predicted values, and it just goes through and measures all the variances from the predictions and squares them all up for you:

from sklearn.metrics import r2_score

r2 = r2_score(testy, p4(testx))
print(r2)

We end up with a pretty low r-squared score on the test data, so that's not that hot! You can see that it fits the training data a lot better:

from sklearn.metrics import r2_score

r2 = r2_score(np.array(trainY), p4(np.array(trainX)))
print(r2)
The r-squared value on the training data turns out to be much higher, which isn't too surprising, because we trained the model on the training data. The test data is sort of its unknown, its test, and it did fail that test, quite frankly. That low test score is an F!

So this has been an example where we've used train/test to evaluate a supervised learning algorithm, and like I said before, pandas has some means of making this even easier. We'll look at that a little bit later, and we'll also look at more examples of train/test, including k-fold cross-validation, later in the book as well.

Activity

You can probably guess what your homework is. So, we know that an 8th-order polynomial isn't very useful. Can you do better? I want you to go back through our example, and use different values for the degree of the polynomial that you're going to fit. Change that 8 to different values and see if you can figure out what degree polynomial actually scores best using train/test as a metric. Where do you get your best r-squared score for your test data? What degree fits here? Go play with that. It should be a pretty easy exercise and a very enlightening one for you as well.

So that's train/test in action - a very important technique to have under your belt, and you're going to use it over and over again to make sure that your results are a good fit for the model that you have, and that your results are a good predictor of unseen values. It's a great way to prevent overfitting when you're doing your modeling.

Bayesian methods - Concepts

Did you ever wonder how the spam classifier in your e-mail works? How does it know that an e-mail might be spam or not? Well, one popular technique is something called Naive Bayes, and that's an example of a Bayesian method. Let's learn more about how that works.

Let's discuss Bayesian methods. We did talk about Bayes' theorem earlier in this book, in the context of talking about how things like drug tests could be very misleading in their results. But you can actually apply the same Bayes' theorem to larger problems, like spam classifiers. So let's dive into how that might work; it's called a Bayesian method.
So, just a refresher on Bayes' theorem - remember, the probability of A given B is equal to the overall probability of A times the probability of B given A, divided by the overall probability of B:

P(A|B) = P(A) P(B|A) / P(B)

How can we use that in machine learning? I can actually build a spam classifier with it: an algorithm that can analyze a set of known spam e-mails and a known set of non-spam e-mails, and train a model to actually predict whether new e-mails are spam or not. This is a real technique used in actual spam classifiers in the real world.

As an example, let's just figure out the probability of an e-mail being spam given that it contains the word "free". If people are promising you free stuff, it's probably spam! So let's work that out. The probability of an e-mail being spam, given that it contains the word "free", works out to the overall probability of it being a spam message, times the probability of containing the word "free" given that it's spam, over the overall probability of containing the word "free":

P(Spam | Free) = P(Spam) P(Free | Spam) / P(Free)

The numerator can just be thought of as the probability of a message being spam and containing the word "free". But that's a little bit different from what we're looking for, because that's the odds out of the complete dataset, and not just the odds within things that contain the word "free". The denominator is just the overall probability of containing the word "free". Sometimes that won't be immediately accessible to you from the data that you have. If it's not, you can expand it out as follows if you need to derive it:

P(Free) = P(Free | Spam) P(Spam) + P(Free | Not Spam) P(Not Spam)

This gives you the percentage of e-mails that contain the word "free" that are spam, which would be a useful thing to know when you're trying to figure out if it's spam or not.

What about all the other words in the English language, though? Our spam classifier should know about more than just the word "free". It should automatically pick up every word in the message, ideally, and figure out how much each one contributes to the likelihood of a particular e-mail being spam. So what we can do is train our model on every word that we encounter during training, throwing out things like "a" and "the" and "and" and meaningless words like that. Then, when we go through all the words in a new e-mail, we can multiply the probability of being spam for each word together, and we get the overall probability of that e-mail being spam.
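To make that concrete, here is a toy, back-of-the-envelope version of the single-word calculation. The counts below are completely made up for illustration; they are not from any real dataset:

# Hypothetical training counts, invented purely for illustration
spam_emails, ham_emails = 400, 600          # how many training e-mails of each class
spam_with_free, ham_with_free = 120, 12     # how many of each contain the word "free"

p_spam = spam_emails / (spam_emails + ham_emails)                        # P(Spam)
p_free_given_spam = spam_with_free / spam_emails                         # P(Free|Spam)
p_free = (spam_with_free + ham_with_free) / (spam_emails + ham_emails)   # P(Free)

# Bayes' theorem: P(Spam|Free) = P(Spam) * P(Free|Spam) / P(Free)
p_spam_given_free = p_spam * p_free_given_spam / p_free
print(p_spam_given_free)   # roughly 0.91 with these made-up numbers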
Now, it's called Naive Bayes for a reason. It's naive because we're assuming that there are no relationships between the words themselves. We're just looking at each word in isolation, individually, within a message, and basically combining all the probabilities of each word's contribution to it being spam or not. We're not looking at the relationships between the words. A better spam classifier would do that, but obviously that's a lot harder.

So this sounds like a lot of work. But the overall idea is not that hard, and scikit-learn in Python makes it actually pretty easy to do. It offers a feature called CountVectorizer that makes it very simple to actually split up an e-mail into all of its component words and process those words individually. Then it has a MultinomialNB function, where NB stands for Naive Bayes, which will do all the heavy lifting for Naive Bayes for us.

Implementing a spam classifier with Naive Bayes

Let's write a spam classifier using Naive Bayes. You're going to be surprised how easy this is. In fact, most of the work ends up just being reading in all the input data that we're going to train on and actually parsing that data. The actual spam classification bit, the machine learning bit, is itself just a few lines of code. So that's usually how it works out: reading in and massaging and cleaning up your data is usually most of the work when you're doing data science, so get used to the idea!

import os
import io
import numpy
from pandas import DataFrame
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def readFiles(path):
    for root, dirnames, filenames in os.walk(path):
        for filename in filenames:
            path = os.path.join(root, filename)

            inBody = False
            lines = []
            f = io.open(path, 'r', encoding='latin1')
            for line in f:
                if inBody:
                    lines.append(line)
                elif line == '\n':
                    inBody = True
            f.close()
            message = '\n'.join(lines)
            yield path, message


def dataFrameFromDirectory(path, classification):
    rows = []
    index = []
    for filename, message in readFiles(path):
        rows.append({'message': message, 'class': classification})
        index.append(filename)

    return DataFrame(rows, index=index)

data = DataFrame({'message': [], 'class': []})

data = data.append(dataFrameFromDirectory('sundog-consult/Udemy/DataScience/emails/spam', 'spam'))
data = data.append(dataFrameFromDirectory('sundog-consult/Udemy/DataScience/emails/ham', 'ham'))

So the first thing we need to do is read all those e-mails in somehow, and we're going to again use pandas to make this a little bit easier. Again, pandas is a useful tool for handling tabular data. We import all the different packages that we're going to use within our example here; that includes the os library, the io library, numpy, pandas, and CountVectorizer and MultinomialNB from scikit-learn.

Let's go through this code in detail now. We can skip past the function definitions of readFiles() and dataFrameFromDirectory() for now and go down to the first thing that our code actually does, which is to create a pandas DataFrame object. We're going to construct this from a dictionary that initially contains a little empty list for messages and an empty list for class. So this syntax is saying, "I want a DataFrame that has two columns: one that contains the message, the actual text of each e-mail, and one that contains the class of each e-mail, that is, whether it's spam or ham." So it's saying I want to create a little database of e-mails, and this database has two columns: the actual text of the e-mail and whether it's spam or not.

Now we need to put something in that database, that is, into that DataFrame, in Python syntax. So we call the two methods append() and dataFrameFromDirectory() to actually throw into the DataFrame all the spam e-mails from my spam folder, and all the ham e-mails from the ham folder.
If you are playing along here, make sure you modify the path passed to the dataFrameFromDirectory() function to match wherever you installed the book materials on your system! And again, if you're on Mac or Linux, please pay attention to backslashes and forward slashes and all that stuff. In this case, it doesn't matter, but you won't have a drive letter if you're not on Windows. So just make sure those paths are actually pointing to where your spam and ham folders are for this example.

Next, dataFrameFromDirectory() is a function I wrote, which basically says: I have a path to a directory, and I know it's of a given classification, spam or ham. Then it uses the readFiles() function, that I also wrote, which will iterate through every single file in a directory. So readFiles() is using the os.walk() function to find all the files in a directory. Then it builds up the full pathname for each individual file in that directory, and then it reads it in. And while it's reading it in, it actually skips the header for each e-mail and just goes straight to the text, and it does that by looking for the first blank line. It knows that everything after the first empty line is actually the message body, and everything in front of that first empty line is just a bunch of header information that I don't actually want to train my spam classifier on. So it gives me back both the full path to each file and the body of the message.

So that's how we read in all of the data, and that's the majority of the code! What I have at the end of the day is a DataFrame object, basically a database with two columns, that contains message bodies, and whether each one is spam or not. We can go ahead and run that, and we can use the head() command on the DataFrame to actually preview what this looks like:

data.head()

The first few entries in our DataFrame look like this: for each path to a given file full of e-mails, we have a classification and we have the message body.
Alright, now for the fun part! We're going to use the MultinomialNB() function from scikit-learn to actually perform Naive Bayes on the data that we have.

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(data['message'].values)

classifier = MultinomialNB()
targets = data['class'].values
classifier.fit(counts, targets)

This is what your output should now look like: once we build a MultinomialNB classifier, it needs two inputs. It needs the actual data that we're training on (counts), and the targets for each thing (targets). So counts is basically a list of all the words in each e-mail and the number of times each word occurs.

That's what CountVectorizer does: it takes the message column from the DataFrame and takes all the values from it. I'm going to call vectorizer.fit_transform, which basically tokenizes, or converts, all the individual words seen in my data into numbers, into values. It then counts up how many times each word occurs.

This is a more compact way of representing how many times each word occurs in an e-mail. Instead of actually preserving the words themselves, I'm representing those words as different values in a sparse matrix, which is basically saying that I'm treating each word as a number, as a numerical index, into an array. What that does, in plain English, is split each message up into a list of the words that are in it, and count how many times each word occurs. So we're calling that counts. It's basically that information of how many times each word occurs in each individual message. Meanwhile, targets is the actual classification data for each e-mail that I've encountered. So I can call classifier.fit() using my MultinomialNB() function to actually create a model using Naive Bayes, which will predict whether new e-mails are spam or not based on the information we've given it.
let' go ahead and run that it runs pretty quicklyi' going to use couple of examples here let' try message body that just says 'sff pofzopxwhich is pretty clearly spamand more innocent message that just says ) #pcipxbcpvubhbnfpg hpmgupnpsspxso we're going to pass these in fybnqmft 'sff pofzopx) #pcipxbcpvubhbnfpghpmg upnpsspxfybnqmf@dpvout wfdupsj[fsusbotgpsn fybnqmft qsfejdujpot dmbttjgjfsqsfejdu fybnqmf@dpvout qsfejdujpot the first thing we do is convert the messages into the same format that trained my model on so use that same vectorizer that created when creating the model to convert each message into list of words and their frequencieswhere the words are represented by positions in an array then once 've done that transformationi can actually use the qsfejdu function on my classifieron that array of examples that have transformed into lists of wordsand see what we come up withbssbz euzqf ] and sure enoughit workssogiven this array of two input messages'sff pofz opxand ) #pcit' telling me that the first result came back as spam and the second result came back as hamwhich is what would expect that' pretty cool so there you have it activity we had pretty small dataset hereso you could try running some different -mails through it if you want and see if you get different results if you really want to challenge yourselftry applying train/test to this example so the real measure of whether or not my spam classifier is good or not is not just intuitively whether it can figure out that 'sff pofzopxis spam you want to measure that quantitatively so if you want little bit of challengego ahead and try to split this data up into training set and test dataset you can actually look up online how pandas can split data up into train sets and testing sets pretty easily for youor you can do it by hand whatever works for you see if you can actually apply your vmujopnjbm/classifier to test dataset and measure its performance soif you want little bit of an exercisea little bit of challengego ahead and give that try
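If you want a starting point for that challenge, here is one possible way to structure it - a sketch, not the book's solution. It assumes scikit-learn's train_test_split helper (from sklearn.model_selection in recent versions) and reuses the data DataFrame we built above:

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# 'data' is the DataFrame of messages and classes built earlier in this section
train_msgs, test_msgs, train_classes, test_classes = train_test_split(
    data['message'].values, data['class'].values, test_size=0.2, random_state=42)

vectorizer = CountVectorizer()
train_counts = vectorizer.fit_transform(train_msgs)   # learn the vocabulary from training data only
test_counts = vectorizer.transform(test_msgs)         # reuse that same vocabulary on the test set

classifier = MultinomialNB()
classifier.fit(train_counts, train_classes)
print(classifier.score(test_counts, test_classes))    # fraction of held-out e-mails classified correctly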
How cool is that? We just wrote our own spam classifier using just a few lines of code in Python. It's pretty easy using scikit-learn and Python. That's Naive Bayes in action, and you can actually go and classify some spam or ham messages now that you have that under your belt. Pretty cool stuff. Let's talk about clustering next.

K-means clustering

Next, we're going to talk about k-means clustering, and this is an unsupervised learning technique where you have a collection of stuff that you want to group together into various clusters. Maybe it's movie genres or demographics of people, who knows? But it's actually a pretty simple idea, so let's see how it works.

K-means clustering is a very common technique in machine learning where you just try to take a bunch of data and find interesting clusters of things just based on the attributes of the data itself. Sounds fancy, but it's actually pretty simple. All we do in k-means clustering is try to split our data into K groups - that's where the K comes from, it's how many different groups you're trying to split your data into - and it does this by finding K centroids.

So, basically, which group a given data point belongs to is defined by which of these centroid points it's closest to in your scatter plot. You can visualize this in the following image:
This is showing an example of k-means clustering with a K of three, and the squares represent data points in a scatter plot. The circles represent the centroids that the k-means clustering algorithm came up with, and each point is assigned a cluster based on which centroid it's closest to. So that's all there is to it, really. It's an example of unsupervised learning. It isn't a case where we have a bunch of data and we already know the correct cluster for a given set of training data; rather, you're just given the data itself, and it tries to converge on these clusters naturally, just based on the attributes of the data alone.

It's also an example where you are trying to find clusters or categorizations that you didn't even know were there. As with most unsupervised learning techniques, the point is to find latent values - things you didn't really realize were there until the algorithm showed them to you.

For example, where do millionaires live? I don't know, maybe there is some interesting geographical cluster where rich people tend to live, and k-means clustering could help you figure that out. Maybe I don't really know if today's genres of music are meaningful. What does it mean to be alternative these days? Not much, right? But by using k-means clustering on attributes of songs, maybe I could find interesting clusters of songs that are related to each other and come up with new names for what those clusters represent. Or maybe I can look at demographic data, and maybe existing stereotypes are no longer useful. Maybe Hispanic has lost its meaning and there are actually other attributes that define groups of people, for example, that I could uncover with clustering.

Sounds fancy, doesn't it? Really complicated stuff: unsupervised machine learning with K clusters. It sounds fancy, but as with most techniques in data science, it's actually a very simple idea. Here's the algorithm for us in plain English (a bare-bones code sketch of these steps follows the list):

1. Randomly pick K centroids (k-means): we start off with a randomly chosen set of centroids. So if we have a K of three, we're going to look for three clusters in our group, and we will assign three randomly positioned centroids in our scatter plot.
2. Assign each data point to the centroid it is closest to: we then assign each data point to the randomly assigned centroid that it is closest to.
3. Recompute the centroids based on the average position of each centroid's points: we then recompute the centroid for each cluster that we come up with. That is, for a given cluster that we end up with, we will move that centroid to be the actual center of all those points.
4. Iterate until points stop changing assignment to centroids: we will do it all again until those centroids stop moving - we hit some threshold value that says OK, we have converged on something here.
5. Predict the cluster for new points: to predict the clusters for new points that haven't been seen before, we can just go through our centroid locations and figure out which centroid each new point is closest to.
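Here is the bare-bones sketch of those steps promised above, written in plain NumPy just to show the mechanics; scikit-learn's KMeans, which we use shortly, is what you'd reach for in practice. The toy data and function names here are my own illustration, and there is no handling for edge cases like empty clusters:

import numpy as np

def simple_kmeans(points, k, iterations=20):
    rng = np.random.default_rng(0)
    # 1. Randomly pick K centroids (here, K random points from the data)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # 2. Assign each data point to the centroid it is closest to
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 3. Recompute each centroid as the average position of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        # 4. Iterate until the centroids (and therefore the assignments) stop changing
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Two obvious made-up blobs of points, just to exercise the sketch
points = np.vstack([np.random.normal(0, 1, (50, 2)), np.random.normal(5, 1, (50, 2))])
centroids, labels = simple_kmeans(points, 2)
print(centroids)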
let' look at graphical example to make little bit more sense we'll call the first figure in the following image as asecond as bthird as and the fourth as the gray squares in image represent data points in our scatter plot the axes represent some different features of something maybe it' age and incomeit' an example keep usingbut it could be anything and the gray squares might represent individual people or individual songs or individual something that want to find relationships between so start off by just picking three points at random on my scatterplot could be anywhere got to start somewhererightthe three points (centroidsi selected have been shown as circles in image so the next thing ' going to do is for each centroid 'll compute which one of the gray points it' closest to by doing thatthe points shaded in blue are associated with this blue centroid the green points are closest to the green centroidand this single red point is closest to that red random point that picked out of courseyou can see that' not really reflective of where the actual clusters appear to be so what ' going to do is take the points that ended up in each cluster and compute the actual center of those points for examplein the green clusterthe actual center of all data turns out to be little bit lower we're going to move the centroid down little bit the red cluster only had one pointso its center moves down to where that single point is and the blue point was actually pretty close to the centerso that just moves little bit on this next iteration we end up with something that looks like image now you can see that our cluster for red things has grown little bit and things have moved little bitthat isthose got taken from the green cluster if we do that againyou can probably predict what' going to happen next the green centroid will move little bitthe blue centroid will still be about where it is but at the end of the day you're going to end up with the clusters you' probably expect to see that' how -means works so it just keeps iteratingtrying to find the right centroids until things start moving around and we converge on solution
Limitations to k-means clustering

So there are some limitations to k-means clustering. Here they are:

Choosing K: first of all, we need to choose the right value of K, and that's not a straightforward thing to do at all. The principal way of choosing K is to just start low and keep increasing the value of K, depending on how many groups you want, until you stop getting large reductions in squared error. If you look at the distances from each point to its centroid, you can think of that as an error metric. At the point where you stop reducing that error metric much, you know you probably have too many clusters, and you're not really gaining any more information by adding additional clusters at that point. (The short sketch after this list shows what that looks like in code.)

Avoiding local minima: also, there is a problem of local minima. You could just get very unlucky with those initial choices of centroids, and they might end up converging on local phenomena instead of more global clusters, so usually you want to run this a few times and maybe average the results together. We call that ensemble learning. We'll talk about that more a little bit later on, but it's always a good idea to run k-means more than once using a different set of random initial values and just see if you do in fact end up with the same overall results or not.

Labeling the clusters: finally, the main problem with k-means clustering is that there are no labels for the clusters that you get. It will just tell you that this group of data points is somehow related, but you can't put a name on it. It can't tell you the actual meaning of that cluster. Let's say I have a bunch of movies that I'm looking at, and k-means clustering tells me that a bunch of science fiction movies are over here, but it's not going to call them "science fiction" movies for me. It's up to me to actually dig into the data and figure out, well, what do these things really have in common? How might I describe that in English? That's the hard part, and k-means won't help you with that.
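As a rough sketch of that "keep increasing K" idea (my own illustration, not the book's code): fit KMeans for several values of K and watch the total squared error - exposed as inertia_ in scikit-learn - stop improving:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

np.random.seed(0)
# Three made-up, well-separated blobs of two-dimensional data
blobs = np.vstack([np.random.normal(loc, 1.0, (50, 2)) for loc in (0, 5, 10)])

for k in range(1, 8):
    model = KMeans(n_clusters=k, n_init=10).fit(scale(blobs))
    print(k, model.inertia_)   # the drop flattens out once K passes the "real" number of clusters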
So again, scikit-learn makes it very easy to do this. Let's now work up an example and put k-means clustering into action.

Clustering people based on income and age

Let's see just how easy it is to do k-means clustering using scikit-learn and Python.

The first thing we're going to do is create some random data that we want to try to cluster. Just to make it easier, we'll actually build some clusters into our fake test data. So let's pretend there's some real fundamental relationship between these data, and there are some real natural clusters that exist in it.

To do that, we can work with this little createClusteredData() function in Python:

from numpy import random, array

# Create fake income/age clusters for N people in k clusters
def createClusteredData(N, k):
    random.seed(10)
    pointsPerCluster = float(N)/k
    X = []
    for i in range(k):
        incomeCentroid = random.uniform(20000.0, 200000.0)
        ageCentroid = random.uniform(20.0, 70.0)
        for j in range(int(pointsPerCluster)):
            X.append([random.normal(incomeCentroid, 10000.0),
                      random.normal(ageCentroid, 2.0)])
    X = array(X)
    return X

The function starts off with a consistent random seed, so you'll get the same result every time. We want to create clusters of N people in k clusters, so we pass N and k to createClusteredData().

Our code figures out how many points per cluster that works out to first and stores it in pointsPerCluster. Then, it builds up a list X that starts off empty. For each cluster, we're going to create some random centroid of income (incomeCentroid) between 20,000 and 200,000 dollars and some random centroid of age (ageCentroid) between the ages of 20 and 70.
What we're doing here is creating a fake scatter plot that will show income versus age for N people and k clusters. So for each random centroid that we created, I'm then going to create a normally distributed set of random data with a standard deviation of 10,000 in income and a standard deviation of 2 in age. That will give us back a bunch of age and income data that is clustered into some pre-existing clusters that we chose at random. OK, let's go ahead and run that.

Now, to actually do k-means, you'll see how easy it is.

from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from numpy import random, float

data = createClusteredData(100, 5)

model = KMeans(n_clusters=5)

# Note I'm scaling the data to normalize it! Important for good results.
model = model.fit(scale(data))

# We can look at the clusters each data point was assigned to
print(model.labels_)

# And we'll visualize it:
plt.figure(figsize=(8, 6))
plt.scatter(data[:,0], data[:,1], c=model.labels_.astype(float))
plt.show()

All you need to do is import KMeans from scikit-learn's cluster package. We're also going to import matplotlib so we can visualize things, and also import scale so we can take a look at how that works.

So we use our createClusteredData() function to create 100 random people around 5 clusters. So there are 5 natural clusters in the data that I'm creating. We then create a model, a KMeans model with a k of 5 - we're picking 5 clusters because we know that's the right answer. But again, in unsupervised learning you don't necessarily know what the real value of k is; you need to iterate and converge on it yourself. And then we just call model.fit() using my KMeans model, using the data that we have.
Now, the scale I alluded to earlier - that's normalizing the data. One important thing with k-means is that it works best if your data is all normalized. That means everything is at the same scale. A problem that I have here is that my ages range from around 20 to 70, but my incomes range all the way up to 200,000. So these values are not really comparable; the incomes are much larger than the age values. scale() will take all that data and scale it together to a consistent scale so I can actually compare these things as apples to apples, and that will help a lot with your k-means results.

So, once we've actually called fit() on our model, we can actually look at the resulting labels that we got. Then we can visualize it using a little bit of matplotlib magic. You can see in the code we have a little trick where we assigned the color to the labels that we ended up with, converted to some floating point number. That's just a little trick you can use to assign arbitrary colors to a given value. So let's see what we end up with:
It didn't take that long. You see the results are basically which clusters I assigned everything into. We know that our fake data is already pre-clustered, so it seems that it identified the first and second clusters pretty easily. It got a little bit confused beyond that point, though, because our clusters in the middle are actually a little bit mushed together. They're not really that distinct, so that was a challenge for k-means. But regardless, it did come up with some reasonable guesses at the clusters. This is probably an example of where four clusters would more naturally fit the data.

Activity

So what I want you to do for an activity is to try a different value of K and see what you end up with. Just eyeballing the preceding graph, it looks like four would work well. Does it really? What happens if I increase K too much? What happens to my results? What does it try to split things into, and does it even make sense? So, play around with it, try different values of K. So in the n_clusters parameter, change the 5 to something else. Run it all through again and see what you end up with.

That's all there is to k-means clustering. It's just that simple. You can just use scikit-learn's KMeans from cluster. The only real gotcha: make sure you scale the data, normalize it. You want to make sure the things that you're using k-means on are comparable to each other, and the scale() function will do that for you. So those are the main things for k-means clustering. It's a pretty simple concept, and even simpler to do using scikit-learn.

That's all there is to it. That's k-means clustering. So if you have a bunch of data that is unclassified and you don't really have the right answers ahead of time, it's a good way to try to naturally find interesting groupings of your data, and maybe that can give you some insight into what that data is. It's a good tool to have. I've used it before in the real world and it's really not that hard to use, so keep that in your tool chest.

Measuring entropy

Quite soon we're going to get to one of the cooler parts of machine learning, at least I think so, called decision trees. But before we can talk about that, it's necessary to understand the concept of entropy in data science.
So entropy, just like it is in physics and thermodynamics, is a measure of a dataset's disorder - of how same or different the dataset is. Imagine we have a dataset of different classifications - for example, animals. Let's say I have a bunch of animals that I have classified by species. Now, if all of the animals in my dataset are an iguana, I have very low entropy because they're all the same. But if every animal in my dataset is a different animal - I have iguanas and pigs and sloths and who knows what else - then I would have a higher entropy, because there's more disorder in my dataset. Things are more different than they are the same.

Entropy is just a way of quantifying that sameness or difference throughout my data. So, an entropy of 0 implies all the classes in the data are the same, whereas if everything is different, I would have a high entropy, and something in between would be a number in between. Entropy just describes how same or different the things in a dataset are.

Now, mathematically, it's a little bit more involved than that, so when I actually compute a number for entropy, it's computed using the following expression:

H(S) = -p1 ln(p1) - p2 ln(p2) - ... - pn ln(pn)

So for every different class that I have in my data, I'm going to have one of these terms: p1, p2, and so on and so forth through pn, for the n different classes that I might have. The p just represents the proportion of the data that is that class. And if you actually plot what each term, -pi ln(pi), looks like as that proportion varies, it'll look a little bit something like the following graph.

You add these terms up for each individual class. For example, if the proportion of the data in a given class is 0, then the contribution to the overall entropy is 0. And if everything is that class, then again the contribution to the overall entropy is 0, because in either case - if nothing is this class or everything is this class - it's not really contributing anything to the overall entropy. It's the things in the middle that contribute entropy of the class, where there's some mixture of this classification and other stuff. When you add all these terms together, you end up with an overall entropy for the entire dataset. So mathematically, that's how it works out, but again, the concept is very simple. It's just a measure of how disordered your dataset is - of how same or different the things in your data are.
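Here is a tiny sketch of that formula in code (my own illustration, not the book's): computing the entropy of a dataset straight from its class counts. I've used log base 2 here, which measures entropy in bits; the idea is the same with the natural log used in the expression above:

import math

def entropy(class_counts):
    total = sum(class_counts)
    h = 0.0
    for count in class_counts:
        if count == 0:
            continue                  # an absent class contributes nothing
        p = count / total             # proportion of the data in this class
        h -= p * math.log(p, 2)       # -p * log2(p); use math.log(p) for natural log instead
    return h

print(entropy([10, 0]))    # all one class -> 0.0 (no disorder)
print(entropy([5, 5]))     # an even mix of two classes -> 1.0 bit (maximum disorder)
print(entropy([8, 2]))     # somewhere in between -> about 0.72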
Decision trees - Concepts

Believe it or not, given a set of training data, you can actually get Python to generate a flowchart for you to make a decision. So if you have something you're trying to predict on some classification, you can use a decision tree to actually look at multiple attributes that you can decide upon at each level in the flowchart. You can print out an actual flowchart for you to use to make a decision from, based on actual machine learning. How cool is that? Let's see how it works.

I personally find decision trees to be one of the most interesting applications of machine learning. A decision tree basically gives you a flowchart of how to make some decision. You have some dependent variable, like whether or not I should go play outside today based on the weather. When you have a decision like that, which depends on multiple attributes or multiple variables, a decision tree could be a good choice.

There are many different aspects of the weather that might influence my decision of whether I should go outside and play. It might have to do with the humidity, the temperature, whether it's sunny or not, for example. A decision tree can look at all these different attributes of the weather, or anything else, and decide what the thresholds are - what decisions I need to make on each one of those attributes - before I arrive at a decision of whether or not I should go play outside. That's all a decision tree is. So it's a form of supervised learning.
the way it would work in this example would be as follows would have some sort of dataset of historical weatherand data about whether or not people went outside to play on particular day would feed the model this data of whether it was sunny or not on each daywhat the humidity wasand if it was windy or notand whether or not it was good day to go play outside given that training dataa decision tree algorithm can then arrive at tree that gives us flowchart that we can print out it looks just like the following flow chart you can just walk through and figure out whether or not it' good day to play outside based on the current attributes you can use that to predict the decision for new set of valueshow cool is thatwe have an algorithm that will make flowchart for you automatically just based on observational data what' even cooler is how simple it all works once you learn how it works
decision tree example let' say want to build system that will automatically filter out resumes based on the information in them big problem that technology companies have is that we get tons and tons of resumes for our positions we have to decide who we actually bring in for an interviewbecause it can be expensive to fly somebody out and actually take the time out of the day to conduct an interview so what if there were way to actually take historical data on who actually got hired and map that to things that are found on their resumewe could construct decision tree that will let us go through an individual resume and say"okthis person actually has high likelihood of getting hiredor notwe can train decision tree on that historical data and walk through that for future candidates wouldn' that be wonderful thing to haveso let' make some totally fabricated hiring data that we're going to use in this examplein the preceding tablewe have candidates that are just identified by numerical identifiers ' going to pick some attributes that think might be interesting or helpful to predict whether or not they're good hire or not how many years of experience do they haveare they currently employedhow many employers have they had previous to this onewhat' their level of educationwhat degree do they havedid they go to what we classify as top-tier schooldid they do an internship while they were in collegewe can take look at this historical dataand the dependent variable here is )jsfe did this person actually get job offer or not based on that informationnowobviously there' lot of information that isn' in this model that might be very importantbut the decision tree that we train from this data might actually be useful in doing an initial pass at weeding out some candidates what we end up with might be tree that looks like the following
so it just turns out that in my totally fabricated dataanyone that did an internship in college actually ended up getting job offer so my first decision point is "did this person do an internship or not?if yesgo ahead and bring them in in my experienceinternships are actually pretty good predictor of how good person is if they have the initiative to actually go out and do an internshipand actually learn something at that internshipthat' good sign do they currently have jobwellif they are currently employedin my very small fake dataset it turned out that they are worth hiringjust because somebody else thought they were worth hiring too obviously it would be little bit more of nuanced decision in the real world if they're not currently employeddo they have less than one prior employerif yesthis person has never held job and they never did an internship either probably not good hire decision don' hire that person but if they did have previous employerdid they at least go to top-tier schoolif notit' kind of iffy if sothen yeswe should hire this person based on the data that we trained on
Walking through a decision tree

So that's how you walk through the results of a decision tree. It's just like going through a flowchart, and it's kind of awesome that an algorithm can produce this for you. The algorithm itself is actually very simple. Let me explain how the algorithm works.

At each step of the decision tree flowchart, we find the attribute that we can partition our data on that minimizes the entropy of the data at the next step. So we have a resulting set of classifications - in this case hire or don't hire - and we want to choose the attribute decision at that step that will minimize the entropy at the next step. At each step we want to make all of the remaining choices result in either as many no-hires or as many hire decisions as possible. We want to make that data more and more uniform, so as we work our way down the flowchart, we ultimately end up with a set of candidates that are either all hires or all no-hires, and we can classify into yes/no decisions on the decision tree.

So we just walk down the tree, minimize entropy at each step by choosing the right attribute to decide on, and we keep on going until we run out. There's a fancy name for this algorithm: it's called ID3 (Iterative Dichotomiser 3). It is what's known as a greedy algorithm. As it goes down the tree, it just picks the attribute that will minimize entropy at that point. Now, that might not actually result in an optimal tree that minimizes the number of choices that you have to make, but it will result in a tree that works, given the data that you gave it.

Random forests technique

Now, one problem with decision trees is that they are very prone to overfitting, so you can end up with a decision tree that works beautifully for the data that you trained it on, but it might not be that great for actually predicting the correct classification for new people that it hasn't seen before. Decision trees are all about arriving at the right decision for the training data that you gave it, but maybe you didn't really take into account the right attributes, maybe you didn't give it enough of a representative sample of people to learn from. This can result in real problems.

So to combat this issue, we use a technique called random forests, where the idea is that we sample the data that we train on, in different ways, for multiple different decision trees. Each decision tree takes a different random sample from our set of training data and constructs a tree from it. Then each resulting tree can vote on the right result.

Now that technique of randomly resampling our data with the same model is a term called bootstrap aggregating, or bagging. This is a form of what we call ensemble learning, which we'll cover in more detail shortly. But the basic idea is that we have multiple trees, a forest of trees if you will, each of which uses a random subsample of the data that we have to train on. Then each of these trees can vote on the final result, and that will help us combat overfitting for a given set of training data.

The other thing random forests can do is restrict the number of attributes each tree can choose between at each stage, while it is trying to minimize the entropy as it goes. We can randomly pick which attributes it can choose from at each level. So that also gives us more variation from tree to tree, and therefore we get more of a variety of algorithms that can compete with each other. They can all vote on the final result using slightly different approaches to arriving at the same answer.

So that's how random forests work. Basically, it is a forest of decision trees where they are drawing from different samples, and also different sets of attributes at each stage that they can choose between. So, with all that, let's go make some decision trees. We'll use random forests as well when we're done, because scikit-learn makes it really, really easy to do, as you'll see soon.
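In the meantime, here is a tiny standalone sketch of the random forest idea using scikit-learn's RandomForestClassifier. This is my own toy illustration with made-up data; the book comes back to random forests with the real hiring data once the decision tree example is built:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                          # 200 made-up candidates, 3 numeric attributes
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # a made-up "hired or not" rule using two of them

# n_estimators is the number of trees in the forest; each tree sees a bootstrap
# sample of the rows and a random subset of the features at each split.
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(X, y)
print(forest.predict([[0.9, 0.8, 0.1], [0.1, 0.2, 0.9]]))   # each tree votes on the answer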
Decision trees - Predicting hiring decisions using Python

It turns out that it's easy to make decision trees; in fact, it's crazy just how easy it is, with just a few lines of Python code. So let's give it a try.

I've included a PastHires.csv file with your book materials, and that just includes some fabricated data, that I made up, about people who either got a job offer or not based on the attributes of those candidates.

import numpy as np
import pandas as pd
from sklearn import tree

input_file = "spark/DataScience/PastHires.csv"
df = pd.read_csv(input_file, header=0)
You'll want to immediately change the path used here for my own system (spark/DataScience/PastHires.csv) to wherever you have installed the materials for this book. I'm not sure where you put it, but it's almost certainly not there.

We will use pandas to read our CSV in, and create a DataFrame object out of it. Let's go ahead and run our code, and we can use the head() function on the DataFrame to print out the first few lines and make sure that it looks like it makes sense.

df.head()

Sure enough, we have some valid data in the output. So, for each candidate ID, we have their years of past experience, whether or not they were employed, their number of previous employers, their highest level of education, whether they went to a top-tier school, and whether they did an internship; and finally, in the Hired column, the answer - where we knew that we either extended a job offer to this person or not.

As usual, most of the work is just in massaging your data, preparing your data, before you actually run the algorithms on it, and that's what we need to do here. Now, scikit-learn requires everything to be numerical, so we can't have Ys and Ns and BSs and MSs and PhDs. We have to convert all those things to numbers for the decision tree model to work. The way to do this is to use some short-hand in pandas, which makes these things easy. For example:

d = {'Y': 1, 'N': 0}
df['Hired'] = df['Hired'].map(d)
df['Employed?'] = df['Employed?'].map(d)
df['Top-tier school'] = df['Top-tier school'].map(d)
df['Interned'] = df['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(d)
df.head()
Basically, we're making a dictionary in Python that maps the letter Y to the number 1 and the letter N to the value 0. We want to convert all our Ys to 1s and Ns to 0s, so 1 will mean yes and 0 will mean no. What we do is just take the Hired column from the DataFrame and call map() on it, using that dictionary. This goes through the entire Hired column, in the entire DataFrame, and uses the dictionary lookup to transform all the entries in that column. It returns a new DataFrame column that I'm putting back into the Hired column. This replaces the Hired column with one that's been mapped to 1s and 0s.

We do the same thing for Employed?, Top-tier school, and Interned, so all those get mapped using the yes/no dictionary; the Ys and Ns become 1s and 0s instead. For the Level of Education, we do the same trick: we just create a dictionary that assigns BS to 0, MS to 1, and PhD to 2 and use it to remap those degree names to actual numerical values. If I go ahead and run that and do df.head() again, you can see that it worked: all my yeses are 1s, my nos are 0s, and my Level of Education is now represented by a numerical value that has real meaning.

Next, we need to prepare everything to actually go into our decision tree classifier, which isn't that hard. To do that, we need to separate our feature information, which are the attributes that we're trying to predict from, and our target column, which contains the thing that we're trying to predict. To extract the list of feature name columns, we are just going to create a list of columns up to number 6. We go ahead and print that out:

features = list(df.columns[:6])
features
We get as output the column names that contain our feature information: Years Experience, Employed?, Previous employers, Level of Education, Top-tier school, and Interned. These are the attributes of candidates that we want to predict hiring on.

Next, we construct our y vector, which is assigned what we're trying to predict - that is, our Hired column:

y = df["Hired"]
X = df[features]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)

This code extracts the entire Hired column and calls it y. Then it takes all of our columns of feature data and puts them in something called X. X is a collection of all the feature columns, and X and y are the two things that our decision tree classifier needs.

To actually create the classifier itself, two lines of code: we call tree.DecisionTreeClassifier() to create our classifier, and then we fit it to our feature data (X) and the answers (y), whether or not people were hired. So let's go ahead and run that.

Displaying graphical data is a little bit tricky, and I don't want to distract us too much with the details here, so please just consider the following boilerplate code. You don't need to get into how GraphViz works, and dot files and all that stuff - it's not important to our journey right now. The code you need to actually display the end result of a decision tree is simply:

from IPython.display import Image
from sklearn.externals.six import StringIO
import pydot

dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data, feature_names=features)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())

So let's go ahead and run this and see what the output looks like. There we have it! How cool is that?! We have an actual flowchart here.
Now, let me show you how to read it. At each stage we have a decision. Remember that most of our data, which is yes or no, is going to be 0 or 1. So the first decision point becomes: is Employed? less than 0.5? Meaning that if we have an employment value of 0, that is no, we're going to go left; if employment is 1, that is yes, we're going to go right.

So, were they previously employed? If not, go left; if yes, go right. It turns out that in my sample data, everyone who is currently employed actually got a job offer, so I can very quickly say that if you are currently employed, yes, you're worth bringing in. Otherwise, we're going to follow down to the second level here.

How do you interpret this? The gini score is basically the measure of impurity it's using at each step; it plays the same role as the entropy we've been talking about. Remember, as we're going down, the algorithm is trying to minimize that at each split. The samples value is the remaining number of samples that haven't been sectioned off by a previous decision.

Say this person was employed. The way to read the right node is the value column, which tells you how many candidates at this point were no-hires and how many were hires. So again, the way to interpret the first decision point is: if Employed? was 1, I'm going to go to the right, meaning they are currently employed, and this brings me to a world where everybody got a job offer. That means I should hire this person.

Now let's say this person doesn't currently have a job. The next thing I'm going to look at is: did they do an internship? If yes, then we're at a point where, in our training data, everybody got a job offer. At that point, we can say our entropy is now 0 (gini = 0.0), because everyone's the same; they all got an offer. However, if we keep going down (where the person has not done an internship), we'll be at a point where the impurity is lower than where we started. It's getting lower and lower, and that's a good thing.

Next we're going to look at how much experience they have: do they have less than one year of experience? If they do have some experience and they've gotten this far, they're a pretty good no-hire decision. We end up at a point where we have zero entropy, but all three remaining samples in our training set were no-hires: three no-hires and zero hires. If they have less experience, then they're probably fresh out of college, and they still might be worth looking at.

The final thing we're going to look at is whether or not they went to a top-tier school; if so, they end up being a good prediction for a hire, and if not, they end up as a no-hire. We end up with one candidate who fell into that no-hire category, and zero that were hires; whereas for the candidates who did go to a top-tier school, we have zero no-hires and one hire.
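If you're curious how that gini number relates to the entropy we've been using, here's a tiny sketch of my own (not code from the book). Both measures are zero for a perfectly pure node and grow as the classes get more mixed; scikit-learn's trees just report gini impurity by default. The counts below are hypothetical, standing in for a node's value field:

import math

def gini(counts):
    # Gini impurity: 1 minus the sum of squared class proportions; 0 means a pure node
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    # Shannon entropy in bits; also 0 for a pure node
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# A hypothetical node with 5 no-hires and 3 hires: both measures are well above zero
print(gini([5, 3]), entropy([5, 3]))
# A pure node (0 no-hires, 4 hires): both measures come out to 0
print(gini([0, 4]), entropy([0, 4]))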
So you can see we just keep going until we reach an entropy of 0, if at all possible, for every case.

Ensemble learning - using a random forest

Now, let's say we want to use a random forest; you know, we're worried that we might be overfitting our training data. It's actually very easy to create a random forest classifier of multiple decision trees. To do that, we can use the same data that we created before. You just need your X and y vectors, that is, the set of features and the column that you're trying to predict on:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(X, y)

# Predict employment of an employed 10-year veteran
print(clf.predict([[10, 1, 4, 0, 0, 0]]))
# ...and an unemployed 10-year veteran
print(clf.predict([[10, 0, 4, 0, 0, 0]]))

We make a random forest classifier, also available from scikit-learn, and pass it the number of trees we want in our forest. So, we made ten trees in our random forest in the code above. We then fit that to the model.

You don't have to walk through the trees by hand, and when you're dealing with a random forest you can't really do that anyway. So, instead, we use the predict() function on the model, that is, on the classifier that we made. We pass in a list of all the different features for a given candidate that we want to predict employment for. If you remember, this maps to these columns: Years Experience, Employed?, Previous employers, Level of Education, Top-tier school, and Interned, interpreted as numerical values. We predict the employment of an employed 10-year veteran, and we also predict the employment of an unemployed 10-year veteran. And, sure enough, we get a result.
So, in this particular case, we ended up with a hire decision on both. But what's interesting is that there is a random component to that. You don't actually get the same result every time! More often than not, the unemployed person does not get a job offer, and if you keep running this you'll see that's usually the case. But the random nature of bagging, of bootstrap aggregating each one of those trees, means you're not going to get the same result every time. So maybe 10 isn't quite enough trees. Anyway, that's a good lesson to learn here!

Activity

For an activity, if you want to go back and play with this, mess around with my input data. Go ahead and edit the code we've been exploring, and create an alternate universe where it's a topsy-turvy world; for example, everyone that I gave a job offer to now doesn't get one, and vice versa. See what that does to your decision tree. Just mess around with it and try to interpret the results.

So, that's decision trees and random forests: one of the more interesting bits of machine learning, in my opinion. I always think it's pretty cool to just generate a flowchart out of thin air like that. Hopefully you'll find it useful.

Ensemble learning

When we talked about random forests, that was an example of ensemble learning, where we're actually combining multiple models together to come up with a better result than any single model could come up with. So, let's learn about that in a little bit more depth.

Remember random forests? We had a bunch of decision trees that were using different subsamples of the input data, and different sets of attributes to branch on, and they all voted on the final result when you were trying to classify something at the end. That's an example of ensemble learning. Another example: when we were talking about k-means clustering, we had the idea of maybe using different k-means models with different initial random centroids, and letting them all vote on the final result as well. That is also an example of ensemble learning.
Basically, the idea is that you have more than one model - they might be the same kind of model or different kinds of models - but you run them all on your set of training data, and they all vote on the final result for whatever it is you're trying to predict. And oftentimes, you'll find that this ensemble of different models produces better results than any single model could on its own.

A good example, from a few years ago, was the Netflix prize. Netflix ran a contest where they offered, I think it was a million dollars, to any researcher who could outperform their existing movie recommendation algorithm. The ones that won were ensemble approaches, where they actually ran multiple recommender algorithms at once and let them all vote on the final result. So, ensemble learning can be a very powerful, yet simple, tool for increasing the quality of your final results in machine learning. Let us now explore various types of ensemble learning:

Bootstrap aggregating or bagging: Random forests use a technique called bagging, short for bootstrap aggregating. This means that we take random subsamples of our training data and feed them into different versions of the same model and let them all vote on the final result. If you remember, random forests took many different decision trees that used a different random sample of the training data to train on, and then they all came together in the end to vote on a final result. That's bagging.

Boosting: Boosting is an alternate model, and the idea here is that each subsequent model boosts the attributes that address the areas that were misclassified by the previous model. So, you run train/test on a model, you figure out what it's basically getting wrong, and then you boost those cases in subsequent models, in the hope that those subsequent models will pay more attention to them and get them right. You run a model, figure out its weak points, amplify the focus on those weak points as you go, and keep building more and more models that keep refining the result, based on the weaknesses of the previous one.

Bucket of models: Another technique, and this is what that Netflix prize winner did, is called a bucket of models, where you might have entirely different models that try to predict something. Maybe I'm using k-means, a decision tree, and regression. I can run all three of those models together on a set of training data and let them all vote on the final classification result when I'm trying to predict something. And maybe that would be better than using any one of those models in isolation.
Stacking: Stacking has the same idea. You run multiple models on the data and combine the results together somehow. The subtle difference between a bucket of models and stacking is that with a bucket of models you pick the model that wins: you'd run train/test, find the model that works best for your data, and use that model. Stacking, by contrast, combines the results of all those models together to arrive at a final result.

Now, there is a whole field of research on ensemble learning that tries to find the optimal ways of doing it, and if you want to sound smart, usually that involves using the word Bayes a lot. There are some very advanced methods of doing ensemble learning, but all of them have weak points, and I think this is yet another lesson in always using the simplest technique that works well for us. These are all very complicated techniques that I can't really get into in the scope of this book, but at the end of the day, it's hard to outperform the simple techniques that we've already talked about. A few of the complex techniques are listed here:

Bayes optimal classifier: In theory, there's something called the Bayes optimal classifier that will always be the best, but it's impractical, because it's computationally prohibitive to do.

Bayesian parameter averaging: Many people have tried to do variations of the Bayes optimal classifier to make it more practical, like the Bayesian parameter averaging variation. But it's still susceptible to overfitting, and it's often outperformed by bagging, which is the same idea behind random forests: you just resample the data multiple times, run different models, and let them all vote on the final result. It turns out that works just as well, and it's a heck of a lot simpler!

Bayesian model combination: Finally, there's something called Bayesian model combination that tries to solve all the shortcomings of the Bayes optimal classifier and Bayesian parameter averaging. But, at the end of the day, it doesn't do much better than just cross-validating against the combination of models.

Again, these are very complex techniques that are very difficult to use in practice; we're better off with the simpler ones we've talked about in more detail. But if you want to sound smart and use the word Bayes a lot, it's good to at least be familiar with these techniques and know what they are.

So, that's ensemble learning. Again, the takeaway is that the simple techniques, like bootstrap aggregating, or bagging, or boosting, or stacking, or a bucket of models, are usually the right choices. There are some much fancier techniques out there, but they're largely theoretical. At least you know about them now.
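If you want to experiment with these ideas hands-on, scikit-learn exposes several of them directly. The sketch below is my own illustration, not code from the book: it reuses the X and y hiring data from earlier in this chapter and assumes a reasonably recent scikit-learn (the model_selection module and VotingClassifier). Treat it as a way to try the concepts, not a recipe:

from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Bagging: many trees, each trained on a bootstrap resample of the data
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10)

# Boosting: each new weak learner focuses on what the previous ones got wrong
boosted = AdaBoostClassifier(n_estimators=10)

# A bucket-of-models-style ensemble: different model types voting on the answer
voting = VotingClassifier([('tree', DecisionTreeClassifier()),
                           ('logreg', LogisticRegression())])

# Compare them with simple cross-validation on the hiring data (X, y) from before
for name, model in [('bagging', bagged), ('boosting', boosted), ('voting', voting)]:
    print(name, cross_val_score(model, X, y, cv=3).mean())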
It's always a good idea to try ensemble learning out. It's been proven time and time again that it will produce better results than any single model, so definitely consider it!

Support vector machine overview

Finally, we're going to talk about support vector machines (SVM), which is a very advanced way of clustering or classifying higher-dimensional data. What if you have multiple features that you want to predict from? SVM can be a very powerful tool for doing that, and the results can be scarily good! It's very complicated under the hood, but the important things are understanding when to use it and how it works at a higher level. So, let's cover SVM now.

Support vector machines is a fancy name for what actually is a fancy concept, but fortunately, it's pretty easy to use. The important thing is knowing what it does and what it's good for. Support vector machines work well for classifying higher-dimensional data, and by that I mean lots of different features. It's easy to use something like k-means clustering to cluster data that has two dimensions - maybe age on one axis and income on another - but what if I have many, many different features that I'm trying to predict from? Well, support vector machines might be a good way of doing that.

Support vector machines find higher-dimensional support vectors across which to divide the data (mathematically, these support vectors define hyperplanes). That is, mathematically, what support vector machines can do is find higher-dimensional support vectors (that's where the name comes from) that define the higher-dimensional planes that split the data into different clusters.

Obviously, the math gets pretty weird pretty quickly with all this. Fortunately, the scikit-learn package will do it all for you, without you having to actually get into it. Under the hood, though, you need to understand that it uses something called the kernel trick to find those support vectors or hyperplanes that might not be apparent in lower dimensions. There are different kernels you can use to do this in different ways. The main point is that SVMs are a good choice if you have higher-dimensional data with lots of different features, and there are different kernels you can use that have varying computational costs and might be better fits for the problem at hand.

The important point is that SVMs employ some advanced mathematical trickery to cluster data, and they can handle datasets with lots of features. They're also fairly expensive - the "kernel trick" is the only thing that makes it practical.
I want to point out that SVM is a supervised learning technique. We're actually going to train it on a set of training data, and we can use that to make predictions for future unseen data or test data. It's a little bit different from k-means clustering in that k-means was completely unsupervised; a support vector machine, by contrast, trains on actual training data where you have the answer - the correct classification - for some set of data that it can learn from. So SVMs are useful for classification and clustering, if you will, but they're a supervised technique!

One example that you often see with SVMs uses something called support vector classification (SVC). The typical example uses the Iris dataset, which is one of the sample datasets that comes with scikit-learn. This set is a classification of different flowers: different observations of Iris flowers and their species. The idea is to classify these using information about the length and width of the petal of each flower, and the length and width of the sepal of each flower. (The sepal, apparently, is a little support structure underneath the petal. I didn't know that until now either.) So you have four dimensions of attributes: the length and width of the petal, and the length and width of the sepal. You can use those to predict the species of an Iris given that information.

Here's an example of doing that with SVC: basically, we have sepal width and sepal length projected down to two dimensions so we can actually visualize it.
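If you want to generate that kind of plot yourself, here's a minimal sketch along the lines of the standard scikit-learn Iris SVC example. It's my own condensation, so treat the plotting details as illustrative: it trains an SVC with a linear kernel on just the two sepal measurements and draws the resulting decision regions with the training points on top.

import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets

# The Iris dataset ships with scikit-learn
iris = datasets.load_iris()
X = iris.data[:, :2]          # sepal length and sepal width only, so we can plot in 2D
y = iris.target

# Fit a support vector classifier with a linear kernel
clf = svm.SVC(kernel='linear', C=1.0).fit(X, y)

# Evaluate the classifier over a mesh of points and plot the decision regions
xx, yy = np.meshgrid(np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, 0.02),
                     np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.show()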
With different kernels you might get different results. SVC with a linear kernel will produce something very much like the preceding image. You can use polynomial kernels or fancier kernels that might project down to curves in two dimensions. You can do some pretty fancy classification this way.

These have increasing computational costs, and they can produce more complex relationships. But again, it's a case where too much complexity can yield misleading results, so you need to be careful and actually use train/test when appropriate. Since we are doing supervised learning, you can do train/test to find the right model that works, or maybe use an ensemble approach. You need to arrive at the right kernel for the task at hand. For things like polynomial SVC, what's the right degree polynomial to use? Even things like linear SVC will have different parameters associated with them that you might need to optimize for. This will make more sense with a real example, so let's dive into some actual Python code and see how it works!

Using SVM to cluster people by using scikit-learn

Let's try out some support vector machines here. Fortunately, it's a lot easier to use than it is to understand. We're going to go back to the same example I used for k-means clustering, where I create some fabricated cluster data about the ages and incomes of a hundred random people.

If you want to go back to the k-means clustering section, you can learn more about the idea behind this code that generates the fake data. And if you're ready, please consider the following code:

import numpy as np

# Create fake income/age clusters for N people in k clusters
def createClusteredData(N, k):
    pointsPerCluster = float(N)/k
    X = []
    y = []
    for i in range(k):
        incomeCentroid = np.random.uniform(20000.0, 200000.0)
        ageCentroid = np.random.uniform(20.0, 70.0)
        for j in range(int(pointsPerCluster)):
            X.append([np.random.normal(incomeCentroid, 10000.0),
                      np.random.normal(ageCentroid, 2.0)])
            y.append(i)
    X = np.array(X)
    y = np.array(y)
    return X, y

Please note that because we're using supervised learning here, we not only need the feature data again, but we also need the actual answers for our training dataset.

What the createClusteredData function does here is create a bunch of random data for people that are clustered around k points, based on age and income, and it returns two arrays. The first array is the feature array, which we're calling X, and then we have the array of the thing we're trying to predict, which we're calling y. A lot of times in scikit-learn, when you're creating a model that you can predict from, those are the two inputs it will take: a list of feature vectors, and the thing that you're trying to predict, which it can learn from. So, we'll go ahead and run that.

Now we're going to use the createClusteredData function to create 100 random people in 5 different clusters. We will just create a scatter plot to illustrate those and see where they land:

%matplotlib inline
from pylab import *

(X, y) = createClusteredData(100, 5)

plt.figure(figsize=(8, 6))
plt.scatter(X[:,0], X[:,1], c=y.astype(np.float))
plt.show()

The resulting graph shows the data we're playing with. Every time you run this you're going to get a different set of clusters; I didn't actually use a random seed, to make life interesting.

A couple of new things here: I'm using the figsize parameter on plt.figure() to make a larger plot, so if you ever need to adjust the size in matplotlib, that's how you do it. And I'm using that same trick of using the color as the classification number that I end up with, so the number of the cluster that I started with is being plotted as the color of each data point. You can see it's a pretty challenging problem; there's definitely some intermingling of clusters here.
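As an aside, if you'd rather get the same fake clusters on every run - handy when you're comparing different kernels later - you can seed NumPy's random number generator before generating the data. This is my own suggestion, and the seed value is arbitrary:

np.random.seed(1234)   # any fixed seed makes createClusteredData reproducible
(X, y) = createClusteredData(100, 5)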
Now we can use linear SVC (SVC is a form of SVM) to actually partition that into clusters. We're going to use SVM with a linear kernel, and with a C value of 1.0. C is just an error penalty term that you can adjust; it's 1 by default. Normally, you won't want to mess with it, but if you're doing some sort of convergence on the right model using ensemble learning or train/test, it's one of the things you can play with. Then, we fit that model to our feature data and the actual classifications that we have for our training dataset.

from sklearn import svm, datasets

C = 1.0
svc = svm.SVC(kernel='linear', C=C).fit(X, y)

So, let's go ahead and run that. I don't want to get too much into how we're actually going to visualize the results here; just take it on faith that plotPredictions() is a function that can plot the classification ranges and the SVC.
It helps us visualize where the different classifications come out. Basically, it creates a mesh across the entire grid, plots the different classifications from the SVC model as different colors on that grid, and then plots our original data on top of that:

def plotPredictions(clf):
    xx, yy = np.meshgrid(np.arange(0, 250000, 10),
                         np.arange(10, 70, 0.5))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    plt.figure(figsize=(8, 6))
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
    plt.scatter(X[:,0], X[:,1], c=y.astype(np.float))
    plt.show()

plotPredictions(svc)

So, let's see how that works out. SVC is computationally expensive, so it takes a long time to run.
You can see that it did its best. Given that it had to draw straight lines and polygonal shapes, it did a decent job of fitting to the data that we had. It did miss a few, but by and large the results are pretty good.

SVC is actually a very powerful technique; its real strength is in higher-dimensional feature data. Go ahead and play with it! By the way, if you want to do more than just visualize the results, you can use the predict() function on the SVC model, just like on pretty much any model in scikit-learn, and pass in a feature array that you're interested in. If I want to predict the classification for someone making $200,000 a year who was 40 years old, I would use the following code:

svc.predict([[200000, 40]])

This tells us which cluster that person lands in, in our case.

If I had someone making $50,000 who was 65, I would use the following code:

svc.predict([[50000, 65]])

That person ends up in a different cluster, whatever that represents in this example. So, go ahead and play with it.

Activity

Now, linear is just one of many kernels that you can use; like I said, there are several different kernels available. One of them is a polynomial model, so you might want to play with that. Please do go ahead and look up the documentation. It's good practice for you to be looking at the docs; if you're going to be using scikit-learn in any sort of depth, there are a lot of different capabilities and options available to you. So, go look up scikit-learn online, find out what the other kernels are for the SVC method, try them out, and see if you actually get better results or not.
This is a little exercise, not just in playing with SVM and different kinds of SVC, but also in familiarizing yourself with how to learn more on your own about SVC. Honestly, a very important trait of any data scientist or engineer is the ability to go and look up information yourself when you don't know the answers.

So, I'm not being lazy by not telling you what those other kernels are; I want you to get used to the idea of having to look this stuff up on your own, because if you have to ask someone else about these things all the time, you're going to get really annoying, really fast, in a workplace. Go look it up, play around with it, and see what you come up with.

So, that's SVM/SVC: a very high-powered technique that you can use for classifying data in supervised learning. Now you know how it works and how to use it, so keep it in your bag of tricks!

Summary

In this chapter, we saw some interesting machine learning techniques. We covered one of the fundamental concepts behind machine learning called train/test. We saw how to use train/test to try to find the right degree polynomial to fit a given set of data. We then analyzed the difference between supervised and unsupervised machine learning.

We saw how to implement a spam classifier and enable it to determine whether an email is spam or not using the Naive Bayes technique. We talked about k-means clustering, an unsupervised learning technique, which helps group data into clusters. We also looked at an example using scikit-learn that clustered people based on their income and age.

We then went on to look at the concept of entropy and how to measure it. We walked through the concept of decision trees and how, given a set of training data, you can actually get Python to generate a flowchart for you to make a decision. We also built a system that automatically filters resumes based on the information in them and predicts the hiring decision for a person.

We learned along the way the concept of ensemble learning, and we concluded by talking about support vector machines, which are a very advanced way of clustering or classifying higher-dimensional data. We then moved on to use SVM to cluster people using scikit-learn. In the next chapter, we'll talk about recommender systems.
Recommender Systems

Let's talk about my personal area of expertise: recommender systems, that is, systems that can recommend stuff to people based on what everybody else did. We'll look at some examples of this and a couple of ways to do it - specifically, two techniques called user-based and item-based collaborative filtering. So, let's dive in.

I spent most of my career at amazon.com and imdb.com, and a lot of what I did there was developing recommender systems: things like people who bought this also bought, or recommended for you, and things that did movie recommendations for people. So, this is something I know a lot about personally, and I hope to share some of that knowledge with you. We'll walk through, step by step, covering the following topics:

What are recommender systems?
User-based collaborative filtering
Item-based collaborative filtering
Finding movie similarities
Making movie recommendations to people
Improving the recommender's results
What are recommender systems?

Well, like I said, Amazon is a great example, and one I'm very familiar with. If you go to their recommendations section, as shown in the following image, you can see that it will recommend things that you might be interested in purchasing based on your past behavior on the site. The recommender system might include things that you've rated, or things that you bought, and other data as well. I can't go into the details because they'll hunt me down and, you know, do bad things to me. But it's pretty cool.

You can also think of the people who bought this also bought feature on Amazon as a form of recommender system. The difference is that the recommendations you're seeing on your Amazon recommendations page are based on all of your past behavior, whereas features like people who bought this also bought or people who viewed this also viewed are just based on the thing you're looking at right now, showing you things that are similar to it that you might also be interested in. And, it turns out, what you're doing right now is probably the strongest signal of your interest anyhow.
Another example is from Netflix, as shown in the following image (a screenshot from Netflix). They have various features that try to recommend new movies or other movies you haven't seen yet, based on the movies that you liked or watched in the past, and they break that down by genre. They have a different spin on things, where they try to identify the genres or types of movies that they think you're enjoying the most, and then show you more results from those genres. So, that's another example of a recommender system in action.
The whole point of it is to help you discover things you might not know about before, so it's pretty cool. It gives individual movies, or books, or music, or whatever, a chance to be discovered by people who might not have heard about them before. So not only is it cool technology, it also levels the playing field a little bit and helps new items get discovered by the masses. It plays a very important role in today's society, at least I'd like to think so! There are a few ways of doing this, and we'll look at the main ones in this chapter.

User-based collaborative filtering

First, let's talk about recommending stuff based on your past behavior. One technique is called user-based collaborative filtering, and here's how it works.

Collaborative filtering, by the way, is just a fancy name for recommending stuff based on the combination of what you did and what everybody else did. So, it's looking at your behavior and comparing it to everyone else's behavior, to arrive at the things that might be interesting to you that you haven't heard of yet.

The idea here is that we build up a matrix of everything that every user has ever bought, or viewed, or rated, or whatever signal of interest you want to base the system on. Basically, we end up with a row for every user in our system, and that row contains all the things they did that might indicate some sort of interest in a given product. Picture a table: I have users for the rows, and each column is an item. That might be a movie, a product, a web page, whatever; you can use this for many different things.

I then use that matrix to compute the similarity between different users. I basically treat each row as a vector, and I can compute the similarity between each pair of user vectors based on their behavior. Two users who liked mostly the same things would be very similar to each other, and I can then sort by those similarity scores. If I can find all the users similar to you based on their past behavior, I can find the users most similar to me and recommend stuff that they liked that I didn't look at yet.
Let's look at a real example, and it'll make a little bit more sense.

Let's say that the nice lady in the preceding image watched Star Wars and The Empire Strikes Back, and she loved them both. So, we have a user vector for this lady, giving a 5-star rating to Star Wars and The Empire Strikes Back.

Let's also say Mr. Edgy Mohawk Man comes along, and he only watched Star Wars. That's the only thing he's seen; he doesn't know about The Empire Strikes Back yet, somehow. He lives in some strange universe where he doesn't know that there are actually many, many Star Wars movies, growing every year in fact.

We can of course say that this guy is actually similar to the lady, because they both enjoyed Star Wars a lot, so their similarity score is probably fairly good. We can then ask: what has this lady enjoyed that he hasn't seen yet? The Empire Strikes Back is one, so we can take the information that these two users are similar based on their enjoyment of Star Wars, find that this lady also liked The Empire Strikes Back, and present that as a good recommendation for Mr. Edgy Mohawk Man.

We can then go ahead and recommend The Empire Strikes Back to him, and he'll probably love it, because in my opinion it's actually a better film! But I'm not going to get into geek wars with you here.
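To make that concrete in code, here's a toy sketch of my own - not the approach we'll actually use later in this chapter - that captures the mechanics: a tiny made-up user-by-movie ratings matrix, cosine similarity between user rows (with 0 crudely standing in for "hasn't seen it"), and a recommendation pulled from the most similar user's ratings.

import numpy as np
import pandas as pd

# Tiny made-up user x movie ratings matrix (0 means "hasn't seen it")
ratings = pd.DataFrame(
    {'Star Wars':               [5, 5, 0],
     'The Empire Strikes Back': [5, 0, 4],
     'Sleepless in Seattle':    [0, 0, 5]},
    index=['nice lady', 'edgy mohawk man', 'someone else'])

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

target = 'edgy mohawk man'
sims = ratings.drop(target).apply(lambda row: cosine_sim(row, ratings.loc[target]), axis=1)
most_similar = sims.idxmax()      # the user whose tastes look most like the target's

# Recommend whatever that similar user rated highly that the target hasn't seen yet
unseen = ratings.columns[ratings.loc[target] == 0]
print(ratings.loc[most_similar, unseen].sort_values(ascending=False))

Running it, the lady comes out as the user most similar to Mr. Edgy Mohawk Man, and The Empire Strikes Back tops the list of things he hasn't seen - the same reasoning we just walked through by hand.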
Limitations of user-based collaborative filtering

Now, unfortunately, user-based collaborative filtering has some limitations. When we think about recommending things based on relationships between items and people, our minds tend to go to relationships between people. We want to find people that are similar to you and recommend stuff that they liked. That's the intuitive thing to do, but it's not the best thing to do! The following are some limitations of user-based collaborative filtering:

One problem is that people are fickle; their tastes are always changing. Maybe that nice lady in the previous example had a brief science fiction action film phase that she went through and then got over, and maybe later in her life she started getting more into dramas or romance films or romcoms. What would happen if my edgy mohawk guy ended up with a high similarity to her just based on her earlier sci-fi period, and we ended up recommending romantic comedies to him as a result? That would be bad. I mean, there is some protection against that in how we compute the similarity scores to begin with, but it still pollutes our data that people's tastes can change over time. So, comparing people to people isn't always a straightforward thing to do, because people change.

The other problem is that there are usually a lot more people than there are things in your system. There are 7 billion people in the world and counting; there probably aren't 7 billion movies in the world, or 7 billion items that you might be recommending out of your catalog. The computational cost of finding all the similarities between all of the users in your system is probably much greater than the cost of finding similarities between the items in your system. So, by focusing the system on users, you're making your computational problem a lot harder than it might need to be, because you have a lot of users - at least hopefully you do, if you're working for a successful company.

The final problem is that people do bad things. There's a very real economic incentive to make sure that your product or your movie or whatever it is gets recommended to people, and there are people who try to game the system to make that happen for their new movie, or their new product, or their new book, or whatever. It's pretty easy to fabricate fake personas in the system by creating a new user and having them do a sequence of events that likes a lot of popular items and then likes your item too. This is called a shilling attack, and we would ideally like a system that can deal with that.
There is research around how to detect and avoid these shilling attacks in user-based collaborative filtering, but an even better approach would be to use a totally different technique entirely, one that's not so susceptible to gaming the system.

That's user-based collaborative filtering. Again, it's a simple concept: you look at similarities between users based on their behavior, and recommend stuff that a user similar to you enjoyed that you haven't seen yet. Now, that does have its limitations, as we talked about. So, let's talk about flipping the whole thing on its head, with a technique called item-based collaborative filtering.

Item-based collaborative filtering

Let's now try to address some of the shortcomings in user-based collaborative filtering with a technique called item-based collaborative filtering, and we'll see how that can be more powerful. It's actually one of the techniques that Amazon uses under the hood, and they've talked about this publicly, so I can tell you that much. Let's see why it's such a great idea. With user-based collaborative filtering we base our recommendations on relationships between people, but what if we flip that and base them on relationships between items? That's what item-based collaborative filtering is.

Understanding item-based collaborative filtering

This draws on a few insights. For one thing, we talked about people being fickle: their tastes can change over time, so comparing one person to another person based on their past behavior becomes pretty complicated. People have different phases where they have different interests, and you might not be comparing people who are in the same phase as each other. But an item will always be whatever it is. A movie will always be a movie; it's never going to change. Star Wars will always be Star Wars - well, until George Lucas tinkers with it a little bit - but for the most part, items do not change as much as people do. So, we know that these relationships are more permanent, and there's more of a direct comparison you can make when computing similarity between items, because they do not change over time.
The other advantage is that there are generally fewer things that you're trying to recommend than there are people you're recommending to. Again, 7 billion people in the world: you're probably not offering 7 billion things on your website to recommend to them, so you can save a lot of computational resources by evaluating relationships between items instead of users, because you will probably have fewer items than you have users in your system. That means you can run your recommendations more frequently, make them more current, more up-to-date, and better! You can use more complicated algorithms because you have fewer relationships to compute, and that's a good thing!

It's also harder to game the system. We talked about how easy it is to game a user-based collaborative filtering approach by just creating some fake users that like a bunch of popular stuff and then the thing you're trying to promote. With item-based collaborative filtering that becomes much more difficult. You have to game the system into thinking there are relationships between items, and since you probably don't have the capability to create fake items with fake ties to other items based on many, many other users, it's a lot harder to game an item-based collaborative filtering system, which is a good thing.

While I'm on the topic of gaming the system, another important thing is to make sure that people are voting with their money. A general technique for avoiding shilling attacks, or people trying to game your recommender system, is to make sure that the signal behavior is based on people actually spending money. You're always going to get better and more reliable results when you base recommendations on what people actually bought, as opposed to what they viewed or clicked on.

How item-based collaborative filtering works

Alright, let's talk about how item-based collaborative filtering works. It's very similar to user-based collaborative filtering, but instead of users, we're looking at items.

Let's go back to the example of movie recommendations. The first thing we would do is find every pair of movies that was watched by the same person. We go through and find every pair of movies watched by the same people, and then we measure how similarly all of those people rated the two movies. By this means we can compute the similarity between two different movies, based on the ratings of the people who watched both of them.
So, let's presume I have a movie pair: maybe Star Wars and The Empire Strikes Back. I find a list of everyone who watched both of those movies, then I compare their ratings to each other, and if they're similar, then I can say these two movies are similar, because they were rated similarly by people who watched both of them. That's the general idea here. That's one way to do it - there's more than one way!

And then I can just sort everything by movie, and then by the similarity strength of all the similar movies to it, and there are my results for people who liked this also liked, or people who rated this highly also rated this highly, and so on and so forth. And like I said, that's just one way of doing it.

That's step one of item-based collaborative filtering: first I find relationships between movies based on the behavior of the people who watched every given pair of movies. It'll make more sense when we go through the following example.
For example, let's say that our nice young lady in the preceding image watched Star Wars and The Empire Strikes Back and liked both of them, so she rated them both five stars or something.

Now, along comes Mr. Edgy Mohawk Man, who also watched Star Wars and The Empire Strikes Back and also liked both of them. At this point we can say there's a relationship: there is a similarity between Star Wars and The Empire Strikes Back based on these two users who liked both movies.

What we're going to do is look at each pair of movies. We have the pair Star Wars and The Empire Strikes Back, and then we look at all the users who watched both of them, which are these two people. If they both liked them, then we can say the movies are similar to each other; or, if they both disliked them, we can also say they're similar to each other, right? We're just looking at the similarity of these two users' behavior on the two movies in this movie pair.

So, along comes Mr. Moustachy Lumberjack Hipster Man, and he watches The Empire Strikes Back. He lives in some strange world where he watched The Empire Strikes Back but had no idea that Star Wars, the first movie, existed.
Well, that's fine: we computed a relationship between The Empire Strikes Back and Star Wars based on the behavior of the other two people, so we know that these two movies are similar to each other. So, given that Mr. Hipster Man liked The Empire Strikes Back, we can say with good confidence that he would also like Star Wars, and we can then recommend that back to him as his top movie recommendation, as in the following illustration.

You can see that you end up with very similar results in the end, but we've flipped the whole thing on its head. Instead of focusing the system on relationships between people, we're focusing it on relationships between items, and those relationships are still based on the aggregate behavior of all the people that watched them. But fundamentally, we're looking at relationships between items and not relationships between people. Got it?
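Here's that first step as a toy sketch of my own, with made-up ratings - the real thing, on the MovieLens data, is coming next. For a pair of movies, collect only the users who rated both, then use the correlation between those two rating vectors as the movie-to-movie similarity score:

import numpy as np

# Made-up ratings: user -> {movie: rating}
ratings = {
    'nice lady':       {'Star Wars': 5, 'The Empire Strikes Back': 5, 'Sleepless in Seattle': 1},
    'edgy mohawk man': {'Star Wars': 4, 'The Empire Strikes Back': 5},
    'hipster man':     {'The Empire Strikes Back': 4},
    'someone else':    {'Star Wars': 2, 'The Empire Strikes Back': 1, 'Sleepless in Seattle': 5},
}

def movie_similarity(movie_a, movie_b):
    # Look only at the users who rated BOTH movies, then correlate their ratings
    a, b = [], []
    for user_ratings in ratings.values():
        if movie_a in user_ratings and movie_b in user_ratings:
            a.append(user_ratings[movie_a])
            b.append(user_ratings[movie_b])
    if len(a) < 2:
        return 0.0   # not enough overlap to say anything
    return np.corrcoef(a, b)[0, 1]

print(movie_similarity('Star Wars', 'The Empire Strikes Back'))
print(movie_similarity('Star Wars', 'Sleepless in Seattle'))

A high score means the pair tends to be liked (or disliked) by the same people, and the guard simply refuses to score pairs that too few people have seen together - a real-world wrinkle we'll run into again shortly.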
Collaborative filtering using Python

Alright, so let's do it! We have some Python code that will use pandas, and the various other tools at our disposal, to create movie recommendations with a surprisingly small amount of code.

The first thing we're going to do is show you item-based collaborative filtering in practice. We'll build up people who watched this also watched - basically, people who rated things highly also rated this thing highly - building up these movie-to-movie relationships. We're going to base it on real data that we got from the MovieLens project. If you go to MovieLens.org, there's actually an open movie recommender system there, where people can rate movies and get recommendations for new movies.

They make all the underlying data publicly available for researchers like us. So, we're going to use some real movie ratings data - it's quite a few years old, so keep that in mind, but it is real behavior data that we're finally going to be working with here. We will use it to compute similarities between movies. And that data in and of itself is useful. You can use it to say people who liked this also liked that. Let's say I'm looking at a web page for a movie; the system can then say: if you liked this movie - and given that you're looking at it, you're probably interested in it - then you might also like these movies. That's a form of recommender system right there, even though we don't even know who you are.

Now, it is real-world data, so we're going to encounter some real-world problems with it. Our initial set of results aren't going to look good, so we're going to spend a little bit of extra time trying to figure out why, which is a lot of what you spend your time doing as a data scientist: correct those problems, and go back and run it again until we get results that make sense.

And finally, we'll do item-based collaborative filtering in its entirety, where we actually recommend movies to individuals based on their own behavior. So, let's do this! Let's get started!

Finding movie similarities

Let's apply the concept of item-based collaborative filtering. To start with: movie similarities - figuring out what movies are similar to other movies. In particular, we'll try to figure out what movies are similar to Star Wars, based on user rating data, and we'll see what we get out of it. Let's dive in!
Okay, so let's go ahead and compute the first half of item-based collaborative filtering, which is finding similarities between items.

Download and open the SimilarMovies.ipynb file. In this case, we're going to be looking at similarities between movies, based on user behavior. And we're going to be using some real movie rating data from the GroupLens project. GroupLens.org provides real movie ratings data, from real people who are using the MovieLens.org website to rate movies and get recommendations back for new movies that they want to watch.

We have included the data files that you need from the GroupLens dataset with the course materials, and the first thing we need to do is import those into a pandas DataFrame. We're really going to see the full power of pandas in this example; it's pretty cool stuff!
Understanding the code

The first thing we're going to do is import the u.data file, which is part of the MovieLens dataset: a tab-delimited file that contains every rating in the dataset.

import pandas as pd

r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/packt/datascience/ml-100k/u.data',
                      sep='\t', names=r_cols, usecols=range(3))

Note that you'll need to change the path here to wherever you stored the downloaded MovieLens files on your computer. The way this works is that even though we're calling read_csv on pandas, we can specify a different separator than a comma - in this case, a tab. We're basically saying: take the first three columns in the u.data file and import them into a new DataFrame with three columns: user_id, movie_id, and rating.

What we end up with here is a DataFrame that has a row for every user_id, which identifies some person, and then, for every movie they rated, we have the movie_id, which is some numerical shorthand for a given movie - so Star Wars, for example, is just some number - and their rating, 1 to 5 stars. So, we have here a database, a DataFrame, of every user and every movie they rated.

Now, we want to be able to work with movie titles, so we can interpret these results more intuitively, so we're going to use their human-readable names instead. If you were using a truly massive dataset, you'd save that step until the end, because you want to be working with numbers - they're more compact - for as long as possible. For the purpose of example and teaching, though, we'll keep the titles around so you can see what's going on.

m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/packt/datascience/ml-100k/u.item',
                     sep='|', names=m_cols, usecols=range(2))

There's a separate data file in the MovieLens dataset called u.item. It is pipe-delimited, and the first two columns that we import are the movie_id and the title of that movie.

So, now we have two DataFrames: ratings has all the user ratings, and movies has the title for every movie_id. We can then use the magical merge function in pandas to mush it all together.

ratings = pd.merge(movies, ratings)