If you open up the DataScience folder for this class, which you downloaded in the earlier section, you should find a Python .ipynb notebook file. Go ahead and double-click on it; it should open right up in Canopy if you have everything installed properly, and it should look a little bit like the following screenshot. Newer versions of Canopy will open the code in your web browser rather than the Canopy editor; this is okay. One cool thing about Python is that there are several ways to run code with it. You can run it as a script, like you would with a normal programming language. You can also write in this thing called the IPython Notebook, which is what we're using here. It's a format where you have a web browser-like view in which you can write little notations and notes to yourself in HTML markup, and you can also embed actual code that really runs using the Python interpreter.
Understanding Python code

The first example of some Python code that I want to give you is right here. The following block of code represents some real Python code that we can actually run right within this view of the entire notebook page, but let's zoom in now and look at that code.
Let's take a look at what's going on. We have a list of numbers, and a list in Python is kind of like an array in other languages. It is designated by square brackets. We have this data structure of a list that contains the numbers 1 through 6, and then, to iterate through every number in that list, we say for number in listOfNumbers: — that's the Python syntax for iterating through a list of stuff — followed by a colon. Tabs and whitespace have real meaning in Python, so you can't just format things any way you want; you have to pay attention to them. The point I want to make is that in other languages it's pretty typical to have a bracket or brace of some sort to denote that I'm inside a for loop, an if block, or some other block of code, but in Python that's all designated with whitespace. A tab is actually important in telling Python what's in which block of code:

listOfNumbers = [1, 2, 3, 4, 5, 6]
for number in listOfNumbers:
    print number,
    if (number % 2 == 0):
        print "is even"
    else:
        print "is odd"

print("All done.")
You'll notice that within this for block we have one tab stop of indentation, and for every number in listOfNumbers we execute all of the code that's tabbed in by that one tab stop. We print the number, and the comma just means that we're not going to do a new line afterwards — we'll print something else right after it. If the number is even (number % 2 == 0), we'll say "is even"; otherwise, we'll say "is odd", and when we're done, we'll print out "All done." You can see the output right below the code. I ran it before and saved the output within my notebook, but if you want to run it yourself, you can just click within that block and click on the Play button, and it will actually execute. Just to convince yourself that it's really doing something, let's change the final print statement to say something else, say "Hooray! We're all done! Let's party!" If I run this now, you can see, sure enough, my message there has changed.
So again, the point I want to make is that whitespace is important. You designate blocks of code that run together — such as for loops or if/then statements — using indentation or tabs, so remember that. Also, pay attention to your colons; you'll notice that a lot of these clauses begin with a colon.

Importing modules

Python itself, like any language, is fairly limited in what it can do. The real power of using Python for machine learning, data mining, and data science comes from all the external libraries that are available for it. One of those libraries is called NumPy, or Numeric Python, and here, for example, we can import the NumPy package, which is included with Canopy, as np:

import numpy as np

This means that I'll refer to the NumPy package as np. I could call it anything I want — I could call it Fred or Tim — but it's best to stick with something that actually makes sense. Now that I'm calling the NumPy package np, I can refer to it using np. In this example, I call the random module that's provided as part of the NumPy package, and call its normal function to generate a normal distribution of random numbers using the parameters I pass in, and print them out. Since it is random, I should get different results every time, and sure enough I do. That's pretty cool.
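Here's a minimal, runnable sketch of that idea. The mean, standard deviation, and count values below are illustrative stand-ins, not necessarily the exact numbers used in the notebook:

import numpy as np

# np.random.normal(mean, standard_deviation, number_of_values)
A = np.random.normal(25.0, 5.0, 10)  # example parameters; swap in your own
print(A)

Every run produces a different set of values centered around whatever mean you ask for.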
Data structures

Let's move on to data structures. If you need to pause and let things sink in a little bit, or you want to play around with these a little bit more, feel free to do so. The best way to learn this stuff is to dive in and actually experiment, so I definitely encourage doing that, and that's why I'm giving you working IPython/Jupyter Notebooks, so you can actually go in, mess with the code, and do different stuff with it. For example, here we have a distribution centered around one value, but if I change the mean in that np.random.normal call to something completely different and run it again — hey, all my numbers changed! They're centered around the new mean now. How about that?

Alright, let's talk about data structures a little bit. As we saw in our first example, you can have a list, and the syntax looks like this.

Experimenting with lists

You can call a list x, for example, and assign it to a set of numbers, and the square brackets indicate that we are using a Python list. Lists are mutable objects that I can add things to and rearrange as much as I want. There's a built-in function for determining the length of a list, called len, and if I type len(x), it gives me back the number of elements in the list:

x = [1, 2, 3, 4, 5, 6]
print(len(x))

Just to make sure, and again to drive home the point that this is actually running real code, let's add another number to that list definition and run it again. Now len(x) comes back one higher, because there's one more number in the list.
Now go back to the original example. You can also slice lists: if you want to take a subset of a list, there's a very simple syntax for doing so:

x[:3]

Pre-colon

If, for example, you want to take the first three elements of a list — everything before element number 3 — we can say x[:3] to get them. If you think about what's going on there, as far as indices go, like in most languages, we start counting from 0. So element 0 is the first item, element 1 is the second, and element 2 is the third. Since we're saying we want everything before element number 3, that's what we're getting. So, never forget that in most languages you start counting at 0, not 1. This can confuse matters, but in this case it makes intuitive sense: you can think of that colon as meaning I want everything up to, but not including, element 3 — the first three elements. I could change that to 4, just to make the point again that we're actually doing something real here:

x[:4]

Post-colon

Now, if I put the colon on the other side of the 3, that says I want element 3 and everything after it. So if I say x[3:], that gives me the element at index 3 and everything past it — the tail end of the list.
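A quick sketch to make the slicing rules concrete — the list values here are just illustrative:

x = [1, 2, 3, 4, 5, 6]   # example values
print(x[:3])   # everything before index 3 -> [1, 2, 3]
print(x[3:])   # index 3 and everything after -> [4, 5, 6]

The index on the right of the colon is excluded; the index on the left is included.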
You might want to keep this IPython/Jupyter Notebook file around — it's a good reference, because sometimes it can get confusing as to whether the slicing operator includes a given element or goes up to, but not including, it. The best way is to just play around with it here and remind yourself.

Negative syntax

One more thing you can do is use negative indices in a slice:

x[-2:]

Saying x[-2:] means that I want the last two elements in the list: go backwards two from the end and take everything from there onward, which gives me the last two things on my list.

Adding a list to a list

You can also change lists around. Let's say I want to add a list to the list. I can use the extend function for that, as shown in the following code block:

x.extend([7, 8])

I have my original list; if I want to extend it, I pass in a new list — say [7, 8] — and the brackets indicate that this is a new list in itself. It could be a list literal written inline like this, or it could be referred to by another variable. You can see that once I do that, the list I get back actually has that new list appended on to the end of it. So I have a new, longer list by extending the original list with another list.
The append function

If you want to add just one more item to a list, you can use the append function. Say I just want to stick one more number on the end — there we go:

x.append(9)

Complex data structures

You can also have complex data structures with lists. You don't have to just put numbers in them; you can put strings in them, you can put numbers in them, you can put other lists in them. It doesn't matter — Python is a weakly typed language, so you can pretty much put whatever kind of data you want, wherever you want, and it will generally be an okay thing to do. In the following example I have a second list, which I'm calling y, and I'll create a new list that contains two lists. How's that for mind blowing?

y = [10, 11, 12]
listOfLists = [x, y]
listOfLists

Our listOfLists will contain the x list and the y list, and that's a perfectly valid thing to do. You can see in the output that we have one set of brackets indicating the listOfLists list, and within that we have another set of brackets for each individual list inside it. Sometimes things like these will come in handy.

Dereferencing a single element

If you want to dereference a single element of a list, you can just use the bracket syntax, like this:

y[1]
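Pulling those list operations together in one runnable sketch (the values are just examples):

# A quick tour of the list operations described above
x = [1, 2, 3, 4, 5, 6]
x.extend([7, 8])      # append the contents of another list
x.append(9)           # append a single element
y = [10, 11, 12]
listOfLists = [x, y]  # lists can contain other lists
print(x)              # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(listOfLists)    # [[1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12]]
print(y[1])           # 11 -- indexing starts at 0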
The y[1] lookup returns 11 — remember that y had [10, 11, 12] in it. Since we start counting from 0, element 1 is actually the second element in the list, or the number 11 in this case.

The sort function

Finally, there's a built-in sort function that you can use:

z = [3, 2, 1]
z.sort()

So if I start with a list z that is out of order, I can call sort on that list, and z will now be sorted in ascending order.

Reverse sort

z.sort(reverse=True)

If you need to do a reverse sort, you can just pass reverse=True as a parameter to that sort function, and that will put it back into descending order. If you need to let that sink in a little bit, feel free to go back and read it again.

Tuples

Tuples are just like lists, except they're immutable, so you can't extend, append, or sort them. They are what they are, and they behave just like lists apart from the fact that you can't change them. You indicate that something is a tuple, as opposed to a list, by using parentheses instead of square brackets. Otherwise, they work pretty much the same way:

# Tuples are just immutable lists. Use () instead of []
x = (1, 2, 3)
len(x)
The output of the previous code is 3. We can still use len on a tuple to see that there are three elements in it. And, if you're not familiar with the term, a tuple can actually contain as many elements as you want — even though the word sounds like it's Latin-based on the number three, it doesn't mean you have three things in it. Usually a tuple has two things in it, but it can have as many as you want, really.

Dereferencing an element

We can also dereference the elements of a tuple. Element number 2, again, would be the third element, because we start counting from 0:

y = (4, 5, 6)
y[2]

List of tuples

We can also, just like we could with lists, use tuples as elements of a list:

listOfTuples = [x, y]
listOfTuples
We can create a new list that contains two tuples. So in the preceding example, we have our first tuple and our second tuple; we make a list of those two tuples, and we get back this structure, where square brackets indicate a list that contains two tuples indicated by parentheses. One thing that tuples are commonly used for, when we're doing data science or any sort of managing or processing of data, is to assign variables to input data as it's read in. I want to walk you through a little bit of what's going on in the following example:

(age, income) = line.split(',')
print(age)
print(income)

Let's say we have a line of input data coming in, and it's from a comma-separated value file that contains an age, comma-delimited by an income for that age — just to make something up. What I can do is, as each line comes in, call the split function on it to separate it into a pair of values delimited by commas, and take the resulting tuple that comes out of split and assign it to two variables — age and income — all at once, by defining a tuple of (age, income) and setting it equal to the tuple that comes out of the split function. This is a common shorthand you'll see for assigning multiple fields to multiple variables at once. If I run that, you can see that the age variable ends up assigned to the first field and income to the second, because of that little trick. You do need to be careful when doing this sort of thing, because if you don't have the expected number of fields — the expected number of elements in the resulting tuple — you will get an exception if you try to assign more or fewer values than the line actually contains.
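Here's a small, self-contained sketch of that unpacking pattern; the sample line and its values are made up for illustration:

# Unpack a comma-separated line into two variables at once
line = "32,120000"            # hypothetical input line: age,income
(age, income) = line.split(',')
print(age)     # '32'
print(income)  # '120000'

# If the line had a different number of comma-separated fields,
# the assignment would raise a ValueError, so be careful with messy input.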
Dictionaries

Finally, the last data structure that we'll see a lot in Python is the dictionary, and you can think of it as a map or a hash table in other languages. It's a way to have a sort of mini-database — a key/value data store — built into Python. So, let's say I want to build up a little dictionary of Star Trek ships and their captains. I can set up captains = {}, where the curly brackets indicate an empty dictionary. Now I can use this sort of syntax to assign entries in my dictionary: for "Enterprise" the captain is "Kirk", for "Enterprise D" it is "Picard", for "Deep Space Nine" it is "Sisko", and for "Voyager" it is "Janeway":

captains = {}
captains["Enterprise"] = "Kirk"
captains["Enterprise D"] = "Picard"
captains["Deep Space Nine"] = "Sisko"
captains["Voyager"] = "Janeway"

print(captains["Voyager"])

Now I have, basically, a lookup table that associates ship names with their captains, and I can say, for example, print(captains["Voyager"]) and get back Janeway. It's a very useful tool for doing lookups of some sort. Let's say you have some sort of identifier in a dataset that maps to a human-readable name — you'll probably be using a dictionary to do that lookup when you're printing it out.
We can also see what happens if you try to look up something that doesn't exist. Well, we can use the get function on a dictionary to safely return an entry. In this case, "Enterprise" does have an entry in my dictionary — it just gives me back Kirk — but if I ask for the NX-01 ship on the dictionary, I never defined the captain of that, so it comes back with a None value in this example, which is better than throwing an exception. But you do need to be aware that this is a possibility:

print(captains.get("NX-01"))

The output of the above code is None. (The captain is Jonathan Archer, but, you know, I'm getting a little bit too geeky here now.)

Iterating through entries

for ship in captains:
    print(ship + ": " + captains[ship])

Let's look at a little example of iterating through the entries in a dictionary. If I want to iterate through every ship that I have in my dictionary and print out its captain, I can type for ship in captains, and this will iterate through every single key in my dictionary. Then I can print out the lookup value of each ship's captain, and that's the output that I get. There you have it — these are the main data structures that you'll encounter in Python. There are some others, such as sets, but we won't really use them in this book, so I think that's enough to get you started.
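A compact, runnable recap of those dictionary operations, using the same Star Trek data described above:

captains = {}
captains["Enterprise"] = "Kirk"
captains["Enterprise D"] = "Picard"
captains["Deep Space Nine"] = "Sisko"
captains["Voyager"] = "Janeway"

print(captains["Voyager"])       # direct lookup: Janeway
print(captains.get("NX-01"))     # safe lookup of a missing key: None

for ship in captains:            # iterates over the keys
    print(ship + ": " + captains[ship])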
Let's dive into some more Python nuances in our next section.

Python basics, part 2

In addition to Python basics, part 1, let us now try to grasp more Python concepts in detail.

Functions in Python

Let's talk about functions in Python. As with other languages, you can have functions that let you repeat a set of operations over and over again with different parameters. In Python, the syntax for doing that looks like this:

def SquareIt(x):
    return x * x

print(SquareIt(2))

You declare a function using the def keyword. It just says this is a function, and we'll call this function SquareIt, and the parameter list then follows inside parentheses. This particular function takes only one parameter, which we'll call x. Again, remember that whitespace is important in Python. There aren't any curly brackets or anything enclosing this function; it's strictly defined by whitespace. We have a colon that says the function declaration line is over, and then it's the fact that the body is tabbed in by one or more tabs that tells the interpreter we are, in fact, within the SquareIt function — def SquareIt(x):, tab, return x * x — and that will return the square of x. We can go ahead and give it a try: print(SquareIt(2)) is how we call that function. It looks just like it would in any other language, really. This should return the number 4; we run the code, and in fact it does. Awesome! That's pretty simple — that's all there is to functions. Obviously, I could have more than one parameter if I wanted to, even as many parameters as I need. Now, there are some weird things you can do with functions in Python that are kind of cool. One thing you can do is pass functions around as though they were parameters. Let's take a closer look at this example:

# You can pass functions around as parameters
def DoSomething(f, x):
    return f(x)

print(DoSomething(SquareIt, 3))
The output of the preceding code is 9. Now I have a function called DoSomething — def DoSomething(f, x) — and it takes two parameters: one that I'll call f and the other I'll call x. If I want, I can actually pass in a function for one of these parameters. So, think about that for a minute. Looking at this example a little more closely: DoSomething(f, x) will return f of x; it will basically call the f function with x as its parameter. There's no strong typing in Python, so we have to just make sure that what we are passing in for that first parameter is, in fact, a function for this to work properly. For example, we'll say print(DoSomething(SquareIt, 3)): for the first parameter, we pass in SquareIt, which is actually another function, and then the number 3. What this should do is say "do something with the SquareIt function and the parameter 3", and that will return SquareIt(3); 3 squared, last time I checked, was 9. And sure enough, that does in fact work. This might be a new concept to you — passing functions around as parameters — so if you need to stop for a minute there and let that sink in, play around with it, please feel free to do so. Again, I encourage you to stop and take this at your own pace.

Lambda functions - functional programming

One more thing that's a kind of Python-ish thing to do, which you might not see in other languages, is the concept of lambda functions, and it's part of what's called functional programming. The idea is that you can include a simple function inline, right inside a function call. This makes the most sense with an example:

# Lambda functions let you inline simple functions
print(DoSomething(lambda x: x * x * x, 3))

We print DoSomething, and remember that our first parameter is a function, so instead of passing in a named function, I can declare this function inline using the lambda keyword. Lambda basically means that I'm defining an unnamed function that just exists for now; it's transitory, and it takes a parameter x. In the syntax here, lambda means I'm defining an inline function of some sort, followed by its parameter list. It has a single parameter, x, and a colon, followed by what that function actually does: it takes the parameter x and multiplies it by itself three times to get the cube of the parameter.
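Putting the three ideas together — a named function, a function passed as a parameter, and an inline lambda — in one small runnable sketch (the argument values are just examples):

def SquareIt(x):
    return x * x

def DoSomething(f, x):
    # f is expected to be a function; Python won't check that for you
    return f(x)

print(SquareIt(2))                          # 4
print(DoSomething(SquareIt, 3))             # 9  -- a function passed as a parameter
print(DoSomething(lambda x: x * x * x, 3))  # 27 -- an inline, unnamed function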
In this example, DoSomething(lambda x: x * x * x, 3) passes the lambda function in as the first parameter, which computes the cube of x, along with the parameter 3. So what's this really doing under the hood? The lambda is itself a function that gets passed into the f in DoSomething from the previous example, and x here is going to be 3. This will return f of x, which ends up executing our lambda function on the value 3: that 3 goes into our x parameter, and our lambda function transforms it into 3 times 3 times 3, which is, of course, 27. Now, this comes up a lot when we start doing MapReduce and Spark and things like that, so if we'll be dealing with Hadoop sorts of technologies later on, this is a very important concept to understand. Again, I encourage you to take a moment to let that sink in and understand what's going on, if you need to.

Understanding Boolean expressions

Boolean expression syntax is a little bit weird or unusual, at least in Python:

print(1 == 3)

The output of the above code is False. As usual, we have the double equals symbol that tests for equality between two values. Does 1 equal 3? No, it doesn't; therefore, False. The value False is a special value designated with a capital F. Remember that when you're doing Boolean stuff, the relevant keywords are True with a capital T and False with a capital F. That's a little different from some other languages that I've worked with, so keep that in mind.

print(True or False)

The output of the above code is True. Well, True or False is True, because one of them is True; you run it and it comes back True.

# The if statement
print(1 is 3)
The output of the previous code is False. The other thing we can do is use is, which is sort of the same thing as ==. It's a more Pythonic representation of equality, so 1 == 3 is much like 1 is 3, but this is considered the more Pythonic way of doing it. So 1 is 3 also comes back as False, because 1 is not 3.

# The if/else block
if 1 is 3:
    print("How did that happen?")
elif 1 > 3:
    print("Yikes")
else:
    print("All is well with the world")

The output of the above code is: All is well with the world. We can also do if-elif-else blocks here too. Let's do something a little more complicated. If 1 is 3, I would print "How did that happen?", but of course 1 is not 3, so we fall back down to the elif block. Otherwise, if 1 is not 3, we test whether 1 > 3; well, that's not true either, but if it were, we'd print "Yikes", and we finally fall into the catch-all else clause that prints "All is well with the world". In fact, 1 is not 3, nor is 1 greater than 3, and sure enough, "All is well with the world" is the output. So, other languages have very similar syntax, but these are the peculiarities of Python and how you do an if-else or elif block. Again, feel free to keep this notebook around; it might be a good reference later on.

Looping

The last concept I want to cover in our Python basics is looping, and we saw a couple of examples of this already, but let's just do another one:

for x in range(10):
    print x,
The output of the previous code is the numbers 0 through 9 printed side by side. We can use this range operator to automatically define a list of numbers in a range. So if we say for x in range(10), range(10) produces a list of 0 through 9, and by saying for x in that list, we iterate through every individual entry in the list and print it out. Again, the comma after the print statement says don't give me a new line, just keep on going, so the output ends up being all the elements of that list printed next to each other. To do something a little more complicated, we'll do something similar, but this time we'll show how continue and break work. As in other languages, you can choose to skip the rest of the processing for a single loop iteration, or stop the iteration of the loop prematurely:

for x in range(10):
    if (x is 1):
        continue
    if (x > 5):
        break
    print x,

In this example, we go through the values 0 through 9, and if we hit the number 1, we continue before we print it out — we skip the number 1, basically — and if the number is greater than 5, we break out of the loop and stop the processing entirely. The output we expect is that we print out the numbers up to 5, except for 1, which gets skipped. And sure enough, that's what it does.

The while loop

Another syntax is the while loop. This is the kind of standard looping syntax that you see in most languages:

x = 0
while (x < 10):
    print x,
    x += 1
The output of the previous code is the numbers 0 through 9 again. We can also say: start with x = 0, and while x is less than 10, print it out and then increment x by 1. This goes round and round, incrementing x, until x is no longer less than 10, at which point we drop out of the while loop and we're done. So it does the same thing as the first example, just in a different style: it prints out the numbers 0 through 9 using a while loop. Just some examples there — nothing too complicated. Again, if you've done any sort of programming or scripting before, this should be pretty simple. Now, to really let this sink in, I've been saying throughout this entire chapter: get in there, get your hands dirty, and play with it. So I'm going to make you do that.

Exploring activity

Here's an activity — a little bit of a challenge for you. There's a nice little code block in the notebook where you can start writing your own Python code, run it, and play around with it, so please do so. Your challenge is to write some code that creates a list of integers, loops through each element of that list (pretty easy so far), and only prints out the even numbers.
Now, this shouldn't be too hard. There are examples in this notebook of doing all of that; all you have to do is put it together and get it to run. The point is not to give you something that's hard — I just want you to get some confidence in writing your own Python code, running it, and seeing it operate, so please do so. I definitely encourage you to be interactive here. So have at it, good luck, and welcome to Python!

So, that's your Python crash course — obviously just some very basic stuff there. As we go through more and more examples throughout the book, it'll make more and more sense, since you'll have more examples to look at. But if you do feel a little bit intimidated at this point, maybe you're a little too new to programming or scripting, and it might be a good idea to go and take a Python refresher before moving forward. If you feel pretty good about what you've seen so far, let's move ahead and keep on going.

Running Python scripts

Throughout this book, we'll be using the IPython/Jupyter Notebook format (which are .ipynb files) that we've been looking at so far, and it's a great format for a book like this because it lets me put little blocks of code in there with little bits of text around them explaining what they're doing, and you can experiment with things live. Of course, it's great from that standpoint, but in the real world, you're probably not going to be using IPython/Jupyter Notebooks to run your Python scripts in production, so let me really briefly go through the other ways you can run Python code, and other interactive ways of running Python code as well. It's a pretty flexible system. Let's take a look.
More options than just the IPython/Jupyter Notebook

I want to make sure that you know there's more than one way to run Python code. Now, throughout this book, we'll be using the IPython/Jupyter Notebook format, but in the real world, you're not going to be running your code as a notebook; you're going to be running it as a standalone Python script. So I just want to make sure you know how to do that and see how it works. Let's go back to the first example that we ran in the book, just to illustrate the importance of whitespace. We can just select and copy that code out of the notebook format and paste it into a new file. This can be done by clicking on the New button at the extreme left. So let's make a new file, paste the code in, save this file, and call it test.py, where .py is the usual extension that we give to Python scripts. Now, I can run this in a few different ways.
Running Python scripts in a command prompt

I can actually run the script in a command prompt. If I go to Tools, I can go to Canopy Command Prompt, and that will open up a command window that has all the necessary environment variables already in place for running Python. I can just type python test.py and run the script, and out comes my result. So, in the real world, you'd probably do something like that. It might be on a crontab or something like that, who knows, but running a real script in production is just that simple. You can now close the command prompt.
Using the Canopy IDE

Moving back, I can also run the script from within the IDE. From within Canopy, I can go to the Run menu — I can either go to Run | Run File, or click on the little play icon — and that will also execute my script and show the results at the bottom in the output window, as shown in the following screenshot. So that's another way to do it. And finally, you can also run Python interactively in the prompt present at the bottom: I can type in Python commands one at a time down there, and have them execute and stay within that environment.
For example, I could create a variable called stuff, make it a list with a few values in it, and then say len(stuff), and that will give me back the number of elements in it. I can say for x in stuff: print x, and I get each element as output. So you can see that you can kind of make up scripts as you go, down in the interactive prompt at the bottom, and execute things one line at a time. In this example, stuff is a variable we created — a list that stays in memory; it's kind of like a global variable in other languages, within this environment.
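A hypothetical interactive session might look something like this, typed one line at a time at the prompt (the values are just examples):

stuff = [1, 2, 3, 4]   # create a list that lives in the session's memory
len(stuff)             # the prompt echoes back 4
for x in stuff:
    print(x)           # prints each element in turn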
Now, if I do want to reset this environment — if I want to get rid of stuff and start all over — the way you do that is to go up to the Run menu and select Restart Kernel, and that will start you over with a blank slate. So now I have a new Python environment that's a clean slate, and in this case, what did I call that variable? If I type stuff, it doesn't exist yet, because I have a new environment — but I can make it something else, run it, and there it is.
So, there you have it: three ways of running Python code — the IPython/Jupyter Notebook, which we'll use throughout this book just because it's a good learning tool; standalone script files; and the interactive command prompt. Those cover both experimenting and running things in production, so keep them in mind. We'll be using notebooks throughout the rest of this book, but again, you have those other options when the time comes.

Summary

In this chapter, we started our journey by building the most important stepping stone of the book: installing Enthought Canopy. We then moved on to installing other libraries and different types of packages. We also grasped some of the basics of Python with the help of various Python code examples, covering basic concepts such as modules, lists, and tuples, and eventually moved on to more Python basics with a better knowledge of functions and looping. Finally, we finished by running some simple Python scripts. In the next chapter, we will move on to understanding concepts of statistics and probability.
Statistics and Probability Refresher, and Python Practice

In this chapter, we are going to go through a few concepts of statistics and probability, which might be a refresher for some of you. These concepts are important to go through if you want to be a data scientist. We will see examples to understand these concepts better, and we will also look at how to implement those examples using actual Python code. We'll be covering the following topics in this chapter:

Types of data you may encounter and how to treat them accordingly
Statistical concepts of mean, median, mode, standard deviation, and variance
Probability density functions and probability mass functions
Types of data distributions and how to plot them
Understanding percentiles and moments
Types of data

Alright, if you want to be a data scientist, we need to talk about the types of data that you might encounter, how to categorize them, and how you might treat them differently. Let's dive into the different flavors of data you might encounter. This will seem pretty basic, but we've got to start with the simple stuff and work our way up to the more complicated data mining and machine learning things. It is important to know what kind of data you're dealing with, because different techniques might have different nuances depending on what kind of data you're handling. So, there are several flavors of data, if you will, and there are three specific types that we will primarily focus on:

Numerical data
Categorical data
Ordinal data

Again, there are different variations of techniques that you might use for different types of data, so you always need to keep in mind what kind of data you're dealing with when you're analyzing it.
Numerical data

Let's start with numerical data. It's probably the most common data type. Basically, it represents some quantifiable thing that you can measure. Some examples are heights of people, page load times, stock prices, and so on — things that vary, things that you can measure, things that have a wide range of possibilities. Now, there are basically two kinds of numerical data — a flavor of a flavor, if you will:

Discrete data

There's discrete data, which is integer-based and can, for example, be counts of some sort of event. An example: how many purchases did a customer make in a year? Well, that can only take discrete values — they bought one thing, or two things, or three things; they couldn't have bought a fractional number of things, like three and three-quarters things. It's a discrete value with an integer restriction to it.

Continuous data

The other type of numerical data is continuous data, and this is stuff that has an infinite range of possibilities, where you can go into fractions. For example, going back to the height of people, there is an infinite number of possible heights — any number of inches plus some fraction of an inch. Or the time it takes to check out on a website could be any of a huge range of possibilities, down to arbitrarily precise fractions of a second, or how much rainfall fell in a given day — again, there's an infinite amount of precision available there. So that's an example of continuous data. To recap, numerical data is something you can measure quantitatively with a number, and it can be either discrete, where it's integer-based like an event count, or continuous, where there's an infinite range of precision available to that data.
Categorical data

The second type of data that we're going to talk about is categorical data, and this is data that has no inherent numeric meaning. Most of the time, you can't really compare one category to another directly — things like gender, yes/no questions, race, state of residence, product category, political party. You can assign numbers to these categories, and often you will, but those numbers have no inherent meaning. So, for example, I can say that the area of Texas is greater than the area of Florida, but I can't just say "Texas is greater than Florida"; they're just categories. There's no real numerical, quantifiable meaning to them — it's just a way that we categorize different things. Now, again, I might have some sort of numerical assignment for each state. I mean, I could give Florida one number and Texas another, but there's no real relationship between those two numbers, right? It's just shorthand to represent these categories more compactly. So again, categorical data does not have any intrinsic numerical meaning; it's just a way that you're choosing to split up a set of data based on categories.
Ordinal data

The last category that you tend to hear about with types of data is ordinal data, and it's sort of a mixture of numerical and categorical data. A common example is star ratings for a movie or music, or what have you. In this case, we have categorical data in that the rating could be 1 through 5 stars, where 1 might represent poor and 5 might represent excellent, but the categories do have mathematical meaning. We know that a 5 means it's better than a 1, so this is a case where the different categories have a numerical relationship to each other. I can say that 1 star is less than 5 stars, that 2 stars is less than 3 stars, and that 4 stars is greater than 2 stars in terms of a measure of quality. Now, you could also think of the actual number of stars as discrete numerical data, so it's definitely a fine line between these categories, and in a lot of cases you can actually treat them interchangeably. So, there you have the three different types: numerical, categorical, and ordinal data. Let's see if it's sunk in. Don't worry, I'm not going to make you hand in your work or anything.

Quick quiz: for each of these examples, is the data numerical, categorical, or ordinal?

Let's start with how much gas is in your gas tank. What do you think? Well, the right answer is numerical — it's a continuous numerical value, because there's an infinite range of possibilities for the amount of gas in your tank. I mean, yeah, there's probably some upper bound on how much gas you can fit in it, but there's no end to the number of possible values for how much gas you have: it could be three-quarters of a tank, it could be seven-sixteenths of a tank, it could be 1/pi of a tank — who knows, right?
How about if you're rating your overall health on a scale of 1 to 4, where those choices correspond to the categories poor, moderate, good, and excellent? What do you think? That's a good example of ordinal data. It's very much like our movie ratings data, and again, depending on how you model it, you could probably treat it as discrete numerical data as well, but technically we're going to call it ordinal data.

What about the races of your classmates? This is a pretty clear example of categorical data. You can't really compare purple people to green people, right? They're just purple and green, but they are categories that you might want to study and understand the differences between on some other dimension.

How about the ages of your classmates in years? A little bit of a trick question there: if I said it had to be an integer value of years — a whole number of years old — then that would be discrete numerical data, but if I allowed more precision, like years plus months plus fractions of days, that would be continuous numerical data. Either way, it's a numerical data type.

And finally, money spent in a store. Again, that could be an example of continuous numerical data. So again, this is only important because you might apply different techniques to different types of data. There might be some concepts where we do one type of implementation for categorical data and a different type of implementation for numerical data, for example. That's all you need to know about the different types of data that you'll commonly find, and that we'll focus on in this book. They're all pretty simple concepts: you've got numerical, categorical, and ordinal data, and numerical data can be continuous or discrete. There might be different techniques you apply to the data depending on what kind of data you're dealing with, and we'll see that throughout the book. Let's move on.

Mean, median, and mode

Let's do a little refresher of statistics. This is elementary school stuff, but it's good to go through it again and see how these different measures are used: mean, median, and mode. I'm sure you've heard those terms before, but it's good to see how they're used differently, so let's dive in. This should be a review for most of you — a quick refresher, now that we're starting to actually dive into some real statistics. Let's look at some actual data and figure out how to measure these things.
Mean

The mean, as you probably know, is just another name for the average. To calculate the mean of a dataset, all you have to do is sum up all the values and divide by the number of values that you have:

Mean = sum of samples / number of samples

Let's take an example that calculates the mean (average) number of children per house in my neighborhood. Let's say I went door-to-door in my neighborhood and asked everyone how many children live in their household. (That, by the way, is a good example of discrete numerical data — remember that from the previous section?) Let's say I find that the first house has no kids in it, the second house has two children, the third household has three children, and so on and so forth. I amass this little dataset of discrete numerical data, and to figure out the mean, all I do is add all the counts together and divide by the number of houses that I went to. That gives me the mean number of children per house in my sample. So there you have it: the mean.

Median

The median is a little bit different. The way you compute the median of a dataset is by sorting all the values (in either ascending or descending order) and taking the one that ends up in the middle. So, for example, using the same dataset of children in my neighborhood, I would sort the values numerically, and I can take the number that's slap dab in the middle of the data.
Again, all I do is take the data, sort it numerically, and take the center point. If you have an even number of data points, then the median might actually fall between two data points; it wouldn't be clear which one is the middle. In that case, all you do is take the average of the two values that fall in the middle and consider that number the median.

The factor of outliers

Now, in the preceding example of the number of kids in each household, the median and the mean were pretty close to each other because there weren't a lot of outliers. Households had zero, one, two, or three kids, but we didn't have some wacky family with, say, 100 kids. That would have really skewed the mean, but it might not have changed the median much. That's why the median is often a very useful thing to look at, and often overlooked.

The median is less susceptible to outliers than the mean.

People have a tendency to mislead with statistics sometimes, and I'm going to keep pointing this out throughout the book wherever I can. For example, you can talk about the mean or average household income in the United States. When I looked up that number last year, it didn't really provide an accurate picture of what the typical American makes, because the median income is substantially lower than the mean income. Why is that? Well, because of income inequality. There are a few very rich people in America, and the same is true in a lot of countries; America's not even the worst. Those billionaires — the super-rich people who live on Wall Street or in Silicon Valley or some other super-rich place — skew the mean, but there are so few of them that they don't really affect the median. This is a great example of where the median tells a much better story about the typical person or data point than the mean does. Whenever someone talks about the mean, you have to think about what the data distribution looks like. Are there outliers that might be skewing that mean? If the answer is potentially yes, you should also ask for the median, because often it provides more insight than the mean or the average.
Mode

Finally, we'll talk about mode. This doesn't really come up too often in practice, but you can't talk about mean and median without talking about mode. All mode means is the most common value in a dataset. Let's go back to my example of the number of kids in each house. How many of each value are there? If I just look at what number occurs most frequently, it turns out to be zero: the most common number of children in a given house in this neighborhood is no kids at all, so the mode of this data is 0, and that's all it means.

Now, this is actually a pretty good example of continuous versus discrete data, because mode only really works with discrete data. If I have a continuous range of data, then I can't really talk about the most common value that occurs, unless I quantize it somehow into discrete values. So we've already run into one example where the data type matters: mode is usually only relevant to discrete numerical data, and not to continuous data. A lot of real-world data tends to be continuous, so maybe that's why you don't hear too much about mode, but we see it here for completeness.

There you have it: mean, median, and mode in a nutshell — kind of the most basic statistics you can possibly do — but I hope you gained a little refresher on the importance of choosing between median and mean. They can tell very different stories, and yet people tend to equate them in their heads, so make sure you're being a responsible data scientist and representing data in a way that conveys the meaning you're trying to represent. If you're trying to display a typical value, often the median is a better choice than the mean because of outliers, so remember that.
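If you want to see those three measures side by side in code, here's a small sketch with a made-up kids-per-household list — the values are illustrative, not a transcription of my neighborhood survey:

# A worked example with made-up kids-per-household counts
kids_per_house = [0, 2, 3, 2, 1, 0, 0, 2, 0]   # illustrative values only

mean = float(sum(kids_per_house)) / len(kids_per_house)

sorted_values = sorted(kids_per_house)
median = sorted_values[len(sorted_values) // 2]   # odd number of samples here

mode = max(set(kids_per_house), key=kids_per_house.count)

print(mean)    # about 1.11
print(median)  # 1
print(mode)    # 0 -- most of these houses had no kids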
Let's move on.

Using mean, median, and mode in Python

Let's start doing some real coding in Python and see how you compute the mean, median, and mode using Python in an IPython Notebook file. Go ahead and open up the MeanMedianMode.ipynb file from the data files for this section if you'd like to follow along, which I definitely encourage you to do. If you need to go back to the earlier section on where to download these materials from, please do that, because you will need these files for this section. Let's dive in!

Calculating mean using the NumPy package

What we're going to do is create some fake income data, getting back to our example from the previous section. We're going to create some fake data about what a typical American makes in a year. In this example, we're going to say the data follows a normal distribution around a chosen mean income with a chosen standard deviation. All the numbers are completely made up, and if you don't know what a normal distribution and standard deviation mean yet, don't worry — I'm going to cover that a little later in the chapter. I just want you to know what these different parameters represent in this example; it will make sense later on. In our Python notebook, remember to import the NumPy package, which makes computing the mean, median, and mode really easy. We're going to use the import numpy as np directive, which means we can use np as shorthand to call numpy from now on. Then we're going to create a list of numbers called incomes using the np.random.normal function:

import numpy as np

incomes = np.random.normal(27000, 15000, 10000)
np.mean(incomes)

The three parameters of the np.random.normal call mean: I want the data centered around the first value (the mean income), with a standard deviation given by the second value, and I want Python to make as many data points as the third value says, all collected in this list.
Once I do that, I compute the average of those data points — the mean — by just calling np.mean on incomes, which is my list of data. It's just that simple. Let's go ahead and run that. Make sure you selected that code block, and then you can hit the Play button to actually execute it. Since there is a random component to these income numbers, every time I run it I'm going to get a slightly different result, but it should always be pretty close to the mean of the distribution we asked for. OK, so that's all there is to computing the mean in Python: using NumPy (np.mean) makes it super easy. You don't have to write a bunch of code, add up everything, count how many items you had, and do the division — np.mean() does all that for you.

Visualizing data using matplotlib

Let's visualize this data to make it make a little more sense. There's another package called matplotlib — and again, we're going to talk about that a lot more in the future as well — but it's a package that lets me make pretty graphs in IPython Notebooks, so it's an easy way to visualize your data and see what's going on. In this example, we use matplotlib to create a histogram of our income data broken up into 50 different buckets. So basically, we're taking our continuous data and discretizing it, and then we can call show on matplotlib.pyplot to actually display this histogram inline. Refer to the following code:

%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()
Go ahead and select the code block and hit Play. It will actually create a new graph for us. If you're not familiar with histograms, or you need a refresher, the way to interpret this is that for each one of the buckets we've discretized our data into, the height of the bar shows the frequency of that data — how many data points fell into that range of values. There are a lot of people clustered around the middle of the distribution, near the mean income, but when you get out toward the extremes — the very high incomes — there aren't a whole lot, and apparently there are some poor souls who are even in debt at the far left. But again, those values are rare and not probable, because we defined a normal distribution, and this is what a normal probability curve looks like. We're going to talk about that in more detail later, but I just want to get that idea in your head if you don't already have it.

Calculating median using the NumPy package

Alright, computing the median is just as simple as computing the mean. Just like we had np.mean, we have an np.median function as well. We can just use the median function on incomes, which is our list of data, and that will give us the median. In this case, the median comes out very close to the mean, which is what we expect for a nice symmetric distribution without big outliers. Again, the initial data was random, so your values will be slightly different.

np.median(incomes)
The output of the preceding code is a median very close to the mean. We don't expect to see a lot of outliers because this is a nice normal distribution; median and mean will be comparable when you don't have a lot of weird outliers.

Analyzing the effect of outliers

Just to prove a point, let's add in an outlier. We'll take Donald Trump; I think he qualifies as an outlier. Let's go ahead and add his income in. I'm going to manually add this to the data using np.append, and let's say I add a billion dollars (which is obviously not the actual income of Donald Trump) into the incomes data:

incomes = np.append(incomes, [1000000000])

What we're going to see is that this outlier doesn't really change the median a whole lot — it's still going to be around the same value as before, because we didn't actually change where the middle point is with that one value, as shown in the following example:

np.median(incomes)

The median barely budges. Now look at the mean:

np.mean(incomes)

Aha! So there you have it: a great example of how median and mean — although people tend to equate them in commonplace language — can be very different and tell very different stories. That one outlier dragged the average income in this dataset way up, but the more accurate picture of the typical person in this dataset is still given by the median, which stayed close to its original value. We just had the mean skewed by one big outlier. The moral of the story is: take anyone who talks about means or averages with a grain of salt if you suspect there might be outliers involved, and income distribution is definitely a case of that.
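Here's the whole experiment as one self-contained script you can run outside the notebook; the distribution parameters are illustrative, matching the spirit of the example rather than any real income figures:

import numpy as np

# Simulate incomes: normally distributed around an illustrative mean
incomes = np.random.normal(27000, 15000, 10000)
print("mean before outlier:", np.mean(incomes))
print("median before outlier:", np.median(incomes))

# Add one extreme outlier (a billion dollars)
incomes = np.append(incomes, [1000000000])
print("mean after outlier:", np.mean(incomes))      # jumps dramatically
print("median after outlier:", np.median(incomes))  # barely moves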
Calculating mode using the SciPy package

Finally, let's look at mode. We will just generate a bunch of random integers — 500 of them, to be precise — that range between 18 and 90. We're going to create a bunch of fake ages for people:

ages = np.random.randint(18, high=90, size=500)
ages

Your output will be random, but should look something like the following screenshot.
Now, SciPy, kind of like NumPy, is a bunch of handy-dandy statistics functions, so we can import stats from SciPy using the following syntax. It's a little bit different than what we saw before:

from scipy import stats
stats.mode(ages)

The code means: from the scipy package, import stats, and I'm just going to refer to that module as stats. That means I don't need an alias like I did before with NumPy — just a different way of doing it. Both ways work. Then I use the stats.mode function on ages, which is our list of random ages. When we execute the above code, the output reports the mode — the most common value in that array — along with the number of times it occurred. Now, if I create a new distribution, I would expect a completely different answer, because this data really is completely random. Let's execute the above code blocks again to create a new distribution:

ages = np.random.randint(18, high=90, size=500)
ages

from scipy import stats
stats.mode(ages)
Make sure you selected that code block, and then you can hit the Play button to actually execute it. With a fresh random distribution, the mode comes out as a different value, occurring a different number of times — the output reports both the most common age and its count. So, it's a very simple concept. You can do it a few more times just for fun; it's kind of like rolling the roulette wheel — we create a new distribution every time. There you have it: mean, median, and mode in a nutshell. It's very simple to do using the SciPy and NumPy packages.
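If you want a single, self-contained snippet to experiment with, something like this should work (the age range and sample size here are just example values):

import numpy as np
from scipy import stats

ages = np.random.randint(18, high=90, size=500)  # 500 fake ages from 18 to 89
result = stats.mode(ages)
print(result)   # reports the most common age and how many times it appeared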
Some exercises

I'm going to give you a little assignment in this section. If you open up the MeanMedianExercise.ipynb file, there's some stuff you can play with. I want you to roll up your sleeves and actually try to do this. In the file, we have some random e-commerce data. What this data represents is the total amount spent per transaction, and again, just like with our previous example, it's a normal distribution of data. We can run that, and your homework is to go ahead and find the mean and median of this data using the NumPy package. It's pretty much the easiest assignment you could possibly imagine. All the techniques you need are in the MeanMedianMode.ipynb file that we used earlier. The point here is not really to challenge you; it's just to make you actually write some Python code and convince yourself that you can get a result and make something happen. So, go ahead and play with that. If you want to play with it some more, feel free to play around with the data distribution and see what effect you can have on the numbers. Try adding some outliers, kind of like we did with the income data. This is the way to learn this stuff: master the basics and the advanced stuff will follow. Have at it, have fun, and once you're ready, let's move forward to our next concept: standard deviation and variance.

Standard deviation and variance

Let's talk about standard deviation and variance — concepts and terms you've probably heard before, but let's go into a little more depth about what they really mean and how you compute them. They are measures of the spread of a data distribution, and that will make a little more sense in a few minutes. Standard deviation and variance are two fundamental quantities for a data distribution that you'll see over and over again in this book, so let's see what they are, in case you need a refresher.
Variance

Let's look at a histogram, because variance and standard deviation are all about the spread of the data — the shape of the distribution of a dataset. Take a look at the following histogram. Let's say that we have some data on the arrival frequency of airplanes at an airport, and this histogram indicates the typical number of arrivals per minute, which occurred on many of the days that we looked at. However, we also have outliers: one really slow day that had only one arrival per minute, and one really fast day with far more arrivals per minute than usual. The way to read a histogram is to look up the bucket for a given value, and that tells you how frequently that value occurred in your data, and the shape of the histogram can tell you a lot about the probability distribution of a given set of data. We know from this data that our airport is very likely to see something close to the typical arrival rate, but it's very unlikely to see the extreme values at either end, and we can also talk about the probabilities of all the numbers in between: values far out in the tails are very unlikely, and they only become more probable as you move back toward the center of the distribution. A lot of information can be had from a histogram.

Variance measures how spread-out the data is.
Measuring variance

We usually refer to variance as sigma squared, and you'll find out why momentarily, but for now, just know that variance is the average of the squared differences from the mean. To compute the variance of a dataset, you first figure out the mean of it. Let's say I have some data that could represent anything — say, the maximum number of people standing in line during a given hour. In the first hour I observe some number of people standing in line, then a different number the next hour, and so on, for a handful of hours.

The first step in computing the variance is just to find the mean, or average, of that data: add up all the observations and divide the sum by the number of data points, and that gives the average number of people standing in line.

The next step is to find the difference from the mean for each data point. I know what the mean is, so for each data point I subtract the mean from it, and so on and so forth. I end up with a set of both positive and negative numbers that represent how far each data point sits from the mean.

Now, what I need is a single number that represents the variance of this entire dataset, so the next thing I'm going to do is square those differences: I go through each one of those raw differences from the mean and square it. This is for a couple of different reasons: first, I want to make sure that negative differences count just as much as positive differences; otherwise, they'd cancel each other out, and that would be bad. Second, I also want to give more weight to the outliers, so squaring amplifies the effect of things that are very different from the mean, while still making sure that the negatives and positives are comparable.
let' look at what happens thereso (- ) is positive and (- ) ends up being much smaller numberthat is because that' much closer to the mean of also ( ) turned out to be close to the meanonly but as we get up to the positive outlier( ends up being that gives us( to find the actual variance valuewe just take the average of all those squared differences so we add up all these squared variancesdivide the sum by that is number of values that we haveand we end up with variance of okthat' all variance is standard deviation now typicallywe talk about standard deviation more than varianceand it turns out standard deviation is just the square root of the variance it' just that simple soif had this variance of the standard deviation is so you see now why we said that the variance ( ) it' because itself represents the standard deviation soif take the square root of ( ) get sigma that ends up in this example to be
identifying outliers with standard deviation
Here's a histogram of the actual data we were looking at in the preceding example for calculating variance. Now we see that the number 4 occurred twice in our dataset, and then we had one 1, one 5, and one 8.
The standard deviation is usually used as a way to think about how to identify outliers in your dataset. If I'm within one standard deviation of the mean of 4.4, that's considered to be a kind of typical value in a normal distribution. However, you can see in the preceding diagram that the numbers 1 and 8 actually lie outside of that range. So if I take 4.4 plus or minus 2.24, we end up around 6.6 and 2.2, and 1 and 8 both fall outside of that one-standard-deviation range. So we can say, mathematically, that 1 and 8 are outliers. We don't have to guess and eyeball it. Now, there is still a judgment call as to what you consider an outlier in terms of how many standard deviations a data point is from the mean. You can generally talk about how much of an outlier a data point is by how many standard deviations (or sometimes how many "sigmas") it is from the mean. So that's something you'll see standard deviation used for in the real world.
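As a quick illustration of that idea, here is a minimal sketch of my own (not from the book's notebooks) that flags the values in this small line-length example lying more than one standard deviation from the mean; the variable names are my own.

import numpy as np

data = np.array([1, 4, 5, 4, 8])
mean = data.mean()
sigma = data.std()          # population standard deviation, about 2.24 here

# Keep only the points more than one sigma away from the mean
outliers = data[np.abs(data - mean) > sigma]
print(outliers)             # expect [1 8]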
population variance versus sample variance
There is a little nuance to standard deviation and variance, and that's when you're talking about population versus sample variance. If you're working with a complete set of data, a complete set of observations, then you do exactly what I told you: you just take the average of all the squared deviations from the mean, and that's your variance.
However, if you're sampling your data, that is, if you're taking a subset of the data just to make computing easier, you have to do something a little bit different. Instead of dividing by the number of samples, you divide by the number of samples minus 1. Let's look at an example.
We'll use the sample data we were just studying for people standing in line. We took the sum of the squared deviations and divided by 5, that is, the number of data points that we had, to get 5.04:
sigma squared = (11.56 + 0.16 + 0.36 + 0.16 + 12.96) / 5 = 5.04
If we were to look at the sample variance, which is designated by S squared, it is found by the sum of the squared deviations divided by 4, that is, N - 1:
S squared = (11.56 + 0.16 + 0.36 + 0.16 + 12.96) / 4 = 6.3
This gives us the sample variance, which comes out to 6.3. So, again, if this was some sort of sample that we took from a larger dataset, that's what you would do. If it was a complete dataset, you divide by the actual number. Okay, that's how we calculate population and sample variance, but what's the actual logic behind it? The mathematical explanation as to why there is a difference between population and sample variance gets into really weird things about probability that you probably don't want to think about too much, and it requires some fancy mathematical notation. I try to avoid notation in this book as much as possible because I think the concepts are more important, but this is basic enough stuff that you will see it over and over again.
As we've seen, population variance is usually designated as sigma squared, with sigma as the standard deviation, and we can say that it is the summation of each data point minus the mean, mu, squared, over N, the number of data points. We can express it with the following equation:
sigma^2 = sum((X - mu)^2) / N
X denotes each data point, mu denotes the mean, and N denotes the number of data points.
Sample variance, similarly, is designated as S^2, with the following equation:
S^2 = sum((X - M)^2) / (N - 1)
X denotes each data point, M denotes the sample mean, and N - 1 denotes the number of data points minus 1. That's all there is to it.
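To tie this back to code, here is a small sketch (my own illustration, not from the book's notebooks) that checks the two formulas on the line-length data using NumPy's ddof parameter.

import numpy as np

data = np.array([1, 4, 5, 4, 8])

population_variance = np.var(data)         # divides by N, gives 5.04
sample_variance = np.var(data, ddof=1)     # divides by N - 1, gives 6.3

print(population_variance, sample_variance)
print(np.sqrt(population_variance))        # population standard deviation, about 2.24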
analyzing standard deviation and variance on a histogram
Let's write some code here and play with some standard deviations and variances. So if you pull up the StdDevVariance.ipynb file in the IPython Notebook and follow along with me here, please do, because there's an activity at the end that I want you to try. What we're going to do here is just like the previous example, so begin with the following code:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

incomes = np.random.normal(100.0, 20.0, 10000)
plt.hist(incomes, 50)
plt.show()

We use matplotlib to plot a histogram of some normally distributed random data, and we call it incomes. We're saying it's going to be centered around 100 (hopefully that's an hourly rate or something and not annual, or some weird denomination), with a standard deviation of 20 and 10,000 data points. Let's go ahead and generate that by executing the above code block and plotting it, as shown in the following graph:
We have 10,000 data points centered around 100, with a normal distribution and a standard deviation of 20, a measure of the spread of this data. You can see that the most common occurrence is around 100, and as we get further and further from that, things become less and less likely. The standard deviation points of 20 that we specified sit at around 80 and around 120, and you can see in the histogram that these are the points where things start to fall off sharply, so we can say that things beyond that standard deviation boundary are unusual.
using python to compute standard deviation and variance
Now, NumPy also makes it incredibly easy to compute the standard deviation and variance. If you want to compute the actual standard deviation of this dataset that we generated, you just call the std function right on the dataset itself. So, when NumPy creates the list, it's not just a normal Python list; it actually has some extra stuff tacked onto it so you can call functions on it, like std for standard deviation. Let's do that now:

incomes.std()

This gives us something like the following output (remember that we used random data, so your figures won't be exactly the same as mine). When we execute that, we get a number pretty close to 20, because that's what we specified when we created our random data: we wanted a standard deviation of 20, and sure enough, it comes out pretty close. The variance is just a matter of calling var:

incomes.var()

It comes out pretty close to 400, which is 20 squared, so the world makes sense! Standard deviation is just the square root of the variance, or you could say that the variance is the standard deviation squared. Sure enough, that works out, so the world works the way it should.
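One wrinkle worth knowing (my addition, not in the original notebook): NumPy's std and var compute the population statistics by default. If your data is a sample, pass ddof=1 to get the sample versions discussed earlier.

import numpy as np

incomes = np.random.normal(100.0, 20.0, 10000)

print(incomes.var())          # population variance (divides by N)
print(incomes.var(ddof=1))    # sample variance (divides by N - 1)
print(incomes.std(ddof=1))    # sample standard deviation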
try it yourself want you to dive in here and actually play around with itmake it realso try out different parameters on generating that normal data rememberthis is measure of the shape of the distribution of the dataso what happens if change that center pointdoes it matterdoes it actually affect the shapewhy don' you try it out and find outtry messing with the actual standard deviationthat we've specifiedto see what impact that has on the shape of the graph maybe try standard deviation of and you knowyou can see how that actually affects things let' make it even more dramaticlike just play around with you'll see the graph starting to get little bit fatter play around with different valuesjust get feel of how these values work this is the only way to really get an intuitive sense of standard deviation and variance mess around with some different examples and see the effect that it has so that' standard deviation and variance in practice you got hands on with some of it thereand hope you played around little bit to get some familiarity with it these are very important concepts and we'll talk about standard deviations lot throughout the book and no doubt throughout your career in data scienceso make sure you've got that under your belt let' move on probability density function and probability mass function so we've already seen some examples of normal distribution function for some of the examples in this book that' an example of probability density functionand there are other types of probability density functions out there so let' dive in and see what it really means and what some other examples of them are the probability density function and probability mass functions we've already seen some examples of normal distribution function for some of the code we've looked at in this book that' an example of probability density functionand there are other types of probability density functions out there let' dive in and see what that really means and what some other examples of them there are
probability density functions
Let's talk about probability density functions; we've used one of these already in the book, we just didn't call it that. Let's formalize some of the stuff that we've talked about. For example, we've seen the normal distribution a few times, and that is an example of a probability density function. The following figure is an example of a normal distribution curve.
It's conceptually easy to try to think of this graph as the probability of a given value occurring, but that's a little bit misleading when you're talking about continuous data, because there's an infinite number of actual possible data points in a continuous data distribution: any value can always be specified to one more decimal place. So the actual probability of a very specific value happening is very, very small, infinitesimally small. The probability density function really tells you the probability of a given range of values occurring. So that's the way you've got to think about it.
So, for example, in the normal distribution shown in the above graph, between the mean (0) and one standard deviation from the mean (1 sigma) there's a 34.1% chance of a value falling in that range. You can tighten this up or spread it out as much as you want and figure out the actual values, but that's the way to think about a probability density function: for a given range of values it gives you a way of finding out the probability of that range occurring. You can see in the graph, as you get close to the mean (0), within one standard deviation (-1 sigma and 1 sigma), you're pretty likely to land there. I mean, if you add up 34.1% and 34.1%, which equals 68.2%, you get the probability of landing within one standard deviation of the mean.
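If you want to check that figure numerically, here is a small sketch of my own, using SciPy's cumulative distribution function rather than anything from the book's notebooks:

from scipy.stats import norm

# Probability of a standard normal value landing between -1 and +1 sigma
p = norm.cdf(1) - norm.cdf(-1)
print(p)   # roughly 0.683, the 68.2% quoted above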
However, as you get between two and three standard deviations (-3 sigma to -2 sigma, and 2 sigma to 3 sigma), we're down to just a little bit over 4%. As you get out beyond three standard deviations (-3 sigma and 3 sigma), then we're at much less than 1%.
So the graph is just a way to visualize and talk about the probabilities of a given data point happening. Again, a probability density function gives you the probability of a data point falling within some given range of a given value, and a normal function is just one example of a probability density function. We'll look at some more in a moment.
probability mass functions
Now, when you're dealing with discrete data, that little nuance about having an infinite number of possible values goes away, and we call it something different: a probability mass function. If you're dealing with discrete data, you can talk about probability mass functions. Here's a graph to help visualize this:
For example, you can plot a normal probability density function of continuous data as the black curve shown in the graph, but if we were to quantize that into a discrete dataset, like we would do with a histogram, we can say how often each given value occurs, and we can actually read off the chance of that exact value occurring. So a probability mass function is the way that we visualize the probability of discrete data occurring, and it looks a lot like a histogram because it basically is a histogram.
Terminology difference: a probability density function is a solid curve that describes the probability of a range of values happening with continuous data. A probability mass function gives the probabilities of given discrete values occurring in a dataset.
types of data distributions
Let's look at some real examples of probability distribution functions and data distributions in general, and wrap your head a little bit more around data distributions and how to visualize them and use them in Python. Go ahead and open up Distributions.ipynb from the book materials, and you can follow along with me here if you'd like.
uniform distribution
Let's start off with a really simple example: the uniform distribution. A uniform distribution just means there's a flat, constant probability of a value occurring within a given range.

import numpy as np
import matplotlib.pyplot as plt

values = np.random.uniform(-10.0, 10.0, 100000)
plt.hist(values, 50)
plt.show()
So we can create a uniform distribution by using the NumPy random.uniform function. The preceding code says: I want a uniformly distributed random set of values that ranges between -10 and 10, and I want 100,000 of them. If I then create a histogram of those values, you can see it looks like the following: there's pretty much an equal chance of any given value or range of values occurring within that data. So, unlike the normal distribution, where we saw a concentration of values near the mean, a uniform distribution has equal probability across any given value within the range that you define.
So what would the probability density function of this look like? Well, I'd expect to see basically nothing outside of the range of -10 or beyond 10, but when I'm between -10 and 10, I would see a flat line, because there's a constant probability of any one of those ranges of values occurring. So in a uniform distribution you would see a flat line on the probability density function, because there is basically a constant probability; every value, every range of values, has an equal chance of appearing as any other value.
normal or gaussian distribution
Now, we've seen normal, also known as Gaussian, distribution functions already in this book. You can actually visualize those in Python: there is a function called pdf (probability density function) in the scipy.stats.norm package.
So, let's look at the following example:

from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))

In the preceding example, we're creating a list of x values for plotting that range between -3 and 3, with an increment of 0.001 in between them, by using the arange function. So those are the x values on the graph, and we're going to plot the x-axis using those values. The y-axis is going to be the normal function, norm.pdf, that is, the probability density function for a normal distribution, on those x values. We end up with the following output:
The pdf function with a normal distribution looks just like it did in our previous section, that is, a normal distribution for the given x values that we provided, where 0 represents the mean, and the numbers -3, -2, -1, 1, 2, and 3 are standard deviations.
Now, we will generate random numbers with a normal distribution. We've done this a few times already; consider this a refresher. Refer to the following block of code:

import numpy as np
import matplotlib.pyplot as plt

mu = 5.0
sigma = 2.0
values = np.random.normal(mu, sigma, 10000)
plt.hist(values, 50)
plt.show()
In the above code, we use the random.normal function of the NumPy package. The first parameter, mu, represents the mean that you want to center the data around. sigma is the standard deviation of that data, which is basically the spread of it. Then, we specify the number of data points that we want using a normal probability distribution function, which is 10,000 here. So that's a way to use a probability distribution function, in this case the normal distribution function, to generate a set of random data. We can then plot that, using a histogram broken into 50 buckets, and show it. The following output is what we end up with:
It does look more or less like a normal distribution, but since there is a random element, it's not going to be a perfect curve. We're talking about probabilities; there are some odds of things not quite being what they should be.
the exponential probability distribution or power law
Another distribution function you see pretty often is the exponential probability distribution function, where things fall off in an exponential manner. When you talk about an exponential fall-off, you expect to see a curve where it's very likely for something to happen near zero, but then, as you get farther away from it, it drops off very quickly. A lot of things in nature behave in this manner.
To do that in Python, just like we had a function in scipy.stats for norm.pdf, we also have expon.pdf, an exponential probability distribution function. We can use the same syntax that we did for the normal distribution with an exponential distribution here, as shown in the following code block:

from scipy.stats import expon
import matplotlib.pyplot as plt

x = np.arange(0, 10, 0.001)
plt.plot(x, expon.pdf(x))

So again, in the above code, we just create our x values using the NumPy arange function to create a bunch of values between 0 and 10 with a step size of 0.001. Then, we plot those x values against the y-axis, which is defined as the function expon.pdf(x). The output looks like an exponential fall-off, as shown in the following screenshot:
binomial probability mass function
We can also visualize probability mass functions. This is called the binomial probability mass function. Again, we are going to use much the same syntax as before, as shown in the following code:

from scipy.stats import binom
import matplotlib.pyplot as plt

n, p = 10, 0.5
x = np.arange(0, 10, 0.001)
plt.plot(x, binom.pmf(x, n, p))

So instead of expon or norm, we just use binom. Reminder: the probability mass function deals with discrete data. We have been all along, really; it's just how you think about it.
Coming back to our code, we're creating some discrete values between 0 and 10 at a spacing of 0.001, and we're saying I want to plot a binomial probability mass function using that data. With the binom.pmf function, I can actually specify the shape of that data using two shape parameters, n and p. In this case, they're 10 and 0.5 respectively. The output is shown on the following graph:
If you want to go and play around with different values to see what effect it has, that's a good way to get an intuitive sense of how those shape parameters work on the probability mass function.
poisson probability mass function
Lastly, the other distribution function you might hear about is the Poisson probability mass function, and this has a very specific application. It looks a lot like a normal distribution, but it's a little bit different. The idea here is, if you have some information about the average number of things that happen in a given time period, this probability mass function can give you a way to predict the odds of getting some other value instead on a given future day.
As an example, let's say I have a website, and on average I get 500 visitors per day. I can use the Poisson probability mass function to estimate the probability of seeing some other value on a specific day. For example, with my average of 500 visitors per day, what are the odds of seeing 550 visitors on a given day? That's what a Poisson probability mass function can give you. Take a look at the following code:

from scipy.stats import poisson
import matplotlib.pyplot as plt

mu = 500
x = np.arange(400, 600, 0.5)
plt.plot(x, poisson.pmf(x, mu))

In this code example, I'm saying my average is 500, mu. I'm going to set up some x values to look at between 400 and 600 with a spacing of 0.5, and I'm going to plot that using the poisson.pmf function. I can use that graph to look up the odds of getting any specific value that's not 500, assuming a normal distribution.
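For instance, to pull out the single probability the text asks about without reading it off the graph, a quick sketch of my own would be:

from scipy.stats import poisson

# Probability of exactly 550 visitors when the daily average is 500
print(poisson.pmf(550, 500))   # roughly 0.0015, i.e. about 0.15%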
The odds of seeing 550 visitors on a given day, it turns out, come out to about 0.0015, or roughly 0.15% probability. Very interesting.
Alright, so those are some common data distributions you might run into in the real world. Remember, we use a probability density function with continuous data, but when we're dealing with discrete data, we use a probability mass function. So that's probability density functions and probability mass functions: basically, a way to visualize and measure the actual chance of a given range of values occurring in a dataset. Very important information and a very important thing to understand. We're going to keep using that concept over and over again. Alright, let's move on.
percentiles and moments
Next, we'll talk about percentiles and moments. You hear about percentiles in the news all the time. People that are in the top 1% of income: that's an example of a percentile. We'll explain that and have some examples. Then, we'll talk about moments, a very fancy mathematical concept, but it turns out it's very simple to understand conceptually. Let's dive in and talk about percentiles and moments, a couple of pretty basic concepts in statistics, but again, we're working our way up to the hard stuff, so bear with me as we go through some of this review.
percentiles
Let's see what percentiles mean. Basically, if you were to sort all of the data in a dataset, a given percentile is the point at which that percent of the data is less than the point you're at.
A common example you see talked about a lot is income distribution. When we talk about the 99th percentile, or the one-percenters, imagine that you were to take all the incomes of everybody in the country, in this case the United States, and sort them by income. The 99th percentile will be the income amount at which 99% of the rest of the country was making less than that amount. It's a very easy way to comprehend it.
In a dataset, a percentile is the point at which x% of the values are less than the value at that point.
the following graph is an example for income distributionthe preceding image shows an example of income distribution data for exampleat the th percentile we can say that of the data pointswhich represent people in americamake less than $ , yearand one percent make more than that converselyif you're one-percenteryou're making more than $ , year congratulationsbut if you're more typical median personthe th percentile defines the point at which half of the people are making less and half are making more than you arewhich is the definition of median the th percentile is the same thing as medianand that would be at $ , given this dataset soif you're making $ , year in the usyou are making exactly the median amount of income for the country you can see the problem of income distribution in the graph above things tend to be very concentrated toward the high end of the graphwhich is very big political problem right now in the country we'll see what happens with thatbut that' beyond the scope of this book that' percentiles in nutshell
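To make that concrete in code, here is a small sketch of my own (the income figures are made up, not taken from the figure above) showing how you would look up these points with NumPy:

import numpy as np

# Hypothetical yearly incomes, just for illustration
incomes = np.random.normal(50000, 20000, 10000)

median_income = np.percentile(incomes, 50)   # half the people make less than this
top_1_percent = np.percentile(incomes, 99)   # the one-percenter threshold
print(median_income, top_1_percent)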
quartiles percentiles are also used in the context of talking about the quartiles in distribution let' look at normal distribution to understand this better here' an example illustrating percentile in normal distribution
Looking at the normal distribution in the preceding image, we can talk about quartiles. Quartile 1 (Q1) and quartile 3 (Q3) in the middle are just the points that together contain 50% of the data, so 25% are on the left side of the median and 25% are on the right side of the median. The median in this example happens to be near the mean. For example, the interquartile range (IQR), when we talk about a distribution, is the area in the middle of the distribution that contains 50% of the values.
The topmost part of the image is an example of what we call a box-and-whisker diagram. Don't concern yourself yet about the stuff out on the edges of the box. That gets a little bit confusing, and we'll cover that later. Even though they are called quartile 1 (Q1) and quartile 3 (Q3), they don't really represent 25% of the data there, but don't get hung up on that yet; focus on the point that the quartiles in the middle represent 50% of the data distribution.
computing percentiles in python
Let's look at some more examples of percentiles using Python, get our hands on it, and conceptualize this a little bit more. Go ahead and open the Percentiles.ipynb file if you'd like to follow along, and again I encourage you to do so, because I want you to play around with this a little bit later.
Let's start off by generating some randomly distributed normal data, or normally distributed random data, rather. Refer to the following code block:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

vals = np.random.normal(0, 0.5, 10000)
plt.hist(vals, 50)
plt.show()
In this example, what we're going to do is generate some data centered around zero, that is, with a mean of zero, with a standard deviation of 0.5, and I'm going to make 10,000 data points with that distribution. Then, we're going to plot a histogram and see what we come up with.
The generated histogram looks very much like a normal distribution, but because there is a random component, we have a little outlier off to one side in this example here. Things are tipped a little bit at the mean, a little bit of random variation there to make things interesting.
NumPy provides a very handy percentile function that will compute the percentile values of this distribution for you. So, we created our vals list of data using np.random.normal, and I can just call the np.percentile function to figure out the 50th percentile value using the following code:

np.percentile(vals, 50)

The following is the output of the preceding code: the output turns out to be very close to zero. So remember, the 50th percentile is just another name for the median, and it turns out the median is very close to zero in this data. You can see in the graph that we're tipped a little bit to the right, so that's not too surprising.
I want to compute the 90th percentile, which gives me the point at which 90% of the data is less than that value. We can easily do that with the following code:

np.percentile(vals, 90)

Here is the output of that code: the 90th percentile of this data turns out to be roughly 0.64, so it's around there, and basically, at that point, 90% of the data is less than that value. I can believe that: 10% of the data is greater than it, 90% less than it.
Let's compute the 20th percentile value. That would give me the point at which 20% of the values are less than the number that I come up with. Again, we just need a very simple alteration to the code:

np.percentile(vals, 20)

This gives the following output: the 20th percentile point works out to be roughly -0.4, and again I believe that. It's saying that 20% of the data lies to the left of -0.4, and conversely, 80% is greater.
If you want to get a feel as to where those breaking points are in a dataset, the percentile function is an easy way to compute them. If this were a dataset representing income distribution, we could just call np.percentile(vals, 99) and figure out what the 99th percentile is. You could figure out who those one-percenters people keep talking about really are, and whether you're one of them.
Alright, now to get your hands dirty. I want you to play around with this data. This is an IPython Notebook for a reason, so you can mess with it and mess with the code: try different standard deviation values, see what effect it has on the shape of the data and where those percentiles end up lying, for example. Try using smaller dataset sizes and add a little bit more random variation in the thing. Just get comfortable with it, play around with it, and find out that you can actually do this stuff and write some real code that works.
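As a small aside of my own, np.percentile also accepts a list of percentiles, which is handy when you want several of these breaking points at once:

import numpy as np

vals = np.random.normal(0, 0.5, 10000)

# 20th, 50th and 90th percentiles in one call
print(np.percentile(vals, [20, 50, 90]))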
moments nextlet' talk about moments moments are fancy mathematical phrasebut you don' actually need math degree to understand itthough intuitivelyit' lot simpler than it sounds it' one of those examples where people in statistics and data science like to use big fancy terms to make themselves sound really smartbut the concepts are actually very easy to graspand that' the theme you're going to hear again and again in this book basicallymoments are ways to measure the shape of data distributionof probability density functionor of anythingreally mathematicallywe've got some really fancy notation to define themif you do know calculusit' actually not that complicated of concept we're taking the difference between each value from some value raised to the nth powerwhere is the moment number and integrating across the entire function from negative infinity to infinity but intuitivelyit' lot easier than calculus moments can be defined as quantitative measures of the shape of probability density function readyhere we go the first moment works out to just be the mean of the data that you're looking at that' it the first moment is the meanthe average it' that simple the second moment is the variance that' it the second moment of the dataset is the same thing as the variance value it might seem little bit creepy that these things kind of fall out of the math naturallybut think about it the variance is really based on the square of the differences from the meanso coming up with mathematical way of saying that variance is related to mean isn' really that much of stretchright it' just that simple
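If it helps to see that definition in code, here is a rough sketch of my own that computes a moment about a chosen point directly for a sample of data; it is an empirical version of the idea described above, not something from the book's notebook.

import numpy as np

def moment(x, n, about=0.0):
    # Average of (each value minus 'about'), raised to the n-th power
    x = np.asarray(x)
    return np.mean((x - about) ** n)

data = np.random.normal(0, 0.5, 10000)
mean = moment(data, 1)                  # first moment about zero: the mean
variance = moment(data, 2, about=mean)  # second moment about the mean: the variance
print(mean, variance)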
now when we get to the third and fourth momentsthings get little bit trickierbut they're still concepts that are easy to grasp the third moment is called skewand it is basically measure of how lopsided distribution is you can see in these two examples above thatif have longer tail on the leftnow then that is negative skewand if have longer tail on the right thenthat' positive skew the dotted lines show what the shape of normal distribution would look like without skew the dotted line out on the left side then end up with negative skewor on the other sidea positive skew in that example okso that' all skew is it' basically stretching out the tail on one side or the otherand it is measure of how lopsidedor how skewed distribution is
the fourth moment is called kurtosis wowthat' fancy wordall that really isis how thick is the tail and how sharp is the peak so againit' measure of the shape of the data distribution here' an exampleyou can see that the higher peak values have higher kurtosis value the topmost curve has higher kurtosis than the bottommost curve it' very subtle differencebut difference nonetheless it basically measures how peaked your data is let' review all thatthe first moment is meanthe second moment is variancethe third moment is skewand the fourth moment is kurtosis we already know what mean and variance are skew is how lopsided the data ishow stretched out one of the tails might be kurtosis is how peakedhow squished together the data distribution is
computing moments in python
Let's play around in Python and actually compute these moments and see how you do that. To play around with this, go ahead and open up Moments.ipynb, and you can follow along with me here.
Let's again create that same normal distribution of random data. Again, we're going to make it centered around zero, with a 0.5 standard deviation and 10,000 data points, and plot that out:

import numpy as np
import matplotlib.pyplot as plt

vals = np.random.normal(0, 0.5, 10000)
plt.hist(vals, 50)
plt.show()

So again, we get a randomly generated set of data with a normal distribution around zero.
Now, we find the mean and variance. We've done this before; NumPy just gives you mean and var functions to compute that. So, we just call np.mean to find the first moment, which is just a fancy word for the mean, as shown in the following code:

np.mean(vals)
This gives the following output in our example: the output turns out to be very close to zero, just like we would expect for normally distributed data centered around zero. So, the world makes sense so far.
Now we find the second moment, which is just another name for variance. We can do that with the following code, as we've seen before:

np.var(vals)

This provides the following output: it turns out to be about 0.25, and again, that works out as a nice sanity check. Remember that standard deviation is the square root of variance. If you take the square root of 0.25, it comes out to 0.5, which is the standard deviation we specified while creating this data, so again, that checks out too.
The third moment is skew, and to do that we're going to need to use the scipy package instead of numpy. But that again is built into any scientific computing package like Enthought Canopy or Anaconda. Once we have scipy, the function call is as simple as our earlier two:

import scipy.stats as sp
sp.skew(vals)

This displays the following output: we can just call sp.skew on the vals list, and that will give us the skew value. Since this is centered around zero, it should be almost zero skew. It turns out that with random variation it does skew a little bit left, and actually that does jive with the shape that we're seeing in the graph. It looks like we did kind of pull it a little bit negative.
The fourth moment is kurtosis, which describes the shape of the tail. Again, for a normal distribution that should be about zero. scipy provides us with another simple function call:

sp.kurtosis(vals)
and here' the output vu indeedit does turn out to be zero kurtosis reveals our data distribution in two linked waysthe shape of the tailor the how sharp the peak if just squish the tail down it kind of pushes up that peak to be pointierand likewiseif were to push down that distributionyou can imagine that' kind of spreading things out little bitmaking the tails little bit fatterand the peak of it little bit lower so that' what kurtosis meansand in this examplekurtosis is near zero because it is just plain old normal distribution if you want to play around with itgo ahead andagaintry to modify the distribution make it centered around something besides and see if that actually changes anything should itwellit really shouldn' because these are all measures of the shape of the distributionand it doesn' really say whole lot about where that distribution is exactly it' measure of the shape that' what the moments are all about go ahead and play around with thattry different center valuestry different standard deviation valuesand see what effect it has on these valuesand it doesn' change at all of courseyou' expect things like the mean to change because you're changing the mean valuebut varianceskewmaybe not play aroundfind out there you have percentiles and moments percentiles are pretty simple concept moments sound hardbut it' actually pretty easy to understand how to do itand it' easy in python too now you have that under your belt it' time to move on summary in this we saw the types of data (numericcategoricaland ordinal datathat you might encounter and how to categorize them and how you treat them differently depending on what kind of data you're dealing with we also walked through the statistical concepts of meanmedian and modeand we also saw the importance of choosing between median and meanand that often the median is better choice than the mean because of outliers nextwe analyzed how to compute meanmedianand mode using python in an ipython notebook file we learned the concepts of standard deviation and variance in depth and how to compute them in python we saw that they're measure of the spread of data distribution we also saw way to visualize and measure the actual chance of given range of values occurring in dataset using probability density functions and probability mass functions
we looked at the types of data distributions (uniform distributionnormal or gaussian distributionexponential probability distributionbinomial probability mass functionpoisson probability mass functionin general and how to visualize them using python we analyzed the concepts of percentiles and moments and saw how to compute them using python in the next we'll look at using the nbuqmpumjc library more extensivelyand also dive into the more advanced topics of covariance and correlation
matplotlib and advanced probability concepts after going through some of the simpler concepts of statistics and probability in the previous we're now going to turn our attention to some more advanced topics that you'll need to be familiar with to get the most out of the remainder of this book don' worrythey're not too complicated first of alllet' have some fun and look at some of the amazing graphing capabilities of the nbuqmpumjc library we'll be covering the following topics in this using the nbuqmpumjc package to plot graphs understanding covariance and correlation to determine the relationship between data understanding conditional probability with examples understanding bayestheorem and its importance
a crash course in matplotlib
Your data is only as good as you can present it to other people, really, so let's talk about plotting and graphing your data and how to present it to others and make your graphs look pretty. We're going to introduce matplotlib more thoroughly and put it through its paces.
I'll show you a few tricks on how to make your graphs as pretty as you can. Let's have some fun with graphs. It's always good to make pretty pictures out of your work. This will give you some more tools in your tool chest for visualizing different types of data, using different types of graphs, and making them look pretty. We'll use different colors, different line styles, different axes, things like that. It's not only important to use graphs and data visualization to try to find interesting patterns in your data, but it's also important to present your findings well to a non-technical audience. Without further ado, let's dive in to matplotlib.
Go ahead and open up the MatPlotLib.ipynb file and you can play around with this stuff along with me. We'll start by just drawing a simple line graph:

%matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
plt.show()

So in this example, I import matplotlib.pyplot as plt, and with this, we can refer to it as plt from now on in this notebook. Then, I use np.arange to create an x-axis filled with values between -3 and 3 at increments of 0.001, and use pyplot's plot function to plot x. The y function will be norm.pdf(x). So I'm going to create a probability density function with a normal distribution based on the x values, and I'm using the scipy.stats.norm package to do that.
So, tying it back into our earlier look at probability density functions, here we are plotting a normal probability density function using matplotlib. We just call pyplot's plot method to set up our plot, and then we display it using plt.show(). When we run the previous code, we get the following output:
That's what we get: a pretty little graph with all the default formatting.
generating multiple plots on one graph
Let's say I want to plot more than one thing at a time. You can actually call plot multiple times before calling show to add more than one function to your graph. Let's look at the following code:

plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
In this example, I'm calling my original function of just a normal distribution, but I'm going to render another normal distribution here as well, with a mean around 1.0 and a standard deviation of 0.5. Then, I'm going to show those two together so you can see how they compare to each other.
You can see that, by default, matplotlib chooses different colors for each graph automatically for you, which is very nice and handy of it.
saving graphs as images
If I want to save this graph to a file, maybe because I want to include it in a document or something, I can do something like the following code:

plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\Frank\\MyPlot.png', format='png')

Instead of just calling plt.show(), I can call plt.savefig with a path to where I want to save this file and what format I want it in.
You'll want to change that to an actual path that exists on your machine if you're following along: you probably don't have a Users\Frank folder on your system. Remember too that if you're on Linux or macOS, instead of a backslash you're going to use forward slashes, and you're not going to have a drive letter. With all of these Python notebooks, whenever you see a path like this, make sure that you change it to an actual path that works on your system. I am on Windows here, and I do have a Users\Frank folder, so I can go ahead and run that. If I check my file system under Users\Frank, I have a MyPlot.png file that I can open up and look at, and I can use that in whatever document I want. That's pretty cool.
One other quick thing to note is that, depending on your setup, you may have permissions issues when you come to save the file. You'll just need to find the folder that works for you. On Windows, your Users\Name folder is usually a safe bet. Alright, let's move on.
adjusting the axes
Let's say that I don't like the default choices of the axes of this value in the previous graph. It's automatically fitting it to the tightest set of axis values that it can find, which is usually a good thing to do, but sometimes you want things on an absolute scale. Look at the following code:

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
In this example, I first get the axes using plt.axes(). Once I have these axes objects, I can adjust them. By calling set_xlim, I can set the x range from -5 to 5, and with set_ylim, I set the y range from 0 to 1.0. You can see in the below output that my x values range from -5 to 5 and y goes from 0 to 1.0. I can also have explicit control over where the tick marks on the axes are. So in the previous code, I'm saying I want the x ticks to be at -5, -4, -3, and so on, and y ticks from 0 to 1.0 at 0.1 increments, using the set_xticks and set_yticks functions. Now, I could use the arange function to do that more compactly, but the point is you have explicit control over where exactly those tick marks happen, and you can also skip some. You can have them at whatever increments you want or whatever distribution you want. Beyond that, it's the same thing.
Once I've adjusted my axes, I just called plot with the functions that I want to plot and called show to display it. Sure enough, there you have the result.
adding a grid
What if I want grid lines in my graphs? Well, same idea. All I do is call grid() on the axes that I get back from plt.axes():

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()

By executing the above code, I get nice little grid lines. That makes it a little bit easier to see where a specific point is, although it clutters things up a little bit. It's a little bit of a stylistic choice there.
changing line types and colors
What if I want to play games with the line types and colors? You can do that too:

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.show()
So you see in the preceding code, there's actually an extra parameter on the plot functions at the end where I can pass a little string that describes the style of the line. In this first example, what 'b-' indicates is that I want a blue, solid line. The b stands for blue, and the dash means a solid line. For my second plot function, I'm going to plot it in red, that's what the r means, and the colon means I'm going to plot it with a dotted line. If I run that, you can see in the above graph what it does, and you can change between different types of line styles.
In addition, you can do a double dash (--):

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r--')
plt.show()
The preceding code gives you dashed red lines as a line style, as shown in the following graph image.
I can also do a dash-dot combination (-.):

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r-.')
plt.show()
You get an output that looks like the following graph image.
So, those are the different choices there. I could even make it green with vertical slashes:

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'g|')
plt.show()
I'll get the following output. Have some fun with that if you want: experiment with different values, and you can get different line styles.
labeling axes and adding a legend
Something you'll do more often is labeling your axes. You never want to present data in a vacuum; you definitely want to tell people what it represents. To do that, you can use the xlabel and ylabel functions on plt to actually put labels on your axes. I'll label the x axis Greebles and the y axis Probability. You can also add a legend inset. Normally, this would be the same data, but just to show that it's set independently, I'm also setting up a legend in the following code:

axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
Into the legend, you pass in basically a list of what you want to name each graph. So, my first graph is going to be called Sneetches, and my second graph is going to be called Gacks, and the loc parameter indicates what location you want it at, where 4 represents the lower right-hand corner. Let's go ahead and run the code, and you should see the following:
You can see that we're plotting Greebles versus Probability for both Sneetches and Gacks. A little Dr. Seuss reference for you there. So that's how you set axes labels and legends.
a fun example
A little fun example here. If you're familiar with the webcomic xkcd, there's a little bit of an easter egg in matplotlib where you can actually plot things in xkcd style. The following code shows how you can do that:

plt.xkcd()

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])

data = np.ones(100)
data[70:] -= np.arange(30)

plt.annotate(
    'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
    xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))

plt.plot(data)

plt.xlabel('time')
plt.ylabel('my overall health')

In this example, you call plt.xkcd(), which puts matplotlib in xkcd mode. After you do that, things will just have a style with a kind of comic-book font and squiggly lines automatically. This little simple example will show a funny little graph where we are plotting your health versus time, where your health takes a steep decline once you realize you can cook bacon whenever you want to. All we're doing there is using the xkcd() method to go into that mode. You can see the results below.
There's a little bit of interesting Python here in how we're actually putting this graph together. We're starting out by making a data line that is nothing but the value 1 across 100 data points. Then we use the old Python list slicing operator to take everything after the value of 70, and we subtract off from that sub-list of 30 items the range of 0 through 30. So that has the effect of subtracting off a larger value linearly as you get past 70, which results in that line heading downward beyond the point 70. So, it's a little example of some Python list slicing in action there, and a little creative use of the arange function to modify your data.
generating pie charts
Now, to go back to the real world, we can remove xkcd mode by calling rcdefaults() on matplotlib, and we can get back to normal mode here.
If you want a pie chart, all you have to do is call plt.pie and give it an array of your values, colors, labels, and whether or not you want items exploded, and if so, by how much. Here's the code:

# Remove XKCD mode:
plt.rcdefaults()

values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0, 0, 0.2, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors=colors, labels=labels, explode=explode)
plt.title('Student Locations')
plt.show()

You can see in this code that I'm creating a pie chart with the values 12, 55, 4, 32, and 14. I'm assigning explicit colors to each one of those values, and explicit labels to each one of those values. I'm exploding out the Russian segment of the pie by 20%, giving this plot a title of Student Locations, and showing it. The following is the output you should see:
That's all there is to it.
generating bar charts
If I want to generate a bar chart, that is also very simple. It's a kind of similar idea to the pie chart. Let's look at the following code:

values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0, 5), values, color=colors)
plt.show()

I've defined an array of values and an array of colors, and I just plot the data. The above code plots bars over the range of 0 to 5, using the values from the values array and the explicit list of colors listed in the colors array. Go ahead and show that, and there you have your bar chart.
generating scatter plots
A scatter plot is something we'll see pretty often in this book. So, say you have a couple of different attributes you want to plot for the same set of people or things. For example, maybe we're plotting ages against incomes for each person, where each dot represents a person and the axes represent different attributes of those people.
The way you do that with a scatter plot is you call plt.scatter using the two axes that you want to define, that is, the two attributes that contain data that you want to plot against each other. Let's say I have a random distribution in X and Y, and I scatter those on the scatter plot and show it:

from pylab import randn

X = randn(500)
Y = randn(500)
plt.scatter(X, Y)
plt.show()

You get the following scatter plot as output:
This is what it looks like, pretty cool! You can see the sort of concentration in the center here, because of the normal distribution that's being used on both axes, but since it is random, there's no real correlation between those two.
generating histograms
Finally, we'll remind ourselves how a histogram works. We've already seen this plenty of times in the book. Let's look at the following code:

incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()

In this example, I call a normal distribution centered on 27,000, with a standard deviation of 15,000, with 10,000 data points. Then, I just call pyplot's histogram function, that is, hist, and specify the input data and the number of buckets that we want to group things into in our histogram. I then call show and the rest is magic.
generating box-and-whisker plots
Finally, let's look at box-and-whisker plots. Remember, when we talked about percentiles, I touched on this a little bit.
Again, with a box-and-whisker plot, the box represents the two inner quartiles where 50% of your data resides. Conversely, another 25% resides on either side of that box; the whiskers (dotted lines in our example) represent the range of the data except for outliers.
We define outliers in a box-and-whisker plot as anything beyond 1.5 times the interquartile range, or the size of the box. So, we take the size of the box times 1.5, and up to that point on the dotted whiskers we call those parts the outer quartiles. But anything outside of the outer quartiles is considered an outlier, and that's what the lines beyond the outer quartiles represent. That's where we are defining outliers, based on our definition with the box-and-whisker plot.
Some points to remember about box-and-whisker plots:
They are useful for visualizing the spread and skew of data.
The line in the middle of the box represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
Half of the data exists within the box.
The "whiskers" indicate the range of the data, except for outliers, which are plotted outside the whiskers. Outliers are 1.5 times or more the interquartile range.
Now, just to give you an example here, we have created a fake dataset. The following example creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:

uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()

In the code, we have a uniform random distribution of data (uniformSkewed). Then we added a few outliers on the high end (high_outliers) and a few negative outliers (low_outliers) as well. Then we concatenated these lists together and created a single dataset from these three different sets using NumPy. We then took that combined dataset of uniform data and a few outliers and plotted it using plt.boxplot(), and that's how you get a box-and-whisker plot. Call show() to visualize it, and there you go.
you can see that the graph is showing the box that represents the inner of all dataand then we have these outlier lines where we can see little crosses (they may be circles in your versionfor each individual outlier that lies in that range try it yourself alrightthat' your crash course in matplotlib time to get your hands on itand actually do some exercises here as your challengei want you to create scatter plot that represents random data that you fabricate on age versus time spent watching tvand you can make anything you wantreally if you have different fictional data set in your head that you like to play withhave some fun with it create scatter plot that plots two random sets of data against each other and label your axes make it look prettyplay around with ithave fun with it everything you need for reference and for examples should be in this ipython notebook it' kind of cheat sheetif you willfor different things you might need to do for generating different kinds of graphs and different styles of graphs hope it proves useful now it' time to get back to the statistics
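If you want a starting point for that activity, here is one possible sketch of my own (the data and labels are made up; it is not the book's solution):

import numpy as np
import matplotlib.pyplot as plt

# Fabricated data: ages versus hours of TV watched per day
ages = np.random.uniform(10, 80, 200)
tv_hours = np.random.normal(4, 1.5, 200)

plt.scatter(ages, tv_hours)
plt.xlabel('Age')
plt.ylabel('Hours of TV per day')
plt.title('Fake age vs. TV time data')
plt.show()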
covariance and correlation nextwe're going to talk about covariance and correlation let' say have two different attributes of something and want to see if they're actually related to each other or not this section will give you the mathematical tools you need to do soand we'll dive into some examples and actually figure out covariance and correlation using python these are ways of measuring whether two different attributes are related to each other in set of datawhich can be very useful thing to find out defining the concepts imagine we have scatter plotand each one of the data points represents person that we measuredand we're plotting their age on one axis versus their income on another each one of these dots would represent personfor example their value represents their age and the value represents their income ' totally making this upthis is fake data now if had scatter plot that looks like the left one in the preceding imageyou see that these values tend to lie all over the placeand this would tell you that there' no real correlation between age and income based on this data for any given agethere can be huge range of incomes and they tend to be clustered around the middlebut we're not really seeing very clear relationship between these two different attributes of age and income now in contrastin the scatter plot on the right you can see there' very clear linear relationship between age and income
socovariance and correlation give us means of measuring just how tight these things are correlated would expect very low correlation or covariance for the data in the left scatter plotbut very high covariance and correlation for the data in the right scatter plot so that' the concept of covariance and correlation it measures how much these two attributes that ' measuring seem to depend on each other measuring covariance measuring covariance mathematically is little bit hardbut 'll try to explain it these are the stepsthink of the data sets for the two variables as high-dimensional vectors convert these to vectors of variances from the mean take the dot product (cosine of the angle between themof the two vectors divide by the sample size it' really more important that you understand how to use it and what it means to actually derive itthink of the attributes of the data as high dimensional vectors what we're going to do on each attribute for each data point is compute the variance from the mean at each point so now have these high dimensional vectors where each data pointeach personif you willcorresponds to different dimension have one vector in this high dimensional space that represents all the variances from the mean forlet' sayage for one attribute then have another vector that represents all the variances from the mean for some other attributelike income what do then is take these vectors that measure the variances from the mean for each attributeand take the dot product between the two mathematicallythat' way of measuring the angle between these high dimensional vectors so if they end up being very close to each otherthat tells me that these variances are pretty much moving in lockstep with each other across these different attributes if take that final dot product and divide it by the sample sizethat' how end up with the covariance amount now you're never going to have to actually compute this yourself the hard way we'll see how to do this the easy way in pythonbut conceptuallythat' how it works now the problem with covariance is that it can be hard to interpret if have covariance that' close to zerowelli know that' telling me there' not much correlation between these variables at allbut large covariance implies there is relationship but how large is largedepending on the units ' usingthere might be very different ways of interpreting that data that' problem that correlation solves
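To see the units problem concretely, here is a small sketch of my own: rescaling one attribute (say, measuring income in cents instead of dollars) changes the covariance wildly, while the correlation we define next stays put.

import numpy as np

ages = np.random.normal(40, 10, 1000)
incomes = ages * 1000 + np.random.normal(0, 5000, 1000)   # loosely related to age

print(np.cov(ages, incomes)[0, 1])          # covariance in dollar units
print(np.cov(ages, incomes * 100)[0, 1])    # same data in cents: 100x larger covariance
print(np.corrcoef(ages, incomes)[0, 1])     # correlation is unaffected by the units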
correlation
Correlation normalizes everything by the standard deviation of each attribute (just divide the covariance by the standard deviations of both variables), and that normalizes things. By doing so, I can say very clearly that a correlation of -1 means there's a perfect inverse correlation, so as one value increases, the other decreases, and vice versa. A correlation of 0 means there's no correlation at all between these two sets of attributes. A correlation of 1 would imply perfect correlation, where these two attributes are moving in exactly the same way as you look at different data points.
Remember, correlation does not imply causation. Just because you find a very high correlation value does not mean that one of these attributes causes the other. It just means there's a relationship between the two, and that relationship could be caused by something completely different. The only way to really determine causation is through a controlled experiment, which we'll talk about more later.
computing covariance and correlation in python
Alright, let's get our hands dirty with covariance and correlation here with some actual Python code. So again, you can think conceptually of covariance as taking these multidimensional vectors of deviations from the mean for each attribute and computing the angle between them as a measure of the covariance. The math for doing that is a lot simpler than it sounds. We're talking about high dimensional vectors. It sounds like Stephen Hawking stuff, but really, from a mathematical standpoint, it's pretty straightforward.
computing correlation the hard way
I'm going to start by doing this the hard way. NumPy does have a method to just compute the covariance for you, and we'll talk about that later, but for now I want to show that you can actually do this from first principles:

%matplotlib inline

import numpy as np
from pylab import *

def de_mean(x):
    xmean = mean(x)
    return [xi - xmean for xi in x]

def covariance(x, y):
    n = len(x)
    return dot(de_mean(x), de_mean(y)) / (n - 1)

Covariance, again, is defined as the dot product, which is a measure of the angle between two vectors, of a vector of the deviations from the mean for one set of data and the deviations from the mean for another set of data over the same data points. We then divide that by n - 1 in this case, because we're actually dealing with a sample.
So de_mean(), our deviation-from-the-mean function, takes in a set of data, x, actually a list, and computes the mean of that set of data. The return line contains a little bit of Python trickery for you. The syntax is saying: I'm going to create a new list, go through every element in x, call it xi, and then return the difference between xi and the mean, xmean, for that entire dataset. This function returns a new list of data that represents the deviations from the mean for each data point.
My covariance() function will do that for both sets of data coming in, then take the dot product divided by the number of data points minus 1. Remember that discussion about sample versus population earlier? Well, that's coming into play here. Then we can just use those functions and see what happens.
To expand this example, I'm going to fabricate some data that is going to try to find a relationship between page speeds, that is, how quickly a page renders on a website, and how much people spend. For example, at Amazon we were very concerned about the relationship between how quickly pages render and how much money people spend after that experience. We wanted to know if there is an actual relationship between how fast the website is and how much money people actually spend on the website. This is one way you might go about figuring that out. Let's just generate some normally distributed random data for both page speeds and purchase amounts, and since it's random, there's not going to be a real correlation between them:

pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = np.random.normal(50.0, 10.0, 1000)

scatter(pageSpeeds, purchaseAmount)

covariance(pageSpeeds, purchaseAmount)
So, just as a sanity check here, we'll start off by scatter plotting this stuff. You'll see that it tends to cluster around the middle because of the normal distribution on each attribute, but there's no real relationship between the two. For any given page speed there is a wide variety of amounts spent, and for any given amount spent there's a wide variety of page speeds, so there is no real correlation there except for what comes out of the randomness or the nature of the normal distribution. Sure enough, if we compute the covariance of these two sets of attributes, we end up with a very small value, close to zero. That implies there's no real relationship between these two things.
Now let's make life a little bit more interesting. Let's actually make the purchase amount a real function of page speed:

purchaseAmount = np.random.normal(50.0, 10.0, 1000) / pageSpeeds

scatter(pageSpeeds, purchaseAmount)

covariance(pageSpeeds, purchaseAmount)
Here, we are keeping things a little bit random, but we are creating a real relationship between these two sets of values. For a given user, there's a real relationship between the page speeds they encounter and the amount that they spend. If we plot that out, we can see the following output:
You can see that there's actually this little curve where things tend to be tightly aligned. Things get a little bit wonky near the bottom, just because of how random things work out.
If we compute the covariance, we end up with a much larger value, and it's the magnitude of that number that matters. The sign, positive or negative, just implies a positive or negative correlation, but that value is much further from zero. So there's something going on there, but again, it's hard to interpret what that value actually means. That's where the correlation comes in, where we normalize everything by the standard deviations, as shown in the following code:

def correlation(x, y):
    stddevx = x.std()
    stddevy = y.std()
    return covariance(x, y) / stddevx / stddevy  # In real life you'd check for divide by zero here

correlation(pageSpeeds, purchaseAmount)