Delete the .template part of the filename to make it an actual log4j.properties file; Spark will use this file to configure its logging. Now open this file in a text editor of some sort. On Windows, you might need to right-click it and select Open with, and then WordPad. In the file, locate the line log4j.rootCategory=INFO. Let's change this to log4j.rootCategory=ERROR, and this will just remove the clutter of all the log spam that gets printed out when we run stuff. Save the file and exit your editor.

So far, we installed Python, Java, and Spark. Now the next thing we need to do is to install something that will trick your PC into thinking that Hadoop exists, and again, this step is only necessary on Windows, so you can skip this step if you're on Mac or Linux.
I have a little file available that will do the trick. Let's go to http://media.sundog-soft.com/winutils.exe. Downloading winutils.exe will give you a copy of a little snippet of an executable, which can be used to trick Spark into thinking that you actually have Hadoop. Now, since we're going to be running our scripts locally on our desktop, it's not a big deal; we don't need to have Hadoop installed for real. This just gets around another quirk of running Spark on Windows. So, now that we have that, let's find it in the Downloads folder, press Ctrl + C to copy it, and let's go to our C drive and create a place for it to live. So, create a new folder again in the root of the C drive, and we will call it winutils.
Now let's open this winutils folder and create a bin folder inside it. Now, in this bin folder, I want you to paste the winutils.exe file we downloaded, so you should have C:\winutils\bin and then winutils.exe.

This next step is only required on some systems, but just to be safe, open Command Prompt on Windows. You can do that by going to your Start menu, going down to Windows System, and then clicking on Command Prompt. Here, I want you to type cd c:\winutils\bin, which is where we stuck our winutils.exe file. Now, if you type dir, you should see that file there. Now type winutils.exe chmod 777 \tmp\hive. This just makes sure that all the file permissions you need to actually run Spark successfully are in place, without any errors. You can close Command Prompt now that you're done with that step. Wow, we're almost done, believe it or not.
Now we need to set some environment variables for things to work. I'll show you how to do that on Windows. On Windows, you'll need to open up the Start menu and go to Windows System, then Control Panel, to open up Control Panel. In Control Panel, click on System and Security.

Then, click on System, then click on Advanced system settings from the list on the left-hand side.

From here, click on Environment Variables.

We will get these options:
Now, this is a very Windows-specific way of setting environment variables. On other operating systems, you'll use different processes, so you'll have to look at how to install Spark on them. Here, we're going to set up some new user variables. Click on the first New button for a new user variable and call it SPARK_HOME, as shown below, all uppercase. This is going to point to where we installed Spark, which for us is c:\spark, so type that in as the variable value and click on OK.

We also need to set up JAVA_HOME, so click on New again and type in JAVA_HOME as the variable name. We need to point that to where we installed Java, which for us is c:\jdk.

We also need to set up HADOOP_HOME, and that's where we installed the winutils package, so we'll point that to c:\winutils.
So far, so good. The last thing we need to do is to modify our PATH. You should have a PATH environment variable here; click on the PATH environment variable, then on Edit, and add a new path. This is going to be %SPARK_HOME%\bin, and I'm going to add another one, %JAVA_HOME%\bin.
Basically, this makes all the binary executables of Spark available to Windows, wherever you're running it from. Click on OK on this menu and on the previous two menus. We finally have everything set up.

Spark introduction

Let's get started with a high-level overview of Apache Spark and see what it's all about, what it's good for, and how it works.

What is Spark? Well, if you go to the Spark website, they give you a very high-level, hand-wavy answer: "A fast and general engine for large-scale data processing." It slices, it dices, it does your laundry. Well, not really. But it is a framework for writing jobs or scripts that can process very large amounts of data, and it manages distributing that processing across a cluster of computers for you. Basically, Spark works by letting you load your data into these large objects called Resilient Distributed Datasets, RDDs. It can automatically perform operations that transform those RDDs, and run actions on them, and you can think of an RDD as a large data frame.

The beauty of it is that Spark will automatically and optimally spread that processing out amongst an entire cluster of computers, if you have one available. You are no longer restricted to what you can do on a single machine, or a single machine's memory. You can actually spread that out to all the processing capabilities and memory that's available to a cluster of machines, and, in this day and age, computing is pretty cheap. You can actually rent time on a cluster through things like Amazon's Elastic MapReduce service, just rent some time on a whole cluster of computers for a few dollars, and run a job that you couldn't run on your own desktop.
It's scalable

How is Spark scalable? Well, let's get a little bit more specific here in how it all works. The way it works is, you write a driver program, which is just a little script that looks just like any other Python script, really, and it uses the Spark library to actually write your script with. Within that library, you define what's called a Spark Context, which is sort of the root object that you work within when you're developing in Spark.

From there, the Spark framework kind of takes over and distributes things for you. So, if you're running in standalone mode on your own computer, like we're going to be doing in these upcoming sections, it all just stays there on your computer, obviously. However, if you are running on a cluster manager, Spark can figure that out and automatically take advantage of it. Spark actually has its own built-in cluster manager; you can actually use it on its own without even having Hadoop installed, but if you do have a Hadoop cluster available to you, it can use that as well.

Hadoop is more than MapReduce; there's actually a component of Hadoop called YARN that separates out the entire cluster management piece of Hadoop. Spark can interface with YARN to actually use that to optimally distribute the components of your processing amongst the resources available to that Hadoop cluster.
Within a cluster, you might have individual executor tasks that are running. These might be running on different computers, or they might be running on different cores of the same computer. They each have their own individual cache and their own individual tasks that they run. The driver program, the Spark Context, and the cluster manager work together to coordinate all this effort and return the final result back to you. The beauty of it is, all you have to do is write the initial little script, the driver program, which uses a Spark Context to describe at a high level the processing you want to do on this data. Spark, working together with the cluster manager that you're using, figures out how to spread that out and distribute it, so you don't have to worry about all those details. Well, if it doesn't work, obviously, you might have to do some troubleshooting to figure out if you have enough resources available for the task at hand, but, in theory, it's all just magic.

It's fast

What's the big deal about Spark? I mean, there are similar technologies like MapReduce that have been around longer. Spark is fast, though, and on the website they claim that Spark is "up to 100x faster than MapReduce when running a job in memory, or 10 times faster on disk." Of course, the key words here are "up to"; your mileage may vary. I don't think I've ever seen anything, actually, run that much faster than MapReduce. Some well-crafted MapReduce code can actually still be pretty darn efficient. But I will say that Spark does make a lot of common operations easier. MapReduce forces you to really break things down into mappers and reducers, whereas Spark is a little bit higher level. You don't always have to put as much thought into doing the right thing with Spark.

Part of that leads to another reason why Spark is so fast: it has a DAG engine, a directed acyclic graph. Wow, that's another fancy word. What does it mean? The way Spark works is, you write a script that describes how to process your data, and you might have an RDD that's basically like a data frame. You might do some sort of transformation on it, or some sort of action on it. But nothing actually happens until you actually perform an action on that data. What happens at that point is, Spark will say, "Hmm, OK. So, this is the end result you want on this data. What are all the other things I had to do to get up to this point, and what's the optimal way to lay out the strategy for getting to that point?" So, under the hood, it will figure out the best way to split up that processing, and distribute that processing to get the end result that you're looking for. So, the key insight here is that Spark waits until you tell it to actually produce a result, and only at that point does it actually go and figure out how to produce that result. So, it's kind of a cool concept there, and that's the key to a lot of its efficiency.
It's young

Spark is a very hot technology, and it is relatively young, so it's still very much emerging and changing quickly, but a lot of big people are using it. Amazon, for example, has claimed they're using it; eBay, NASA's Jet Propulsion Laboratory, Groupon, TripAdvisor, Yahoo, and many, many others have too. I'm sure there are a lot of companies using it that don't confess up to it, but if you go to the Spark Apache wiki page at http://spark.apache.org/powered-by.html there's actually a list you can look up of known big companies that are using Spark to solve real-world data problems. If you are worried that you're getting into the bleeding edge here, fear not: you're in very good company with some very big people that are using Spark in production for solving real problems. It is pretty stable stuff at this point.

It's not difficult

It's also not that hard. You have your choice of programming in Python, Java, or Scala, and they're all built around the same concept that I just described earlier, that is, the Resilient Distributed Dataset, RDD for short. We'll talk about that in a lot more detail in the coming sections.

Components of Spark

Spark actually has many different components that it's built up of. So there is a Spark Core that lets you do pretty much anything you can dream up just using Spark Core functions alone, but there are these other things built on top of Spark that are also useful.
Spark Streaming: Spark Streaming is a library that lets you actually process data in real time. Data can be flowing into a server continuously, say, from weblogs, and Spark Streaming can help you process that data in real time as you go, forever.

Spark SQL: This lets you actually treat data as a SQL database, and actually issue SQL queries on it, which is kind of cool if you're familiar with SQL already.

MLlib: This is what we're going to be focusing on in this section. It is actually a machine learning library that lets you perform common machine learning algorithms, with Spark underneath the hood to actually distribute that processing across a cluster. You can perform machine learning on much larger datasets than you could have otherwise.

GraphX: This is not for making pretty charts and graphs. It refers to graph in the network theory sense. Think about a social network; that's an example of a graph. GraphX just has a few functions that let you analyze the properties of a graph of information.

Python versus Scala for Spark

I do get some flack sometimes about using Python when I'm teaching people about Apache Spark, but there's a method to my madness. It is true that a lot of people use Scala when they're writing Spark code, because that's what Spark is developed in natively. So, you are incurring a little bit of overhead by forcing Spark to translate your Python code into Scala and then into Java interpreter commands at the end of the day.

However, Python's a lot easier, and you don't need to compile things. Managing dependencies is also a lot easier. You can really focus your time on the algorithms and what you're doing, and less on the minutiae of actually getting it built, and running, and compiling, and all that nonsense. Plus, obviously, this book has been focused on Python so far, and it makes sense to keep using what we've learned and stick with Python throughout these lectures. Here's a quick summary of the pros and cons of the two languages:

Python: no need to compile, manage dependencies, and so on; less coding overhead; you already know Python; and it lets us focus on the concepts instead of a new language.

Scala: Scala is probably a more popular choice with Spark; Spark is built in Scala, so coding in Scala is "native" to Spark; and new features and libraries tend to be Scala-first.
However, I will say that if you were to do some Spark programming in the real world, there's a good chance people are using Scala. Don't worry about it too much, though, because in Spark the Python and Scala code ends up looking very similar, because it's all built around the same RDD concept. The syntax is very slightly different, but it's not that different. If you can figure out how to do Spark using Python, learning how to use it in Scala isn't that big of a leap, really. Here's a quick example of the same code in the two languages:

So, those are the basic concepts of Spark itself, why it's such a big deal, and how it's so powerful in letting you run machine learning algorithms on very large datasets, or any algorithm really. Let's now talk in a little bit more detail about how it does that, and about the core concept of the Resilient Distributed Dataset.

Spark and Resilient Distributed Datasets (RDD)

Let's get a little bit deeper into how Spark works. We're going to talk about Resilient Distributed Datasets, known as RDDs. It's sort of the core that you use when programming in Spark, and we'll have a few code snippets to try to make it real. We're going to give you a crash course in Apache Spark here. There's a lot more depth to it than what we're going to cover in the next few sections, but I'm just going to give you the basics you need to actually understand what's going on in these examples, and hopefully get you started and pointed in the right direction.
As mentioned, the most fundamental piece of Spark is called the Resilient Distributed Dataset, an RDD, and this is going to be the object that you use to actually load, transform, and get the answers you want out of the data that you're trying to process. It's a very important thing to understand. The final D in RDD stands for Dataset, and at the end of the day that's all it is; it's just a bunch of rows of information that can contain pretty much anything. But the key is the R and the first D:

Resilient: It is resilient in that Spark makes sure that if you're running this on a cluster and one of the nodes in that cluster goes down, it can automatically recover from that and retry. Now, that resilience only goes so far, mind you. If you don't have enough resources available to the job that you're trying to run, it will still fail, and you will have to add more resources to it. There are only so many things it can recover from; there is a limit to how many times it will retry a given task. But it does make its best effort to make sure that, in the face of an unstable cluster or an unstable network, it will still continue to try its best to run through to completion.

Distributed: Obviously, it is distributed. The whole point of using Spark is that you can use it for big data problems where you can actually distribute the processing across the entire CPU and memory power of a cluster of computers. That can be distributed horizontally, so you can throw as many computers as you want at a given problem. The larger the problem, the more computers; there's really no upper bound to what you can do there.

The SparkContext object

You always start your Spark scripts by getting a SparkContext object, and this is the object that embodies the guts of Spark. It is what is going to give you your RDDs to process on, so it is what generates the objects that you use in your processing. You know, you don't actually think about the SparkContext very much when you're actually writing Spark programs, but it is sort of the substrate that is running them for you under the hood. If you're running in the Spark shell interactively, it has an sc object already available for you that you can use to create RDDs. In a standalone script, however, you will have to create that SparkContext explicitly, and you'll have to pay attention to the parameters that you use, because you can actually tell the Spark context how you want it to be distributed. Should I take advantage of every core that I have available to me? Should I be running on a cluster, or just standalone on my local computer? So, that's where you set up the fundamental settings of how Spark will operate.
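For example, a standalone script's setup typically looks something like the following minimal sketch; the app name "MyApp" is just a placeholder, and "local[*]" tells Spark to use every core on your local machine (plain "local" means a single thread):

    from pyspark import SparkConf, SparkContext

    # Run locally, using every core this machine has available
    conf = SparkConf().setMaster("local[*]").setAppName("MyApp")
    sc = SparkContext(conf = conf)

We'll see this same pattern, with setMaster("local"), at the top of the scripts later in this chapter.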
Creating RDDs

Let's look at some little code snippets of actually creating RDDs, and I think it will all start to make a little bit more sense.

Creating an RDD using a Python list

The following is a very simple example:

    nums = parallelize([1, 2, 3, 4])

If I just want to make an RDD out of a plain old Python list, I can call the parallelize() function in Spark. That will convert a list of stuff, in this case just the numbers 1, 2, 3, 4, into an RDD object called nums. That is the simplest case of creating an RDD, just from a hard-coded list of stuff. That list could come from anywhere; it doesn't have to be hard-coded either, but that kind of defeats the purpose of big data. I mean, if I have to load the entire dataset into memory before I can create an RDD from it, what's the point?

Loading an RDD from a text file

I can also load an RDD from a text file, and that could be anywhere:

    sc.textFile("file:///c:/users/frank/gobs-o-text.txt")

In this example, I have a giant text file that's the entire encyclopedia or something. I'm reading that from my local disk here, but I could also use s3n if I want to host this file on a distributed Amazon S3 bucket, or hdfs if I want to refer to data that's stored on a distributed HDFS cluster (that stands for Hadoop Distributed File System, if you're not familiar with HDFS). When you're dealing with big data and working with a Hadoop cluster, usually that's where your data will live.

That line of code will actually convert every line of that text file into its own row in an RDD. So, you can think of the RDD as a database of rows, and, in this example, it will load up my text file into an RDD where every line, every row, contains one line of text. I can then do further processing in that RDD to parse or break out the delimiters in that data. But that's where I start from.
Remember when we talked about ETL and ELT earlier in the book? This is a good example of where you might actually be loading raw data into a system and doing the transform on the system itself that you use to query your data. You can take raw text files that haven't been processed at all and use the power of Spark to actually transform those into more structured data.

It can also talk to things like Hive, so if you have an existing Hive database set up at your company, you can create a Hive context that's based on your Spark context. How cool is that? Take a look at this example code:

    hiveCtx = HiveContext(sc)
    rows = hiveCtx.sql("SELECT name, age FROM users")

You can actually create an RDD, in this case called rows, that's generated by actually executing a SQL query on your Hive database.

More ways to create RDDs

There are more ways to create RDDs as well. You can create them from a JDBC connection; basically, any database that supports JDBC can also talk to Spark and have RDDs created from it. Cassandra, HBase, Elasticsearch, also files in JSON format, CSV format, sequence files, object files, and a bunch of other compressed formats like ORC can be used to create RDDs. I don't want to get into the details of all those; you can get a book and look those up if you need to, but the point is that it's very easy to create an RDD from data, wherever it might be, whether it's on a local filesystem or a distributed data store.

Again, an RDD is just a way of loading and maintaining very large amounts of data and keeping track of it all at once. But, conceptually within your script, an RDD is just an object that contains a bunch of data. You don't have to think about the scale, because Spark does that for you.

RDD operations

Now, there are two different classes of things you can do on RDDs once you have them: you can do transformations, and you can do actions.
Transformations

Let's talk about transformations first. Transformations are exactly what they sound like: a way of taking an RDD and transforming every row in that RDD to a new value, based on a function you provide. Let's look at some of those functions:

map() and flatMap(): map and flatMap are the functions you'll see the most often. Both of these will take any function that you can dream up, that will take, as input, a row of an RDD, and output a transformed row. For example, you might take raw input from a CSV file, and your map operation might take that input and break it up into individual fields based on the comma delimiter, and return back a Python list that has that data in a more structured format that you can perform further processing on. You can chain map operations together, so the output of one map might end up creating a new RDD that you then do another transformation on, and so on, and so forth. Again, the key is, Spark can distribute those transformations across the cluster, so it might take part of your RDD and transform it on one machine, and another part of your RDD and transform it on another.

Like I said, map and flatMap are the most common transformations you'll see. The only difference is that map will only allow you to output one value for every row, whereas flatMap will let you actually output multiple new rows for a given row. So you can actually create a larger RDD or a smaller RDD than you started with using flatMap.

filter(): filter can be used if what you want to do is just create a boolean function that says, "should this row be preserved or not? Yes or no."

distinct(): distinct is a less commonly used transformation that will only return back the distinct values within your RDD.

sample(): This function lets you take a random sample from your RDD.

union(), intersection(), subtract(), and cartesian(): You can perform set operations like union, intersection, and subtract, or even produce every cartesian combination that exists within an RDD.
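Before we get to the map() example below, here's a minimal sketch contrasting map, flatMap, and filter; the sample lines and variable names are made up purely for illustration, and it assumes the sc SparkContext described earlier:

    lines = sc.parallelize(["the quick brown fox", "the lazy dog"])

    # map: exactly one output row per input row (here, a word count per line)
    wordCounts = lines.map(lambda line: len(line.split()))

    # flatMap: each input row can become many output rows (one per word)
    words = lines.flatMap(lambda line: line.split())

    # filter: keep only the rows the boolean function approves of
    longWords = words.filter(lambda word: len(word) > 3)

    print(wordCounts.collect())   # [4, 3]
    print(longWords.collect())    # ['quick', 'brown', 'lazy']

Note that the collect() calls at the end are actions; as we'll see shortly, none of the transformations above actually execute until an action asks for a result.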
Using map()

Here's a little example of how you might use the map function in your work:

    rdd = sc.parallelize([1, 2, 3, 4])
    rdd.map(lambda x: x*x)

Let's say I created an RDD just from the list 1, 2, 3, 4. I can then call rdd.map() with a lambda function of x that takes in each row, that is, each value of that RDD, calls it x, and then applies the function x multiplied by x to square it. If I were to then collect the output of this RDD, it would be 1, 4, 9, and 16, because it would take each individual entry of that RDD and square it, and put that into a new RDD.

If you don't remember what lambda functions are, we did talk about them a little bit earlier in this book, but as a refresher, the lambda function is just shorthand for defining a function in line. So rdd.map(lambda x: x*x) is exactly the same thing as defining a separate function def squareIt(x): return x*x, and saying rdd.map(squareIt). It's just shorthand for very simple functions that you want to pass in as a transformation. It eliminates the need to actually declare this as a separate named function of its own. That's the whole idea of functional programming. So you can say you understand functional programming now, by the way! But really, it's just shorthand notation for defining a function inline as part of the parameters to a map function, or any transformation for that matter.

Actions

You can also perform actions on an RDD, when you want to actually get a result. Here are some examples of what you can do:

collect(): You can call collect() on an RDD, which will give you back a plain old Python object that you can then iterate through and print out the results, or save them to a file, or whatever you want to do.

count(): You can also call count(), which will force it to actually go count how many entries are in the RDD at this point.

countByValue(): This function will give you a breakdown of how many times each unique value within that RDD occurs.

take(): You can also sample from the RDD using take(), which will take a random number of entries from the RDD.
top(): top() will give you the first few entries in that RDD if you just want to get a little peek into what's in there, for debugging purposes.

reduce(): The more powerful action is reduce(), which will actually let you combine values together. You can also use RDDs in the context of key-value data, where the closely related reduceByKey() lets you define a way of combining together all the values for a given key. It is very much similar in spirit to MapReduce; reduce() is basically the analogous operation to a reducer in MapReduce, and map() is analogous to a mapper. So, it's often very straightforward to actually take a MapReduce job and convert it to Spark by using these functions.

Remember, too, that nothing actually happens in Spark until you call an action. Once you call one of those action methods, that's when Spark goes out and does its magic with directed acyclic graphs, and actually computes the optimal way to get the answer you want. But remember, nothing really occurs until that action happens. So, that can sometimes trip you up when you're writing Spark scripts, because you might have a little print statement in there, and you might expect to get an answer, but it doesn't actually appear until the action is actually performed.

That is Spark in a nutshell. Those are the basics you need for Spark programming: basically, what is an RDD and what are the things you can do to an RDD. Once you get those concepts, then you can write some Spark code. Let's change tack now and talk about MLlib, and some specific features in Spark that let you do machine learning algorithms using Spark.

Introducing MLlib

Fortunately, you don't have to do things the hard way in Spark when you're doing machine learning. It has a built-in component called MLlib that lives on top of Spark Core, and this makes it very easy to perform complex machine learning algorithms using massive datasets, distributing that processing across an entire cluster of computers. So, very exciting stuff. Let's learn more about what it can do.
Some MLlib capabilities

So, what are some of the things MLlib can do? Well, one is feature extraction. One thing you can do at scale is term frequency and inverse document frequency stuff, and that's useful for creating, for example, search indexes. We will actually go through an example of that later in this chapter. The key, again, is that it can do this across a cluster using massive datasets, so you could make your own search engine for the web with this, potentially. It also offers basic statistics functions, chi-squared tests, Pearson or Spearman correlation, and some simpler things like min, max, mean, and variance. Those aren't terribly exciting in and of themselves, but what is exciting is that you can actually compute the variance, or the mean, or whatever, or the correlation score, across a massive dataset, and it would actually break that dataset up into various chunks and run that across an entire cluster if necessary. So, even if some of these operations aren't terribly interesting, what's interesting is the scale at which they can operate.

It can also support things like linear regression and logistic regression, so if you need to fit a function to a massive set of data and use that for predictions, you can do that too. It also supports support vector machines. We're getting into some of the more fancy algorithms here, some of the more advanced stuff, and that too can scale up to massive datasets using Spark's MLlib. There is a Naive Bayes classifier built into MLlib, so, remember that spam classifier that we built earlier in the book? You could actually do that for an entire e-mail system using Spark, and scale that up as far as you want to.

Decision trees, one of my favorite things in machine learning, are also supported by Spark, and we'll actually have an example of that later in this chapter. We'll also look at K-means clustering, and you can do clustering using K-means on massive datasets with Spark and MLlib. Even principal component analysis and SVD (singular value decomposition) can be done with Spark as well, and we'll have an example of that too. And, finally, there's a built-in recommendations algorithm called Alternating Least Squares that's built into MLlib. Personally, I've had kind of mixed results with it; you know, it's a little bit too much of a black box for my taste, but I am a recommender system snob, so take that with a grain of salt!

Special MLlib data types

Using MLlib is usually pretty straightforward; there are just some library functions you need to call. It does introduce a few new data types, however, that you need to know about, and one is the vector.
The vector data type

Remember when we were doing movie similarities and movie recommendations earlier in the book? An example of a vector might be a list of all the movies that a given user rated. There are two types of vector, sparse and dense. Let's look at an example of those. There are many, many movies in the world, and a dense vector would actually represent data for every single movie, whether or not a user actually watched it. So, for example, let's say I have a user who watched Toy Story; obviously I would store their rating for Toy Story, but if they didn't watch the movie Star Wars, I would actually store the fact that there is not a number for Star Wars. So, we end up taking up space for all these missing data points with a dense vector. A sparse vector only stores the data that exists, so it doesn't waste any memory space on missing data. So, it's a more compact form of representing a vector internally, but obviously that introduces some complexity while processing. So, it's a good way to save memory if you know that your vectors are going to have a lot of missing data in them.

LabeledPoint data type

There's also a LabeledPoint data type that comes up, and that's just what it sounds like: a point that has some sort of label associated with it that conveys the meaning of the data in human-readable terms.

Rating data type

Finally, there is a Rating data type that you'll encounter if you're using recommendations with MLlib. This data type can take in a rating that represents a 1-5 or 1-10 star rating, whatever star rating a person might have, and use that to inform product recommendations automatically.

So, I think you finally have everything you need to get started; let's dive in and actually look at some real MLlib code and run it, and then it will make a lot more sense.
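Before we do, just to make those three data types concrete, here's a minimal sketch of what they look like in Python; the vector size, movie IDs, and ratings below are made up purely for illustration:

    from pyspark.mllib.linalg import Vectors
    from pyspark.mllib.regression import LabeledPoint
    from pyspark.mllib.recommendation import Rating

    # Dense vector: stores a slot for every movie, even the ones never rated
    dense = Vectors.dense([5.0, 0.0, 0.0, 3.0])

    # Sparse vector: 1,000 possible movies, but only two ratings actually stored
    sparse = Vectors.sparse(1000, {0: 5.0, 999: 3.0})

    # LabeledPoint: a label (here 1.0) attached to a feature vector
    point = LabeledPoint(1.0, dense)

    # Rating: user 0 gave product 50 a five-star rating
    rating = Rating(0, 50, 5.0)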
Decision trees in Spark with MLlib

Alright, let's actually build some decision trees using Spark and the MLlib library; this is very cool stuff. Wherever you put the course materials for this book, I want you to go to that folder now. Make sure you're completely closed out of Canopy, or whatever environment you're using for Python development, because I want to make sure you're starting it from this directory, OK? Find the SparkDecisionTree script, and double-click that to open up Canopy.

Now, up until this point we've been using IPython Notebooks for our code, but you can't really use those very well with Spark. With Spark scripts, you need to actually submit them to the Spark infrastructure and run them in a very special way, and we'll see how that works shortly.
Exploring decision trees code

So, we are just looking at a raw Python script file now, without any of the usual embellishment of the IPython Notebook stuff. Let's walk through what's going on in the script. We'll go through it slowly, because this is the first Spark script that you've seen in this book.

First, we're going to import, from pyspark.mllib, the bits that we need from the machine learning library for Spark:

    from pyspark.mllib.regression import LabeledPoint
    from pyspark.mllib.tree import DecisionTree
We need the LabeledPoint class, which is a data type required by the DecisionTree class, and the DecisionTree class itself, imported from mllib.tree.

Next, pretty much every Spark script you see is going to include this line, where we import SparkConf and SparkContext:

    from pyspark import SparkConf, SparkContext

This is needed to create the SparkContext object that is kind of the root of everything you do in Spark.

And finally, we're going to import the array library from numpy:

    from numpy import array

Yes, you can still use NumPy, and scikit-learn, and whatever you want within Spark scripts. You just have to make sure, first of all, that these libraries are installed on every machine that you intend to run it on. If you're running on a cluster, you need to make sure that those Python libraries are already in place somehow, and you also need to understand that Spark will not make the scikit-learn methods, for example, magically scalable. You can still call these functions in the context of a given map function, or something like that, but they're only going to run on that one machine within that one process. Don't lean on that stuff too heavily, but, for simple things like managing arrays, it's a totally okay thing to do.

Creating the SparkContext

Now, we'll start by setting up our SparkContext, and giving it a SparkConf, a configuration:

    conf = SparkConf().setMaster("local").setAppName("SparkDecisionTree")

This configuration object says, I'm going to set the master node to "local", and this means that I'm just running on my own local desktop; I'm not actually running on a cluster at all, and I'm just going to run in one process. I'm also going to give it an app name of "SparkDecisionTree", and you can call that whatever you want: Fred, Bob, Tim, whatever floats your boat. It's just what this job will appear as if you were to look at it in the Spark console later on.
And then we will create our SparkContext object using that configuration:

    sc = SparkContext(conf = conf)

That gives us an sc object we can use for creating RDDs.

Next, we have a bunch of functions:

    # Some functions that convert our CSV input data into numerical
    # features for each job candidate
    def binary(YN):
        if (YN == 'Y'):
            return 1
        else:
            return 0

    def mapEducation(degree):
        if (degree == 'BS'):
            return 1
        elif (degree == 'MS'):
            return 2
        elif (degree == 'PhD'):
            return 3
        else:
            return 0

    # Convert a list of raw fields from our CSV file to a
    # LabeledPoint that MLLib can use. All data must be numerical...
    def createLabeledPoints(fields):
        yearsExperience = int(fields[0])
        employed = binary(fields[1])
        previousEmployers = int(fields[2])
        educationLevel = mapEducation(fields[3])
        topTier = binary(fields[4])
        interned = binary(fields[5])
        hired = binary(fields[6])

        return LabeledPoint(hired, array([yearsExperience, employed,
            previousEmployers, educationLevel, topTier, interned]))

Let's just note these functions for now, and we'll come back to them later.
Importing and cleaning our data

Let's go to the first bit of Python code that actually gets executed in this script. The first thing we're going to do is load up the PastHires.csv file, and that's the same file we used in the decision tree exercise that we did earlier in this book.

Let's pause quickly to remind ourselves of the content of that file. If you remember right, we have a bunch of attributes of job candidates, and we have a field of whether or not we hired those people. What we're trying to do is build up a decision tree that will predict: would we hire or not hire a person, given those attributes?

Now, let's take a quick peek at PastHires.csv, which we'll open as an Excel file.
You can see that Excel actually imported this into a table, but if you were to look at the raw text you'd see that it's made up of comma-separated values.

The first line is the actual headings of each column, so what we have above are the number of years of prior experience, whether the candidate is currently employed or not, the number of previous employers, the level of education, whether they went to a top-tier school, whether they had an internship while they were in school, and, finally, the target that we're trying to predict on: whether or not they got a job offer at the end of the day.

Now, we need to read that information into an RDD so we can do something with it. Let's go back to our script:

    rawData = sc.textFile("sundog-consult/udemy/datascience/PastHires.csv")
    header = rawData.first()
    rawData = rawData.filter(lambda x: x != header)

The first thing we need to do is read that CSV data in, and we're going to throw away that first row, because that's our header information, remember. So, here's a little trick for doing that: we start off by importing every single line from that file into a raw data RDD, and I could call that anything I want, but we're calling it rawData. SparkContext has a textFile() function that will take a text file and create a new RDD, where each entry, each line of the RDD, consists of one line of input.

Make sure you change the path to that file to wherever you actually installed it, otherwise it won't work.

Now, I'm going to extract the first line, the first row, from that RDD by using the first() function. So, now the header RDD will contain one entry that is just that row of column headers. And now, look what's going on in the above code: I'm using filter() on my original data that contains all of the information in that CSV file, and I'm defining a filter function that will only let lines through if that line is not equal to the contents of that initial header row.

What I've done here is, I've taken my raw CSV file and I've stripped out the first line by only allowing lines that do not equal that first line to survive, and I'm returning that back to the rawData RDD variable again. So, I'm taking rawData, filtering out that first line, and creating a new rawData that only contains the data itself. With me so far? It's not that complicated.
Now, we're going to use a map function. What we need to do next is start to make more structure out of this information. Right now, every row of my RDD is just a line of text; it is comma-delimited text, but it's still just a giant line of text, and I want to take that comma-separated value list and actually split it up into individual fields. At the end of the day, I want each row of the RDD to be transformed from a line of text that has a bunch of information separated by commas into a Python list that has actual individual fields for each column of information that I have. So, that's what this lambda function does:

    csvData = rawData.map(lambda x: x.split(","))

It calls the built-in Python function split(), which will take a row of input, split it on comma characters, and divide it into a list of every field delimited by commas.

The output of this map function, where I passed in a lambda function that just splits every line into fields based on commas, is a new RDD called csvData. And, at this point, csvData is an RDD that contains, on every row, a list where every element is a column from my source data. Now, we're getting close.

It turns out that in order to use a decision tree with MLlib, a couple of things need to be true. First of all, the input has to be in the form of LabeledPoint data types, and it all has to be numeric in nature. So, we need to transform all of our raw data into data that can actually be consumed by MLlib, and that's what the createLabeledPoints function that we skipped past earlier does. We'll get to that in just a second; first, here's the call to it:

    trainingData = csvData.map(createLabeledPoints)

We're going to call map on csvData, and we are going to pass it the createLabeledPoints function, which will transform every input row into something even closer to what we want at the end of the day. So, let's look at what createLabeledPoints does:

    def createLabeledPoints(fields):
        yearsExperience = int(fields[0])
        employed = binary(fields[1])
        previousEmployers = int(fields[2])
        educationLevel = mapEducation(fields[3])
        topTier = binary(fields[4])
        interned = binary(fields[5])
        hired = binary(fields[6])

        return LabeledPoint(hired, array([yearsExperience, employed,
            previousEmployers, educationLevel, topTier, interned]))
It takes in a list of fields, and just to remind you again what that looks like, let's pull up that CSV file in Excel again.

So, at this point, every RDD entry is a list of fields; it's a Python list, where the first element is the years of experience, the second element is employed, and so on and so forth. The problems here are that we want to convert those lists to LabeledPoints, and we want to convert everything to numerical data. So, all these yes and no answers need to be converted to ones and zeros, and these levels of education need to be converted from names of degrees to some numeric ordinal value. Maybe we'll assign the value zero to no education, one can mean BS, two can mean MS, and three can mean PhD, for example. Again, all these yes/no values need to be converted to zeros and ones, because, at the end of the day, everything going into our decision tree needs to be numeric, and that's what createLabeledPoints does. Now, let's go back to the code and run through it:

    def createLabeledPoints(fields):
        yearsExperience = int(fields[0])
        employed = binary(fields[1])
        previousEmployers = int(fields[2])
        educationLevel = mapEducation(fields[3])
        topTier = binary(fields[4])
        interned = binary(fields[5])
        hired = binary(fields[6])

        return LabeledPoint(hired, array([yearsExperience, employed,
            previousEmployers, educationLevel, topTier, interned]))
First, it takes in our list of string fields, ready to convert it into LabeledPoints, where the label is the target value (was this person hired or not? 1 or 0), followed by an array that consists of all the other fields that we care about. So, this is how you create a LabeledPoint that the DecisionTree MLlib class can consume.

So, you see in the above code that we're converting years of experience from a string to an integer value, and for all the yes/no fields, we're calling this binary() function, that I defined up at the top of the code but we haven't discussed yet:

    def binary(YN):
        if (YN == 'Y'):
            return 1
        else:
            return 0

All it does is convert the character 'Y' to 1; otherwise it returns 0. So, Y will become 1, and N will become 0. Similarly, I have a mapEducation() function:

    def mapEducation(degree):
        if (degree == 'BS'):
            return 1
        elif (degree == 'MS'):
            return 2
        elif (degree == 'PhD'):
            return 3
        else:
            return 0

As we discussed earlier, this simply converts different types of degrees to an ordinal numeric value, in exactly the same way as our yes/no fields.

As a reminder, this is the line of code that sent us running through those functions:

    trainingData = csvData.map(createLabeledPoints)

At this point, after mapping our RDD using that createLabeledPoints function, we now have a trainingData RDD, and this is exactly what MLlib wants for constructing a decision tree.
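Just to visualize the shape of the data at this point: if you were to call trainingData.take(1), each element would come back looking something like LabeledPoint(1.0, [10.0, 1.0, 4.0, 1.0, 0.0, 0.0]); the specific numbers here are made up, but the structure is always a label (hired or not) followed by the six numeric features we built above.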
Creating a test candidate and building our decision tree

Let's create a little test candidate we can use, so we can use our model to actually predict whether someone new would be hired or not. What we're going to do is create a test candidate that consists of an array of the same values for each field as we had in the CSV file:

    testCandidates = [ array([10, 1, 3, 1, 0, 0]) ]

Let's quickly compare that code with the Excel document so you can see the array mapping. Again, we need to map these back to their original column representation, so that means 10 years of prior experience, currently employed, three previous employers, a BS degree, did not go to a top-tier school, and did not do an internship. We could actually create an entire RDD full of candidates if we wanted to, but we'll just do one for now.

Next, we'll use parallelize to convert that list into an RDD:

    testData = sc.parallelize(testCandidates)

Nothing new there. Alright, now for the magic. Let's move to the next code block:

    model = DecisionTree.trainClassifier(trainingData, numClasses=2,
        categoricalFeaturesInfo={1:2, 3:4, 4:2, 5:2},
        impurity='gini', maxDepth=5, maxBins=32)

We are going to call DecisionTree.trainClassifier, and this is what will actually build our decision tree itself. We pass in our trainingData, which is just an RDD full of LabeledPoint arrays; numClasses=2, because we have, basically, a yes or no prediction that we're trying to make: will this person be hired or not? The next parameter is called categoricalFeaturesInfo, and this is a Python dictionary that maps fields to the number of categories in each field. So, if you have a continuous range available to a given field, like the number of years of experience, you wouldn't specify that at all in here, but for fields that are categorical in nature, such as what degree they have, for example, that would say field ID 3, mapping to the degree attained, which has four different possibilities: no education, BS, MS, and PhD. For all of the yes/no fields, we're mapping those to 2 possible categories; yes/no, or 0/1, is what we converted those to.
Continuing to move through our DecisionTree.trainClassifier() call, we are going to use the 'gini' impurity metric as we measure the entropy. We have a maxDepth of 5, which is just an upper boundary on how far we're going to go; that can be larger if you wish. Finally, maxBins is just a way to trade off computational expense if you can; it just needs to be at least the maximum number of categories you have in each feature. Remember, nothing really happens until we call an action, so we're going to actually use this model to make a prediction for our test candidate.

We use our DecisionTree model, which contains a decision tree that was trained on our training data, and we tell it to make a prediction on our test data:

    predictions = model.predict(testData)
    print('Hire prediction:')
    results = predictions.collect()
    for result in results:
        print(result)

We'll get back a list of predictions that we can then iterate through. So, predict() returns a plain old Python object and is an action that I can collect(). Let me rephrase that a little bit: collect() will return a Python object on our predictions, and then we can iterate through every item in that list and print the result of the prediction.

We can also print out the decision tree itself by using toDebugString():

    print('Learned classification tree model:')
    print(model.toDebugString())

That will actually print out a little representation of the decision tree that it created internally, that you can follow through in your own head. So, that's kind of cool too.

Running the script

Alright, feel free to take some time, stare at this script a little bit more, and digest what's going on, but, if you're ready, let's move on and actually run this beast. To do so, you can't just run it directly from Canopy. We're going to go to the Tools menu and open up a Canopy Command Prompt, and this just opens up a Windows command prompt with all the necessary environment variables in place for running Python scripts in Canopy. Make sure that the working directory is the directory that you installed all of the course materials into.
All we need to do is call spark-submit, which is a script that lets you run Spark scripts written in Python, and then give it the name of the script, SparkDecisionTree.py. That's all I have to do:

    spark-submit SparkDecisionTree.py

Hit Return, and off it will go. Again, if I were doing this on a cluster and had created my SparkConf accordingly, this would actually get distributed to the entire cluster, but, for now, we're just going to run it on my computer. When it's finished, you should see the below output:

So, in the above image, you can see that for the test person we put in above, we have a prediction that this person would be hired, and I've also printed out the decision tree itself, so that's kind of cool. Now, let's bring up that Excel document once more so we can compare it to the output:
We can walk through this and see what it means. So, in our output decision tree we actually end up with a depth of four, with nine different nodes, and, again, if we remind ourselves what these different fields correlate to, the way to read this is: if (feature 1 in {0.0}), so that means if employed is No, then we drop down to feature 5. This list is zero-based, so feature 5 in our Excel document is internships. We can run through the tree like that: this person is not currently employed, did not do an internship, has no prior years of experience, and has a bachelor's degree, so we would not hire this person. Then we get to the else clauses: if that person had an advanced degree, we would hire them, just based on the data that we had, that we trained it on. So, you can work out what these different feature IDs mean back in your original source data. Remember, you always start counting at 0, and interpret that accordingly. Note that all the categorical features are expressed as booleans over the list of possible categories that it saw, whereas continuous data is expressed numerically as less-than or greater-than relationships.

And there you have it: an actual decision tree built using Spark and MLlib that actually works and makes sense. Pretty awesome stuff.

K-means clustering in Spark

Alright, let's look at another example of using Spark in MLlib, and this time we're going to look at K-means clustering. Just like we did with decision trees, we're going to take the same example that we did using scikit-learn, and we're going to do it in Spark instead, so it can actually scale up to a massive dataset. So, again, I've made sure to close out of everything else, and I'm going to go into my book materials and open up the SparkKMeans Python script. Let's study what's going on in it.
Alright, so, again, we begin with some boilerplate stuff:

    from pyspark.mllib.clustering import KMeans
    from numpy import array, random
    from math import sqrt
    from pyspark import SparkConf, SparkContext
    from sklearn.preprocessing import scale

We're going to import the KMeans package from the MLlib clustering package; we're going to import array and random from numpy, because, again, we're free to use whatever we want; this is a Python script at the end of the day, and MLlib often does require numpy arrays as input. We're going to import the sqrt function and the usual boilerplate stuff; we need SparkConf and SparkContext, pretty much every time, from pyspark. We're also going to import the scale function from scikit-learn. Again, it's OK to use scikit-learn as long as you make sure it's installed on every machine that you're going to be running this job on, and also don't assume that scikit-learn will magically scale itself up just because you're running it on Spark. But, since I'm only using it for the scaling function, it's OK.

Alright, let's go ahead and set things up. I'm going to create a global variable first, K = 5; I'm going to run K-means clustering in this example with a K of 5, meaning with five different clusters. I'm then going to go ahead and set up a local SparkConf, just running on my own desktop:

    conf = SparkConf().setMaster("local").setAppName("SparkKMeans")
    sc = SparkContext(conf = conf)
I'm going to set the name of my application to SparkKMeans and create a SparkContext object that I can then use to create RDDs that run on my local machine. We'll skip past the createClusteredData function for now, and go to the first line of code that gets run:

    data = sc.parallelize(scale(createClusteredData(100, K)))

The first thing we're going to do is create an RDD by parallelizing in some fake data that I'm creating, and that's what the createClusteredData function does. Basically, I'm telling it to create 100 data points clustered around K centroids, and this is pretty much identical to the code that we looked at when we played with K-means clustering earlier in the book. If you want a refresher, go ahead and look back at that. Basically, what we're going to do is create a bunch of random centroids around which we normally distribute some age and income data. So, what we're doing is trying to cluster people based on their age and income, and we are fabricating some data points to do that. That returns a numpy array of our fake data.

Once that result comes back from createClusteredData, I'm calling scale() on it, and that will ensure that my ages and incomes are on comparable scales. Now, remember the section we studied about data normalization? This is one of those examples where it is important, so we are normalizing that data with scale() so that we get good results from K-means. And, finally, we parallelize the resulting list of arrays into an RDD using parallelize(). Now our data RDD contains all of our fake data.

All we have to do, and this is even easier than a decision tree, is call KMeans.train() on our training data:

    clusters = KMeans.train(data, K, maxIterations=10, initializationMode="random")

We pass in the number of clusters we want, our K value, and a parameter that puts an upper boundary on how much processing it's going to do; we then tell it to use the default initialization mode of K-means, where we just randomly pick our initial centroids for our clusters before we start iterating on them, and back comes the model that we can use. We're going to call that clusters.

Alright, now we can play with that cluster model.
Let's start by printing out the cluster assignments for each one of our points. So, we're going to take our original data and transform it using a lambda function:

    resultRDD = data.map(lambda point: clusters.predict(point)).cache()

This function is just going to transform each point into the cluster number that is predicted from our model. Again, we're just taking our RDD of data points, we're calling clusters.predict() to figure out which cluster our K-means model is assigning them to, and we're just going to put the results in our resultRDD. Now, one thing I want to point out here is the cache() call in the above code.

An important thing when you're doing Spark is that any time you're going to call more than one action on an RDD, it's important to cache it first, because when you call an action on an RDD, Spark goes off and figures out the DAG for it, and how to optimally get to that result. It will go off and actually execute everything to get that result. So, if I call two different actions on the same RDD, it will actually end up evaluating that RDD twice, and if you want to avoid all of that extra work, you can cache your RDD in order to make sure that it does not recompute it more than once.

By doing that, we make sure these two subsequent operations do the right thing:

    print("Counts by value:")
    counts = resultRDD.countByValue()
    print(counts)

    print("Cluster assignments:")
    results = resultRDD.collect()
    print(results)

In order to get an actual result, what we're going to do is use countByValue(), and what that will do is give us back how many points are in each cluster. Remember, resultRDD currently has mapped every individual point to the cluster it ended up with, so now we can use countByValue() to just count up how many values we see for each given cluster ID. We can then easily print that list out. And we can actually look at the raw results of that RDD as well, by calling collect() on it, and that will give me back every single point's cluster assignment, and we can print out all of them.
Within set sum of squared errors (WSSSE)

Now, how do we measure how good our clusters are? Well, one metric for that is called the within set sum of squared errors. Wow, that sounds fancy! It's such a big term that we need an abbreviation for it: WSSSE. All it is, is we look at the distance from each point to its centroid, the final centroid in each cluster, take the square of that error, and sum it up for the entire dataset. It's just a measure of how far apart each point is from its centroid. Obviously, if there's a lot of error in our model, then the points will tend to be far apart from the centroids that might apply, so that might suggest we need a higher value of K, for example. We can go ahead and compute that value and print it out with the following code:

    def error(point):
        center = clusters.centers[clusters.predict(point)]
        return sqrt(sum([x**2 for x in (point - center)]))

    WSSSE = data.map(lambda point: error(point)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))

First of all, we define this error function that computes the squared error for each point. It just takes the distance from the point to the centroid center of its cluster and sums it up. To do that, we're taking our source data, calling a lambda function on it that actually computes the error from each centroid center point, and then we can chain different operations together here.

First, we call map to compute the error for each point. Then, to get a final total that represents the entire dataset, we're calling reduce on that result. So, we're doing data.map to compute the error for each point, and then reduce to take all of those errors and add them all together. And that's what the little lambda function does. This is basically a fancy way of saying, "I want you to add up everything in this RDD into one final result." reduce will take the entire RDD, two things at a time, and combine them together using whatever function you provide. The function I'm providing it above is "take the two rows that I'm combining together and just add them up."

If we do that throughout every entry of the RDD, we end up with a final summed-up total. It might seem like a little bit of a convoluted way to just sum up a bunch of values, but by doing it this way we are able to make sure that we can actually distribute this operation if we need to. We could actually end up computing the sum of one piece of the data on one machine, and the sum of a different piece over on another machine, and then take those two sums and combine them together into a final result. This reduce function is saying, how do I take any two intermediate results from this operation, and combine them together?
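As a tiny illustration with made-up numbers: reducing the values 1, 2, 3, and 4 with lambda x, y: x + y computes 1 + 2 = 3, then 3 + 3 = 6, then 6 + 4 = 10. Because addition doesn't care how you group things, Spark could just as well compute 1 + 2 = 3 on one machine and 3 + 4 = 7 on another, and then combine 3 and 7 into the same final 10; that's exactly why reduce distributes so nicely.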
Again, feel free to take a moment and stare at this a little bit longer if you want it to sink in. Nothing really fancy is going on here, but there are a few important points: we introduced the use of cache(), if you want to make sure that you don't do unnecessary recomputations on an RDD that you're going to use more than once; we introduced the use of the reduce() function; and we have a couple of interesting mapper functions as well here. So, there's a lot to learn from in this example. At the end of the day, it will just do K-means clustering, so let's go ahead and run it.

Running the code

Go to the Tools menu, Canopy Command Prompt, and type in:

    spark-submit SparkKMeans.py

Hit Return, and off it will go. In this situation, you might have to wait a few moments for the output to appear in front of you, but you should see something like this:

It worked, awesome! So remember, the output that we asked for was, first of all, a count of how many points ended up in each cluster. So, this tells us how many points ended up in cluster 0, how many in cluster 1, and so on and so forth. It ended up pretty evenly distributed, so that's a good sign.

Next, we printed out the cluster assignments for each individual point, and, if you remember, the code that fabricated this data generated it sequentially, so it's actually a good thing that you see long runs of the same cluster number together. It looks like it got a little bit confused between a couple of the clusters, but, by and large, it seems to have done a pretty good job of uncovering the clusters that we created the data with originally.
And finally, we computed the WSSSE metric and printed it out for this run. So, if you want to play around with this a little bit, I encourage you to do so. You can see what happens to that error metric as you increase or decrease the values of K, and think about why that may be. You can also experiment with what happens if you don't normalize all the data. Does that actually affect your results in a meaningful way? Is that actually an important thing to do? And you can also experiment with the maxIterations parameter on the model itself and get a good feel of what that actually does to the final results, and how important it is. So, feel free to mess around with it and experiment away. That's K-means clustering done with MLlib and Spark in a scalable manner. Very cool stuff.

TF-IDF

So, our final example of MLlib is going to be using something called Term Frequency Inverse Document Frequency, or TF-IDF, which is the fundamental building block of many search algorithms. As usual, it sounds complicated, but it's not as bad as it sounds. So, first, let's talk about the concepts of TF-IDF and how we might go about using that to solve a search problem. What we're actually going to do with TF-IDF is create a rudimentary search engine for Wikipedia using Apache Spark in MLlib. How awesome is that? Let's get started.

TF-IDF stands for Term Frequency and Inverse Document Frequency, and these are basically two metrics that are closely interrelated for doing search and figuring out the relevancy of a given word to a document, given a larger body of documents. So, for example, every article on Wikipedia might have a term frequency associated with it; every page on the Internet could have a term frequency associated with it for every word that appears in that document. Sounds fancy, but, as you'll see, it's a fairly simple concept. All term frequency means is how often a given word occurs in a given document. So, within one web page, within one Wikipedia article, within one whatever, how common is a given word within that document? What is the ratio of that word's occurrence rate throughout all the words in that document? That's it. That's all term frequency is. Document frequency is the same idea, but this time it is the frequency of that word across the entire corpus of documents. So, how often does this word occur throughout all of the documents that I have, all the web pages, all of the articles on Wikipedia, whatever. For example, common words like "a" or "the" would have a very high document frequency, and I would expect them to also have a very high term frequency, but that doesn't necessarily mean they're relevant to a given document.
You can kind of see where we're going with this. So, let's say we have a very high term frequency and a very low document frequency for a given word. The ratio of these two things can give me a measure of the relevance of that word to the document. So, if I see a word that occurs very often in a given document, but not very often in the overall space of documents, then I know that this word probably conveys some special meaning to this particular document. It might convey what this document is actually about. So, that's TF-IDF. It just stands for Term Frequency x Inverse Document Frequency, which is just a fancy way of saying term frequency over document frequency, which is just a fancy way of saying: how often does this word occur in this document compared to how often it occurs in the entire body of documents? It's that simple.

TF-IDF in practice

In practice, there are a few little nuances to how we use this. For example, we use the actual log of the inverse document frequency instead of the raw value, and that's because word frequencies in reality tend to be distributed exponentially. So, by taking the log, we end up with a slightly better weighting of words, given their overall popularity. There are some limitations to this approach, obviously. One is that we basically assume a document is nothing more than a bag of words; we assume there are no relationships between the words themselves. Obviously, that's not always the case, and actually parsing them out can be a good part of the work, because you have to deal with things like synonyms and various tenses of words, abbreviations, capitalizations, misspellings, and so on. This gets back to the idea of cleaning your data being a large part of your job as a data scientist, and it's especially true when you're dealing with natural language processing. Fortunately, there are some libraries out there that can help you with this, but it is a real problem and it will affect the quality of your results.

Another implementation trick that we use with TF-IDF is, instead of storing actual string words with their term frequencies and inverse document frequencies, to save space and make things more efficient, we actually map every word to a numerical value, a hash value we call it. The idea is that we have a function that can take any word, look at its letters, and assign it, in some fairly well-distributed manner, to a number in a fixed range. That way, instead of using the word "represented", we might assign that word a number, and we can then refer to the word "represented" by that number from now on. Now, if the space of your hash values isn't large enough, you could end up with different words being represented by the same number, which sounds worse than it is. But you want to make sure that you have a fairly large hash space so that is unlikely to happen. Those are called hash collisions. They can cause issues, but, in reality, there are only so many words that people commonly use in the English language. You can get away with 100,000 or so and be just fine.
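To make the arithmetic concrete, here is a tiny, pure-Python sketch of TF-IDF on a made-up three-document corpus. This toy example is my own, not part of the Spark script we're about to walk through; MLlib will do all of this for us at scale:

    import math

    # A made-up toy corpus of three "documents":
    corpus = [
        "gettysburg address abraham lincoln speech".split(),
        "lincoln was the sixteenth president".split(),
        "the speech was short".split(),
    ]

    def tf(word, document):
        # Term frequency: how often the word occurs within this one document.
        return document.count(word) / len(document)

    def idf(word, corpus):
        # Inverse document frequency: log of (total docs / docs containing the word).
        docs_with_word = sum(1 for doc in corpus if word in doc)
        return math.log(len(corpus) / docs_with_word)

    def tf_idf(word, document, corpus):
        return tf(word, document) * idf(word, corpus)

    # "the" appears in most documents, so it scores low everywhere;
    # "gettysburg" appears in only one document, so it scores highly there.
    for word in ("the", "gettysburg"):
        print(word, [round(tf_idf(word, doc, corpus), 3) for doc in corpus])

The hashing trick described above would simply replace each word string with something like hash(word) % 100000 before doing any of this, so we only ever store numbers instead of strings.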
Doing this at scale is the hard part. If you want to do this over all of Wikipedia, then you're going to have to run it on a cluster. But for the sake of argument, we are just going to run this on our own desktop for now, using a small sample of Wikipedia data.

Using TF-IDF

How do we turn that into an actual search problem? Once we have TF-IDF, we have this measure of each word's relevancy to each document. What do we do with it? Well, one thing you could do is compute TF-IDF for every word that we encounter in the entire body of documents that we have, and then, let's say we want to search for a given term, a given word. Let's say we want to ask, "What Wikipedia article in my set of Wikipedia articles is most relevant to Gettysburg?" I could sort all the documents by their TF-IDF score for Gettysburg, and just take the top results, and those are my search results for Gettysburg. That's it. Just take your search word, compute TF-IDF, take the top results. That's it. Obviously, in the real world there's a lot more to search than that. Google has armies of people working on this problem and it's way more complicated in practice, but this will actually give you a working search engine algorithm that produces reasonable results. Let's go ahead and dive in and see how it all works.

Searching Wikipedia with Spark MLlib

We're going to build an actual working search algorithm for a piece of Wikipedia using Apache Spark in MLlib, and we're going to do it all in just a few dozen lines of code. This might be the coolest thing we do in this entire book!
Go into your course materials and open up the TFIDF.py script, and that should open up Canopy with the following code:

Now, step back for a moment and let it sink in that we're actually creating a working search algorithm, along with a few examples of using it, in just a few dozen lines of code here, and it's scalable. I could run this on a cluster. It's kind of amazing. Let's step through the code.
Import statements

We're going to start by importing the SparkConf and SparkContext libraries that we need for any Spark script that we run in Python, and then we're going to import HashingTF and IDF, using the following commands:

    from pyspark import SparkConf, SparkContext
    from pyspark.mllib.feature import HashingTF
    from pyspark.mllib.feature import IDF

So, this is what computes the term frequencies (TF) and inverse document frequencies (IDF) within our documents.

Creating the initial RDD

We'll start off with our boilerplate Spark stuff that creates a local SparkConf and a SparkContext, from which we can then create our initial RDD:

    conf = SparkConf().setMaster("local").setAppName("SparkTFIDF")
    sc = SparkContext(conf = conf)

Next, we're going to use our SparkContext to create an RDD from subset-small.tsv:

    rawData = sc.textFile("e:/sundog-consult/Udemy/DataScience/subset-small.tsv")

This is a file containing tab-separated values, and it represents a small sample of Wikipedia articles. Again, you'll need to change your path as shown in the preceding code as necessary for wherever you installed the course materials for this book.

That gives me back an RDD where every document is in each line of the RDD. The tsv file contains one entire Wikipedia document on every line, and I know that each one of those documents is split up into tabular fields that have various bits of metadata about each article. The next thing I'm going to do is split those up:

    fields = rawData.map(lambda x: x.split("\t"))

I'm going to split up each document based on its tab delimiters into a Python list, and create a new fields RDD that, instead of raw input data, now contains Python lists of each field in that input data.
Finally, I'm going to map that data, take in each list of fields, extract field number three (x[3]), which I happen to know is the body of the article itself, the actual article text, and I am in turn going to split that based on spaces:

    documents = fields.map(lambda x: x[3].split(" "))

What this does is extract the body of the text from each Wikipedia article, and split it up into a list of words. My new documents RDD has one entry for every document, and every entry in that RDD contains a list of words that appear in that document.

Now, so we actually know what to call these documents later on when we're evaluating the results, I'm also going to create a new RDD that stores the document names:

    documentNames = fields.map(lambda x: x[1])

All that does is take that same fields RDD and use this map function to extract the document name, which I happen to know is in field number one. So, I now have two RDDs: documents, which contains lists of words that appear in each document, and documentNames, which contains the name of each document. I also know that these are in the same order, so I can actually combine these together later on to look up the name for a given document.

Creating and transforming a HashingTF object

Now, the magic happens. The first thing we're going to do is create a HashingTF object, and we're going to pass in a parameter of 100,000. This means that I'm going to hash every word into one of 100,000 numerical values:

    hashingTF = HashingTF(100000)

Instead of representing words internally as strings, which is very inefficient, it's going to try to, as evenly as possible, distribute each word to a unique hash value. I'm giving it up to 100,000 hash values to choose from. Basically, this is mapping words to numbers at the end of the day.

Next, I'm going to call transform on hashingTF with my actual RDD of documents:

    tf = hashingTF.transform(documents)

That's going to take my list of words in every document and convert it to a list of hash values, a list of numbers that represent each word instead.
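Just to see what that conversion looks like, here is a tiny local illustration of my own (not part of the script) that runs HashingTF on a single made-up document, passed in as a plain list of terms rather than an RDD:

    from pyspark.mllib.feature import HashingTF

    # Only 100 buckets here, just to keep the printed output readable.
    htf = HashingTF(100)
    doc = "to be or not to be".split()

    print(htf.indexOf("to"))   # the bucket index this word hashes into
    print(htf.transform(doc))  # a sparse vector of bucket -> count (2.0 for "to" and "be")

The exact bucket numbers you see will vary from run to run, because Python's string hashing is randomized between processes, but the structure of the output is the point: each document becomes a vector of hash-bucket counts.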
This is actually represented as a sparse vector at this point to save even more space. So, not only have we converted all of our words to numbers, but we've also stripped out any missing data. In the event that a word does not appear in a document, you're not storing the fact that the word does not appear explicitly, which saves even more space.

Computing the TF-IDF score

To actually compute the TF-IDF score for each word in each document, we first cache this tf RDD:

    tf.cache()

We do that because we're going to use it more than once. Next, we use IDF(minDocFreq=2), meaning that we're going to ignore any word that doesn't appear at least twice:

    idf = IDF(minDocFreq=2).fit(tf)

We call fit on tf, and then in the next line we call transform on tf:

    tfidf = idf.transform(tf)

What we end up with here is an RDD of the TF-IDF score for each word in each document.

Using the Wikipedia search engine algorithm

Let's try and put the algorithm to use. Let's try to look up the best article for the word Gettysburg. If you're not familiar with US history, that's where Abraham Lincoln gave a famous speech. So, we can transform the word Gettysburg into its hash value using the following code:

    gettysburgTF = hashingTF.transform(["Gettysburg"])
    gettysburgHashValue = int(gettysburgTF.indices[0])

We will then extract the TF-IDF score for that hash value into a new RDD for each document:

    gettysburgRelevance = tfidf.map(lambda x: x[gettysburgHashValue])
What this does is extract the TF-IDF score for Gettysburg, from the hash value it maps to, for every document, and store that in this gettysburgRelevance RDD. We then combine that with the documentNames so we can see the results:

    zippedResults = gettysburgRelevance.zip(documentNames)

Finally, we can print out the answer:

    print("Best document for Gettysburg is:")
    print(zippedResults.max())

Running the algorithm

So, let's go run that and see what happens. As usual, to run the Spark script, we're not going to just hit the play icon. We have to go to Tools > Canopy Command Prompt. In the Command Prompt that opens up, we will type in spark-submit TFIDF.py, and off it goes. We are asking it to chunk through quite a bit of data; even though it's a small sample of Wikipedia, it's still a fair chunk of information, so it might take a while. Let's see what comes back for the best document match for Gettysburg; which document has the highest TF-IDF score?

It's Abraham Lincoln! Isn't that awesome? We just made an actual search engine that actually works, in just a few lines of code. And there you have it: an actual working search algorithm for a little piece of Wikipedia using Spark in MLlib and TF-IDF. And the beauty is that we could actually scale that up to all of Wikipedia if we wanted to, if we had a cluster large enough to run it. Hopefully we've got your interest up there in Spark, and you can see how it can be applied to solve what can be pretty complicated machine learning problems in a distributed manner. So, it's a very important tool, and I want to make sure you don't get through this book on data science without at least knowing the concepts of how Spark can be applied to big data problems. So, when you need to move beyond what one computer can do, remember, Spark is at your disposal.
Using the Spark 2.0 DataFrame API for MLlib

This was originally produced for Spark 1, so let's talk about what's new in Spark 2.0 and what new capabilities exist in MLlib now. So, the main thing with Spark 2.0 is that they moved more and more toward DataFrames and Datasets. Datasets and DataFrames are kind of used interchangeably sometimes. Technically a DataFrame is a Dataset of row objects. They're kind of like RDDs, but the difference is that, whereas an RDD just contains unstructured data, a Dataset has a defined schema to it. A Dataset knows ahead of time exactly what columns of information exist in each row, and what types those are. Because it knows about the actual structure of that Dataset ahead of time, it can optimize things more efficiently. It also lets us think of the contents of this Dataset as a little, mini database. Well, actually, a very big database if it's on a cluster. That means we can do things like issue SQL queries on it. This creates a higher-level API with which we can query and analyze massive Datasets on a Spark cluster. It's pretty cool stuff. It's faster, it has more opportunities for optimization, and it has a higher-level API that's often easier to work with.

How Spark 2.0 MLlib works

Going forward in Spark 2.0, MLlib is pushing DataFrames as its primary API. This is the way of the future, so let's take a look at how it works. I've gone ahead and opened up the SparkLinearRegression.py file in Canopy, as shown in the following figure, so let's walk through it a little bit:
As you see, for one thing, we're using ml instead of mllib, and that's because the new DataFrame-based API is in there.

Implementing linear regression

In this example, what we're going to do is implement linear regression, and linear regression is just a way of fitting a line to a set of data. What we're going to do in this exercise is take a bunch of fabricated data that we have in two dimensions, and try to fit a line to it with a linear model. We're going to separate our data into two sets, one for building the model and one for evaluating the model, and we'll compare how well this linear model does at actually predicting real values. First of all, in Spark 2.0, if you're going to be doing stuff with the SparkSQL interface and using Datasets, you've got to be using a SparkSession object instead of a SparkContext. To set one up, you do the following:

    spark = SparkSession.builder.config("spark.sql.warehouse.dir", "file:///C:/temp").appName("LinearRegression").getOrCreate()

Note that the middle config bit is only necessary on Windows and in Spark 2.0. It kind of works around a little bug that they have, to be honest. So, if you're on Windows, make sure you have a C:/temp folder. If you want to run this, go create that now if you need to. If you're not on Windows, you can delete that whole middle section to leave:

    spark = SparkSession.builder.appName("LinearRegression").getOrCreate()

Okay, so you can say spark, give it an appName and getOrCreate(). This is interesting, because once you've created a Spark session, if it terminates unexpectedly, you can actually recover from that the next time that you run it. So, if we have a checkpoint directory, it can actually restart where it left off using getOrCreate.

Now, we're going to use this regression.txt file that I have included with the course materials:

    inputLines = spark.sparkContext.textFile("regression.txt")
That is just a text file that has comma-delimited values of two columns, and they're just two columns of, more or less randomly, linearly correlated data. It can represent whatever you want. Let's imagine that it represents heights and weights, for example. So, the first column might represent heights, the second column might represent weights.

In the lingo of machine learning, we talk about labels and features, where labels are usually the thing that you're trying to predict, and features are a set of known attributes of the data that you use to make a prediction from.

In this example, maybe heights are the labels and the features are the weights. Maybe we're trying to predict heights based on your weight. It can be anything, it doesn't matter. This is all normalized down to data between -1 and 1. There's no real meaning to the scale of the data anywhere; you can pretend it means anything you want, really.

To use this with MLlib, we need to transform our data into the format it expects:

    data = inputLines.map(lambda x: x.split(",")).map(lambda x: (float(x[0]), Vectors.dense(float(x[1]))))

The first thing we're going to do is split that data up with this map function that just splits each line into two distinct values in a list, and then we're going to map that to the format that MLlib expects. That's going to be a floating point label, and then a dense vector of the feature data. In this case, we only have one bit of feature data, the weight, so we have a vector that just has one thing in it, but even if it's just one thing, the MLlib linear regression model requires a dense vector there. This is like a LabeledPoint in the older API, but we have to do it the hard way here.

Next, we need to actually assign names to those columns. Here's the syntax for doing that:

    colNames = ["label", "features"]
    df = data.toDF(colNames)

We're going to tell MLlib that these two columns in the resulting RDD actually correspond to the label and the features, and then I can convert that RDD to a DataFrame object. At this point, I have an actual DataFrame or, if you will, a Dataset that contains two columns, label and features, where the label is a floating point height, and the features column is a dense vector of floating point weights. That is the format required by MLlib, and MLlib can be pretty picky about this stuff, so it's important that you pay attention to these formats.
Now, like I said, we're going to split our data in half:

    trainTest = df.randomSplit([0.5, 0.5])
    trainingDF = trainTest[0]
    testDF = trainTest[1]

We're going to do a 50/50 split between training data and test data. This returns two DataFrames, one that I'm going to use to actually create my model, and one that I'm going to use to evaluate my model.

I will next create my actual linear regression model with a few standard parameters here that I've set:

    lir = LinearRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)

We're going to call lir = LinearRegression, and then I will fit that model to the set of data that I held aside for training, the training DataFrame:

    model = lir.fit(trainingDF)

That gives me back a model that I can use to make predictions from. Let's go ahead and do that:

    fullPredictions = model.transform(testDF).cache()

I will call model.transform(testDF), and what that's going to do is predict the heights based on the weights in my testing Dataset. I actually have the known labels, the actual, correct heights, and this is going to add a new column to that DataFrame called prediction, which has the predicted values based on that linear model. I'm going to cache those results, and now I can just extract them and compare them together. So, let's pull out the prediction column, just using select like you would in SQL, and then I'm going to actually pull out the underlying RDD from that DataFrame, and use that to map it to just a plain old RDD full of floating point heights in this case:

    predictions = fullPredictions.select("prediction").rdd.map(lambda x: x[0])

These are the predicted heights. Next, we're going to get the actual heights from the label column:

    labels = fullPredictions.select("label").rdd.map(lambda x: x[0])
Finally, we can zip them back together and just print them out side by side and see how well it does:

    predictionAndLabel = predictions.zip(labels).collect()

    for prediction in predictionAndLabel:
        print(prediction)

    spark.stop()

This is kind of a convoluted way of doing it; I did this to be more consistent with the previous example, but a simpler approach would be to just select prediction and label together into a single RDD that maps out those two columns together, and then I wouldn't have to zip them up. Either way it works. You'll also note that right at the end there we need to stop the Spark session.

So let's see if it works. Let's go up to Tools, Canopy Command Prompt, and we'll type in spark-submit SparkLinearRegression.py and see what happens.

There's a little bit more upfront time to actually run these APIs with Datasets, but once they get going, they're very fast. Alright, there you have it!
Here we have our actual and predicted values side by side, and you can see that they're not too bad. They tend to be more or less in the same ballpark. There you have it: a linear regression model in action using Spark 2.0, using the new DataFrame-based API for MLlib. More and more, you'll be using these APIs going forward with MLlib in Spark, so make sure you opt for these when you can. Alright, that's MLlib in Spark, a way of actually distributing massive computing tasks across an entire cluster for doing machine learning on big datasets. So, a good skill to have. Let's move on.

Summary

In this chapter, we started with installing Spark, then moved to introducing Spark in depth while understanding how Spark works in combination with RDDs. We also walked through various ways of creating RDDs while exploring different operations. We then introduced MLlib, and stepped through some detailed examples of decision trees and K-means clustering in Spark. We then pulled off our masterstroke of creating a search engine in just a few lines of code using TF-IDF. Finally, we looked at the new features of Spark 2.0. In the next chapter, we'll take a look at A/B testing and experimental design.
Testing and Experimental Design

In this chapter, we'll see the concept of A/B testing. We'll go through the t-test, the t-statistic, and the p-value, all useful tools for determining whether a result is actually real or a result of random variation. We'll dive into some real examples and get our hands dirty with some Python code and compute the t-statistics and p-values. Following that, we'll look into how long you should run an experiment for before reaching a conclusion. Finally, we'll discuss the potential issues that can harm the results of your experiment and may cause you to reach the wrong conclusion.

We'll cover the following topics:

A/B testing concepts
T-test and p-value
Measuring t-statistics and p-values using Python
Determining how long to run an experiment
A/B test gotchas

A/B testing concepts

If you work as a data scientist at a web company, you'll probably be asked to spend some time analyzing the results of A/B tests. These are basically controlled experiments on a website to measure the impact of a given change. So, let's talk about what A/B tests are and how they work.
A/B tests

If you're going to be a data scientist at a big tech web company, this is something you're definitely going to be involved in, because people need to run experiments to try different things on a website and measure the results of it, and that's actually not as straightforward as most people think it is.

What is an A/B test? Well, it's a controlled experiment that you usually run on a website. It can be applied to other contexts as well, but usually we're talking about a website, and we're going to test the performance of some change to that website, versus the way it was before.

You basically have a control set of people that see the old website, and a test group of people that see the change to the website, and the idea is to measure the difference in behavior between these two groups and use that data to actually decide whether this change was beneficial or not.

For example, I own a business that has a website. We license software to people, and right now I have a nice, friendly, orange button that people click on when they want to buy a license, as shown on the left in the following figure. But what would happen if I changed the color of that button to blue, as shown on the right?

So in this example, if I want to find out whether blue would be better, how do I know? I mean, intuitively, maybe that might capture people's attention more, or, intuitively, maybe people are more used to seeing orange buy buttons and are more likely to click on that. I could spin that either way, right? So, my own internal biases or preconceptions don't really matter. What matters is how people react to this change on my actual website, and that's what an A/B test does.
A/B testing will split people up into people who see the orange button, and people who see the blue button, and I can then measure the behavior between these two groups and how they might differ, and make my decision on what color my buttons should be based on that data.

You can test all sorts of things with an A/B test. These include:

Design changes: These can be changes in the color of a button, the placement of a button, or the layout of the page.
UI flow: So, maybe you're actually changing the way that your purchase pipeline works and how people check out on your website, and you can actually measure the effect of that.
Algorithmic changes: Let's consider the example of doing movie recommendations that we discussed in the Recommender Systems chapter. Maybe I want to test one algorithm versus another. Instead of relying on error metrics and my ability to do a train/test, what I really care about is driving purchases or rentals or whatever it is on this website. The A/B test can let me directly measure the impact of this algorithm on the end result that I actually care about, and not just my ability to predict movies that other people have already seen.
And anything else you can dream up too, really. Any change that impacts how users interact with your site is worth testing. Maybe it's even making the website faster, or it could be anything.
Pricing changes: This one gets a little bit controversial. You know, in theory, you can experiment with different price points using an A/B test and see if it actually increases volume to offset the price difference or whatever, but use that one with caution. If customers catch wind that other people are getting better prices than they are for no good reason, they're not going to be very happy with you. Keep in mind, doing pricing experiments can have a negative backlash and you don't want to be in that situation.
Measuring conversion for A/B testing

The first thing you need to figure out when you're designing an experiment on a website is what are you trying to optimize for? What is it that you really want to drive with this change? And this isn't always a very obvious thing. Maybe it's the amount that people spend, the amount of revenue. Well, we talked about the problems with variance in using amount spent, but if you have enough data, you can still reach convergence on that metric a lot of times.

However, maybe that's not what you actually want to optimize for. Maybe you're actually selling some items at a loss intentionally just to capture market share. There's more complexity that goes into your pricing strategy than just top-line revenue.

Maybe what you really want to measure is profit, and that can be a very tricky thing to measure, because a lot of things cut into how much money a given product might make and those things might not always be obvious. And again, if you have loss leaders, this experiment will discount the effect that those are supposed to have. Maybe you just care about driving ad clicks on your website, or order quantities to reduce variance; maybe people are okay with that.

The bottom line is that you have to talk to the business owners of the area that's being tested and figure out what it is they're trying to optimize for. What are they being measured on? What is their success measured on? What are their key performance indicators, or whatever the MBAs want to call it? And make sure that we're measuring the thing that matters to them.

You can measure more than one thing at once too; you don't have to pick one. You can actually report on the effect of many different things:

Revenue
Profit
Clicks
Ad views

If these things are all moving in the right direction together, that's a very strong sign that this change had a positive impact in more ways than one. So, why limit yourself to one metric? Just make sure you know which one matters the most in what's going to be your criteria for success of this experiment ahead of time.
How to attribute conversions

Another thing to watch out for is attributing conversions to a change downstream. If the action you're trying to drive doesn't happen immediately upon the user experiencing the thing that you're testing, things get a little bit dodgy. Let's say I change the color of a button on page A; the user then goes to page B and does something else, and ultimately buys something from page C.

Well, who gets credit for that purchase? Is it page A, or page B, or something in-between? Do I discount the credit for that conversion depending on how many clicks that person took to get to the conversion action? Do I just discard any conversion action that doesn't happen immediately after seeing that change? These are complicated things and it's very easy to produce misleading results by fudging how you account for these different distances between the conversion and the change that you're measuring.

Variance is your enemy

Another thing that you need to really internalize is that variance is your enemy when you're running an A/B test.

A very common mistake made by people who don't know what they're doing with data science is that they will put up a test on a web page, blue button versus orange button, whatever it is, run it for a week, and take the mean amount spent from each of those groups. They then say, "oh look! The people with the blue button on average spent a dollar more than the people with the orange button; blue is awesome, I love blue, I'm going to put blue all over the website now!"

But, in fact, all they might have been seeing was just random variation in purchases. They didn't have a big enough sample because people don't tend to purchase a lot. You get a lot of views but you probably don't have a lot of purchases on your website in comparison, and there's probably a lot of variance in those purchase amounts because different products cost different amounts.

So, you could very easily end up making the wrong decision that ends up costing your company money in the long run, instead of earning your company money, if you don't understand the effect of variance on these results. We'll talk about some principled ways of measuring and accounting for that later in the chapter.
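To make that concrete, here is a quick simulation of my own (with made-up numbers) of two groups drawn from the exact same purchase distribution — no real effect at all — where small samples still produce mean differences of a dollar or more purely by chance:

    import numpy as np

    np.random.seed(1)

    # Both "groups" come from the same distribution: mean $25, standard deviation $15.
    # With only 50 purchases per group, the observed means bounce around a lot.
    for trial in range(5):
        orange = np.random.normal(25.0, 15.0, 50)
        blue = np.random.normal(25.0, 15.0, 50)
        print("Trial %d: orange mean $%.2f vs. blue mean $%.2f"
              % (trial, orange.mean(), blue.mean()))

Run it a few times with different seeds and you'll routinely see gaps that look like a "win" for one button, even though nothing actually changed.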
You need to make sure that your business owners understand that this is an important effect that you need to quantify and understand before making business decisions following an A/B test or any experiment that you run on the web.

Now, sometimes you need to choose a conversion metric that has less variance. It could be that the numbers on your website just mean that you would have to run an experiment for years in order to get a significant result based on something like revenue or amount spent.

Sometimes, if you're looking at more than one metric, such as order amount or order quantity, one that has less variance associated with it, you might see a signal on order quantity before you see a signal on revenue, for example. At the end of the day, it ends up being a judgment call. If you see a significant lift in order quantities and a not-so-significant lift in revenue, then you have to say, "well, I think there might be something real and beneficial going on here."

However, the only thing that statistics and data size can tell you are probabilities that an effect is real. It's up to you to decide whether or not it's real at the end of the day. So, let's talk about how to do this in more detail. The key takeaway here is: just looking at the differences in means isn't enough. When you're trying to evaluate the results of an experiment, you need to take the variance into account as well.

T-test and p-value

How do you know if a change resulting from an A/B test is actually a real result of what you changed, or if it's just random variation? Well, there are a couple of statistical tools at our disposal called the t-test or t-statistic, and the p-value. Let's learn more about what those are and how they can help you determine whether an experiment is good or not.

The aim is to figure out if a result is real or not. Was this just a result of random variance that's inherent in the data itself, or are we seeing an actual, statistically significant change in behavior between our control group and our test group? T-tests and p-values are a way to compute that.

Remember, statistically significant doesn't really have a specific meaning. At the end of the day it has to be a judgment call. You have to pick a probability value that you're going to accept of a result being real or not. But there's always going to be a chance that it's still a result of random variation, and you have to make sure your stakeholders understand that.
The t-statistic or t-test

Let's start with the t-statistic, also known as a t-test. It is basically a measure of the difference in behavior between these two sets, between your control and treatment group, expressed in units of standard error. It is based on standard error, which accounts for the variance inherent in the data itself, so by normalizing everything by that standard error, we get some measure of the change in behavior between these two groups that takes that variance into account.

The way to interpret a t-statistic is that a high t-value means there's probably a real difference between these two sets, whereas a low t-value means not so much of a difference. You have to decide what threshold you're willing to accept. The sign of the t-statistic will tell you if it's a positive or negative change.

If you're comparing your control to your treatment group and you end up with a negative t-statistic, that implies that this is a bad change. You ultimately want the absolute value of that t-statistic to be large. How large a value of a t-statistic is considered large? Well, that's debatable. We'll look at some examples shortly.

Now, this does assume that you have a normal distribution of behavior, and when we're talking about things like the amount people spend on a website, that's usually a decent assumption. There does tend to be a normal distribution of how much people spend.

However, there are more refined versions of t-statistics that you might want to look at for other specific situations. For example, there's something called Fisher's exact test for when you're talking about clickthrough rates, the E-test when you're talking about transactions per user, like how many web pages they see, and the chi-squared test, which is often relevant when you're looking at order quantities. Sometimes you'll want to look at all of these statistics for a given experiment, and choose the one that actually fits what you're trying to do the best.

The p-value

Now, it's a lot easier to talk about p-values than t-statistics because you don't have to think about, how many standard deviations are we talking about? What does the actual value mean? The p-value is a little bit easier for people to understand, which makes it a better tool for you to communicate the results of an experiment to the stakeholders in your business.
The p-value is basically the probability that this experiment satisfies the null hypothesis, that is, the probability that there is no real difference between the control and the treatment's behavior. A low p-value means there's a low probability of it having no effect — kind of a double negative going on there, so it's a little bit counter-intuitive — but at the end of the day you just have to understand that a low p-value means that there's a high probability that your change had a real effect.

What you want to see is a high t-statistic and a low p-value, and that will imply a significant result. Now, before you start your experiment, you need to decide what your threshold for success is going to be, and that means deciding the threshold with the people in charge of the business.

So, what p-value are you willing to accept as a measure of success? Is it 1 percent? Is it 5 percent? And again, this is basically the likelihood that there is no real effect, that it's just a result of random variance. It is just a judgment call at the end of the day. A lot of times people use 5 percent, sometimes a looser threshold if they're feeling a little bit riskier, but there's always going to be that chance that your result was just spurious, random data that came in.

However, you can choose the probability that you're willing to accept as being likely enough that this is a real effect, and that it's worth rolling out into production.

When your experiment is over, and we'll talk about when you declare an experiment to be over later, you want to measure your p-value. If it's less than the threshold you decided upon, then you can reject the null hypothesis and you can say, "well, there's a high likelihood that this change produced a real positive or negative result."

If it is a positive result, then you can roll that change out to the entire site and it is no longer an experiment; it is part of your website that will hopefully make you more and more money as time goes on. And if it's a negative result, you want to get rid of it before it costs you any more money.

Remember, there is a real cost to running an A/B test when your experiment has a negative result. So, you don't want to run it for too long because there's a chance you could be losing money.

This is why you want to monitor the results of an experiment on a daily basis, so if there are early indications that the change is making a horrible impact on the website, maybe there's a bug in it or something, you can pull the plug on it prematurely if necessary, and limit the damage.
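Before we lean on a library to do this for us, here is a small sketch of my own (with made-up numbers) of what the t-statistic formula boils down to — the difference in means divided by the pooled standard error — and a check that it matches what scipy reports:

    import numpy as np
    from scipy import stats

    np.random.seed(0)

    # Hypothetical groups: treatment spends slightly less than control on average.
    A = np.random.normal(20.0, 4.0, 5000)   # treatment
    B = np.random.normal(20.2, 4.0, 5000)   # control

    # Pooled standard error (scipy's ttest_ind assumes equal variances by default).
    nA, nB = len(A), len(B)
    pooled_var = ((nA - 1) * A.var(ddof=1) + (nB - 1) * B.var(ddof=1)) / (nA + nB - 2)
    std_err = np.sqrt(pooled_var * (1.0 / nA + 1.0 / nB))

    t_by_hand = (A.mean() - B.mean()) / std_err
    print(t_by_hand)
    print(stats.ttest_ind(A, B))   # reports the same t-statistic, plus its p-value

The t-statistic really is just "how many standard errors apart are these two means," which is why a large absolute value is what signals a real difference.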
Let's go to an actual example and see how you might measure t-statistics and p-values using Python.

Measuring t-statistics and p-values using Python

Let's fabricate some experimental data and use the t-statistic and p-value to determine whether a given experimental result is a real effect or not. We're going to actually fabricate some fake experimental data and run t-statistics and p-values on it, and see how it works and how to compute it in Python.

Running an A/B test on some experimental data

Let's imagine that we're running an A/B test on a website and we have randomly assigned our users into two groups, group A and group B. The A group is going to be our test subjects, our treatment group, and group B will be our control, basically the way the website used to be. We'll set this up with the following code:

    import numpy as np
    from scipy import stats

    A = np.random.normal(25.0, 5.0, 10000)
    B = np.random.normal(26.0, 5.0, 10000)

    stats.ttest_ind(A, B)

In this code example, our treatment group (A) is going to have a randomly distributed purchase behavior where they spend, on average, $25 per transaction, with a standard deviation of five and ten thousand samples, whereas the old website (B) used to have a mean of $26 per transaction with the same standard deviation and sample size. We're basically looking at an experiment that had a negative result. All you have to do to figure out the t-statistic and the p-value is use this handy stats.ttest_ind method from scipy. What you do is pass it your treatment group and your control group, and out comes your t-statistic, as shown in the output here:
In this case, we have a t-statistic of about -14. The negative sign indicates that it is a negative change; this was a bad thing. And the p-value is very, very small. So, that implies that there is an extremely low probability that this change is just a result of random chance.

Remember that in order to declare significance, we need to see a high t-value (t-statistic), and a low p-value.

That's exactly what we're seeing here. We're seeing a very high absolute value of the t-statistic, negative indicating that it's a bad thing, and an extremely low p-value, telling us that there's virtually no chance that this is just a result of random variation.

If you saw these results in the real world, you would pull the plug on this experiment as soon as you could.

When there's no real difference between the two groups

Just as a sanity check, let's go ahead and change things so that there's no real difference between these two groups. So, I'm going to change group B, the control group in this case, to be the same as the treatment, where the mean is 25, the standard deviation is unchanged, and the sample size is unchanged, as shown here:

    B = np.random.normal(25.0, 5.0, 10000)

    stats.ttest_ind(A, B)

If we go ahead and run this, you can see our t-statistic ends up being below one now. Remember this is in terms of standard deviations, so this implies that there's probably not a real change there, and we get a much higher p-value as well.

Now, these are still relatively high numbers. You can see that random variation can be kind of an insidious thing. This is why you need to decide ahead of time what would be an acceptable limit for the p-value.
You know, you could look at this after the fact and say, "odds like that, you know, that's not so bad, we can live with that." But, no! I mean, in reality and practice you want to see p-values that are below 5 percent, ideally below 1 percent, and a p-value up in the tens of percent means it's actually not that strong of a result. So, don't justify it after the fact; go into your experiment knowing what your threshold is.

Does the sample size make a difference?

Let's make some changes in the sample size. We're creating these sets under the same conditions. Let's see if we actually get a difference in behavior by increasing the sample size.

Sample size increased to six digits

So, we're going to go from 10,000 to 100,000 samples, as shown here:

    A = np.random.normal(25.0, 5.0, 100000)
    B = np.random.normal(25.0, 5.0, 100000)

    stats.ttest_ind(A, B)

You can see in the following output that actually the p-value got a little bit lower and the t-statistic a little bit larger, but it's still not enough to declare a real difference. It's actually going in the direction you wouldn't expect it to go? Kind of interesting! But these are still high values. Again, it's just the effect of random variance, and it can have more of an effect than you realize. Especially on a website, when you're talking about order amounts.
Sample size increased to seven digits

Let's actually increase the sample size to 1,000,000, as shown here:

    A = np.random.normal(25.0, 5.0, 1000000)
    B = np.random.normal(25.0, 5.0, 1000000)

    stats.ttest_ind(A, B)

Here is the result:

What does that do? Well, now we're back under 1 for the t-statistic, and our p-value is high again. We're seeing these kinds of fluctuations a little bit in either direction as we increase the sample size. This means that going from 10,000 samples to 100,000 to 1,000,000 isn't going to change your result at the end of the day. And running experiments like this is a good way to get a gut feel as to how long you might need to run an experiment for. How many samples does it actually take to get a significant result? And if you know something about the distribution of your data ahead of time, you can actually run these sorts of models.

A/A testing

If we were to compare the set to itself, this is called an A/A test, as shown in the following code example:

    stats.ttest_ind(A, A)

We can see in the following output a t-statistic of 0 and a p-value of 1.0, because there is in fact no difference whatsoever between these sets.

Now, if you were to run that using real website data where you were looking at the same exact people and you saw a different value, that indicates there's a problem in the system itself that runs your testing. At the end of the day, like I said, it's all a judgment call.
Go ahead and play with this; see what effect different standard deviations have on the initial datasets, or differences in means, and different sample sizes. I just want you to dive in, play around with these different datasets and actually run them, and see what the effect is on the t-statistic and the p-value. And hopefully that will give you a more intuitive feel for how to interpret these results.

Again, the important thing to understand is that you're looking for a large t-statistic and a small p-value. The p-value is probably going to be what you want to communicate to the business. And remember, lower is better for the p-value; you want to see that in the single digits, ideally below 1 percent, before you declare victory.

We'll talk about A/B tests some more in the remainder of the chapter. SciPy makes it really easy to compute t-statistics and p-values for a given set of data, so you can very easily compare the behavior between your control and treatment groups, and measure what the probability is of that effect being real or just a result of random variation. Make sure you are focusing on those metrics and you are measuring the conversion metric that you care about when you're doing those comparisons.

Determining how long to run an experiment for

How long do you run an experiment for? How long does it take to actually get a result? At what point do you give up? Let's talk about that in more detail. If someone in your company has developed a new experiment, a new change that they want to test, then they have a vested interest in seeing it succeed. They put a lot of work and time into it, and they want it to be successful. Maybe you've gone weeks with the testing and you still haven't reached a significant outcome on this experiment, positive or negative. You know that they're going to want to keep running it pretty much indefinitely in the hope that it will eventually show a positive result. It's up to you to draw the line on how long you're willing to run this experiment for.

How do I know when I'm done running an A/B test? I mean, it's not always straightforward to predict how long it will take before you can achieve a significant result, but obviously if you have achieved a significant result, if your p-value has gone below 1 percent or 5 percent or whatever threshold you've chosen, then you're done.
At that point you can pull the plug on the experiment and either roll out the change more widely or remove it because it was actually having a negative effect. You can always tell people to go back and try again, to use what they learned from the experiment to maybe try it again with some changes and soften the blow a little bit.

The other thing that might happen is that it's just not converging at all. If you're not seeing any trend over time in the p-value, it's probably a good sign that you're not going to see this converge anytime soon. It's just not going to have enough of an impact on behavior to even be measurable, no matter how long you run it.

In those situations, what you want to do every day is plot on a graph, for a given experiment, the p-value, the t-statistic, whatever you're using to measure the success of this experiment, and if you're seeing something that looks promising, you will see that p-value start to come down over time. So, the more data it gets, the more significant your results should be getting.

Now, if you instead see a flat line or a line that's all over the place, that kind of tells you that that p-value's not going anywhere, and it doesn't matter how long you run this experiment, it's just not going to happen. You need to agree up front that in the case where you're not seeing any trends in p-values, what's the longest you're willing to run this experiment for? Is it two weeks? Is it a month?

Another thing to keep in mind is that having more than one experiment running on the site at once can conflate your results.

Time spent on experiments is a valuable commodity; you can't make more time in the world. You can only really run as many experiments as you have time to run in a given year. So, if you spend too much time running one experiment that really has no chance of converging on a result, that's an opportunity you've missed to run another, potentially more valuable, experiment during the time that you are wasting on this one.

It's important to draw the line on experiment lengths, because time is a very precious commodity when you're running A/B tests on a website, at least as long as you have more ideas than you have time, which hopefully is the case. Make sure you go in with agreed upper bounds on how long you're going to spend testing a given experiment, and if you're not seeing trends in the p-value that look encouraging, it's time to pull the plug at that point.
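Here is a small simulation of my own (with hypothetical numbers) of the daily p-value tracking described above, for a change that really does produce a modest lift — you can watch the p-value drift downward as data accumulates, which is exactly the trend you're hoping to see:

    import numpy as np
    from scipy import stats

    np.random.seed(42)

    # Hypothetical: the treatment truly lifts average spend from $25.00 to $25.25
    # (standard deviation $5), with 1,000 visitors landing in each group per day.
    daily_visitors = 1000
    control = np.array([])
    treatment = np.array([])

    for day in range(1, 15):
        control = np.append(control, np.random.normal(25.0, 5.0, daily_visitors))
        treatment = np.append(treatment, np.random.normal(25.25, 5.0, daily_visitors))
        t, p = stats.ttest_ind(treatment, control)
        print("Day %2d: t-statistic = %5.2f, p-value = %.4f" % (day, t, p))

If the effect were not real, you'd see the p-value wander around instead of trending down — and that flat, wandering plot is your cue to stop the experiment.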
A/B test gotchas

An important point I want to make is that the results of an A/B test, even when you measure them in a principled manner using p-values, are not gospel. There are many effects that can actually skew the results of your experiment and cause you to make the wrong decision. Let's go through a few of these and let you know how to watch out for them. Let's talk about some gotchas with A/B tests.

It sounds really official to say there's a p-value of 1 percent, meaning there's only a 1 percent chance that a given experiment was due to spurious results or random variation, but it's still not the be-all and end-all of measuring success for an experiment. There are many things that can skew or conflate your results that you need to be aware of. So, even if you see a p-value that looks very encouraging, your experiment could still be lying to you, and you need to understand the things that can make that happen so you don't make the wrong decisions.

Remember, correlation does not imply causation.

Even with a well-designed experiment, all you can say is there is some probability that this effect was caused by this change you made.

At the end of the day, there's always going to be a chance that there was no real effect, or you might even be measuring the wrong effect.

It could still be random chance, there could be something else going on; it's your duty to make sure the business owners understand that these experimental results need to be interpreted, and that they need to be one piece of their decision.

They can't be the be-all and end-all that they base their decision on, because there is room for error in the results and there are things that can skew those results. And if there's some larger business objective to this change, beyond just driving short-term revenue, that needs to be taken into account as well.
Novelty effects

One problem is novelty effects. One major Achilles heel of an A/B test is the short time frame over which they tend to be run, and this causes a couple of problems. First of all, there might be longer-term effects to the change, and you're not going to measure those, but also, there is a certain effect to just something being different on the website.

For instance, maybe your customers are used to seeing the orange buttons on the website all the time, and if a blue button comes up, it catches their attention just because it's different. However, as new customers come in who have never seen your website before, they don't notice that as being different, and over time even your old customers get used to the new blue button. It could very well be that if you were to make this same test a year later, there would be no difference. Or maybe they'd be the other way around.

I could very easily see a situation where you test orange button versus blue button, and in the first two weeks the blue button wins. People buy more because they are more attracted to it, because it's different. But a year goes by, and I could probably run another web lab that puts that blue button against an orange button and the orange button would win, again, simply because the orange button is different, and it's new and catches people's attention just for that reason alone.

For that reason, if you do have a change that is somewhat controversial, it's a good idea to rerun that experiment later on and see if you can actually replicate its results. That's really the only way I know of to account for novelty effects: actually measure it again when it's no longer novel, when it's no longer just a change that might capture people's attention simply because it's different.

And this — I really can't overstate the importance of understanding this. It can really skew a lot of results; it biases you toward attributing positive changes to things that don't really deserve it. Being different in and of itself is not a virtue; at least not in this context.

Seasonal effects

If you're running an experiment over Christmas, people don't tend to behave the same during Christmas as they do the rest of the year. They definitely spend their money differently during that season, they're spending more time with their families at home, and they might be a little bit, kind of, checked out of work, so people have a different frame of mind.
It might even be involved with the weather; during the summer people behave differently because it's hot out, they're feeling kind of lazy, they're on vacation more often. Maybe if you happen to do your experiment during the time of a terrible storm in a highly populated area, that could skew your results as well.

Again, just be cognizant of potential seasonal effects. Holidays are a big one to be aware of, and always take your results with a grain of salt if the experiment was run during a period of time that's known to have seasonality.

You can determine this quantitatively by actually looking at the metric you're trying to measure as a success metric, whatever you're calling your conversion metric, and looking at its behavior over the same time period last year. Are there seasonal fluctuations that you see every year? And if so, you want to try to avoid running your experiment during one of those peaks or valleys.

Selection bias

Another potential issue that can skew your results is selection bias. It's very important that customers are randomly assigned to either your control or your treatment groups, your A or B group.

However, there are subtle ways in which that random assignment might not be random after all. For example, let's say that you're hashing your customer IDs to place them into one bucket or the other. Maybe there's some subtle bias in how that hash function affects people with lower customer IDs versus higher customer IDs. This might have the effect of putting all of your longtime, more loyal customers into the control group, and your newer customers who don't know you that well into your treatment group.

What you end up measuring then is just a difference in behavior between old customers and new customers as a result. It's very important to audit your systems to make sure there is no selection bias in the actual assignment of people to the control or treatment group.

You also need to make sure that assignment is sticky. If you're measuring the effect of a change over an entire session, and they saw the change on page A but did the conversion over on a later page, you have to make sure they're not switching groups in between those clicks. So, you need to make sure that within a given session, people remain in the same group, and how to define a session can become kind of nebulous as well.
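As an illustrative sketch of the kind of sticky, well-distributed assignment described above (this is my own example, not any particular framework's API), you might hash the customer ID together with an experiment name, so that the same customer always lands in the same group and sequential IDs don't bunch up in one bucket:

    import hashlib

    def assign_group(customer_id, experiment_name="blue_button_test", treatment_fraction=0.5):
        # Deterministic ("sticky") assignment: the same customer always gets the
        # same group for this experiment, and different experiments get
        # independent splits because the experiment name is part of the hash key.
        key = ("%s:%s" % (experiment_name, customer_id)).encode("utf-8")
        # A cryptographic hash spreads low and high customer IDs evenly,
        # avoiding the subtle bias a weak hash of sequential IDs could introduce.
        bucket = int(hashlib.md5(key).hexdigest(), 16) % 10000
        return "treatment" if bucket < treatment_fraction * 10000 else "control"

    print(assign_group(42))
    print(assign_group(42))   # same customer ID -> same answer every time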
Now, these are all issues that using an established, off-the-shelf framework like Google Experiments or Optimizely or one of those guys can help with, so that you're not reinventing the wheel on all these problems. If your company does have a homegrown, in-house solution because they're not comfortable with sharing that data with outside companies, then it's worth auditing whether there is selection bias or not.

Auditing selection bias issues

One way of auditing selection bias issues is running what's called an A/A test, like we saw earlier. So, if you actually run an experiment where there is no difference between the treatment and control, you shouldn't see a difference in the end result. There should not be any sort of change in behavior when you're comparing those two things.

An A/A test can be a good way of testing your A/B framework itself and making sure there's no inherent bias or other problems, for example, session leakage and whatnot, that you need to address.

Data pollution

Another big problem is data pollution. We talked at length about the importance of cleaning your input data, and it's especially important in the context of an A/B test. What would happen if you have a robot, a malicious crawler that's crawling through your website all the time, doing an unnatural amount of transactions? What if that robot ends up getting assigned to either the treatment or the control?

That one robot could skew the results of your experiment. It's very important to study the input going into your experiment and look for outliers, then analyze what those outliers are, and whether they should be excluded. Are you actually letting some robots leak into your measurements and are they skewing the results of your experiment? This is a very, very common problem, and something you need to be cognizant of.

There are malicious robots out there, there are people trying to hack into your website, there are benign scrapers just trying to crawl your website for search engines or whatnot. There are all sorts of weird behaviors going on with a website, and you need to filter those out and get at the people who are really your customers and not these automated scripts. That can actually be a very challenging problem. Yet another reason to use off-the-shelf frameworks like Google Analytics, if you can.
16,673 | Attribution errors
We talked briefly about attribution errors earlier. This comes up if you are actually using downstream behavior from a change, and that gets into a gray area. You need to understand how you're actually counting those conversions as a function of distance from the thing that you changed, and agree with your business stakeholders upfront on how you're going to measure those effects. You also need to be aware of whether you're running multiple experiments at once: will they conflict with one another? Is there a page flow where someone might actually encounter two different experiments within the same session? If so, that's going to be a problem, and you have to apply your judgment as to whether these changes could actually interfere with each other and affect the customers' behavior in some meaningful way. Again, you need to take these results with a grain of salt. There are a lot of things that can skew results, and you need to be aware of them. Just be aware of them and make sure your business owners are also aware of the limitations of A/B tests, and all will be okay. Also, if you're not in a position where you can devote a very long amount of time to an experiment, you need to take those results with a grain of salt and ideally retest them later on during a different time period.
Summary
In this chapter, we talked about what A/B tests are and what the challenges are surrounding them. We went into some examples of how you actually measure the effects of variance using the t-statistic and p-value metrics, and we got into coding and measuring t-tests using Python. We then went on to discuss the short-term nature of an A/B test and its limitations, such as novelty effects or seasonal effects. That also wraps up our time in this book. Congratulations for making it this far; that's a serious achievement and you should be proud of yourself. We've covered a lot of material here, and I hope that you at least understand the concepts and have a little bit of hands-on experience with most of the techniques that are used in data science today. It's a very broad field, so we've touched on a little bit of everything. So, you know, congratulations again. |
16,674 | If you want to further your career in this field, what I'd really encourage you to do is talk to your boss. If you work at a company that has access to some interesting datasets of its own, see if you can play around with them. Obviously, you want to talk to your boss first before you use any data owned by your company, because there are probably going to be some privacy restrictions surrounding it. You want to make sure that you're not violating the privacy of your company's customers, and that might mean that you're only able to use that data, or look at it, within a controlled environment at your workplace. So, be careful when you're doing that. If you can get permission to actually stay late at work a few days a week and, you know, mess around with some of these datasets and see what you can do with them, not only does that show that you have the initiative to make yourself a better employee, you might actually discover something that's valuable to your company. That could make you look even better, and actually lead to an internal transfer, perhaps into a field more directly related to where you want to take your career. So, if you want some career advice from me: a common question I get is, "Hey, I'm an engineer, I want to get more into data science, how do I do that?" The best way to do it is just to do it. You know, actually do some side projects, show that you can do it, and demonstrate some meaningful results from them. Show that to your boss and see where it leads you. Good luck. |
16,675 | Index
A/B test challenges: about; attribution errors; data pollution; novelty effects; seasonal effects; selection bias; selection bias issues, auditing
A/B testing and A/B tests: about; concepts; conversion, measuring for; conversions, attributing; no difference between two groups; performing; variance, as enemy
Apache Spark: components; faster; features; high-level overview; installing; installing, in other operating systems; installing, on Windows; Java Development Kit, installing; k-means clustering; Python, versus Scala; relatively young; Resilient Distributed Datasets (RDDs); scalable; user-friendly
axes: adjusting; labeling
bagging
bar charts, generating
Bayes optimal classifier
Bayes' theorem
Bayesian methods: Bayesian model combination; Bayesian parameter averaging
bias
bias-variance trade-off
binomial probability mass function
Boolean expression: if statement; if-else loop
boosting
bootstrap aggregating
box-and-whisker plots, generating
bucket of models
categorical data
chi-squared test
conditional probability: about; assignment; exercises, in Python; my assignment solution
correlation: about; activity; computing, NumPy way; computing, the hard way |
16,676 | Brief contents
Data science in a big data world
The data science process
Machine learning
Handling large data on a single computer
First steps in big data
Join the NoSQL movement
The rise of graph databases
Text mining and text analytics
Data visualization to the end user |
16,677 | Contents
Preface
Acknowledgments
About this book
About the authors
About the cover illustration
Data science in a big data world
  Benefits and uses of data science and big data
  Facets of data: structured data; unstructured data; natural language; machine-generated data; graph-based or network data; audio, image, and video; streaming data
  The data science process: setting the research goal; retrieving data; data preparation; data exploration; data modeling or model building; presentation and automation
  The big data ecosystem and data science: distributed file systems; distributed programming framework; data integration framework |
16,678 |   The big data ecosystem and data science (continued): machine learning frameworks; NoSQL databases; scheduling tools; benchmarking tools; system deployment; service programming; security
  An introductory working example of Hadoop
  Summary
The data science process
  Overview of the data science process: don't be a slave to the process
  Step 1: Defining research goals and creating a project charter: spend time understanding the goals and context of your research; create a project charter
  Step 2: Retrieving data: start with data stored within the company; don't be afraid to shop around; do data quality checks now to prevent problems later
  Step 3: Cleansing, integrating, and transforming data: cleansing data; correct errors as early as possible; combining data from different data sources; transforming data
  Step 4: Exploratory data analysis
  Step 5: Build the models: model and variable selection; model execution; model diagnostics and model comparison
  Step 6: Presenting findings and building applications on top of them
  Summary
Machine learning
  What is machine learning and why should you care about it: applications for machine learning in data science; where machine learning is used in the data science process; Python tools used in machine learning |
16,679 |   The modeling process: engineering features and selecting a model; training your model; validating a model; predicting new observations
  Types of machine learning: supervised learning; unsupervised learning; semi-supervised learning
  Summary
Handling large data on a single computer
  The problems you face when handling large data
  General techniques for handling large volumes of data: choosing the right algorithm; choosing the right data structure; selecting the right tools
  General programming tips for dealing with large data sets: don't reinvent the wheel; get the most out of your hardware; reduce your computing needs
  Case study: predicting malicious URLs: defining the research goal; acquiring the URL data; data exploration; model building
  Case study: building a recommender system inside a database: tools and techniques needed; research question; data preparation; model building; presentation and automation
  Summary
First steps in big data
  Distributing data storage and processing with frameworks: Hadoop, a framework for storing and processing large data sets; Spark, replacing MapReduce for better performance |
16,680 |   Case study: assessing risk when loaning money: the research goal; data retrieval; data preparation; data exploration; report building
  Summary
Join the NoSQL movement
  Introduction to NoSQL: ACID, the core principle of relational databases; CAP theorem, the problem with DBs on many nodes; the BASE principles of NoSQL databases; NoSQL database types
  Case study: what disease is that?: setting the research goal; data retrieval and preparation; data exploration; data preparation for disease profiling (revisited); data exploration for disease profiling (revisited); presentation and automation
  Summary
The rise of graph databases
  Introducing connected data and graph databases: why and when should I use a graph database?
  Introducing Neo4j, a graph database: Cypher, a graph query language
  Connected data example: a recipe recommendation engine: setting the research goal; data retrieval; data preparation; data exploration; data modeling; presentation
  Summary
Text mining and text analytics
  Text mining in the real world
  Text mining techniques: bag of words; stemming and lemmatization; decision tree classifier |
16,681 |   Case study: classifying Reddit posts: meet the Natural Language Toolkit; data science process overview and the research goal; data retrieval; data preparation; data exploration; data preparation adapted (revisited); data analysis; presentation and automation
  Summary
Data visualization to the end user
  Data visualization options
  Crossfilter, the JavaScript MapReduce library: setting up everything; unleashing Crossfilter to filter the medicine data set
  Creating an interactive dashboard with dc.js
  Dashboard development tools
  Summary
Appendixes: setting up Elasticsearch; setting up Neo4j; installing MySQL server; setting up Anaconda with a virtual environment
Index |
16,682 | It's in all of us. Data science is what makes us humans what we are today. No, not the computer-driven data science this book will introduce you to, but the ability of our brains to see connections, draw conclusions from facts, and learn from our past experiences. More so than any other species on the planet, we depend on our brains for survival; we went all-in on these features to earn our place in nature. That strategy has worked out for us so far, and we're unlikely to change it in the near future. But our brains can only take us so far when it comes to raw computing. Our biology can't keep up with the amounts of data we can capture now and with the extent of our curiosity. So we turn to machines to do part of the work for us: to recognize patterns, create connections, and supply us with answers to our numerous questions. The quest for knowledge is in our genes. Relying on computers to do part of the job for us is not, but it is our destiny. |
16,683 | A big thank you to all the people of Manning involved in the process of making this book for guiding us all the way through. Our thanks also go to Ravishankar Rajagopalan for giving the manuscript a full technical proofread, and to Jonathan Thoms and Michael Roberts for their expert comments. There were many other reviewers who provided invaluable feedback throughout the process: Alvin Raj, Arthur Zubarev, Bill Martschenko, Craig Smith, Filip Pravica, Hamideh Iraj, Heather Campbell, Hector Cuesta, Ian Stirk, Jeff Smith, Joel Kotarski, Jonathan Sharley, Jorn Dinkla, Marius Butuc, Matt Cole, Matthew Heck, Meredith Godar, Rob Agle, Scott Chaussee, and Steve Rogers.
First and foremost I want to thank my wife Filipa for being my inspiration and motivation to beat all difficulties and for always standing beside me throughout my career and the writing of this book. She has provided me the necessary time to pursue my goals and ambition, and shouldered all the burdens of taking care of our little daughter in my absence. I dedicate this book to her and really appreciate all the sacrifices she has made in order to build and maintain our little family. I also want to thank my daughter Eva, and my son to be born, who give me a great sense of joy and keep me smiling. They are the best gifts that God ever gave to my life and also the best children a dad could hope for: fun, loving, and always a joy to be with. A special thank you goes to my parents for their support over the years. Without the endless love and encouragement from my family, I would not have been able to finish this book and continue the journey of achieving my goals in life. |
16,684 | I'd really like to thank all my coworkers in my company, especially Mo and Arno, for all the adventures we have been through together. Mo and Arno have provided me excellent support and advice. I appreciate all of their time and effort in making this book complete. They are great people, and without them, this book may not have been written. Finally, a sincere thank you to my friends who support me and understand that I do not have much time, but who still count on the love and support I have given them throughout my career and the development of this book.
Davy Cielen
I would like to give thanks to my family and friends who have supported me all the way through the process of writing this book. It has not always been easy to stay at home writing while I could be out discovering new things. I want to give very special thanks to my parents, my brother Jago, and my girlfriend Delphine for always being there for me, regardless of what crazy plans I come up with and execute. I would also like to thank my godmother, and my godfather whose current struggle with cancer puts everything in life into perspective again. Thanks also go to my friends for buying me beer to distract me from my work, and to Delphine's parents, her brother Karel, and his soon-to-be wife Tess for their hospitality (and for stuffing me with good food). All of them have made a great contribution to a wonderful life so far. Last but not least, I would like to thank my coauthor Mo, my ERC-homie, and my coauthor Davy for their insightful contributions to this book. I share the ups and downs of being an entrepreneur and data scientist with both of them on a daily basis. It has been a great trip so far. Let's hope there are many more days to come.
Arno Meysman
First and foremost, I would like to thank my fiancée Muhuba for her love, understanding, caring, and patience. Finally, I owe much to Davy and Arno for having fun and for making an entrepreneurial dream come true. Their unfailing dedication has been a vital resource for the realization of this book.
Mohamed Ali |
16,685 | "I can only show you the door. You're the one that has to walk through it." (Morpheus, The Matrix)
Welcome to the book! When reading the table of contents, you probably noticed the diversity of the topics we're about to cover. The goal of Introducing Data Science is to provide you with a little bit of everything: enough to get you started. Data science is a very wide field, so wide indeed that a book ten times the size of this one wouldn't be able to cover it all. For each chapter, we picked a different aspect we find interesting. Some hard decisions had to be made to keep this book from collapsing your bookshelf! We hope it serves as an entry point: your doorway into the exciting world of data science.
Roadmap
Chapters 1 and 2 offer the general theoretical background and framework necessary to understand the rest of this book. Chapter 1 is an introduction to data science and big data, ending with a practical example of Hadoop. Chapter 2 is all about the data science process, covering the steps present in almost every data science project. |
16,686 | In chapters 3 through 5 we apply machine learning on increasingly large data sets. Chapter 3 keeps it small: the data still fits easily into an average computer's memory. Chapter 4 increases the challenge by looking at "large data." This data fits on your machine, but fitting it into RAM is hard, making it a challenge to process without a computing cluster. Chapter 5 finally looks at big data; for this we can't get around working with multiple computers.
Chapters 6 through 9 touch on several interesting subjects in data science in a more-or-less independent manner. Chapter 6 looks at NoSQL and how it differs from the relational databases. Chapter 7 applies data science to streaming data; here the main problem is not size, but rather the speed at which data is generated and old data becomes obsolete. Chapter 8 is all about text mining. Not all data starts off as numbers; text mining and text analytics become important when the data is in textual formats such as emails, blogs, websites, and so on. Chapter 9 focuses on the last part of the data science process, data visualization and prototype application building, by introducing a few useful HTML tools.
The appendixes cover the installation and setup of the Elasticsearch, Neo4j, and MySQL databases described in the chapters, and of Anaconda, a Python code package that's especially useful for data science.
Whom this book is for
This book is an introduction to the field of data science. Seasoned data scientists will see that we only scratch the surface of some topics. For our other readers, there are some prerequisites for you to fully enjoy the book: a minimal understanding of SQL, Python, HTML, and statistics or machine learning is recommended before you dive into the practical examples.
Code conventions and downloads
We opted to use the Python script for the practical examples in this book. Over the past decade, Python has developed into a much respected and widely used data science language. The code itself is presented in a fixed-width font like this to separate it from ordinary text. Code annotations accompany many of the listings, highlighting important concepts. The book contains many code examples, most of which are available in the online code base, which can be found at the book's website (books/introducing-data-science). |
16,687 | Davy Cielen is an experienced entrepreneur, book author, and professor. He is the co-owner, with Arno and Mo, of Optimately and Maiton, two data science companies based in Belgium and the UK, respectively, and co-owner of a third data science company based in Somaliland. The main focus of these companies is on strategic big data science, and they are occasionally consulted by many large companies. Davy is an adjunct professor at the IESEG School of Management in Lille, France, where he is involved in teaching and research in the field of big data science.
Arno Meysman is a driven entrepreneur and data scientist. He is the co-owner, with Davy and Mo, of Optimately and Maiton, two data science companies based in Belgium and the UK, respectively, and co-owner of a third data science company based in Somaliland. The main focus of these companies is on strategic big data science, and they are occasionally consulted by many large companies. Arno is a data scientist with a wide spectrum of interests, ranging from medical analysis to retail to game analytics. He believes insights from data combined with some imagination can go a long way toward helping us to improve this world. |
16,688 | Mohamed Ali is an entrepreneur and data science consultant. Together with Davy and Arno, he is the co-owner of Optimately and Maiton, two data science companies based in Belgium and the UK, respectively. His passion lies in two areas, data science and sustainable projects, the latter being materialized through the creation of a third company based in Somaliland.
Author Online
The purchase of Introducing Data Science includes free access to a private web forum run by Manning Publications, where you can make comments about the book, ask technical questions, and receive help from the lead author and from other users. To access the forum and subscribe to it, point your web browser to the book's page on the Manning website (books/introducing-data-science). This page provides information on how to get on the forum once you are registered, what kind of help is available, and the rules of conduct on the forum. Manning's commitment to our readers is to provide a venue where a meaningful dialog between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the AO forum remains voluntary (and unpaid). We suggest you try asking the author some challenging questions lest his interest stray! The Author Online forum and the archives of previous discussions will be accessible from the publisher's website as long as the book is in print. |
16,689 | The illustration on the cover of Introducing Data Science is taken from an edition of Sylvain Maréchal's four-volume compendium of regional dress customs. This book was first published in Paris one year before the French Revolution. Each illustration is colored by hand. The caption for this illustration reads "Homme Salamanque," which means a man from Salamanca, a province in western Spain on the border with Portugal. The region is known for its wild beauty, lush forests, ancient oak trees, rugged mountains, and historic old towns and villages. The Homme Salamanque is just one of many figures in Maréchal's colorful collection. Their diversity speaks vividly of the uniqueness and individuality of the world's towns and regions just a couple of centuries ago. This was a time when the dress codes of two regions separated by a few dozen miles identified people uniquely as belonging to one or the other. The collection brings to life a sense of the isolation and distance of that period, and of every other historic period except our own hyperkinetic present. Dress codes have changed since then, and the diversity by region, so rich at the time, has faded away. It is now often hard to tell the inhabitant of one continent from another. Perhaps we have traded cultural diversity for a more varied personal life, certainly for a more varied and fast-paced technological life. We at Manning celebrate the inventiveness, the initiative, and the fun of the computer business with book covers based on the rich diversity of regional life two centuries ago, brought back to life by Maréchal's pictures. |
16,690 | Data science in a big data world
This chapter covers: defining data science and big data; recognizing the different types of data; gaining insight into the data science process; introducing the fields of data science and big data; working through examples of Hadoop.
Big data is a blanket term for any collection of data sets so large or complex that it becomes difficult to process them using traditional data management techniques such as, for example, the RDBMS (relational database management system). The widely adopted RDBMS has long been regarded as a one-size-fits-all solution, but the demands of handling big data have shown otherwise. Data science involves using methods to analyze massive amounts of data and extract the knowledge it contains. You can think of the relationship between big data and data science as being like the relationship between crude oil and an oil refinery. Data science and big data evolved from statistics and traditional data management but are now considered to be distinct disciplines. |
16,691 | The characteristics of big data are often referred to as the three Vs:
Volume: How much data is there?
Variety: How diverse are the different types of data?
Velocity: At what speed is new data generated?
Often these characteristics are complemented with a fourth V, veracity: How accurate is the data? These four properties make big data different from the data found in traditional data management tools. Consequently, the challenges they bring can be felt in almost every aspect: data capture, curation, storage, search, sharing, transfer, and visualization. In addition, big data calls for specialized techniques to extract the insights.
Data science is an evolutionary extension of statistics capable of dealing with the massive amounts of data produced today. It adds methods from computer science to the repertoire of statistics. In a research note from Laney and Kart, Emerging Role of the Data Scientist and the Art of Data Science, the authors sifted through hundreds of job descriptions for data scientist, statistician, and BI (business intelligence) analyst to detect the differences between those titles. The main things that set a data scientist apart from a statistician are the ability to work with big data and experience in machine learning, computing, and algorithm building. Their tools tend to differ too, with data scientist job descriptions more frequently mentioning the ability to use Hadoop, Pig, Spark, R, Python, and Java, among others. Don't worry if you feel intimidated by this list; most of these will be gradually introduced in this book, though we'll focus on Python. Python is a great language for data science because it has many data science libraries available, and it's widely supported by specialized software. For instance, almost every popular NoSQL database has a Python-specific API. Because of these features and the ability to prototype quickly with Python while keeping acceptable performance, its influence is steadily growing in the data science world. As the amount of data continues to grow and the need to leverage it becomes more important, every data scientist will come across big data projects throughout their career.
Benefits and uses of data science and big data
Data science and big data are used almost everywhere in both commercial and noncommercial settings. The number of use cases is vast, and the examples we'll provide throughout this book only scratch the surface of the possibilities. Commercial companies in almost every industry use data science and big data to gain insights into their customers, processes, staff, competition, and products. Many companies use data science to offer customers a better user experience, as well as to cross-sell, up-sell, and personalize their offerings. A good example of this is Google AdSense, which collects data from internet users so relevant commercial messages can be matched to the person browsing the internet. MaxPoint |
16,692 | is another example of real-time personalized advertising. Human resource professionals use people analytics and text mining to screen candidates, monitor the mood of employees, and study informal networks among coworkers. People analytics is the central theme in the book Moneyball: The Art of Winning an Unfair Game. In the book (and movie) we saw that the traditional scouting process for American baseball was random, and replacing it with correlated signals changed everything. Relying on statistics allowed them to hire the right players and pit them against the opponents where they would have the biggest advantage. Financial institutions use data science to predict stock markets, determine the risk of lending money, and learn how to attract new clients for their services. At the time of writing this book, a large share of trades worldwide are performed automatically by machines based on algorithms developed by quants, as data scientists who work on trading algorithms are often called, with the help of big data and data science techniques.
Governmental organizations are also aware of data's value. Many governmental organizations not only rely on internal data scientists to discover valuable information, but also share their data with the public. You can use this data to gain insights or build data-driven applications. Data.gov is but one example; it's the home of the US government's open data. A data scientist in a governmental organization gets to work on diverse projects such as detecting fraud and other criminal activity or optimizing project funding. A well-known example was provided by Edward Snowden, who leaked internal documents of the American National Security Agency and the British Government Communications Headquarters that show clearly how they used data science and big data to monitor millions of individuals. Those organizations collected billions of data records from widespread applications such as Google Maps, Angry Birds, email, and text messages, among many other data sources. Then they applied data science techniques to distill information.
Nongovernmental organizations (NGOs) are also no strangers to using data. They use it to raise money and defend their causes. The World Wildlife Fund (WWF), for instance, employs data scientists to increase the effectiveness of their fundraising efforts. Many data scientists devote part of their time to helping NGOs, because NGOs often lack the resources to collect data and employ data scientists. DataKind is one such data scientist group that devotes its time to the benefit of mankind.
Universities use data science in their research but also to enhance the study experience of their students. The rise of massive open online courses (MOOCs) produces a lot of data, which allows universities to study how this type of learning can complement traditional classes. MOOCs are an invaluable asset if you want to become a data scientist and big data professional, so definitely look at a few of the better-known ones: Coursera, Udacity, and edX. The big data and data science landscape changes quickly, and MOOCs allow you to stay up to date by following courses from top universities. If you aren't acquainted with them yet, take time to do so now; you'll come to love them as we have. |
16,693 | Facets of data
In data science and big data you'll come across many different types of data, and each of them tends to require different tools and techniques. The main categories of data are these: structured; unstructured; natural language; machine-generated; graph-based; audio, video, and images; streaming. Let's explore all these interesting data types.
Structured data
Structured data is data that depends on a data model and resides in a fixed field within a record. As such, it's often easy to store structured data in tables within databases or Excel files, as the figure below illustrates. SQL, or Structured Query Language, is the preferred way to manage and query data that resides in databases. You may also come across structured data that might give you a hard time storing it in a traditional relational database. Hierarchical data such as a family tree is one such example. The world isn't made up of structured data, though; it's imposed upon it by humans and machines. More often, data comes unstructured.
(Figure: An Excel table is an example of structured data.) |
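As a small illustration of the point about fixed fields and SQL (this example isn't from the chapter itself; it uses Python's built-in sqlite3 module and a made-up table), notice that every record shares the same fields, which is exactly what makes the query straightforward:

    import sqlite3

    # A tiny example of structured data: every record has the same fixed fields.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT, country TEXT, age INTEGER)")
    conn.executemany(
        "INSERT INTO people VALUES (?, ?, ?)",
        [("Lucy", "USA", 34), ("Carlos", "Spain", 28), ("Kim", "Korea", 41)],
    )

    # SQL is the natural way to query data that lives in fixed fields.
    for row in conn.execute("SELECT name, age FROM people WHERE age > 30"):
        print(row)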
16,694 | (Figure: a screenshot of a recruiting email in a webmail client. The caption reads: Email is simultaneously an example of unstructured data and natural language data.)
Unstructured data
Unstructured data is data that isn't easy to fit into a data model because the content is context-specific or varying. One example of unstructured data is your regular email, shown in the figure above. Although email contains structured elements such as the sender, title, and body text, it's a challenge to find the number of people who have written an email complaint about a specific employee, because so many ways exist to refer to a person, for example. The thousands of different languages and dialects out there further complicate this. A human-written email, as shown in the figure, is also a perfect example of natural language data.
Natural language
Natural language is a special type of unstructured data; it's challenging to process because it requires knowledge of specific data science techniques and linguistics. |
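To see why even a simple question about unstructured text gets tricky, consider this small, made-up illustration (the emails, the employee, and the regular expression are all hypothetical): before you can count complaints, you first have to anticipate every way the same person might be referred to.

    import re

    emails = [
        "I would like to complain about John Smith.",
        "Mr. Smith was very unhelpful on the phone today.",
        "J. Smith never answered my ticket.",
    ]

    # Even a simple question -- "how many complaints mention this employee?" --
    # requires anticipating the different ways people write the same name.
    name_variants = re.compile(r"\b(john\s+smith|mr\.?\s+smith|j\.\s+smith)\b", re.IGNORECASE)
    print(sum(1 for mail in emails if name_variants.search(mail)))  # prints 3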
16,695 | The natural language processing community has had success in entity recognition, topic recognition, summarization, text completion, and sentiment analysis, but models trained in one domain don't generalize well to other domains. Even state-of-the-art techniques aren't able to decipher the meaning of every piece of text. This shouldn't be a surprise, though: humans struggle with natural language as well. It's ambiguous by nature. The concept of meaning itself is questionable here. Have two people listen to the same conversation; will they get the same meaning? The meaning of the same words can vary when coming from someone upset or joyous.
Machine-generated data
Machine-generated data is information that's automatically created by a computer, process, application, or other machine without human intervention. Machine-generated data is becoming a major data resource and will continue to do so. Wikibon has forecast that the market value of the industrial internet (a term coined by Frost & Sullivan to refer to the integration of complex physical machinery with networked sensors and software) will run into the billions of dollars. IDC (International Data Corporation) has estimated there will be many times more connected things than people. This network is commonly referred to as the internet of things. The analysis of machine data relies on highly scalable tools, due to its high volume and speed. Examples of machine data are web server logs, call detail records, network event logs, and telemetry, as in the figure below.
(Figure: Example of machine-generated data.) |
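As a brief illustration of what working with machine-generated data often looks like in practice (the log line below is invented, and real log formats vary), a web server log is rigidly formatted enough that a single regular expression can pull out the fields you care about:

    import re

    # A made-up line in a common Apache-style access-log format, one flavor of
    # machine-generated data mentioned above (web server logs).
    log_line = '10.0.0.1 - - [12/Jan/2016:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 5321'

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+)'
    )

    match = pattern.match(log_line)
    if match:
        print(match.group("ip"), match.group("path"), match.group("status"))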
16,696 | The machine data shown in the figure would fit nicely in a classic table-structured database. This isn't the best approach for highly interconnected or "networked" data, where the relationships between entities have a valuable role to play.
Graph-based or network data
"Graph data" can be a confusing term because any data can be shown in a graph. "Graph" in this case points to mathematical graph theory. In graph theory, a graph is a mathematical structure to model pair-wise relationships between objects. Graph or network data is, in short, data that focuses on the relationship or adjacency of objects. The graph structures use nodes, edges, and properties to represent and store graphical data. Graph-based data is a natural way to represent social networks, and its structure allows you to calculate specific metrics such as the influence of a person and the shortest path between two people. Examples of graph-based data can be found on many social media websites. For instance, on LinkedIn you can see who you know at which company. Your follower list on Twitter is another example of graph-based data. The power and sophistication comes from multiple, overlapping graphs of the same nodes. For example, imagine the connecting edges here to show "friends" on Facebook. Imagine another graph with the same people which connects business colleagues via LinkedIn. Imagine a third graph based on movie interests on Netflix. Overlapping the three different-looking graphs makes more interesting questions possible.
(Figure: a small social graph whose nodes are people's names. The caption reads: Friends in a social network are an example of graph-based data.)
Graph databases are used to store graph-based data and are queried with specialized query languages such as SPARQL. Graph data poses its challenges, but for a computer interpreting audio and image data, it can be even more difficult. |
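As a small sketch of those two metrics, influence and shortest path (this uses the third-party networkx library purely for illustration, not something this chapter introduces, and the friendships below are invented):

    import networkx as nx  # third-party library: pip install networkx

    # A tiny friendship graph: nodes are people, edges mean "is friends with".
    g = nx.Graph()
    g.add_edges_from([
        ("Lucy", "Elizabeth"), ("Elizabeth", "Guy"), ("Guy", "Liam"),
        ("Lucy", "Carlos"), ("Carlos", "William"), ("William", "Liam"),
    ])

    # Two questions that come naturally with graph data:
    print(nx.shortest_path(g, "Lucy", "Liam"))  # one of the shortest friend chains
    print(nx.degree_centrality(g))              # a crude proxy for a person's influence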
16,697 | Audio, image, and video
Audio, image, and video are data types that pose specific challenges to a data scientist. Tasks that are trivial for humans, such as recognizing objects in pictures, turn out to be challenging for computers. MLBAM (Major League Baseball Advanced Media) announced that they'll increase video capture to multiple terabytes per game for the purpose of live, in-game analytics. High-speed cameras at stadiums will capture ball and athlete movements to calculate in real time, for example, the path taken by a defender relative to two baselines. Recently a company called DeepMind succeeded at creating an algorithm that's capable of learning how to play video games. This algorithm takes the video screen as input and learns to interpret everything via a complex process of deep learning. It's a remarkable feat that prompted Google to buy the company for their own artificial intelligence (AI) development plans. The learning algorithm takes in data as it's produced by the computer game; it's streaming data.
Streaming data
While streaming data can take almost any of the previous forms, it has an extra property. The data flows into the system when an event happens instead of being loaded into a data store in a batch. Although this isn't really a different type of data, we treat it here as such because you need to adapt your process to deal with this type of information. Examples are the "What's trending" on Twitter, live sporting or music events, and the stock market.
The data science process
The data science process typically consists of six steps, as you can see in the mind map in the figure below. We will introduce them briefly here and handle them in more detail in chapter 2.
(Figure: The data science process, shown as a mind map with six branches: setting the research goal, retrieving data, data preparation, data exploration, data modeling, and presentation and automation.)
Setting the research goal
Data science is mostly applied in the context of an organization. When the business asks you to perform a data science project, you'll first prepare a project charter. This charter contains information such as what you're going to research, how the company benefits from that, what data and resources you need, a timetable, and deliverables. |
16,698 | Throughout this book, the data science process will be applied to bigger case studies, and you'll get an idea of different possible research goals.
Retrieving data
The second step is to collect data. You've stated in the project charter which data you need and where you can find it. In this step you ensure that you can use the data in your program, which means checking the existence of, quality of, and access to the data. Data can also be delivered by third-party companies and takes many forms, ranging from Excel spreadsheets to different types of databases.
Data preparation
Data collection is an error-prone process; in this phase you enhance the quality of the data and prepare it for use in subsequent steps. This phase consists of three subphases: data cleansing removes false values from a data source and inconsistencies across data sources, data integration enriches data sources by combining information from multiple data sources, and data transformation ensures that the data is in a suitable format for use in your models.
Data exploration
Data exploration is concerned with building a deeper understanding of your data. You try to understand how variables interact with each other, the distribution of the data, and whether there are outliers. To achieve this you mainly use descriptive statistics, visual techniques, and simple modeling. This step often goes by the abbreviation EDA, for exploratory data analysis.
Data modeling or model building
In this phase you use models, domain knowledge, and insights about the data you found in the previous steps to answer the research question. You select a technique from the fields of statistics, machine learning, operations research, and so on. Building a model is an iterative process that involves selecting the variables for the model, executing the model, and model diagnostics.
Presentation and automation
Finally, you present the results to your business. These results can take many forms, ranging from presentations to research reports. Sometimes you'll need to automate the execution of the process because the business will want to use the insights you gained in another project or enable an operational process to use the outcome from your model. |
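As a brief, made-up illustration of that exploration step (the order data and the outlier rule are assumptions, not a prescription), descriptive statistics plus a simple fence already surface the kind of outlier that would send you back to data preparation:

    import pandas as pd

    # Invented order data standing in for whatever was retrieved earlier.
    df = pd.DataFrame({"customer": ["a", "b", "c", "d", "e"],
                       "order_total": [22.50, 19.90, 24.10, 21.70, 950.00]})

    # Descriptive statistics are often the first pass of data exploration...
    print(df["order_total"].describe())

    # ...and a crude interquartile-range fence flags the suspicious 950.00 order.
    q1 = df["order_total"].quantile(0.25)
    q3 = df["order_total"].quantile(0.75)
    upper_fence = q3 + 1.5 * (q3 - q1)
    print(df[df["order_total"] > upper_fence])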
16,699 | The previous description of the data science process gives you the impression that you walk through this process in a linear way, but in reality you often have to step back and rework certain findings. For instance, you might find outliers in the data exploration phase that point to data import errors. As part of the data science process you gain incremental insights, which may lead to new questions. To prevent rework, make sure that you scope the business question clearly and thoroughly at the start. It is an iterative process. Now that we have a better understanding of the process, let's look at the technologies.
The big data ecosystem and data science
Currently many big data tools and frameworks exist, and it's easy to get lost because new technologies appear rapidly. It's much easier once you realize that the big data ecosystem can be grouped into technologies that have similar goals and functionalities, which we'll discuss in this section. Data scientists use many different technologies, but not all of them; we'll cover the most important data science technology classes separately. The mind map in the figure shows the components of the big data ecosystem and where the different technologies belong. Let's look at the different groups of tools in this diagram and see what each does. We'll start with distributed file systems.
Distributed file systems
A distributed file system is similar to a normal file system, except that it runs on multiple servers at once. Because it's a file system, you can do almost all the same things you'd do on a normal file system. Actions such as storing, reading, and deleting files and adding security to files are at the core of every file system, including the distributed one. Distributed file systems have significant advantages: they can store files larger than any one computer disk, and files get automatically replicated across multiple servers for redundancy or parallel operations, while hiding the complexity of doing so from the user. The system also scales easily: you're no longer bound by the memory or storage restrictions of a single server. In the past, scale was increased by moving everything to a server with more memory, storage, and a better CPU (vertical scaling). Nowadays you can add another small server (horizontal scaling). This principle makes the scaling potential virtually limitless. The best-known distributed file system at this moment is the Hadoop File System (HDFS). It is an open source implementation of the Google File System. In this book we focus on the Hadoop File System because it is the most common one in use. However, many other distributed file systems exist: Red Hat Cluster File System, Ceph File System, and Tachyon File System, to name but three. |
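As a minimal sketch of how that complexity stays hidden from you (assuming a working local PySpark installation; the file paths are placeholders, not real data sets), the same few lines of code read a file whether it lives on your desktop or on a distributed file system such as HDFS; only the URL scheme changes:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local[*]").setAppName("LineCount")
    sc = SparkContext(conf=conf)

    # A hypothetical local file; on a cluster you might point at
    # "hdfs:///some/path/book.txt" instead and the rest stays the same.
    lines = sc.textFile("file:///SparkCourse/book.txt")
    print(lines.count())

    sc.stop()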