[Figure: The big data ecosystem and data science. Big data technologies can be classified into a few main components: distributed file systems, distributed programming frameworks, data integration, machine learning, scheduling, benchmarking, system deployment, service programming, security, and NoSQL and NewSQL databases (document stores, key-value stores, column databases, graph databases, and SQL on Hadoop).]
Distributed programming framework
Once you have the data stored on a distributed file system, you want to exploit it. One important aspect of working on a distributed hard disk is that you won't move your data to your program; rather, you'll move your program to the data. When you start from scratch with a normal general-purpose programming language such as C, Python, or Java, you need to deal with the complexities that come with distributed programming, such as restarting jobs that have failed, tracking the results from the different subprocesses, and so on. Luckily, the open source community has developed many frameworks to handle this for you, and these give you a much better experience working with distributed data and dealing with many of the challenges it carries.

Data integration framework
Once you have a distributed file system in place, you need to add data. You need to move data from one source to another, and this is where data integration frameworks such as Apache Sqoop and Apache Flume excel. The process is similar to an extract, transform, and load (ETL) process in a traditional data warehouse.

Machine learning frameworks
When you have the data in place, it's time to extract the coveted insights. This is where you rely on the fields of machine learning, statistics, and applied mathematics. Before World War II everything needed to be calculated by hand, which severely limited the possibilities of data analysis. After World War II computers and scientific computing were developed; a single computer could do all the counting and calculations, and a world of opportunities opened. Ever since this breakthrough, people only need to derive the mathematical formulas, write them in an algorithm, and load their data. With the enormous amount of data available nowadays, however, one computer can no longer handle the workload by itself. In fact, several algorithms developed in the previous millennium would never terminate before the end of the universe, even if you could use every computer available on Earth. This has to do with time complexity (think of breaking a password by testing every possible combination). Because such algorithms don't scale well to the amount of data we need to analyze today, specialized frameworks and libraries are required. The most popular machine learning library for Python is scikit-learn. It's a great machine learning toolbox, and we'll use it later in the book. There are, of course, other Python libraries:

- PyBrain for neural networks: Neural networks are learning algorithms that mimic the human brain in learning mechanics and complexity. Neural networks are often regarded as advanced and black box.
- NLTK, or Natural Language Toolkit: As the name suggests, its focus is working with natural language. It's an extensive library that comes bundled with a number of text corpora to help you model your own data.
- Pylearn2: Another machine learning toolbox, but a bit less mature than scikit-learn.
- TensorFlow: A Python library for deep learning provided by Google.

The landscape doesn't end with Python libraries, of course. Spark is a new, Apache-licensed machine learning engine specializing in real-time machine learning. It's worth taking a look at.

NoSQL databases
If you need to store huge amounts of data, you require software that's specialized in managing and querying this data. Traditionally this has been the playing field of relational databases such as Oracle SQL, MySQL, Sybase IQ, and others. While they're still the go-to technology for many use cases, new types of databases have emerged under the grouping of NoSQL databases. The name of this group can be misleading, as "No" in this context stands for "Not Only". A lack of functionality in SQL isn't the biggest reason for the paradigm shift, and many of the NoSQL databases have implemented a version of SQL themselves. But traditional databases had shortcomings that didn't allow them to scale well. By solving several of the problems of traditional databases, NoSQL databases allow for virtually endless growth of data. These shortcomings relate to every property of big data: their storage or processing power can't scale beyond a single node, and they have no way to handle streaming, graph, or unstructured forms of data. Many different types of databases have arisen, but they can be categorized into the following types:

- Column databases: Data is stored in columns, which allows algorithms to perform much faster queries. Newer technologies use cell-wise storage. Table-like structures are still important.
- Document stores: Document stores no longer use tables, but store every observation in a document. This allows for a much more flexible data scheme.
- Streaming data: Data is collected, transformed, and aggregated not in batches but in real time. Although we've categorized it here as a database to help you in tool selection, it's more a particular type of problem that drove the creation of technologies such as Storm.
- Key-value stores: Data isn't stored in a table; rather, you assign a key for every value, such as org.marketing.sales. This scales well but places almost all the implementation work on the developer.
- SQL on Hadoop: Batch queries on Hadoop are written in a SQL-like language that uses the MapReduce framework in the background.
- NewSQL: This class combines the scalability of NoSQL databases with the advantages of relational databases. They all have a SQL interface and a relational data model.
- Graph databases: Not every problem is best stored in a table. Particular problems are more naturally translated into graph theory and stored in graph databases. A classic example of this is a social network.

Scheduling tools
Scheduling tools help you automate repetitive tasks and trigger jobs based on events such as adding a new file to a folder. These are similar to tools such as cron on Linux but are specifically developed for big data. You can use them, for instance, to start a MapReduce task whenever a new dataset is available in a directory.

Benchmarking tools
This class of tools was developed to optimize your big data installation by providing standardized profiling suites. A profiling suite is taken from a representative set of big data jobs. Benchmarking and optimizing the big data infrastructure and configuration aren't often jobs for data scientists themselves but for a professional specialized in setting up IT infrastructure; thus they aren't covered in this book. Using an optimized infrastructure can make a big cost difference: every bit of extra performance you squeeze out of a large cluster saves the cost of the servers you no longer need.

System deployment
Setting up a big data infrastructure isn't an easy task, and assisting engineers in deploying new applications into the big data cluster is where system deployment tools shine. They largely automate the installation and configuration of big data components. This isn't a core task of a data scientist.

Service programming
Suppose that you've made a world-class soccer prediction application on Hadoop, and you want to allow others to use the predictions made by your application. However, you have no idea of the architecture or technology of everyone keen on using your predictions. Service tools excel here by exposing big data applications to other applications as a service. Data scientists sometimes need to expose their models through services. The best-known example is the REST service; REST stands for Representational State Transfer. It's often used to feed websites with data.

Security
Do you want everybody to have access to all of your data? You probably need to have fine-grained control over access to the data but don't want to manage this on an application-by-application basis. Big data security tools allow you to have central and fine-grained control over access to the data. Big data security has become a topic in its own right, and data scientists are usually only confronted with it as data consumers; seldom will they implement the security themselves. In this book we don't describe how to set up security on big data because that's a job for a security expert.
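To make the ideas of machine learning libraries and service programming a bit more concrete, here is a minimal sketch, not taken from the book, of training a scikit-learn model and exposing its predictions as a REST service with Flask. The training data, the feature meanings, and the /predict endpoint are invented for illustration only.

from flask import Flask, jsonify, request
from sklearn.ensemble import RandomForestClassifier

# Toy training data: two hypothetical features per match (say, home and away
# team strength) and a label indicating whether the home team won.
X_train = [[0.9, 0.3], [0.2, 0.8], [0.7, 0.4], [0.1, 0.9]]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON such as {"features": [0.8, 0.5]} and returns the predicted class.
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"home_win": int(prediction)})

if __name__ == "__main__":
    app.run(port=5000)

Any application that can send an HTTP request can now consume the model's predictions without knowing anything about how it was built.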
An introductory working example of Hadoop
We'll end this chapter with a small application in a big data context. For this we'll use a Hortonworks Sandbox image. This is a virtual machine created by Hortonworks to let you try some big data applications on a local machine. Later on in this book you'll see how Juju eases the installation of Hadoop on multiple machines. We'll use a small data set of job salary data to run our first sample, but querying a large data set of billions of rows would be equally easy. The query language will seem like SQL, but behind the scenes a MapReduce job will run and produce a straightforward table of results, which can then be turned into a bar graph. The end result of this exercise looks like the figure below.

[Figure: The end result: the average salary by job description.]

To get up and running as fast as possible we use the Hortonworks Sandbox inside VirtualBox. VirtualBox is a virtualization tool that allows you to run another operating system inside your own operating system. In this case you can run CentOS with an existing Hadoop installation inside your installed operating system. A few steps are required to get the sandbox up and running on VirtualBox. Caution: the following steps were applicable at the time this chapter was written.

1. Download the virtual image from Hortonworks.
2. Start your virtual machine host. VirtualBox can be downloaded from the VirtualBox website.
3. Open the import dialog and select the virtual image from Hortonworks.
4. Click Next.
5. Click Import; after a little time your image should be imported.
6. Now select your virtual machine and click Run.
7. Give it a little time to start the CentOS distribution with the Hadoop installation running, as shown in the figure below. Notice the sandbox version used here; with other versions things could be slightly different.

[Figure: Hortonworks Sandbox running within VirtualBox.]

You can log on to the machine directly or use SSH to log on. For this application you'll use the web interface. Point your browser to the sandbox's web address and you'll be welcomed with the screen shown in the figure below. Hortonworks has uploaded two sample sets, which you can see in HCatalog. Just click the HCat button on the screen and you'll see the tables available to you.
[Figure: The Hortonworks Sandbox welcome screen.]
[Figure: A list of available tables in HCatalog.]
[Figure: The contents of the table.]

To see the contents of the data, click the Browse Data button next to the sample_07 entry to get the next screen. This looks like an ordinary table, and Hive is a tool that lets you approach it like an ordinary database with SQL. That's right: in Hive you get your results using HiveQL, a dialect of plain old SQL. To open the Beeswax HiveQL editor, click the Beeswax button in the menu. To get your results, execute the following query:

SELECT description, avg(salary) AS average_salary
FROM sample_07
GROUP BY description
ORDER BY average_salary DESC;

Click the Execute button. Hive translates your HiveQL into a MapReduce job and executes it in your Hadoop environment, as you can see in the figure below. It's best, however, to avoid reading the log window for now; at this point it's misleading. If this is your first query, it could take a while: Hadoop is famous for its warm-up periods, but that discussion is for later.
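For readers who want to see what this query computes without a Hadoop cluster, here is a small sketch, not part of the original example, of the same aggregation done locally with pandas on a hypothetical CSV export of the table; the file name is invented.

import pandas as pd

# Hypothetical CSV export of the sample table, with at least the
# 'description' and 'salary' columns used in the HiveQL query.
df = pd.read_csv("sample_07.csv")

# Equivalent of: SELECT description, avg(salary) ... GROUP BY description
#                ORDER BY average_salary DESC
average_salary = (
    df.groupby("description")["salary"]
      .mean()
      .sort_values(ascending=False)
)
print(average_salary.head(10))

The difference, of course, is that Hive runs the same logic as a MapReduce job across a cluster, so it keeps working when the data no longer fits on one machine.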
[Figure: You can execute a HiveQL command in the Beeswax HiveQL editor; behind the scenes it's translated into a MapReduce job.]
[Figure: The logging shows that your HiveQL is translated into a MapReduce job. Note: this log came from the version of HDP used at the time of writing, so the current version might look slightly different.]
[Figure: The end result: an overview of the average salary by profession.]

After a while the result appears. Great work! The conclusion, as shown in the figure, is that going to medical school is a good investment. Surprised? With this table we conclude our introductory Hadoop tutorial.

Although this was but the beginning, it might have felt a bit overwhelming at times. It's recommended to leave it be for now and come back here again when all the concepts have been thoroughly explained. Data science is a broad field, so it comes with a broad vocabulary. We hope to give you a glimpse of most of it during our time together. Afterward, you can pick and choose and hone your skills in whatever direction interests you the most. That's what "introducing data science" is all about, and we hope you'll enjoy the ride with us.

Summary
In this chapter you learned the following:

- Big data is a blanket term for any collection of data sets so large or complex that it becomes difficult to process them using traditional data management techniques. They are characterized by the four Vs: velocity, variety, volume, and veracity.
- Data science involves using methods to analyze data sets ranging from small ones to the gargantuan ones big data is all about.
- Even though the data science process isn't linear, it can be divided into steps: setting the research goal, gathering data, data preparation, data exploration, modeling, and presentation and automation.
- The big data landscape is more than Hadoop alone. It consists of many different technologies that can be categorized into the following: file systems, distributed programming frameworks, data integration, databases, machine learning, security, scheduling, benchmarking, system deployment, and service programming.
- Not every big data category is utilized heavily by data scientists. They focus mainly on the file system, the distributed programming frameworks, databases, and machine learning. They do come in contact with the other components, but these are the domains of other professions.
- Data can come in different forms. The main forms are structured data, unstructured data, natural language data, machine data, graph-based data, and streaming data.
This chapter covers:
- Understanding the flow of the data science process
- Discussing the steps in the data science process

The goal of this chapter is to give an overview of the data science process without diving into big data yet. You'll learn how to work with big data sets, streaming data, and text data in subsequent chapters.

Overview of the data science process
Following a structured approach to data science helps you to maximize your chances of success in a data science project at the lowest cost. It also makes it possible to take up a project as a team, with each team member focusing on what they do best. Take care, however: this approach may not be suitable for every type of project or be the only way to do good data science. The typical data science process consists of six steps through which you'll iterate, as shown in the figure below.
[Figure: The six steps of the data science process: 1 setting the research goal (define research goal, create project charter); 2 retrieving data (internal data, external data, data ownership); 3 data preparation (data cleansing, data transformation, combining data); 4 data exploration (simple graphs, combined graphs, link and brush, nongraphical techniques); 5 data modeling (model and variable selection, model execution, model diagnostics and model comparison); 6 presentation and automation (presenting data, automating data analysis).]

The figure summarizes the data science process and shows the main steps and actions you'll take during a project. The following list is a short introduction; each of the steps will be discussed in greater depth throughout this chapter.

1. The first step of this process is setting a research goal. The main purpose here is making sure all the stakeholders understand the what, how, and why of the project. In every serious project this will result in a project charter.
2. The second phase is data retrieval. You want to have data available for analysis, so this step includes finding suitable data and getting access to the data from the data owner. The result is data in its raw form, which probably needs polishing and transformation before it becomes usable.
3. Now that you have the raw data, it's time to prepare it. This includes transforming the data from a raw form into data that's directly usable in your models. To achieve this, you'll detect and correct different kinds of errors in the data, combine data from different data sources, and transform it. If you have successfully completed this step, you can progress to data visualization and modeling.
4. The fourth step is data exploration. The goal of this step is to gain a deep understanding of the data. You'll look for patterns, correlations, and deviations based on visual and descriptive techniques. The insights you gain from this phase will enable you to start modeling.
5. Finally, we get to the sexiest part: model building (often referred to as "data modeling" throughout this book). It is now that you attempt to gain the insights or make the predictions stated in your project charter. Now is the time to bring out the heavy guns, but remember: research has taught us that often (but not always) a combination of simple models tends to outperform one complicated model. If you've done this phase right, you're almost done.
6. The last step of the data science process is presenting your results and automating the analysis, if needed. One goal of a project is to change a process and/or make better decisions. You may still need to convince the business that your findings will indeed change the business process as expected. This is where you can shine in your influencer role. The importance of this step is more apparent in projects on a strategic and tactical level. Certain projects require you to perform the business process over and over again, so automating the project will save time.

In reality you won't progress in a linear way from step one to step six. Often you'll regress and iterate between the different phases. Following these six steps pays off in terms of a higher project success ratio and increased impact of research results. This process ensures you have a well-defined research plan, a good understanding of the business question, and clear deliverables before you even start looking at data. The first steps of your process focus on getting high-quality data as input for your models. This way your models will perform better later on. In data science there's a well-known saying: garbage in equals garbage out.

Another benefit of following a structured approach is that you work more in prototype mode while you search for the best model. When building a prototype, you'll probably try multiple models and won't focus heavily on issues such as program speed or writing code against standards. This allows you to focus on bringing business value instead.

Not every project is initiated by the business itself. Insights learned during analysis or the arrival of new data can spawn new projects. When the data science team generates an idea, work has already been done to make a proposition and find a business sponsor.
Dividing a project into smaller stages also allows employees to work together as a team. It's impossible to be a specialist in everything. You'd need to know how to upload all the data to all the different databases, find an optimal data scheme that works not only for your application but also for other projects inside your company, and then keep track of all the statistical and data-mining techniques, while also being an expert in presentation tools and business politics. That's a hard task, and it's why more and more companies rely on a team of specialists rather than trying to find one person who can do it all.

The process we described in this section is best suited for a data science project that contains only a few models. It's not suited for every type of project. For instance, a project that contains millions of real-time models would need a different approach than the flow we describe here. A beginning data scientist should get a long way following this manner of working, though.

Don't be a slave to the process
Not every project will follow this blueprint, because your process is subject to the preferences of the data scientist, the company, and the nature of the project you work on. Some companies may require you to follow a strict protocol, whereas others have a more informal manner of working. In general, you'll need a structured approach when you work on a complex project or when many people or resources are involved.

The agile project model is an alternative to a sequential process with iterations. As this methodology wins more ground in the IT department and throughout the company, it's also being adopted by the data science community. Although the agile methodology is suitable for a data science project, many company policies will favor a more rigid approach toward data science.

Planning every detail of the data science process upfront isn't always possible, and more often than not you'll iterate between the different steps of the process. For instance, after the briefing you start your normal flow until you're in the exploratory data analysis phase. Your graphs show a distinction in the behavior between two groups (men and women, maybe?), but you aren't sure because you don't have a variable that indicates whether the customer is male or female. You need to retrieve an extra data set to confirm this. For this you need to go through the approval process, which indicates that you (or the business) need to provide a kind of project charter. In big companies, getting all the data you need to finish your project can be an ordeal.

Step 1: Defining research goals and creating a project charter
A project starts by understanding the what, the why, and the how of your project (see the figure below). What does the company expect you to do? And why does management place such a value on your research? Is it part of a bigger strategic picture or a "lone wolf" project originating from an opportunity someone detected?
[Figure: Step 1 of the data science process: setting the research goal (define research goal, create project charter).]

Answering these three questions (what, why, how) is the goal of the first phase, so that everybody knows what to do and can agree on the best course of action. The outcome should be a clear research goal, a good understanding of the context, well-defined deliverables, and a plan of action with a timetable. This information is then best placed in a project charter. The length and formality can, of course, differ between projects and companies. In this early phase of the project, people skills and business acumen are more important than great technical prowess, which is why this part will often be guided by more senior personnel.

Spend time understanding the goals and context of your research
An essential outcome is the research goal that states the purpose of your assignment in a clear and focused manner. Understanding the business goals and context is critical for project success. Continue asking questions and devising examples until you grasp the exact business expectations, identify how your project fits in the bigger picture, appreciate how your research is going to change the business, and understand how they'll use your results. Nothing is more frustrating than spending months researching something until you have that one moment of brilliance and solve the problem, only to find that when you report your findings back to the organization, everyone immediately realizes that you misunderstood their question. Don't skim over this phase lightly. Many data scientists fail here: despite their mathematical wit and scientific brilliance, they never seem to grasp the business goals and context.

Create a project charter
Clients like to know upfront what they're paying for, so after you have a good understanding of the business problem, try to get a formal agreement on the deliverables. All this information is best collected in a project charter. For any significant project this would be mandatory.
A project charter requires teamwork, and your input covers at least the following:
- A clear research goal
- The project mission and context
- How you're going to perform your analysis
- What resources you expect to use
- Proof that it's an achievable project, or proof of concepts
- Deliverables and a measure of success
- A timeline

Your client can use this information to make an estimation of the project costs and the data and people required for your project to become a success.

Step 2: Retrieving data
The next step in data science is to retrieve the required data (see the figure below). Sometimes you need to go into the field and design a data collection process yourself, but most of the time you won't be involved in this step. Many companies will have already collected and stored the data for you, and what they don't have can often be bought from third parties. Don't be afraid to look outside your organization for data, because more and more organizations are making even high-quality data freely available for public and commercial use.

[Figure: Step 2 of the data science process: retrieving data (internal data, external data, data ownership).]

Data can be stored in many forms, ranging from simple text files to tables in a database. The objective now is acquiring all the data you need. This may be difficult, and even if you succeed, data is often like a diamond in the rough: it needs polishing to be of any use to you.
Start with data stored within the company
Your first act should be to assess the relevance and quality of the data that's readily available within your company. Most companies have a program for maintaining key data, so much of the cleaning work may already be done. This data can be stored in official data repositories such as databases, data marts, data warehouses, and data lakes maintained by a team of IT professionals. The primary goal of a database is data storage, while a data warehouse is designed for reading and analyzing that data. A data mart is a subset of the data warehouse and geared toward serving a specific business unit. While data warehouses and data marts are home to preprocessed data, data lakes contain data in its natural or raw format. But the possibility exists that your data still resides in Excel files on the desktop of a domain expert.

Finding data even within your own company can sometimes be a challenge. As companies grow, their data becomes scattered around many places. Knowledge of the data may be dispersed as people change positions and leave the company. Documentation and metadata aren't always the top priority of a delivery manager, so it's possible you'll need to develop some Sherlock Holmes-like skills to find all the lost bits.

Getting access to data is another difficult task. Organizations understand the value and sensitivity of data and often have policies in place so everyone has access to what they need and nothing more. These policies translate into physical and digital barriers called Chinese walls. These "walls" are mandatory and well-regulated for customer data in most countries. This is for good reasons, too: imagine everybody in a credit card company having access to your spending habits. Getting access to the data may take time and involve company politics.

Don't be afraid to shop around
If data isn't available inside your organization, look outside your organization's walls. Many companies specialize in collecting valuable information. For instance, Nielsen and GfK are well known for this in the retail industry. Other companies provide data so that you, in turn, can enrich their services and ecosystem. Such is the case with Twitter, LinkedIn, and Facebook.

Although data is considered an asset more valuable than oil by certain companies, more and more governments and organizations share their data for free with the world. This data can be of excellent quality; it depends on the institution that creates and manages it. The information they share covers a broad range of topics such as the number of accidents or the amount of drug abuse in a certain region and its demographics. This data is helpful when you want to enrich proprietary data but is also convenient when training your data science skills at home. The table below shows only a small selection from the growing number of open-data providers.
A list of open-data providers that should get you started:

Open data site         Description
data.gov               The home of the US government's open data
data.europa.eu         The home of the European Commission's open data
freebase.org           An open database that retrieves its information from sites like Wikipedia, MusicBrainz, and the SEC archive
data.worldbank.org     Open data initiative from the World Bank
aiddata.org            Open data for international development
open.fda.gov           Open data from the US Food and Drug Administration

Do data quality checks now to prevent problems later
Expect to spend a good portion of your project time doing data correction and cleansing. The retrieval of data is the first time you'll inspect the data in the data science process. Most of the errors you'll encounter during the data-gathering phase are easy to spot, but being too careless will make you spend many hours solving data issues that could have been prevented during data import.

You'll investigate the data during the import, data preparation, and exploratory phases. The difference is in the goal and the depth of the investigation. During data retrieval, you check to see if the data is equal to the data in the source document and look to see if you have the right data types. This shouldn't take too long; when you have enough evidence that the data is similar to the data you find in the source document, you stop. With data preparation, you do a more elaborate check. If you did a good job during the previous phase, the errors you find now are also present in the source document. The focus is on the content of the variables: you want to get rid of typos and other data entry errors and bring the data to a common standard among the data sets. For example, you might correct USQ to USA and United Kingdom to UK. During the exploratory phase your focus shifts to what you can learn from the data. Now you assume the data to be clean and look at the statistical properties such as distributions, correlations, and outliers. You'll often iterate over these phases. For instance, when you discover outliers in the exploratory phase, they can point to a data entry error. Now that you understand how the quality of the data is improved during the process, we'll look deeper into the data preparation step.
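As a small illustration of the kind of quick check you might run at retrieval time, here is a sketch (not from the book) using pandas; the file name and columns are hypothetical.

import pandas as pd

# Hypothetical export of the data you just retrieved.
df = pd.read_csv("customers.csv")

# Quick retrieval-time checks: do the shape and types match the source document?
print(df.shape)       # number of rows and columns
print(df.dtypes)      # data type per column
print(df.head())      # first rows, to compare against the source by eye

# A first glance at statistical properties and missing values.
print(df.describe(include="all"))
print(df.isnull().sum())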
Step 3: Cleansing, integrating, and transforming data
The data received from the data retrieval phase is likely to be a "diamond in the rough." Your task now is to sanitize and prepare it for use in the modeling and reporting phase. Doing so is tremendously important because your models will perform better and you'll lose less time trying to fix strange output. It can't be mentioned nearly enough times: garbage in equals garbage out. Your model needs the data in a specific format, so data transformation will always come into play. It's a good habit to correct data errors as early on in the process as possible. However, this isn't always possible in a realistic setting, so you'll need to take corrective actions in your program.

[Figure: Step 3 of the data science process: data preparation, consisting of data cleansing (errors from data entry, physically impossible values, missing values, outliers, spaces and typos, errors against the codebook), data transformation (aggregating data, extrapolating data, derived measures, creating dummies, reducing the number of variables), and combining data (merging/joining data sets, set operators, creating views).]

The figure above shows the most common actions to take during the data cleansing, integration, and transformation phase. This mind map may look a bit abstract for now, but we'll handle all of these points in more detail in the next sections. You'll see a great commonality among all of these actions.

Cleansing data
Data cleansing is a subprocess of the data science process that focuses on removing errors in your data so your data becomes a true and consistent representation of the processes it originates from. By "true and consistent representation" we imply that at least two types of errors exist.
The first type is the interpretation error, such as when you take the value in your data for granted, like accepting without question an age far greater than any human lifespan. The second type of error points to inconsistencies between data sources or against your company's standardized values. An example of this class of errors is putting "Female" in one table and "F" in another when they represent the same thing: that the person is female. Another example is that you use pounds in one table and dollars in another. Too many possible errors exist for this list to be exhaustive, but the table below shows an overview of the types of errors that can be detected with easy checks, the "low-hanging fruit," as it were.

An overview of common errors
General solution: try to fix the problem early in the data acquisition chain, or else fix it in the program.

Errors pointing to false values within one data set:
Error description                  Possible solution
Mistakes during data entry         Manual overrules
Redundant white space              Use string functions
Impossible values                  Manual overrules
Missing values                     Remove observation or value
Outliers                           Validate and, if erroneous, treat as missing value (remove or insert)

Errors pointing to inconsistencies between data sets:
Error description                  Possible solution
Deviations from a code book        Match on keys, or else use manual overrules
Different units of measurement     Recalculate
Different levels of aggregation    Bring to the same level of measurement by aggregation or extrapolation

Sometimes you'll use more advanced methods, such as simple modeling, to find and identify data errors; diagnostic plots can be especially insightful. For example, in the figure below we use a measure to identify data points that seem out of place. We do a regression to get acquainted with the data and detect the influence of individual observations on the regression line. When a single observation has too much influence, this can point to an error in the data, but it can also be a valid point. At the data cleansing stage, these advanced methods are, however, rarely applied and often regarded by certain data scientists as overkill. Now that we've given the overview, it's time to explain these errors in more detail.
[Figure: A single outlier can throw off a regression estimate. The encircled point influences the model heavily and is worth investigating because it can point to a region where you don't have enough data or might indicate an error in the data, but it can also be a valid data point. The plot contrasts the normal regression line with the regression line influenced by the outlier.]

Data entry errors
Data collection and data entry are error-prone processes. They often require human intervention, and because humans are only human, they make typos or lose their concentration for a second and introduce an error into the chain. But data collected by machines or computers isn't free from errors either. Some errors arise from human sloppiness, whereas others are due to machine or hardware failure. Examples of errors originating from machines are transmission errors or bugs in the extract, transform, and load (ETL) phase.

For small data sets you can check every value by hand. Detecting data errors when the variables you study don't have many classes can be done by tabulating the data with counts. When you have a variable that can take only two values, "Good" and "Bad", you can create a frequency table and see if those are truly the only two values present. In the table below, the values "Godo" and "Bade" point out that something went wrong in at least a few cases.

Detecting outliers on simple variables with a frequency table
Value    Count
Good     ...
Bad      ...
Godo     ...
Bade     ...
Most errors of this type are easy to fix with simple assignment statements and if-then-else rules:

if x == "Godo":
    x = "Good"
if x == "Bade":
    x = "Bad"

Redundant whitespace
Whitespaces tend to be hard to detect but cause errors like other redundant characters would. Who hasn't lost a few days in a project because of a bug caused by whitespaces at the end of a string? You ask the program to join two keys and notice that observations are missing from the output file. After looking for days through the code, you finally find the bug. Then comes the hardest part: explaining the delay to the project stakeholders. The cleaning during the ETL phase wasn't well executed, and keys in one table contained a whitespace at the end of a string. This caused a mismatch of keys such as "FR " versus "FR", dropping the observations that couldn't be matched. If you know to watch out for them, fixing redundant whitespaces is luckily easy enough in most programming languages. They all provide string functions that will remove the leading and trailing whitespaces. For instance, in Python you can use the strip() function to remove leading and trailing spaces.

Fixing capital letter mismatches
Capital letter mismatches are common. Most programming languages make a distinction between "Brazil" and "brazil". In this case you can solve the problem by applying a function that returns both strings in lowercase, such as .lower() in Python: "Brazil".lower() == "brazil".lower() should result in True.

Impossible values and sanity checks
Sanity checks are another valuable type of data check. Here you check the value against physically or theoretically impossible values, such as people several meters tall or with an age of hundreds of years. Sanity checks can be directly expressed with rules, for example a rule that flags any age outside a plausible range (say, below 0 or above 120).
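Pulling the fixes from this section together, here is a small sketch (not from the book) of how they might look with pandas on a hypothetical data frame; the column names and the 0-120 age range are assumptions.

import pandas as pd

df = pd.DataFrame({"label": [" Good", "Bad ", "Godo", "BADE"],
                   "age":   [25, 37, 299, 41]})

# Strip redundant whitespace, normalize the capitalization, and correct known typos.
df["label"] = (df["label"].str.strip()
                          .str.capitalize()
                          .replace({"Godo": "Good", "Bade": "Bad"}))

# Sanity check: flag physically impossible ages (the 0-120 range is an assumption).
df["age_suspicious"] = ~df["age"].between(0, 120)
print(df)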
Outliers
An outlier is an observation that seems to be distant from other observations or, more specifically, one observation that follows a different logic or generative process than the other observations. The easiest way to find outliers is to use a plot or a table with the minimum and maximum values. An example is shown in the figure below: the plot on the top shows no outliers, whereas the plot on the bottom shows possible outliers on the upper side when a normal distribution is expected.

[Figure: Distribution plots are helpful in detecting outliers and helping you understand the variable. The top panel shows the expected distribution; the bottom panel shows a distribution with outliers.]

The normal distribution, or Gaussian distribution, is the most common distribution in natural sciences. It shows most cases occurring around the average of the distribution, with the occurrences decreasing further away from it. The high values in the bottom graph can therefore point to outliers when a normal distribution is assumed. As we saw earlier with the regression example, outliers can gravely influence your data modeling, so investigate them first.
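A minimal sketch (not from the book) of the min/max inspection and one common rule of thumb for spotting candidate outliers when a roughly normal distribution is assumed; the data and the three-standard-deviations cutoff are assumptions for illustration.

import pandas as pd

# Twenty ordinary measurements and one suspicious value.
values = pd.Series([12.0, 11.9, 12.1, 12.2, 11.8] * 4 + [100.0])

# First look: the minimum and maximum already hint at something odd.
print(values.min(), values.max())

# Candidate outliers: more than three standard deviations from the mean.
z_scores = (values - values.mean()) / values.std()
print(values[z_scores.abs() > 3])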
Dealing with missing values
Missing values aren't necessarily wrong, but you still need to handle them separately; certain modeling techniques can't handle missing values. They might be an indicator that something went wrong in your data collection or that an error happened in the ETL process. Common techniques data scientists use are listed in the table below.

An overview of techniques to handle missing data
Technique: Omit the values
  Advantage: Easy to perform
  Disadvantage: You lose the information from an observation
Technique: Set value to null
  Advantage: Easy to perform
  Disadvantage: Not every modeling technique and/or implementation can handle null values
Technique: Impute a static value such as 0 or the mean
  Advantage: Easy to perform; you don't lose information from the other variables in the observation
  Disadvantage: Can lead to false estimations from a model
Technique: Impute a value from an estimated or theoretical distribution
  Advantage: Does not disturb the model as much
  Disadvantage: Harder to execute; you make data assumptions
Technique: Modeling the value (nondependent)
  Advantage: Does not disturb the model too much
  Disadvantage: Can lead to too much confidence in the model; can artificially raise dependence among the variables; harder to execute; you make data assumptions

Which technique to use at what time depends on your particular case. If, for instance, you don't have observations to spare, omitting an observation is probably not an option. If the variable can be described by a stable distribution, you could impute based on this. However, maybe a missing value actually means "zero"? This can be the case in sales: if no promotion is applied on a customer basket, that customer's promo is missing, but most likely it's also no price cut.

Deviations from a code book
Detecting errors in larger data sets against a code book or against standardized values can be done with the help of set operations. A code book is a description of your data, a form of metadata. It contains things such as the number of variables per observation, the number of observations, and what each encoding within a variable means (for instance, which code stands for "negative" and which for "very positive"). A code book also tells the type of data you're looking at: is it hierarchical, graph, something else?

You look at those values that are present in set A but not in set B. These are values that should be corrected. It's no coincidence that sets are the data structure that we'll use when we're working in code. It's a good habit to give your data structures additional thought; it can save work and improve the performance of your program. If you have multiple values to check, it's better to put them from the code book into a table and use a difference operator to check the discrepancy between both tables. This way, you can profit from the power of a database directly. More on this later in the book.
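A short sketch (not from the book) of the set-based check described above, plus one of the imputation techniques from the table; the codes and column names are invented.

import pandas as pd

codebook_codes = {"NEG", "NEU", "POS"}          # allowed encodings from the code book
df = pd.DataFrame({"sentiment": ["NEG", "POS", "PSO", None],
                   "promo":     [5.0, None, 2.5, 0.0]})

# Values present in the data but not in the code book: these should be corrected.
unknown_codes = set(df["sentiment"].dropna()) - codebook_codes
print(unknown_codes)                            # {'PSO'}

# One way to handle missing values: impute a static value (here 0, assuming a
# missing promo really means "no price cut").
df["promo"] = df["promo"].fillna(0)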
Different units of measurement
When integrating two data sets, you have to pay attention to their respective units of measurement. An example of this would be when you study the prices of gasoline in the world. To do this you gather data from different data providers. Some data sets can contain prices per gallon and others can contain prices per liter. A simple conversion will do the trick in this case.

Different levels of aggregation
Having different levels of aggregation is similar to having different types of measurement. An example of this would be a data set containing data per week versus one containing data per work week. This type of error is generally easy to detect, and summarizing (or the inverse, expanding) the data sets will fix it.

After cleaning the data errors, you combine information from different data sources. But before we tackle this topic, we'll take a little detour and stress the importance of cleaning data as early as possible.

Correct errors as early as possible
A good practice is to mediate data errors as early as possible in the data collection chain and to fix as little as possible inside your program, fixing the origin of the problem instead. Retrieving data is a difficult task, and organizations spend millions of dollars on it in the hope of making better decisions. The data collection process is error-prone, and in a big organization it involves many steps and teams. Data should be cleansed when acquired for many reasons:

- Not everyone spots the data anomalies. Decision-makers may make costly mistakes based on information from applications that fail to correct for the faulty data.
- If errors are not corrected early on in the process, the cleansing will have to be done for every project that uses that data.
- Data errors may point to a business process that isn't working as designed. For instance, both authors worked at a retailer in the past, and they designed a couponing system to attract more people and make a higher profit. During a data science project, we discovered clients who abused the couponing system and earned money while purchasing groceries. The goal of the couponing system was to stimulate cross-selling, not to give products away for free. This flaw cost the company money, and nobody in the company was aware of it. In this case the data wasn't technically wrong but came with unexpected results.
- Data errors may point to defective equipment, such as broken transmission lines and defective sensors.
- Data errors can point to bugs in software or in the integration of software that may be critical to the company. While doing a small project at a bank, we discovered that two software applications used different locale settings. This caused problems with numbers: for one app a value such as 1.000 meant one, and for the other it meant one thousand.
Fixing the data as soon as it's captured is nice in a perfect world. Sadly, a data scientist doesn't always have a say in the data collection, and simply telling the IT department to fix certain things may not make it so. If you can't correct the data at the source, you'll need to handle it inside your code. Data manipulation doesn't end with correcting mistakes; you still need to combine your incoming data.

As a final remark: always keep a copy of your original data (if possible). Sometimes you start cleaning data but you'll make mistakes: impute variables in the wrong way, delete outliers that had interesting additional information, or alter data as the result of an initial misinterpretation. If you keep a copy you get to try again. For "flowing data" that's manipulated at the time of arrival, this isn't always possible, and you'll have to accept a period of tweaking before you get to use the data you are capturing. One of the more difficult things isn't the data cleansing of individual data sets, however; it's combining different sources into a whole that makes more sense.

Combining data from different data sources
Your data comes from several different places, and in this substep we focus on integrating these different sources. Data varies in size, type, and structure, ranging from databases and Excel files to text documents. We focus on data in table structures in this chapter for the sake of brevity. It's easy to fill entire books on this topic alone, and we choose to focus on the data science process instead of presenting scenarios for every type of data. But keep in mind that other types of data sources exist, such as key-value stores, document stores, and so on, which we'll handle in more appropriate places in the book.

The different ways of combining data
You can perform two operations to combine information from different data sets. The first operation is joining: enriching an observation from one table with information from another table. The second operation is appending or stacking: adding the observations of one table to those of another table. When you combine data, you have the option to create a new physical table or a virtual table by creating a view. The advantage of a view is that it doesn't consume more disk space. Let's elaborate a bit on these methods.

Joining tables
Joining tables allows you to combine the information of one observation found in one table with the information that you find in another table. The focus is on enriching a single observation. Let's say that the first table contains information about the purchases of a customer and the other table contains information about the region where your customer lives. Joining the tables allows you to combine the information so that you can use it for your model, as shown in the figure below. To join tables, you use variables that represent the same object in both tables, such as a date, a country name, or a Social Security number. These common fields are known as keys.
When these keys also uniquely define the records in the table, they are called primary keys. One table may have buying behavior and the other table may have demographic information on a person. In the figure below, both tables contain the client name, and this makes it easy to enrich the client expenditures with the region of the client. People who are acquainted with Excel will notice the similarity with using a lookup function. The number of resulting rows in the output table depends on the exact join type that you use. We introduce the different types of joins later in the book.

[Figure: Joining two tables on a common key to enrich one with the other.
Purchases table (Client, Item, Month): John Doe, Coca-Cola, January; Jackie Qi, Pepsi-Cola, January.
Regions table (Client, Region): John Doe, NY; Jackie Qi, NC.
Joined result (Client, Item, Month, Region): John Doe, Coca-Cola, January, NY; Jackie Qi, Pepsi-Cola, January, NC.]

Appending tables
Appending or stacking tables is effectively adding observations from one table to another table. The figure below shows an example of appending tables: one table contains the observations from the month January and the second table contains observations from the month February. The result of appending these tables is a larger one with the observations from January as well as February. The equivalent operation in set theory would be the union, and this is also the command in SQL, the common language of relational databases. Other set operators are also used in data science, such as set difference and intersection.

[Figure: Appending data from tables is a common operation but requires an equal structure in the tables being appended.
January table (Client, Item, Month): John Doe, Coca-Cola, January; Jackie Qi, Pepsi-Cola, January.
February table (Client, Item, Month): John Doe, Zero-Cola, February; Jackie Qi, Maxi-Cola, February.
Appended result: all four rows in a single table.]
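A minimal sketch (not from the book) of both operations with pandas, using the toy tables from the figures; pandas is an assumption here, since the same operations could equally be done in a database.

import pandas as pd

purchases = pd.DataFrame({"client": ["John Doe", "Jackie Qi"],
                          "item":   ["Coca-Cola", "Pepsi-Cola"],
                          "month":  ["January", "January"]})
regions   = pd.DataFrame({"client": ["John Doe", "Jackie Qi"],
                          "region": ["NY", "NC"]})
february  = pd.DataFrame({"client": ["John Doe", "Jackie Qi"],
                          "item":   ["Zero-Cola", "Maxi-Cola"],
                          "month":  ["February", "February"]})

# Joining: enrich each purchase with the client's region (the client name is the key).
enriched = purchases.merge(regions, on="client", how="left")

# Appending (stacking): add the February observations below the January ones.
all_months = pd.concat([purchases, february], ignore_index=True)

print(enriched)
print(all_months)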
Using views to simulate data joins and appends
To avoid duplication of data, you virtually combine data with views. In the previous example we took the monthly data and combined it in a new physical table. The problem is that we duplicated the data and therefore needed more storage space. In the example we're working with, that may not cause problems, but imagine that every table consists of terabytes of data; then it becomes problematic to duplicate the data. For this reason, the concept of a view was invented. A view behaves as if you're working on a table, but this table is nothing but a virtual layer that combines the underlying tables for you. The figure below shows how the sales data from the different months is combined virtually into a yearly sales table instead of duplicating the data. Views do come with a drawback, however: while a table join is performed only once, the join that creates a view is recreated every time it's queried, using more processing power than a precalculated table would.

[Figure: A view helps you combine data without replication. The physical tables (January sales, February sales, ..., December sales, each with observation and date columns) are combined virtually into a Yearly sales view.]

Enriching aggregated measures
Data enrichment can also be done by adding calculated information to the table, such as the total number of sales or what percentage of total stock has been sold in a certain region (see the figure below). Extra measures such as these can add perspective. With an aggregated data set we can, in turn, calculate the participation of each product within its category. This could be useful during data exploration but more so when creating data models.
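A small sketch (not from the book) of such a derived measure: computing each product's share of sales within its product class. The numbers and column names are invented.

import pandas as pd

sales = pd.DataFrame({"product_class": ["Sport", "Sport", "Shoes", "Shoes"],
                      "product":       ["A", "B", "C", "D"],
                      "sales":         [95, 60, 75, 45]})

# Aggregate measure: total sales of the product class, repeated on every row of that class.
sales["sales_by_class"] = sales.groupby("product_class")["sales"].transform("sum")

# Relative (derived) measure: the product's share within its class.
sales["share_in_class"] = sales["sales"] / sales["sales_by_class"]
print(sales)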
[Figure: Growth, sales by product class, and sales rank are examples of derived and aggregate measures. The table in the figure lists, per product class and product, its sales, its growth, the total sales of its product class, and its sales rank.]

As always this depends on the exact case, but from our experience, models that use relative measures such as % sales (quantity of product sold / total quantity sold) tend to outperform models that use the raw numbers (quantity sold) as input.

Transforming data
Certain models require their data to be in a certain shape. Now that you've cleansed and integrated the data, this is the next task you'll perform: transforming your data so it takes a suitable form for data modeling.

Relationships between an input variable and an output variable aren't always linear. Take, for instance, a relationship of the form y = ae^(bx). Taking the log of the independent variables simplifies the estimation problem dramatically. The figure below shows how transforming x to log(x) makes the relationship between x and y appear linear.

[Figure: Transforming x to log x makes the relationship between x and y linear (right), compared with the non-log version (left).]
Transforming the input variables greatly simplifies the estimation problem. Other times you might want to combine two variables into a new variable.

Reducing the number of variables
Sometimes you have too many variables and need to reduce the number because they don't add new information to the model. Having too many variables in your model makes the model difficult to handle, and certain techniques don't perform well when you overload them with too many input variables. For instance, all the techniques based on the Euclidean distance perform well only up to a limited number of variables.

Euclidean distance
Euclidean or "ordinary" distance is an extension of one of the first things anyone learns in mathematics about triangles (trigonometry): Pythagoras's theorem. If you know the lengths of the two sides next to the 90-degree angle of a right-angled triangle, you can easily derive the length of the remaining side (the hypotenuse). The formula for this is

hypotenuse = sqrt(side1^2 + side2^2)

The Euclidean distance between two points in a two-dimensional plane is calculated using a similar formula:

distance = sqrt((x1 - x2)^2 + (y1 - y2)^2)

If you want to expand this distance calculation to more dimensions, add the coordinates of the points within those higher dimensions to the formula. For three dimensions we get

distance = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)

Data scientists use special methods to reduce the number of variables while retaining the maximum amount of information. We'll discuss several of these methods later in the book. The figure below shows how reducing the number of variables makes it easier to understand the key values of a data set.

[Figure: Variable reduction allows you to reduce the number of variables while maintaining as much information as possible. The axes show the first two principal components and the percentage of the variation each one explains.]
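A minimal sketch (not from the book) of this kind of variable reduction using principal components analysis from scikit-learn; the randomly generated data stands in for a real data set.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in data set: 100 observations of 10 correlated variables.
rng = np.random.RandomState(0)
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))

# Reduce the 10 variables to 2 principal components that retain most of the variation.
pca = PCA(n_components=2)
components = pca.fit_transform(X)

print(pca.explained_variance_ratio_)  # share of the variation captured by each component
print(components[:5])                 # the first observations in the reduced space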
The figure also shows how much of the variation within the data set is accounted for by the first two new variables. These variables, called "component1" and "component2", are both combinations of the original variables. They're the principal components of the underlying data structure. If it isn't all that clear at this point, don't worry: principal components analysis (PCA) will be explained more thoroughly later in the book. What you can also see is the presence of a third (unknown) variable that splits the group of observations into two.

Turning variables into dummies
Variables can be turned into dummy variables (see the figure below). Dummy variables can only take two values: true (1) or false (0). They're used to indicate the presence or absence of a categorical effect that may explain the observation. In this case you'll make separate columns for the classes stored in one variable and indicate it with 1 if the class is present and 0 otherwise. An example is turning one column named Weekdays into the columns Monday through Sunday: you use an indicator to show if the observation was on a Monday, putting 1 on Monday and 0 elsewhere. Turning variables into dummies is a technique that's used in modeling and is popular with, but not exclusive to, economists.

[Figure: Turning variables into dummies is a data transformation that breaks a variable with multiple classes into multiple variables, each having only two possible values, 0 or 1. Here a table with customer, year, gender, and sales becomes a table with customer, year, sales, and separate male and female indicator columns.]
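A one-line version of this transformation is available in pandas; the tiny data frame below is invented for the sketch and is not from the book.

import pandas as pd

df = pd.DataFrame({"customer": [1, 2, 3],
                   "gender":   ["male", "female", "male"],
                   "sales":    [120, 95, 180]})

# Break the multi-class 'gender' column into one 0/1 indicator column per class.
dummies = pd.get_dummies(df["gender"])
df = pd.concat([df.drop(columns="gender"), dummies], axis=1)
print(df)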
In this section we introduced the third step in the data science process, cleansing, transforming, and integrating data, which changes your raw data into usable input for the modeling phase. The next step in the data science process is to get a better understanding of the content of the data and the relationships between the variables and observations; we explore this in the next section.

Step 4: Exploratory data analysis
During exploratory data analysis you take a deep dive into the data (see the figure below). Information becomes much easier to grasp when shown in a picture; therefore you mainly use graphical techniques to gain an understanding of your data and the interactions between variables. This phase is about exploring data, so keeping your mind open and your eyes peeled is essential. The goal isn't to cleanse the data, but it's common that you'll still discover anomalies you missed before, forcing you to take a step back and fix them.

[Figure: Step 4 of the data science process: data exploration (simple graphs, combined graphs, link and brush, nongraphical techniques).]

The visualization techniques you use in this phase range from simple line graphs or histograms, as shown in the figure below, to more complex diagrams such as Sankey and network graphs. Sometimes it's useful to compose a composite graph from simple graphs to get even more insight into the data. Other times the graphs can be animated or made interactive to make it easier and, let's admit it, way more fun. An example of an interactive Sankey diagram can be found at https://bost.ocks.org/mike/sankey/. Mike Bostock has interactive examples of almost any type of graph. It's worth spending time on his website, though most of his examples are more useful for data presentation than data exploration.
16,733 | figure: from top to bottom, a bar chart, a line plot, and a distribution (density) plot are some of the graphs used in exploratory analysis.
16,734 | step exploratory data analysis these plots can be combined to provide even more insightas shown in figure overlaying several plots is common practice in figure we combine simple graphs into pareto diagramor - diagram figure shows another techniquebrushing and linking with brushing and linking you combine and link different graphs and tables (or viewsso changes in one graph are automatically transferred to the other graphs an elaborate example of this can be found in this interactive exploration of data facilitates the discovery of new insights figure shows the average score per country for questions not only does this indicate high correlation between the answersbut it' easy to see that when you select several points on subplotthe points will correspond to similar points on the other graphs in this case the selected points on the left graph correspond to points on the middle and right graphsalthough they correspond better in the middle and right graphs - - - - - - - - figure drawing multiple plots together can help you understand the structure of your data over multiple variables |
16,735 | the data science process countries countries country figure pareto diagram is combination of the values and cumulative distribution it' easy to see from this diagram that the first of the countries contain slightly less than of the total amount if this graph represented customer buying power and we sell expensive productswe probably don' need to spend our marketing budget in every countrywe could start with the first figure link and brush allows you to select observations in one plot and highlight the same observations in the other plots two other important graphs are the histogram shown in figure and the boxplot shown in figure in histogram variable is cut into discrete categories and the number of occurrences in each category are summed up and shown in the graph the boxploton the other handdoesn' show how many observations are present but does offer an impression of the distribution within categories it can show the maximumminimummedianand other characterizing measures at the same time |
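The Pareto diagram discussed above combines sorted values with their cumulative share; the following small sketch (invented country values, not the book's data) shows one way to draw it with matplotlib and NumPy:

import numpy as np
import matplotlib.pyplot as plt

# Invented "value per country", already sorted from large to small
values = np.array([50, 30, 24, 20, 10, 8, 5, 3, 2, 1], dtype=float)
cumulative_share = np.cumsum(values) / values.sum()

fig, ax1 = plt.subplots()
ax1.bar(range(len(values)), values)   # bars: the value of each country
ax1.set_xlabel("countries (sorted)")
ax1.set_ylabel("value")
ax2 = ax1.twinx()                     # second y-axis for the cumulative share
ax2.plot(range(len(values)), cumulative_share, color="red", marker="o")
ax2.set_ylabel("cumulative share")
plt.show()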
16,736 | figure: example histogram, showing the number of people in each age group (ages binned into equal-width year intervals). The techniques we described in this phase are mainly visual, but in practice they're certainly not limited to visualization techniques. Tabulation, clustering, and other modeling techniques can also be part of exploratory analysis; even building simple models can be part of this step. Now that you've finished the data exploration phase and you've gained a good grasp of your data, it's time to move on to the next phase: building models. figure: example boxplot, where each user category has a distribution of the appreciation each has for a certain picture on a photography website.
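A boxplot per category, like the appreciation-score example in the figure above, takes only a few lines; this sketch uses randomly generated scores rather than the book's photography data:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
# Random appreciation scores for three invented user categories
scores = [np.random.normal(loc=m, scale=1.0, size=200) for m in (5, 6, 7)]

plt.boxplot(scores)
plt.xticks([1, 2, 3], ["category 1", "category 2", "category 3"])
plt.ylabel("appreciation score")
plt.show()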
16,737 | the data science process step build the models with clean data in place and good understanding of the contentyou're ready to build models with the goal of making better predictionsclassifying objectsor gaining an understanding of the system that you're modeling this phase is much more focused than the exploratory analysis stepbecause you know what you're looking for and what you want the outcome to be figure shows the components of model building data science process setting the research goal retrieving data data preparation data exploration model and variable selection data modeling model execution model diagnostic and model comparison presentation and automation figure step data modeling the techniques you'll use now are borrowed from the field of machine learningdata miningand/or statistics in this we only explore the tip of the iceberg of existing techniqueswhile introduces them properly it' beyond the scope of this book to give you more than conceptual introductionbut it' enough to get you started of the techniques will help you in of the cases because techniques overlap in what they try to accomplish they often achieve their goals in similar but slightly different ways building model is an iterative process the way you build your model depends on whether you go with classic statistics or the somewhat more recent machine learning schooland the type of technique you want to use either waymost models consist of the following main steps selection of modeling technique and variables to enter in the model execution of the model diagnosis and model comparison model and variable selection you'll need to select the variables you want to include in your model and modeling technique your findings from the exploratory analysis should already give fair idea |
16,738 | step build the models of what variables will help you construct good model many modeling techniques are availableand choosing the right model for problem requires judgment on your part you'll need to consider model performance and whether your project meets all the requirements to use your modelas well as other factorsmust the model be moved to production environment andif sowould it be easy to implementhow difficult is the maintenance on the modelhow long will it remain relevant if left untoucheddoes the model need to be easy to explainwhen the thinking is doneit' time for action model execution once you've chosen model you'll need to implement it in code remark this is the first time we'll go into actual python code execution so make sure you have virtual env up and running knowing how to set this up is required knowledgebut if it' your first timecheck out appendix all code from this can be downloaded from com/books/introducing-data-science this comes with an ipython ipynbnotebook and python pyfile luckilymost programming languagessuch as pythonalready have libraries such as statsmodels or scikit-learn these packages use several of the most popular techniques coding model is nontrivial task in most casesso having these libraries available can speed up the process as you can see in the following codeit' fairly easy to use linear regression (figure with statsmodels or scikit-learn doing this yourself would require much more effort even for the simple techniques the following listing shows the execution of linear prediction model listing executing linear prediction model on semi-random data import statsmodels api as sm imports required import numpy as np python modules predictors np random random( reshape( , target predictors dot(np array([ ])np random random( lmregmodel sm ols(target,predictorsresult lmregmodel fit(fits linear creates random data for result summary(regression predictors ( -valuesand shows model fit statistics on data semi-random data for the target ( -valuesof the model we use predictors as input to create the target so we infer correlation here |
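Because the listing above lost its exact array sizes and coefficients in reproduction, here is a hedged, runnable reconstruction of the same idea: generate semi-random data and fit an ordinary least squares model with StatsModels. The sample size and the coefficient vector are assumptions for illustration, not necessarily the book's exact values.

import numpy as np
import statsmodels.api as sm

# Semi-random data: 500 observations with 2 predictor variables (sizes assumed)
predictors = np.random.random(1000).reshape(500, 2)
# The target is a linear combination of the predictors plus noise, so a
# correlation is built in on purpose (the coefficients 0.4 and 0.6 are assumed)
target = predictors.dot(np.array([0.4, 0.6])) + np.random.random(500)

lm_reg_model = sm.OLS(target, predictors)   # ordinary least squares regression
result = lm_reg_model.fit()                 # fit the linear regression to the data
print(result.summary())                     # model fit statistics, coefficients, p-values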
16,739 | the data science process (target variablex (predictor variablefigure linear regression tries to fit line while minimizing the distance to each point okaywe cheated herequite heavily so we created predictor values that are meant to predict how the target variables behave for linear regressiona "linear relationbetween each (predictorand the (targetvariable is assumedas shown in figure wehowevercreated the target variablebased on the predictor by adding bit of randomness it shouldn' come as surprise that this gives us well-fitting model the results summary(outputs the table in figure mind youthe exact outcome depends on the random variables you got model fithigher is better but too high is suspicious -value to show whether predictor variable has significant influence on the target lower is better and < is often considered "significant linear equation coefficients xl figure linear regression model information output |
16,740 | let' ignore most of the output we got here and focus on the most important partsmodel fit--for this the -squared or adjusted -squared is used this measure is an indication of the amount of variation in the data that gets captured by the model the difference between the adjusted -squared and the -squared is minimal here because the adjusted one is the normal one penalty for model complexity model gets complex when many variables (or featuresare introduced you don' need complex model if simple model is availableso the adjusted -squared punishes you for overcomplicating at any rate is highand it should be because we cheated rules of thumb existbut for models in businessesmodels above are often considered good if you want to win competition you need in the high for research howeveroften very low model fits (< evenare found what' more important there is the influence of the introduced predictor variables predictor variables have coefficient--for linear model this is easy to interpret in our example if you add " to it will change by " it' easy to see how finding good predictor can be your route to nobel prize even though your model as whole is rubbish iffor instanceyou determine that certain gene is significant as cause for cancerthis is important knowledgeeven if that gene in itself doesn' determine whether person will get cancer the example here is classificationnot regressionbut the point remains the samedetecting influences is more important in scientific studies than perfectly fitting models (not to mention more realisticbut when do we know gene has that impactthis is called significance predictor significance--coefficients are greatbut sometimes not enough evidence exists to show that the influence is there this is what the -value is about long explanation about type and type mistakes is possible here but the short explanations would beif the -value is lower than the variable is considered significant for most people in truththis is an arbitrary number it means there' chance the predictor doesn' have any influence do you accept this chance to be wrongthat' up to you several people introduced the extremely significant ( < and marginally significant thresholds ( < linear regression works if you want to predict valuebut what if you want to classify somethingthen you go to classification modelsthe best known among them being -nearest neighbors as shown in figure -nearest neighbors looks at labeled points nearby an unlabeled point andbased on thismakes prediction of what the label should be |
16,741 | ( -nearest neighbor figure the data science process ( -nearest neighbor ( -nearest neighbor -nearest neighbor techniques look at the -nearest point to make prediction let' try it in python code using the scikit learn libraryas in this next listing listing executing -nearest neighbor classification on semi-random data imports modules creates random predictor from sklearn import neighbors data and semi-random predictors np random random( reshape( , target data based on target np around(predictors dot(np array([ ])predictor data np random random( )clf neighbors kneighborsclassifier(n_neighbors= fits -nearest knn clf fit(predictors,targetneighbors model knn score(predictorstargetgets model fit scorewhat percent of the classification was correctas beforewe construct random correlated data and surprisesurprise we get of cases correctly classified if we want to look in depthwe need to score the model don' let knn score(fool youit returns the model accuracybut by "scoring modelwe often mean applying it on data to make prediction prediction knn predict(predictorsnow we can use the prediction and compare it to the real thing using confusion matrix metrics confusion_matrix(target,predictionwe get -by- matrix as shown in figure |
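As with the regression listing, the k-nearest neighbors listing above lost its exact numbers in reproduction; this is a hedged, runnable reconstruction, including the confusion-matrix call, with the sample size, coefficients, and k assumed for illustration:

import numpy as np
from sklearn import neighbors, metrics

# Semi-random data: the classes are derived from the predictors on purpose
predictors = np.random.random(1000).reshape(500, 2)
target = np.around(predictors.dot(np.array([0.4, 0.6])) + np.random.random(500))

clf = neighbors.KNeighborsClassifier(n_neighbors=10)   # k = 10 is an assumption
knn = clf.fit(predictors, target)

print(knn.score(predictors, target))   # share of correctly classified observations
prediction = knn.predict(predictors)   # "scoring" the model: predict the classes
print(metrics.confusion_matrix(target, prediction))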
16,742 | step build the models predicted value actual value number of correctly predicted cases figure confusion matrixit shows how many cases were correctly classified and incorrectly classified by comparing the prediction with the real values remarkthe classes ( , , were added in the figure for clarification the confusion matrix shows we have correctly predicted + + casesso that' good but is it really surprisenofor the following reasonsfor onethe classifier had but three optionsmarking the difference with last time np around(will round the data to its nearest integer in this case that' either or with only optionsyou can' do much worse than correct on guesseseven for real random distribution like flipping coin secondwe cheated againcorrelating the response variable with the predictors because of the way we did thiswe get most observations being " by guessing " for every case we' already have similar result we compared the prediction with the real valuestruebut we never predicted based on fresh data the prediction was done using the same data as the data used to build the model this is all fine and dandy to make yourself feel goodbut it gives you no indication of whether your model will work when it encounters truly new data for this we need holdout sampleas will be discussed in the next section don' be fooled typing this code won' work miracles by itself it might take while to get the modeling part and all its parameters right to be honestonly handful of techniques have industry-ready implementations in python but it' fairly easy to use models that are available in within python with the help of the rpy library rpy provides an interface from python to is free software environmentwidely used for statistical computing if you haven' alreadyit' worth at least lookbecause in it was still one of the most popular |
16,743 | (if not the most popular) programming languages for data science; for more information, see the R project's homepage. Model diagnostics and model comparison: you'll be building multiple models from which you then choose the best one based on multiple criteria. Working with a holdout sample helps you pick the best-performing model. A holdout sample is a part of the data you leave out of the model building so it can be used to evaluate the model afterward. The principle here is simple: the model should work on unseen data. You use only a fraction of your data to estimate the model and the other part, the holdout sample, is kept out of the equation. The model is then unleashed on the unseen data and error measures are calculated to evaluate it. Multiple error measures are available, and the figure below shows the general idea behind comparing models. The error measure used in the example is the mean square error: MSE = (1/n) * sum over i = 1..n of (y_i - yhat_i)^2. Mean square error is a simple measure: check for every prediction how far it was from the truth, square this error, and add up the error of every prediction. figure: a holdout sample helps you compare models and ensures that you can generalize results to data that the model has not yet seen; for each training and test observation the table lists the actual size, the price, the size predicted by each of the two models, and each model's error, with the errors totaled per model. The figure compares the performance of two models to predict the order size from the price; the first model is size = ... * price and the second model is size = ... . To
16,744 | estimate the modelswe use randomly chosen observations out of , (or %)without showing the other of data to the model once the model is trainedwe predict the values for the other of the variables based on those for which we already know the true valueand calculate the model error with an error measure then we choose the model with the lowest error in this example we chose model because it has the lowest total error many models make strong assumptionssuch as independence of the inputsand you have to verify that these assumptions are indeed met this is called model diagnostics this section gave short introduction to the steps required to build valid model once you have working model you're ready to go to the last step step presenting findings and building applications on top of them after you've successfully analyzed the data and built well-performing modelyou're ready to present your findings to the world (figure this is an exciting partall your hours of hard work have paid off and you can explain what you found to the stakeholders data science process setting the research goal retrieving data data preparation data exploration data modeling presenting data presentation and automation automating data analysis figure step presentation and automation sometimes people get so excited about your work that you'll need to repeat it over and over again because they value the predictions of your models or the insights that you produced for this reasonyou need to automate your models this doesn' always mean that you have to redo all of your analysis all the time sometimes it' sufficient that you implement only the model scoringother times you might build an application that automatically updates reportsexcel spreadsheetsor powerpoint presentations the last stage of the data science process is where your soft skills will be most usefuland yesthey're extremely important in factwe recommend you find dedicated books and other information on the subject and work through thembecause why bother doing all this tough work if nobody listens to what you have to say |
16,745 | the data science process if you've done this rightyou now have working model and satisfied stakeholdersso we can conclude this here summary in this you learned the data science process consists of six stepssetting the research goal--defining the whatthe whyand the how of your project in project charter retrieving data--finding and getting access to data needed in your project this data is either found within the company or retrieved from third party data preparation--checking and remediating data errorsenriching the data with data from other data sourcesand transforming it into suitable format for your models data exploration--diving deeper into your data using descriptive statistics and visual techniques data modeling--using machine learning and statistical techniques to achieve your project goal presentation and automation--presenting your results to the stakeholders and industrializing your analysis process for repetitive reuse and integration with other tools |
16,746 | this covers understanding why data scientists use machine learning identifying the most important python libraries for machine learning discussing the process for model building using machine learning techniques gaining hands-on experience with machine learning do you know how computers learn to protect you from malicious personscomputers filter out more than of your emails and can learn to do an even better job at protecting you over time can you explicitly teach computer to recognize persons in pictureit' possible but impractical to encode all the possible ways to recognize personbut you'll soon see that the possibilities are nearly endless to succeedyou'll need to add new skill to your toolkitmachine learningwhich is the topic of this |
16,747 | machine learning what is machine learning and why should you care about it"machine learning is field of study that gives computers the ability to learn without being explicitly programmed --arthur samuel the definition of machine learning coined by arthur samuel is often quoted and is genius in its broadnessbut it leaves you with the question of how the computer learns to achieve machine learningexperts develop general-purpose algorithms that can be used on large classes of learning problems when you want to solve specific task you only need to feed the algorithm more specific data in wayyou're programming by example in most cases computer will use data as its source of information and compare its output to desired output and then correct for it the more data or "experiencethe computer getsthe better it becomes at its designated joblike human does when machine learning is seen as processthe following definition is insightful"machine learning is the process by which computer can work more accurately as it collects and learns from the data it is given --mike roberts for exampleas user writes more text messages on phonethe phone learns more about the messagescommon vocabulary and can predict (autocompletetheir words faster and more accurately in the broader field of sciencemachine learning is subfield of artificial intelligence and is closely related to applied mathematics and statistics all this might sound bit abstractbut machine learning has many applications in everyday life applications for machine learning in data science regression and classification are of primary importance to data scientist to achieve these goalsone of the main tools data scientist uses is machine learning the uses for regression and automatic classification are wide rangingsuch as the following finding oil fieldsgold minesor archeological sites based on existing sites (classification and regressionfinding place names or persons in text (classificationidentifying people based on pictures or voice recordings (classificationrecognizing birds based on their whistle (classificationalthough the following paper is often cited as the source of this quoteit' not present in reprint of that paper the authors were unable to verify or find the exact source of this quote see arthur samuel"some studies in machine learning using the game of checkers,ibm journal of research and development no ( ): - mike roberts is the technical editor of this book thank youmike |
16,748 | identifying profitable customers (regression and classificationproactively identifying car parts that are likely to fail (regressionidentifying tumors and diseases (classificationpredicting the amount of money person will spend on product (regressionpredicting the number of eruptions of volcano in period (regressionpredicting your company' yearly revenue (regressionpredicting which team will win the champions league in soccer (classificationoccasionally data scientists build model (an abstraction of realitythat provides insight to the underlying processes of phenomenon when the goal of model isn' prediction but interpretationit' called root cause analysis here are few examplesunderstanding and optimizing business processsuch as determining which products add value to product line discovering what causes diabetes determining the causes of traffic jams this list of machine learning applications can only be seen as an appetizer because it' ubiquitous within data science regression and classification are two important techniquesbut the repertoire and the applications don' endwith clustering as one other example of valuable technique machine learning techniques can be used throughout the data science processas we'll discuss in the next section where machine learning is used in the data science process although machine learning is mainly linked to the data-modeling step of the data science processit can be used at almost every step to refresh your memory from previous the data science process is shown in figure data science process setting the research goal retrieving data data preparation data exploration model and variable selection data modeling model execution model diagnostic and model comparison presentation and automation figure the data science process |
16,749 | The data modeling phase can't start until you have qualitative raw data you can understand. But prior to that, the data preparation phase can benefit from the use of machine learning. An example would be cleansing a list of text strings; machine learning can group similar strings together so it becomes easier to correct spelling errors. Machine learning is also useful when exploring data: algorithms can root out underlying patterns in the data where they'd be difficult to find with only charts. Given that machine learning is useful throughout the data science process, it shouldn't come as a surprise that a considerable number of Python libraries were developed to make your life a bit easier. Python tools used in machine learning: Python has an overwhelming number of packages that can be used in a machine learning setting. The Python machine learning ecosystem can be divided into three main types of packages, as shown in the figure below. figure: overview of Python packages used during the machine-learning phase, grouped into packages for data and modeling when the data fits in memory (such as NumPy, SciPy, pandas, matplotlib, SymPy, StatsModels, Scikit-learn, RPy, NLTK, and Theano), packages for optimizing your code (such as Numba, NumbaPro, PyCUDA, Cython, Blaze, Dispy, IPCluster, and PP), and packages for working with big data frameworks (such as PyDoop, PySpark, and Hadoopy).
16,750 | the first type of package shown in figure is mainly used in simple tasks and when data fits into memory the second type is used to optimize your code when you've finished prototyping and run into speed or memory issues the third type is specific to using python with big data technologies packages for working with data in memory when prototypingthe following packages can get you started by providing advanced functionalities with few lines of codescipy is library that integrates fundamental packages often used in scientific computing such as numpymatplotlibpandasand sympy numpy gives you access to powerful array functions and linear algebra functions matplotlib is popular plotting package with some functionality pandas is high-performancebut easy-to-usedata-wrangling package it introduces dataframes to pythona type of in-memory data table it' concept that should sound familiar to regular users of sympy is package used for symbolic mathematics and computer algebra statsmodels is package for statistical methods and algorithms scikit-learn is library filled with machine learning algorithms rpy allows you to call functions from within python is popular open source statistics program nltk (natural language toolkitis python toolkit with focus on text analytics these libraries are good to get started withbut once you make the decision to run certain python program at frequent intervalsperformance comes into play optimizing operations once your application moves into productionthe libraries listed here can help you deliver the speed you need sometimes this involves connecting to big data infrastructures such as hadoop and spark numba and numbapro--these use just-in-time compilation to speed up applications written directly in python and few annotations numbapro also allows you to use the power of your graphics processor unit (gpupycuda --this allows you to write code that will be executed on the gpu instead of your cpu and is therefore ideal for calculation-heavy applications it works best with problems that lend themselves to being parallelized and need little input compared to the number of required computing cycles an example is studying the robustness of your predictions by calculating thousands of different outcomes based on single start state cythonor for python--this brings the programming language to python is lower-level languageso the code is closer to what the computer eventually uses (bytecodethe closer code is to bits and bytesthe faster it executes computer is also faster when it knows the type of variable (called static typingpython wasn' designed to do thisand cython helps you to overcome this shortfall |
16,751 | machine learning blaze --blaze gives you data structures that can be bigger than your computer' main memoryenabling you to work with large data sets dispy and ipcluster --these packages allow you to write code that can be distributed over cluster of computers pp --python is executed as single process by default with the help of pp you can parallelize computations on single machine or over clusters pydoop and hadoopy--these connect python to hadoopa common big data framework pyspark--this connects python and sparkan in-memory big data framework now that you've seen an overview of the available librarieslet' look at the modeling process itself the modeling process the modeling phase consists of four steps feature engineering and model selection training the model model validation and selection applying the trained model to unseen data before you find good modelyou'll probably iterate among the first three steps the last step isn' always present because sometimes the goal isn' prediction but explanation (root cause analysisfor instanceyou might want to find out the causes of speciesextinctions but not necessarily predict which one is next in line to leave our planet it' possible to chain or combine multiple techniques when you chain multiple modelsthe output of the first model becomes an input for the second model when you combine multiple modelsyou train them independently and combine their results this last technique is also known as ensemble learning model consists of constructs of information called features or predictors and target or response variable your model' goal is to predict the target variablefor exampletomorrow' high temperature the variables that help you do this and are (usuallyknown to you are the features or predictor variables such as today' temperaturecloud movementscurrent wind speedand so on the best models are those that accurately represent realitypreferably while staying concise and interpretable to achieve thisfeature engineering is the most important and arguably most interesting part of modeling for examplean important feature in model that tried to explain the extinction of large land animals in the last , years in australia turned out to be the population number and spread of humans engineering features and selecting model with engineering featuresyou must come up with and create possible predictors for the model this is one of the most important steps in the process because model |
16,752 | recombines these features to achieve its predictions often you may need to consult an expert or the appropriate literature to come up with meaningful features certain features are the variables you get from data setas is the case with the provided data sets in our exercises and in most school exercises in practice you'll need to find the features yourselfwhich may be scattered among different data sets in several projects we had to bring together more than different data sources before we had the raw data we required often you'll need to apply transformation to an input before it becomes good predictor or to combine multiple inputs an example of combining multiple inputs would be interaction variablesthe impact of either single variable is lowbut if both are present their impact becomes immense this is especially true in chemical and medical environments for examplealthough vinegar and bleach are fairly harmless common household products by themselvesmixing them results in poisonous chlorine gasa gas that killed thousands during world war in medicineclinical pharmacy is discipline dedicated to researching the effect of the interaction of medicines this is an important joband it doesn' even have to involve two medicines to produce potentially dangerous results for examplemixing an antifungal medicine such as sporanox with grapefruit has serious side effects sometimes you have to use modeling techniques to derive featuresthe output of model becomes part of another model this isn' uncommonespecially in text mining documents can first be annotated to classify the content into categoriesor you can count the number of geographic places or persons in the text this counting is often more difficult than it soundsmodels are first applied to recognize certain words as person or place all this new information is then poured into the model you want to build one of the biggest mistakes in model construction is the availability biasyour features are only the ones that you could easily get your hands on and your model consequently represents this one-sided "truth models suffering from availability bias often fail when they're validated because it becomes clear that they're not valid representation of the truth in world war iiafter bombing runs on german territorymany of the english planes came back with bullet holes in the wingsaround the noseand near the tail of the plane almost none of them had bullet holes in the cockpittail rudderor engine blockso engineering decided extra armor plating should be added to the wings this looked like sound idea until mathematician by the name of abraham wald explained the obviousness of their mistakethey only took into account the planes that returned the bullet holes on the wings were actually the least of their concernbecause at least plane with this kind of damage could make it back home for repairs plane fortification was hence increased on the spots that were unscathed on returning planes the initial reasoning suffered from availability biasthe engineers ignored an important part of the data because it was harder to obtain in this case they were luckybecause the reasoning could be reversed to get the intended result without getting the data from the crashed planes when the initial features are createda model can be trained to the data |
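As a small illustration of the interaction variables mentioned earlier in this section, two individually weak inputs can be combined into one new feature; the data and column names below are invented for the example:

import pandas as pd

# Invented example data with two individually weak predictors
df = pd.DataFrame({"substance_a": [0, 1, 0, 1],
                   "substance_b": [0, 0, 1, 1]})

# Interaction variable: only nonzero when both inputs are present together
df["a_times_b"] = df["substance_a"] * df["substance_b"]
print(df)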
16,753 | machine learning training your model with the right predictors in place and modeling technique in mindyou can progress to model training in this phase you present to your model data from which it can learn the most common modeling techniques have industry-ready implementations in almost every programming languageincluding python these enable you to train your models by executing few lines of code for more state-of-the art data science techniquesyou'll probably end up doing heavy mathematical calculations and implementing them with modern computer science techniques once model is trainedit' time to test whether it can be extrapolated to realitymodel validation validating model data science has many modeling techniquesand the question is which one is the right one to use good model has two propertiesit has good predictive power and it generalizes well to data it hasn' seen to achieve this you define an error measure (how wrong the model isand validation strategy two common error measures in machine learning are the classification error rate for classification problems and the mean squared error for regression problems the classification error rate is the percentage of observations in the test data set that your model mislabeledlower is better the mean squared error measures how big the average error of your prediction is squaring the average error has two consequencesyou can' cancel out wrong prediction in one direction with faulty prediction in the other direction for exampleoverestimating future turnover for next month by , doesn' cancel out underestimating it by , for the following month as second consequence of squaringbigger errors get even more weight than they otherwise would small errors remain small or can even shrink (if < )whereas big errors are enlarged and will definitely draw your attention many validation strategies existincluding the following common onesdividing your data into training set with xof the observations and keeping the rest as holdout data set ( data set that' never used for model creation)--this is the most common technique -folds cross validation--this strategy divides the data set into parts and uses each part one time as test data set while using the others as training data set this has the advantage that you use all the data available in the data set leave- out --this approach is the same as -folds but with = you always leave one observation out and train on the rest of the data this is used only on small data setsso it' more valuable to people evaluating laboratory experiments than to big data analysts another popular term in machine learning is regularization when applying regularizationyou incur penalty for every extra variable used to construct the model with |
16,754 | regularization you ask for model with as few predictors as possible this is important for the model' robustnesssimple solutions tend to hold true in more situations regularization aims to keep the variance between the coefficients of the predictors as small as possible overlapping variance between predictors makes it hard to make out the actual impact of each predictor keeping their variance from overlapping will increase interpretability to keep it simpleregularization is mainly used to stop model from using too many features and thus prevent over-fitting validation is extremely important because it determines whether your model works in real-life conditions to put it bluntlyit' whether your model is worth dime even soevery now and then people send in papers to respected scientific journals (and sometimes even succeed at publishing themwith faulty validation the result of this is they get rejected or need to retract the paper because everything is wrong situations like this are bad for your mental health so always keep this in mindtest your models on data the constructed model has never seen and make sure this data is true representation of what it would encounter when applied on fresh observations by other people for classification modelsinstruments like the confusion matrix (introduced in but thoroughly explained later in this are goldenembrace them once you've constructed good modelyou can (optionallyuse it to predict the future predicting new observations if you've implemented the first three steps successfullyyou now have performant model that generalizes to unseen data the process of applying your model to new data is called model scoring in factmodel scoring is something you implicitly did during validationonly now you don' know the correct outcome by now you should trust your model enough to use it for real model scoring involves two steps firstyou prepare data set that has features exactly as defined by your model this boils down to repeating the data preparation you did in step one of the modeling process but for new data set then you apply the model on this new data setand this results in prediction now let' look at the different types of machine learning techniquesa different problem requires different approach types of machine learning broadly speakingwe can divide the different approaches to machine learning by the amount of human effort that' required to coordinate them and how they use labeled data--data with category or real-value number assigned to it that represents the outcome of previous observations supervised learning techniques attempt to discern results and learn by trying to find patterns in labeled data set human interaction is required to label the data unsupervised learning techniques don' rely on labeled data and attempt to find patterns in data set without human interaction |
16,755 | machine learning semi-supervised learning techniques need labeled dataand therefore human interactionto find patterns in the data setbut they can still progress toward result and learn even if passed unlabeled data as well in this sectionwe'll look at all three approachessee what tasks each is more appropriate forand use one or two of the python libraries mentioned earlier to give you feel for the code and solve task in each of these exampleswe'll work with downloadable data set that has already been cleanedso we'll skip straight to the data modeling step of the data science processas discussed earlier in this supervised learning as stated beforesupervised learning is learning technique that can only be applied on labeled data an example implementation of this would be discerning digits from images let' dive into case study on number recognition case studydiscerning digits from images one of the many common approaches on the web to stopping computers from hacking into user accounts is the captcha check-- picture of text and numbers that the human user must decipher and enter into form field before sending the form back to the web server something like figure should look familiar figure simple captcha control can be used to prevent automated spam being sent through an online web form with the help of the naive bayes classifiera simple yet powerful algorithm to categorize observations into classes that' explained in more detail in the sidebaryou can recognize digits from textual images these images aren' unlike the captcha checks many websites have in place to make sure you're not computer trying to hack into the user accounts let' see how hard it is to let computer recognize images of numbers our research goal is to let computer recognize images of numbers (step one of the data science processthe data we'll be working on is the mnist data setwhich is often used in the data science literature for teaching and benchmarking |
16,756 | types of machine learning introducing naive bayes classifiers in the context of spam filter not every email you receive has honest intentions your inbox can contain unsolicited commercial or bulk emailsa spam not only is spam annoyingit' often used in scams and as carrier for viruses kaspersky estimates that more than of the emails in the world are spam to protect users from spammost email clients run program in the background that classifies emails as either spam or safe popular technique in spam filtering is employing classifier that uses the words inside the mail as predictors it outputs the chance that specific email is spam given the words it' composed of (in mathematical termsp(spam wordsto reach this conclusion it uses three calculationsp(spam)--the average rate of spam without knowledge of the words according to kasperskyan email is spam of the time (words)--how often this word combination is used regardless of spam (words spam)--how often these words are seen when training mail was labeled as spam to determine the chance that new email is spamyou' use the following formulap(spam|wordsp(spam) (words|spamp(wordsthis is an application of the rule ( |ap(bp( |bp( )which is known as bayes' rule and which lends its name to this classifier the "naivepart comes from the classifier' assumption that the presence of one feature doesn' tell you anything about another feature (feature independencealso called absence of multicollinearityin realityfeatures are often relatedespecially in text for example the word "buywill often be followed by "now despite the unrealistic assumptionthe naive classifier works surprisingly well in practice with the bit of theory in the sidebaryou're ready to perform the modeling itself make sure to run all the upcoming code in the same scope because each piece requires the one before it an ipython file can be downloaded for this from the manning download page of this book the mnist images can be found in the data sets package of scikit-learn and are already normalized for you (all scaled to the same size pixels)so we won' need much data preparation (step three of the data science processbut let' first fetch our data as step two of the data science processwith the following listing listing step of the data science processfetching the digital image data from sklearn datasets import load_digits import pylab as pl digits load_digits(imports digits database loads digits kaspersky quarterly spam statistics reportspam-statistics-report- - vvym blviko |
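To see the sidebar's formula in action, here is a tiny worked example with invented probabilities (these are illustrative numbers, not real spam statistics):

# Invented numbers for the sidebar's rule P(spam | words) = P(spam) * P(words | spam) / P(words)
p_spam = 0.6                 # P(spam): prior chance that any mail is spam
p_words_given_spam = 0.05    # P(words | spam): how often this word combination appears in spam
p_words = 0.04               # P(words): how often this word combination appears in any mail

p_spam_given_words = p_spam * p_words_given_spam / p_words
print(p_spam_given_words)    # 0.75, so a 75% chance this particular mail is spam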
16,757 | machine learning working with images isn' much different from working with other data sets in the case of gray imageyou put value in every matrix entry that depicts the gray value to be shown the following code demonstrates this process and is step four of the data science processdata exploration listing step of the data science processusing scikit-learn pl gray(pl matshow(digits images[ ]pl show(digits images[ shows first images turns image into gray-scale values shows the corresponding matrix figure shows how blurry " image translates into data matrix figure shows the actual code outputbut perhaps figure can clarify this slightlybecause it shows how each element in the vector is piece of the image easy so farisn' itthere isnaturallya little more work to do the naive bayes classifier is expecting list of valuesbut pl matshow(returns two-dimensional array ( matrixreflecting the shape of the image to flatten it into listwe need to call reshape(on digits images the net result will be one-dimensional array that looks something like thisarray([ ]]figure blurry grayscale representation of the number with its corresponding matrix the higher the numberthe closer it is to whitethe lower the numberthe closer it is to black |
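The flattening step described above can be tried directly on the scikit-learn digits; this short sketch (separate from the book's listings) prints one image's 8 x 8 matrix as a one-dimensional array and then reshapes the whole collection into one flattened row per image:

from sklearn.datasets import load_digits

digits = load_digits()
image = digits.images[0]   # an 8 x 8 matrix of grayscale values
flat = image.reshape(-1)   # the same 64 values as a one-dimensional array
print(flat)

# For the whole collection: one flattened row of 64 values per image
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
print(X.shape)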
16,758 | types of machine learning figure we'll turn an image into something usable by the naive bayes classifier by getting the grayscale value for each of its pixels (shown on the rightand putting those values in list the previous code snippet shows the matrix of figure flattened (the number of dimensions was reduced from two to oneto python list from this point onit' standard classification problemwhich brings us to step five of the data science processmodel building now that we have way to pass the contents of an image into the classifierwe need to pass it training data set so it can start learning how to predict the numbers in the images we mentioned earlier that scikit-learn contains subset of the mnist database ( , images)so we'll use that each image is also labeled with the number it actually shows this will build probabilistic model in memory of the most likely digit shown in an image given its grayscale values once the program has gone through the training set and built the modelwe can then pass it the test set of data to see how well it has learned to interpret the images using the model the following listing shows how to implement these steps in code listing image data classification problem on images of digits from sklearn cross_validation import train_test_split from sklearn naive_bayes import gaussiannb from sklearn metrics import confusion_matrix import pylab as plt digits target step split into test set and training set n_samples len(digits imagesxdigits images reshape((n_samples- )step select target variable step prepare data reshape adapts the matrix form this method couldfor instanceturn matrix into vectors print x_trainx_testy_trainy_test train_test_split(xyrandom_state= |
16,759 | step fit data machine learning gnb gaussiannb(fit gnb fit(x_train,y_trainpredicted fit predict(x_testconfusion_matrix(y_testpredictedstep create confusion matrix step select naive bayes classifieruse gaussian distribution to estimate probability step predict data for unseen data the end result of this code is called confusion matrixsuch as the one shown in figure returned as two-dimensional arrayit shows how often the number predicted was the correct number on the main diagonal and also in the matrix entry ( , )where was predicted but the image showed looking at figure we can see that the model predicted the number correctly times (at coordinates , )but also that the model predicted the number times when it was actually the number in the image (at , figure confusion matrix produced by predicting what number is depicted by blurry image confusion matrices confusion matrix is matrix showing how wrongly (or correctlya model predictedhow much it got "confused in its simplest form it will be table for models that try to classify observations as being or let' say we have classification model that predicts whether somebody will buy our newest productdeep-fried cherry pudding we can either predict"yesthis person will buyor "nothis customer won' buy once we make our prediction for people we can compare this to their actual behaviorshowing us how many times we got it right an example is shown in table table confusion matrix example confusion matrix predicted "person will buypredicted "person will not buyperson bought the deep-fried cherry pudding (true positive (false negativeperson didn' buy the deepfried cherry pudding (false positive (true negative |
16,760 | types of machine learning the model was correct in ( + cases and incorrect in ( + casesresulting in ( correct/ total observations accuracy all the correctly classified observations are added up on the diagonal ( + while everything else ( + is incorrectly classified when the model only predicts two classes (binary)our correct guesses are two groupstrue positives (predicted to buy and did soand true negatives (predicted they wouldn' buy and they didn'tour incorrect guesses are divided into two groupsfalse positives (predicted they would buy but they didn'tand false negatives (predicted not to buy but they didthe matrix is useful to see where the model is having the most problems in this case we tend to be overconfident in our product and classify customers as future buyers too easily (false positivefrom the confusion matrixwe can deduce that for most images the predictions are quite accurate in good model you' expect the sum of the numbers on the main diagonal of the matrix (also known as the matrix traceto be very high compared to the sum of all matrix entriesindicating that the predictions were correct for the most part let' assume we want to show off our results in more easily understandable way or we want to inspect several of the images and the predictions our program has madewe can use the following code to display one next to the other then we can see where the program has gone wrong and needs little more training if we're satisfied with the resultsthe model building ends here and we arrive at step sixpresenting the results listing inspecting predictions vs actual numbers adds an extra subplot on plot grid this code could be simplified asplt subplot ( ,indexbut this looks visually more appealing stores number image matrix and its prediction (as numbertogether in array images_and_predictions list(zip(digits imagesfit predict( ))for index(imagepredictionin enumerate(images_and_predictions[: ])plt subplot( ,index plt axis('off'plt imshow(imagecmap=plt cm gray_rinterpolation='nearest'plt title('prediction%ipredictionplt show(shows shows the full plot that is now populated with subplots shows the predicted value as the title to the shown image loops through first images doesn' show an axis image in grayscale figure shows how all predictions seem to be correct except for the digit number which it labels as we should forgive this mistake as this does share visual similarities |
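Pulling the listings above together, here is a hedged, runnable end-to-end version of the digit-classification example: flatten the images, split them into a training and a test set, fit a Gaussian naive Bayes classifier, print the confusion matrix, and plot a few test images with their predicted labels. It uses the current sklearn.model_selection import path, and the test-set proportion, random seed, and number of plotted images are assumptions rather than the book's exact values.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt

digits = load_digits()
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))   # flatten each 8 x 8 image to 64 values
y = digits.target                            # the digit each image actually shows

# Hold part of the data back as a test set (proportion and seed are assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

gnb = GaussianNB()                 # Gaussian naive Bayes classifier
fit = gnb.fit(X_train, y_train)    # learn from the labeled training images
predicted = fit.predict(X_test)    # predict digits for unseen images
print(confusion_matrix(y_test, predicted))

# Inspect a handful of test images next to their predicted digit
for index, (image, prediction) in enumerate(zip(X_test[:6], predicted[:6])):
    plt.subplot(2, 3, index + 1)
    plt.axis("off")
    plt.imshow(image.reshape(8, 8), cmap=plt.cm.gray_r, interpolation="nearest")
    plt.title("prediction: %i" % prediction)
plt.show()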
16,761 | machine learning figure for each blurry image number is predictedonly the number is misinterpreted as then an ambiguous number is predicted to be but it could as well be even to human eyes this isn' clear with the bottom left number is ambiguouseven to humansis it or it' debatablebut the algorithm thinks it' by discerning which images were misinterpretedwe can train the model further by labeling them with the correct number they display and feeding them back into the model as new training set (step of the data science processthis will make the model more accurateso the cycle of learnpredictcorrect continues and the predictions become more accurate this is controlled data set we're using for the example all the examples are the same size and they are all in shades of gray expand that up to the variable size images of variable length strings of variable shades of alphanumeric characters shown in the captcha controland you can appreciate why model accurate enough to predict any captcha image doesn' exist yet in this supervised learning exampleit' apparent that without the labels associated with each image telling the program what number that image showsa model cannot be built and predictions cannot be made by contrastan unsupervised learning approach doesn' need its data to be labeled and can be used to give structure to an unstructured data set unsupervised learning it' generally true that most large data sets don' have labels on their dataso unless you sort through it all and give it labelsthe supervised learning approach to data won' work insteadwe must take the approach that will work with this data because we can study the distribution of the data and infer truths about the data in different parts of the distribution we can study the structure and values in the data and infer newmore meaningful data and structure from it many techniques exist for each of these unsupervised learning approaches howeverin the real world you're always working toward the research goal defined in the first phase of the data science processso you may need to combine or try different techniques before either data set can be labeledenabling supervised learning techniquesperhapsor even the goal itself is achieved |
16,762 | discerning simplified latent structure from your data not everything can be measured when you meet someone for the first time you might try to guess whether they like you based on their behavior and how they respond but what if they've had bad day up until nowmaybe their cat got run over or they're still down from attending funeral the week beforethe point is that certain variables can be immediately available while others can only be inferred and are therefore missing from your data set the first type of variables are known as observable variables and the second type are known as latent variables in our examplethe emotional state of your new friend is latent variable it definitely influences their judgment of you but its value isn' clear deriving or inferring latent variables and their values based on the actual contents of data set is valuable skill to have because latent variables can substitute for several existing variables already in the data set by reducing the number of variables in the data setthe data set becomes more manageableany further algorithms run on it work fasterand predictions may become more accurate because latent variables are designed or targeted toward the defined research goalyou lose little key information by using them if we can reduce data set from observable variables per line to or latent variablesfor examplewe have better chance of reaching our research goal because of the data set' simplified structure as you'll see from the example belowit' not case of reducing the existing data set to as few latent variables as possible you'll need to find the sweet spot where the number of latent variables derived returns the most value let' put this into practice with small case study case studyfinding latent variables in wine quality data set in this short case studyyou'll use technique known as principal component analysis (pcato find latent variables in data set that describes the quality of wine then you'll compare how well set of latent variables works in predicting the quality of wine against the original observable set you'll learn how to identify and derive those latent variables how to analyze where the sweet spot is--how many new variables return the most utility--by generating and interpreting scree plot generated by pca (we'll look at scree plots in moment let' look at the main components of this example data set--the university of californiairvine (ucihas an online repository of data sets for machine learning exercises at we'll use the wine quality data set for red wines created by corteza cerdeiraf almeidat matosand reis it' , lines long and has variables per lineas shown in table you can find full details of the wine quality data set at |
16,763 | machine learning table the first three rows of the red wine quality data set fixed volatile acidity acidity citric acid residual chlorides sugar free sulfur dioxide total sulfur dioxide density ph sulfates alcohol quality principal component analysis-- technique to find the latent variables in your data set while retaining as much information as possible scikit-learn--we use this library because it already implements pca for us and is way to generate the scree plot part one of the data science process is to set our research goalwe want to explain the subjective "wine qualityfeedback using the different wine properties our first job then is to download the data set (step twoacquiring data)as shown in the following listingand prepare it for analysis (step threedata preparationthen we can run the pca algorithm and view the results to look at our options listing data acquisition and variable standardization is matrix of predictor variables these variables are wine properties such as density and alcohol presence import pandas as pd from sklearn import preprocessing from sklearn decomposition import pca import pylab as plt from sklearn import preprocessing downloads location of wine-quality data set url winequality-red csv data pd read_csv(urlsep";" data[[ 'fixed acidity' 'volatile acidity' 'citric acid'reads in the csv 'residual sugar' 'chlorides' 'free sulfur dioxide'data it' separated by semi-colon 'total sulfur dioxide' 'density' 'ph' 'sulphates' 'alcohol'] data quality xpreprocessing standardscaler(fit(xtransform(xy is vector and when standardizing datathe following formula is applied to every data pointz ( -)/where is the new observation valuex the old oneis the meanand the standard deviation the pca of data matrix is easier to interpret when the columns have first been centered by their means represents the dependent variable (target variabley is the perceived wine quality |
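A hedged, runnable version of the data acquisition and standardization listing is shown below. The download URL is truncated in the text above, so the sketch points at a local copy of the UCI winequality-red.csv file instead; adjust the path to wherever you stored it.

import pandas as pd
from sklearn import preprocessing

# Path to your copy of the UCI red-wine quality file (the full URL is not
# reproduced here); the file is semicolon-separated
url = "winequality-red.csv"
data = pd.read_csv(url, sep=";")

predictor_columns = ['fixed acidity', 'volatile acidity', 'citric acid',
                     'residual sugar', 'chlorides', 'free sulfur dioxide',
                     'total sulfur dioxide', 'density', 'pH', 'sulphates',
                     'alcohol']
X = data[predictor_columns]   # matrix of wine properties (predictors)
y = data.quality              # the perceived wine quality (target)

# Standardize every predictor: z = (x - mean) / standard deviation
X = preprocessing.StandardScaler().fit(X).transform(X)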
With the initial data preparation behind you, you can execute the PCA. The resulting scree plot (which will be explained shortly) is shown in the figure below. Because PCA is an explorative technique, we now arrive at step four of the data science process, data exploration, as shown in the following listing.

Listing: Executing the principal component analysis

model = PCA()                          # creates an instance of the principal component analysis class
results = model.fit(X)                 # applies PCA to the predictor variables to see if they can be compacted into fewer variables
Z = results.transform(X)               # turns the result into an array so we can use the newly created data
plt.plot(results.explained_variance_)  # plots the explained variance per new variable; this plot is a scree plot
plt.show()                             # shows the plot

Now let's look at the scree plot generated from the wine data. What you hope to see is an elbow or hockey-stick shape: a few variables represent the majority of the information in the data set while the rest add only a little more. In our plot, PCA tells us that reducing the set down to a single variable already captures a sizable share of the total information (the plot is zero-based, so variable one sits at position zero on the x-axis).

Figure: PCA scree plot showing the marginal amount of information of every new variable PCA can create; each successive variable accounts for a smaller share of the variance in the data.
Two variables together capture a bit more of the total, and so on. The following table shows the full read-out.

Table: The findings of the PCA. For each number of variables kept, it lists the extra information captured by that variable and the total share of the data captured so far.

An elbow shape in the plot suggests that five variables can hold most of the information found inside the data. You could argue for a cut-off at six or seven variables instead, but we'll opt for the simpler data set over one that loses marginally less variance against the original. At this point we could go ahead and see whether the original data set, recoded with five latent variables, is good enough to predict the quality of the wine accurately, but before we do, we'll see how we might identify what those variables represent.

Interpreting the new variables
With the initial decision made to reduce the data set from the original variables to five latent variables, we can check whether it's possible to interpret or name them based on their relationships with the originals. Actual names are easier to work with than codes such as lv1, lv2, and so on. We can add the line of code in the following listing to generate a table that shows how the two sets of variables correlate.

Listing: Showing PCA components in a pandas data frame

pd.DataFrame(results.components_, columns=list(
    ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',
     'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density',
     'pH', 'sulphates', 'alcohol']))

The rows in the resulting table show the mathematical correlation. Or, in plain English: the first latent variable, LV1, which captures the largest share of the total information in the set, is a weighted sum of the original variables of the form LV1 = w1 x (fixed acidity) + w2 x (volatile acidity) + ... + w11 x (alcohol), where the weights (some of them negative) are the entries in the first row of the table below.
Table: How the original variables load on the PCA-created latent variables. There is one row per latent variable and one column per original wine property (fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, alcohol); each cell holds the positive or negative weight of that property in the latent variable.

Giving a usable name to each new variable is a bit trickier and would probably require consultation with an actual wine expert for accuracy. However, as we don't have a wine expert on hand, we'll call them the following:

Table: Interpretation of the wine quality PCA-created variables
Latent variable 1: persistent acidity
Latent variable 2: sulfides
Latent variable 3: volatile acidity
Latent variable 4: chlorides
Latent variable 5: lack of residual sugar

We can now recode the original data set with only the five latent variables. Doing this is data preparation again, so we revisit step three of the data science process, data preparation. As mentioned earlier, the data science process is a recursive one, and this is especially true between step three (data preparation) and step four (data exploration). The table below shows the first three rows with this done.

Table: The first three rows of the red wine quality data set recoded into the five latent variables (persistent acidity, sulfides, volatile acidity, chlorides, lack of residual sugar).

Already we can see that one wine scores high on volatile acidity, while another is particularly high in persistent acidity. They don't sound like good wines at all!
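To make the recoding step concrete, here is a minimal sketch (reusing the X matrix and the pandas and PCA imports from the earlier listings) of how the data set could be rewritten in terms of five latent variables and given the interpreted names from the table above.

# recode the standardized wine data into the five latent variables
model5 = PCA(n_components=5)
Z5 = model5.fit_transform(X)               # each row now holds five latent-variable scores instead of the original properties
latent_names = ['persistent acidity', 'sulfides', 'volatile acidity',
                'chlorides', 'lack of residual sugar']
recoded = pd.DataFrame(Z5, columns=latent_names)
print recoded[:3]                          # the first three rows of the recoded data set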
Comparing the accuracy of the original data set with latent variables
Now that we've decided our data set should be recoded into five latent variables rather than the originals, it's time to see how well the new data set works for predicting the quality of wine when compared to the original. We'll use the Naive Bayes classifier algorithm we saw in the previous example of supervised learning to help. Let's start by seeing how well the original variables predict the wine quality scores. The following listing presents the code to do this.

Listing: Wine score prediction before principal component analysis

from sklearn.cross_validation import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
import pylab as plt

gnb = GaussianNB()                         # use a Gaussian (normal distribution) Naive Bayes classifier for estimation
fit = gnb.fit(X, y)                        # fit the classifier to the data
pred = fit.predict(X)                      # make predictions
print confusion_matrix(pred, y)            # study the confusion matrix
print confusion_matrix(pred, y).trace()    # count of all correctly classified cases: all counts on the trace (diagonal) summed up

After analyzing the confusion matrix, we can see how many predictions the Naive Bayes classifier gets right out of the total number of wines. Now we'll run the same prediction test, but starting with only one latent variable instead of the original set. Then we'll add another, see how it did, add another, and so on, to see how the predictive performance improves. The following listing shows how this is done.

Listing: Wine score prediction with an increasing number of principal components

predicted_correct = []                         # this array will be filled with the count of correctly predicted observations
for i in range(1, 11):                         # loop over an increasing number of principal components
    model = PCA(n_components=i)                # instantiate a PCA model with 1 component in the first iteration, up to i components
    results = model.fit(X)                     # fit the PCA model on the predictor variables (features)
    Z = results.transform(X)                   # Z is the result in matrix form (an array filled with arrays)
    fit = gnb.fit(Z, y)                        # use a Gaussian Naive Bayes classifier for estimation
    pred = fit.predict(Z)                      # the actual prediction using the fitted model
    predicted_correct.append(confusion_matrix(pred, y).trace())   # append the count of correctly classified observations
    print predicted_correct                    # printing the array shows how a new count is appended after each iteration
plt.plot(predicted_correct)                    # the result is easier to see when the array is plotted
plt.show()
16,768 | types of machine learning figure the results plot shows that adding more latent variables to model ( -axisgreatly increases predictive power ( -axisup to point but then tails off the gain in predictive power from adding variables wears off eventually the resulting plot is shown in figure the plot in figure shows that with only latent variablesthe classifier does better job of predicting wine quality than with the original alsoadding more latent variables beyond doesn' add as much predictive power as the first this shows our choice of cutting off at variables was good oneas we' hoped we looked at how to group similar variablesbut it' also possible to group observations grouping similar observations to gain insight from the distribution of your data suppose for moment you're building website that recommends films to users based on preferences they've entered and films they've watched the chances are high that if they watch many horror movies they're likely to want to know about new horror movies and not so much about new teen romance films by grouping together users who've watched more or less the same films and set more or less the same preferencesyou can gain good deal of insight into what else they might like to have recommended the general technique we're describing here is known as clustering in this processwe attempt to divide our data set into observation subsetsor clusterswherein observations should be similar to those in the same cluster but differ greatly from the observations in other clusters figure gives you visual idea of what clustering aims to achieve the circles in the top left of the figure are clearly close to each other while being farther away from the others the same is true of the crosses in the top right scikit-learn implements several common algorithms for clustering data in its sklearn cluster moduleincluding the -means algorithmaffinity propagationand spectral clustering each has use case or two for which it' more suited, although you can find comparison of all the clustering algorithms in scikit-learn at modules/clustering html |
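As a small illustration of the clustering algorithms mentioned above, the following sketch (using a toy data set generated with scikit-learn's make_blobs helper rather than data from this chapter) runs k-means, affinity propagation, and spectral clustering side by side. Note that affinity propagation is the only one of the three that decides on the number of clusters itself.

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AffinityPropagation, SpectralClustering

# toy data: 300 points scattered around 3 centers
X_toy, _ = make_blobs(n_samples=300, centers=3, random_state=1)

kmeans_labels = KMeans(n_clusters=3, random_state=1).fit_predict(X_toy)
affinity_labels = AffinityPropagation().fit_predict(X_toy)            # estimates the number of clusters on its own
spectral_labels = SpectralClustering(n_clusters=3, random_state=1).fit_predict(X_toy)

print "k-means clusters found:", len(set(kmeans_labels))
print "affinity propagation clusters found:", len(set(affinity_labels))
print "spectral clustering clusters found:", len(set(spectral_labels))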
16,769 | machine learning figure the goal of clustering is to divide data set into "sufficiently distinctsubsets in this plot for instancethe observations have been divided into three clusters -means is good general-purpose algorithm with which to get started howeverlike all the clustering algorithmsyou need to specify the number of desired clusters in advancewhich necessarily results in process of trial and error before reaching decent conclusion it also presupposes that all the data required for analysis is available already what if it wasn'tlet' look at the actual case of clustering irises (the flowerby their properties (sepal length and widthpetal length and widthand so onin this example we'll use the -means algorithm it' good algorithm to get an impression of the data but it' sensitive to start valuesso you can end up with different cluster every time you run the algorithm unless you manually define the start values by specifying seed (constant for the start value generatorif you need to detect hierarchyyou're better off using an algorithm from the class of hierarchical clustering techniques one other disadvantage is the need to specify the number of desired clusters in advance this often results in process of trial and error before coming to satisfying conclusion executing the code is fairly simple it follows the same structure as all the other analyses except you don' have to pass target variable it' up to the algorithm to learn interesting patterns the following listing uses an iris data set to see if the algorithm can group the different types of irises |
Listing: Iris classification example

import sklearn
import sklearn.datasets
from sklearn import cluster
import pandas as pd

data = sklearn.datasets.load_iris()                               # load the iris (flowers) data shipped with scikit-learn
X = pd.DataFrame(data.data, columns=list(data.feature_names))     # transform the iris data into a pandas data frame
print X[:5]                                                       # print the first few observations: sepal length, sepal width, petal length, petal width
model = cluster.KMeans(n_clusters=3, random_state=0)              # initialize a k-means model with three clusters; random_state fixes the seed (any constant makes the result reproducible)
results = model.fit(X)                                            # fit the model to the data; all variables are considered independent (unsupervised learning has no target variable y)
X["cluster"] = results.predict(X)                                 # add a "cluster" variable indicating the cluster membership of every flower
X["target"] = data.target                                         # finally add the target variable (y) to the data frame
X["dummy"] = "lookatmeIamimportant"                               # a little trick so we can count rows later; the value (and the column name) is arbitrary
classification_result = X[["cluster", "target", "dummy"]].groupby(["cluster", "target"]).agg("count")
print(classification_result)

There are three parts to this code: first we select the cluster, target, and helper columns; then we group by the cluster and target columns; finally, we aggregate each group with a simple count. The matrix this classification result represents gives us an indication of whether our clustering was successful. For one cluster we're spot on; in the other clusters there has been a slight mix-up, but in total we get only a handful of misclassifications. The figure below shows the output of the iris classification: even without using a label, you'd find clusters that closely resemble the official iris classification, with the vast majority of observations grouped correctly. You don't always need to choose between supervised and unsupervised; sometimes combining them is an option.

Figure: Output of the iris classification.
16,771 | machine learning semi-supervised learning it shouldn' surprise you to learn that while we' like all our data to be labeled so we can use the more powerful supervised machine learning techniquesin reality we often start with only minimally labeled dataif it' labeled at all we can use our unsupervised machine learning techniques to analyze what we have and perhaps add labels to the data setbut it will be prohibitively costly to label it all our goal then is to train our predictor models with as little labeled data as possible this is where semi-supervised learning techniques come in--hybrids of the two approaches we've already seen take for example the plot in figure in this casethe data has only two labeled observationsnormally this is too few to make valid predictions does not buy buys figure this plot has only two labeled observations--too few for supervised observationsbut enough to start with an unsupervised or semi-supervised approach common semi-supervised learning technique is label propagation in this techniqueyou start with labeled data set and give the same label to similar data points this is similar to running clustering algorithm over the data set and labeling each cluster based on the labels they contain if we were to apply this approach to the data set in figure we might end up with something like figure one special approach to semi-supervised learning worth mentioning here is active learning in active learning the program points out the observations it wants to see labeled for its next round of learning based on some criteria you have specified for exampleyou might set it to try and label the observations the algorithm is least certain aboutor you might use multiple models to make prediction and select the points where the models disagree the most |
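Scikit-learn ships an implementation of this idea in its sklearn.semi_supervised module. The sketch below uses a hypothetical toy data set (not the buyer data from the figures): unlabeled observations are marked with -1, and LabelPropagation spreads the few known labels over the rest.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelPropagation

# two clusters of points; only one observation per cluster carries a label
X_semi, true_labels = make_blobs(n_samples=200, centers=2, random_state=0)
y_semi = np.full(len(true_labels), -1, dtype=int)   # -1 means "unlabeled" for scikit-learn's semi-supervised models
for cls in np.unique(true_labels):
    first_index = np.where(true_labels == cls)[0][0]
    y_semi[first_index] = cls                       # keep exactly one labeled observation per class

model = LabelPropagation(kernel='knn', n_neighbors=10)
model.fit(X_semi, y_semi)
predicted = model.transduction_                     # the labels the model assigns to every observation
print "labeled observations used:", (y_semi != -1).sum()
print "agreement with the true grouping: %.2f" % (predicted == true_labels).mean()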
Figure: The previous figure showed that the data has only two labeled observations, far too few for supervised learning. This figure shows how you can exploit the structure of the underlying data set to learn better classifiers than from the labeled data alone. The data is split into two clusters by the clustering technique; we have only two labeled values, but if we're bold we can assume the other observations within each cluster carry that same label (buyer or non-buyer), as depicted here.

This technique isn't flawless; it's better to get the actual labels if you can.

With the basics of machine learning at your disposal, the next chapter discusses using machine learning within the constraints of a single computer. This tends to be challenging when the data set is too big to load entirely into memory.

Summary
In this chapter you learned that:
- Data scientists rely heavily on techniques from statistics and machine learning to perform their modeling. A good number of real-life applications exist for machine learning, from classifying bird whistling to predicting volcanic eruptions.
- The modeling process consists of four phases:
  1. Feature engineering, data preparation, and model parameterization: we define the input parameters and variables for our model.
  2. Model training: the model is fed with data and learns the patterns hidden in the data.
  3. Model selection and validation: a model can perform well or poorly; based on its performance we select the model that makes the most sense.
  4. Model scoring: when our model can be trusted, it's unleashed on new data. If we did our job well, it provides us with extra insights or gives us a good prediction of what the future holds.
16,773 | machine learning the two big types of machine learning techniques supervised--learning that requires labeled data unsupervised--learning that doesn' require labeled data but is usually less accurate or reliable than supervised learning semi-supervised learning is in between those techniques and is used when only small portion of the data is labeled two case studies demonstrated supervised and unsupervised learningrespectively our first case study made use of naive bayes classifier to classify images of numbers as the number they represent we also took look at the confusion matrix as means to determining how well our classification model is doing our case study on unsupervised techniques showed how we could use principal component analysis to reduce the input variables for further model building while maintaining most of the information |
16,774 | single computer this covers working with large data sets on single computer working with python libraries suitable for larger data sets understanding the importance of choosing correct algorithms and data structures understanding how you can adapt algorithms to work inside databases what if you had so much data that it seems to outgrow youand your techniques no longer seem to sufficewhat do you dosurrender or adaptluckily you chose to adaptbecause you're still reading this introduces you to techniques and tools to handle larger data sets that are still manageable by single computer if you adopt the right techniques this gives you the tools to perform the classifications and regressions when the data no longer fits into the ram (random access memoryof your computerwhereas focused on in-memory data sets will go step further and teach you how to deal with data sets that require multiple computers to |
16,775 | handling large data on single computer be processed when we refer to large data in this we mean data that causes problems to work with in terms of memory or speed but can still be handled by single computer we start this with an overview of the problems you face when handling large data sets then we offer three types of solutions to overcome these problemsadapt your algorithmschoose the right data structuresand pick the right tools data scientists aren' the only ones who have to deal with large data volumesso you can apply general best practices to tackle the large data problem finallywe apply this knowledge to two case studies the first case shows you how to detect malicious urlsand the second case demonstrates how to build recommender engine inside database the problems you face when handling large data large volume of data poses new challengessuch as overloaded memory and algorithms that never stop running it forces you to adapt and expand your repertoire of techniques but even when you can perform your analysisyou should take care of issues such as / (input/outputand cpu starvationbecause these can cause speed issues figure shows mind map that will gradually unfold as we go through the stepsproblemssolutionsand tips not enough memory processes that never end problems some components form bottleneck while others remain idle not enough speed solutions handling large data general tips figure overview of problems encountered when working with more data than can fit in memory computer only has limited amount of ram when you try to squeeze more data into this memory than actually fitsthe os will start swapping out memory blocks to diskswhich is far less efficient than having it all in memory but only few algorithms are designed to handle large data setsmost of them load the whole data set into memory at oncewhich causes the out-of-memory error other algorithms need to hold multiple copies of the data in memory or store intermediate results all of these aggravate the problem |
16,776 | even when you cure the memory issuesyou may need to deal with another limited resourcetime although computer may think you live for millions of yearsin reality you won' (unless you go into cryostasis until your pc is donecertain algorithms don' take time into accountthey'll keep running forever other algorithms can' end in reasonable amount of time when they need to process only few megabytes of data third thing you'll observe when dealing with large data sets is that components of your computer can start to form bottleneck while leaving other systems idle although this isn' as severe as never-ending algorithm or out-of-memory errorsit still incurs serious cost think of the cost savings in terms of person days and computing infrastructure for cpu starvation certain programs don' feed data fast enough to the processor because they have to read data from the hard drivewhich is one of the slowest components on computer this has been addressed with the introduction of solid state drives (ssd)but ssds are still much more expensive than the slower and more widespread hard disk drive (hddtechnology general techniques for handling large volumes of data never-ending algorithmsout-of-memory errorsand speed issues are the most common challenges you face when working with large data in this sectionwe'll investigate solutions to overcome or alleviate these problems the solutions can be divided into three categoriesusing the correct algorithmschoosing the right data structureand using the right tools (figure problems choose the right algorithms solutions choose the right data structures choose the right tools handling large data general tips figure overview of solutions for handling large data sets no clear one-to-one mapping exists between the problems and solutions because many solutions address both lack of memory and computational performance for instancedata set compression will help you solve memory issues because the data set becomes smaller but this also affects computation speed with shift from the slow hard disk to the fast cpu contrary to ram (random access memory)the hard disc will store everything even after the power goes downbut writing to disc costs more time than changing information in the fleeting ram when constantly changing the informationram is thus preferable over the (more durablehard disc with an |
16,777 | handling large data on single computer unpacked data setnumerous read and write operations ( /oare occurringbut the cpu remains largely idlewhereas with the compressed data set the cpu gets its fair share of the workload keep this in mind while we explore few solutions choosing the right algorithm choosing the right algorithm can solve more problems than adding more or better hardware an algorithm that' well suited for handling large data doesn' need to load the entire data set into memory to make predictions ideallythe algorithm also supports parallelized calculations in this section we'll dig into three types of algorithms that can do thatonline algorithmsblock algorithmsand mapreduce algorithmsas shown in figure online algorithms problems choose the right algorithms block matrices mapreduce solutions choose the right data structures choose the right tools handling large data general tips figure overview of techniques to adapt algorithms to large data sets online learning algorithms severalbut not allmachine learning algorithms can be trained using one observation at time instead of taking all the data into memory upon the arrival of new data pointthe model is trained and the observation can be forgottenits effect is now incorporated into the model' parameters for examplea model used to predict the weather can use different parameters (like atmospheric pressure or temperaturein different regions when the data from one region is loaded into the algorithmit forgets about this raw data and moves on to the next region this "use and forgetway of working is the perfect solution for the memory problem as single observation is unlikely to ever be big enough to fill up all the memory of modernday computer listing shows how to apply this principle to perceptron with online learning perceptron is one of the least complex machine learning algorithms used for binary classification ( or )for instancewill the customer buy or not |
Listing: Training a perceptron by observation

The learning rate of an algorithm is the adjustment it makes every time a new observation comes in. If it's high, the model adjusts quickly to new observations but might overshoot and never become precise. An oversimplified example: suppose the current weight estimate for an x-variable lies below its optimal (and unknown) value; with a learning rate that's too large, the adjustment (learning rate times size of the error times value of x) overshoots the optimum and the new weight lands on the wrong side of it. The threshold is an arbitrary cutoff between 0 and 1 used to decide whether the prediction becomes 0 or 1; often it sits right in the middle, but the best value depends on your use case. One epoch is one run through all the data; we allow a maximum number of runs (max_epochs) before the perceptron stops.

import numpy as np

class perceptron():
    def __init__(self, X, y, threshold=0.5, learning_rate=0.1, max_epochs=10):
        # the __init__ method of any Python class runs when an instance is created;
        # default values for the threshold, the learning rate, and the maximum number
        # of epochs are set here, and the X and y variables are assigned to the class
        self.threshold = threshold
        self.learning_rate = learning_rate
        self.X = X
        self.y = y
        self.max_epochs = max_epochs

    def initialize(self, init_type='zeros'):
        # every predictor variable gets a weight: either all weights start at 0,
        # or they're assigned small random values between 0 and 1
        if init_type == 'random':
            self.weights = np.random.rand(len(self.X[0]))
        if init_type == 'zeros':
            self.weights = np.zeros(len(self.X[0]))

    def train(self):
        # keep training until the perceptron either converges (an epoch without errors)
        # or runs out of allowed epochs
        epoch = 0
        while True:
            error_count = 0
            epoch += 1
            for (X, y) in zip(self.X, self.y):
                # feed the data to train_observation, one observation at a time
                error_count = self.train_observation(X, y, error_count)
            if error_count == 0:
                print "training successful"
                break
            if epoch >= self.max_epochs:
                print "reached maximum epochs, no perfect prediction"
                break

    def train_observation(self, X, y, error_count):
        # first part: make a prediction for this observation and compare it to the real value
        result = np.dot(X, self.weights) > self.threshold
        error = y - result                   # y and the prediction are both 0 or 1, so the error is -1, 0, or 1
        if error != 0:
            # second part: the prediction was wrong, so count the error and adjust the weights
            error_count += 1
            for index, value in enumerate(X):
                # adjust the weight of every predictor variable using the learning rate,
                # the error, and the actual value of that predictor variable
                self.weights[index] += self.learning_rate * error * value
        return error_count

    def predict(self, X):
        # multiply the predictor values by their respective weights (np.dot) and compare
        # the outcome to the threshold to decide whether 0 or 1 should be predicted
        return int(np.dot(X, self.weights) > self.threshold)

# a small toy data set: six observations with three binary predictor values and a 0/1 target
# (any comparable toy data works here)
X = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)]
y = [1, 1, 1, 0, 0, 0]

p = perceptron(X, y)        # instantiate the perceptron class with the data matrix X and target vector y
p.initialize()              # initialize the weights for the predictors, as explained previously
p.train()                   # train until the model converges (no more errors) or runs out of epochs
print p.predict((1, 1, 1))  # check what the perceptron now predicts
print p.predict((0, 0, 1))  # for two new combinations of predictor values
We'll zoom in on the parts of the code that might not be evident without further explanation, starting with how the train_observation() function works. This function has two large parts: the first calculates the prediction for an observation and compares it to the actual value; the second changes the weights if the prediction was wrong.

    def train_observation(self, X, y, error_count):
        result = np.dot(X, self.weights) > self.threshold   # the prediction for this observation: 0 or 1
        error = y - result                                  # the real value y is 0 or 1, so a wrong prediction gives an error of 1 or -1
        if error != 0:                                      # a wrong prediction, so the model needs adjusting
            error_count += 1                                # the error count is returned so it can be evaluated at the end of the epoch
            for index, value in enumerate(X):               # for every predictor variable in the input vector, adjust its weight
                self.weights[index] += self.learning_rate * error * value
        return error_count

The prediction is calculated by multiplying the input vector of independent variables with their respective weights and summing up the terms (as in linear regression). This value is then compared with the threshold: if it's larger, the algorithm outputs a 1; if it's smaller, the algorithm outputs a 0. Setting the threshold is a subjective matter and depends on your business case. Say you're predicting whether someone has a certain lethal disease, with 1 being positive and 0 negative. In that case it's better to use a lower threshold: it's not as bad to be flagged positive and undergo a second examination as it is to overlook the disease and let the patient die.

The error is then calculated, which gives the direction for the change of the weights:

    result = np.dot(X, self.weights) > self.threshold
    error = y - result

The weights are changed according to the sign of the error, using the learning rule for perceptrons: for every weight in the weight vector, update its value with the rule delta_wi = eta * delta * xi, where delta_wi is the amount the weight needs to change, eta is the learning rate, delta is the error, and xi is the ith value in the input vector (the ith predictor variable). The error count keeps track of how many observations are wrongly predicted in this epoch and is returned to the calling function; you add one to it whenever a prediction was wrong. An epoch is a single training run through all the observations.

    if error != 0:
        error_count += 1
        for index, value in enumerate(X):
            self.weights[index] += self.learning_rate * error * value

The second function we'll discuss in more detail is the train() function. It has an internal loop that keeps training the perceptron until it can either predict perfectly or has reached a certain number of training rounds (epochs), as shown in the following listing.

Listing: The train() function

    def train(self):
        epoch = 0                  # training starts at the first epoch
        while True:                # technically a never-ending loop, but with several stop (break) conditions built in
            error_count = 0        # reset the error count at the start of each epoch; if an epoch ends without errors, the algorithm has converged and we're done
            epoch += 1             # add one to the current number of epochs
            for (X, y) in zip(self.X, self.y):
                # loop through the data and feed it to train_observation, one observation at a time
                error_count = self.train_observation(X, y, error_count)
            if error_count == 0:               # no errors by the end of the epoch: training was successful
                print "training successful"
                break
            if epoch >= self.max_epochs:       # reached the maximum number of allowed runs: stop looking for a solution
                print "reached maximum epochs, no perfect prediction"
                break

Most online algorithms can also handle mini-batches: you can feed them batches of anywhere from a handful to a few thousand observations at once while using a sliding window to go over your data. You then have three options:
- Full batch learning (also called statistical learning): feed the algorithm all the data at once. This is what we did in the previous chapter.
- Mini-batch learning: feed the algorithm a spoonful of observations at a time; how big a spoonful depends on what your hardware can handle.
- Online learning: feed the algorithm one observation at a time.
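All three feeding strategies can be tried out in scikit-learn, whose SGDClassifier exposes a partial_fit() method. The sketch below (on a hypothetical toy data set, not tied to the perceptron above) feeds the classifier one mini-batch at a time; shrinking the batch size to a single observation turns it into pure online learning.

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# toy binary classification data standing in for a stream of observations
X_stream, y_stream = make_classification(n_samples=10000, n_features=20, random_state=0)

model = SGDClassifier(random_state=0)        # a linear model trained with stochastic gradient descent
batch_size = 500
classes = np.unique(y_stream)                # partial_fit needs the full set of classes up front
for start in range(0, len(y_stream), batch_size):
    X_batch = X_stream[start:start + batch_size]
    y_batch = y_stream[start:start + batch_size]
    model.partial_fit(X_batch, y_batch, classes=classes)   # the model is updated and the batch can be forgotten
print "accuracy on the last batch: %.2f" % model.score(X_batch, y_batch)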
Online learning techniques are related to streaming algorithms, where you see every data point only once. Think about incoming Twitter data: it gets loaded into the algorithms and then the observation (tweet) is discarded, because the sheer volume of incoming tweets might soon overwhelm the hardware. Online learning algorithms differ from streaming algorithms in that they can see the same observations multiple times. Both can learn from observations one by one; where they differ is that online algorithms can also be used on a static data source as well as on a streaming source, by presenting the data in small batches (as small as a single observation), which lets you go over the data multiple times. That isn't the case with a streaming algorithm, where data flows into the system and you typically need to do the calculations immediately.

Dividing a large matrix into many small ones
Whereas previously we barely needed to deal with how exactly an algorithm estimates its parameters, diving into this can sometimes help. By cutting a large data table into small matrices, for instance, we can still do a linear regression. The logic behind this matrix splitting, and how linear regression can be calculated with matrices, is covered in the sidebar below. It suffices to know for now that the Python libraries we're about to use will take care of the matrix splitting, and that linear regression variable weights can be calculated using matrix calculus.

Block matrices and the matrix formula for linear regression coefficient estimation
Certain algorithms can be translated into versions that use blocks of matrices instead of full matrices. When you partition a matrix into a block matrix, you divide the full matrix into parts and work with the smaller parts instead of the full matrix. In this case you can load the smaller matrices into memory and perform the calculations, thereby avoiding an out-of-memory error. The figure below shows how matrix addition can be rewritten in terms of submatrices: the matrices A and B are each split into an upper and a lower block, and A + B is obtained by adding the corresponding blocks.

Figure: Block matrices can be used to calculate the sum of the matrices A and B.
The formula in the figure shows that there's no difference between adding the matrices A and B together in one step or first adding the upper halves of the matrices and then adding the lower halves. All the common matrix and vector operations, such as multiplication, inversion, and singular value decomposition (a variable-reduction technique like PCA), can be written in terms of block matrices. Block matrix operations save memory by splitting the problem into smaller blocks and are easy to parallelize. Although most numerical packages have highly optimized code, they work only with matrices that fit into memory, and they'll use block matrices in memory when advantageous. With out-of-memory matrices they don't optimize this for you; it's up to you to partition the matrix into smaller matrices and to implement the block matrix version.

Linear regression is a way to predict continuous variables with a linear combination of their predictors; one of the most basic ways to perform the calculation is a technique called ordinary least squares. The formula in matrix form is

    beta = (X'X)^(-1) X'y

where beta is the vector of coefficients you want to retrieve, X is the matrix of predictors, and y is the target variable.

The Python tools we have at our disposal to accomplish our task are the following:
- bcolz is a Python library that can store data arrays compactly and uses the hard drive when an array no longer fits into main memory.
- Dask is a library that enables you to optimize the flow of calculations and makes performing calculations in parallel easier. It doesn't come packaged with the default Anaconda setup, so make sure to run conda install dask in your virtual environment before running the code below. Note that some errors have been reported when importing Dask on certain Python builds. Dask depends on a few other libraries (such as toolz), but the dependencies should be taken care of automatically by pip or conda.

The following listing demonstrates block matrix calculations with these libraries, for those who want to give it a try. (Givens transformations are easier to achieve than Householder transformations when calculating singular value decompositions.)
Listing: Block matrix calculations with the bcolz and Dask libraries

import dask.array as da
import bcolz as bc
import numpy as np

n = 1e5                                    # number of observations (scientific notation); feel free to change this

# create fake data: np.arange(n).reshape(n/2, 2) gives a matrix of n/2 rows by 2 columns.
# bc.carray is a numpy-array extension that can swap to disc and stores the data compressed:
# rootdir creates a file on disc (next to wherever you ran this code from) in case you run
# out of RAM, mode 'w' is the write mode, and dtype 'float64' is the storage type of the data
ar = bc.carray(np.arange(n).reshape(int(n / 2), 2), dtype='float64',
               rootdir='ar.bcolz', mode='w')
y = bc.carray(np.arange(n / 2), dtype='float64', rootdir='yy.bcolz', mode='w')

# create block matrices for the predictors (ar) and the target (y): a block matrix is a matrix
# cut into pieces (blocks); da.from_array reads the data from disc or RAM (wherever it currently
# resides) and chunks defines the size of every block
dax = da.from_array(ar, chunks=(1000, 2))
dy = da.from_array(y, chunks=1000)

# XTX is defined ("lazily", not calculated yet) as the predictor matrix multiplied by its
# transpose: a building block of the matrix formula for linear regression
XTX = dax.T.dot(dax)
# Xy is the target vector multiplied by the transposed predictor matrix; again only defined
Xy = dax.T.dot(dy)

# the coefficients are calculated with the matrix linear regression formula:
# np.linalg.inv() is the ^(-1), or inversion, of the matrix and .dot() multiplies matrices
coefficients = np.linalg.inv(XTX.compute()).dot(Xy.compute())

coef = da.from_array(coefficients, chunks=2)   # the last step returned a numpy array, so we explicitly convert the coefficients back into a block (Dask) array

ar.flush()                                 # flush the data; the large matrices no longer need to sit in memory
y.flush()

predictions = dax.dot(coef).compute()      # score the model (make predictions)
print predictions

Note that you don't need block matrix inversion, because XTX is a square matrix of size (number of predictors) by (number of predictors). That's fortunate, because Dask doesn't yet support block matrix inversion.
You can find more general information on matrix arithmetic on the Wikipedia page Matrix (mathematics).

MapReduce
MapReduce algorithms are easy to understand with an analogy. Imagine you were asked to count all the votes for the national elections. Your country has many parties, thousands of voting offices, and millions of voters. You could gather all the voting tickets from every office and count them centrally, or you could ask the local offices to count the votes per party and hand over only the results, which you then aggregate by party.

Map reducers follow a process similar to the second way of working: they first map values to a key and then aggregate on that key during the reduce phase. Have a look at the pseudo code in the following listing to get a better feeling for this.

Listing: MapReduce pseudo code example

for each person in voting office:
    yield (voted_party, 1)

for each vote in voting office:
    add_vote_to_party()

One of the advantages of MapReduce algorithms is that they're easy to parallelize and distribute. This explains their success in distributed environments such as Hadoop, but they can also be used on individual computers. We'll take a more in-depth look at them in the next chapter; an example (JavaScript) implementation is also provided later in the book. When implementing MapReduce in Python, you don't need to start from scratch: a number of libraries have done most of the work for you, such as Hadoopy, Octopy, Disco, or Dumbo.

Choosing the right data structure
Algorithms can make or break your program, but the way you store your data is of equal importance. Data structures have different storage requirements and also influence the performance of CRUD (create, read, update, and delete) and other operations on the data set. The figure below shows that you have many different data structures to choose from; we'll discuss three of them here: sparse data, tree data, and hash data. Let's first have a look at sparse data sets.

Sparse data
A sparse data set contains relatively little information compared to its number of entries (observations). Look at the example in the figure below: almost everything is 0, with just a single 1 present in the second observation. Data like this might look ridiculous, but it's often what you get when converting textual data to binary data. Imagine a set of thousands of completely unrelated Twitter
16,786 | problems sparse data choose the right algorithms solutions choose the right data structures tree hash choose the right tools handling large data general tips figure overview of data structures often applied in data science when working with large data figure example of sparse matrixalmost everything is other values are the exception in sparse matrix tweets most of them probably have fewer than wordsbut together they might have hundreds or thousands of distinct words in the on text mining we'll go through the process of cutting text documents into words and storing them as vectors but for now imagine what you' get if every word was converted to binary variablewith " representing "present in this tweet,and " meaning "not present in this tweet this would result in sparse data indeed the resulting large matrix can cause memory problems even though it contains little information luckilydata like this can be stored compacted in the case of figure it could look like thisdata [( , , )row column holds the value support for working with sparse matrices is growing in python many algorithms now support or return sparse matrices tree structures trees are class of data structure that allows you to retrieve information much faster than scanning through table tree always has root value and subtrees of childreneach with its childrenand so on simple examples would be your own family tree or |
16,787 | handling large data on single computer biological tree and the way it splits into branchestwigsand leaves simple decision rules make it easy to find the child tree in which your data resides look at figure to see how tree structure enables you to get to the relevant information quickly start search age age > <age leaf level danil basu smith asby jones tracy bristo cass figure example of tree data structuredecision rules such as age categories can be used to quickly locate person in family tree in figure you start your search at the top and first choose an age categorybecause apparently that' the factor that cuts away the most alternatives this goes on and on until you get what you're looking for for whoever isn' acquainted with the akinatorwe recommend visiting lamp that tries to guess person in your mind by asking you few questions about him or her try it out and be amazed or see how this magic is tree search trees are also popular in databases databases prefer not to scan the table from the first line until the lastbut to use device called an index to avoid this indices are often based on data structures such as trees and hash tables to find observations faster the use of an index speeds up the process of finding data enormously let' look at these hash tables hash tables hash tables are data structures that calculate key for every value in your data and put the keys in bucket this way you can quickly retrieve the information by looking in the right bucket when you encounter the data dictionaries in python are hash table implementationand they're close relative of key-value stores you'll encounter |
16,788 | them in the last example of this when you build recommender system within database hash tables are used extensively in databases as indices for fast information retrieval selecting the right tools with the right class of algorithms and data structures in placeit' time to choose the right tool for the job the right tool can be python library or at least tool that' controlled from pythonas shown figure the number of helpful tools available is enormousso we'll look at only handful of them problems choose the right algorithms choose the right data structures numexpr bcolz numba solutions python tools blaze handling large data choose the right tools theano cython general tips use python as master to control other tools figure overview of tools that can be used when working with large data python tools python has number of libraries that can help you deal with large data they range from smarter data structures over code optimizers to just-in-time compilers the following is list of libraries we like to use when confronted with large datacython--the closer you get to the actual hardware of computerthe more vital it is for the computer to know what types of data it has to process for computeradding is different from adding the first example consists of integers and the second consists of floatsand these calculations are performed by different parts of the cpu in python you don' have to specify what data types you're usingso the python compiler has to infer them but inferring data types is slow operation and is partially why python isn' one of the fastest languages available cythona superset of pythonsolves this problem by forcing the programmer to specify the data type while developing the program once the compiler has this informationit runs programs much faster see numexpr --numexpr is at the core of many of the big data packagesas is numpy for in-memory packages numexpr is numerical expression evaluator for numpy but can be many times faster than the original numpy to achieve |
16,789 | handling large data on single computer thisit rewrites your expression and uses an internal (just-in-timecompiler see numba --numba helps you to achieve greater speed by compiling your code right before you execute italso known as just-in-time compiling this gives you the advantage of writing high-level code but achieving speeds similar to those of code using numba is straightforwardsee bcolz--bcolz helps you overcome the out-of-memory problem that can occur when using numpy it can store and work with arrays in an optimal compressed form it not only slims down your data need but also uses numexpr in the background to reduce the calculations needed when performing calculations with bcolz arrays see blaze --blaze is ideal if you want to use the power of database backend but like the "pythonic wayof working with data blaze will translate your python code into sql but can handle many more data stores than relational databases such as csvsparkand others blaze delivers unified way of working with many databases and data libraries blaze is still in developmentthoughso many features aren' implemented yet see index html theano--theano enables you to work directly with the graphical processing unit (gpuand do symbolical simplifications whenever possibleand it comes with an excellent just-in-time compiler on top of that it' great library for dealing with an advanced but useful mathematical concepttensors see deeplearning net/software/theanodask--dask enables you to optimize your flow of calculations and execute them efficiently it also enables you to distribute calculations see dask pydata org/en/latestthese libraries are mostly about using python itself for data processing (apart from blazewhich also connects to databasesto achieve high-end performanceyou can use python to communicate with all sorts of databases or other software use python as master to control other tools most software and tool producers support python interface to their software this enables you to tap into specialized pieces of software with the ease and productivity that comes with python this way python sets itself apart from other popular data science languages such as and sas you should take advantage of this luxury and exploit the power of specialized tools to the fullest extent possible features case study using python to connect to nosql databaseas does with graph data let' now have look at more general helpful tips when dealing with large data |
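Before moving on, and to give a feel for what these libraries buy you, here is a small sketch (assuming numexpr is installed, as it is in most scientific Python distributions) comparing a plain NumPy expression with its numexpr equivalent. The rewritten expression avoids large temporary arrays and typically runs noticeably faster on big inputs.

import numpy as np
import numexpr as ne

a = np.random.rand(int(1e7))
b = np.random.rand(int(1e7))

result_numpy = 2 * a ** 3 + 4 * b                   # plain NumPy: builds several temporary arrays
result_numexpr = ne.evaluate("2 * a**3 + 4 * b")    # numexpr: compiles the expression and evaluates it in chunks
print np.allclose(result_numpy, result_numexpr)     # same numbers, different route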
16,790 | general programming tips for dealing with large data sets the tricks that work in general programming context still apply for data science several might be worded slightly differentlybut the principles are essentially the same for all programmers this section recapitulates those tricks that are important in data science context you can divide the general tricks into three partsas shown in the figure mind mapdon' reinvent the wheel use tools and libraries developed by others get the most out of your hardware your machine is never used to its full potentialwith simple adaptions you can make it work harder reduce the computing need slim down your memory and processing needs as much as possible problems solutions handling large data don' reinvent the wheel general tips get the most out of your hardware reduce the computing needs figure overview of general programming best practices when working with large data "don' reinvent the wheelis easier said than done when confronted with specific problembut your first thought should always be'somebody else must have encountered this same problem before me don' reinvent the wheel "don' repeat anyoneis probably even better than "don' repeat yourself add value with your actionsmake sure that they matter solving problem that has already been solved is waste of time as data scientistyou have two large rules that can help you deal with large data and make you much more productiveto bootexploit the power of databases the first reaction most data scientists have when working with large data sets is to prepare their analytical base tables inside database this method works well when the features you want to prepare are fairly simple when this preparation involves advanced modelingfind out if it' possible to employ user-defined functions and procedures the last example of this is on integrating database into your workflow use optimized libraries creating libraries like mahoutwekaand other machinelearning algorithms requires time and knowledge they are highly optimized |
16,791 | handling large data on single computer and incorporate best practices and state-of-the art technologies spend your time on getting things donenot on reinventing and repeating others people' effortsunless it' for the sake of understanding how things work then you must consider your hardware limitation get the most out of your hardware resources on computer can be idlewhereas other resources are over-utilized this slows down programs and can even make them fail sometimes it' possible (and necessaryto shift the workload from an overtaxed resource to an underutilized resource using the following techniquesfeed the cpu compressed data simple trick to avoid cpu starvation is to feed the cpu compressed data instead of the inflated (rawdata this will shift more work from the hard disk to the cpuwhich is exactly what you want to dobecause hard disk can' follow the cpu in most modern computer architectures make use of the gpu sometimes your cpu and not your memory is the bottleneck if your computations are parallelizableyou can benefit from switching to the gpu this has much higher throughput for computations than cpu the gpu is enormously efficient in parallelizable jobs but has less cache than the cpu but it' pointless to switch to the gpu when your hard disk is the problem several python packagessuch as theano and numbaprowill use the gpu without much programming effort if this doesn' sufficeyou can use cuda (compute unified device architecturepackage such as pycuda it' also well-known trick in bitcoin miningif you're interested in creating your own money use multiple threads it' still possible to parallelize computations on your cpu you can achieve this with normal python threads reduce your computing needs "working smart hard achievement this also applies to the programs you write the best way to avoid having large data problems is by removing as much of the work as possible up front and letting the computer work only on the part that can' be skipped the following list contains methods to help you achieve thisprofile your code and remediate slow pieces of code not every piece of your code needs to be optimizeduse profiler to detect slow parts inside your program and remediate these parts use compiled code whenever possiblecertainly when loops are involved whenever possible use functions from packages that are optimized for numerical computations instead of implementing everything yourself the code in these packages is often highly optimized and compiled otherwisecompile the code yourself if you can' use an existing packageuse either just-in-time compiler or implement the slowest parts of your code in |
a lower-level language such as C or Fortran and integrate it with your codebase. If you make the step to lower-level languages (languages that are closer to the universal computer bytecode), learn to work with computational libraries such as LAPACK, BLAS, Intel MKL, and ATLAS. These are highly optimized, and it's difficult to achieve similar performance on your own.

Avoid pulling data into memory. When you work with data that doesn't fit in your memory, avoid pulling everything into memory. A simple way of doing this is by reading the data in chunks and parsing it on the fly. This won't work for every algorithm, but it enables calculations on extremely large data sets.

Use generators to avoid intermediate data storage. Generators help you return data per observation instead of in batches; this way you avoid storing intermediate results.

Use as little data as possible. If no large-scale algorithm is available and you aren't willing to implement such a technique yourself, you can still train on only a sample of the original data.

Use your math skills to simplify calculations as much as possible. Take the equation (a + b)^2 = a^2 + 2ab + b^2, for example: the left side will be computed much faster than the right side. Even for this trivial example, it could make a difference when talking about big chunks of data.

Case study: Predicting malicious URLs
The internet is probably one of the greatest inventions of modern times. It has boosted humanity's development, but not everyone uses this great invention with honorable intentions. Many companies (Google, for one) try to protect us from fraud by detecting malicious websites for us. Doing so is no easy task, because the internet has billions of web pages to scan. In this case study we'll show how to work with a data set that no longer fits in memory.

What we'll use:
- Data: the data in this case study was made available as part of a research project. The project contains data collected over many days, and each observation has several million features. The target variable is 1 if it's a malicious website and -1 otherwise. For more information, please see Justin Ma, Lawrence Saul, Stefan Savage, and Geoffrey Voelker, "Beyond Blacklists: Learning to Detect Malicious Web Sites from Suspicious URLs," Proceedings of the ACM SIGKDD Conference, Paris.
- The scikit-learn library: you should have this library installed in your Python environment at this point, because we used it in the previous chapter.

As you can see, we won't be needing much for this case, so let's dive into it.
Step 1: Defining the research goal
The goal of our project is to detect whether certain URLs can be trusted or not. Because the data is so large, we aim to do this in a memory-friendly way. In the next step we'll first look at what happens if we don't concern ourselves with memory (RAM) issues.

Step 2: Acquiring the URL data
Start by downloading the data and placing it in a folder. Choose the data in SVMLight format. SVMLight is a text-based format with one observation per row; to save space, it leaves out the zeros.

Figure: The memory error you get when trying to load a large data set into memory

The following listing shows what happens when you try to read in just one of the files and create the normal matrix most algorithms expect; the figure shows the resulting memory error. The todense() method changes the data from its special sparse format to a normal matrix in which every entry contains a value.

Listing: Generating an out-of-memory error

import glob
from sklearn.datasets import load_svmlight_file

# Point the pattern to the folder where you unpacked the archive
# (the tar file needs to be untarred first).
files = glob.glob('C:\\Users\\Gebruiker\\Downloads\\url_svmlight\\url_svmlight\\*.svm')
print "there are %d files" % len(files)          # indication of the number of files

# Load the first file and turn it into a normal (dense) matrix.
# The data is a big but sparse matrix; by making it dense (every 0 is
# represented explicitly), we create an out-of-memory error.
X, y = load_svmlight_file(files[0])
X.todense()

Surprise, surprise, we get an out-of-memory error. That is, unless you run this code on a huge machine. After a few tricks you'll no longer run into these memory problems and will detect almost all of the malicious sites.
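To see why todense() fails while the sparse version fits comfortably in memory, you can estimate both footprints before converting. This small sketch is not part of the original listing; it assumes X is the sparse matrix returned by load_svmlight_file above.

import numpy as np

# X is the CSR matrix returned by load_svmlight_file above.
rows, cols = X.shape
dense_bytes = rows * cols * 8                                          # 8 bytes per float64 entry
sparse_bytes = X.data.nbytes + X.indices.nbytes + X.indptr.nbytes      # CSR storage

print "dense would need roughly %.1f GB" % (dense_bytes / 1e9)
print "sparse representation uses roughly %.1f MB" % (sparse_bytes / 1e6)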
We ran into a memory error while loading a single file, with all the other files still to go. Luckily, we have a few tricks up our sleeve. Let's try these techniques over the course of the case study:

- Use a sparse representation of the data.
- Feed the algorithm compressed data instead of raw data.
- Use an online algorithm to make predictions.

We'll go deeper into each "trick" when we get to use it. Now that we have our data locally, let's access it. Step 3 of the data science process, data preparation and cleansing, isn't necessary in this case because the URLs come pre-cleaned. We'll need a form of exploration before unleashing our learning algorithm, though.

Step 4: Data exploration
To see if we can even apply our first trick (sparse representation), we need to find out whether the data does indeed contain lots of zeros. We can check this with the following piece of code:

print "number of non-zero entries %2.6f" % (float(X.nnz) / (float(X.shape[0]) * float(X.shape[1])))

The fraction it prints is tiny: the overwhelming majority of the entries are zeros. Data that contains little information compared to the zeros is called sparse data. Such data can be saved more compactly. Instead of storing the full matrix

[[1, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 1]]

you store only the nonzero entries as (row, column, value) triples, such as [(0, 0, 1), (4, 4, 1)]. One of the file formats that implements this is SVMLight, and that's exactly why we downloaded the data in this format; a short sketch of this triple representation follows below.

We're not finished yet, though, because we need to get a feel for the dimensions within the data. To get this information we already need to keep the data compressed while checking for the maximum number of observations and variables, and we need to read the data in file by file. This way you consume even less memory. A second trick is to feed the CPU compressed files. In our example, the data is already packed in the tar.gz format: you unpack a file only when you need it, without writing it to the hard disk (the slowest part of your computer).
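As a quick illustration of the sparse (row, column, value) representation mentioned above, here's a small sketch using SciPy. The toy matrix is made up for illustration and isn't part of the case study data.

import numpy as np
from scipy.sparse import coo_matrix

# The same 5 x 5 matrix as above, stored as (row, column, value) triples.
rows = np.array([0, 4])
cols = np.array([0, 4])
values = np.array([1, 1])
sparse_matrix = coo_matrix((values, (rows, cols)), shape=(5, 5))

print sparse_matrix.nnz          # 2 stored entries instead of 25
print sparse_matrix.todense()    # only safe because this toy matrix is tiny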
For our example, shown in the following listing, we'll only work on the first few files, but feel free to use all of them.

Listing: Checking the data size

import tarfile
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report
from sklearn.datasets import load_svmlight_file
import numpy as np

# The uri variable holds the location in which you saved the downloaded
# archive; fill it out yourself for the code to run on your computer.
uri = 'C:\\Python Book\\url_svmlight.tar.gz'

# We don't know yet how many observations and variables we have,
# so we initialize both at 0.
max_obs = 0
max_vars = 0
i = 0
split = 5      # stop at the 5th file (instead of all of them) for demonstration purposes

# All the files together take up a couple of gigabytes; the trick is to leave
# the data compressed in main memory and only unpack what you need.
tar = tarfile.open(uri, "r:gz")
for tarinfo in tar:
    print "extracting %s, size %s" % (tarinfo.name, tarinfo.size)
    if tarinfo.isfile():
        # Unpack the files one by one to reduce the memory needed.
        f = tar.extractfile(tarinfo.name)
        # Use the helper function load_svmlight_file() to load a specific file.
        X, y = load_svmlight_file(f)
        # Adjust the maximum number of observations and variables when necessary.
        max_obs = np.maximum(max_obs, X.shape[0])
        max_vars = np.maximum(max_vars, X.shape[1])
        i += 1
    if i == split:     # stop when we reach the last file we want to inspect
        break
print "max observations: %s, max dimension: %s" % (max_obs, max_vars)

Part of the code needs some extra explanation. In this code we loop through the svm files inside the tar archive. We unpack the files one by one to reduce the memory needed. As these files are in the SVMLight format, we use the helper function load_svmlight_file() to load a specific file. Then we can see how many observations and variables a file has by checking the shape of the resulting data set. Armed with this information we can move on to model building.

Step 5: Model building
Now that we're aware of the dimensions of our data, we can apply the same two tricks (sparse representation of compressed files) and add the third (using an online algorithm) in the following listing. Let's find those harmful websites.
Listing: Creating a model to distinguish the malicious from the normal URLs

# The target variable can be 1 ("website safe to visit") or -1 ("website unsafe").
classes = [-1, 1]
# Set up the stochastic gradient descent classifier, an online algorithm.
# Note: newer scikit-learn versions call this loss "log_loss".
sgd = SGDClassifier(loss="log")
# Dimensionality of the data set: roughly the 3.2 million features mentioned earlier.
n_features = 3231961
split = 5      # stop at the 5th file (instead of all of them) for demonstration purposes
i = 0

tar = tarfile.open(uri, "r:gz")    # leave the data compressed; only unpack what you need
for tarinfo in tar:
    if i > split:                  # stop once we've passed the first split files
        break
    if tarinfo.isfile():
        # Unpack the files one by one to reduce the memory needed and load
        # each one with the helper function load_svmlight_file().
        f = tar.extractfile(tarinfo.name)
        X, y = load_svmlight_file(f, n_features=n_features)
        if i < split:
            # The online algorithm is fed the data file by file (in batches).
            sgd.partial_fit(X, y, classes=classes)
        if i == split:
            # Evaluate on a file the model hasn't been trained on.
            print classification_report(sgd.predict(X), y)
    i += 1

The code in this listing looks fairly similar to what we did before, apart from the stochastic gradient descent classifier SGDClassifier(). Here we trained the algorithm iteratively by presenting the observations in one file at a time with the partial_fit() function. Looping through only the first few files gives the output shown in the table below: the classification diagnostic measures precision, recall, f1-score, and support.

Table: Classification problem: can a website be trusted or not?

              precision    recall    f1-score    support
         -1      ...         ...        ...         ...
          1      ...         ...        ...         ...
avg / total      ...         ...        ...         ...

Only a small percentage of the malicious sites slip through undetected, and only a small percentage of the sites flagged as malicious are falsely accused. This is a decent result, so we can conclude that the methodology works.
If we rerun the analysis, the result might be slightly different, because the algorithm could converge slightly differently. If you don't mind waiting a while, you can go for the full data set; you can now handle all the data without problems. We won't have a sixth step (presentation or automation) in this case study.

Now let's look at a second application of our techniques: this time you'll build a recommender system inside a database. For a well-known example of recommender systems, visit the Amazon website. While browsing, you'll soon be confronted with recommendations: "People who bought this product also bought..."

Case study 2: Building a recommender system inside a database
In reality most of the data you work with is stored in a relational database, but most databases aren't suitable for data mining. As shown in this example, however, it's possible to adapt our techniques so you can do a large part of the analysis inside the database itself, thereby profiting from the database's query optimizer, which will optimize the code for you. In this example we'll go into how to use the hash table data structure and how to use Python to control other tools.

Tools and techniques needed
Before going into the case study we need to have a quick look at the required tools and the theoretical background to what we're about to do here.

Tools
- MySQL database: you need a MySQL database to work with. If you haven't installed the MySQL Community Server, you can download one from www.mysql.com. The appendix "Installing a MySQL server" explains how to set it up.
- A MySQL database connection library for Python: to connect to this server from Python you'll also need to install SQLAlchemy or another library capable of communicating with MySQL. We're using MySQLdb. On Windows you can't use conda right off the bat to install it. First install Binstar (another package management service) and look for the appropriate mysql-python package for your Python setup:

conda install binstar
binstar search -t conda mysql-python

The following command, entered into the Windows command line, worked for us (after activating the Python environment); substitute the channel you found with the search above:

conda install --channel <channel from the search results> mysql-python

Again, feel free to go for the SQLAlchemy library if that's something you're more comfortable with. We'll also need the pandas Python library, but that should already be installed by now.
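As a quick sanity check that the database infrastructure works, here's a minimal sketch that connects to a local MySQL server through SQLAlchemy and pandas. The credentials, host, and schema name are placeholders; replace them with your own setup.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials and schema name; adjust to your own MySQL setup.
engine = create_engine('mysql+mysqldb://user:password@localhost/test')

# Run a trivial query through pandas to confirm the connection works.
print pd.read_sql('SELECT 1 AS connection_test', engine)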
With the infrastructure in place, let's dive into a few of the techniques.

Techniques
A simple recommender system will look for customers who've rented similar movies as you have and then suggest those that the others have watched but you haven't seen yet. This technique is called k-nearest neighbors in machine learning.

A customer who behaves similarly to you isn't necessarily the most similar customer, though. You'll use a technique that ensures you can find similar customers (a local optimum) without any guarantee that you've found the most similar customer (the global optimum). A common technique used to achieve this is called locality-sensitive hashing; a good overview of papers on this topic can be found online. The idea behind locality-sensitive hashing is simple: construct functions that map similar customers close together (they're put in a bucket with the same label) and make sure that objects that are different are put in different buckets.

Central to this idea is the function that performs the mapping. It's called a hash function: a function that maps any range of input to a fixed output. The simplest hash function concatenates the values from several random columns; it doesn't matter how many columns go in (scalable input), it brings them back to a single column (fixed output). You'll set up three hash functions to find similar customers. The three functions each take the values of a handful of movies: the first function takes the values of one set of movies, the second function takes a second set, and the last function takes a third set. This will ensure that customers who are in the same bucket share at least several movies. But the customers inside one bucket might still differ on the movies that weren't included in the hashing functions. To solve this you still need to compare the customers within the bucket with each other, and for that you need a new distance measure.

The distance we'll use to compare customers is called the Hamming distance. The Hamming distance is used to calculate how much two strings differ: it is defined as the number of positions at which the corresponding characters differ. The table below offers a few examples; a short helper function for it follows after the table.

Table: Examples of calculating the Hamming distance

String 1    String 2    Hamming distance
hat         cat         1
hat         mad         2
tiger       tigre       2
paris       rome        4
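The examples in the table can be reproduced with a few lines of Python. This is an illustrative helper, not code from the case study; for strings of unequal length it simply compares the overlapping positions.

def hamming(s1, s2):
    # Count the positions at which the two strings differ;
    # zip() stops at the end of the shorter string, so only
    # overlapping positions are compared.
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

print hamming('hat', 'cat')      # 1
print hamming('tiger', 'tigre')  # 2
print hamming('paris', 'rome')   # 4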
Comparing multiple columns is an expensive operation, so you'll need a trick to speed this up. Because the columns contain a binary (0 or 1) variable to indicate whether a customer has bought a movie or not, you can concatenate the information so that the same information is contained in a single new column. The table below shows the resulting "movies" variable, which contains as much information as all the individual movie columns combined. This is also how DNA works: all the information is stored in one long string.

Table: Combining the information from the different movie columns into a single movies column (illustrative values)

             Movie 1    Movie 2    Movie 3    Movie 4    Movies
Customer 1      1          0          1          1        1011
Customer 2      0          1          0          1        0101

This allows you to calculate the Hamming distance much more efficiently. By handling this string as a number and working bit by bit, you can exploit the XOR operator. The outcome of the XOR operator (^) is as follows:

0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0

With this in place, the process for finding similar customers becomes very simple. Let's first look at it in pseudocode.

Preprocessing:
1. Define p functions (for instance, three) that each select k entries from the vector of movies. Here we take three functions (p) that each take the values of a few movies (k).
2. Apply these functions to every point (customer) and store the results in a separate column. In the literature each function is called a hash function and each column will store a bucket.

Querying a point q:
1. Apply the same functions to the point (observation) q you want to query.
2. For every function, retrieve the points that sit in the corresponding bucket.
3. Stop when you've retrieved all the points in the buckets or reached a preset maximum number of points.
4. Calculate the distance to each retrieved point and return the points with the minimum distance.

A minimal Python sketch of this procedure follows below.
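The following sketch is self-contained and uses made-up customers and an arbitrary choice of hash columns; it isn't the case study implementation (that will live inside the database), but it shows the bucketing and the XOR-based Hamming distance working together.

from collections import defaultdict

# Made-up data: one bit string per customer, one bit per movie
# (1 = has seen the movie, 0 = has not).
customers = {
    'customer_1': '1011010010',
    'customer_2': '1011010110',
    'customer_3': '0100101101',
    'customer_4': '0011010011',
}

# Three hash functions: each one simply selects the bits of two movie columns.
hash_columns = [(0, 3), (1, 5), (2, 8)]

def bucket_labels(bits):
    # One label per hash function: the concatenated values of its columns.
    return [''.join(bits[c] for c in cols) for cols in hash_columns]

# Preprocessing: store every customer in one bucket per hash function.
buckets = defaultdict(set)
for name, bits in customers.items():
    for func_nr, label in enumerate(bucket_labels(bits)):
        buckets[(func_nr, label)].add(name)

def hamming(bits1, bits2):
    # XOR the bit strings as integers; the remaining 1 bits mark the differences.
    return bin(int(bits1, 2) ^ int(bits2, 2)).count('1')

def most_similar(name):
    bits = customers[name]
    # Querying: collect only the customers that share at least one bucket.
    candidates = set()
    for func_nr, label in enumerate(bucket_labels(bits)):
        candidates |= buckets[(func_nr, label)]
    candidates.discard(name)
    # Compare the query only against the candidates, not the whole customer base.
    return min(candidates, key=lambda other: hamming(bits, customers[other]))

print most_similar('customer_1')    # customer_2: differs in only one movie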