Case study: building a recommender system inside a database

Let's look at an actual implementation in Python to make this all clearer.

The research question
Say you're working in a video store and the manager asks you whether it's possible to use the information on what movies people rent to predict what other movies they might like. Your boss has stored the data in a MySQL database, and it's up to you to do the analysis. What he is referring to is a recommender system: an automated system that learns people's preferences and recommends movies and other products the customers haven't tried yet.

The goal of our case study is to create a memory-friendly recommender system. We'll achieve this using a database and a few extra tricks. Because we're going to generate the data ourselves, we can skip the data retrieval step and move right into data preparation, and after that we can skip data exploration and move straight into model building.

Data preparation
The data your boss has collected looks like the table below; we'll create this data ourselves for the sake of demonstration.

Table: Excerpt from the client database and the movies customers rented

    Customer       Movie 1   Movie 2   Movie 3   Movie 4
    Jack Dani        ...       ...       ...       ...
    Wilhelmson       ...       ...       ...       ...
    Jane Dane        ...       ...       ...       ...
    Xi Liu           ...       ...       ...       ...
    Eros Mazo        ...       ...       ...       ...

For each customer you get an indication of whether they've rented the movie before (1) or not (0). Let's see what else you'll need so you can give your boss the recommender system he desires.

First, let's connect Python to MySQL to create our data. Make a connection to MySQL using your username and password. In the following listing we used a database called "test"; replace the user, password, and database name with the appropriate values for your setup, and retrieve the connection and the cursor. A database cursor is a control structure that remembers where you currently are in the database.
Listing: Creating customers in the database

    import MySQLdb
    import pandas as pd

    user = '****'                    # fill out your own username
    password = '****'                # and password
    database = 'test'                # the schema name for your setup
    mc = MySQLdb.connect('localhost', user, password, database)
    cursor = mc.cursor()

    nr_customers = 100               # number of simulated customers
    colnames = ["movie%i" % i for i in range(1, 33)]
    pd.np.random.seed(2015)
    generated_customers = pd.np.random.randint(0, 2, 32 * nr_customers).reshape(nr_customers, 32)

    data = pd.DataFrame(generated_customers, columns=list(colnames))
    data.to_sql('cust', mc, flavor='mysql', index=True, if_exists='replace', index_label='cust_id')

First we establish the connection; you'll need to fill out your own username, password, and schema name (the variable "database"). Next we simulate a database of customers: for each customer we randomly assign whether they did or didn't see each movie, and we have 32 movies in total. The data is first created in a pandas data frame and then written to a MySQL table called "cust"; if this table already exists, it's replaced.

Note: you might run across a warning when running this code. The warning states that the "mysql" flavor with a DBAPI connection is deprecated and will be removed in future versions, and that MySQL will be further supported with SQLAlchemy engines. Feel free to switch to SQLAlchemy or another library already; we'll use SQLAlchemy elsewhere but used MySQLdb here to broaden the examples. A sketch of the SQLAlchemy route follows the list below.

To efficiently query our database later on, we'll need additional data preparation, including the following things:

- Creating bit strings. The bit strings are compressed versions of the columns' content (1 and 0 values). First these binary values are concatenated; then the resulting bit string is reinterpreted as a number. This might sound abstract now but will become clearer in the code.
- Defining hash functions. The hash functions will in fact create the bit strings.
- Adding an index to the table, to quicken data retrieval.
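As promised in the note above, here is a minimal sketch of the same data generation using an SQLAlchemy engine instead of a raw MySQLdb connection. The connection string and the pymysql driver are assumptions for illustration, not part of the original listing; adjust them to your own setup.

    from sqlalchemy import create_engine
    import pandas as pd
    import numpy as np

    # Assumed connection string; replace user, password, and database name as needed.
    engine = create_engine("mysql+pymysql://user:password@localhost/test")

    nr_customers = 100                                    # illustrative size
    colnames = ["movie%i" % i for i in range(1, 33)]
    np.random.seed(2015)
    generated = np.random.randint(0, 2, (nr_customers, 32))

    data = pd.DataFrame(generated, columns=colnames)
    # to_sql accepts an SQLAlchemy engine directly; no deprecated 'flavor' argument needed.
    data.to_sql("cust", engine, index=True, if_exists="replace", index_label="cust_id")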
Creating bit strings
Now you make an intermediate table suited for querying, apply the hash functions, and represent each sequence of bits as a decimal number, which you can finally place in a table. First you need to create the bit strings: a string of ones and zeros (for example "11111111") has to be converted to a binary or numeric value to make the hamming function work. We opted for a numeric representation, as shown in the next listing.

Listing: Creating bit strings

    def createNum(x1, x2, x3, x4, x5, x6, x7, x8):
        return [int('%d%d%d%d%d%d%d%d' % (i1, i2, i3, i4, i5, i6, i7, i8), 2)
                for (i1, i2, i3, i4, i5, i6, i7, i8) in zip(x1, x2, x3, x4, x5, x6, x7, x8)]

    assert int('1111', 2) == 15
    assert int('1100', 2) == 12
    assert createNum([1,0], [1,0], [1,0], [1,0], [1,0], [1,0], [1,0], [1,0]) == [255, 0]

    store = pd.DataFrame()
    store['bit1'] = createNum(data.movie1,  data.movie2,  data.movie3,  data.movie4,
                              data.movie5,  data.movie6,  data.movie7,  data.movie8)
    store['bit2'] = createNum(data.movie9,  data.movie10, data.movie11, data.movie12,
                              data.movie13, data.movie14, data.movie15, data.movie16)
    store['bit3'] = createNum(data.movie17, data.movie18, data.movie19, data.movie20,
                              data.movie21, data.movie22, data.movie23, data.movie24)
    store['bit4'] = createNum(data.movie25, data.movie26, data.movie27, data.movie28,
                              data.movie29, data.movie30, data.movie31, data.movie32)

We represent each customer's viewing history as a numeric value. The string is a concatenation of zeros (0) and ones (1), because these indicate whether someone has seen a certain movie or not, and that string is then regarded as bit code: for example, the string "0011" is the same as the number 3. createNum() takes in eight columns of values, concatenates the column values into a string, and turns the bit code of that string into a number. The asserts test whether the function works correctly; the binary code 1111 is the same as 15 (= 1*8 + 1*4 + 1*2 + 1*1). If an assert fails it raises an AssertionError; otherwise nothing happens.

The last four lines translate the movie columns to bit strings in numeric form: each bit string represents 8 movies (4 * 8 = 32 movies). Note that you could use a single 32-bit string instead of 4 * 8 to keep the code short. By converting the information of 8 columns into a single number, we compressed it for later lookup. Slicing the store data frame (for example store[0:10]) shows the first customers' information on all movies after the bit string to numeric conversion.
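As a quick aside, here is a tiny self-contained check (not part of the original listing) that this compression is lossless: eight 0/1 flags become one small integer, and the flags can be recovered from it again.

    flags = [1, 0, 1, 1, 0, 0, 1, 0]                      # movie1..movie8 for one customer
    as_number = int(''.join(str(f) for f in flags), 2)    # the same trick createNum() uses
    print(as_number)                                      # 178

    recovered = [int(b) for b in format(as_number, '08b')]
    assert recovered == flags                             # nothing was lost in the compression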
The next step is to create the hash functions, because they'll enable us to sample the data we'll use to determine whether two customers have similar behavior.

Creating a hash function
The hash functions we create take the values of three movies for a customer. We decided in the theory part of this case study to create three hash functions: the first combines three movies, the second combines another three, and the third three more. Which movies you pick is up to you; the choice can even be made randomly. In the following listing we simply use the first nine movies, three per bucket.

Listing: Creating hash functions

    def hash_fn(x1, x2, x3):
        return ['%d%d%d' % (i, j, k) for (i, j, k) in zip(x1, x2, x3)]

    assert hash_fn([1,0], [1,1], [0,0]) == ['110', '010']

    store['bucket1'] = hash_fn(data.movie1, data.movie2, data.movie3)
    store['bucket2'] = hash_fn(data.movie4, data.movie5, data.movie6)
    store['bucket3'] = hash_fn(data.movie7, data.movie8, data.movie9)

    store.to_sql('movie_comparison', mc, flavor='mysql', index=True,
                 index_label='cust_id', if_exists='replace')

hash_fn() is defined exactly like the createNum() function but without the final conversion to a number, and for three columns instead of eight: it concatenates the values of three movies into a bit string such as '110'. The assert shows how it concatenates the values for every observation; if no error is raised, it works correctly. When a client has seen the first movie of a bucket but not the other two, the function returns '100' for that bucket; when the client has seen the first two but not the third, it returns '110', and so on. Note that this is sampling on columns: all observations (customers) are kept. The last line stores this information in the database.

If we look at the current result we see the variables we created earlier (bit1, bit2, bit3, bit4) next to the buckets built from the chosen movies (figure: information from the bit string compression and the sampled movies).
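Before moving the lookup into MySQL, it can help to see the whole bucket-then-rank pattern in pandas alone. The helper below is hypothetical (it isn't one of the book's listings) and assumes the store data frame built above: it keeps only customers who share at least one bucket with a chosen customer and ranks that shortlist by Hamming distance.

    def similar_candidates(store, cust_id, top_n=10):
        me = store.loc[cust_id]
        shares_bucket = ((store['bucket1'] == me['bucket1']) |
                         (store['bucket2'] == me['bucket2']) |
                         (store['bucket3'] == me['bucket3']))
        candidates = store[shares_bucket].drop(cust_id)   # everyone sharing a bucket, minus ourselves

        def hamming(row):
            # XOR the four 8-movie blocks and count the differing bits.
            return sum(bin(int(row[b]) ^ int(me[b])).count('1')
                       for b in ['bit1', 'bit2', 'bit3', 'bit4'])

        return candidates.apply(hamming, axis=1).sort_values().head(top_n)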
The last trick we'll apply is indexing the customer table so lookups happen more quickly.

Adding an index to the table
Now you must add indices to speed up retrieval, as needed in a real-time system. This is shown in the next listing.

Listing: Creating an index

    def createIndex(column, cursor):
        sql = 'CREATE INDEX %s ON movie_comparison (%s);' % (column, column)
        cursor.execute(sql)

    createIndex('bucket1', cursor)
    createIndex('bucket2', cursor)
    createIndex('bucket3', cursor)

The small createIndex() function makes it easy to create indices; here we put an index on each bit bucket, which will quicken retrieval. With the data indexed we can now move on to the model building part.

Model building
In this case study no actual machine learning or statistical model is implemented. Instead we'll use a far simpler technique: string distance calculation. Two strings can be compared using the Hamming distance, as explained earlier in the theoretical intro to the case study. To use the Hamming distance in the database we need to define it as a function.

Creating the Hamming distance function
We implement this as a user-defined function. It calculates the distance for a 32-bit integer (actually 4 * 8 bits), as shown in the following listing.

Listing: Creating the Hamming distance function

    sql = '''CREATE FUNCTION HAMMINGDISTANCE(
               A0 BIGINT, A1 BIGINT, A2 BIGINT, A3 BIGINT,
               B0 BIGINT, B1 BIGINT, B2 BIGINT, B3 BIGINT)
             RETURNS INT DETERMINISTIC
             RETURN BIT_COUNT(A0 ^ B0) + BIT_COUNT(A1 ^ B1) +
                    BIT_COUNT(A2 ^ B2) + BIT_COUNT(A3 ^ B3);'''
    cursor.execute(sql)

    sql = '''SELECT HAMMINGDISTANCE(
               b'11111111', b'00000000', b'11011111', b'11111111',
               b'11111111', b'10001001', b'11011111', b'11111111')'''
    pd.read_sql(sql, mc)

The function takes eight input arguments: four bit strings of length 8 for the first customer and another four for the second customer, so we can compare two customers side by side across all 32 movies. The function is stored in the database, and you can only do this once; running the CREATE FUNCTION statement a second time results in an error: OperationalError: (1304, 'FUNCTION HAMMINGDISTANCE already exists').

To check the function you can run the SELECT statement with fixed strings. Notice the "b" before each string, indicating that you're passing bit values. The outcome tells you in how many places the two series of strings differ (3 for the example values shown here).
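If you'd like to sanity-check the SQL function from the Python side, the same calculation is easy to express with XOR and bit counting. A minimal sketch, assuming plain Python outside the database:

    def hamming_distance(a_bits, b_bits):
        # Number of differing bits across the four integer-encoded 8-movie blocks.
        return sum(bin(a ^ b).count('1') for a, b in zip(a_bits, b_bits))

    # Mirrors the SQL check above: identical blocks contribute 0, differing bits add up.
    print(hamming_distance([255, 0, 223, 255], [255, 137, 223, 255]))   # 3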
If all is well, the output of this check is the expected distance. Now that we have our Hamming distance function in place, we can use it to find customers similar to a given customer, and this is exactly what we want our application to do. Let's move on to the last part: utilizing our setup as a sort of application.

Presentation and automation
Now that we have it all set up, our application needs to perform two steps when confronted with a given customer:

- Look for similar customers.
- Suggest movies the customer has yet to see, based on what he or she has already viewed and the viewing history of the similar customers.

First things first: select ourselves a lucky customer.

Finding a similar customer
Time to perform real-time queries. In the following listing one customer (we use customer 27 here; any id will do) is the happy one who'll get his next movies selected for him. But first we need to select customers with a similar viewing history.

Listing: Finding similar customers

    customer_id = 27
    sql = "SELECT * FROM movie_comparison WHERE cust_id = %s" % customer_id
    cust_data = pd.read_sql(sql, mc)

    sql = """SELECT cust_id,
                    HAMMINGDISTANCE(bit1, bit2, bit3, bit4, %s, %s, %s, %s) AS distance
             FROM movie_comparison
             WHERE bucket1 = '%s' OR bucket2 = '%s' OR bucket3 = '%s'
             ORDER BY distance LIMIT 3""" % (
                cust_data.bit1[0], cust_data.bit2[0], cust_data.bit3[0], cust_data.bit4[0],
                cust_data.bucket1[0], cust_data.bucket2[0], cust_data.bucket3[0])
    shortlist = pd.read_sql(sql, mc)

We do a two-step sampling. In the first sampling, a candidate's bucket index must be exactly the same as the one of the selected customer (it's based on three movies): the selected people must have seen, or not seen, these movies exactly as our customer did. The second sampling is a ranking based on the 8-bit strings, which take into account all the movies in the database.

The query returns the customers that most resemble our chosen customer; the customer himself naturally ends up first with a distance of 0, followed by his nearest neighbors. Don't forget that the data was generated randomly, so anyone replicating this example might receive different results.
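The listing builds its SQL by string formatting, which is fine for a quick demo. In anything user-facing you'd usually let the database driver bind the values instead; a hedged sketch of the same shortlist query with placeholders, assuming the same MySQLdb connection mc and the cust_data frame from above:

    sql = """SELECT cust_id,
                    HAMMINGDISTANCE(bit1, bit2, bit3, bit4, %s, %s, %s, %s) AS distance
             FROM movie_comparison
             WHERE bucket1 = %s OR bucket2 = %s OR bucket3 = %s
             ORDER BY distance LIMIT 3"""
    params = [int(cust_data.bit1[0]), int(cust_data.bit2[0]),
              int(cust_data.bit3[0]), int(cust_data.bit4[0]),
              cust_data.bucket1[0], cust_data.bucket2[0], cust_data.bucket3[0]]
    shortlist = pd.read_sql(sql, mc, params=params)   # the driver quotes the values for us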
Now we can finally select a movie for our customer to watch. The shortlist returned by the query lists the most similar customers, with their cust_id and distance.

Finding a new movie
We need to look at movies our customer hasn't seen yet but the nearest customer has, as shown in the following listing. This is also a good check to see whether your distance function worked correctly: although the match may not be the closest customer possible, it should be a good one. By using the hashed indexes you've gained enormous speed when querying large databases.

Listing: Finding an unseen movie

    cust = pd.read_sql('SELECT * FROM cust WHERE cust_id IN (%s, %s, %s)' %
                       tuple(shortlist.cust_id), mc)
    dif = cust.T                     # transpose for convenience: one column per customer
    dif = dif[dif[0] != dif[1]]      # keep the movies where the viewing histories differ

We select the movie columns for our customer and the most similar customers, transpose the result for convenience, and keep the movies our customer didn't see yet but the similar customers have seen. From the resulting table you can recommend a movie based on the nearest customer's behavior.
Mission accomplished: our happy movie addict can now indulge himself with a new movie, tailored to his preferences. In the next chapter we'll look at even bigger data and see how we can handle that using the Hortonworks Sandbox we downloaded earlier.

Summary
This chapter discussed the following topics:

- The main problems you can run into when working with large data sets are: not enough memory, long-running programs, and resources that form bottlenecks and cause speed problems.
- There are three main types of solutions to these problems: adapt your algorithms, use different data structures, and rely on tools and libraries.
- Three main techniques can be used to adapt an algorithm: present the algorithm with data one observation at a time instead of loading the full data set at once; divide matrices into smaller matrices and use these to make your calculations; and implement the MapReduce algorithm (using Python libraries such as Hadoopy, Octopy, Disco, or Dumbo).
- Three main data structures are used in data science to cope with size. The first is a type of matrix that contains relatively little information: the sparse matrix. The second and third are data structures that enable you to retrieve information quickly in a large data set: the hash function and the tree structure.
- Python has many tools that can help you deal with large data sets. Several tools help with the size of the volume, others help parallelize the computations, and still others overcome the relatively slow speed of Python itself. It's also easy to use Python as a tool to control other data science tools, because Python is often chosen as the language in which to implement an API.
- The best practices from computer science are also valid in a data science context, so applying them can help you overcome the problems you face in a big data context.
First steps in big data

This chapter covers
- Taking your first steps with two big data applications: Hadoop and Spark
- Using Python to write big data jobs
- Building an interactive dashboard that connects to data stored in a big data database

Over the last two chapters we've steadily increased the size of the data. First we worked with data sets that could fit into the main memory of a computer; the previous chapter introduced techniques to deal with data sets that were too large to fit in memory but could still be processed on a single computer. In this chapter you'll learn to work with technologies that can handle data so large that a single node (computer) no longer suffices; in fact, it may not even fit on a hundred computers. Now that's a challenge, isn't it?

We'll stay as close as possible to the way of working from the previous chapters; the focus is on giving you the confidence to work on a big data platform. To do this, the main part of this chapter is a case study in which you'll create a dashboard that allows you to explore data from the lenders of a bank.
By the end of this chapter you'll have gone through the following steps:

- Load data into Hadoop, the most common big data platform.
- Transform and clean the data with Spark.
- Store it in a big data database called Hive.
- Interactively visualize this data with Qlik Sense, a visualization tool.

All of this (apart from the visualization) will be coordinated from within a Python script. The end result is a dashboard that allows you to explore the data (figure: interactive Qlik dashboard).

Bear in mind that we'll only scratch the surface of both practice and theory in this introductory chapter on big data technologies. The case study touches three big data technologies (Hadoop, Spark, and Hive), but only for data manipulation, not model building. It will be up to you to combine the big data technologies you get to see here with the model-building techniques we touched upon in previous chapters.

Distributing data storage and processing with frameworks
New big data technologies such as Hadoop and Spark make it much easier to work with and control a cluster of computers. Hadoop can scale up to thousands of computers, creating a cluster with petabytes of storage. This enables businesses to grasp the value of the massive amount of data available.
Hadoop: a framework for storing and processing large data sets
Apache Hadoop is a framework that simplifies working with a cluster of computers. It aims to be all of the following things and more:

- Reliable: it automatically creates multiple copies of the data and redeploys processing logic in case of failure.
- Fault tolerant: it detects faults and applies automatic recovery.
- Scalable: data and its processing are distributed over clusters of computers (horizontal scaling).
- Portable: it's installable on all kinds of hardware and operating systems.

The core framework is composed of a distributed file system, a resource manager, and a system to run distributed programs. In practice it allows you to work with the distributed file system almost as easily as with the local file system of your home computer, but in the background the data can be scattered among thousands of servers.

The different components of Hadoop
At the heart of Hadoop we find a distributed file system (HDFS), a method to execute programs on a massive scale (MapReduce), and a system to manage the cluster resources (YARN). On top of that, an ecosystem of applications arose, such as the databases Hive and HBase and frameworks for machine learning such as Mahout. We'll use Hive in this chapter; Hive has a language based on the widely used SQL to interact with data stored inside the database.

Figure: A sample from the ecosystem of applications that arose around the Hadoop core framework: Oozie (workflow manager), Sqoop (data exchange), ZooKeeper (coordination), Flume (log collector), Pig (scripting), Mahout (machine learning), HCatalog (metadata), Hive (SQL engine), HBase (columnar store), MapReduce (distributed processing framework), YARN (cluster resource management), HDFS (Hadoop file system), Ambari (provisioning, managing, and monitoring), and Ranger (security).
It's possible to use the popular tool Impala to query Hive data many times faster. We won't go into Impala in this book, but more information can be found at impala.io. We already had a short intro to MapReduce earlier, but let's elaborate a bit here because it's such a vital part of Hadoop.

MapReduce: how Hadoop achieves parallelism
Hadoop uses a programming method called MapReduce to achieve parallelism. A MapReduce algorithm splits up the data, processes it in parallel, and then sorts, combines, and aggregates the results back together. However, the MapReduce algorithm isn't well suited for interactive analysis or iterative programs, because it writes the data to disk in between each computational step, which is expensive when working with large data sets.

Let's see how MapReduce would work on a small fictitious example. You're the director of a toy company. Every toy has two colors, and when a client orders a toy from the web page, the web page puts an order file on Hadoop with the colors of the toy. Your task is to find out how many color units you need to prepare. You'll use a MapReduce-style algorithm to count the colors.

Figure: A simplified example of a MapReduce flow for counting the colors in input texts.

As the name suggests, the process roughly boils down to two big phases:

- Mapping phase: the documents are split up into key-value pairs. Until we reduce, we can have many duplicates.
- Reduce phase: not unlike a SQL "group by", the different unique occurrences are grouped together, and depending on the reducing function a different result can be created. Here we wanted a count per color, so that's what the reduce function returns.

In reality it's a bit more complicated than this, though.
Figure: An example of a MapReduce flow for counting the colors in input texts: input files are read, each line is passed to a mapper, every key gets mapped to a value of 1, the keys get sorted (or shuffled), the reduce phase aggregates the key-value pairs, and the output is collected to a file.

The whole process is described in the following six steps, which are also depicted in the figure; a small code sketch of the same flow follows the list.

1. Reading the input files.
2. Passing each line to a mapper job.
3. The mapper job parses the colors (keys) out of the file and outputs a file for each color with the number of times it has been encountered (the value). More technically said, it maps a key (the color) to a value (the number of occurrences).
4. The keys get shuffled and sorted to facilitate the aggregation.
5. The reduce phase sums the number of occurrences per color and outputs one file per key with the total number of occurrences for each color.
6. The keys are collected in an output file.
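To make those phases concrete, here is a tiny single-machine sketch of the same map, shuffle, and reduce idea in plain Python, using two made-up order files; it only mimics the phases, whereas a real job would be distributed by Hadoop.

    from itertools import groupby

    order_files = ["green,blue,blue,orange",      # hypothetical order file 1
                   "green,red,blue,orange"]       # hypothetical order file 2

    # Map: emit a (color, 1) pair for every color we encounter.
    mapped = [(color, 1) for line in order_files for color in line.split(",")]

    # Shuffle/sort: bring identical keys together.
    mapped.sort(key=lambda kv: kv[0])

    # Reduce: sum the values per key.
    counts = {color: sum(value for _, value in group)
              for color, group in groupby(mapped, key=lambda kv: kv[0])}
    print(counts)    # {'blue': 3, 'green': 2, 'orange': 2, 'red': 1}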
Note: while Hadoop makes working with big data easy, setting up a good working cluster still isn't trivial, though cluster managers such as Apache Mesos do ease the burden. In reality many (mid-sized) companies lack the competence to maintain a healthy Hadoop installation. This is why we'll work with the Hortonworks Sandbox, a pre-installed and configured Hadoop ecosystem; installation instructions were given in the setup section mentioned earlier.

Now, keeping the workings of Hadoop in mind, let's look at Spark.

Spark: replacing MapReduce for better performance
Data scientists often do interactive analysis and rely on algorithms that are inherently iterative; it can take a while until an algorithm converges to a solution. As this is a weak point of the MapReduce framework, we'll introduce the Spark framework to overcome it. Spark improves the performance on such tasks by an order of magnitude.

What is Spark?
Spark is a cluster computing framework similar to MapReduce. Spark, however, doesn't handle the storage of files on the (distributed) file system itself, nor does it handle the resource management: for this it relies on systems such as the Hadoop file system, YARN, or Apache Mesos. Hadoop and Spark are thus complementary systems. For testing and development you can even run Spark on your local system.

How does Spark solve the problems of MapReduce?
While we oversimplify things a bit for the sake of clarity, Spark creates a kind of shared RAM memory between the computers of your cluster. This allows the different workers to share variables (and their state) and thus eliminates the need to write the intermediate results to disk. More technically, and more correctly if you're into that: Spark uses resilient distributed datasets (RDDs), which are a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant way. Because it's an in-memory system, it avoids costly disk operations.
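To get a feel for what keeping data in memory between steps looks like in code, here is a minimal PySpark sketch; it assumes a SparkContext named sc, as the pyspark shell used later in this chapter provides automatically.

    rdd = sc.parallelize(range(1000000))        # a distributed data set (RDD)
    squares = rdd.map(lambda x: x * x).cache()  # cache() keeps the result in cluster memory

    # Both actions reuse the cached RDD instead of recomputing it or touching disk.
    print(squares.sum())
    print(squares.take(5))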
The different components of the Spark ecosystem
Spark core provides a NoSQL environment well suited for interactive, exploratory analysis. Spark can be run in batch and interactive mode and supports Python. It has four other large components, as listed below and depicted in the figure:

- Spark Streaming is a tool for real-time analysis.
- Spark SQL provides a SQL interface to work with Spark.
- MLlib is a tool for machine learning inside the Spark framework.
- GraphX is a graph database for Spark; we'll go deeper into graph databases in a later chapter.

Figure: The Spark framework when used in combination with the Hadoop framework: Spark Streaming, Spark SQL, MLlib, and GraphX sit next to Hive and MapReduce on top of YARN and HDFS, surrounded by the wider ecosystem (Oozie, Sqoop, ZooKeeper, Flume, Pig, Mahout, HCatalog, HBase, Ambari, Ranger).

Now let's dip our toes into loan data using Hadoop, Hive, and Spark.

Case study: assessing risk when loaning money
Enriched with a basic understanding of Hadoop and Spark, we're now ready to get our hands dirty with big data. The goal of this case study is to have a first experience with the technologies we introduced earlier in this chapter and to see that, for a large part, you can (but don't have to) work similarly as with other technologies. Note: the portion of the data used here isn't that big, because that would require serious bandwidth to collect it and multiple nodes to follow along with the example.

What we'll use:
- The Hortonworks Sandbox on a virtual machine. If you haven't downloaded and imported this into VM software such as VirtualBox, please go back to the section where this is explained. A recent version of the sandbox was used when writing this chapter.
- The Python libraries pandas and pywebhdfs. They don't need to be installed in your local virtual environment this time around; we need them directly on the Hortonworks Sandbox.

We therefore need to fire up the Hortonworks Sandbox (on VirtualBox, for instance) and make a few preparations in the sandbox command line. There are several things you still need to do for this all to work, so connect to the command line. You can do this using a program like PuTTY; if you're unfamiliar with it, PuTTY offers a command line interface to servers and can be downloaded freely from the PuTTY download page.

Figure: Connecting to the Hortonworks Sandbox using PuTTY.
The default user and password are (at the time of writing) "root" and "hadoop", respectively; you'll need to change this password at the first login, though. Once connected, issue the following commands:

- yum -y install python-pip: this installs pip, a Python package manager.
- pip install git+… : at the time of writing there was a problem with the pywebhdfs library and we fixed that in a fork. Hopefully you won't require this anymore when you read this; the problem has been signaled and should be resolved by the maintainers of the package.
- pip install pandas: to install pandas. This usually takes a while because of the dependencies.

An .ipynb file is available for you to open in Jupyter or (the older) IPython and follow along with the code in this chapter. Setup instructions for the Hortonworks Sandbox are repeated there; make sure to run the code directly on the sandbox.

Now, with the preparatory business out of the way, let's look at what we'll do in this exercise. We'll go through several more of the data science process steps:

- The research goal. This consists of two parts: providing our manager with a dashboard, and preparing data for other people to create their own dashboards.
- Data retrieval: downloading the data from the Lending Club website and putting it on the Hadoop file system of the Hortonworks Sandbox.
- Data preparation: transforming the data with Spark and storing the prepared data in Hive.
- Exploration and report creation: visualizing the data with Qlik Sense.

We have no model building in this case study, but you'll have the infrastructure in place to do it yourself if you want to. For instance, you could use Spark machine learning to try to predict when someone will default on his debt. It's time to meet the Lending Club.

The research goal
The Lending Club is an organization that connects people in need of a loan with people who have money to invest. Your boss also has money to invest and wants information before throwing a substantial sum on the table. To achieve this, you'll create a report for him that gives him insight into the average rating, risks, and return for lending money to a certain person.
By going through this process you make the data accessible in a dashboard tool, thus enabling other people to explore it as well. In a sense this is the secondary goal of this case: opening up the data for self-service BI. Self-service business intelligence is often applied in data-driven organizations that don't have analysts to spare; anyone in the organization can do the simple slicing and dicing themselves while leaving the more complicated analytics to the data scientist. We can do this case study because the Lending Club makes anonymized data available about the existing loans.

By the end of this case study you'll have created a report similar to the figure below.

Figure: The end result of this exercise is an explanatory dashboard (KPIs, bar charts, a selection filter, and a pivot table) to compare a lending opportunity to similar opportunities.

First things first, however: let's get ourselves data.

Data retrieval
It's time to work with the Hadoop file system (HDFS). First we'll send commands through the command line, and then through the Python scripting language with the help of the pywebhdfs package.

The Hadoop file system is similar to a normal file system, except that the files and folders are stored over multiple servers and you don't know the physical address of each file. This is not unfamiliar if you've worked with tools such as Dropbox or Google Drive: the files you put on these drives are stored somewhere on a server without you knowing exactly which one.
As on a normal file system, you can create, rename, and delete files and folders.

Using the command line to interact with the Hadoop file system
Let's first retrieve the currently present list of directories and files in the Hadoop root folder. Type the command hadoop fs -ls / in PuTTY to achieve this; make sure you turn on your virtual machine with the Hortonworks Sandbox before attempting the connection, and connect as shown before in the PuTTY figure. You can also add arguments such as hadoop fs -ls -R / to get a recursive list of all the files and subdirectories (figure: listed output from the Hadoop list command on the Hadoop root folder).

We'll now create a new directory "chapter5" on HDFS to work with during this chapter. The following commands create the new directory and give everybody access to the folder:

    sudo -u hdfs hadoop fs -mkdir /chapter5
    sudo -u hdfs hadoop fs -chmod 777 /chapter5

You probably noticed a pattern here: the Hadoop commands are very similar to our local file system commands (POSIX style) but start with hadoop fs and have a dash before each command. The table below gives an overview of popular file system commands on Hadoop and their local file system counterparts.

Table: List of common Hadoop file system commands

    Goal                                            Hadoop file system command    Local file system command
    Get a list of files and directories             hadoop fs -ls URI             ls URI
    Create a directory                              hadoop fs -mkdir URI          mkdir URI
    Remove a directory                              hadoop fs -rm -r URI          rm -r URI
Table: List of common Hadoop file system commands (continued)

    Goal                                            Hadoop file system command    Local file system command
    Change the permission of files                  hadoop fs -chmod MODE URI     chmod MODE URI
    Move or rename a file                           hadoop fs -mv OLDURI NEWURI   mv OLDURI NEWURI

There are two special commands you'll use often:

- Upload files from the local file system to the distributed file system: hadoop fs -put LOCALURI REMOTEURI.
- Download a file from the distributed file system to the local file system: hadoop fs -get REMOTEURI.

Let's clarify this with an example. Suppose you have a .csv file on the Linux virtual machine from which you connect to the Linux Hadoop cluster, and you want to copy it to the cluster's HDFS. Use the command hadoop fs -put mycsv.csv /data.

Using PuTTY we can start a Python session on the Hortonworks Sandbox to retrieve our data with a Python script. Issue the "pyspark" command in the command line to start the session; if all is well you should see the welcome screen (figure: the welcome screen of Spark for interactive use with Python). Now we use Python code to fetch the data for us, as shown in the following listing.

Listing: Drawing in the Lending Club loan data

    import requests
    import zipfile
    import StringIO

    # Download the data from the Lending Club. This is https so it should verify,
    # but we won't bother (verify=False).
    source = requests.get("https://resources.lendingclub.com/loanstats.csv.zip", verify=False)
    stringio = StringIO.StringIO(source.content)     # create a virtual file
    unzipped = zipfile.ZipFile(stringio)             # unzip the data
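The listing above is written for Python 2 (it relies on the StringIO module). If you follow along with Python 3 instead, a roughly equivalent sketch, under the assumption that the same URL is still available, would be:

    import io
    import zipfile
    import requests

    source = requests.get("https://resources.lendingclub.com/loanstats.csv.zip", verify=False)
    unzipped = zipfile.ZipFile(io.BytesIO(source.content))   # a zip archive is bytes, so use BytesIO
    print(unzipped.namelist())                               # inspect what the archive contains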
We download the file "loanstats.csv.zip" from the Lending Club's website at https://resources.lendingclub.com/loanstats.csv.zip and unzip it. We use methods from the requests, zipfile, and StringIO Python packages to respectively download the data, create a virtual file, and unzip it. This is only a single file; if you want all their data you could create a loop, but for demonstration purposes this will do.

As we mentioned before, an important part of this case study is data preparation with big data technologies. Before we can do that, however, we need to put the data on the Hadoop file system. pywebhdfs is a package that allows you to interact with the Hadoop file system from Python: it translates your commands and passes them as REST calls to the WebHDFS interface. This is useful because you can use your favorite scripting language to automate tasks, as shown in the following listing.

Listing: Storing data on Hadoop

    import pandas as pd
    from pywebhdfs.webhdfs import PyWebHdfsClient

    # Preliminary data cleaning with pandas: remove the top row and the bottom rows
    # because they're useless (opening the original file will show you this),
    # then store the subselection locally so we can transfer it to the Hadoop file system.
    subselection_csv = pd.read_csv(unzipped.open('loanstats.csv'),
                                   skiprows=1, skipfooter=2, engine='python')
    stored_csv = subselection_csv.to_csv('./stored_csv.csv')

    # Connect to the Hadoop sandbox, create the folder "chapter5" on the Hadoop
    # file system, and create the csv file there from the locally stored copy.
    hdfs = PyWebHdfsClient(user_name="hdfs", port='50070', host="sandbox")
    hdfs.make_dir('chapter5')
    with open('./stored_csv.csv') as file_data:
        hdfs.create_file('chapter5/loanstats.csv', file_data, overwrite=True)

We had already downloaded and unzipped the file in the previous listing; here we make a sub-selection of the data using pandas and store it locally, then create a directory on Hadoop and transfer the local file to Hadoop. The downloaded data is in csv format and, because it's rather small, we can use the pandas library to remove the first line and the last two lines from the file; these contain comments and would only make working with this file cumbersome in a Hadoop environment. The code parses the file into memory, drops those lines, and saves the data to the local file system for later use and easy inspection.

Before moving on, we can check our file using the following line of code:

    print hdfs.get_file_dir_status('chapter5/loanstats.csv')

The pyspark console should tell us our file is safe and well on the Hadoop system (figure: retrieving the file status on Hadoop via the pyspark console).
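pywebhdfs is only a thin wrapper around the WebHDFS REST API, so the same check can be done with a plain HTTP call. A sketch, assuming the sandbox exposes WebHDFS on the usual port 50070:

    import requests

    resp = requests.get("http://sandbox:50070/webhdfs/v1/chapter5/loanstats.csv",
                        params={"op": "GETFILESTATUS", "user.name": "hdfs"})
    print(resp.json())    # file length, owner, modification time, and so on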
With the file ready and waiting for us on Hadoop, we can move on to data preparation using Spark, because the data isn't clean enough to store directly in Hive.

Data preparation
Now that we've downloaded the data for analysis, we'll use Spark to clean it before we store it in Hive.

Data preparation in Spark
Cleaning data is often an interactive exercise: you spot a problem and fix it, and you'll likely do this a couple of times before you have clean and crisp data. An example of dirty data would be a string such as "UsA", which is improperly capitalized. At this point we no longer work in jobs.py but use the pyspark command line interface to interact directly with Spark. Spark is well suited for this type of interactive analysis because it doesn't need to save the data after each step and has a much better model than Hadoop for sharing data between servers (a kind of distributed memory).

The transformation consists of four parts:
1. Start up pyspark (it should still be open from the previous section) and load the Spark and Hive contexts.
2. Read and parse the .csv file.
3. Split the header line from the data.
4. Clean the data.

Okay, on to business. The following listing shows the code implementation in the pyspark console.

Listing: Connecting to Apache Spark

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    # In the pyspark session the Spark context is automatically present as sc;
    # in other cases (a Zeppelin notebook, for example) you'd create it explicitly.
    # sc = SparkContext()
    sqlContext = HiveContext(sc)                      # create the Hive context

    data = sc.textFile("/chapter5/loanstats.csv")     # load the data set from the Hadoop directory
Listing: Parsing and cleaning the loan data

    parts = data.map(lambda r: r.split(','))              # split each line on the comma delimiter
    firstline = parts.first()                              # grab the first line (the variable names)
    datalines = parts.filter(lambda x: x != firstline)     # keep all lines but the header

    def cleans(row):
        # The interest-rate column contains formatted numbers such as "10.5%";
        # strip the % sign and rescale the value.
        row[7] = str(float(row[7][:-1]) / 100)
        # Encode everything as utf8, replace underscores with spaces, and lowercase everything.
        return [x.encode('utf8').replace("_", " ").lower() for x in row]

    datalines = datalines.map(lambda x: cleans(x))          # execute the cleaning line by line

The cleaning function uses the power of Spark to clean the data; its input is a single line of data. Let's dive a little further into the details of each step.

Step 1: Starting up Spark in interactive mode and loading the contexts
The Spark context import isn't required in the pyspark console, because a context is readily available as the variable sc (you might have noticed this is also mentioned on the Spark welcome screen). We then load a Hive context to enable us to work interactively with Hive. If you work interactively with Spark, the Spark and Hive contexts are loaded automatically, but if you want to use Spark in batch mode you need to load them manually and submit the code with the spark-submit filename.py command on the Hortonworks Sandbox command line.

    from pyspark import SparkContext
    from pyspark.sql import HiveContext
    sc = SparkContext()
    sqlContext = HiveContext(sc)

With the environment set up, we're ready to start parsing the .csv file.
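For reference, a batch version of this setup has to create the contexts itself before the script can be handed to spark-submit. A minimal sketch (the file name and app name below are illustrative, not from the original text):

    # illustrative_job.py -- run on the sandbox with: spark-submit illustrative_job.py
    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="loan-data-preparation")   # created explicitly outside the shell
    sqlContext = HiveContext(sc)

    data = sc.textFile("/chapter5/loanstats.csv")
    print(data.count())                                   # a trivial action to confirm the job ran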
Step 2: Reading and parsing the .csv file
Next we read the file from the Hadoop file system and split it at every comma we encounter. The first line of code reads the .csv file from the Hadoop file system; the second line splits every line when it encounters a comma. Our csv parser is naive by design, because we're learning about Spark, but you could also use the csv package to help you parse a line more correctly.

    data = sc.textFile("/chapter5/loanstats.csv")
    parts = data.map(lambda r: r.split(','))

Notice how similar this is to a functional programming approach. For those who've never encountered it, you can naively read lambda r: r.split(',') as "for every input r (a row in this case), split this input when it encounters a comma". "For every input" means "for every row", but you can also read it as "split every row by comma". This functional-like syntax is one of my favorite characteristics of Spark.

Step 3: Split the header line from the data
To separate the header from the data, we read in the first line and retain every line that's not similar to the header line:

    firstline = parts.first()
    datalines = parts.filter(lambda x: x != firstline)

Following best practices in big data we wouldn't have to do this step, because the first line would already be stored in a separate file. In reality, csv files often do contain a header line, and you'll need to perform a similar operation before you can start cleaning the data.

Step 4: Clean the data
In this step we perform basic cleaning to enhance the data quality; this allows us to build a better report. After the second step our data consists of arrays, so we treat every input to the lambda function as an array and return an array. To ease this task we build a helper function that cleans a row for us. The cleaning consists of reformatting an interest-rate input such as "10.5%" to a plain number, encoding every string as utf8, replacing underscores with spaces, and lowercasing all the strings. The last line calls our helper function for every line of the data:

    def cleans(row):
        row[7] = str(float(row[7][:-1]) / 100)
        return [x.encode('utf8').replace("_", " ").lower() for x in row]

    datalines = datalines.map(lambda x: cleans(x))

Our data is now prepared for the report, so we need to make it available for our reporting tools. Hive is well suited for this, because many reporting tools can connect to it. Let's look at how to accomplish that.

Save the data in Hive
To store data in Hive we need to complete two steps:
1. Create and register metadata.
2. Execute SQL statements to save the data in Hive.

In this section we'll once again execute the next piece of code in our beloved pyspark shell, as shown in the following listing.
Listing: Storing data in Hive

    from pyspark.sql.types import *

    # Create the metadata (data schema). The Spark SQL StructField function represents a
    # field in a StructType and has three parts: name (a string), dataType (a DataType),
    # and nullable (a boolean specifying whether the field can contain None values).
    fields = [StructField(field_name, StringType(), True) for field_name in firstline]
    schema = StructType(fields)

    # Create a data frame from the data (datalines) and the data schema (schema),
    # and register it as a (temporary) table called "loans".
    schemaLoans = sqlContext.createDataFrame(datalines, schema)
    schemaLoans.registerTempTable("loans")

    # Drop the table in case it already exists, then summarize the loans by title
    # and store the summary in Hive.
    sqlContext.sql("drop table if exists loansByTitle")
    sql = '''create table loansByTitle stored as parquet as
             select title, count(1) as number
             from loans
             group by title
             order by number desc'''
    sqlContext.sql(sql)

    # Drop the table in case it already exists and store a subset of the raw data in Hive.
    sqlContext.sql('drop table if exists raw')
    sql = '''create table raw stored as parquet as
             select title, emp_title, grade, home_ownership, int_rate, recoveries,
                    collection_recovery_fee, loan_amnt, term
             from loans'''
    sqlContext.sql(sql)

Let's drill a bit deeper into each step for clarification.

Step 1: Create and register metadata
Many people prefer to use SQL when they work with data, and this is also possible with Spark; you can even read and store data in Hive directly, as we'll do. Before you can do that, however, you need to create metadata that contains a column name and column type for every column. The StructType represents rows as an array of StructFields. You then place the data in a DataFrame that's registered as a (temporary) table in Hive.
    from pyspark.sql.types import *
    fields = [StructField(field_name, StringType(), True) for field_name in firstline]
    schema = StructType(fields)
    schemaLoans = sqlContext.createDataFrame(datalines, schema)
    schemaLoans.registerTempTable("loans")

Step 2: Execute queries and store the table in Hive
With the metadata ready, we're able to insert the data into Hive. First we make a summary table that counts the number of loans per title, then we store a subset of the cleaned raw data in Hive for visualization in Qlik. Executing SQL-like commands is as easy as passing a string that contains the SQL command to the sqlContext.sql function. Notice that we aren't writing pure SQL, because we're communicating directly with Hive; Hive has its own SQL dialect called HiveQL. In our SQL, for instance, we immediately tell it to store the data as a Parquet file; Parquet is a popular big data file format.

    sqlContext.sql("drop table if exists loansByTitle")
    sql = '''create table loansByTitle stored as parquet as
             select title, count(1) as number
             from loans
             group by title
             order by number desc'''
    sqlContext.sql(sql)

    sqlContext.sql('drop table if exists raw')
    sql = '''create table raw stored as parquet as
             select title, emp_title, grade, home_ownership, int_rate, recoveries,
                    collection_recovery_fee, loan_amnt, term
             from loans'''
    sqlContext.sql(sql)

With the data stored in Hive, we can connect our visualization tools to it.
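Before switching to a BI tool, you can already check from the same pyspark session that the two Hive tables contain what you expect. A short, optional check, assuming the listing above ran without errors:

    # The summary table should list the most common loan titles first.
    sqlContext.sql("select * from loansByTitle limit 5").show()

    # The raw subset should come back with the columns we selected.
    sqlContext.sql("select title, grade, int_rate from raw limit 5").show()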
Data exploration and report building
We'll build an interactive report with Qlik Sense to show to our manager. Qlik Sense can be downloaded after subscribing on their website; when the download begins you'll be redirected to a page with several informational videos on how to install and work with Qlik Sense, and it's recommended to watch these first. We use the Hive ODBC connector to read data from Hive and make it available for Qlik. A tutorial on installing ODBC connectors in Qlik is available for the major operating systems on the Qlik website.

Note: in Windows this might not work out of the box. Once you install the ODBC connector, make sure to check your Windows ODBC manager: go to "System-DSN" and select the "Sample Hive Hortonworks DSN". Make sure your settings are correct (as shown in the figure below) or Qlik won't connect to the Hortonworks Sandbox.

Figure: Windows Hortonworks ODBC configuration.

Let's hope you didn't forget your sandbox password, because as you can see in the figure you need it again. Now open Qlik Sense; if installed in Windows you should have gotten the option to place a shortcut to the .exe on your desktop.
Qlik isn't freeware; it's a commercial product with a bait version for single customers, but it will suffice for now. In the last chapter we'll create a dashboard using free JavaScript libraries. Qlik can either take the data directly into memory or make a call to Hive every time; we've chosen the first method because it works faster.

This part has three steps:
1. Load data inside Qlik with an ODBC connection.
2. Create the report.
3. Explore the data.

Let's start with the first step: loading data into Qlik.

Loading data in Qlik
When you start Qlik Sense it shows a welcome screen with the existing reports, called apps (figure: the Qlik Sense welcome screen). To start a new app, click the Create new app button on the right of the screen; this opens a new dialog box in which you enter a name for our app (figure: the Create new app message box).
A confirmation box appears if the app is created successfully (figure: box confirming that the app was created successfully). Click the Open app button and a new screen will prompt you to add data to the application (figure: the start-adding-data screen that pops up when you open a new app).

Click the Add data button and choose ODBC as the data source. In the next screen, select User DSN, Hortonworks, and specify root as the username and hadoop as the password (or the new password you chose when logging in to the sandbox for the first time). The Hortonworks option doesn't show up by default; you need to install the HDP ODBC connector for this option to appear, as stated before. If you haven't succeeded in installing it at this point, clear instructions can be found in the blog post "How to connect Hortonworks Hive from QlikView with ODBC driver". Click the arrow pointing to the right to go to the next screen.
Figure: Choose ODBC as the data source in the Select data source screen.
Figure: Choose Hortonworks on the User DSN tab and specify the username and password.
Figure: The Hive interface gives a column overview of the raw data.

Choose the Hive data and "default" as the user schema. In the next screen, select raw as the table to load and select every column for import, then click the Load and finish button to complete this step. After this it takes a few seconds to load the data into Qlik (figure: confirmation that the data is loaded in Qlik).

Creating the report
Choose Edit the sheet to start building the report; this opens the report editor.
Figure: An editor screen for reports opens.

Adding a selection filter to the report
The first thing we'll add to the report is a selection box that shows us why each person wants a loan. To achieve this, click the Fields tab so you can drag and drop fields, then drag the title field from the left asset panel onto the report pane and give it a comfortable size and position (figure: drag the title field from the left Fields pane to the report pane).
Adding KPIs to the report
A KPI chart shows an aggregated number for the total population that's selected; numbers such as the average interest rate and the total number of customers are shown in this kind of chart (figure: an example of a KPI chart). Adding a KPI to the report takes four steps, as listed below and shown in the figure:

1. Choose a chart: choose KPI as the chart type and place it on the report screen; resize and position it to your liking.
2. Add a measure: click the Add measure button inside the chart and select int_rate.
3. Choose an aggregation method: Avg(int_rate).
4. Format the chart: on the right pane, fill in "average interest rate" as the label.

Figure: The four steps to add a KPI chart to a Qlik report: choose chart, add measure, choose an aggregation, format.
In total we'll add four KPI charts to our report, so you'll need to repeat these steps for the following KPIs:

- Average interest rate
- Total loan amount
- Average loan amount
- Total recoveries

Adding bar charts to the report
Next we'll add four bar charts to the report. These will show the different numbers for each risk grade: one bar chart will show the average interest rate per risk group, another the total loan amount per risk group, and so on (figure: an example of a bar chart). Adding a bar chart to the report takes five steps, as listed below and shown in the figure:

1. Choose a chart: choose bar chart as the chart type and place it on the report screen; resize and position it to your liking.
2. Add a measure: click the Add measure button inside the chart and select int_rate.
3. Choose an aggregation method: Avg(int_rate).
4. Add a dimension: click Add dimension and choose grade as the dimension.
5. Format the chart: on the right pane, fill in "average interest rate" as the label.
Figure: Adding a bar chart takes five steps: choose chart, add measure, choose an aggregation, add dimension, format.

Repeat this procedure for the following dimension and measure combinations:

- Average interest rate per grade
- Average loan amount per grade
- Total loan amount per grade
- Total recoveries per grade
Adding a cross table to the report
Suppose you want to know the average interest rate paid by directors in a certain risk group. In this case you want to get a measure (interest rate) for a combination of two dimensions (job title and risk grade). This can be achieved with a pivot table (figure: an example of a pivot table, showing the average interest rate paid per job title/risk grade combination). Adding a pivot table to the report takes six steps, as listed below and shown in the figure:

1. Choose a chart: choose pivot table as the chart type and place it on the report screen; resize and position it to your liking.
2. Add a measure: click the Add measure button inside the chart and select int_rate.
3. Choose an aggregation method: Avg(int_rate).
4. Add a row dimension: click Add dimension and choose emp_title as the dimension.
5. Add a column dimension: click Add data, choose Column, and select grade.
6. Format the chart: on the right pane, fill in "average interest rate" as the label.
Figure: Adding a pivot table takes six steps: choose chart, add measure, choose an aggregation, add row dimension, add column dimension, format.
Figure: The end result in edit mode.

After resizing and repositioning, you should achieve a result similar to the figure. Click the Done button on the left and you're ready to explore the data.

Exploring the data
The result is an interactive report that updates itself based on the selections you make. Why don't you try to look up the information for directors and compare them to artists? To achieve this, hit emp_title in the pivot table and type "director" in the search field; in the same manner we can look at the artists. Another interesting insight comes from comparing the ratings for home-buying purposes with those for debt consolidation purposes.

We finally did it: we created the report our manager craves, and in the process we opened the door for other people to create their own reports using this data. An interesting next step for you to ponder would be to use this setup to find the people likely to default on their debt; for this you could use the Spark machine learning capabilities, driven by online algorithms like the ones demonstrated earlier.
Figure: When we select directors, we can see the average interest rate they pay for a loan.
Figure: When we select artists, we see the average interest rate they pay for a loan.
In this chapter we got a hands-on introduction to the Hadoop and Spark frameworks. We covered a lot of ground, but be honest: Python makes working with big data technologies dead easy. In the next chapter we'll dig deeper into the world of NoSQL databases and come into contact with more big data technologies.

Summary
In this chapter you learned that:

- Hadoop is a framework that enables you to store files and distribute calculations among many computers.
- Hadoop hides all the complexities of working with a cluster of computers for you.
- An ecosystem of applications surrounds Hadoop and Spark, ranging from databases to access control.
- Spark adds a shared memory structure to the Hadoop framework that's better suited for data science work.
- In the case study we used PySpark (a Python library) to communicate with Hive and Spark from Python. We used the pywebhdfs Python library to work with the Hadoop file system, but you could do as well using the OS command line.
- It's easy to connect a BI tool such as Qlik to Hadoop.
Join the NoSQL movement

This chapter covers
- Understanding NoSQL databases and why they're used today
- Identifying the differences between NoSQL and relational databases
- Defining the ACID principle and how it relates to the NoSQL BASE principle
- Learning why the CAP theorem is important for a multi-node database setup
- Applying the data science process to a project with the NoSQL database Elasticsearch

This chapter is divided into two parts: a theoretical start and a practical finish. In the first part we'll look into NoSQL databases in general and answer these questions: Why do they exist? Why not until recently? What types are there, and why should you care? In part two we'll tackle a real-life problem (disease diagnostics and profiling) using freely available data, Python, and a NoSQL database.
No doubt you've heard about NoSQL databases and how they're used religiously by many high-tech companies. But what are NoSQL databases, and what makes them so different from the relational or SQL databases you're used to? NoSQL is short for Not Only Structured Query Language; although it's true that NoSQL databases can allow you to query them with SQL, you don't have to focus on the actual name. Much debate has already raged over the name and whether this group of new databases should even have a collective name at all. Rather, let's look at what they represent as opposed to relational database management systems (RDBMS).

Traditional databases reside on a single computer or server. This used to be fine as long as your data didn't outgrow your server, but it hasn't been the case for many companies for a long time now. With the growth of the internet, companies such as Google and Amazon felt they were held back by these single-node databases and looked for alternatives. Numerous companies use single-node NoSQL databases such as MongoDB because they want the flexible schema or the ability to hierarchically aggregate data. Here are several early examples:

- Google's first NoSQL solution was Google BigTable, which marked the start of the columnar databases.
- Amazon came up with Dynamo, a key-value store.
- Two more database types emerged in the quest for partitioning: the document store and the graph database.

We'll go into detail on each of the four types later in the chapter. Please note that, although size was an important factor, these databases didn't originate solely from the need to handle larger volumes of data. Every V of big data has influence (volume, variety, velocity, and sometimes veracity). Graph databases, for instance, can handle network data; graph database enthusiasts even claim that everything can be seen as a network. For example, how do you prepare dinner? With ingredients. These ingredients are brought together to form the dish and can be used along with other ingredients to form other dishes. Seen from this point of view, ingredients and recipes are part of a network. But recipes and ingredients could also be stored in your relational database or a document store; it's all in how you look at the problem. Herein lies the strength of NoSQL: the ability to look at a problem from a different angle, shaping the data structure to the use case. As a data scientist, your job is to find the best answer to any problem. Although sometimes this is still easier to attain using an RDBMS, often a particular NoSQL database offers a better approach.

Are relational databases doomed to disappear in companies with big data because of the need for partitioning? No: NewSQL platforms (not to be confused with NoSQL) are the RDBMS answer to the need for a cluster setup. NewSQL databases follow the relational model but are capable of being divided into a distributed cluster like NoSQL databases.
databases. It's not the end of relational databases, and certainly not the end of SQL, as platforms like Hive translate SQL into MapReduce jobs for Hadoop. Besides, not every company needs big data; many do fine with small databases, and traditional relational databases are perfect for that.

If you look at the big data mind map shown in the following figure, you'll see four types of NoSQL databases.

Figure: NoSQL and NewSQL databases.

These four types are the document store, key-value store, graph database, and column database. The mind map also includes the NewSQL partitioned relational databases. In the future this big split between NoSQL and NewSQL will become obsolete, because every database type will have its own focus while combining elements from both NoSQL and NewSQL databases. The lines are slowly blurring as RDBMS types get NoSQL features, such as the column-oriented indexing seen in columnar databases. But for now it's a good way to show that the old relational databases have moved past their single-node setup, while other database types are emerging under the NoSQL denominator. Let's look at what NoSQL brings to the table.
Introduction to NoSQL

As you've read, the goal of NoSQL databases isn't only to offer a way to partition databases successfully over multiple nodes, but also to present fundamentally different ways to model the data at hand to fit its structure to its use case, and not to how a relational database requires it to be modeled.

To help you understand NoSQL, we're going to start by looking at the core ACID principles of single-server relational databases and show how NoSQL databases rewrite them into the BASE principles so they'll work far better in a distributed fashion. We'll also look at the CAP theorem, which describes the main problem with distributing databases across multiple nodes and how ACID and BASE databases approach it.

ACID: the core principle of relational databases

The main aspects of a traditional relational database can be summarized by the concept ACID:

- Atomicity: the "all or nothing" principle. If a record is put into a database, it's put in completely or not at all. If, for instance, a power failure occurs in the middle of a database write action, you wouldn't end up with half a record; it wouldn't be there at all.
- Consistency: this important principle maintains the integrity of the data. No entry that makes it into the database will ever be in conflict with predefined rules, such as lacking a required field or a field being numeric instead of text.
- Isolation: when something is changed in the database, nothing can happen on this exact same data at exactly the same moment. Instead, the actions happen in serial with other changes. Isolation is a scale going from low isolation to high isolation, and on this scale traditional databases are on the "high isolation" end. An example of low isolation would be Google Docs: multiple people can write to a document at the exact same time and see each other's changes happening instantly. A traditional Word document, on the other end of the spectrum, has high isolation: it's locked for editing by the first user to open it. The second person opening the document can view its last saved version but is unable to see unsaved changes or edit the document without first saving it as a copy. So once someone has it opened, the most up-to-date version is completely isolated from anyone but the editor who locked the document.
- Durability: if data has entered the database, it should survive permanently. Physical damage to the hard discs will destroy records, but power outages and software crashes should not.

ACID applies to all relational databases and certain NoSQL databases, such as the graph database Neo4j. We'll further discuss graph databases later in this chapter and in a later chapter.
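Atomicity is easy to see in action on a single machine. The following minimal sketch is not part of this chapter's case study; it uses Python's built-in sqlite3 module purely as an illustration. Two inserts share one transaction, the second one violates a constraint, and the whole transaction is rolled back, so neither row ends up stored: all or nothing.

import sqlite3

conn = sqlite3.connect(":memory:")       # throwaway in-memory database, illustration only
conn.execute("CREATE TABLE person (name TEXT NOT NULL, birthday TEXT NOT NULL)")
conn.commit()

try:
    with conn:                           # one transaction: committed on success, rolled back on error
        conn.execute("INSERT INTO person VALUES (?, ?)",
                     ("Jos the Boss", "11-12-1985"))
        conn.execute("INSERT INTO person VALUES (?, ?)",
                     ("Fritz von Braun", None))   # violates NOT NULL, raises IntegrityError
except sqlite3.IntegrityError as e:
    print("transaction rolled back: %s" % e)

# all or nothing: the first, valid insert is gone as well
print(conn.execute("SELECT count(*) FROM person").fetchone()[0])   # prints 0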
For most other NoSQL databases another principle applies: BASE. To understand BASE and why it applies to most NoSQL databases, we need to look at the CAP theorem.

CAP theorem: the problem with DBs on many nodes

Once a database gets spread out over different servers, it's difficult to follow the ACID principle because of the consistency ACID promises; the CAP theorem points out why this becomes problematic. The CAP theorem states that a database can be any two of the following things, but never all three:

- Partition tolerant: the database can handle a network partition or network failure.
- Available: as long as the node you're connecting to is up and running and you can connect to it, the node will respond, even if the connection between the different database nodes is lost.
- Consistent: no matter which node you connect to, you'll always see the exact same data.

For a single-node database it's easy to see how it's always available and consistent:

- Available: as long as the node is up, it's available. That's all the CAP availability promises.
- Consistent: there's no second node, so nothing can be inconsistent.

Things get interesting once the database gets partitioned. Then you need to make a choice between availability and consistency, as shown in the following figure.

Figure: CAP theorem: when partitioning your database, you need to choose between availability and consistency.

Let's take the example of an online shop with a server in Europe and a server in the United States, with a single distribution center. A German named Fritz and an American named Freddy are shopping at the same time on that same online shop. They see an item and only one is still in stock: a bronze, octopus-shaped coffee table. Disaster strikes, and communication between the two local servers is temporarily down. If you were the owner of the shop, you'd have two options:

- Availability: you allow the servers to keep on serving customers, and you sort out everything afterward.
- Consistency: you put all sales on hold until communication is reestablished.

In the first case, Fritz and Freddy will both buy the octopus coffee table, because the last-known stock number for both nodes is "one" and both nodes are allowed to sell it, as shown in the next figure. If the coffee table is hard to come by, you'll have to inform either Fritz or Freddy that he won't receive his table on the promised delivery date or, even worse, he will
never receive it. As a good businessperson, you might compensate one of them with a discount coupon for a later purchase, and everything might be okay after all.

Figure: CAP theorem: if nodes get disconnected, you can choose to remain available, but the data could become inconsistent. Both Fritz and Freddy order the last available item.

The second option involves putting the incoming requests on hold temporarily, as shown in the following figure. This might be fair to both Fritz and Freddy if after five minutes the web shop is open for business again, but then you might lose both sales and probably many more. Web shops tend to choose availability over consistency, but it's not the optimal choice

Figure: CAP theorem: if nodes get disconnected, you can choose to remain consistent by stopping access to the databases until connections are restored. Orders are put on hold.
in all cases. Take a popular festival such as Tomorrowland. Festivals tend to have a maximum allowed capacity for safety reasons. If you sell more tickets than you're allowed because your servers kept on selling during a node communication failure, you could sell double the number allowed by the time communications are reestablished. In such a case it might be wiser to go for consistency and turn off the nodes temporarily. A festival such as Tomorrowland is sold out in the first couple of hours anyway, so a little downtime won't hurt as much as having to withdraw thousands of entry tickets.

The BASE principles of NoSQL databases

RDBMS follows the ACID principles; NoSQL databases that don't follow ACID, such as the document stores and key-value stores, follow BASE. BASE is a set of much softer database promises:

- Basically available: availability is guaranteed in the CAP sense. Taking the web shop example, if a node is up and running, you can keep on shopping. Depending on how things are set up, nodes can take over from other nodes. Elasticsearch, for example, is a NoSQL document-type search engine that divides and replicates its data in such a way that node failure doesn't necessarily mean service failure, via the process of sharding. Each shard can be seen as an individual database server instance, but it is also capable of communicating with the other shards to divide the workload as efficiently as possible (see the following figure). Several shards can be present on a single node. If each shard has a replica on another node, node failure is easily remedied by re-dividing the work among the remaining nodes.

Figure: Sharding: each shard can function as a self-sufficient database, but they also work together as a whole. The example represents two nodes, each containing four shards: two main shards and two replicas. Failure of one node is backed up by the other.

- Soft state: the state of a system might change over time. This corresponds to the eventual consistency principle: the system might have to change to make the data
consistent again. In one node the data might say "A" and in the other it might say "B" because it was adapted later. At conflict resolution, when the network is back online, it's possible the "A" in the first node is replaced by "B". Even though no one did anything to explicitly change "A" into "B", it will take on this value as it becomes consistent with the other node.

- Eventual consistency: the database will become consistent over time. In the web shop example, the table is sold twice, which results in data inconsistency. Once the connection between the individual nodes is reestablished, they'll communicate and decide how to resolve it. This conflict can be resolved, for example, on a first-come, first-served basis or by preferring the customer who would incur the lowest transport cost. Databases come with default behavior, but given that there's an actual business decision to make here, this behavior can be overwritten. Even if the connection is up and running, latencies might cause nodes to become inconsistent. Often, products are kept in an online shopping basket, but putting an item in a basket doesn't lock it for other users. If Fritz beats Freddy to the checkout button, there'll be a problem once Freddy goes to check out. This can easily be explained to the customer: he was too late. But what if both press the checkout button at the exact same millisecond and both sales happen?

ACID versus BASE

The BASE principles are somewhat contrived to fit ACID and BASE from chemistry: an acid is a fluid with a low pH value; a base is the opposite and has a high pH value. We won't go into the chemistry details here, but the following figure shows a mnemonic for those familiar with the chemistry equivalents of acid and base.

Figure: ACID versus BASE: traditional relational databases versus most NoSQL databases. The names are derived from the chemistry concept of the pH scale: a pH value below 7 is acidic, higher than 7 is a base. On this scale, your average surface water fluctuates between 6.5 and 8.5.
NoSQL database types

As you saw earlier, there are four big NoSQL types: key-value store, document store, column-oriented database, and graph database. Each type solves a problem that can't be solved with relational databases. Actual implementations are often combinations of these. OrientDB, for example, is a multi-model database combining NoSQL types: it's a graph database where each node is a document.

Before going into the different NoSQL databases, let's look at relational databases so you have something to compare them to. In data modeling, many approaches are possible. Relational databases generally strive toward normalization: making sure every piece of data is stored only once. Normalization marks their structural setup. If, for instance, you want to store data about a person and their hobbies, you can do so with two tables: one about the person and one about their hobbies. As you can see in the following figure, an additional table is necessary to link hobbies to persons because of their many-to-many relationship: a person can have multiple hobbies, and a hobby can have many persons practicing it. A full-scale relational database can be made up of many entities and linking tables.

Figure: Relational databases strive toward normalization (making sure every piece of data is stored only once). Each table has unique identifiers (primary keys) that are used to model the relationship between the entities (tables), hence the term relational. The person info table represents person-specific information, the hobby info table represents hobby-specific information, and the person-hobby linking table is necessary because of the many-to-many relationship between hobbies and persons.
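To make the normalized layout concrete, here's a minimal sketch, again using Python's built-in sqlite3 module rather than anything from the case study; the table and column names are illustrative, and the rows reuse the people and hobbies from the figure. The linking table is what resolves the many-to-many relationship, and a join puts the pieces back together.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT, birthday TEXT);
    CREATE TABLE hobby  (hobby_id  INTEGER PRIMARY KEY, hobby_name TEXT, hobby_description TEXT);
    -- the linking table resolves the many-to-many relationship between persons and hobbies
    CREATE TABLE person_hobby (person_id INTEGER REFERENCES person, hobby_id INTEGER REFERENCES hobby);
""")
conn.execute("INSERT INTO person VALUES (1, 'Jos the Boss', '11-12-1985')")
conn.execute("INSERT INTO hobby VALUES (1, 'archery', 'shooting arrows from a bow')")
conn.execute("INSERT INTO hobby VALUES (2, 'conquering the world', 'looking for trouble with neighboring countries')")
conn.execute("INSERT INTO person_hobby VALUES (1, 1), (1, 2)")   # one person linked to two hobbies

# joining the three tables lists every person with their hobbies
for name, hobby in conn.execute("""
        SELECT p.name, h.hobby_name
        FROM person p
        JOIN person_hobby ph ON ph.person_id = p.person_id
        JOIN hobby h ON h.hobby_id = ph.hobby_id"""):
    print("%s -> %s" % (name, hobby))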
Now that you have something to compare NoSQL to, let's look at the different types.

Column-oriented database

Traditional relational databases are row-oriented, with each row having a row ID and each field within the row stored together in a table. Let's say, for example's sake, that no extra data about hobbies is stored and you have only a single table to describe people, as shown in the following figure. Notice how in this scenario you have a slight denormalization, because hobbies could be repeated. If the hobby information is a nice extra but not essential to your use case, adding it as a list within the hobbies column is an acceptable approach. But if the information isn't important enough for a separate table, should it be stored at all?

Figure: Row-oriented database layout. Every entity (person) is represented by a single row, spread over multiple columns.

Every time you look up something in a row-oriented database, every row is scanned, regardless of which columns you require. Let's say you only want a list of birthdays in September. The database will scan the table from top to bottom and left to right, as shown in the following figure, eventually returning the list of birthdays.

Figure: Row-oriented lookup: from top to bottom, and for every entry all columns are taken into memory.

Indexing the data on certain columns can significantly improve lookup speed, but indexing every column brings extra overhead, and the database is still scanning all the columns.
Column databases store each column separately, allowing for quicker scans when only a small number of columns is involved; see the following figure.

Figure: Column-oriented databases store each column separately with the related row numbers. Every entity (person) is divided over multiple tables.

This layout looks very similar to a row-oriented database with an index on every column. A database index is a data structure that allows for quick lookups on data at the cost of storage space and additional writes (index updates). An index maps the row number to the data, whereas a column database maps the data to the row numbers; that way counting becomes quicker, so it's easy to see how many people like archery, for instance. Storing the columns separately also allows for optimized compression, because there's only one data type per table.

When should you use a row-oriented database and when should you use a column-oriented database? In a column-oriented database it's easy to add another column because none of the existing columns are affected by it, but adding an entire record requires adapting all tables. This makes the row-oriented database preferable over the column-oriented database for online transaction processing (OLTP), because this implies adding or changing records constantly. The column-oriented database shines when performing analytics and reporting: summing values and counting entries. A row-oriented database is often the operational database of choice for actual transactions (such as sales). Overnight batch jobs bring the column-oriented database up to date, supporting lightning-speed lookups and aggregations using MapReduce algorithms for reports. Examples of column-family stores are Apache HBase, Facebook's Cassandra, Hypertable, and the grandfather of wide-column stores, Google BigTable.
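The difference between the two scan patterns is easy to mimic in memory with plain Python. This is only an illustration, with no real database involved; it reuses the people from the figures, and the birthday values other than Jos's are made up for the example. The September lookup only has to touch one "table" in the column-wise layout.

# row-oriented: one record per person, all fields kept together
rows = [
    {"name": "Jos the Boss",        "birthday": "11-12-1985", "hobbies": "archery, conquering the world"},
    {"name": "Fritz von Braun",     "birthday": "27-1-1978",  "hobbies": "building things, surfing"},
    {"name": "Freddy Stark",        "birthday": "19-3-1980",  "hobbies": "swordplay, lollygagging, archery"},
    {"name": "Delphine Thewiseone", "birthday": "15-9-1986",  "hobbies": ""},
]

# column-oriented: one list per column; the list positions act as the row ids
columns = {
    "name":     [r["name"] for r in rows],
    "birthday": [r["birthday"] for r in rows],
    "hobbies":  [r["hobbies"] for r in rows],
}

# row-oriented scan: every record is touched, even though only one field is needed
september_rows = [r["birthday"] for r in rows if r["birthday"].split("-")[1] == "9"]

# column-oriented scan: only the birthday column is read
september_cols = [b for b in columns["birthday"] if b.split("-")[1] == "9"]

print(september_rows)
print(september_cols)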
Key-value stores

Key-value stores are the least complex of the NoSQL databases. They are, as the name suggests, a collection of key-value pairs, as shown in the following figure, and this simplicity makes them the most scalable of the NoSQL database types, capable of storing huge amounts of data.

Figure: Key-value stores store everything as a key and a value.

The value in a key-value store can be anything: a string, a number, but also an entire new set of key-value pairs encapsulated in an object. The next figure shows a slightly more complex, nested key-value structure. Examples of key-value stores are Redis, Voldemort, Riak, and Amazon's Dynamo.

Figure: Key-value nested structure.
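If you have a local Redis server running, the key-value idea takes only a few lines to try out with the redis-py client (pip install redis). This is a side illustration rather than part of the chapter's case study, and the keys shown are made up; the assumption is a Redis instance on the default port.

import redis

r = redis.Redis(host="localhost", port=6379)    # assumes a local Redis server on the default port

# the simplest case: one key, one value
r.set("name", "Jos the Boss")
print(r.get("name"))                             # b'Jos the Boss'

# a richer value: a hash, i.e. a nested set of key-value pairs under one key
r.hset("person:1", "name", "Jos the Boss")
r.hset("person:1", "birthday", "11-12-1985")
r.hset("person:1", "hobbies", "archery, conquering the world")
print(r.hgetall("person:1"))                     # all fields of the nested structure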
Document stores

Document stores are one step up in complexity from key-value stores: a document store does assume a certain document structure that can be specified with a schema. Document stores appear the most natural among the NoSQL database types because they're designed to store everyday documents as is, and they allow for complex querying and calculations on this often already aggregated form of data. The way things are stored in a relational database makes sense from a normalization point of view: everything should be stored only once and connected via foreign keys. Document stores care little about normalization as long as the data is in a structure that makes sense. A relational data model doesn't always fit well with certain business cases. Newspapers or magazines, for example, contain articles. To store these in a relational database, you need to chop them up first: the article text goes in one table, the author and all the information about the author in another, and comments on the article, when published on a website, go in yet another.

Figure: Document stores save documents as a whole, whereas an RDBMS cuts up the article and saves it in several tables. The example was taken from the Guardian website.

As shown in the figure, a newspaper article
can also be stored as a single entity; this lowers the cognitive burden of working with the data for those used to seeing articles all the time. Examples of document stores are MongoDB and CouchDB.

Graph databases

The last big NoSQL database type is the most complex one, geared toward storing relations between entities in an efficient manner. When the data is highly interconnected, such as for social networks, scientific paper citations, or capital asset clusters, graph databases are the answer. Graph or network data has two main components:

- Node: the entities themselves. In a social network this could be people.
- Edge: the relationship between two entities. This relationship is represented by a line and has its own properties. An edge can have a direction, for example, if the arrow indicates who is whose boss.

Graphs can become incredibly complex given enough relation and entity types. The following figure already shows that complexity with only a limited number of entities. Graph databases like Neo4j also claim to uphold ACID, whereas document stores and key-value stores adhere to BASE.

Figure: Graph data example with four entity types (person, hobby, company, and furniture) and their relations, without extra edge or node information.

The possibilities are endless, and because the world is becoming increasingly interconnected, graph databases are likely to win terrain over the other types, including the still-dominant relational database. A ranking of the most popular databases and how they're progressing can be found at db-engines.com.
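Before looking at how these database types rank in popularity, here's a tiny illustration of the node/edge idea in Python using the networkx library (pip install networkx). It rebuilds a fragment of the network from the figure as an in-memory toy; it is not an actual graph database, and the entity and relation names simply mirror the figure.

import networkx as nx

g = nx.DiGraph()   # directed graph: edges can have a direction

# nodes are the entities; attributes describe the entity type
g.add_node("Jos the Boss", entity="person")
g.add_node("Fritz von Braun", entity="person")
g.add_node("archery", entity="hobby")
g.add_node("octopus table", entity="furniture")
g.add_node("webshop", entity="company")

# edges are the relations between entities; attributes describe the relation
g.add_edge("Fritz von Braun", "Jos the Boss", relation="friend of")
g.add_edge("Jos the Boss", "archery", relation="likes")
g.add_edge("Fritz von Braun", "octopus table", relation="owner of")
g.add_edge("Jos the Boss", "webshop", relation="customer of")

# a simple traversal: everything directly related to Jos the Boss
for neighbor in g.successors("Jos the Boss"):
    print("Jos the Boss %s %s" % (g["Jos the Boss"][neighbor]["relation"], neighbor))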
Figure: Top databases ranked by popularity according to db-engines.com at the time of writing.

The figure shows that relational databases still dominated the top of the ranking at the time this book was written, and with the coming of NewSQL we can't count them out yet. Neo4j, the most popular graph database, and Titan, another graph database, sit further down the ranking at the time of writing. Now that you've seen each of the NoSQL database types, it's time to get your hands dirty with one of them.

Case study: what disease is that?

It has happened to many of us: you have sudden medical symptoms and the first thing you do is Google what disease the symptoms might indicate; then you decide whether it's worth seeing a doctor. A web search engine is okay for this, but a more dedicated database would be better. Databases like this exist and are fairly advanced; they can be almost a virtual version of Dr. House, the brilliant diagnostician in the TV series House, M.D. But they're built upon well-protected data, and not all of it is accessible by the public. Also, although big pharmaceutical companies and advanced hospitals have access to these virtual doctors, many general practitioners are still stuck with only their books. This information and resource asymmetry is not only sad and dangerous, it needn't be there at all. If a simple, disease-specific search engine were used by all general practitioners in the world, many medical mistakes could be avoided.

In this case study, you'll learn how to build such a search engine, albeit using only a fraction of the medical data that is freely accessible. To tackle the problem, you'll use a modern NoSQL database called Elasticsearch to store the data, and the
data science process to work with the data and turn it into a resource that's fast and easy to search. Here's how you'll apply the process:

- Setting the research goal.
- Data collection: you'll get your data from Wikipedia. There are more sources out there, but for demonstration purposes a single one will do.
- Data preparation: the Wikipedia data might not be perfect in its current format. You'll apply a few techniques to change this.
- Data exploration: your use case is special in that step 4 of the data science process is also the desired end result: you want your data to become easy to explore.
- Data modeling: no real data modeling is applied in this chapter. Document-term matrices that are used for search are often the starting point for advanced topic modeling; we won't go into that here.
- Presenting results: to make data searchable, you'd need a user interface such as a website where people can query and retrieve disease information. In this chapter you won't go so far as to build an actual interface. Your secondary goal, profiling a disease category by its keywords, will reach this stage of the data science process because you'll present it as a word cloud, such as the one in the following figure.

Figure: Sample word cloud on non-weighted diabetes keywords.

To follow along with the code, you'll need these items:

- A Python session with the elasticsearch-py and wikipedia libraries installed (pip install elasticsearch and pip install wikipedia)
- A locally set up Elasticsearch instance; see the appendix for installation instructions
- The IPython library

NOTE The code for this chapter is available to download from the Manning website for this book and is in IPython notebook format.
Elasticsearch: the open source search engine/NoSQL database

To tackle the problem at hand, diagnosing a disease, the NoSQL database you'll use is Elasticsearch. Like MongoDB, Elasticsearch is a document store. But unlike MongoDB, Elasticsearch is a search engine. Whereas MongoDB is great at performing complex calculations and MapReduce jobs, Elasticsearch's main purpose is full-text search. Elasticsearch will do basic calculations on indexed numerical data, such as sums, counts, median, mean, standard deviation, and so on, but in essence it remains a search engine.

Elasticsearch is built on top of Apache Lucene, the Apache search engine created in 1999. Lucene is notoriously hard to handle and is more a building block for more user-friendly applications than an end-to-end solution in itself. But Lucene is an enormously powerful search engine, and Apache Solr followed in 2004, opening for public use in 2006. Solr (an open source, enterprise search platform) is built on top of Apache Lucene and is at this moment still the most versatile and popular open source search engine. Solr is a great platform and worth investigating if you get involved in a project requiring a search engine. In 2010 Elasticsearch emerged, quickly gaining in popularity. Although Solr can still be difficult to set up and configure, even for small projects, Elasticsearch couldn't be easier. Solr still has an advantage in the number of possible plugins expanding its core functionality, but Elasticsearch is quickly catching up, and today its capabilities are of comparable quality.

Step 1: Setting the research goal

Can you diagnose a disease by the end of this chapter, using nothing but your own home computer and the free software and data out there? Knowing what you want to do and how to do it is the first step in the data science process, as shown in the following figure.

Figure: Step 1 in the data science process: setting the research goal. Primary goal: disease search. Secondary goal: disease profiling.

Your primary goal is to set up a disease search engine that would help general practitioners in diagnosing diseases. Your secondary goal is to profile a disease: what keywords distinguish it from other diseases? This secondary goal is useful for educational purposes or as input to more advanced uses, such as detecting spreading epidemics by tapping into social media. With your research goal and a plan of action defined, let's move on to the data retrieval step.
Steps 2 and 3: Data retrieval and preparation

Data retrieval and data preparation are two distinct steps in the data science process, and even though this remains true for the case study, we'll explore both in the same section. This way you can avoid setting up local intermediate storage and can do the data preparation while the data is being retrieved. Let's look at where we are in the data science process (see the following figure).

Figure: Data science process step 2: data retrieval. In this case there's no internal data available; all data will be fetched from Wikipedia.

As shown in the figure, you have two possible sources: internal data and external data.

- Internal data: you have no disease information lying around. If you currently work for a pharmaceutical company or a hospital, you might be luckier.
- External data: all you can use for this case is external data. You have several possibilities, but you'll go with Wikipedia.

When you pull the data from Wikipedia, you'll need to store it in your local Elasticsearch index, but before you do that you'll need to prepare the data. Once data has entered the Elasticsearch index, it can't be altered; all you can do then is query it. Look at the data preparation overview in the next figure. As it shows, there are three distinct categories of data preparation to consider:

- Data cleansing: the data you'll pull from Wikipedia can be incomplete or erroneous. Data entry errors and spelling mistakes are possible; even false information isn't excluded. Luckily, you don't need the list of diseases to be exhaustive, and you can handle spelling mistakes at search time; more on that later. Thanks to the wikipedia Python library, the textual data you'll receive is fairly clean already. If you were to scrape it manually, you'd need to add HTML cleaning: removing all HTML tags. The truth of the matter is that full-text search tends to be fairly robust toward common errors such as incorrect values. Even if you dumped in HTML tags on purpose, they'd be unlikely to influence the results; the HTML tags are too different from normal language to interfere.
Figure: Data science process step 3: data preparation, with its three categories: data cleansing, data transformation, and combining data.

- Data transformation: you don't need to transform the data much at this point; you want to search it as is. But you'll make the distinction between page title, disease name, and page body. This distinction is almost mandatory for search result interpretation.
- Combining data: all the data is drawn from a single source in this case, so you have no real need to combine data. A possible extension to this exercise would be to get disease data from another source and match the diseases. This is no trivial task, because no unique identifier is present and the names are often slightly different.

You can do data cleansing at only two stages: when using the Python program that connects Wikipedia to Elasticsearch, and when running the Elasticsearch internal indexing system.

- Python: here you define what data you'll allow to be stored by your document store, but you won't clean or transform the data at this stage, because Elasticsearch is better at it for less effort.
- Elasticsearch: Elasticsearch will handle the data manipulation (creating the index) under the hood. You can still influence this process, and you'll do so more explicitly later in this chapter.
Now that you have an overview of the steps to come, let's get to work. If you followed the instructions in the appendix, you should now have a local instance of Elasticsearch up and running. First comes data retrieval: you need information on the different diseases. You have several ways to get that kind of data. You could ask companies for their data or get data from Freebase or other open and free data sources. Acquiring your data can be a challenge, but for this example you'll be pulling it from Wikipedia. This is a bit ironic, because searches on the Wikipedia website itself are handled by Elasticsearch. Wikipedia used to have its own system built on top of Apache Lucene, but it became unmaintainable, and Wikipedia switched to Elasticsearch instead.

Wikipedia has a Lists of diseases page, as shown in the following figure. From here you can borrow the data from the alphabetical lists.

Figure: Wikipedia's Lists of diseases page, the starting point for your data retrieval.

You know what data you want; now go grab it. You could download the entire Wikipedia data dump if you want to (see the Data_dump_torrents#enwiki page on the Wikimedia wiki). Of course, if you were to index the entire Wikipedia, the index would end up requiring a substantial amount of storage. Feel free to use this solution, but for the sake of preserving storage and bandwidth, we'll limit ourselves in this book to pulling only the data we intend to use.

Another option is scraping the pages you require. Like Google, you can make a program crawl through the pages and retrieve the entire rendered HTML. This would do the trick, but you'd end up with the actual HTML, so you'd need to clean that up before indexing it. Also, unless you're Google, websites aren't too fond of crawlers scraping their web pages. This creates an unnecessarily high amount of traffic, and if enough people send crawlers, it can bring the HTTP server to its
knees, spoiling the fun for everyone. Sending billions of requests at the same time is also one of the ways denial of service (DoS) attacks are performed. If you do need to scrape websites, script in a time gap between each page request. This way your scraper more closely mimics the behavior of a regular website visitor and you won't blow up their servers.

Luckily, the creators of Wikipedia are smart enough to know that this is exactly what would happen with all this information open to everyone. They've put an API in place from which you can safely draw your information; you can read more about it in the Wikipedia API documentation. You'll draw from the API, and Python wouldn't be Python if it didn't already have a library to do the job. There are several, actually, but the easiest one will suffice for your needs: wikipedia. Activate your Python virtual environment and install all the libraries you'll need for the rest of the book:

pip install wikipedia
pip install elasticsearch

You'll use wikipedia to tap into Wikipedia; elasticsearch is the main Elasticsearch Python library, and with it you can communicate with your database. Open your favorite Python interpreter and import the necessary libraries:

from elasticsearch import Elasticsearch
import wikipedia

You're going to draw data from the Wikipedia API and at the same time index it on your local Elasticsearch instance, so first you need to prepare Elasticsearch for data acceptance:

client = Elasticsearch()                 # Elasticsearch client used to communicate with the database
indexName = "medical"                    # index name
client.indices.create(index=indexName)   # create the index

The first thing you need is a client. Elasticsearch() can be initialized with an address, but the default is localhost:9200; Elasticsearch() and Elasticsearch('localhost:9200') are thus the same thing: your client is connected to your local Elasticsearch node. Then you create an index named "medical". If all goes well, you should see an "acknowledged: true" reply, as shown in the following figure.

Figure: Creating an Elasticsearch index with python-elasticsearch.

Elasticsearch claims to be schema-less, meaning you can use Elasticsearch without defining a database schema and without telling Elasticsearch what kind of data it
needs to expect. Although this is true for simple cases, you can't avoid having a schema in the long run, so let's create one, as shown in the following listing.

Listing: Adding a mapping to the document type

diseaseMapping = {
    'properties': {
        'name': {'type': 'string'},
        'title': {'type': 'string'},
        'fulltext': {'type': 'string'}
    }
}
client.indices.put_mapping(index=indexName, doc_type='diseases', body=diseaseMapping)

Defining this mapping and attributing it to the "diseases" doc type tells Elasticsearch that your index will have a document type called "diseases", and supplies the field type for each of the fields. You have three fields in a disease document: name, title, and fulltext, all of them of type string. If you hadn't supplied the mapping, Elasticsearch would have guessed their types by looking at the first entry it received. If it didn't recognize the field to be boolean, double, float, long, integer, or date, it would set it to string; in this case, you therefore didn't strictly need to specify the mapping manually.

Now let's move on to Wikipedia. The first thing you want to do is fetch the Lists of diseases page, because this is your entry point for further exploration:

dl = wikipedia.page("Lists_of_diseases")

You now have your first page, but you're more interested in the listing pages, because they contain links to the diseases. Check out the links:

dl.links

The Lists of diseases page comes with more links than you'll use. The following figure shows the alphabetical lists starting at the sixteenth link.
Figure: Links on the Wikipedia page Lists of diseases. It has more links than you'll need.

This page has a considerable array of links, but only the alphabetical lists interest you, so keep only those:

diseaseListArray = []
for link in dl.links[15:42]:
    try:
        diseaseListArray.append(wikipedia.page(link))
    except Exception as e:
        print(str(e))

You've probably noticed that the subset is hardcoded, because you know the alphabetical lists are the 16th up to the 42nd entries in the array. If Wikipedia were to add even a single link before the ones you're interested in, it would throw off the results. A better practice would be to use regular expressions for this task. For exploration purposes, hardcoding the entry numbers is fine, but if regular expressions are second nature to you or you intend to turn this code into a batch job, regular expressions are recommended. You can find more information on them in the Python documentation for the re module. One possibility for a regex version would be the following code snippet:

import re

diseaseListArray = []
check = re.compile("List of diseases*")
for link in dl.links:
    if check.match(link):
        try:
            diseaseListArray.append(wikipedia.page(link))
        except Exception as e:
            print(str(e))
Figure: The first Wikipedia disease list, "List of diseases (0-9)".

The figure shows the first entries of what you're after: the diseases themselves.

diseaseListArray[0].links

It's time to index the diseases. Once they're indexed, both data entry and data preparation are effectively over, as shown in the following listing.

Listing: Indexing diseases from Wikipedia

# checkList is an array of arrays of allowed first characters;
# if a disease doesn't comply, skip it
checkList = [["0","1","2","3","4","5","6","7","8","9"],
    ["A"],["B"],["C"],["D"],["E"],["F"],["G"],["H"],["I"],["J"],
    ["K"],["L"],["M"],["N"],["O"],["P"],["Q"],["R"],["S"],["T"],
    ["U"],["V"],["W"],["X"],["Y"],["Z"]]
docType = 'diseases'

for diseaseListNumber, diseaseList in enumerate(diseaseListArray):   # loop through the disease lists
    for disease in diseaseList.links:                                # loop through the links of every disease list
        try:
            # first check if it's a disease, then index it
            if disease[0] in checkList[diseaseListNumber] and disease[0:4] != "List":
                currentPage = wikipedia.page(disease)
                client.index(index=indexName, doc_type=docType, id=disease,
                    body={"name": disease, "title": currentPage.title,
                          "fulltext": currentPage.content})
        except Exception as e:
            print(str(e))

Because each of the list pages will have links you don't need, check to see if an entry is a disease. You indicate for each list what character the disease should start with, so you check for this. Additionally, you exclude the links starting with "List", because these will pop up once you get to the list of diseases starting with the letter L. The check is rather naive, but the cost of having a few unwanted entries is rather low, because the search algorithms will exclude irrelevant results once you start querying. For each disease you index the disease name and the full text of the page. The name is also used as its index ID; this is useful
for several advanced Elasticsearch features, but also for a quick lookup in the browser. For example, you can fetch an indexed disease directly by pointing your browser at http://localhost:9200/medical/diseases/ followed by the URL-encoded disease name. The title is indexed separately; in most cases the link name and the page title will be identical, and sometimes the title will contain an alternative name for the disease.

With at least a few diseases indexed, it's possible to make use of the Elasticsearch URI for simple lookups. Have a look at a full-body search for the word "headache" in the following figure. You can already do this while indexing: Elasticsearch can update an index and return queries for it at the same time.

Figure: The Elasticsearch URL example buildup: server address, port, index name, document type, the _search endpoint, and the q argument specifying the field to search in and what to search for.

If you don't query the index, you can still get a few results without knowing anything about the index: specifying http://localhost:9200/medical/diseases/_search will return the first five results. For a more structured view on the data, you can ask for the mapping of this document type at http://localhost:9200/medical/diseases/_mapping?pretty. The pretty GET argument shows the returned JSON in a more readable format, as can be seen in the following figure. The mapping does appear to be the way you specified it: all fields are of type string.

Figure: Diseases document type mapping via the Elasticsearch URL.

The Elasticsearch URL is certainly useful, yet it won't suffice for your needs. You still have diseases to diagnose, and for this you'll send POST requests to Elasticsearch via your elasticsearch Python library.
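Before leaving the URL interface behind: the same URI lookups can also be scripted from Python with the requests library. This is a side illustration rather than one of the book's listings, and it assumes the local node on port 9200 and the medical index built above.

import requests

# full-body search for "headache", mirroring the browser example
resp = requests.get("http://localhost:9200/medical/diseases/_search",
                    params={"q": "fulltext:headache"})
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"])

# the document type mapping, pretty-printed
print(requests.get("http://localhost:9200/medical/diseases/_mapping",
                   params={"pretty": ""}).text)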
With data retrieval and preparation accomplished, you can move on to exploring your data.

Step 4: Data exploration

"It's not lupus. It's never lupus!" --Dr. House of House, M.D.

Data exploration is what marks this case study, because the primary goal of the project (disease diagnostics) is a specific way of exploring the data: querying for disease symptoms. The following figure shows several data exploration techniques, but in this case the exploration is non-graphical: interpreting text search query results.

Figure: Data science process step 4: data exploration, here in the form of a non-graphical technique: text search.

The moment of truth is here: can you find certain diseases by feeding your search engine their symptoms? Let's first make sure you have the basics up and running. Import the Elasticsearch library and define global search settings:

from elasticsearch import Elasticsearch
client = Elasticsearch()
indexName = "medical"
docType = "diseases"
searchFrom = 0
searchSize = 3    # return only the first three results; by default Elasticsearch returns more

Elasticsearch has an elaborate JSON query language; every search is a POST request to the server and will be answered with a JSON answer. Roughly, the language consists of three big parts: queries, filters, and aggregations. A query takes in search keywords and puts them through one or more analyzers before the words are looked up in the index. We'll get deeper into analyzers a bit later in this chapter. A filter takes keywords like a query does, but doesn't try to analyze what you give it; it filters on the conditions we provide. Filters are thus less complex but many times more efficient, because
they're also temporarily stored within Elasticsearch in case you use the same filter twice. Aggregations can be compared to the SQL GROUP BY: buckets of words will be created, and for each bucket relevant statistics can be calculated. Each of these three compartments has loads of options and features, making it impossible to elaborate on the entire language here. Luckily, there's no need to go into the full complexity that Elasticsearch queries can represent. We'll use the "query string query" language, a way to query the data that closely resembles the Google search query language. If, for instance, you want a search term to be mandatory, you add a plus (+) sign; if you want to exclude the search term, you use a minus (-) sign. Querying Elasticsearch with a query string isn't recommended because it decreases performance: the search engine first needs to translate the query string into its native JSON query language. But for your purposes it will work nicely; also, performance won't be a factor on the several thousand records you have in your index. Now it's time to query your disease data.

Project primary objective: diagnosing a disease by its symptoms

If you ever saw the popular television series House, the sentence "it's never lupus" may sound familiar. Lupus is a type of autoimmune disease, where the body's immune system attacks healthy parts of the body. Let's see what symptoms your search engine would need to determine that you're looking for lupus. Start off with three symptoms: fatigue, fever, and joint pain. Your imaginary patient has all three of them (and more), so make them all mandatory by adding a plus sign before each one.

Listing: "Simple query string" Elasticsearch query with three mandatory keywords

# searchBody contains the search request we'll send.
# "fields" lists the fields to return in the results (not the fields to search in);
# simple_query_string is a query type that takes input much the way the
# Google homepage would. Other things are possible in the query part,
# like aggregations; more on that later.
searchBody = {
    "fields": ["name"],
    "query": {
        "simple_query_string": {
            "query": '+fatigue+fever+"joint pain"',
            # the fields in which to search, not to be confused with the fields to return
            "fields": ["fulltext", "title^5", "name^10"]
        }
    }
}
client.search(index=indexName, doc_type=docType, body=searchBody,
              from_=searchFrom, size=searchSize)

Like in a query on Google, the + sign indicates the term is mandatory, and encapsulating two or more words in quotes signals that you want to find them exactly like this. The last line executes the search; the variables indexName, docType, searchFrom, and searchSize were declared earlier:

indexName = "medical"
docType = "diseases"
searchFrom = 0
searchSize = 3
In searchBody, which has a JSON structure, you specify the fields you'd like to see returned; in this case the name of the disease should suffice. You use the query string syntax to search in all the indexed fields: fulltext, title, and name. By adding ^ followed by a number, you can give each field a weight: if a symptom occurs in the title, it's five times more important than in the open text; if it occurs in the name itself, it's considered ten times as important. Notice how "joint pain" is wrapped in a pair of quotation marks. Without the quote signs, joint and pain would have been considered two separate keywords rather than a single phrase; in Elasticsearch this is called phrase matching. Let's look at the results in the following figure.

Figure: Lupus first search, showing the results found. Lupus is not among the top diseases returned.

The figure shows the top three results returned out of all matching diseases. The results are sorted by their matching score, the variable _score. The matching score is no simple thing to explain: it takes into consideration how well the disease matches your query, how many times a keyword was found, the weights you gave, and so on. Currently, lupus doesn't even show up in the top three results. Luckily for you, lupus has another distinct symptom: a rash. The rash doesn't always show up on the person's face, but it does happen, and this is where lupus got its name: the face rash makes people vaguely resemble a wolf. Your patient has a rash but not the signature rash on the face, so add "rash" to the symptoms without mentioning the face:

"query": '+fatigue+fever+"joint pain"+rash'
Figure: Lupus second search attempt, with six results and lupus in the top three.

The results of the new search are shown in the figure. Now the results have been narrowed down to six, and lupus is in the top three. At this point the search engine says human granulocytic ehrlichiosis (HGE) is more likely. HGE is a disease spread by ticks, like the infamous Lyme disease. By now a capable doctor would have already figured out which disease plagues your patient, because in determining diseases many factors are at play, more than you can feed into your humble search engine. For instance, the rash occurs in only part of HGE patients and part of lupus patients. Lupus emerges slowly, whereas HGE is set off by a tick bite. Advanced machine-learning databases fed with all this information in a more structured way could make a diagnosis with far greater certainty. Given that you need to make do with the Wikipedia pages, you need another symptom to confirm that it's lupus. The patient experiences chest pain, so add this to the list:

"query": '+fatigue+fever+"joint pain"+rash+"chest pain"'

The result is shown in the following figure. Seems like it's lupus. It took a while to get to this conclusion, but you got there. Of course, you were limited in the way you presented Elasticsearch with the symptoms: you used only single terms ("fatigue") or literal phrases ("joint pain"). This worked out for this example, but Elasticsearch is more flexible than this: it can take regular expressions and do fuzzy search, but that's beyond the scope of this book, although a few examples are included in the downloadable code.
Figure: Lupus third search, with enough symptoms to determine it must be lupus.

Handling spelling mistakes: Damerau-Levenshtein

Say someone typed "lupsu" instead of "lupus". Spelling mistakes happen all the time and in all types of human-crafted documents. To deal with this, data scientists often use the Damerau-Levenshtein distance. The Damerau-Levenshtein distance between two strings is the number of operations required to turn one string into the other. Four operations are allowed to calculate the distance:

- Deletion: delete a character from the string.
- Insertion: add a character to the string.
- Substitution: substitute one character for another. Without substitution counted as a single operation, changing one character into another would take two operations: one deletion and one insertion.
- Transposition of two adjacent characters: swap two adjacent characters.

This last operation (transposition) is what distinguishes the traditional Levenshtein distance from the Damerau-Levenshtein distance, and it's what makes our dyslexic spelling mistake fall within acceptable limits. Damerau-Levenshtein is forgiving of these transposition mistakes, which makes it great for search engines, but it's also used for other things, such as calculating the differences between DNA strings. The following figure shows how the transformation from "lupsu" to "lupus" is performed with a single transposition.

Figure: Adjacent character transposition is one of the operations in the Damerau-Levenshtein distance. The other three are insertion, deletion, and substitution.
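The chapter never implements the distance itself, since Elasticsearch handles fuzzy matching internally, but a compact sketch of the commonly used optimal string alignment variant of Damerau-Levenshtein makes the four operations concrete:

def damerau_levenshtein(a, b):
    """Optimal string alignment distance: deletions, insertions,
    substitutions, and transpositions of adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                      # i deletions turn a[:i] into ""
    for j in range(len(b) + 1):
        d[0][j] = j                      # j insertions turn "" into b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("lupsu", "lupus"))   # 1: a single adjacent swap
print(damerau_levenshtein("lupsu", "lupine"))  # a larger distance, so a worse match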
With just this you've achieved your first objective: diagnosing a disease. But let's not forget about your secondary project objective: disease profiling.

Project secondary objective: disease profiling

What you want is a list of keywords fitting your selected disease. For this you'll use the significant terms aggregation. The score calculation to determine which words are significant is once again a combination of factors, but it roughly boils down to a comparison of the number of times a term is found in the result set as opposed to all the other documents. This way Elasticsearch profiles your result set by supplying the keywords that distinguish it from the other data. Let's do that on diabetes, a common disease that can take many forms.

Listing: Significant terms Elasticsearch query for "diabetes"

# searchBody contains the search request we'll send; we want the name field returned.
# A filtered query has two possible components: a query and a filter.
# The query part isn't mandatory; here a filter is sufficient. The filter
# matches exact values only and is therefore far more efficient, but also
# more restrictive, than a search.
searchBody = {
    "fields": ["name"],
    "query": {
        "filtered": {
            "filter": {
                # filter the name field, keeping only documents that contain the term diabetes
                'term': {'name': 'diabetes'}
            }
        }
    },
    # DiseaseKeywords is the name we give to our aggregation. An aggregation can
    # generally be compared to a GROUP BY in SQL; it's mostly used to summarize
    # values of a numeric variable over the distinct values of one or more variables.
    # A significant terms aggregation can be compared to keyword detection: the
    # internal algorithm looks for words that are "more important" for the selected
    # set of documents than they are in the overall population of documents.
    "aggregations": {
        "DiseaseKeywords": {
            "significant_terms": {"field": "fulltext"}
        }
    }
}
client.search(index=indexName, doc_type=docType, body=searchBody,
              from_=searchFrom, size=searchSize)

You see new code here: you got rid of the query string search and used a filter instead. The filter is encapsulated within the query part, because search queries can be combined with filters. It doesn't occur in this example, but when this happens, Elasticsearch will first apply the far more efficient filter before attempting the search. If you know you want to search in a subset of your data, it's always a good idea to add a filter to first create this subset. To demonstrate this, consider the following two snippets of code. They yield the same results, but they're not the exact same thing.
A simple query string search for "diabetes" in the disease name:

"query": {
    "simple_query_string": {
        "query": 'diabetes',
        "fields": ["name"]
    }
}

A term filter keeping all the diseases with "diabetes" in the name:

"query": {
    "filtered": {
        "filter": {
            'term': {'name': 'diabetes'}
        }
    }
}

Although it won't show on the small amount of data at your disposal, the filter is way faster than the search. A search query will calculate a search score for each of the diseases and rank them accordingly, whereas a filter simply filters out all those that don't comply. A filter is thus far less complex than an actual search: it's either "yes" or "no", and this is evident in the output. The score is 1 for everything; no distinction is made within the result set. The output now consists of two parts because of the significant terms aggregation. Before, you only had hits; now you have hits and aggregations. First have a look at the hits in the following figure. This should look familiar by now, with one notable exception: all results have a score of 1.

Figure: Hits output of the filtered query with the filter "diabetes" on the disease name.

In addition to being easier to perform, a filter is cached by Elasticsearch for
a while. This way, subsequent requests with the same filter are even faster, resulting in a huge performance advantage over search queries. When should you use filters and when search queries? The rule is simple: use filters whenever possible, and use search queries for full-text search when a ranking between the results is required to get the most interesting results at the top. Now take a look at the significant terms in the following figure.

Figure: Diabetes significant terms aggregation, first five keywords.

If you look at the first five keywords in the figure, you'll see that the top four are related to the origin of the word diabetes. The following Wikipedia paragraph offers help:

"The word diabetes comes from Latin diabetes, which in turn comes from Ancient Greek diabetes, which literally means 'a passer through; a siphon'. Ancient Greek physician Aretaeus of Cappadocia (fl. 1st century CE) used that word, with the intended meaning 'excessive discharge of urine', as the name for the disease. Ultimately, the word comes from Greek diabainein, meaning 'to pass through', which is composed of dia-, meaning 'through', and bainein, meaning 'to go'. The word 'diabetes' is first recorded in English, in the form diabete, in a medical text written around 1425." --Wikipedia page on diabetes mellitus

This tells you where the word diabetes comes from: "a passer through; a siphon" in Greek. It also mentions diabainein and bainein. You might have known that the most
relevant keywords for a disease would be the actual definition and origin. Luckily, we asked for a list of keywords, so let's pick a few more interesting ones, such as ndi. NDI is the lowercased version of "NDI", or "nephrogenic diabetes insipidus", the most common acquired form of diabetes. Lowercase keywords are returned because that's how they're stored in the index when put through the standard analyzer; we didn't specify anything while indexing, so the standard analyzer was used by default. Other interesting keywords near the top are avp, a gene related to diabetes; thirst, a symptom of diabetes; and amiloride, a medication for diabetes. These keywords do seem to profile diabetes, but we're missing multi-term keywords: we stored only individual terms in the index, because this was the default behavior. Certain words will never show up on their own because they're not used that often, but they're still significant when used in combination with other terms. Currently we miss out on the relationship between certain terms. Take AVP, for example: if AVP were always written out in its full form, it wouldn't be picked up as a single significant term. Storing n-grams (combinations of n words) takes up storage space, and using them for queries or aggregations taxes the search server. Deciding where to stop is a balancing exercise and depends on your data and use case. Generally, bigrams (combinations of two terms) are useful, because meaningful bigrams exist in natural language, while higher-order n-grams are much rarer. Bigram key concepts would be useful for disease profiling, but to create those bigram significant terms aggregations, you'd need them stored as bigrams in your index. As is often the case in data science, you'll need to go back several steps to make a few changes. Let's go back to the data preparation phase.

Step 3 revisited: data preparation for disease profiling

It shouldn't come as a surprise that you're back to data preparation, as shown in the following figure; the data science process is an iterative one, after all.

Figure: Data science process step 3: data preparation. Data cleansing for text can be stop word filtering; data transformation can be lowercasing of characters.

When you indexed your data, you did virtually no data cleansing or data transformation. You can add data cleansing now by, for instance, stop word filtering. Stop words are words that are so common that they're often discarded because they can pollute the results. We won't go into stop word filtering (or other data cleansing) here, but feel free to try it yourself. To index bigrams you need to create your own token filter and text analyzer. A token filter is capable of applying transformations to tokens. Your specific token filter
join the nosql movement needs to combine tokens to create -gramsalso called shingles the default elasticsearch tokenizer is called the standard tokenizerand it will look for word boundarieslike the space between wordsto cut the text into different tokens or terms take look at the new settings for your disease indexas shown in the following listing listing updating elasticsearch index settings settings="analysis""filter""my_shingle_filter""type""shingle""min_shingle_size" "max_shingle_size" "output_unigrams"false }"analyzer""my_shingle_analyzer""type""custom""tokenizer""standard""filter""lowercase""my_shingle_filterbefore you can change certain settingsthe index needs to be closed after changing the settingsyou can reopen the index client indices close(index=indexnameclient indices put_settings(index=indexname body settingsclient indices open(index=indexnameyou create two new elementsthe token filter called "my shingle filterand new analyzer called "my_shingle_analyzer because -grams are so commonelasticsearch comes with built-in shingle token filter type all you need to tell it is that you want the bigrams "min_shingle_size "max_shingle_size as shown in figure you could go for trigrams and higherbut for demonstration purposes this will suffice name built-in token filter type we want bigrams we don' need the unigrams output next to our bigrams figure shingle token filter to produce bigrams
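Once these settings are in place, it's worth sanity-checking that the shingle analyzer really produces bigrams before reindexing anything. The sketch below reuses the client and indexname variables from the earlier listings; the exact way of passing the analyzer and sample text differs slightly between Elasticsearch client versions, so treat it as an illustration rather than the one definitive call.

# Feed a sample phrase through the custom analyzer and inspect the tokens.
# Assumes `client` and `indexname` from the earlier listings.
sample_text = "nephrogenic diabetes insipidus"
analysis = client.indices.analyze(
    index=indexname,
    analyzer="my_shingle_analyzer",  # on newer clients these arguments go into a request body
    text=sample_text
)
# Each returned token should now be a two-word shingle,
# e.g. "nephrogenic diabetes" and "diabetes insipidus".
print([token["token"] for token in analysis["tokens"]])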
the analyzer shown in figure is the combination of all the operations required to go from input text to index it incorporates the shingle filterbut it' much more than this the tokenizer splits the text into tokens or termsyou can then use lowercase filter so there' no difference when searching for "diabetesversus "diabetes finallyyou apply your shingle filtercreating your bigrams name this is custom-defined analyzer-we specify every component ourselves we still make use of the token analyzer (which is also the default analyzerthe lowercase token filter (which is also the default filterwill lowercase every character when lowercasing is doneour shingle filter is appliedcreating bigrams instead of the default unigrams figure bigrams custom analyzer with standard tokenization and shingle token filter to produce notice that you need to close the index before updating the settings you can then safely reopen the index knowing that your settings have been updated not all setting changes require the index to be closedbut this one does you can find an overview of what settings need the index to be closed at the index is now ready to use your new analyzer for this you'll create new document typediseases with new mappingas shown in the following listing listing create more advanced elasticsearch doctype mapping doctype 'diseases the new disease mapping diseasemapping differs from the old one by 'properties'the addition of the 'name'{'type''string'}fulltext shingles field that 'title'{'type''string'}contains your bigrams 'fulltext'"type""string""fields""shingles""type""string""analyzer""my_shingle_analyzer
join the nosql movement client indices put_mapping(index=indexnamedoc_type=doctype,body=diseasemapping within fulltext you now have an extra parameterfields here you can specify all the different isotopes of fulltext you have only oneit goes by the name shingles and will analyze the fulltext with your new my_shingle_analyzer you still have access to your original fulltextand you didn' specify an analyzer for thisso the standard one will be used as before you can access the new one by giving the property name followed by its field namefulltext shingles all you need to do now is go through the previous steps and index the data using the wikipedia apias shown in the following listing listing reindexing wikipedia disease explanations with new doctype mapping dl wikipedia page("lists_of_diseases"diseaselistarray [for link in dl links[ : ]trydiseaselistarray append(wikipedia page(link)except exception,eprint str(echecklist [[" "," "," "," "," "," "," "," "," "," "][" "],[" "],[" "],[" "],[" "],[" "],[" "][" "],[" "],[" "],[" "],[" "],[" "],[" "][" "],[" "],[" "],[" "],[" "],[" "],[" "][" "],[" "],[" "],[" "],[" "]loop through disease lists the checklist is an array containing allowed "first characters if disease doesn' complyyou skip it for diseaselistnumberdiseaselist in enumerate(diseaselistarray)for disease in diseaselist links#loop through lists of links for every disease list tryif disease[ in checklist[diseaselistnumberfirst check if it' and disease[ : !="list" diseasethen currentpage wikipedia page(diseaseindex it client index(index=indexnamedoc_type=doctype,id diseasebody={"name"disease"title":currentpage title "fulltext":currentpage content}except exception,eprint str(ethere' nothing new hereonly this time you'll index doc_type diseases instead of diseases when this is complete you can again move forward to step data explorationand check the results
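Before exploring the data again, a quick count of the documents in the new doctype is an easy way to confirm that the reindexing succeeded. A minimal sketch, reusing the client, indexname, and doctype variables from the listing above:

# Sanity check: how many disease documents ended up in the new doctype?
# Assumes `client`, `indexname`, and `doctype` from the previous listing.
doc_count = client.count(index=indexname, doc_type=doctype)
print("indexed disease documents:", doc_count["count"])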
step revisiteddata exploration for disease profiling you've once again arrived at data exploration you can adapt the aggregations query and use your new field to give you bigram key concepts related to diabeteslisting significant terms aggregation on "diabeteswith bigrams searchbody="fields":["name"]"query":"filtered"filter"'term'{'name':'diabetes'}"aggregations"diseasekeywords"significant_terms"field"fulltext""size }"diseasebigrams""significant_terms"field"fulltext shingles""size client search(index=indexname,doc_type=doctypebody=searchbodyfrom_ size= your new aggregatecalled diseasebigramsuses the fulltext shingles field to provide few new insights into diabetes these new key terms show upexcessive discharge-- diabetes patient needs to urinate frequently causes polyuria--this indicates the same thingdiabetes causes the patient to urinate frequently deprivation test--this is actually trigram"water deprivation test"but it recognized deprivation test because you have only bigrams it' test to determine whether patient has diabetes excessive thirst--you already found "thirstwith your unigram keyword searchbut technically at that point it could have meant "no thirst there are other interesting bigramsunigramsand probably also trigrams taken as wholethey can be used to analyze text or collection of texts before reading them notice that you achieved the desired results without getting to the modeling stage sometimes there' at least an equal amount of valuable information to be found in data exploration as in data modeling now that you've fully achieved your secondary objectiveyou can move on to step of the data science processpresentation and automation
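Before doing so, one practical note: instead of reading the keywords from the raw JSON response, you can pull them out of the aggregation buckets directly. A small sketch, assuming the response of the search call above was stored in a variable named results (a hypothetical name; the listing itself doesn't show the assignment):

# Extract the significant unigrams and bigrams from the aggregation response.
# `results` is assumed to hold the return value of the client.search(...) call above.
unigrams = [bucket["key"] for bucket in results["aggregations"]["diseasekeywords"]["buckets"]]
bigrams = [bucket["key"] for bucket in results["aggregations"]["diseasebigrams"]["buckets"]]
print("unigram keywords:", unigrams)
print("bigram keywords:", bigrams)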
join the nosql movement step presentation and automation your primary objectivedisease diagnosticsturned into self-service diagnostics tool by allowing physician to query it viafor instancea web application you won' build website in this casebut if you plan on doing soplease read the sidebar "elasticsearch for web applications elasticsearch for web applications as with any other databaseit' bad practice to expose your elasticsearch rest api directly to the front end of web applications if website can directly make post requests to your databaseanyone can just as easily delete your datathere' always need for an intermediate layer this middle layer could be python if that suits you two popular python solutions would be django or the django rest framework in combination with an independent front end django is generally used to build round-trip applications (web applications where the server builds the front end dynamicallygiven the data from the database and templating systemthe django rest framework is plugin to djangotransforming django into rest serviceenabling it to become part of single-page applications single-page application is web application that uses single web page as an anchor but is capable of dynamically changing the content by retrieving static files from the http server and data from restful apis both approaches (round-trip and single-pageare fineas long as the elasticsearch server itself isn' open to the publicbecause it has no built-in security measures security can be added to elasticsearch directly using "shield,an elasticsearch payable service the secondary objectivedisease profilingcan also be taken to the level of user interfaceit' possible to let the search results produce word cloud that visually summarizes the search results we won' take it that far in this bookbut if you're interested in setting up something like this in pythonuse the word_cloud library (pip install word_cloudor if you prefer javascriptd js is good way to go you can find an example implementation at #% % fwww jasondavies com% fwordcloud% fabout% adding your keywords on this js-driven website will produce unigram word cloud like the one shown in figure that can be incorporated into the presentation figure unigram word cloud on non-weighted diabetes keywords from elasticsearch
of your project results the terms aren' weighted by their score in this casebut it already provides nice representation of the findings many improvements are possible for your applicationespecially in the area of data preparation but diving into all the possibilities here would take us too farthus we've come to the end of this in the next one we'll take look at streaming data summary in this you learned the followingnosql stands for "not only structured query languageand has arisen from the need to handle the exponentially increasing amounts and varieties of dataas well as the increasing need for more diverse and flexible schemas such as network and hierarchical structures handling all this data requires database partitioning because no single machine is capable of doing all the work when partitioningthe cap theorem appliesyou can have availability or consistency but never both at the same time relational databases and graph databases hold to the acid principlesatomicityconsistencyisolationand durability nosql databases generally follow the base principlesbasic availabilitysoft stateand eventual consistency the four biggest types of nosql databases key-value stores--essentially bunch of key-value pairs stored in database these databases can be immensely big and are hugely versatile but the data complexity is low well-known example is redis wide-column databases--these databases are bit more complex than keyvalue stores in that they use columns but in more efficient way than regular rdbms would the columns are essentially decoupledallowing you to retrieve data in single column quickly well-known database is cassandra document stores--these databases are little bit more complex and store data as documents currently the most popular one is mongodbbut in our case study we use elasticsearchwhich is both document store and search engine graph databases--these databases can store the most complex data structuresas they treat the entities and relations between entities with equal care this complexity comes at cost in lookup speed popular one is neo jbut graphx ( graph database related to apache sparkis winning ground elasticsearch is document store and full-text search engine built on top of apache lucenethe open source search engine it can be used to tokenizeperform aggregation queriesperform dimensional (facetedqueriesprofile search queriesand much more
This chapter covers
- Introducing connected data and how it's related to graphs and graph databases
- Learning how graph databases differ from relational databases
- Discovering the graph database Neo4j
- Applying the data science process to a recommender engine project with the graph database Neo4j

Where on one hand we're producing data at mass scale, prompting the likes of Google, Amazon, and Facebook to come up with intelligent ways to deal with this, on the other hand we're faced with data that's becoming more interconnected than ever. Graphs and networks are pervasive in our lives. By presenting several motivating examples, we hope to teach the reader how to recognize a graph problem when it reveals itself. In this chapter we'll look at how to leverage those connections for all they're worth using a graph database, and demonstrate how to use Neo4j, a popular graph database.
introducing connected data and graph databases let' start by familiarizing ourselves with the concept of connected data and its representation as graph data connected data--as the name indicatesconnected data is characterized by the fact that the data at hand has relationship that makes it connected graphs--often referred to in the same sentence as connected data graphs are well suited to represent the connectivity of data in meaningful way graph databases--introduced in the reason this subject is meriting particular attention is becausebesides the fact that data is increasing in sizeit' also becoming more interconnected not much effort is needed to come up with well-known examples of connected data prominent example of data that takes network form is social media data social media allows us to share and exchange data in networksthereby generating great amount of connected data we can illustrate this with simple example let' assume we have two people in our datauser and user furthermorewe know the first name and the last name of user (first namepaul and last namebeunand user (first namejelme and last nameragnara natural way of representing this could be by drawing it out on whiteboardas shown in figure entitiesnodes knows user user namepaul last namebeun namejelme last nameragnar properties of paul relationship of type "knowsproperties of jelme figure simple connected data exampletwo entities or nodes (user user )each with properties (first namelast name)connected by relationship (knowsthe terminology of figure is described belowentities--we have two entities that represent people (user and user these entities have the properties "nameand "lastnameproperties--the properties are defined by key-value pairs from this graph we can also infer that user with the "nameproperty paul knows user with the "nameproperty jelme
the rise of graph databases relationships--this is the relationship between paul and jelme note that the relationship has directionit' paul who "knowsjelme and not the other way around user and user both represent people and could therefore be grouped labels--in graph databaseone can group nodes by using labels user and user could in this case both be labeled as "userconnected data often contains many more entities and connections in figure we can see more extensive graph two more entities are includedcountry with the name cambodia and country with the name sweden two more relationships exist"has_been_inand "is_born_inin the previous graphonly the entities included propertynow the relationships also contain property such graphs are known as property graphs the relationship connecting the nodes user and country is of the type "has_been_inand has as property "datewhich represents data value similarlyuser is connected to country but through different type of relationshipwhich is of the type "is_born_innote that the types of relationships provide us context of the relationships between nodes nodes can have multiple relationships knows user user namepaul last namebeun namejelme last nameragnar has_been_in is_born_in datedobcountry country namecambodia namesweden relationship of type "is_born_inwith property figure more complicated connected data example where two more entities have been included (country and country and two new relationships ("has_been_inand "is_born_in"this kind of representation of our data gives us an intuitive way to store connected data to explore our data we need to traverse through the graph following predefined paths to find the patterns we're searching for what if one would like to know where paul has beentranslated into graph database terminologywe' like to find the pattern "paul has been in to answer thiswe' start at the node with the
name "pauland traverse to cambodia via the relationship "has_been_inhence graph traversalwhich corresponds to database querywould be the following starting node--in this case the node with name property "paula traversal path--in this case path starting at node paul and going to cambodia end node--country node with name property "cambodiato better understand how graph databases deal with connected datait' appropriate to expand bit more on graphs in general graphs are extensively studied in the domains of computer science and mathematics in field called graph theory graph theory is the study of graphswhere graphs represent the mathematical structures used to model pairwise relations between objectsas shown in figure what makes them so appealing is that they have structure that lends itself to visualizing connected data graph is defined by vertices (also known as nodes in the graph database worldand edges (also known as relationshipsthese concepts form the basic fundamentals on which graph data structures are based vertex aliasnode edge edge aliasrelationship edge vertex vertex figure at its core graph consists of nodes (also known as verticesand edges (that connect the vertices)as known from the mathematical definition of graph these collections of objects represent the graph compared to other data structuresa distinctive feature of connected data is its nonlinear natureany entity can be connected to any other via variety of relationship types and intermediate entities and paths in graphsyou can make subdivision between directed and undirected graphs the edges of directed graph have--how could it be otherwise-- direction although one could argue that every problem could somehow be represented as graph problemit' important to understand when it' ideal to do so and when it' not why and when should use graph databasethe quest of determining which graph database one should use could be an involved process to undertake one important aspect in this decision making process is
the rise of graph databases finding the right representation for your data since the early the most common type of database one had to rely on was relational one laterothers emergedsuch as the hierarchical database (for exampleims)and the graph database' closest relativethe network database (for exampleidmsbut during the last decades the landscape has become much more diversegiving end-users more choice depending on their specific needs considering the recent development of the data that' becoming availabletwo characteristics are well suited to be highlighted here the first one is the size of the data and the other the complexity of the dataas shown in figure key-value data store column-value data store document databases size of data graph databases relational databases complexity of data figure this figure illustrates the positioning of graph databases on two dimensional space where one dimension represents the size of the data one is dealing withand the other dimension represents the complexity in terms of how connected the data is when relational databases can no longer cope with the complexity of data set because of its connectednessbut not its sizegraph databases may be your best option as figure indicateswe'll need to rely on graph database when the data is complex but still small though "smallis relative thing herewe're still talking hundreds of millions of nodes handling complexity is the main asset of graph database and the ultimate "whyyou' use it to explain what kind of complexity is meant herefirst think about how traditional relational database works contrary to what the name of relational databases indicatesnot much is relational about them except that the foreign keys and primary keys are what relate tables in contrastrelationships in graph databases are first-class citizens through this aspectthey lend themselves well to modeling and querying connected data
relational database would rather strive for minimizing data redundancy this process is known as database normalizationwhere table is decomposed into smaller (less redundanttables while maintaining all the information intact in normalized database one needs to conduct changes of an attribute in only one table the aim of this process is to isolate data changes in one table relational database management systems (rdbmsare good choice as database for data that fits nicely into tabular format the relationships in the data can be expressed by joining the tables their fit starts to downgrade when the joins become more complicatedespecially when they become many-to-many joins query time will also increase when your data size starts increasingand maintaining the database will be more of challenge these factors will hamper the performance of your database graph databaseson the other handinherently store data as nodes and relationships although graph databases are classified as nosql type of databasea trend to present them as category in their own right exists one seeks the justification for this by noting that the other types of nosql databases are aggregation-orientedwhile graph databases aren' relational database mightfor examplehave table representing "peopleand their properties any person is related to other people through kinship (and friendshipand so on)each row might represent personbut connecting them to other rows in the people table would be an immensely difficult job do you add variable that holds the unique identifier of the first child and an extra one to hold the id of the second childwhere do you stoptenth childan alternative would be to use an intermediate table for child-parent relationshipsbut you'll need separate one for other relationship types like friendship in this last case you don' get column proliferation but table proliferationone relationship table for each type of relationship even if you somehow succeed in modeling the data in such way that all family relations are presentyou'll need difficult queries to get the answer to simple questions such as " would like the grandsons of john mcbain first you need to find john mcbain' children once you find his childrenyou need to find theirs by the time you have found all the grandsonsyou have hit the "peopletable three times find mcbain and fetch his children look up the children with the ids you got and get the ids of their children find the grandsons of mcbain figure shows the recursive lookups in relation database necessary to get from john mcbain to his grandsons if everything is in single table figure is another way to model the datathe parent-child relationship is separate table recursive lookups such as these are inefficientto say the least
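To make the contrast concrete, here is roughly how the same question reads in a graph query language. The snippet is a sketch in Cypher (Neo4j's query language, introduced later in this chapter); the Person label and the PARENT_OF relationship type are assumptions made for the example, not part of the tables shown below.

// Hypothetical model: (:Person)-[:PARENT_OF]->(:Person)
// "I would like the grandchildren of John McBain" as a single pattern,
// instead of three recursive lookups:
MATCH (john:Person {name: 'John', lastname: 'McBain'})
      -[:PARENT_OF]->()-[:PARENT_OF]->(grandchild:Person)
RETURN grandchild.name, grandchild.lastname

Adding a gender property to the nodes and a single WHERE clause would narrow this down to the grandsons.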
the rise of graph databases people table other ids first name last name id child id child id john mcbain wolf mcbain null arnold mcbain null moe mcbain null null null dave mcbain null null null jago mcbain null null null carl mcbain null null null find john mcbain figure use child ids to find wolf and arnold mcbain use child ids to find moedavejagoand carl mcbain recursive lookup version all data in one table people table parent-child relationship table first name last name person id parent id child id john mcbain wolf mcbain arnold mcbain moe mcbain dave mcbain jago mcbain carl mcbain find john mcbain figure use child ids to find wolf and arnold mcbain use child ids to find moedavejagoand carl mcbain recursive lookup version using parent-child relationship table graph databases shine when this type of complexity arises let' look at the most popular among them introducing neo ja graph database connected data is generally stored in graph databases these databases are specifically designed to cope with the structure of connected data the landscape of available graph databases is rather diverse these days the three most-known ones in order of
decreasing popularity are neo jorientdband titan to showcase our case study we'll choose the most popular one at the moment of writing (see com/en/ranking/graph+dbmsseptember neo is graph database that stores the data in graph containing nodes and relationships (both are allowed to contain propertiesthis type of graph database is known as property graph and is well suited for storing connected data it has flexible schema that will give us freedom to change our data structure if neededproviding us the ability to add new data and new relationships if needed it' an open source projectmature technologyeasy to installuser-friendlyand well documented neo also has browser-based interface that facilitates the creation of graphs for visualization purposes to follow alongthis would be the right moment to install neo neo can be downloaded from now let' introduce the four basic structures in neo jnodes--represent entities such as documentsusersrecipesand so on certain properties could be assigned to nodes relationships--exist between the different nodes they can be accessed either stand-alone or through the nodes they're attached to relationships can also contain propertieshence the name property graph model every relationship has name and directionwhich together provide semantic context for the nodes connected by the relationship properties--both nodes and relationships can have properties properties are defined by key-value pairs labels --can be used to group similar nodes to facilitate faster traversal through graphs before conducting an analysisa good habit is to design your database carefully so it fits the queries you' like to run down the road when performing your analysis graph databases have the pleasant characteristic that they're whiteboard friendly if one tries to draw the problem setting on whiteboardthis drawing will closely resemble the database design for the defined problem thereforesuch whiteboard drawing would then be good starting point to design our database now how to retrieve the datato explore our datawe need to traverse through the graph following predefined paths to find the patterns we're searching for the neo browser is an ideal environment to create and play around with your connected data until you get to the right kind of representation for optimal queriesas shown in figure the flexible schema of the graph database suits us well here in this browser you can retrieve your data in rows or as graph neo has its own query language to ease the creation and query capabilities of graphs cypher is highly expressive language that shares enough with sql to enhance the learning process of the language in the following sectionwe'll create our own data using cypher and insert it into neo then we can play around with the data
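If you'd rather send Cypher statements from Python than type them into the browser, a driver such as py2neo (which we'll also use in the case study later in this chapter) can do that. Below is a minimal connection sketch, assuming a local Neo4j instance on its default port and your own credentials; the py2neo 2.x syntax is shown, while py2neo 3 and later use graph_db.run() instead of graph_db.cypher.execute().

from py2neo import Graph, authenticate

# Assumed defaults: a local Neo4j instance on port 7474.
# Replace the user name and password with your own setup.
authenticate("localhost:7474", "neo4j", "your_password")
graph_db = Graph()  # connects to the local server on the default port

# Any Cypher statement can now be sent from Python.
print(graph_db.cypher.execute("MATCH (n) RETURN count(n) AS nodes"))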
the rise of graph databases figure neo interface with resolved query from the case study cyphera graph query language let' introduce cypher and its basic syntax for graph operations the idea of this section is to present enough about cypher to get us started using the neo browser at the end of this section you should be able to create your own connected data using cypher in the neo browser and run basic queries to retrieve the results of the query for more extensive introduction to cypher you can visit stable/cypher-query-lang html we'll start by drawing simple social graph accompanied by basic query to retrieve predefined pattern as an example in the next step we'll draw more complex graph that will allow us to use more complicated queries in cypher this will help us to get acquainted with cypher and move us down the path to bringing our use case into reality moreoverwe'll show how to create our own simulated connected data using cypher figure shows simple social graph of two nodesconnected by relationship of type "knowsthe nodes have both the properties "nameand "lastnamenowif we' like to find out the following pattern"who does paul know?we' query this using cypher to find pattern in cypherwe'll start with match clause in knows user user namepaul last namebeun namejelme last nameragnar figure an example of simple social graph with two users and one relationship
introducing neo ja graph database this query we'll start searching at the node user with the name property "paulnote how the node is enclosed within parenthesesas shown in the code snippet belowand the relationship is enclosed by square brackets relationships are named with colon (:prefixand the direction is described using arrows the placeholder will contain all the user nodes having the relationship of type "knowsas an inbound relationship with the return clause we can retrieve the results of the query match( :user name'paul)-[:knows]->( :userreturn name notice the close relationship of how we have formulated our question verbally and the way the graph database translates this into traversal in neo jthis impressive expressiveness is made possible by its graph query languagecypher to make the examples more interestinglet' assume that our data is represented by the graph in figure country namenew zealand has_been_in has_been_in is_friend_of loves hobby user user country nametraveling namemuhuba nameannelies namemongolia has_been_in likes country namecambodia food namesushi has_been_in likes knows user user namepaul last namebeun namejelme last nameragnar is_born_in country figure more complicated connected data example with several interconnected nodes of different types namesweden
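The same pattern idea extends naturally to this richer graph. For instance, once the data is loaded, finding everyone who has visited Cambodia is a single pattern. The following is a sketch: the labels, property names, and relationship types follow the figure, and the exact casing is a modeling choice.

// Who has been in Cambodia?
MATCH (visitor:User)-[:HAS_BEEN_IN]->(:Country {name: 'Cambodia'})
RETURN visitor.name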
the rise of graph databases we can insert the connected data in figure into neo by using cypher we can write cypher commands directly in the browser-based interface of neo jor alternatively through python driver (see databases to write an appropriate create statement in cypherfirst we should have good understanding of which data we' like to store as nodes and which as relationshipswhat their properties should beand whether labels would be useful the first decision is to decide which data should be regarded as nodes and which as relationships to provide semantic context for these nodes in figure we've chosen to represent the users and countries they have been in as nodes data that provides information about specific nodefor example name that' associated with nodecan be represented as property all data that provides context about two or more nodes will be considered as relationship nodes that share common featuresfor example cambodia and sweden are both countrieswill also be grouped through labels in figure this is already done in the following listing we demonstrate how the different objects could be encoded in cypher through one big create statement be aware that cypher is case sensitive listing cypher data creation statement create (user :user {name :'annelies'})(user :user {name :'paullastname'beun'})(user :user {name :'muhuba'})(user :user {name 'jelmelastname'ragnar'})(country :country name:'mongolia'})(country :country name:'cambodia'})(country :country name:'new zealand'})(country :country name:'sweden'})(food :food name:'sushi})(hobby :hobby name:'travelling'})(user )-[:has_been_in]->(country )(user )-[has_been_in]->(country )(user )-[has_been_in]->(country )(user )-[has_been_in]->(country )(user )-[is_mother_of]->(user )(user )-[knows]->(user )(user )-[is_friend_of]->(user )(user )-[likes]->food )(user )-[likes]->food )(user )-[is_born_in]->(country running this create statement in one go has the advantage that the success of this execution will ensure us that the graph database has been successfully created if an error existsthe graph won' be created in real scenarioone should also define indexes and constraints to ensure fast lookup and not search the entire database we haven' done this here because our simulated data set is small howeverthis can be easily done using cypher consult the
cypher documentation to find out more about indexes and constraints (neo com/docs/stable/cypherdoc-labels-constraints-and-indexes htmlnow that we've created our datawe can query it the following query will return all nodes and relationships in the databasefind all nodes (nand all their relationships [rmatch ( )-[ ]-(return , show all nodes and all relationships figure shows the database that we've created we can compare this graph with the graph we've envisioned on our whiteboard on our whiteboard we grouped nodes of people in label "userand nodes of countries in label "countryalthough the nodes in this figure aren' represented by their labelsthe labels are present in our database besides thatwe also miss node (hobbyand relationship of type "lovesthese can be easily added through merge statement that will create the node and relationship if they don' exist alreadymerge (user )-[loves]->hobby figure the graph drawn in figure now has been created in the neo web interface the nodes aren' represented by their labels but by their names we can infer from the graph that we're missing the label hobby with the name traveling the reason for this is because we have forgotten to include this node and its corresponding relationship in the create statement
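Spelled out in full, such a repair could look like the sketch below, which looks up the user, creates the hobby node if it doesn't exist yet, and then creates the missing relationship. We assume here that it's the user named Annelies who loves traveling; swap in the right name and casing for your own graph.

// Add the forgotten hobby node and the LOVES relationship in one statement.
MATCH (u:User {name: 'Annelies'})
MERGE (h:Hobby {name: 'traveling'})
MERGE (u)-[:LOVES]->(h)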
the rise of graph databases we can ask many questions here for examplequestion which countries has annelies visitedthe cypher code to create the answer (shown in figure is match( :user{name:'annelies'}[:has_been_in]-( :countryreturn namec name question who has been wherethe cypher code (explained in figure is match ()-[rhas_been_in]->(return limit placeholder that can be used later as reference start at node user with name property "anneliesthe node user has an outgoing relationship of type "has_been_in(note that we've chosen not to include placeholder in this case the end node is country match( :user{name:'annelies'}[:has_been_in]-( :countryreturn namec name the results you want to retrieve must be defined in the return clause there are two ways to represent your results in neo jin graph or as rows figure results of question which countries has annelies visitedwe can see the three countries annelies has been inusing the row presentation of neo the traversal took only milliseconds
this query is asking for all nodes with an outgoing relationship with the type "has_been_inmatch ()-[ :has_been_in]->(return limit the end nodes are all nodes with an incoming relationship of the type "has_been_infigure who has been wherequery buildup explained when we run this query we get the answer shown in figure figure results of question who has been wherethe results of our traversal are now shown in the graph representation of neo now we can see that paulin addition to annelieshas also been to cambodia in question we have chosen not to specify start node thereforecypher will go to all nodes present in the database to find those with an outgoing relationship of type "has_been_inone should avoid not specifying starting node sincedepending on the size of your databasesuch query could take long time to converge playing around with the data to obtain the right graph database also means lot of data deletion cypher has delete statement suitable for deleting small amounts of
the rise of graph databases data the following query demonstrates how to delete all nodes and relationships in the databasematch(noptional match ( )-[ ]-(delete , now that we're acquainted with connected data and have basic knowledge of how it' managed in graph databasewe can go step further and look into reallive applications of connected data social graphfor examplecan be used to find clusters of tightly connected nodes inside the graph communities people in cluster who don' know each other can then be introduced to each other the concept of searching for tightly connected nodesnodes that have significant amount of features in commonis widely used concept in the next section we'll use this ideawhere the aim will be to find clusters inside an ingredient network connected data examplea recipe recommendation engine one of the most popular use cases for graph databases is the development of recommender engines recommender engines became widely adopted through their promise to create relevant content living in an era with such abundance in data can be overwhelming to many consumers enterprises saw the clear need to be inventive in how to attract customers through personalized contentthereby using the strengths of recommender engines in our case study we'll recommend recipes based on the dish preferences of users and network of ingredients during data preparation we'll use elasticsearch to quicken the process and allow for more focus on the actual graph database its main purpose here will be to replace the ingredients list of the "dirtydownloaded data with the ingredients from our own "cleanlist if you skipped ahead to this it might be good to at least read appendix on installing elasticsearch so you have it running on your computer you can always download the index we'll use from the manning download page for this and paste it into your local elasticsearch data directory if you don' feel like bothering with the case study you can download the following information from the manning website for this three py code files and their ipynb counterparts data preparation part --will upload the data to elasticsearch (alternatively you can paste the downloadable index in your local elasticsearch data folderdata preparation part --will move the data from elasticsearch to neo exploration recommender system
three data files ingredients txt)--self-compiled ingredients file recipes json)--contains all the ingredients elasticsearch index zip)--contains the "gastronomicalelasticsearch index you can use to skip data preparation part now that we have everything we needlet' look at the research goal and the steps we need to take to achieve it step setting the research goal let' look at what' to come when we follow the data science process (figure our primary goal is to set up recommender engine that would help users of cooking website find the right recipe user gets to like several recipes and we'll base data science process primary goalrecommend dishes people will like define research goal setting the research goal create project charter manually compiled ingredient list data retrieval internal data retrieving data manually input user likes data ownership external data open to anyone open recipes database make use of elasticsearch default text data treatment for recipe data data cleansing data preparation recipe data is transformed into searchable index data transformation recipes and ingredients are merged by searching recipes databaseand ingredients are used as keywords merging/joining data sets combining data find the most used ingredients node search nongraphical techniques data exploration find the recipes with the greatest number of ingredients recipes are suggested based on number of ingredients in common with recipes the user likes recommender model data modeling graph view of user recipe preferences presenting data presentation and automation figure data science process overview applied to connected data recommender model
the rise of graph databases our dish recommendations on the ingredientsoverlap in recipes network this is simple and intuitive approachyet already yields fairly accurate results let' look at the three data elements we require step data retrieval for this exercise we require three types of datarecipes and their respective ingredients list of distinct ingredients we like to model at least one user and his preference for certain dishes as alwayswe can divide this into internally available or created data and externally acquired data internal data--we don' have any user preferences or ingredients lying aroundbut these are the smallest part of our data and easily created few manually input preferences should be enough to create recommendation the user gets more interesting and accurate results the more feedback he gives we'll input user preferences later in the case study list of ingredients can be manually compiled and will remain relevant for years to comeso feel free to use the list in the downloadable material for any purposecommercially or otherwise external data--recipes are different matter thousands of ingredients existbut these can be combined into millions of dishes we are in luckhoweverbecause pretty big list is freely available at many thanks to fictive kin for this valuable data set with more than hundred thousand recipes sure there are duplicates in herebut they won' hurt our use case that badly we now have two data files at our disposala list of ingredients (ingredients txtand more than hundred thousand recipes in the recipes json file sample of the ingredients list can be seen in the following listing listing ingredients list text file sample ditalini egg noodles farfalle fettuccine fusilli lasagna linguine macaroni orzo the "openrecipesjson file contains more than hundred thousand recipes with multiple properties such as publish datesource locationpreparation timedescription
and so on we're only interested in the name and ingredients list sample recipe is shown in the following listing listing sample json recipe "_id"$oid" cc cc db }"name"drop biscuits and sausage gravy""ingredients"biscuits\ cups all-purpose flour\ tablespoons baking powder\ / teaspoon salt\ stick ( / cupcold buttercut into pieces\ cup butermilk\ sausage gravy\ pound breakfast sausagehot or mild\ / cup all-purpose flour\ cups whole milk\ / teaspoon seasoned salt\ teaspoons black peppermore to taste""url""image"bisgrav jpg""ts"$date }"cooktime"pt ""source"thepioneerwoman""recipeyield" ""datepublished"""preptime"pt ""description"late saturday afternoonafter marlboro man had returned home with the soccer-playing girlsand had returned home with the because we're dealing with text data herethe problem is two-foldfirstpreparing the textual data as described in the text mining thenonce the data is thoroughly cleansedit can be used to produce recipe recommendations based on network of ingredients this doesn' focus on the text data preparation because this is described elsewhereso we'll allow ourselves the luxury of shortcut during the upcoming data preparation step data preparation we now have two data files at our disposaland we need to combine them into one graph database the "dirtyrecipes data poses problem that we can address using our clean ingredients list and the use of the search engine and nosql database elasticsearch we already relied on elasticsearch in previous and now it will clean the recipe data for us implicitly when it creates an index we can then search this data to link each ingredient to every recipe in which it occurs we could clean the text data using pure pythonas we did in the text mining but this shows it' good to be aware of the strong points of each nosql databasedon' pin yourself to single technologybut use them together to the benefit of the project let' start by entering our recipe data into elasticsearch if you don' understand what' happeningplease check the case study of again and it should become clear make sure to turn on your local elasticsearch instance and activate python environment with the elasticsearch module installed before running the code
the rise of graph databases snippet in the following listing it' recommended not to run this code "as isin ipython (or jupyterbecause it prints every recipe key to the screen and your browser can handle only so much output either turn off the print statements or run in another python ide the code in this snippet can be found in "data preparation part pylisting importing recipe data into elasticsearch import modules from elasticsearch import elasticsearch import json client elasticsearch (indexname "gastronomicaldoctype 'recipeselasticsearch client used to communicate with database client indices create(index=indexnamecreate index file_name ' :/users/gebruiker/downloads/recipes jsonrecipemapping 'properties''name'{'type''string'}'ingredients'{'type''string'location of json recipe filechange this to match your own setupmapping for elasticsearch "recipedoctype client indices put_mapping(index=indexname,doc_type=doctype,body=recipemapping with open(file_nameencoding="utf "as data_filerecipedata json load(data_filefor recipe in recipedataprint recipe keys(print recipe['_id'keys(client index(index=indexnamedoc_type=doctype,id recipe['_id']['$oid']body={"name"recipe['name']"ingredients":recipe['ingredients']}load json recipe file into memory another way to do this would berecipedata [with open(file_nameas ffor line in frecipedata append(json loads(line)index recipes only name and ingredients are important for our use case in case timeout problem occurs it' possible to increase the timeout delay by specifyingfor exampletimeout= as an argument if everything went wellwe now have an elasticsearch index by the name "gastronomicalpopulated by thousands of recipes notice we allowed for duplicates of the same recipe by not assigning the name of the recipe to be the document key iffor
connected data examplea recipe recommendation engine instancea recipe is called "lasagnathen this can be salmon lasagnabeef lasagnachicken lasagnaor any other type no single recipe is selected as the prototype lasagnathey are all uploaded to elasticsearch under the same name"lasagnathis is choiceso feel free to decide otherwise it will have significant impactas we'll see later on the door is now open for systematic upload to our local graph database make sure your local graph database instance is turned on when applying the following code our username for this database is the default neo and the password is neo jamake sure to adjust this for your local setup for this we'll also require neo -specific python library called py neo if you haven' alreadynow would be the time to install it to your virtual environment using pip install py neo or conda install py neo when using anaconda againbe advised this code will crash your browser when run directly in ipython or jupiter the code in this listing can be found in "data preparation part pylisting using the elasticsearch index to fill the graph database from elasticsearch import elasticsearch from py neo import graphauthenticatenoderelationship client elasticsearch (indexname "gastronomicaldoctype 'recipesgraph database entity elasticsearch client used to communicate with database authenticate("localhost: ""user""password"graph_db graph("filename ' :/users/gebruiker/downloads/ingredients txtingredients =[with open(filenameas ffor line in fingredients append(line strip()authenticate with your own username and password ingredients text file gets loaded into memory strip because of the / you get otherwise from reading the txt print ingredients ingredientnumber grandtotal for ingredient in ingredientsimport modules loop through ingredients and fetch elasticsearch result tryingredientnode graph_db merge_one("ingredient","name",ingredientexceptcontinue ingredientnumber += searchbody "size "query""match_phrase""ingredients":create node in graph database for current ingredient phrase matching usedas some ingredients consist of multiple words
the rise of graph databases "query":ingredientresult client search(index=indexname,doc_type=doctype,body=searchbodyprint ingredient print ingredientnumber print "totalstr(result['hits']['total']grandtotal grandtotal result['hits']['total'print "grand totalstr(grandtotalfor recipe in result['hits']['hits']loop through recipes found for this particular ingredient trycreate relationship between this recipe and ingredient recipenode graph_db merge_one("recipe","name",recipe['_source']['name']nodesrelationship relationship(recipenode"contains"ingredientnodegraph_db create_unique(nodesrelationshipprint "addedrecipe['_source']['name'contains ingredient exceptcontinue create node for each recipe that is not already in graph database print "*************************************greatwe're now the proud owner of graph database filled with recipesit' time for connected data exploration step data exploration now that we have our data where we want itwe can manually explore it using the neo interface at nothing stops you from running your cypher code in this environmentbut cypher can also be executed via the py neo library one interesting question we can pose is which ingredients are occurring the most over all recipeswhat are we most likely to get into our digestive system if we randomly selected and ate dishes from this databasefrom py neo import graphauthenticatenoderelationship authenticate("localhost: ""user""password"graph_db graph("match (rec:recipe)-[ :contains]->(ing:ingredientwith ingcount(ras num return ing name as namenum order by num desc limit ;"