The query is created in Cypher and says: for all the recipes and their ingredients, count the number of relations per ingredient and return the ten ingredients with the most relations and their respective counts. The results are shown in the figure. Most of the top list shouldn't come as a surprise. With salt proudly at the top of our list, we shouldn't be shocked to find vascular diseases as the number one killer in most western countries. Another interesting question that comes to mind now is, from a different perspective: which recipes require the most ingredients?

Figure: Top ingredients that occur in the most recipes.

from py2neo import Graph, Node, Relationship
graph_db = Graph()
graph_db.cypher.execute("MATCH (rec:Recipe)-[r:Contains]->(ing:Ingredient) WITH rec, count(r) AS num RETURN rec.name AS name, num ORDER BY num DESC LIMIT 10;")

The query is almost the same as before, but instead of returning the ingredients, we demand the recipes. The result is shown in the figure.

Figure: Top dishes that can be created with the greatest diversity of ingredients.

Now this might be a surprising sight: spaghetti bolognese hardly sounds like the type of dish that would require that many ingredients. Let's take a closer look at the ingredients listed for spaghetti bolognese.

from py2neo import Graph, Node, Relationship
graph_db = Graph()
graph_db.cypher.execute("MATCH (rec:Recipe {name:'spaghetti bolognese'})-[r:Contains]->(ing:Ingredient) RETURN rec.name, ing.name;")
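The execute() call doesn't only feed the Neo4j browser; it also returns a result set you can loop over in Python. The following is a minimal sketch (not a listing from the book) of the ingredient-counting query described above, assuming the same py2neo 2.x connection object:

# Hypothetical sketch: run the "top ten ingredients" query and print the result rows.
from py2neo import Graph

graph_db = Graph()
results = graph_db.cypher.execute(
    "MATCH (rec:Recipe)-[r:Contains]->(ing:Ingredient) "
    "WITH ing, count(r) AS num "
    "RETURN ing.name AS name, num ORDER BY num DESC LIMIT 10;")
for row in results:
    print row    # each row holds an ingredient name and its relation count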
Figure: Spaghetti bolognese and its possible ingredients.

The Cypher query merely lists the ingredients linked to spaghetti bolognese; the figure shows the result in the Neo4j web interface. Let's remind ourselves of the remark we made when indexing the data in Elasticsearch: a quick Elasticsearch search on spaghetti bolognese shows us it occurs multiple times, and all these instances were used to link ingredients to spaghetti bolognese as a recipe. We don't have to look at spaghetti bolognese as a single recipe but more as a collection of the ways people create their own "spaghetti bolognese". This makes for an interesting way to look at this data: people can create their version of the dish with ketchup, red wine, and chicken, or they might even add soup. With "spaghetti bolognese" as a dish being so open to interpretation, no wonder so many people love it.

The spaghetti bolognese story was an interesting distraction, but not what we came for. It's time to recommend dishes to our gourmand "Ragnar".

Step 5: Data modeling
With our knowledge of the data slightly enriched, we get to the goal of this exercise: the recommendations.
Connected data example: a recipe recommendation engine

For this we introduce a user we call "Ragnar," who likes a couple of dishes. This new information needs to be absorbed by our graph database before we can expect it to suggest new dishes. Therefore, let's now create Ragnar's user node with a few recipe preferences.

Listing: Creating a user node who likes certain recipes in the Neo4j graph database

from py2neo import Graph, Node, Relationship         # Import modules
graph_db = Graph()                                   # Make a graph database connection object

# Create a new user called "Ragnar"
userref = graph_db.merge_one("User", "name", "ragnar")

# Ragnar likes spaghetti bolognese: find the recipe by name and create
# a "likes" relationship between Ragnar and the spaghetti recipe
reciperef = graph_db.find_one("Recipe", property_key="name", property_value="spaghetti bolognese")
nodesrelationship = Relationship(userref, "likes", reciperef)
graph_db.create_unique(nodesrelationship)            # Commit his like to the database

# Repeat the same process as in the lines above, but for several other dishes
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="roasted tomato soup with tiny meatballs and rice")))
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="moussaka")))
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="chipolata & spring onion frittata")))
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="meatballs in tomato sauce")))
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="macaroni cheese")))
graph_db.create_unique(Relationship(userref, "likes", graph_db.find_one("Recipe", property_key="name", property_value="peppered steak")))

In this listing our food connoisseur Ragnar is added to the database along with his preference for a few dishes. If we select Ragnar in the Neo4j interface, we get the figure below. The Cypher query for this is

MATCH (u:User)-[l:likes]->(rec:Recipe) RETURN u, rec LIMIT 10;

No surprises here: many people like spaghetti bolognese, and so does our Scandinavian gastronomist Ragnar.
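The same preferences can also be written with a pure Cypher statement instead of the py2neo helper calls. The snippet below is a hedged sketch, not part of the book's listing; it assumes the User and Recipe labels and the likes relationship type used above, and uses the CREATE UNIQUE construct available in the Neo4j 2.x Cypher dialect used throughout this chapter:

// Hypothetical Cypher-only alternative for one of Ragnar's likes
MERGE (u:User {name:'ragnar'})
WITH u
MATCH (rec:Recipe {name:'spaghetti bolognese'})
CREATE UNIQUE (u)-[:likes]->(rec);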
Figure: The user Ragnar likes several dishes.

For the simple recommendation engine we'd like to build, all that's left for us to do is ask the graph database to give us the nearest dishes in terms of ingredients. Again, this is a basic approach to recommender systems because it doesn't take into account factors such as:

- Dislike of an ingredient or dish.
- The amount of like or dislike. A graded score instead of a binary "like or don't like" could make a difference (see the sketch after this list).
- The amount of the ingredient that is present in the dish.
- The threshold for a certain ingredient to become apparent in its taste. Certain ingredients, such as spicy pepper, will have a bigger impact at a smaller dose than other ingredients would.
- Food allergies. While this will be implicitly modeled in the like or dislike of dishes with certain ingredients, a food allergy can be so important that a single mistake can be fatal. Avoidance of allergens should overrule the entire recommendation system.
- Many more things for you to ponder.
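As a small illustration of the second point, a graded preference could be stored as a property on the likes relationship and used to weight the recommendations. This is a minimal sketch and not part of the book's case study; the score property and the weighting logic are assumptions:

// Hypothetical: likes relationships carry a score property (for example, a grade per dish)
MATCH (usr:User {name:'ragnar'})-[l:likes]->(rec1:Recipe)-[:Contains]->(ing:Ingredient)
MATCH (rec2:Recipe)-[:Contains]->(ing)
WHERE rec1 <> rec2
RETURN rec2.name AS name, sum(l.score) AS weight
ORDER BY weight DESC LIMIT 10;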
It might come as a bit of a surprise, but a single Cypher command will suffice:

from py2neo import Graph, Node, Relationship
graph_db = Graph()
graph_db.cypher.execute("MATCH (usr:User {name:'ragnar'})-[l:likes]->(rec1:Recipe), (rec1)-[r1:Contains]->(ing1:Ingredient) WITH ing1, rec1 MATCH (rec2:Recipe)-[r2:Contains]->(ing1:Ingredient) WHERE rec1 <> rec2 RETURN rec2.name, count(ing1) AS ingcount ORDER BY ingcount DESC LIMIT 10;")

First, all recipes that Ragnar likes are collected; then their ingredients are used to fetch all the other dishes that share them. The ingredients are then counted for each connected dish and ranked from many common ingredients to few. Only the top dishes are kept; this results in the table of the figure below.

Figure: Output of the recipe recommendation: top dishes the user may love.

From the figure we can deduce it's time for Ragnar to try spaghetti and meatballs, a dish made immortally famous by the Disney animation Lady and the Tramp. This does sound like a great recommendation for somebody so fond of dishes containing pasta and meatballs, but as we can see by the ingredient count, many more ingredients back up this suggestion. To give us a small hint of what's behind it, we can show the preferred dishes, the top recommendations, and a few of their overlapping ingredients in a single summary graph image.
Step 6: Presentation
The Neo4j web interface allows us to run the model and retrieve a nice-looking graph that summarizes part of the logic behind the recommendations: it shows how recommended dishes are linked to preferred dishes via the ingredients. This is shown in the figure and is the final output for our case study.

Figure: Interconnectedness of user-preferred dishes and top recommended dishes via a sub-selection of their overlapping ingredients.

With this beautiful graph image we can conclude our chapter in the knowledge that Ragnar has a few tasty dishes to look forward to. Don't forget to try the recommendation system for yourself by inserting your own preferences.

Summary
In this chapter you learned:

- Graph databases are especially useful when encountering data in which relationships between entities are as important as the entities themselves. Compared to the other NoSQL databases, they can handle the biggest complexity but the least data.
- Graph data structures consist of two main components (a minimal example follows this summary):
  - Nodes: the entities themselves. In our case study these are recipes and ingredients.
  - Edges: the relationships between entities. Relationships, like nodes, can be of all kinds of types (for example "contains," "likes," "has been to") and can have their own specific properties, such as names, weights, or other measures.
- We looked at Neo4j, currently the most popular graph database. For instructions on how to install it, you can consult the appendix. We looked into adding data to Neo4j, querying it using Cypher, and how to access its web interface.
- Cypher is the Neo4j database-specific query language, and we looked at a few examples. We also used it in the case study as part of our dishes recommender system.
- In the case study we made use of Elasticsearch to clean a huge recipe data dump. We then converted this data to a Neo4j database with recipes and ingredients.
- The goal of the case study was to recommend dishes to people based on previously shown interest in other dishes. For this we made use of the connectedness of recipes via their ingredients.
- The py2neo library enabled us to communicate with a Neo4j server from Python.
- It turns out the graph database is not only useful for implementing a recommendation system but also for data exploration. One of the things we found out is the diversity (ingredient-wise) of the spaghetti bolognese recipes out there.
- We used the Neo4j web interface to create a visual representation of how we get from dish preferences to dish recommendations via the ingredient nodes.
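To make the nodes-and-edges summary concrete, the sketch below creates two nodes and one typed relationship that carries a property. It is a minimal illustration only, assuming a local Neo4j server and the py2neo 2.x API used in this chapter; the amount property is a made-up example:

from py2neo import Graph, Node, Relationship

graph_db = Graph()   # connects to a local Neo4j instance by default

# Nodes: the entities themselves
recipe = Node("Recipe", name="spaghetti bolognese")
ingredient = Node("Ingredient", name="salt")

# Edge: a typed relationship that can carry its own properties
contains = Relationship(recipe, "Contains", ingredient, amount="a pinch")

graph_db.create(contains)   # creating the relationship also creates both nodes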
Text mining and text analytics

This chapter covers:
- Understanding the importance of text mining
- Introducing the most important concepts in text mining
- Working through a text mining project

Most of the human-recorded information in the world is in the form of written text. We all learn to read and write from infancy so we can express ourselves through writing and learn what others know, think, and feel. We use this skill all the time when reading or writing an email, a blog, text messages, or this book, so it's no wonder written language comes naturally to most of us. Businesses are convinced that much value can be found in the texts that people produce, and rightly so, because they contain information on what those people like and dislike, what they know or would like to know, crave and desire, their current health or mood, and so much more. Many of these things can be relevant for companies or researchers, but no single person can read and interpret this tsunami of written material by themselves. Once again, we need to turn to computers to do the job for us. Sadly, however, natural language doesn't come as "natural" to computers as it does to humans. Deriving meaning and filtering out the unimportant from
the important is still something a human is better at than any machine. Luckily, data scientists can apply specific text mining and text analytics techniques to find the relevant information in heaps of text that would otherwise take them centuries to read themselves.

Text mining or text analytics is a discipline that combines language science and computer science with statistical and machine learning techniques. Text mining is used for analyzing texts and turning them into a more structured form. Then it takes this structured form and tries to derive insights from it. When analyzing crime from police reports, for example, text mining helps you recognize persons, places, and types of crimes from the reports. Then this new structure is used to gain insight into the evolution of crimes, as shown in the figure.

Figure: In text analytics the first challenge is (usually) to structure the input text; then it can be thoroughly analyzed. The example shows free-text police reports being turned into a structured table of crime, person, place, victim, and date, which is then used to analyze and visualize the evolution of theft in Chelsea Market.
While language isn't limited to natural language, the focus of this chapter will be on natural language processing (NLP). Examples of non-natural languages would be machine logs, mathematics, and Morse code. Technically, even Esperanto, Klingon, and the Dragon language aren't in the field of natural languages because they were invented deliberately instead of evolving over time; they didn't come "natural" to us. These last languages are nevertheless fit for natural communication (speech, writing); they have grammar and vocabulary as all natural languages do, and the same text mining techniques could apply to them.

Text mining in the real world
In your day-to-day life you've already come across text mining and natural language applications. Autocomplete and spelling correctors are constantly analyzing the text you type before sending an email or text message. When Facebook autocompletes your status with the name of a friend, it does this with the help of a technique called named entity recognition, although this would be only one component of their repertoire. The goal isn't only to detect that you're typing a noun, but also to guess that you're referring to a person and to recognize who it might be. Another example of named entity recognition is shown in the figure: Google knows Chelsea is a football club but responds differently when asked for a person.

Google uses many types of text mining when presenting you with the results of a query. What pops up in your own mind when someone says "Chelsea"? Chelsea could be many things: a person, a soccer club, a neighborhood in Manhattan or London, a food market, a flower show, and so on. Google knows this and returns different answers to the question "Who is Chelsea?" versus "What is Chelsea?" To provide the most relevant answer, Google must do (among other things) all of the following:

- Preprocess all the documents it collects for named entities
- Perform language identification
- Detect what type of entity you're referring to
- Match a query to a result
- Detect the type of content to return (PDF, adult-sensitive)

This example shows that text mining isn't only about the direct meaning of the text itself but also involves meta-attributes such as language and document type. Google uses text mining for much more than answering queries. Next to shielding its Gmail users from spam, it also divides emails into different categories such as social, updates, and forums, as shown in the figure. It's possible to go much further than answering simple questions when you combine text with other logic and mathematics.
Figure: The different answers to the queries "Who is Chelsea?" and "What is Chelsea?" imply that Google uses text mining techniques to answer these queries.

Figure: Emails can be automatically divided by category based on content and origin.
This allows for the creation of automatic reasoning engines driven by natural language queries. The figure shows how Wolfram Alpha, a computational knowledge engine, uses text mining and automatic reasoning to answer the question "Is the USA population bigger than China?"

Figure: The Wolfram Alpha engine uses text mining and logical reasoning to answer a question.

If this isn't impressive enough, IBM Watson astonished many in 2011 when the machine was set up against two human players in a game of Jeopardy! Jeopardy! is an American quiz show where people receive the answer to a question and points are scored for guessing the correct question for that answer. See the figure. It's safe to say this round goes to artificial intelligence. IBM Watson is a cognitive engine that can interpret natural language and answer questions based on an extensive knowledge base.
Figure: IBM Watson wins Jeopardy! against human players.

Text mining has many applications, including, but not limited to, the following:

- Entity identification
- Plagiarism detection
- Topic identification
- Text clustering
- Translation
- Automatic text summarization
- Fraud detection
- Spam filtering
- Sentiment analysis

Text mining is useful, but is it difficult? Sorry to disappoint: yes, it is. When looking at the examples of Wolfram Alpha and IBM Watson, you might have gotten the impression that text mining is easy. Sadly, no. In reality text mining is a complicated task, and even many seemingly simple things can't be done satisfactorily. For instance, take the task of guessing the correct address. The figure shows how difficult it is to return the exact result with certitude and how Google Maps prompts you for more information when looking for "Springfield". In this case a human wouldn't have done any better without additional context, but this ambiguity is one of the many problems you face in a text mining application.
Figure: Google Maps asks you for more context due to the ambiguity of the query "Springfield".

Another problem is spelling mistakes and different (correct) spelling forms of a word. Take the following three references to New York: "NY," "Neww York," and "New York." For a human it's easy to see they all refer to the city of New York. Because of the way our brain interprets text, understanding text with spelling mistakes comes naturally to us; people may not even notice them. But for a computer these are unrelated strings unless we use algorithms to tell it that they're referring to the same entity. Related problems are synonyms and the use of pronouns. Try assigning the right person to the pronoun "she" in the next sentences: "John gave flowers to Marleen's parents when he met her parents for the first time. She was so happy with this gesture." Easy enough, right? Not for a computer.

We can solve many similar problems with ease, but they often prove hard for a machine. We can train algorithms that work well on a specific problem in a well-defined scope, but more general algorithms that work in all cases are another beast altogether. For instance, we can teach a computer to recognize and retrieve US account numbers from text, but this doesn't generalize well to account numbers from other countries. Language algorithms are also sensitive to the context the language is used in, even if the language itself remains the same. English models won't work for Arabic and vice versa, but even if we keep to English, an algorithm trained for Twitter data isn't likely to perform well on legal texts. Let's keep this in mind when we move on to the case study: there's no perfect, one-size-fits-all solution in text mining.
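As a tiny illustration of the New York example above, one naive way to map spelling variants onto a known entity is fuzzy string matching. The sketch below is not part of the chapter's case study; it uses Python's standard difflib module, and the entity list and cutoff value are arbitrary choices for illustration:

# Minimal sketch: map misspelled mentions to a known entity list with fuzzy matching.
import difflib

known_entities = ["new york", "london", "orlando"]

def normalize_mention(mention, entities=known_entities, cutoff=0.6):
    # Return the closest known entity, or the original string if nothing is close enough
    matches = difflib.get_close_matches(mention.lower(), entities, n=1, cutoff=cutoff)
    return matches[0] if matches else mention

print normalize_mention("Neww York")   # -> 'new york'
print normalize_mention("NY")          # too little overlap, so it stays unresolved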
Text mining techniques
During our upcoming case study we'll tackle the problem of text classification: automatically classifying uncategorized texts into specific categories. To get from raw textual data to our final destination we'll need a few data mining techniques that require background information for us to use them effectively. The first important concept in text mining is the bag of words.

Bag of words
To build our classification model we'll go with the bag of words approach. Bag of words is the simplest way of structuring textual data: every document is turned into a word vector. If a certain word is present in the vector it's labeled "True"; the others are labeled "False". The figure shows a simplified example of this, in case there are only two documents: one about the television show Game of Thrones and one about data science. The two word vectors together form the document-term matrix. The document-term matrix holds a column for every term and a row for every document. The values are yours to decide upon; in this chapter we'll use a binary: term is present? True or False.

Figure: Text is transformed into a bag of words by labeling each word (term) with "True" if it is present in the document and "False" if not. The two example documents, "Game of Thrones is a great television series but the books are better" and "Doing data science is more fun than watching television", become word vectors labeled 'gameofthrones' and 'datascience' respectively.

The example does give you an idea of the structured data we'll need to start text analysis, but it's severely simplified: not a single word was filtered out and no stemming (we'll go into this later) was applied. A big corpus can have thousands of unique words. If all have to be labeled like this without any filtering, it's easy to see we might end up with a large volume of data. A binary coded bag of words as shown in the figure is but one way to structure the data; other techniques exist.
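The following is a minimal sketch of how the binary document-term structure described above could be built in plain Python. It is an illustration only, not the book's implementation; the variable names are mine, and no filtering or stemming is applied yet:

# Minimal sketch: turn two example documents into binary bag-of-words vectors.
documents = [
    ("Game of Thrones is a great television series but the books are better", "gameofthrones"),
    ("Doing data science is more fun than watching television", "datascience"),
]

# Vocabulary: every unique lowercased term across all documents
vocabulary = set()
for text, label in documents:
    vocabulary.update(text.lower().split())

# Document-term matrix: one dict (word vector) per document, True if the term is present
labeled_vectors = []
for text, label in documents:
    terms = set(text.lower().split())
    vector = {term: (term in terms) for term in vocabulary}
    labeled_vectors.append((vector, label))

print labeled_vectors[0][1]                   # 'gameofthrones'
print labeled_vectors[0][0]["television"]     # True: the term occurs in the first document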
Term frequency-inverse document frequency (TF-IDF)
A well-known formula to fill up the document-term matrix is TF-IDF, or term frequency multiplied by inverse document frequency. A binary bag of words assigns True or False (the term is there or not), while simple frequencies count the number of times the term occurred. TF-IDF is a bit more complicated and takes into account how many times a term occurred in the document (TF). TF can be a simple term count, a binary count (True or False), or a logarithmically scaled term count; it depends on what works best for you. In case TF is the term frequency, the formula of TF is the following:

    tf(t, d) = f(t, d)

TF is the frequency (f) of the term (t) in the document (d). But TF-IDF also takes into account all the other documents, because of the inverse document frequency. IDF gives an idea of how common the word is in the entire corpus: the higher the document frequency, the more common the word, and more common words are less informative. For example, the words "a" or "the" aren't likely to provide specific information on a text. The formula of IDF with logarithmic scaling is the most commonly used form of IDF:

    idf(t, D) = log( N / |{d ∈ D : t ∈ d}| )

with N being the total number of documents in the corpus, and |{d ∈ D : t ∈ d}| being the number of documents (d) in which the term (t) appears. The TF-IDF score says this about a term: how important is this word to distinguish this document from the others in the corpus? The formula of TF-IDF is thus

    tfidf(t, d, D) = f(t, d) × log( N / |{d ∈ D : t ∈ d}| )

We won't use TF-IDF, but when setting your next steps in text mining, this should be one of the first things you'll encounter. TF-IDF is also what was used by Elasticsearch behind the scenes in an earlier chapter. That's a good way to go if you want to use TF-IDF for text analytics: leave the text mining to specialized software such as Solr or Elasticsearch and take the document-term matrix for text analytics from there.

Before getting to the actual bag of words, many other data manipulation steps take place:

- Tokenization: the text is cut into pieces called "tokens" or "terms". These tokens are the most basic unit of information you'll use for your model. The terms are often words, but this isn't a necessity; entire sentences can be used for analysis. We'll use unigrams: terms consisting of one word. Often, however, it's useful to include bigrams (two words per token) or trigrams (three words per token) to capture extra meaning and increase the performance of your models.
This does come at a cost, though, because you're building bigger term vectors by including bigrams and/or trigrams in the equation.

- Stop word filtering: every language comes with words that have little value in text analytics because they're used so often. NLTK comes with a short list of English stop words we can filter. If the text is tokenized into words, it often makes sense to rid the word vector of these low-information stop words.
- Lowercasing: some words have capital letters because they appear at the beginning of a sentence, others because they're proper nouns or adjectives. We gain no added value making that distinction in our term matrix, so all terms will be set to lowercase.

Another data preparation technique is stemming. This one requires more elaboration.

Stemming and lemmatization
Stemming is the process of bringing words back to their root form; this way you end up with less variance in the data. This makes sense if words have similar meanings but are written differently because, for example, one is in its plural form. Stemming attempts to unify by cutting off parts of the word; for example, "planes" and "plane" both become "plane". Another technique, called lemmatization, has the same goal but does so in a more grammatically sensitive way. For example, while both stemming and lemmatization would reduce "cars" to "car," lemmatization can also bring conjugated verbs back to their unconjugated forms, such as "are" to "be". Which one you use depends on your case, and lemmatization profits heavily from POS tagging (part of speech tagging). POS tagging is the process of attributing a grammatical label to every part of a sentence. You probably did this manually in school as a language exercise. Take the sentence "Game of Thrones is a television series." If we apply POS tagging on it we get

({"game":"NN"}, {"of":"IN"}, {"thrones":"NNS"}, {"is":"VBZ"}, {"a":"DT"}, {"television":"NN"}, {"series":"NN"})

NN is a noun, IN is a preposition, NNS is a noun in its plural form, VBZ is a third-person singular verb, and DT is a determiner. The table has the full list.

Table: List of all POS tags

Tag   Meaning
CC    Coordinating conjunction
CD    Cardinal number
DT    Determiner
EX    Existential there
FW    Foreign word
IN    Preposition or subordinating conjunction
JJ    Adjective
JJR   Adjective, comparative
JJS   Adjective, superlative
LS    List item marker
MD    Modal
NN    Noun, singular or mass
Table: List of all POS tags (continued)

Tag    Meaning
NNS    Noun, plural
NNP    Proper noun, singular
NNPS   Proper noun, plural
PDT    Predeterminer
POS    Possessive ending
PRP    Personal pronoun
PRP$   Possessive pronoun
RB     Adverb
RBR    Adverb, comparative
RBS    Adverb, superlative
RP     Particle
SYM    Symbol
UH     Interjection
VB     Verb, base form
VBD    Verb, past tense
VBG    Verb, gerund or present participle
VBN    Verb, past participle
VBP    Verb, non-3rd person singular present
VBZ    Verb, 3rd person singular present
WDT    Wh-determiner
WP     Wh-pronoun
WP$    Possessive wh-pronoun
WRB    Wh-adverb

POS tagging is a use case of sentence tokenization rather than word tokenization. After the POS tagging is complete you can still proceed to word tokenization, but a POS tagger requires whole sentences. Combining POS tagging and lemmatization is likely to give cleaner data than using only a stemmer. For the sake of simplicity we'll stick to stemming in the case study, but consider this an opportunity to elaborate on the exercise.

We now know the most important things we'll use to do the data cleansing and manipulation (text mining) for our text analytics; let's add the decision tree classifier to our repertoire.

Decision tree classifier
The data analysis part of our case study will be kept simple as well: we'll test a naive Bayes classifier and a decision tree classifier. As seen in an earlier chapter, the naive Bayes classifier is called that because it considers each input variable to be independent of all the others, which is naive, especially in text mining. Take the simple examples of "data science," "data analysis," or "Game of Thrones." If we cut our data into unigrams we get the following separate variables (if we ignore stemming and such): "data," "science," "analysis," "game," "of," and "thrones." Obviously links will be lost. This can, in turn, be overcome by creating bigrams (data science, data analysis) and trigrams (game of thrones). The decision tree classifier, however, doesn't consider the variables to be independent of one another and actively creates interaction variables and buckets. An interaction
variable is a variable that combines other variables. For instance, "data" and "science" might be good predictors in their own right, but probably the two of them co-occurring in the same text has its own value. A bucket is somewhat the opposite: instead of combining two variables, a variable is split into multiple new ones. This makes sense for numerical variables. The figure shows what a decision tree might look like and where you can find interactions and bucketing.

Figure: A fictitious decision tree model: a car insurance decision tree giving the probability of an insuree crashing the car within a year. The gender split followed by a car color split is an interaction of "male" and "red car"; age has been split into buckets. A decision tree automatically creates buckets and supposes interactions between input variables.

Whereas naive Bayes supposes independence of all the input variables, a decision tree is built upon the assumption of interdependence. But how does it build this structure? A decision tree has a few possible criteria it can use to split into branches and decide which variables are more important (are closer to the root of the tree) than others. The one we'll use in the NLTK decision tree classifier is information gain. To understand information gain, we first need to look at entropy. Entropy is a measure of unpredictability or chaos. A simple example would be the gender of a baby. When a woman is pregnant, the gender of the fetus can be male or female, but we don't know which one it is. If you were to guess, you have roughly a 50% chance of guessing correctly (give or take, because the gender distribution isn't perfectly uniform). However, during the pregnancy you have the opportunity to do an ultrasound to determine the gender of the fetus. An ultrasound is never conclusive, but the farther along in fetal development, the more accurate it becomes. This accuracy gain, or information gain, is there because uncertainty, or entropy, drops. Say an ultrasound at a certain number of weeks of pregnancy has a given accuracy in determining the gender of the baby: uncertainty still exists, but the ultrasound did reduce the uncertainty.
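The drop in entropy can be made concrete with a few lines of Python. This is a minimal sketch, not part of the book's case study; the 90% ultrasound accuracy used below is a made-up number purely for illustration:

# Minimal sketch: entropy (in bits) and information gain for a binary outcome.
import math

def entropy(probabilities):
    # Shannon entropy: 0.0 means no uncertainty, 1.0 is a 50/50 coin flip
    return -sum(p * math.log(p, 2) for p in probabilities if p > 0)

prior = entropy([0.5, 0.5])       # before the ultrasound: roughly a 50/50 guess
posterior = entropy([0.9, 0.1])   # after a hypothetical ultrasound that is right 90% of the time

print prior               # 1.0 bit of uncertainty
print posterior           # about 0.47 bits
print prior - posterior   # information gain: about 0.53 bits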
Figure: A decision tree with one variable: the doctor's conclusion from watching an ultrasound during pregnancy changes the probability of the fetus being identified as female, which makes the ultrasound a pretty good discriminator.

A decision tree follows this same principle, as shown in the figure. If another gender test has more predictive power, it could become the root of the tree, with the ultrasound test being in the branches, and this can go on until we run out of variables or observations. We can run out of observations because at every branch split we also split the input data. This is a big weakness of the decision tree, because at the leaf level of the tree robustness breaks down: if too few observations are left, the decision tree starts to overfit the data. Overfitting allows the model to mistake randomness for real correlations. To counteract this, a decision tree is pruned: its meaningless branches are left out of the final model.

Now that we've looked at the most important new techniques, let's dive into the case study.

Case study: Classifying Reddit posts
While text mining has many applications, in this case study we focus on document classification. As pointed out earlier in this chapter, this is exactly what Google does when it arranges your emails in categories or attempts to distinguish spam from regular emails. It's also extensively used by contact centers that process incoming customer questions or complaints: written complaints first pass through a topic detection filter so they can be assigned to the correct people for handling. Document classification is also one of the mandatory features of social media monitoring systems. The monitored tweets, forum or Facebook posts, newspaper articles, and many other internet resources are assigned topic labels. This way they can be reused in reports. Sentiment analysis is a specific type of text classification: is the author of a post negative, positive, or neutral on something? That "something" can be recognized with entity recognition.

In this case study we'll draw on posts from Reddit, a website also known as the self-proclaimed "front page of the internet," and attempt to train a model capable of distinguishing whether someone is talking about "data science" or "Game of Thrones". The end result can be a presentation of our model or a full-blown interactive application. In a later chapter we'll focus on application building for the end user, so for now we'll stick to presenting our classification model. To achieve our goal we'll need all the help and tools we can get, and it happens Python is once again ready to provide them.
Meet the Natural Language Toolkit
Python might not be the most execution-efficient language on earth, but it has a mature package for text mining and language processing: the Natural Language Toolkit (NLTK). NLTK is a collection of algorithms, functions, and annotated works that will guide you in taking your first steps in text mining and natural language processing. NLTK is also excellently documented on nltk.org. NLTK is, however, not often used for production-grade work, unlike other libraries such as Scikit-learn.

Installing NLTK and its corpora
Install NLTK with your favorite package installer. In case you're using Anaconda, it comes installed with the default Anaconda setup. Otherwise you can go for "pip" or "easy_install". When this is done you still need to install the included models and corpora to have it be fully functional. For this, run the following Python code:

import nltk
nltk.download()

Depending on your installation this will give you a pop-up or more command-line options. The figure shows the pop-up box you get when issuing the nltk.download() command. You can download all the corpora if you like, but for this chapter we'll only make use of "punkt" and "stopwords". This download will be explicitly mentioned in the code that comes with this book.

Figure: Choose all packages to fully complete the NLTK installation.
Two IPython notebook files are available for this chapter:

- Data collection: contains the data collection part of this case study.
- Data preparation and analysis: the stored data is put through data preparation and then subjected to analysis.

All code in the upcoming case study can be found in these two files in the same sequence and can also be run as such. In addition, two interactive graphs are available for download:

- forcegraph.html: represents the top features of our naive Bayes model.
- sunburst.html: represents the top four branches of our decision tree model.

To open these two HTML pages, an HTTP server is necessary, which you can get using Python and a command window:

1. Open a command window (Linux, Windows, whatever you fancy).
2. Move to the folder containing the HTML files and their JSON data files: decisiontreedata.json for the sunburst diagram and naivebayesdata.json for the force graph. It's important the HTML files remain in the same location as their data files or you'll have to change the JavaScript in the HTML files.
3. Create a Python HTTP server with the following command:

   python -m SimpleHTTPServer

4. Open a browser and go to localhost:8000 (the default port); here you can select the HTML files, as shown in the figure.

Figure: A Python HTTP server serving this chapter's output.

The Python packages we'll use in this chapter:

- nltk: for text mining
- praw: allows downloading posts from Reddit
- sqlite3: enables us to store data in the SQLite format
- matplotlib: a plotting library for visualizing data

Make sure to install all the necessary libraries and corpora before moving on. Before we dive into the action, however, let's look at the steps we'll take to get to our goal of creating a topic classification model.
Data science process overview and Step 1: The research goal
To solve this text mining exercise, we'll once again make use of the data science process. The figure shows the data science process applied to our Reddit classification case. Not all the elements depicted in it might make sense at this point; the rest of the chapter is dedicated to working this out in practice as we work toward our research goal: creating a classification model capable of distinguishing posts about "data science" from posts about "Game of Thrones". Without further ado, let's go get our data.

Figure: Data science process overview applied to the Reddit topic classification case study.
1. Setting the research goal: we need to distinguish Reddit posts about data science from posts about Game of Thrones; our goal is creating a model that does this classification reliably.
2. Retrieving data: we have no internal data on this; Reddit is the external data source we use, and we use PRAW to access its data API. The data is stored in SQLite.
3. Data preparation: data cleansing (stop word filtering, hapaxes filtering) and data transformation (word tokenization, term lowercasing, stemming, data labeling); we have but a single data set, so no combining of data is needed.
4. Data exploration: word frequencies histogram; visually inspect the least and most common terms.
5. Data modeling: model and variable selection with naive Bayes (most informative features) and decision trees (tree visual inspection); model execution (document scoring); model diagnostics and model comparison (model accuracy, confusion matrix).
6. Presentation and automation: not part of this chapter, but the model can be turned into a batch program to score new posts.
Step 2: Data retrieval
We'll use Reddit data for this case, and for those unfamiliar with Reddit, take the time to familiarize yourself with its concepts at www.reddit.com. Reddit calls itself "the front page of the internet" because users can post things they find interesting and/or found somewhere on the internet, and only those things deemed interesting by many people are featured as "popular" on its homepage. You could say Reddit gives an overview of the trending things on the internet. Any user can post within a predefined category called a "subreddit". When a post is made, other users get to comment on it and can up-vote it if they like the content or down-vote it if they dislike it. Because a post is always part of a subreddit, we have this metadata at our disposal when we hook up to the Reddit API to get our data: we're effectively fetching labeled data, because we'll assume that a post in the subreddit "gameofthrones" has something to do with Game of Thrones.

To get to our data we make use of the official Reddit Python API library called PRAW. Once we get the data we need, we'll store it in a lightweight database-like file called SQLite. SQLite is ideal for storing small amounts of data because it doesn't require any setup to use and will respond to SQL queries like any regular relational database does. Any other data storage medium will do; if you prefer Oracle or Postgres databases, Python has an excellent library to interact with these without the need to write SQL: SQLAlchemy, which will work for SQLite files as well. The figure shows the data retrieval step within the data science process.

Figure: The data science process data retrieval step for the Reddit topic classification case: we have no internal data on this; Reddit is the external data source, accessed through PRAW, and the data is stored in SQLite.

Open your favorite Python interpreter; it's time for action, as shown in the listing. First we need to collect our data from the Reddit website. If you haven't already, use pip install praw or conda install praw (Anaconda) before running the following script. Note: the code for this step can also be found in the IPython file "Data collection", available in this book's download section.
Listing: Setting up a SQLite database and Reddit API client

import praw                                   # Import the praw and sqlite3 libraries
import sqlite3

conn = sqlite3.connect('reddit.db')           # Set up a connection to a SQLite database
c = conn.cursor()

# Execute SQL statements to create the topics and comments tables
c.execute('''DROP TABLE IF EXISTS topics''')
c.execute('''DROP TABLE IF EXISTS comments''')
c.execute('''CREATE TABLE topics (topicTitle text, topicText text, topicID text, topicCategory text)''')
c.execute('''CREATE TABLE comments (commentText text, commentID text, topicTitle text, topicText text, topicID text, topicCategory text)''')

user_agent = "Introducing Data Science book"
r = praw.Reddit(user_agent=user_agent)        # Create a praw user agent so we can use the Reddit API

# Our list of subreddits we'll draw into our SQLite database
subreddits = ['datascience', 'gameofthrones']

# Maximum number of posts we'll fetch from Reddit per category;
# the maximum Reddit allows at any single time is 1,000
limit = 1000

Let's first import the necessary libraries. Now that we have access to the SQLite and PRAW capabilities, we need to prepare our little local database for the data it's about to receive. By defining a connection to a SQLite file we automatically create it if it doesn't already exist. We then define a data cursor that's capable of executing any SQL statement, so we use it to predefine the structure of our database. The database will contain two tables: the topics table contains the Reddit topics, which is similar to someone starting a new post on a forum, and the second table contains the comments and is linked to the topics table via the "topicID" column. The two tables have a one (topics table) to many (comments table) relationship. For the case study we'll limit ourselves to using the topics table, but the data collection will incorporate both, because this allows you to experiment with this extra data if you feel like it. To hone your text-mining skills you could perform sentiment analysis on the topic comments and find out which topics receive negative or positive comments. You could then correlate this to the model features we'll produce by the end of this chapter.

We need to create a PRAW client to get access to the data. Every subreddit can be identified by its name, and we're interested in "datascience" and "gameofthrones". The limit represents the maximum number of topics (posts, not comments) we'll draw in from Reddit. A thousand is also the maximum number the API allows us to fetch at any given request, though we could request more later on when people have
posted new things. In fact, we can run the API request periodically and gather data over time: while at any given time you're limited to a thousand posts, nothing stops you from growing your own database over the course of months. It's worth noting the following script might take about an hour to complete. If you don't feel like waiting, feel free to proceed and use the downloadable SQLite file. Also, if you run it now, you are not likely to get the exact same output as when it was first run to create the output shown in this chapter. Let's look at our data retrieval function, as shown in the following listing.

Listing: Reddit data retrieval and storage in SQLite

def prawGetData(limit, subredditName):
    # From the subreddit, get the hottest 1,000 (in our case) topics
    topics = r.get_subreddit(subredditName).get_hot(limit=limit)
    commentInsert = []
    topicInsert = []
    topicNBR = 1
    for topic in topics:
        # This part is informative only and not necessary for the code to work:
        # it reports on the download progress
        if (float(topicNBR)/limit)*100 in xrange(1, 100):
            print '**********TOPIC: ' + str(topic.id) + ' *********COMPLETE: ' + str((float(topicNBR)/limit)*100) + '% ****'
        topicNBR += 1
        try:
            # Specific fields of the topic are appended to the list; we only use the title
            # and text throughout the exercise, but the topic id would be useful for
            # building your own (bigger) database of topics
            topicInsert.append((topic.title, topic.selftext, topic.id, subredditName))
        except:
            pass
        try:
            # Append comments to a list; these are not used in the exercise,
            # but now you have them for experimentation
            for comment in topic.comments:
                commentInsert.append((comment.body, comment.id, topic.title, topic.selftext, topic.id, subredditName))
        except:
            pass
    print '********************************'
    print 'INSERTING DATA INTO SQLITE'
    # Insert all topics into the SQLite database
    c.executemany('INSERT INTO topics VALUES (?,?,?,?)', topicInsert)
    print 'INSERTED TOPICS'
    # Insert all comments into the SQLite database
    c.executemany('INSERT INTO comments VALUES (?,?,?,?,?,?)', commentInsert)
    print 'INSERTED COMMENTS'
    # Commit changes (data insertions) to the database; without the commit, no data is inserted
    conn.commit()

# The function is executed for all subreddits we specified earlier
for subject in subreddits:
    prawGetData(limit=limit, subredditName=subject)
The prawGetData() function retrieves the "hottest" topics in its subreddit, appends them to an array, and then gets all their related comments. This goes on until a thousand topics are reached or no more topics exist to fetch, and everything is stored in the SQLite database. The print statements are there to inform you of its progress toward gathering a thousand topics. All that's left for us to do is execute the function for each subreddit. If you'd like this analysis to incorporate more than two subreddits, it's a matter of adding an extra category to the subreddits array. With the data collected, we're ready to move on to data preparation.

Step 3: Data preparation
As always, data preparation is the most crucial step to get correct results. For text mining this is even truer, since we don't even start off with structured data. The upcoming code is available online as the IPython file "Data preparation and analysis". Let's start by importing the required libraries and preparing the SQLite database, as shown in the following listing.

Listing: Text mining libraries, corpora dependencies, and SQLite database connection

import sqlite3                       # Import all required libraries
import nltk
import matplotlib.pyplot as plt
from collections import OrderedDict
import random

nltk.download('punkt')               # Download the corpora we make use of
nltk.download('stopwords')

conn = sqlite3.connect('reddit.db')  # Make a connection to the SQLite database
c = conn.cursor()                    #   that contains our Reddit data

In case you haven't already downloaded the full NLTK corpus, we'll now download the part of it we'll use. Don't worry if you already downloaded it; the script will detect whether your corpora are up to date. Our data is still stored in the Reddit SQLite file, so let's create a connection to it. Even before exploring our data, we know of at least two things we have to do to clean it: stop word filtering and lowercasing. A general word filter function will help us filter out the unclean parts; let's create one in the following listing.
Listing: Word filtering and lowercasing functions

# wordFilter() will remove excluded terms from an array of terms and return the filtered array
def wordFilter(excluded, wordrow):
    filtered = [word for word in wordrow if word not in excluded]
    return filtered

# The stop word variable contains the English stop words present in NLTK by default
stopwords = nltk.corpus.stopwords.words('english')

# lowerCaseArray() transforms any term to its lowercased version
def lowerCaseArray(wordrow):
    lowercased = [word.lower() for word in wordrow]
    return lowercased

The English stop words will be the first to leave our data. The following code will provide us these stop words:

stopwords = nltk.corpus.stopwords.words('english')
print stopwords

The figure shows the list of English stop words in NLTK.

Figure: The English stop words list in NLTK.

With all the necessary components in place, let's have a look at our first data processing function in the following listing.
Listing: First data preparation function and execution

def data_processing(sql):
    c.execute(sql)                              # Create a pointer to the SQLite data
    data = {'wordMatrix': [], 'all_words': []}
    row = c.fetchone()                          # Fetch the data row by row
    while row is not None:
        # row[0] is the topic title, row[1] is the topic text; we turn them into a single text blob
        wordrow = nltk.tokenize.word_tokenize(row[0] + " " + row[1])
        wordrow_lowercased = lowerCaseArray(wordrow)
        wordrow_nostopwords = wordFilter(stopwords, wordrow_lowercased)
        # data['all_words'] will be used for data exploration
        data['all_words'].extend(wordrow_nostopwords)
        # data['wordMatrix'] is a matrix comprised of word vectors: one vector per document
        data['wordMatrix'].append(wordrow_nostopwords)
        row = c.fetchone()                      # Get a new document from the SQLite database
    return data

# Our subreddits as defined earlier
subreddits = ['datascience', 'gameofthrones']
data = {}
# Call the data processing function for every subreddit
for subject in subreddits:
    data[subject] = data_processing(sql='''SELECT topicTitle, topicText, topicCategory FROM topics WHERE topicCategory = ''' + "'" + subject + "'")

Our data_processing() function takes in a SQL statement and returns the document-term matrix. It does this by looping through the data one entry (Reddit topic) at a time and combining the topic title and topic body text into a single word vector with the use of word tokenization. A tokenizer is a text handling script that cuts the text into pieces. You have many different ways to tokenize text: you can divide it into sentences or words, you can split by spaces and punctuation, or you can take other characters into account, and so on. Here we opted for the standard NLTK word tokenizer. This word tokenizer is simple: all it does is split the text into terms if there's a space between the words. We then lowercase the vector and filter out the stop words. Note how the order is important here: a stop word at the beginning of a sentence wouldn't be filtered if we first filtered the stop words before lowercasing. For instance, in "I like Game of Thrones," the "I" would not be lowercased and thus would not be filtered out. We then create a word matrix (term-document matrix) and a list containing all the words. Notice how we extend the list without filtering for doubles; this way we can create a histogram of word occurrences during data exploration. Let's execute the function for our two topic categories. The figure shows the first word vector of the "datascience" category.

print data['datascience']['wordMatrix'][0]
Figure: The first word vector of the "datascience" category after a first data processing attempt.

This sure looks polluted: punctuation marks are kept as separate terms and several words haven't even been split. Further data exploration should clarify a few things for us.

Step 4: Data exploration
We now have all our terms separated, but the sheer size of the data hinders us from getting a good grip on whether it's clean enough for actual use. By looking at a single vector, we already spot a few problems though: several words haven't been split correctly and the vector contains many single-character terms. Single-character terms might be good topic differentiators in certain cases; for example, an economic text will contain more $, £, and € signs than a medical text. But in most cases these one-character terms are useless. First, let's have a look at the frequency distribution of our terms.

wordfreqs_cat1 = nltk.FreqDist(data['datascience']['all_words'])
plt.hist(wordfreqs_cat1.values(), bins=range(10))
plt.show()
wordfreqs_cat2 = nltk.FreqDist(data['gameofthrones']['all_words'])
plt.hist(wordfreqs_cat2.values(), bins=range(10))
plt.show()

By drawing a histogram of the frequency distribution (see the figure) we quickly notice that the bulk of our terms occur in only a single document. Single-occurrence terms such as these are called hapaxes, and model-wise they're useless, because a single occurrence of a feature is never enough to build a reliable model. This is good news for us: cutting these hapaxes out will significantly shrink our data without harming our eventual model. Let's look at a few of these single-occurrence terms.

print wordfreqs_cat1.hapaxes()
print wordfreqs_cat2.hapaxes()

The terms we see in the figure make sense, and if we had more data they'd likely occur more often.
Figure: Histograms of term frequencies for the "data science" and "game of thrones" posts; in both term matrices the bulk of the terms occur only once.

Figure: "Data science" and "game of thrones" single-occurrence terms (hapaxes).
Many of these terms are incorrect spellings of otherwise useful ones, such as "jaimie" for Jaime (Lannister), "milisandre" for Melisandre, and so on. A decent Game of Thrones-specific thesaurus could help us find and replace these misspellings with a fuzzy search algorithm. This proves data cleaning in text mining can go on indefinitely if you so desire; keeping effort and payoff in balance is crucial here. Let's now have a look at the most frequent words.

print wordfreqs_cat1.most_common(20)
print wordfreqs_cat2.most_common(20)

The figure shows the output of asking for the most common words of each category.

Figure: Top most frequent words for the "data science" and "game of thrones" posts.

Now this looks encouraging: several common words do seem specific to their topics. Words such as "data," "science," and "season" are likely to become good differentiators. Another important thing to notice is the abundance of single-character terms such as punctuation marks; we'll get rid of these. With this extra knowledge, let's revise our data preparation script.

Step 3 revisited: Data preparation adapted
This short data exploration has already drawn our attention to a few obvious tweaks we can make to improve our text. Another important one is stemming the terms.
The following listing shows a simple stemming algorithm called "snowball stemming". These snowball stemmers can be language-specific, so we'll use the English one; however, it does support many languages.

Listing: The Reddit data processing revised after data exploration

# Initialize the stemmer from the NLTK library
stemmer = nltk.SnowballStemmer("english")

def wordStemmer(wordrow):
    stemmed = [stemmer.stem(word) for word in wordrow]
    return stemmed

# Manually added stop words: terms to remove/ignore on top of the NLTK stop word list
manual_stopwords = [',', '(', ')', '[', ']', "'s", "'ve", '#', '``', "''", '!', '=', '&', '%', '*', '--', ';', '-', ':']

def data_processing(sql, manual_stopwords):
    # Create a pointer to the SQLite data
    c.execute(sql)
    data = {'wordMatrix': [], 'all_words': []}
    interWordMatrix = []      # Temporary word matrix; becomes the final word matrix after hapax removal
    interWordList = []        # Temporary word list used to remove hapaxes later on

    # Fetch data (Reddit posts) one by one from the SQLite database
    row = c.fetchone()
    while row is not None:
        # row[0] and row[1] contain the title and text of the post respectively;
        # we combine them into a single text blob
        tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+|[^\w\s]+')
        wordrow = tokenizer.tokenize(row[0] + " " + row[1])
        wordrow_lowercased = lowerCaseArray(wordrow)
        wordrow_nostopwords = wordFilter(stopwords, wordrow_lowercased)
        # Remove the manually added stop words from the text blob
        wordrow_nostopwords = wordFilter(manual_stopwords, wordrow_nostopwords)
        wordrow_stemmed = wordStemmer(wordrow_nostopwords)
        interWordList.extend(wordrow_stemmed)
        interWordMatrix.append(wordrow_stemmed)
        row = c.fetchone()    # Get a new topic

    # Make a frequency distribution of all terms and get the list of hapaxes
    wordfreqs = nltk.FreqDist(interWordList)
    hapaxes = wordfreqs.hapaxes()

    # Loop through the temporary word matrix and remove the hapaxes in each word vector
    for wordvector in interWordMatrix:
        wordvector_nohapaxes = wordFilter(hapaxes, wordvector)
        # Append the corrected word vector to the final word matrix
        data['wordMatrix'].append(wordvector_nohapaxes)
        # Extend the list of all terms with the corrected word vector
        data['all_words'].extend(wordvector_nohapaxes)

    return data

# Run the new data processing function for both subreddits
for subject in subreddits:
    data[subject] = data_processing(sql='''SELECT topicTitle, topicText, topicCategory FROM topics WHERE topicCategory = ''' + "'" + subject + "'", manual_stopwords=manual_stopwords)
Notice the changes since the last data_processing() function. Our tokenizer is now a regular expression tokenizer. Regular expressions are not part of this book and are often considered challenging to master, but all this simple one does is cut the text into words: for words, any alphanumeric combination is allowed (\w+), so there are no more special characters or punctuation marks. We also applied the word stemmer and removed a list of extra stop words. And all the hapaxes are removed at the end, because everything needs to be stemmed first. Let's run our data preparation again. If we did the same exploratory analysis as before, we'd see it makes more sense, and we have no more hapaxes:

print wordfreqs_cat1.hapaxes()
print wordfreqs_cat2.hapaxes()

Let's take the top words of each category again (see the figure).

Figure: Top most frequent words in "data science" and "game of thrones" Reddit posts after data preparation.

We can see in the figure how the data quality has improved remarkably. Also, notice how certain words are shortened because of the stemming we applied. For instance, "science" and "sciences" have become "scienc"; "courses" and "course" have become "cours," and so on. The resulting terms are not actual words but are still interpretable. If you insist on your terms remaining actual words, lemmatization would be the way to go.
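A minimal sketch of that alternative is shown below. It is not part of the case study code; it assumes the "wordnet" corpus is downloaded (it is not among the corpora downloaded earlier in this chapter), and the pos argument hints at why lemmatization benefits from POS tagging:

# Minimal sketch: lemmatization with NLTK's WordNet lemmatizer.
import nltk
nltk.download('wordnet')          # extra corpus needed for lemmatization
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print lemmatizer.lemmatize("sciences")        # 'science': the result stays a real word
print lemmatizer.lemmatize("cars")            # 'car'
print lemmatizer.lemmatize("are", pos="v")    # 'be': conjugated verb back to its base form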
With the data cleaning process "completed" (remark: a text mining cleansing exercise can almost never be fully completed), all that remains are a few data transformations to get the data into the bag of words format. First, let's label all our data and also create a holdout sample of observations per category, as shown in the following listing.

Listing: Final data transformation and data splitting before modeling

# The holdout sample will be used to determine the model's flaws by constructing a
# confusion matrix; it is comprised of unlabeled data from the two subreddits, with
# an equal number of observations taken from each data set
holdoutLength = 100

# We create a single data set with every word vector tagged as being either
# 'datascience' or 'gameofthrones'
labeled_data1 = [(word, 'datascience') for word in data['datascience']['wordMatrix'][holdoutLength:]]
labeled_data2 = [(word, 'gameofthrones') for word in data['gameofthrones']['wordMatrix'][holdoutLength:]]
labeled_data = []
labeled_data.extend(labeled_data1)
labeled_data.extend(labeled_data2)

# We keep part of the data aside for a holdout sample; its labels are kept in a separate data set
holdout_data = data['datascience']['wordMatrix'][:holdoutLength]
holdout_data.extend(data['gameofthrones']['wordMatrix'][:holdoutLength])
holdout_data_labels = (['datascience' for _ in xrange(holdoutLength)] +
                       ['gameofthrones' for _ in xrange(holdoutLength)])

# A list of all unique terms is created to build the bag of words data
# we need for training or scoring a model
data['datascience']['all_words_dedup'] = list(OrderedDict.fromkeys(data['datascience']['all_words']))
data['gameofthrones']['all_words_dedup'] = list(OrderedDict.fromkeys(data['gameofthrones']['all_words']))
all_words = []
all_words.extend(data['datascience']['all_words_dedup'])
all_words.extend(data['gameofthrones']['all_words_dedup'])
all_words_dedup = list(OrderedDict.fromkeys(all_words))

# The data is turned into the binary bag of words format
prepared_data = [({word: (word in x[0]) for word in all_words_dedup}, x[1]) for x in labeled_data]
prepared_holdout_data = [{word: (word in x) for word in all_words_dedup} for x in holdout_data]

# The data for model training and testing is shuffled first; the larger part of the total
# is used for training and the remainder is used for testing model performance
random.shuffle(prepared_data)
train_split = 0.75     # fraction used for training; adjust to taste
train_size = int(len(prepared_data) * train_split)
train = prepared_data[:train_size]
test = prepared_data[train_size:]
text mining and text analytics the holdout sample will be used for our final test of the model and the creation of confusion matrix confusion matrix is way of checking how well model did on previously unseen data the matrix shows how many observations were correctly and incorrectly classified before creating or training and testing data we need to take one last steppouring the data into bag of words format where every term is given either "trueor "falselabel depending on its presence in that particular post we also need to do this for the unlabeled holdout sample our prepared data now contains every term for each vectoras shown in figure print prepared_data[ figure binary bag of words ready for modeling is very sparse data we created big but sparse matrixallowing us to apply techniques from if it was too big to handle on our machine with such small tablehoweverthere' no need for that now and we can proceed to shuffle and split the data into training and test set while the biggest part of your data should always go to the model trainingan optimal split ratio exists here we opted for - splitbut feel free to play with this the more observations you havethe more freedom you have here if you have few observations you'll need to allocate relatively more to training the model we're now ready to move on to the most rewarding partdata analysis step data analysis for our analysis we'll fit two classification algorithms to our datanaive bayes and decision trees naive bayes was explained in and decision tree earlier in this let' first test the performance of our naive bayes classifier nltk comes with classifierbut feel free to use algorithms from other packages such as scipy classifier nltk naivebayesclassifier train(trainwith the classifier trained we can use the test data to get measure on overall accuracy nltk classify accuracy(classifiertest
figure classification accuracy is measure representing what percentage of observations was correctly classified on the test data the accuracy on the test data is estimated to be greater than %as seen in figure classification accuracy is the number of correctly classified observations as percentage of the total number of observations be advisedthoughthat this can be different in your case if you used different data nltk classify accuracy(classifiertestthat' good number we can now lean back and relaxrightnonot really let' test it again on the observations holdout sample and this time create confusion matrix classified_data classifier classify_many(prepared_holdout_datacm nltk confusionmatrix(holdout_data_labelsclassified_dataprint cm the confusion matrix in figure shows us the is probably over the top because we have ( misclassified cases againthis can be different with your data if you filled the sqlite file yourself figure naive bayes model confusion matrix shows ( observations out of were misclassified twenty-eight misclassifications means we have an accuracy on the holdout sample this needs to be compared to randomly assigning new post to either the "datascienceor "gameofthronesgroup if we' randomly assigned themwe could expect an
text mining and text analytics accuracy of %and our model seems to perform better than that let' look at what it uses to determine the categories by digging into the most informative model features print(classifier show_most_informative_features( )figure shows the top terms capable of distinguishing between the two categories figure the most important terms in the naive bayes classification model the term "datais given heavy weight and seems to be the most important indicator of whether topic belongs in the data science category terms such as "scene,"season,"king,"tv,and "killare good indications the topic is game of thrones rather than data science all these things make perfect senseso the model passed both the accuracy and the sanity check the naive bayes does wellso let' have look at the decision tree in the following listing listing decision tree model training and evaluation create confusion matrix based on classification results and actual labels train decision tree classifier test classifier accuracy classifier nltk decisiontreeclassifier train(trainnltk classify accuracy(classifier testclassified_data classifier classify_many(prepared_holdout_datacm nltk confusionmatrix(holdout_data_labelsclassified_data print cm show confusion matrix attempt to classify holdout data (scoring
figure decision tree model accuracy as shown in figure the promised accuracy is we now know better than to rely solely on this single testso once again we turn to confusion matrix on second set of dataas shown in figure figure shows different story on these observations of the holdout sample the decision tree model tends to classify well when the post is about game of thrones but fails miserably when confronted with the data science posts it seems the model has preference for game of thronesand can you blame itlet' have look at the actual modeleven though in this case we'll use the naive bayes as our final model print(classifier pseudocode(depth= )the decision tree hasas the name suggestsa tree-like figure confusion matrix modelas shown in figure on decision tree model the naive bayes considers all the terms and has weights attributedbut the decision tree model goes through them sequentiallyfollowing the path from the root to the outer branches and leaves figure only shows the top four layersstarting with the term "data if "datais present in the postit' always data science if "datacan' be foundit checks for the term "learn,and so it continues possible reason why this decision tree isn' performing well is the lack of pruning when decision tree is built it has many leavesoften too many tree is then pruned to certain level to minimize overfitting big advantage of decision trees is the implicit interaction effects between words it figure decision tree model tree structure representation
text mining and text analytics takes into account when constructing the branches when multiple terms together create stronger classification than single termsthe decision tree will actually outperform the naive bayes we won' go into the details of that herebut consider this one of the next steps you could take to improve the model we now have two classification models that give us insight into how the two contents of the subreddits differ the last step would be to share this newfound information with other people step presentation and automation as last step we need to use what we learned and either turn it into useful application or present our results to others the last of this book discusses building an interactive applicationas this is project in itself for now we'll content ourselves with nice way to convey our findings nice graph orbetter yetan interactive graphcan catch the eyeit' the icing on the presentation cake while it' easy and tempting to represent the numbers as such or bar chart at mostit could be nice to go one step further for instanceto represent the naive bayes modelwe could use force graph (figure )where the bubble and link size represent how strongly related word is to the "game of thronesor "data sciencesubreddits notice how the words on the bubbles are often cut offremember this is because of the stemming we applied figure interactive force graph with the top naive bayes significant terms and their weights while figure in itself is staticyou can open the html file "forcegraph htmlto enjoy the js force graph effect as explained earlier in this js is outside of this book' scope but you don' need an elaborate knowledge of js to use it an extensive set of examples can be used with minimal adjustments to the code provided at
least a minor knowledge of JavaScript. The code for the force graph example can be found in the downloadable code for this chapter on the Manning website. We can also represent our decision tree in a rather original way. We could go for a fancy version of an actual tree diagram, but the following sunburst diagram is more original and equally fun to use. The figure shows the top layer of the sunburst diagram. It's possible to zoom in by clicking a circle segment; you can zoom back out by clicking the center circle. The code for this example is also included in the downloadable code.

Figure: Sunburst diagram created from the top four branches of the decision tree model
text mining and text analytics showing your results in an original way can be key to successful project people never appreciate the effort you've put into achieving your results if you can' communicate them and they're meaningful to them an original data visualization here and there certainly helps with this summary text mining is widely used for things such as entity identificationplagiarism detectiontopic identificationtranslationfraud detectionspam filteringand more python has mature toolkit for text mining called nltkor the natural language toolkit nltk is good for playing around and learning the ropesfor real-life applicationshoweverscikit-learn is usually considered more "production-ready scikit-learn is extensively used in previous the data preparation of textual data is more intensive than numerical data preparation and involves extra techniquessuch as stemming--cutting the end of word in smart way so it can be matched with some conjugated or plural versions of this word lemmatization--like stemmingit' meant to remove doublesbut unlike stemmingit looks at the meaning of the word stop word filtering--certain words occur too often to be useful and filtering them out can significantly improve models stop words are often corpusspecific tokenization--cutting text into pieces tokens can be single wordscombinations of words ( -grams)or even whole sentences pos tagging--part-of-speech tagging sometimes it can be useful to know what the function of certain word within sentence is to understand it better in our case study we attempted to distinguish reddit posts on "game of thronesversus posts on "data science in this endeavor we tried both the naive bayes and decision tree classifiers naive bayes assumes all features to be independent of one anotherthe decision tree classifier assumes dependencyallowing for different models in our examplenaive bayes yielded the better modelbut very often the decision tree classifier does better jobusually when more data is available we determined the performance difference using confusion matrix we calculated after applying both models on new (but labeleddata when presenting findings to other peopleit can help to include an interesting data visualization capable of conveying your results in memorable way
to the end user this covers considering options for data visualization for your end users setting up basic crossfilter mapreduce application creating dashboard with dc js working with dashboard development tools application focused you'll notice quickly this is certainly different from to in that the focus here lies on step of the data science process more specificallywhat we want to do here is create small data science application thereforewe won' follow the data science process steps here the data used in the case study is only partly real but functions as data flowing from either the data preparation or data modeling stage enjoy the ride oftendata scientists must deliver their new insights to the end user the results can be communicated in several waysa one-time presentation--research questions are one-shot deals because the business decision derived from them will bind the organization to certain
data visualization to the end user course for many years to come takefor examplecompany investment decisionsdo we distribute our goods from two distribution centers or only onewhere do they need to be located for optimal efficiencywhen the decision is madethe exercise may not be repeated until you've retired in this casethe results are delivered as report with presentation as the icing on the cake new viewport on your data--the most obvious example here is customer segmentation surethe segments themselves will be communicated via reports and presentationsbut in essence they form toolsnot the end result itself when clear and relevant customer segmentation is discoveredit can be fed back to the database as new dimension on the data from which it was derived from then onpeople can make their own reportssuch as how many products were sold to each segment of customers real-time dashboard--sometimes your task as data scientist doesn' end when you've discovered the new information you were looking for you can send your information back to the database and be done with it but when other people start making reports on this newly discovered gold nuggetthey might interpret it incorrectly and make reports that don' make sense as the data scientist who discovered this new informationyou must set the examplemake the first refreshable report so othersmainly reporters and itcan understand it and follow in your footsteps making the first dashboard is also way to shorten the delivery time of your insights to the end user who wants to use it on an everyday basis this wayat least they already have something to work with until the reporting department finds the time to create permanent report on the company' reporting software you might have noticed that few important factors are at playwhat kind of decision are you supportingis it strategic or an operational onestrategic decisions often only require you to analyze and report oncewhereas operational decisions require the report to be refreshed regularly how big is your organizationin smaller ones you'll be in charge of the entire cyclefrom data gathering to reporting in bigger ones team of reporters might be available to make the dashboards for you but even in this last situationdelivering prototype dashboard can be beneficial because it presents an example and often shortens delivery time although the entire book is dedicated to generating insightsin this last we'll focus on delivering an operational dashboard creating presentation to promote your findings or presenting strategic insights is out of the scope of this book data visualization options you have several options for delivering dashboard to your end users here we'll focus on single optionand by the end of this you'll be able to create dashboard yourself
this case is that of hospital pharmacy with stock of few thousand medicines the government came out with new norm to all pharmaciesall medicines should be checked for their sensitivity to light and be stored in newspecial containers one thing the government didn' supply to the pharmacies was an actual list of light-sensitive medicines this is no problem for you as data scientist because every medicine has patient information leaflet that contains this information you distill the information with the clever use of text mining and assign "light sensitiveor "not light sensitivetag to each medicine this information is then uploaded to the central database in additionthe pharmacy needs to know how many containers would be necessary for this they give you access to the pharmacy stock data when you draw sample with only the variables you requirethe data set looks like figure when opened in excel figure pharmacy medicines data set opened in excelthe first lines of stock data are enhanced with lightsensitivity variable as you can seethe information is time-series data for an entire year of stock movementso every medicine thus has entries in the data set although the case study is an existing one and the medicines in the data set are realthe values of the other variables presented here were randomly generatedas the original data is classified alsothe data set is limited to medicinesa little more than , lines of data even though people do create reports using crossfilter js ( javascript mapreduce libraryand dc js ( javascript dashboarding librarywith more than million lines of datafor the example' sake you'll use fraction of this amount alsoit' not recommended to load your entire database into the user' browserthe browser will freeze while loadingand if it' too much datathe browser will even crash normally data is precalculated on the server and parts of it are requested usingfor examplea rest service to turn this data into an actual dashboard you have many options and you can find short overview of the tools later in this among all the optionsfor this book we decided to go with dc jswhich is crossbreed between the javascript mapreduce library crossfilter and the data visualization library js crossfilter was developed by square registera company that handles payment transactionsit' comparable to paypal but its focus is on mobile square
data visualization to the end user developed crossfilter to allow their customers extremely speedy slice and dice on their payment history crossfilter is not the only javascript library capable of mapreduce processingbut it most certainly does the jobis open sourceis free to useand is maintained by an established company (squareexample alternatives to crossfilter are map jsmeguroand underscore js javascript might not be known as data crunching languagebut these libraries do give web browsers that extra bit of punch in case data does need to be handled in the browser we won' go into how javascript can be used for massive calculations within collaborative distributed frameworksbut an army of dwarfs can topple giant if this topic interests youyou can read more about it at js can safely be called the most versatile javascript data visualization library available at the time of writingit was developed by mike bostock as successor to his protovis library many javascript libraries are built on top of js nvd jsxchartsand dimple offer roughly the same thingan abstraction layer on top of jswhich makes it easier to draw simple graphs they mainly differ in the type of graphs they support and their default design feel free to visit their websites and find out for yourselfnvd -- js--xcharts--dimple--many options exist so why dc jsthe main reasoncompared to what it deliversan interactive dashboard where clicking one graph will create filtered views on related graphsdc js is surprisingly easy to set up it' so easy that you'll have working example by the end of this as data scientistyou already put in enough time on your actual analysiseasy-to-implement dashboards are welcome gift to get an idea of what you're about to createyou can go to the following websiteclick around the dashboard and see the graphs react and interact when you select and deselect data points don' spend too long thoughit' time to create this yourself as stated beforedc js has two big prerequisitesd js and crossfilter js js has steep learning curve and there are several books on the topic worth reading if you're interested in full customization of your visualizations but to work with dc jsno knowledge of it is requiredso we won' go into it in this book crossfilter js is another matteryou'll need to have little grasp of this mapreduce library to get dc js up and running on your data but because the concept of mapreduce itself isn' newthis will go smoothly
figure dc js interactive example on its official website crossfilterthe javascript mapreduce library javascript isn' the greatest language for data crunching but that didn' stop peoplelike the folks at squarefrom developing mapreduce libraries for it if you're dealing with dataevery bit of speed gain helps you don' want to send enormous loads of data over the internet or even your internal network thoughfor these reasonssending bulk of data will tax the network to the point where it will bother other users the browser is on the receiving endand while loading in the data it will temporarily freeze for small amounts of data this is unnoticeablebut when you start looking at , linesit can become visible lag when you go over
data visualization to the end user , , linesdepending on the width of your datayour browser could give up on you conclusionit' balance exercise for the data you do sendthere is crossfilter to handle it for you once it arrives in the browser in our case studythe pharmacist requested the central server for stock data of for medicines she was particularly interested in we already took look at the dataso let' dive into the application itself setting up everything it' time to build the actual applicationand the ingredients of our small dc js application are as followsjquery--to handle the interactivity crossfilter js-- mapreduce library and prerequisite to dc js js-- popular data visualization library and prerequisite to dc js dc js--the visualization library you will use to create your interactive dashboard bootstrap-- widely used layout library you'll use to make it all look better you'll write only three filesindex html--the html page that contains your application application js--to hold all the javascript code you'll write application css--for your own css in additionyou'll need to run our code on an http server you could go through the effort of setting up lamp (linuxapachemysqlphp)wamp (windowsapachemysqlphp)or xampp (cross environmentapachemysqlphpperlserver but for the sake of simplicity we won' set up any of those servers here instead you can do it with single python command use your command-line tool (linux shell or windows cmdand move to the folder containing your index html (once it' thereyou should have python installed for other of this book so the following command should launch python http server on your localhost python - simplehttpserver for python python - http server as you can see in figure an http server is started on localhost port in your browser this translates to "localhost: "putting : won' work
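For reference, Python's built-in web server is started with the -m flag from the folder that holds index.html. SimpleHTTPServer is the Python 2 module and http.server is its Python 3 counterpart; both serve on port 8000 by default.

python -m SimpleHTTPServer 8000
python3 -m http.server 8000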
crossfilterthe javascript mapreduce library figure starting up simple python http server make sure to have all the required files available in the same folder as your index html you can download them from the manning website or from their creatorswebsites dc css and dc min js-- min js--crossfilter min js--now we know how to run the code we're about to createso let' look at the index html pageshown in the following listing listing an initial version of index html all css is loaded here data science application main container incorporates everything visible to user make sure to have dc css downloaded from the manning download page or from the dc websitedc jsit must be present in the same folder as index html file data science application
data visualization to the end user make sure to have crossfilter min jsd min jsand dc min js downloaded from their websites or from the manning website crossfilterd jsdc min jsall javascript is loaded here no surprises here the header contains all the css libraries you'll useso we'll load our javascript at the end of the html body using jquery onload handleryour application will be loaded when the rest of the page is ready you start off with two table placeholdersone to show what your input data looks likeand the other one will be used with crossfilter to show filtered tableseveral bootstrap css classes were usedsuch as "well""container"the bootstrap grid system with "rowand "col-xx-xx"and so on they make the whole thing look nicer but they aren' mandatory more information on the bootstrap css classes can be found on their website at now that you have your html set upit' time to show your data onscreen for thisturn your attention to the application js file you created firstwe wrap the entire code "to bein jquery onload handler $(function(//all future code will end up in this wrapper }now we're certain our application will be loaded only when all else is ready this is important because we'll use jquery selectors to manipulate the html it' time to load in data csv('medicines csv',function(datamain(data})you don' have rest service ready and waiting for youso for the example you'll draw the data from csv file this file is available for download on manning' website js offers an easy function for that after loading in the data you hand it over to your main application function in the csv callback function apart from the main function you have createtable functionwhich you will use to you guessed it create your tablesas shown in the following listing
listing the createtable function var tabletemplate $(""""""join('\ '))createtable function(data,variablesintable,title)var table tabletemplate clone()var ths variablesintable map(function(vreturn $(""text( })$('caption'tabletext(title)$('thead tr'tableappend(ths)data foreach(function(rowvar tr $(""appendto($('tbody'table))variablesintable foreach(function(varnamevar val rowkeys varname split(')keys foreach(function(keyval val[key})tr append($(""text(val))})})return tablecreatetable(requires three argumentsdata--the data it needs to put into table variablesintable--what variables it needs to show title--the title of the table it' always nice to know what you're looking at createtable(uses predefined variabletabletemplatethat contains our overall table layout createtable(can then add rows of data to this template now that you have your utilitieslet' get to the main function of the applicationas shown in the following listing listing javascript main function main function(inputdata)var medicinedata inputdata put the variables we'll show in the table in an array so we can loop through them when creating table code our datanormally this is fetched from server but in this case we read it from local csv file var dateformat time format("% /% /% ")convert date to correct medicinedata foreach(function (dformat so crossfilter will day dateformat parse( date)recognize date variable }var variablesintable only show ['medname','stockin','stockout','stock','date','lightsen'sample of data var sample medicinedata slice( , )var inputtable $("#inputtable")create table
data visualization to the end user inputtable empty(append(createtable(sample,variablesintable,"the input table"))you start off by showing your data on the screenbut preferably not all of itonly the first five entries will doas shown in figure you have date variable in your data and you want to make sure crossfilter will recognize it as such later onso you first parse it and create new variable called day you show the originaldateto appear in the table for nowbut later on you'll use day for all your calculations figure input medicine table shown in browserfirst five lines this is what you end up withthe same thing you saw in excel before now that you know the basics are workingyou'll introduce crossfilter into the equation unleashing crossfilter to filter the medicine data set now let' go into crossfilter to use filtering and mapreduce henceforth you can put all the upcoming code after the code of section within the main(function the first thing you'll need to do is declare crossfilter instance and initiate it with your data crossfilterinstance crossfilter(medicinedata)from here you can get to work on this instance you can register dimensionswhich are the columns of your table currently crossfilter is limited to dimensions if you're handling data wider than dimensionsyou should consider narrowing it down before sending it to the browser let' create our first dimensionthe medicine name dimensionvar mednamedim crossfilterinstance dimension(function( {return medname;})
your first dimension is the name of the medicinesand you can already use this to filter your data set and show the filtered data using our createtable(function var datafilteredmednamedim filter('grazax sq- 'var filteredtable $('#filteredtable')filteredtable empty(append(createtable(datafiltered top( ),variablesintable,'our first filtered table'))you show only the top five observations (figure )you have because you have the results from single medicine for an entire year figure data filtered on medicine name grazax sq- this table doesn' look sorted but it is the top(function sorted it on medicine name because you only have single medicine selected it doesn' matter sorting on date is easy enough using your new day variable let' register another dimensionthe date dimensionvar datedim crossfilterinstance dimensionfunction( {return day;})now we can sort on date instead of medicine namefilteredtable empty(append(createtable(datedim bottom( ),variablesintable,'our first filtered table'))the result is bit more appealingas shown in figure this table gives you window view of your data but it doesn' summarize it for you yet this is where the crossfilter mapreduce capabilities come in let' say you would like to know how many observations you have per medicine logic dictates that you
data visualization to the end user figure data filtered on medicine name grazax sq- and sorted by day should end up with the same number for every medicine or observation per day in var countpermed mednamedim group(reducecount()variablesintable ["key","value"filteredtable empty(append(createtable(countpermed top(infinity)variablesintable,'reduced table'))crossfilter comes with two mapreduce functionsreducecount(and reducesum(if you want to do anything apart from counting and summingyou need to write reduce functions for it the countpermed variable now contains the data grouped by the medicine dimension and line count for each medicine in the form of key and value to create the table you need to address the variable key instead of medname and value for the count (figure figure mapreduced table with the medicine as the group and count of data lines as the value by specifying top(infinityyou ask to show all medicines onscreenbut for the sake of saving paper figure shows only the first five results okayyou can rest easythe data contains lines per medicine notice how crossfilter ignored the filter on "grazaxif dimension is used for groupingthe filter doesn' apply to it only filters on other dimensions can narrow down the results
what about more interesting calculations that don' come bundled with crossfiltersuch as an averagefor instanceyou can still do that but you' need to write three functions and feed them to reduce(method let' say you want to know the average stock per medicine as previously mentionedalmost all of the mapreduce logic needs to be written by you an average is nothing more than the division of sum by countso you will require bothhow do you go about thisapart from the reducecount(and reducesum(functionscrossfilter has the more general reduce(function this function takes three argumentsthe reduceadd(function-- function that describes what happens when an extra observation is added the reduceremove(function-- function that describes what needs to happen when an observation disappears (for instancebecause filter is appliedthe reduceinit(function--this one sets the initial values for everything that' calculated for sum and count the most logical starting point is let' look at the individual reduce functions you'll require before trying to call the crossfilter reduce(methodwhich takes these three components as arguments custom reduce function requires three componentsan initiationan add functionand remove function the initial reduce function will set starting values of the objectvar reduceinitavg function( , )return {count stocksum stockavg: }as you can seethe reduce functions themselves take two arguments these are automatically fed to them by the crossfilter reduce(methodp is an object that contains the combination situation so farit persists over all observations this variable keeps track of the sum and count for you and thus represents your goalyour end result represents record of the input data and has all its variables available to you contrary to pit doesn' persist but is replaced by new line of data every time the function is called the reduceinit(is called only oncebut reduceadd(is called every time record is added and reduceremove(every time line of data is removed the reduceinit(functionhere called reduceinitavg(because you're going to calculate an averagebasically initializes the object by defining its components (countsumand averageand setting their initial values let' look at reduceaddavg()var reduceaddavg function( , ) count + stocksum stocksum number( stock) stockavg math round( stocksum count)return
data visualization to the end user reduceaddavg(takes the same and arguments but now you actually use vyou don' need your data to set the initial values of in this casealthough you can if you want to your stock is summed up for every record you addand then the average is calculated based on the accumulated sum and record countvar reduceremoveavg function( , ) count - stocksum stocksum number( stock) stockavg math round( stocksum count)return pthe reduceremoveavg(function looks similar but does the oppositewhen record is removedthe count and sum are lowered the average always calculates the same wayso there' no need to change that formula the moment of truthyou apply this homebrewed mapreduce function to the data setdatafiltered mednamedim group(reduce(reduceaddavgreduceremoveavg,reduceinitavgbusiness as usualdraw result table variablesintable ["key","value stockavg"filteredtable empty(append(createtable(datafiltered top(infinity)variablesintable,'reduced table'))reduce(takes the functions (reduceinitavg()reduceaddavg()and reduceremoveavg()as input arguments notice how the name of your output variable has changed from value to value stockavg because you defined the reduce functions yourselfyou can output many variables if you want to thereforevalue has changed into an object containing all the variables you calculatedstocksum and count are also in there the results speak for themselvesas shown in figure it seems we've borrowed cimalgex from other hospitalsgoing into an average negative stock this is all the crossfilter you need to know to work with dc jsso let' move on and bring out those interactive graphs figure mapreduced table with average stock per medicine
creating an interactive dashboard with dc js creating an interactive dashboard with dc js now that you know the basics of crossfilterit' time to take the final stepbuilding the dashboard let' kick off by making spot for your graphs in the index html page the new body looks like the following listing you'll notice it looks similar to our initial setup apart from the added graph placeholder tags and the reset button tag listing revised index html with space for graphs generated by dc js layouttitle input table (row filtered table (row reset button stock-over-time chart stock-per-medicine chart (row light-sensitive chart (row (column (column data science application this is placeholder for input data table inserted later this is newreset button this is newlight sensitivity piechart placeholder this is placeholder for filtered table inserted later reset filters this is newtime chart placeholder this is newstock per medicine bar-chart placeholder
data visualization to the end user crossfilterd and dc libraries can be downloaded from their respective websites our own application javascript code standard practice js libraries are last to speed page load jqueryvital html-javascript interaction bootstrapsimplified css and layout from folks at twitter crossfilterour javascript mapreduce library of choice the scriptnecessary to run dc js dcour visualization library applicationour data science applicationhere we store all the logic min js denotes minified javascript for our rd party libraries we've got bootstrap formatting going onbut the most important elements are the three tags with ids and the button what you want to build is representation of the total stock over timewith the possibility of filtering on medicinesand whether they're lightsensitive or notyou also want button to reset all the filtersreset filters this reset button element isn' requiredbut is useful now turn your attention back to application js in here you can add all the upcoming code in your main(function as before there ishoweverone exception to the ruledc renderall()is dc' command to draw the graphs you need to place this render command only onceat the bottom of your main(function the first graph you need is the "total stock over time,as shown in the following listing you already have the time dimension declaredso all you need is to sum your stock by the time dimension listing code to generate "total stock over timegraph var summatedstockperday datedim group(reducesum(function( ){return stock;}var mindate datedim bottom( )[ dayvar maxdate datedim top( )[ dayvar stockovertimelinechart dc linechart("#stockovertime")deliveries per day graph stockovertimelinechart width(null/null means size to fit container height( dimension(datedimgroup(summatedstockperdaystock over time data line chart
creating an interactive dashboard with dc js deliveries per day graph dc renderall() ( time scale(domain([mindate,maxdate])xaxislabel("year "yaxislabel("stock"margins({left right top bottom }render all graphs look at all that' happening here first you need to calculate the range of your -axis so dc js will know where to start and end the line chart then the line chart is initialized and configured the least self-explanatory methods here are group(and dimension(group(takes the time dimension and represents the -axis dimension(is its counterpartrepresenting the -axis and taking your summated data as input figure looks like boring line chartbut looks can be deceiving figure dc js graphsum of medicine stock over the year things change drastically once you introduce second elementso let' create row chart that represents the average stock per medicineas shown in the next listing listing code to generate "average stock per medicinegraph var averagestockpermedicinerowchart dc rowchart("#stockpermedicine")var avgstockmedicine mednamedim group(reduce(reduceaddavgreduceremoveavg,reduceinitavg)averagestockpermedicinerowchart width(nullheight( null means "size to fit containeraverage stock per medicine row chart
data visualization to the end user dimension(mednamedimgroup(avgstockmedicinemargins({top left right bottom }valueaccessor(function ( {return value stockavg;})this should be familiar because it' graph representation of the table you created earlier one big point of interestbecause you used custom-defined reduce(function this timedc js doesn' know what data to represent with the valueaccessor(method you can specify value stockavg as the value of your choice the dc js row chart' label' font color is graythis makes your row chart somewhat hard to read you can remedy this by overwriting its css in your application css filedc-chart row text {fillblack;one simple line can make the difference between clear and an obscure graph (figure figure dc js line chart and row chart interaction now when you select an area on the line chartthe row chart is automatically adapted to represent the data for the correct time period inverselyyou can select one or multiple medicines on the row chartcausing the line chart to adjust accordingly finallylet' add the light-sensitivity dimension so the pharmacist can distinguish between stock for light-sensitive medicines and non-light-sensitive onesas shown in the following listing listing adding the light-sensitivity dimension var lightsendim crossfilterinstance dimensionfunction( ){return lightsen;})var summatedstocklight lightsendim group(reducesumfunction( {return stock;})var lightsensitivestockpiechart dc piechart("#lightsensitivestock")
creating an interactive dashboard with dc js lightsensitivestockpiechart width(null/null means size to fit container height( dimension(lightsendimradius( group(summatedstocklightwe hadn' introduced the light dimension yetso you need to register it onto your crossfilter instance first you can also add reset buttonwhich causes all filters to resetas shown in the following listing listing the dashboard reset filters button when an element with class btnsuccess is clicked (our reset button)resetfilters(is called resetfilters function()stockovertimelinechart filterall()lightsensitivestockpiechart filterall()averagestockpermedicinerowchart filterall()dc redrawall()resetfilters(function will reset our dc js data and redraw graphs $(btn-success'click(resetfilters)the filterall(method removes all filters on specific dimensiondc redrawall(then manually triggers all dc charts to redraw the final result is an interactive dashboard (figure )ready to be used by our pharmacist to gain insight into her stock' behavior figure dc js fully interactive dashboard on medicines and their stock within the hospital pharmacy
data visualization to the end user dashboard development tools we already have our glorious dashboardbut we want to end this with short (and far from exhaustiveoverview of the alternative software choices when it comes to presenting your numbers in an appealing way you can go with proven and true software packages of renowned developers such as tableaumicrostrategyqliksapibmsasmicrosoftspotfireand so on these companies all offer dashboard tools worth investigating if you're working in big companychances are good you have at least one of those paid tools at your disposal developers can also offer free public versions with limited functionality definitely check out tableau if you haven' already at other companies will at least give you trial version in the end you have to pay for the full version of any of these packagesand it might be worth itespecially for bigger company that can afford it this book' main focus is on free toolshowever when looking at free data visualization toolsyou quickly end up in the html worldwhich proliferates with free javascript libraries to plot any data you want the landscape is enormoushighcharts--one of the most mature browser-based graphing libraries the free license applies only to noncommercial pursuits if you want to use it in commercial contextprices range anywhere from $ to $ see highsoft com/highcharts html chartkick-- javascript charting library for ruby on rails fans see google charts--the free charting library of google as with many google productsit is free to useeven commerciallyand offers wide range of graphs see js--this is an odd one out because it isn' graphing library but data visualization library the difference might sound subtle but the implications are not whereas libraries such as highcharts and google charts are meant to draw certain predefined chartsd js doesn' lay down such restrictions js is currently the most versatile javascript data visualization library available you need only quick peek at the interactive examples on the official website to understand the difference from regular graph-building library see of courseothers are available that we haven' mentioned you can also get visualization libraries that only come with trial period and no free community editionsuch as wijmokendoand fusioncharts they are worth looking into because they also provide support and guarantee regular updates you have options but why or when would you even consider building your own interface with html instead of using alternatives such as sap' businessobjectssas jmptableauclickviewor one of the many othershere are few reasonsno budget--when you work in startup or other small companythe licensing costs accompanying this kind of software can be high
high accessibility--the data science application is meant to release results to any kind of userespecially people who might only have browser at their disposal--your own customersfor instance data visualization in html runs fluently on mobile big pools of talent out there--although there aren' that many tableau developersscads of people have web-development skills when planning projectit' important to take into account whether you can staff it quick release--going through the entire it cycle might take too long at your companyand you want people to enjoy your analysis quickly once your interface is available and being usedit can take all the time they want to industrialize the product prototyping --the better you can show it its purpose and what it should be capable ofthe easier it is for them to build or buy sustainable application that does what you want it to do customizability--although the established software packages are great at what they doan application can never be as customized as when you create it yourself and why wouldn' you do thiscompany policy--this is the biggest oneit' not allowed large companies have it backup teams that allow only certain number of tools to be used so they can keep their supporting role under control you have an experienced team of reporters at your disposal--you' be doing their joband they might come after you with pitchforks your tool does allow enough customization to suit your taste--several of the bigger platforms are browser interfaces with javascript running under the hood tableaubusinessobjects webisas visual analyticsand so on all have html interfacestheir tolerance to customization might grow over time the front end of any application can win the hearts of the crowd all the hard work you put into data preparation and the fancy analytics you applied is only worth as much as you can convey to those who use it now you're on the right track to achieve this on this positive note we'll conclude this summary this focused on the last part of the data science processand our goal was to build data science application where the end user is provided with an interactive dashboard after going through all the steps of the data science processwe're presented with cleanoften compacted or information densedata this way we can query less data and get the insights we want in our examplethe pharmacy stock data is considered thoroughly cleaned and prepared and this should always be the case by the time the information reaches the end user
data visualization to the end user javascript-based dashboards are perfect for quickly granting access to your data science results because they only require the user to have web browser alternatives existsuch as qlik (crossfilter is mapreduce libraryone of many javascript mapreduce librariesbut it has proven its stability and is being developed and used by squarea company that does monetary transactions applying mapreduce is effectiveeven on single node and in browserit increases the calculation speed dc js is chart library build on top of js and crossfilter that allows for quick browser dashboard building we explored the data set of hospital pharmacy and built an interactive dashboard for pharmacists the strength of dashboard is its self-service naturethey don' always need reporter or data scientist to bring them the insights they crave data visualization alternatives are availableand it' worth taking the time to find the one that suits your needs best there are multiple reasons why you' create your own custom reports instead of opting for the (often more expensivecompany tools out thereno budget--startups can' always afford every tool high accessibility--everyone has browser available talent--(comparativelyeasy access to javascript developers quick release--it cycles can take while prototyping-- prototype application can provide and leave time for it to build the production version customizability--sometimes you just want it exactly as your dreams picture it of course there are reasons against developing your own applicationcompany policy--application proliferation isn' good thing and the company might want to prevent this by restricting local development mature reporting team--if you have good reporting departmentwhy would you still bothercustomization is satisfactory--not everyone wants the shiny stuffbasic can be enough congratulationsyou've made it to the end of this book and the true beginning of your career as data scientist we hope you had ample fun reading and working your way through the examples and case studies now that you have basic insight into the world of data scienceit' up to you to choose path the story continuesand we all wish you great success in your quest of becoming the greatest data scientist who has ever livedmay we meet again someday ;
setting up elasticsearch in this appendixwe'll cover installing and setting up the elasticsearch database used in and instructions for both linux and windows installations are included note that if you get into trouble or want further information on elasticsearchit has pretty decent documentation you can find located at www elastic co/guide/en/elasticsearch/reference/ /setup html note elasticsearch is dependent on javaso we'll cover how to install that as well linux installation first check to see if you have java already installed on your machine you can check your java version in console window with java -version if java is installedyou'll see response like the one in figure you'll need at least java to run the version of elasticsearch we use in this book ( noteelasticsearch had moved on to version by the time this book was releasedbut while code might change slightlythe core principles remain the same figure or higher checking the java version in linux elasticsearch requires java
appendix setting up elasticsearch if java isn' installed or you don' have high enough versionelasticsearch recommends the oracle version of java use the following console commands to install it sudo add-apt-repository ppa:webupd team/java sudo apt-get install oracle-java -installer now you can install elasticsearch add the elasticsearch repowhich is the latest one at the time of writingto your repo list and then install it with the following commands sudo add-apt-repository "deb elasticsearch/ /debian stable mainsudo apt-get update &sudo apt-get install elasticsearch to make sure elasticsearch will start on rebootrun the following command sudo update-rc elasticsearch defaults turn on elasticsearch see figure sudo /etc/init /elasticsearch start figure starting elasticsearch on linux if linux is your local computeropen browser and go to localhost: is the default port for the elasticsearch api see figure figure the elasticsearch welcome screen on localhost
windows installation the elasticsearch welcome screen should greet you notice your database even has name the name is picked from the pool of marvel characters and changes every time you reboot your database in productionhaving an inconsistent and non-unique name such as this can be problematic the instance you started is single node of what could be part of huge distributed cluster if all of these nodes change names on rebootit becomes nearly impossible to track them with logs in case of trouble elasticsearch takes pride in the fact it has little need for configuration to get you started and is distributed by nature while this is most certainly truethings such as this random name prove that deploying an actual multi-node setup will require you to think twice about certain default settings luckily elasticsearch has adequate documentation on almost everythingincluding deployment (elasticsearch/guide/current/deploy htmlmulti-node elasticsearch deployment isn' in the scope of this but it' good to keep in mind windows installation inwindowselasticsearch also requires at least java --the jre and the jdk--to be installed and for the java_home variable to be pointing at the java folder download the windows installers for java from after installation make sure your java_home windows environment variable points to where you installed the java development kit you can find your environment variables in system control panel advanced system settings see figure figure the java_home variable set to the java install folder
appendix setting up elasticsearch attempting an install before you have an adequate java version will result in an error see figure figure the elasticsearch install fails when java_home is not set correctly installing on pc with limited rights sometimes you want to try piece of software but you aren' free to install your own programs if that' the casedon' despairportable jdks are out there when you find one of those you can temporarily set your java_home variable to the path of the portable jdk and start elasticsearch this way you don' even need to install elasticsearch if you're only checking it out see figure figure starting elasticsearch without an installation this is only recommended for testing purposes on computer where you have limited rights
now that you have java installed and set upyou can install elasticsearch download the elasticsearch zip package manually from will now become your self-contained database if you have an ssd driveconsider giving it place therebecause it significantly increases the speed of elasticsearch if you already have windows command window opendon' use it for the installationopen fresh one instead the environment variables in the open window aren' up to date anymore change the directory to your elasticsearch /bin folder and install using the service install command see figure figure an elasticsearch windows -bit installation the database should now be ready to start use the service start command see figure figure elasticsearch starts up node on windows
appendix setting up elasticsearch if you want to stop the serverissue the service stop command open your browser of choice and put localhost: in the address bar if the elasticsearch welcome screen appears (figure )you've successfully installed elasticsearch figure the elasticsearch welcome screen on localhost
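If you want to verify the installation from Python rather than the browser, a quick sanity check could look like the sketch below. It assumes the elasticsearch client package is installed in your environment (pip install elasticsearch, ideally a client version matching your server) and that the node runs on its default address.

from elasticsearch import Elasticsearch

es = Elasticsearch()      # connects to http://localhost:9200 by default
print(es.ping())          # True if the node answers
print(es.info())          # the same cluster info the welcome screen shows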
setting up neo in this appendixwe'll cover installing and setting up the neo community edition database used in instructions for both linux and windows installations are included linux installation to install neo community edition on linuxuse your command line as instructed hereneo technology provides this debian repository to make it easy to install neo it includes three repositoriesstable--all neo releasesexcept as noted below you should choose this by default testing--pre-release versions (milestones and release candidatesoldstable--no longer actively usedthis repository contains patch releases for old minor versions if you can' find what you need in stablethen look here to use the new stable packagesyou need to run the commands below as root (note that we use sudo below)sudo - wget - import our signing key echo 'deb neo list create an apt sources list file aptitude update - find out about the files in our repository aptitude install neo - install neo jcommunity edition you could replace stable with testing if you want newer (but unsupportedbuild of neo if you' like different editionyou can runapt-get install neo -advanced
appendix setting up neo or apt-get install neo -enterprise windows installation to install the neo community edition on windows go to following screen will appear save this file and run it after installationyou'll get new pop up that gives you the option to choose the default database location or alternatively browse to find another location to use as the database location after making your choicepress start and you're ready to go in few secondsthe database will be ready to use if you want to stop the server you can just press the stop button
open your browser of choice and put localhost: in the address bar you have arrived at the neo browser when the database access asks for authenticationuse the username and password "neo "then press connect in the following window you can set your own password now you can input your cypher queries and consult your nodesrelationshipsand results
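To check the connection from Python, a small test along these lines should work, assuming the py2neo 2.x client used elsewhere in this book (pip install py2neo) and the password you just set in the browser; the password string here is a placeholder.

from py2neo import authenticate, Graph

authenticate("localhost:7474", "neo4j", "yourpassword")   # replace with your own password
graph_db = Graph("http://localhost:7474/db/data/")
print(graph_db.cypher.execute("MATCH (n) RETURN count(n) AS nodes;"))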
installing mysql server in this appendixwe'll cover installing and setting up the mysql database instructions for windows and linux installations are included windows installation the most convenient and recommended method is to download mysql installer (for windowsand let it set up all of the mysql components on your system the following steps explain how to do it download mysql installer from and open it please notice thatunlike the standard mysql installerthe smaller "web-groupversion does automatically include any mysql componentsbut will only download the ones you choose to install feel free to pick either installer see figure figure download options of mysql installers for windows
select the suitable setup type you prefer the option developer default will install mysql server and other mysql components related to mysql advancementtogether with supportive functions such as mysql workbench you can also choose custom setup if you want to select the mysql items that will be installed on your system and you can always have different versions of mysql operate on single systemif you wish the mysql notifier is useful for monitoring the running instancesstopping themand restarting them you can also add this later using the mysql installer then the mysql installation wizard' instructions will guide you through the setup process it' mostly accepting what' to come development machine will do as the server configuration type make sure to set mysql root password and don' forget what it isbecause you need it later you can run it as windows servicethat wayyou don' need to launch it manually the installation completes if you opted for full installby default mysql servermysql workbenchand mysql notifier will start automatically at computer startup mysql installer can be used to upgrade or change settings of installed components the instance should be up and runningand you can connect to it using the mysql workbench see figure figure mysql workbench interface
linux installation
the official installation instructions for mysql on linux can be found at dev.mysql.com/doc/refman/<version>/en/linux-installation.html. however, certain linux distributions provide specific installation guides for it; ubuntu, for example, has its own instructions for installing mysql. the following instructions are based on the official ones.
first check your hostname:
hostname
hostname -f
the first command should show your short hostname, and the second should show your fully qualified domain name (fqdn).
update your system:
sudo apt-get update
sudo apt-get upgrade
install mysql:
sudo apt-get install mysql-server
during the installation process you'll get a message to choose a password for the mysql root user, as shown in figure select a password for your mysql root user. mysql will bind to localhost by default.
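since the server binds to localhost, you can sanity-check from python that it is listening before logging in. this is only an illustrative sketch; it assumes a local install on the standard mysql port 3306 and merely tests that the port is open.
import socket

# try to open a tcp connection to the default mysql port on the local machine
sock = socket.create_connection(("127.0.0.1", 3306), timeout=3)
print("mysql is accepting connections on localhost:3306")
sock.close()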
log into mysql:
mysql -u root -p
enter the password you chose and you should see the mysql console shown in figure mysql console on linux. finally, create a schema so you have something to refer to in the case study:
create database test;
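to verify from python that the new test schema is reachable, here is a minimal sketch. it assumes a mysql driver is installed (for example the mysql-connector-python package; mysqldb or pymysql would work similarly), and the password shown is a placeholder for the root password you chose.
import mysql.connector  # assumes: pip install mysql-connector-python

# connect to the local server and the freshly created test schema
conn = mysql.connector.connect(host="localhost", user="root",
                               password="your_root_password", database="test")
cursor = conn.cursor()
cursor.execute("SELECT DATABASE()")  # ask the server which schema we are connected to
print(cursor.fetchone())             # should print ('test',)
conn.close()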
setting up anaconda with a virtual environment
anaconda is a python code package that's especially useful for data science. the default installation will have many tools a data scientist might use. in our book we'll use the 32-bit version because it often remains more stable with many python packages (especially the sql ones). while we recommend using anaconda, this is in no way required. in this appendix we'll cover installing and setting up anaconda. instructions for linux and windows installations are included, followed by environment setup instructions. if you know a thing or two about using python packages, feel free to do it your own way; for instance, you could use the virtualenv and pip libraries.
linux installation
to install anaconda on linux, go to the anaconda download page and download the linux installer for the python version used in this book. when the download is done, use the following command to install anaconda:
bash anaconda-<version>-linux.sh
(use the exact name of the installer file you downloaded.) we need to get the conda command working in the linux command prompt; anaconda will ask you whether it should take care of that, so answer "yes".
windows installation
to install anaconda on windows, go to the anaconda download page, download the windows installer, and run it.
setting up the environment
once the installation is done, it's time to set up an environment. an interesting overview of conda vs pip commands can be found at the conda-pip-virtualenv translator page (conda-pip-virtualenv-translator.html). use the following command in your operating system's command line; replace "nameoftheenv" with the actual name you want your environment to have:
conda create -n nameoftheenv anaconda
make sure you agree to proceed with the setup by typing "y" at the end of the list it shows, as shown in figure anaconda virtual environment setup in the windows command prompt, and after a while you should be ready to go.
anaconda will create the environment in its default location, but options are available if you want to change the location. now that you have an environment, you can activate it in the command line:
in windows, type activate nameoftheenv
in linux, type source activate nameoftheenv
or you can point to it with your python ide (integrated development environment). if you activate it in the command line, you can start up the jupyter (or ipython) ide with the following command:
ipython notebook
jupyter (formerly known as ipython) is an interactive python development interface that runs in the browser. it's useful for adding structure to your code. for every package mentioned in the book that isn't installed in the default anaconda environment:
a) activate your environment in the command line
b) use either conda install libraryname or pip install libraryname in the command line
for more information on the pip install, visit the pip documentation; for more information on the anaconda conda install, visit the conda documentation.
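once the environment is activated (or selected in your ide), a quick way to confirm that python is really running from it is to check from within python itself; this is just an illustrative check, not something the book requires.
import sys
import numpy

# the interpreter path should point inside your environment folder, e.g. .../envs/nameoftheenv/
print(sys.executable)

# anaconda environments ship with the scientific stack preinstalled; printing a version is a simple sanity check
print(numpy.__version__)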
classification and regression generalizationoverfittingand underfitting relation of model complexity to dataset size supervised machine learning algorithms some sample datasets -nearest neighbors linear models naive bayes classifiers decision trees ensembles of decision trees kernelized support vector machines neural networks (deep learninguncertainty estimates from classifiers the decision function predicting probabilities uncertainty in multiclass classification summary and outlook unsupervised learning and preprocessing types of unsupervised learning challenges in unsupervised learning preprocessing and scaling different kinds of preprocessing applying data transformations scaling training and test data the same way the effect of preprocessing on supervised learning dimensionality reductionfeature extractionand manifold learning principal component analysis (pcanon-negative matrix factorization (nmfmanifold learning with -sne clustering -means clustering agglomerative clustering dbscan comparing and evaluating clustering algorithms summary of clustering methods summary and outlook representing data and engineering features categorical variables one-hot-encoding (dummy variablesiv table of contents
binningdiscretizationlinear modelsand trees interactions and polynomials univariate nonlinear transformations automatic feature selection univariate statistics model-based feature selection iterative feature selection utilizing expert knowledge summary and outlook model evaluation and improvement cross-validation cross-validation in scikit-learn benefits of cross-validation stratified -fold cross-validation and other strategies grid search simple grid search the danger of overfitting the parameters and the validation set grid search with cross-validation evaluation metrics and scoring keep the end goal in mind metrics for binary classification metrics for multiclass classification regression metrics using evaluation metrics in model selection summary and outlook algorithm chains and pipelines parameter selection with preprocessing building pipelines using pipelines in grid searches the general pipeline interface convenient pipeline creation with make_pipeline accessing step attributes accessing attributes in grid-searched pipeline grid-searching preprocessing steps and model parameters grid-searching which model to use summary and outlook working with text data types of data represented as strings table of contents
representing text data as bag of words applying bag-of-words to toy dataset bag-of-words for movie reviews stopwords rescaling the data with tf-idf investigating model coefficients bag-of-words with more than one word ( -gramsadvanced tokenizationstemmingand lemmatization topic modeling and document clustering latent dirichlet allocation summary and outlook wrapping up approaching machine learning problem humans in the loop from prototype to production testing production systems building your own estimator where to go from here theory other machine learning frameworks and packages rankingrecommender systemsand other kinds of learning probabilistic modelinginferenceand probabilistic programming neural networks scaling to larger datasets honing your skills conclusion index vi table of contents
machine learning is an integral part of many commercial applications and research projects todayin areas ranging from medical diagnosis and treatment to finding your friends on social networks many people think that machine learning can only be applied by large companies with extensive research teams in this bookwe want to show you how easy it can be to build machine learning solutions yourselfand how to best go about it with the knowledge in this bookyou can build your own system for finding out how people feel on twitteror making predictions about global warming the applications of machine learning are endless andwith the amount of data available todaymostly limited by your imagination who should read this book this book is for current and aspiring machine learning practitioners looking to implement solutions to real-world machine learning problems this is an introductory book requiring no previous knowledge of machine learning or artificial intelligence (aiwe focus on using python and the scikit-learn libraryand work through all the steps to create successful machine learning application the methods we introduce will be helpful for scientists and researchersas well as data scientists working on commercial applications you will get the most out of the book if you are somewhat familiar with python and the numpy and matplotlib libraries we made conscious effort not to focus too much on the mathbut rather on the practical aspects of using machine learning algorithms as mathematics (probability theoryin particularis the foundation upon which machine learning is builtwe won' go into the analysis of the algorithms in great detail if you are interested in the mathematics of machine learning algorithmswe recommend the book the elements of statistical learning (springerby trevor hastierobert tibshiraniand jerome friedmanwhich is available for free at the authorswebsite we will also not describe how to write machine learning algorithms from scratchand will instead focus on vii
libraries why we wrote this book there are many books on machine learning and ai howeverall of them are meant for graduate students or phd students in computer scienceand they're full of advanced mathematics this is in stark contrast with how machine learning is being usedas commodity tool in research and commercial applications todayapplying machine learning does not require phd howeverthere are few resources out there that fully cover all the important aspects of implementing machine learning in practicewithout requiring you to take advanced math courses we hope this book will help people who want to apply machine learning without reading up on yearsworth of calculuslinear algebraand probability theory navigating this book this book is organized roughly as follows introduces the fundamental concepts of machine learning and its applicationsand describes the setup we will be using throughout the book and describe the actual machine learning algorithms that are most widely used in practiceand discuss their advantages and shortcomings discusses the importance of how we represent data that is processed by machine learningand what aspects of the data to pay attention to covers advanced methods for model evaluation and parameter tuningwith particular focus on cross-validation and grid search explains the concept of pipelines for chaining models and encapsulating your workflow shows how to apply the methods described in earlier to text dataand introduces some text-specific processing techniques offers high-level overviewand includes references to more advanced topics while and provide the actual algorithmsunderstanding all of these algorithms might not be necessary for beginner if you need to build machine learning system asapwe suggest starting with and the opening sections of which introduce all the core concepts you can then skip to "summary and outlookon page in which includes list of all the supervised models that we cover choose the model that best fits your needs and flip back to read the viii preface
uate and tune your model online resources while studying this bookdefinitely refer to the scikit-learn website for more indepth documentation of the classes and functionsand many examples there is also video course created by andreas muller"advanced machine learning with scikitlearn,that supplements this book you can find it at advanced_machine_learning_scikit-learn conventions used in this book the following typographical conventions are used in this bookitalic indicates new termsurlsemail addressesfilenamesand file extensions constant width used for program listingsas well as within paragraphs to refer to program elements such as variable or function namesdatabasesdata typesenvironment variablesstatementsand keywords also used for commands and module and package names constant width bold shows commands or other text that should be typed literally by the user constant width italic shows text that should be replaced with user-supplied values or by values determined by context this element signifies tip or suggestion this element signifies general note preface ix
using code examples supplemental material (code examplesipython notebooksetc is available for download at this book is here to help you get your job done in generalif example code is offered with this bookyou may use it in your programs and documentation you do not need to contact us for permission unless you're reproducing significant portion of the code for examplewriting program that uses several chunks of code from this book does not require permission selling or distributing cd-rom of examples from 'reilly books does require permission answering question by citing this book and quoting example code does not require permission incorporating significant amount of example code from this book into your product' documentation does require permission we appreciatebut do not requireattribution an attribution usually includes the titleauthorpublisherand isbn for example"an introduction to machine learning with python by andreas muller and sarah guido ( 'reillycopyright sarah guido and andreas muller- - if you feel your use of code examples falls outside fair use or the permission given abovefeel free to contact us at permissions@oreilly com safari(rbooks online safari books online is an on-demand digital library that delivers expert content in both book and video form from the world' leading authors in technology and business technology professionalssoftware developersweb designersand business and creative professionals use safari books online as their primary resource for researchproblem solvinglearningand certification training safari books online offers range of plans and pricing for enterprisegovernmenteducationand individuals members have access to thousands of bookstraining videosand prepublication manuscripts in one fully searchable database from publishers like 'reilly mediaprentice hall professionaladdison-wesley professionalmicrosoft presssamsquex preface
mannibm redbookspacktadobe pressft pressapressmanningnew ridersmcgraw-hilljones bartlettcourse technologyand hundreds more for more information about safari books onlineplease visit us online how to contact us please address comments and questions concerning this book to the publishero'reilly mediainc gravenstein highway north sebastopolca (in the united states or canada(international or local(faxwe have web page for this bookwhere we list errataexamplesand any additional information you can access this page at to comment or ask technical questions about this booksend email to bookquestions@oreilly com for more information about our bookscoursesconferencesand newssee our website at find us on facebookfollow us on twitterwatch us on youtubeacknowledgments from andreas without the help and support of large group of peoplethis book would never have existed would like to thank the editorsmeghan blanchettebrian macdonaldand in particular dawn schanafeltfor helping sarah and me make this book reality want to thank my reviewersthomas caswellolivier griselstefan van der waltand john myles whitewho took the time to read the early versions of this book and provided me with invaluable feedback--in addition to being some of the cornerstones of the scientific open source ecosystem preface xi
especially the contributors to scikit-learn without the support and help from this communityin particular from gael varoquauxalex gramfortand olivier griseli would never have become core contributor to scikit-learn or learned to understand this package as well as do now my thanks also go out to all the other contributors who donate their time to improve and maintain this package ' also thankful for the discussions with many of my colleagues and peers that helped me understand the challenges of machine learning and gave me ideas for structuring textbook among the people talk to about machine learningi specifically want to thank brian mcfeedaniela huttenkoppenjoel nothmangilles louppehugo bowne-andersonsven kreisalice zhengkyunghyun chopablo baberasand dan cervone my thanks also go out to rachel rakovwho was an eager beta tester and proofreader of an early version of this bookand helped me shape it in many ways on the personal sidei want to thank my parentsharald and margotand my sistermiriamfor their continuing support and encouragement also want to thank the many people in my life whose love and friendship gave me the energy and support to undertake such challenging task from sarah would like to thank meg blanchettewithout whose help and guidance this project would not have even existed thanks to celia la and brian carlson for reading in the early days thanks to the 'reilly folks for their endless patience and finallythanks to dtsfor your everlasting and endless support xii preface
introduction machine learning is about extracting knowledge from data it is research field at the intersection of statisticsartificial intelligenceand computer science and is also known as predictive analytics or statistical learning the application of machine learning methods has in recent years become ubiquitous in everyday life from automatic recommendations of which movies to watchto what food to order or which products to buyto personalized online radio and recognizing your friends in your photosmany modern websites and devices have machine learning algorithms at their core when you look at complex website like facebookamazonor netflixit is very likely that every part of the site contains multiple machine learning models outside of commercial applicationsmachine learning has had tremendous influence on the way data-driven research is done today the tools introduced in this book have been applied to diverse scientific problems such as understanding starsfinding distant planetsdiscovering new particlesanalyzing dna sequencesand providing personalized cancer treatments your application doesn' need to be as large-scale or world-changing as these examples in order to benefit from machine learningthough in this we will explain why machine learning has become so popular and discuss what kinds of problems can be solved using machine learning thenwe will show you how to build your first machine learning modelintroducing important concepts along the way why machine learningin the early days of "intelligentapplicationsmany systems used handcoded rules of "if and "elsedecisions to process data or adjust to user input think of spam filter whose job is to move the appropriate incoming email messages to spam folder you could make up blacklist of words that would result in an email being marked as
"intelligentapplication manually crafting decision rules is feasible for some applicationsparticularly those in which humans have good understanding of the process to model howeverusing handcoded rules to make decisions has two major disadvantagesthe logic required to make decision is specific to single domain and task changing the task even slightly might require rewrite of the whole system designing rules requires deep understanding of how decision should be made by human expert one example of where this handcoded approach will fail is in detecting faces in images todayevery smartphone can detect face in an image howeverface detection was an unsolved problem until as recently as the main problem is that the way in which pixels (which make up an image in computerare "perceivedby the computer is very different from how humans perceive face this difference in representation makes it basically impossible for human to come up with good set of rules to describe what constitutes face in digital image using machine learninghoweversimply presenting program with large collection of images of faces is enough for an algorithm to determine what characteristics are needed to identify face problems machine learning can solve the most successful kinds of machine learning algorithms are those that automate decision-making processes by generalizing from known examples in this settingwhich is known as supervised learningthe user provides the algorithm with pairs of inputs and desired outputsand the algorithm finds way to produce the desired output given an input in particularthe algorithm is able to create an output for an input it has never seen before without any help from human going back to our example of spam classificationusing machine learningthe user provides the algorithm with large number of emails (which are the input)together with information about whether any of these emails are spam (which is the desired outputgiven new emailthe algorithm will then produce prediction as to whether the new email is spam machine learning algorithms that learn from input/output pairs are called supervised learning algorithms because "teacherprovides supervision to the algorithms in the form of the desired outputs for each example that they learn from while creating dataset of inputs and outputs is often laborious manual processsupervised learning algorithms are well understood and their performance is easy to measure if your application can be formulated as supervised learning problemand you are able to introduction
able to solve your problem examples of supervised machine learning tasks includeidentifying the zip code from handwritten digits on an envelope here the input is scan of the handwritingand the desired output is the actual digits in the zip code to create dataset for building machine learning modelyou need to collect many envelopes then you can read the zip codes yourself and store the digits as your desired outcomes determining whether tumor is benign based on medical image here the input is the imageand the output is whether the tumor is benign to create dataset for building modelyou need database of medical images you also need an expert opinionso doctor needs to look at all of the images and decide which tumors are benign and which are not it might even be necessary to do additional diagnosis beyond the content of the image to determine whether the tumor in the image is cancerous or not detecting fraudulent activity in credit card transactions here the input is record of the credit card transactionand the output is whether it is likely to be fraudulent or not assuming that you are the entity distributing the credit cardscollecting dataset means storing all transactions and recording if user reports any transaction as fraudulent an interesting thing to note about these examples is that although the inputs and outputs look fairly straightforwardthe data collection process for these three tasks is vastly different while reading envelopes is laboriousit is easy and cheap obtaining medical imaging and diagnoseson the other handrequires not only expensive machinery but also rare and expensive expert knowledgenot to mention the ethical concerns and privacy issues in the example of detecting credit card frauddata collection is much simpler your customers will provide you with the desired outputas they will report fraud all you have to do to obtain the input/output pairs of fraudulent and nonfraudulent activity is wait unsupervised algorithms are the other type of algorithm that we will cover in this book in unsupervised learningonly the input data is knownand no known output data is given to the algorithm while there are many successful applications of these methodsthey are usually harder to understand and evaluate examples of unsupervised learning includeidentifying topics in set of blog posts if you have large collection of text datayou might want to summarize it and find prevalent themes in it you might not know beforehand what these topics areor how many topics there might be thereforethere are no known outputs why machine learning
given set of customer recordsyou might want to identify which customers are similarand whether there are groups of customers with similar preferences for shopping sitethese might be "parents,"bookworms,or "gamers because you don' know in advance what these groups might beor even how many there areyou have no known outputs detecting abnormal access patterns to website to identify abuse or bugsit is often helpful to find access patterns that are different from the norm each abnormal pattern might be very differentand you might not have any recorded instances of abnormal behavior because in this example you only observe trafficand you don' know what constitutes normal and abnormal behaviorthis is an unsupervised problem for both supervised and unsupervised learning tasksit is important to have representation of your input data that computer can understand often it is helpful to think of your data as table each data point that you want to reason about (each emaileach customereach transactionis rowand each property that describes that data point (saythe age of customer or the amount or location of transactionis column you might describe users by their agetheir genderwhen they created an accountand how often they have bought from your online shop you might describe the image of tumor by the grayscale values of each pixelor maybe by using the sizeshapeand color of the tumor each entity or row here is known as sample (or data pointin machine learningwhile the columns--the properties that describe these entities--are called features later in this book we will go into more detail on the topic of building good representation of your datawhich is called feature extraction or feature engineering you should keep in mindhoweverthat no machine learning algorithm will be able to make prediction on data for which it has no information for exampleif the only feature that you have for patient is their last nameno algorithm will be able to predict their gender this information is simply not contained in your data if you add another feature that contains the patient' first nameyou will have much better luckas it is often possible to tell the gender by person' first name knowing your task and knowing your data quite possibly the most important part in the machine learning process is understanding the data you are working with and how it relates to the task you want to solve it will not be effective to randomly choose an algorithm and throw your data at it it is necessary to understand what is going on in your dataset before you begin building model each algorithm is different in terms of what kind of data and what problem setting it works best for while you are building machine learning solutionyou should answeror at least keep in mindthe following questions introduction
that questionwhat is the best way to phrase my question(sas machine learning problemhave collected enough data to represent the problem want to solvewhat features of the data did extractand will these enable the right predictionshow will measure success in my applicationhow will the machine learning solution interact with other parts of my research or business productin larger contextthe algorithms and methods in machine learning are only one part of greater process to solve particular problemand it is good to keep the big picture in mind at all times many people spend lot of time building complex machine learning solutionsonly to find out they don' solve the right problem when going deep into the technical aspects of machine learning (as we will in this book)it is easy to lose sight of the ultimate goals while we will not discuss the questions listed here in detailwe still encourage you to keep in mind all the assumptions that you might be makingexplicitly or implicitlywhen you start building machine learning models why pythonpython has become the lingua franca for many data science applications it combines the power of general-purpose programming languages with the ease of use of domain-specific scripting languages like matlab or python has libraries for data loadingvisualizationstatisticsnatural language processingimage processingand more this vast toolbox provides data scientists with large array of generaland special-purpose functionality one of the main advantages of using python is the ability to interact directly with the codeusing terminal or other tools like the jupyter notebookwhich we'll look at shortly machine learning and data analysis are fundamentally iterative processesin which the data drives the analysis it is essential for these processes to have tools that allow quick iteration and easy interaction as general-purpose programming languagepython also allows for the creation of complex graphical user interfaces (guisand web servicesand for integration into existing systems scikit-learn scikit-learn is an open source projectmeaning that it is free to use and distributeand anyone can easily obtain the source code to see what is going on behind the why python