track of all datasets that would be used for analysis, and additional data sources if any are necessary. This document can be combined with the documentation from the subsequent stages of this phase.

Data Description

Data description involves carrying out initial analysis on the data to understand more about the data, its source, volume, attributes, and relationships. Once these details are documented, any shortcomings, if noted, should be communicated to the relevant personnel. The following factors are crucial to building a proper data description document:

- Data sources (SQL, NoSQL, Big Data), record of origin (ROO), record of reference (ROR)
- Data volume (size, number of records, total databases, tables)
- Data attributes and their description (variables, data types)
- Relationship and mapping schemes (understand attribute representations)
- Basic descriptive statistics (mean, median, variance)
- Focus on which attributes are important for the business

Exploratory Data Analysis

Exploratory data analysis, also known as EDA, is one of the first major analysis stages in the lifecycle. Here, the main objective is to explore and understand the data in detail. You can make use of descriptive statistics, plots, charts, and visualizations to look at the various data attributes, find associations and correlations, and make note of data quality problems, if any. Following are some of the major tasks in this stage:

- Explore, describe, and visualize data attributes
- Select data and attribute subsets that seem most important for the problem
- Perform extensive analysis to find correlations and associations and test hypotheses
- Note missing data points, if any

Data Quality Analysis

Data quality analysis is the final stage in the data understanding phase, where we analyze the quality of data in our datasets and document potential errors, shortcomings, and issues that need to be resolved before analyzing the data further or starting modeling efforts. The main focus of data quality analysis involves the following:

- Missing values
- Inconsistent values
- Wrong information due to data errors (manual/automated)
- Wrong metadata information
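The kinds of descriptive statistics and data quality checks mentioned above are straightforward to carry out with pandas. The following is a minimal, illustrative sketch; the file name dataset.csv and its columns are hypothetical and not part of the original discussion.

In [ ]: import pandas as pd

        # load a dataset (hypothetical file, for illustration only)
        df = pd.read_csv('dataset.csv')

        # basic descriptive statistics (mean, quartiles, spread)
        print(df.describe())

        # attribute names and data types
        print(df.dtypes)

        # data quality: count of missing values per attribute
        print(df.isnull().sum())

        # pairwise correlations between numeric attributes
        print(df.select_dtypes(include='number').corr())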
Data Preparation

The third phase in the CRISP-DM process takes place after gaining enough knowledge about the business problem and the relevant dataset. Data preparation is mainly a set of tasks that are performed to clean, wrangle, curate, and prepare the data before running any analytical or machine learning methods and building models. We will briefly discuss some of the major tasks under the data preparation phase in this section. An important point to remember here is that data preparation is usually the most time-consuming phase in the data mining lifecycle and often takes up a large share of the time in the overall project. However, this phase should be taken very seriously because, as we have discussed multiple times before, bad data will lead to bad models and poor performance and results.

Data Integration

The process of data integration is mainly done when we have multiple datasets that we might want to integrate or merge. This can be done in two ways:

- Appending several datasets by combining them, which is typically done for datasets having the same attributes
- Merging several datasets having different attributes or columns, by using common fields like keys

Data Wrangling

The process of data wrangling or data munging involves data processing, cleaning, normalization, and formatting. Data in its raw form is rarely consumable by machine learning methods to build models. Hence we need to process the data based on its form, clean underlying errors and inconsistencies, and format it into more consumable formats for ML algorithms. Following are the main tasks relevant to data wrangling:

- Handling missing values (remove rows, impute missing values)
- Handling data inconsistencies (delete rows or attributes, fix inconsistencies)
- Fixing incorrect metadata and annotations
- Handling ambiguous attribute values
- Curating and formatting data into necessary formats (CSV, JSON, relational)

Attribute Generation and Selection

Data is comprised of observations or samples (rows) and attributes or features (columns). The process of attribute generation is also known as feature extraction and engineering in machine learning terminology. Attribute generation is basically creating new attributes or variables from existing attributes based on some rules, logic, or hypotheses. A simple example would be creating a new numeric variable called age based on two date-time fields, current_date and birth_date, for a dataset of employees in an organization (a short sketch of this appears below). There are several techniques with regard to attribute generation that we discuss in future chapters. Attribute selection is basically selecting a subset of features or attributes from the dataset based on parameters like attribute importance, quality, relevancy, assumptions, and constraints. Sometimes even machine learning methods are used to select relevant attributes based on the data. This is popularly known as feature selection in machine learning terminology.
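To make these tasks concrete, here is a minimal pandas sketch of appending and merging datasets and of generating an age attribute from date fields. The DataFrames, column names (emp_id, dept, birth_date), and values are hypothetical and purely illustrative.

In [ ]: import pandas as pd

        # two hypothetical employee datasets with the same attributes
        df_a = pd.DataFrame({'emp_id': [1, 2], 'dept': ['HR', 'IT']})
        df_b = pd.DataFrame({'emp_id': [3, 4], 'dept': ['IT', 'Sales']})

        # data integration: appending datasets having the same attributes
        employees = pd.concat([df_a, df_b], ignore_index=True)

        # data integration: merging datasets with different attributes on a common key
        birthdays = pd.DataFrame({'emp_id': [1, 2, 3, 4],
                                  'birth_date': ['1985-04-12', '1990-07-23',
                                                 '1988-01-05', '1992-11-30']})
        employees = employees.merge(birthdays, on='emp_id')

        # attribute generation: derive age from current_date and birth_date
        employees['birth_date'] = pd.to_datetime(employees['birth_date'])
        current_date = pd.Timestamp.now()
        employees['age'] = (current_date - employees['birth_date']).dt.days // 365
        employees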
Modeling

The fourth phase in the CRISP-DM process is the core phase, where most of the analysis takes place with regard to using the clean, formatted data and its attributes to build models that solve business problems. This is an iterative process, as depicted in the figure shown earlier, along with model evaluation and all the preceding steps leading up to modeling. The basic idea is to build multiple models iteratively, trying to get to the best model that satisfies our success criteria, data mining objectives, and business objectives. We briefly talk about some of the major stages relevant to modeling in this section.

Selecting Modeling Techniques

In this stage, we pick up a list of relevant machine learning and data mining tools, frameworks, techniques, and algorithms listed in the "Business Understanding" phase. Techniques that are proven to be robust and useful in solving the problem are usually selected based on inputs and insights from data analysts and data scientists. These are mainly decided by the data currently available, business goals, data mining goals, algorithm requirements, and constraints.

Model Building

The process of model building is also known as training the model, using data and features from our dataset. A combination of data (features) and a machine learning algorithm together gives us a model that tries to generalize on the training data and give the necessary results in the form of insights and/or predictions. Generally, various algorithms are used to try out multiple modeling approaches on the same data to solve the same problem, in order to get the best model that performs well and gives outputs closest to the business success criteria. Key things to keep track of here are the models created, the model parameters being used, and their results.

Model Evaluation and Tuning

In this stage, we evaluate each model based on several metrics like model accuracy, precision, recall, F1 score, and so on. We also tune the model parameters based on techniques like grid search and cross validation to get the model that gives us the best results (a brief grid search sketch appears at the end of this section). Tuned models are also matched with the data mining goals to see if we are able to get the desired results as well as performance. Model tuning is also termed hyperparameter optimization in the machine learning world.

Model Assessment

Once we have models that are providing desirable and relevant results, a detailed assessment of each model is performed based on the following parameters:

- Model performance is in line with the defined success criteria
- Reproducible and consistent results from the model
- Scalability, robustness, and ease of deployment
- Future extensibility of the model
- Model evaluation gives satisfactory results
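Grid search with cross validation, as mentioned under model evaluation and tuning, can be sketched with scikit-learn as follows. This is a minimal, illustrative example assuming a synthetic classification dataset and a logistic regression model; the parameter grid values are arbitrary choices, not prescribed ones.

In [ ]: from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV

        # synthetic data, purely for illustration
        X, y = make_classification(n_samples=200, n_features=10, random_state=42)

        # hyperparameter grid to search over (arbitrary example values)
        param_grid = {'C': [0.01, 0.1, 1, 10]}

        # grid search with 5-fold cross validation, scored on accuracy
        grid = GridSearchCV(LogisticRegression(solver='liblinear'),
                            param_grid, cv=5, scoring='accuracy')
        grid.fit(X, y)

        print(grid.best_params_)
        print(grid.best_score_)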
Evaluation

The fifth phase in the CRISP-DM process takes place once we have the final models from the modeling phase that satisfy the necessary success criteria with respect to our data mining goals and have the desired performance and results with regard to model evaluation metrics like accuracy. The evaluation phase involves carrying out a detailed assessment and review of the final models and the results obtained from them. Some of the main points that are evaluated in this phase are as follows:

- Ranking final models based on the quality of results and their relevancy, based on alignment with business objectives
- Any assumptions or constraints that were invalidated by the models
- Cost of deployment of the entire machine learning pipeline, from data extraction and processing to modeling and predictions
- Any pain points in the whole process? What should be recommended? What should be avoided?
- Data sufficiency report based on results
- Final suggestions, feedback, and recommendations from the solutions team and SMEs

Based on the report formed from these points, after discussion, the team can decide whether they want to proceed to the next phase of model deployment or whether a full reiteration is needed, starting from business and data understanding to modeling.

Deployment

The final phase in the CRISP-DM process is all about deploying your selected models to production and making sure the transition from development to production is as seamless as possible. Usually, most organizations follow a standard path-to-production methodology. A proper plan for deployment is built based on the resources required, including servers, hardware, software, and so on. Models are validated, saved, and deployed on the necessary systems and servers. A plan is also put in place for regular monitoring and maintenance of models, to continuously evaluate their performance, check the results and their validity, and retire, replace, and update models as and when needed.

Building Machine Intelligence

The objective of machine learning, data mining, or artificial intelligence is to make our lives easier, automate tasks, and take better decisions. Building machine intelligence involves everything we have learned until now, starting from machine learning concepts to actually implementing and building models and using them in the real world. Machine intelligence can be built using non-traditional computing approaches like machine learning. In this section, we establish full-fledged end-to-end machine learning pipelines based on the CRISP-DM model, which will help us solve real-world problems by building machine intelligence using a structured process.

Machine Learning Pipelines

The best way to solve a real-world machine learning or analytics problem is to use a machine learning pipeline, starting from getting your data to transforming it into information and insights using machine
learning algorithms and techniques. This is more of a technical or solution-based pipeline, and it assumes that several aspects of the CRISP-DM model are already covered, including the following points:

- Business and data understanding
- ML/DM technique selection
- Risk, assumptions, and constraints assessment

A machine learning pipeline will mainly consist of elements related to data retrieval and extraction, preparation, modeling, evaluation, and deployment. The following figure shows a high-level overview of a standard machine learning pipeline with the major phases highlighted in their blocks.

Figure: Standard machine learning pipeline

From the figure, it is evident that there are several major phases in the machine learning pipeline and that they are quite similar to the CRISP-DM process model, which is why we talked about it in detail earlier. The major steps in the pipeline are briefly mentioned here:

- Data retrieval: This is mainly data collection, extraction, and acquisition from various data sources and data stores. We cover data retrieval mechanisms in detail in the chapter "Processing, Wrangling, and Visualizing Data".
- Data preparation: In this step, we pre-process the data, clean it, wrangle it, and manipulate it as needed. Initial exploratory data analysis is also carried out. The next steps involve extracting, engineering, and selecting features/attributes from the data.
- Data processing and wrangling: Mainly concerned with data processing, cleaning, munging, and wrangling, and performing initial descriptive and exploratory data analysis. We cover this in further detail with hands-on examples in the chapter "Processing, Wrangling, and Visualizing Data".
- Feature extraction and engineering: Here, we extract important features or attributes from the raw data and even create or engineer new features from existing features. Details on various feature engineering techniques are covered in the chapter "Feature Engineering and Selection".
- Feature scaling and selection: Data features often need to be normalized and scaled to prevent machine learning algorithms from getting biased. Besides this, we often need to select a subset of all available features based on feature importance and quality. This process is known as feature selection. The chapter "Feature Engineering and Selection" covers these aspects.
- Modeling: In the process of modeling, we usually feed the data features to a machine learning method or algorithm and train the model, typically to optimize a specific cost function, in most cases with the objective of reducing errors and generalizing the representations learned from the data. The chapter "Building, Tuning, and Deploying Models" covers the art and science behind building machine learning models.
- Model evaluation and tuning: Built models are evaluated and tested on validation datasets and, based on metrics like accuracy, F1 score, and others, the model performance is evaluated. Models have various parameters that are tuned in a process called hyperparameter optimization to get models with the best and most optimal results. The chapter "Building, Tuning, and Deploying Models" covers these aspects.
- Deployment and monitoring: Selected models are deployed in production and are constantly monitored based on their predictions and results. Details on model deployment are covered in the chapter "Building, Tuning, and Deploying Models".

Supervised Machine Learning Pipeline

By now we know that supervised machine learning methods are all about working with supervised, labeled data to train models and then predict outcomes for new data samples. Some processes, like feature engineering, scaling, and selection, should always remain constant so that the same features are used for training the model and the same features are extracted from new data samples to feed the model in the prediction phase. Based on our earlier generic machine learning pipeline, the following figure shows a standard supervised machine learning pipeline.

Figure: Supervised machine learning pipeline

You can clearly see the two phases of model training and prediction highlighted in the figure. Also, based on what we mentioned earlier, the same sequence of data processing, wrangling, feature engineering, scaling, and selection is used both for the data used in training the model and for future data samples for which the model predicts outcomes. This is a very important point that you must remember whenever you are building any supervised model. Besides this, as depicted, the model is a combination of a machine
learning (supervised) algorithm and training data features with their corresponding labels. This model will take features from new data samples and output predicted labels in the prediction phase.

Unsupervised Machine Learning Pipeline

Unsupervised machine learning is all about extracting patterns, relationships, associations, and clusters from data. The processes related to feature engineering, scaling, and selection are similar to those in supervised learning. However, there is no concept of pre-labeled data here. Hence the unsupervised machine learning pipeline is slightly different in contrast to the supervised pipeline. The following figure depicts a standard unsupervised machine learning pipeline.

Figure: Unsupervised machine learning pipeline

The figure clearly depicts that no supervised labeled data is used for training the model. With the absence of labels, we just have training data that goes through the same data preparation phase as in the supervised learning pipeline, and we build our unsupervised model with an unsupervised machine learning algorithm and training features. In the prediction phase, we extract features from new data samples and pass them through the model, which gives relevant results according to the type of machine learning task we are trying to perform, which can be clustering, pattern detection, association rules, or dimensionality reduction.

Real-World Case Study: Predicting Student Grant Recommendations

Let's take a step back from what we have learned so far. The main objective until now was to gain a solid grasp of the entire machine learning landscape, understand crucial concepts, build on the basic foundations, and understand how to execute machine learning projects with the help of machine learning pipelines, with the CRISP-DM process model being the source of all inspiration. Let's put all this together and take up a very basic real-world case study by building a supervised machine learning pipeline on a toy dataset. Our major objective is as follows: given several students with multiple attributes like grades, performance, and scores, can you build a model based on past historical data to predict the chance of a student getting a recommendation grant for a research project?

This will be a quick walkthrough with the main intent of depicting how to build and deploy a real-world machine learning pipeline and perform predictions. It will also give you good hands-on experience to get started with machine learning. Do not worry too much if you don't understand the details of each and every line of code; the subsequent chapters cover all the tools, techniques, and frameworks used here in
detail. We will be using Python in this book; you can refer to the chapter "The Python Machine Learning Ecosystem" to understand more about Python and the various tools and frameworks used in machine learning. You can follow along with the code snippets in this section, or open the "predicting student recommendation machine learning pipeline.ipynb" Jupyter notebook by running jupyter notebook in the command line/terminal in the same directory as the notebook. You can then run the relevant code snippets in the notebook from your browser. The next chapter covers Jupyter notebooks in detail.

Objective

You have historical student performance data and their grant recommendation outcomes in the form of a comma separated value file named student_records.csv. Each data sample consists of the following attributes:

- name (the student name)
- overallgrade (overall grade obtained)
- obedient (whether they were diligent during their course of stay)
- researchscore (marks obtained in their research work)
- projectscore (marks obtained in the project)
- recommend (whether they got the grant recommendation)

Your main objective is to build a predictive model based on this data such that you can predict, for any future student, whether they will be recommended for the grant based on their performance attributes.

Data Retrieval

Here, we will leverage the pandas framework to retrieve the data from the CSV file. The following snippet shows us how to retrieve the data and view it.

In [ ]: import pandas as pd

        # turn off warning messages
        pd.options.mode.chained_assignment = None  # default='warn'

        # get data
        df = pd.read_csv('student_records.csv')
        df
Figure: Raw data depicting student records and their recommendations

Now that we can see the data samples showing the records for each student and their corresponding recommendation outcomes in the preceding figure, we will perform the necessary tasks relevant to data preparation.

Data Preparation

Based on the dataset we saw earlier, we do not have any data errors or missing values, hence we will mainly focus on feature engineering and scaling in this section.

Feature Extraction and Engineering

Let's start by extracting the existing features from the dataset and the outcomes into separate variables. The following snippet shows this process (see the following two figures).

In [ ]: # get features and corresponding outcomes
        feature_names = ['overallgrade', 'obedient', 'researchscore', 'projectscore']
        training_features = df[feature_names]

        outcome_name = ['recommend']
        outcome_labels = df[outcome_name]

In [ ]: # view features
        training_features
Figure: Dataset features

In [ ]: # view outcome labels
        outcome_labels

Figure: Dataset recommendation outcome labels for each student
Now that we have extracted our initial available features from the data and their corresponding outcome labels, let's separate out our available features based on their type (numerical and categorical). Types of feature variables are covered in more detail in the chapter "Processing, Wrangling, and Visualizing Data".

In [ ]: # list down features based on type
        numeric_feature_names = ['researchscore', 'projectscore']
        categoricial_feature_names = ['overallgrade', 'obedient']

We will now use the StandardScaler from scikit-learn to scale or normalize our two numeric score-based attributes using the following code.

In [ ]: from sklearn.preprocessing import StandardScaler
        ss = StandardScaler()

        # fit scaler on numeric features
        ss.fit(training_features[numeric_feature_names])

        # scale numeric features now
        training_features[numeric_feature_names] = ss.transform(training_features[numeric_feature_names])

        # view updated feature set
        training_features

Figure: Feature set with scaled numeric attributes
Now that we have successfully scaled our numeric features (see the preceding figure), let's handle our categorical features and carry out the necessary feature engineering based on the following code.

In [ ]: training_features = pd.get_dummies(training_features, columns=categoricial_feature_names)
        # view newly engineered features
        training_features

Figure: Feature set with engineered categorical variables

In [ ]: # get list of new categorical features
        categorical_engineered_features = list(set(training_features.columns) - set(numeric_feature_names))

The preceding figure shows us the updated feature set with the newly engineered categorical variables. This process is also known as one hot encoding.

Modeling

We will now build a simple classification (supervised) model based on our feature set by using the logistic regression algorithm. The following code depicts how to build the supervised model.

In [ ]: from sklearn.linear_model import LogisticRegression
        import numpy as np

        # fit the model
        lr = LogisticRegression()
        model = lr.fit(training_features, np.array(outcome_labels['recommend']))
        # view model parameters
        model

Out[ ]: LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
            intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
            penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
            verbose=0, warm_start=False)

Thus, we now have our supervised learning model based on the logistic regression algorithm with L2 regularization, as you can see from the penalty parameter in the previous output.
Model Evaluation

Typically, model evaluation is done based on some holdout or validation dataset that is different from the training dataset, to prevent overfitting or biasing the model. Since this is an example on a toy dataset, let's evaluate the performance of our model on the training data using the following snippet.

In [ ]: # simple evaluation on training data
        pred_labels = model.predict(training_features)
        actual_labels = np.array(outcome_labels['recommend'])

        # evaluate model performance
        from sklearn.metrics import accuracy_score
        from sklearn.metrics import classification_report

        print('Accuracy:', float(accuracy_score(actual_labels, pred_labels))*100, '%')
        print('Classification Stats:')
        print(classification_report(actual_labels, pred_labels))

The output shows the overall accuracy percentage followed by a classification report with precision, recall, F1-score, and support for the No and Yes classes and their average/total. Thus you can see the various metrics that we mentioned earlier, like accuracy, precision, recall, and F1 score, depicting the model performance. We talk about these metrics in detail in the chapter "Building, Tuning, and Deploying Models".

Model Deployment

We built our first supervised learning model, and to deploy this model, typically on a system or a server, we need to persist the model. We also need to save the scaler object we used to scale the numerical features, since we use it to transform the numeric features of new data samples. The following snippet depicts a way to store the model and scaler objects.

In [ ]: from sklearn.externals import joblib
        import os

        # save models to be deployed on your server
        if not os.path.exists('model'):
            os.mkdir('model')
        if not os.path.exists('scaler'):
            os.mkdir('scaler')

        joblib.dump(model, r'model/model.pickle')
        joblib.dump(ss, r'scaler/scaler.pickle')

These files can be easily deployed on a server, with the necessary code to reload the model and predict new data samples, which we will see in the upcoming sections.
Prediction in Action

We are now ready to start predicting with our newly built and deployed model. To start predicting, we need to load our model and scaler objects into memory. The following code helps us do this.

In [ ]: # load model and scaler objects
        model = joblib.load(r'model/model.pickle')
        scaler = joblib.load(r'scaler/scaler.pickle')

We have some sample new student records (for two students) for which we want our model to predict if they will get the grant recommendation. Let's retrieve and view this data using the following code.

In [ ]: # data retrieval
        new_data = pd.DataFrame([{'name': 'Nathan', 'overallgrade': 'F', 'obedient': 'N',
                                  'researchscore': 30, 'projectscore': 20},
                                 {'name': 'Thomas', 'overallgrade': 'A', 'obedient': 'Y',
                                  'researchscore': 78, 'projectscore': 80}])
        new_data = new_data[['name', 'overallgrade', 'obedient', 'researchscore', 'projectscore']]
        new_data

Figure: New student records

We will now carry out the tasks relevant to data preparation for these records, namely feature extraction, engineering, and scaling, in the following code snippet.

In [ ]: # data preparation
        prediction_features = new_data[feature_names]

        # scaling
        prediction_features[numeric_feature_names] = scaler.transform(prediction_features[numeric_feature_names])

        # engineering categorical variables
        prediction_features = pd.get_dummies(prediction_features, columns=categoricial_feature_names)

        # view feature set
        prediction_features
Figure: Updated feature set for new students

We now have the relevant features for the new students; however, you can see that some of the categorical features are missing, based on some grades like B and C, and this is because none of these students obtained those grades. But we still need those attributes, because the model was trained on all attributes, including these. The following snippet helps us identify and add the missing categorical features. We add the value 0 for each of those features for each student, since they did not obtain those grades.

In [ ]: # add missing categorical feature columns
        current_categorical_engineered_features = set(prediction_features.columns) - set(numeric_feature_names)
        missing_features = set(categorical_engineered_features) - current_categorical_engineered_features

        for feature in missing_features:
            # add zeros since feature is absent in these data samples
            prediction_features[feature] = [0] * len(prediction_features)

        # view final feature set
        prediction_features

Figure: Final feature set for new students

We have our complete feature set ready for both the new students. Let's put our model to the test and get the predictions with regard to grant recommendations!

In [ ]: # predict using model
        predictions = model.predict(prediction_features)

        # display results
        new_data['recommend'] = predictions
        new_data
Figure: New student records with model predictions for grant recommendations

We can clearly see from the preceding figure that our model has predicted grant recommendation labels for both the new students. Thomas, clearly being diligent and having a straight A average and decent scores, is most likely to get the grant recommendation as compared to Nathan. Thus you can see that our model has learned how to predict grant recommendation outcomes based on past historical student data. This should whet your appetite for getting started with machine learning. We are about to deep dive into more complex real-world problems in the upcoming chapters.

Challenges in Machine Learning

Machine learning is a rapidly evolving, fast-paced, and exciting field with a lot of prospects, opportunity, and scope. However, it comes with its own set of challenges, due to the complex nature of machine learning methods, its dependency on data, and it not being one of the more traditional computing paradigms. The following points cover some of the main challenges in machine learning:

- Data quality issues lead to problems, especially with regard to data processing and feature extraction.
- Data acquisition, extraction, and retrieval is an extremely tedious and time-consuming process.
- Lack of good quality and sufficient training data in many scenarios.
- Formulating business problems clearly, with well-defined goals and objectives.
- Feature extraction and engineering, especially hand-crafting features, is one of the most difficult yet important tasks in machine learning. Deep learning seems to have gained some advantage in this area recently.
- Overfitting or underfitting models can lead to the model learning poor representations and relationships from the training data, leading to detrimental performance.
- The curse of dimensionality: too many features can be a real hindrance.
- Complex models can be difficult to deploy in the real world.

This is not an exhaustive list of the challenges faced in machine learning today, but it is definitely a list of the top problems data scientists or analysts usually face in machine learning projects and tasks. We will cover dealing with these issues in detail when we discuss the various stages in the machine learning pipeline, as well as solve real-world problems in subsequent chapters.

Real-World Applications of Machine Learning

Machine learning is widely being applied and used in the real world today to solve complex problems that would otherwise have been impossible to solve based on traditional approaches and rule-based systems. The following list depicts some of the real-world applications of machine learning:
- Product recommendations in online shopping platforms
- Sentiment and emotion analysis
- Anomaly detection
- Fraud detection and prevention
- Content recommendation (news, music, movies, and so on)
- Weather forecasting
- Stock market forecasting
- Market basket analysis
- Customer segmentation
- Object and scene recognition in images and video
- Speech recognition
- Churn analytics
- Click-through predictions
- Failure/defect detection and prevention
- E-mail spam filtering

Summary

The intent of this chapter was to get you familiarized with the foundations of machine learning before taking a deep dive into machine learning pipelines and solving real-world problems. The need for machine learning in today's world was introduced, with a focus on making data-driven decisions at scale. We also talked about the various programming paradigms and how machine learning has disrupted the traditional programming paradigm. Next up, we explored the machine learning landscape, starting from the formal definition to the various domains and fields associated with machine learning. Basic foundational concepts were covered in areas like mathematics, statistics, computer science, data science, data mining, artificial intelligence, natural language processing, and deep learning, since all of them tie back to machine learning and we will be using tools, techniques, methodologies, and processes from these fields in future chapters. Concepts relevant to the various machine learning methods have also been covered, including supervised, unsupervised, semi-supervised, and reinforcement learning. Other classifications of machine learning methods were depicted, like batch versus online learning methods and instance based versus model based learning methods. A detailed depiction of the CRISP-DM process model was given to provide an overview of the industry standard process for data mining projects. Analogies were drawn from this model to build machine learning pipelines, where we focused on both supervised and unsupervised learning pipelines. We brought everything covered in this chapter together by solving a small real-world problem of predicting grant recommendations for students and building a sample machine learning pipeline from scratch. This should definitely get you ready for the next chapter, where you will be exploring each of the stages in a machine learning pipeline in further detail and covering ground on the Python machine learning ecosystem. Last but not least, the sections on challenges and real-world applications of machine learning should give you a good idea of the vast scope of machine learning and make you aware of the caveats and pitfalls associated with machine learning problems.
The Python Machine Learning Ecosystem

In the first chapter, we explored the absolute basics of machine learning and looked at some of the algorithms that we can use. Machine learning is a very popular and relevant topic in the world of technology today. Hence we have very diverse and varied support for machine learning in terms of programming languages and frameworks. There are machine learning libraries for almost all popular languages, including C++, R, Julia, Scala, Python, etc. In this chapter, we try to justify why Python is an apt language for machine learning. Once we have argued our selection logically, we give you a brief introduction to the Python machine learning (ML) ecosystem. This Python ML ecosystem is a collection of libraries that enable developers to extract and transform data, perform data wrangling operations, apply existing robust machine learning algorithms, and also develop custom algorithms easily. These libraries include NumPy, SciPy, pandas, scikit-learn, statsmodels, TensorFlow, Keras, and so on. We cover several of these libraries in a nutshell so that you will have some familiarity with the basics of each of them. These will be used extensively in the later chapters of the book. An important thing to keep in mind here is that the purpose of this chapter is to acquaint you with the diverse set of frameworks and libraries in the Python ML ecosystem and to give an idea of what can be leveraged to solve machine learning problems. We enrich the content with useful links that you can refer to for extensive documentation and tutorials. We assume some basic proficiency with Python and programming in general. All the code snippets and examples used in this chapter are available in the GitHub repository for this book; you can refer to the Python file named python_ml_ecosystem.py for all the examples used in this chapter and try the examples as you read, or you can refer to the Jupyter notebook named "the python machine learning ecosystem.ipynb" for a more interactive experience.

Python: An Introduction

Python was created by Guido van Rossum at Stichting Mathematisch Centrum (CWI) in the Netherlands. The first version of Python was released in 1991. Guido wrote Python as a successor of the language called ABC. In the following years, Python has developed into an extensively used high-level, general-purpose programming language. Python is an interpreted language, which means that the source code of a Python program is converted into bytecode, which is then executed by the Python virtual machine. Python is
different from major compiled languages like C and C++, as Python code is not required to be built and linked like code for those languages. This distinction makes for two important points:

- Python code is fast to develop: As the code is not required to be compiled and built, Python code can be readily changed and executed. This makes for a fast development cycle.
- Python code is not as fast in execution: Since the code is not directly compiled and executed, and an additional layer of the Python virtual machine is responsible for execution, Python code runs a little slower as compared to conventional languages like C, C++, etc.

Strengths

Python has steadily risen in the charts of widely used programming languages and, according to several surveys and research, it is the fifth most important language in the world. Recently, several surveys depicted Python to be the most popular language for machine learning and data science. We will compile a brief list of advantages that Python offers that probably explains its popularity:

- Easy to learn: Python is a relatively easy-to-learn language. Its syntax is simple for a beginner to learn and understand. When compared with languages like C or Java, there is minimal boilerplate code required in executing a Python program.
- Supports multiple programming paradigms: Python is a multi-paradigm, multi-purpose programming language. It supports object oriented programming, structured programming, functional programming, and even aspect oriented programming. This versatility allows it to be used by a multitude of programmers.
- Extensible: Extensibility of Python is one of its most important characteristics. Python has a huge number of modules easily available which can be readily installed and used. These modules cover every aspect of programming, from data access to the implementation of popular algorithms. This easy-to-extend feature ensures that a Python developer is more productive, as a large array of problems can be solved by available libraries.
- Active open source community: Python is open source and supported by a large developer community. This makes it robust and adaptive. The bugs encountered are easily fixed by the Python community. Being open source, developers can tinker with the Python source code if their requirements call for it.

Pitfalls

Although Python is a very popular programming language, it comes with its own share of pitfalls. One of the most important limitations it suffers from is in terms of execution speed. Being an interpreted language, it is slow when compared to compiled languages. This limitation can be a bit restrictive in scenarios where extremely high performance code is required. This is a major area of improvement for future implementations of Python, and every subsequent Python version addresses it. Although we have to admit it can never be as fast as a compiled language, we are convinced that it makes up for this deficiency by being super-efficient and effective in other departments.
Setting Up a Python Environment

The starting step for our journey into the world of data science is the setup of our Python environment. We usually have two options for setting up our environment:

- Install Python and the necessary libraries individually
- Use a pre-packaged Python distribution that comes with the necessary libraries, i.e., Anaconda

Anaconda is a packaged compilation of Python along with a whole suite of a variety of libraries, including core libraries that are widely used in data science. Developed by Anaconda (formerly known as Continuum Analytics), it is often the go-to setup for data scientists. Travis Oliphant, primary contributor to both the NumPy and SciPy libraries, is Anaconda's president and one of the co-founders. The Anaconda distribution is BSD licensed and hence allows us to use it for commercial and redistribution purposes. A major advantage of this distribution is that we don't require an elaborate setup and it works well on all flavors of operating systems and platforms, especially Windows, which can often cause problems with installing specific Python packages. Thus, we can get started with our data science journey with just one download and install. The Anaconda distribution is widely used across industry data science environments, and it also comes with a wonderful IDE, Spyder (Scientific Python Development Environment), besides other useful utilities like Jupyter notebooks, the IPython console, and the excellent package management tool, conda. Recently they have also talked extensively about JupyterLab, the next generation UI for Project Jupyter. We recommend using the Anaconda distribution and also checking out anaconda.com/what-is-anaconda to learn more about Anaconda.

Set Up the Anaconda Python Environment

The first step in setting up your environment is downloading the required installation package of the Anaconda distribution. The important point to note here is that we will be using Python 3 and the corresponding Anaconda distribution. We have opted for a slightly older, stable Python 3 release rather than the very latest one, because we want to ensure that none of the libraries we will be using in this book have any compatibility issues; since that release has been around longer, we avoid such issues by opting for it. However, you are free to use a newer Python 3 release, and the code used in this book is expected to work without major issues. We chose to leave out Python 2 since its support will be ending in 2020, and from the Python community's vision it is clear that Python 3 is the future; we recommend you use it. Download the Anaconda Windows package (the one with Python 3) from repo.continuum.io/archive; a screenshot of the target page is shown in the following figure. We have chosen the Windows OS specifically because, sometimes, a few Python packages or libraries cause issues with installing or running on it, and hence we wanted to make sure we cover those details. If you are using any other OS, like Linux or macOS, download the correct version for your OS and install it.
Figure: Downloading the Anaconda package

Installing the downloaded file is as simple as double-clicking the file and letting the installer take care of the entire process. To check if the installation was successful, just open a command prompt or terminal and start up Python. You should be greeted with the message shown in the following figure, identifying the Python and the Anaconda version. We also recommend that you use the IPython shell (the command is ipython) instead of the regular Python shell, because you get a lot of additional features, including inline plots, autocomplete, and so on.

Figure: Verifying the installation with the Python shell

This should complete the process of setting up your Python environment for data science and machine learning.
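If you prefer to verify the setup from within the interpreter itself, a minimal check like the following can be run; the exact version strings printed will of course depend on the distribution you installed, and on Anaconda builds the interpreter version string usually includes the Anaconda tag.

In [ ]: import sys
        import numpy, pandas, sklearn

        # interpreter version (Anaconda builds typically mention Anaconda here)
        print(sys.version)

        # versions of a few core data science libraries bundled with Anaconda
        print(numpy.__version__, pandas.__version__, sklearn.__version__)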
Installing Libraries

We will not be covering the basics of Python, as we assume you are already acquainted with basic Python syntax. Feel free to check out any standard course or book on Python programming to pick up on the basics. We will cover one very basic but very important aspect: installing additional libraries in Python. The preferred way to install additional libraries is using the pip installer. The basic syntax to install a package from the Python Package Index (PyPI) using pip is as follows:

pip install required_package

This will install the required_package if it is present in PyPI. We can also use sources other than PyPI to install packages, but that generally should not be required. The Anaconda distribution is already supplemented with a plethora of additional libraries, hence it is very unlikely that we will need additional packages from other sources. Another way to install packages, limited to Anaconda, is to use the conda install command. This will install the packages from the Anaconda package channels, and we usually recommend using this, especially on Windows.

Why Python for Data Science?

According to a survey by Stack Overflow, Python is one of the world's most used languages. It is one of the top three languages used by data scientists and one of the most "wanted" languages among Stack Overflow users. In fact, in a recent poll by KDnuggets, Python got the maximum number of votes for being the leading platform for analytics, data science, and machine learning, based on the choice of users. When it comes to the practice of data science, we will now try to illustrate these advantages and argue our case for why Python is a language of choice for data scientists.

Powerful Set of Packages

Python is known for its extensive and powerful set of packages. In fact, one of the philosophies shared by Python is "batteries included", which means that Python has a rich and powerful set of packages ready to be used in a wide variety of domains and use cases. This philosophy is extended into the packages required for data science and machine learning. Packages like numpy, scipy, pandas, scikit-learn, etc., which are tailor-made for solving a variety of real-world data science problems, are immensely powerful. This makes Python a go-to language for solving data science related problems.

Easy and Rapid Prototyping

Python's simplicity is another important aspect when we discuss its suitability for data science. Python syntax is easy to understand as well as idiomatic, which makes comprehending existing code a relatively simple task. This allows the developer to easily modify existing implementations and develop his or her own. This feature is especially useful for developing new algorithms, which may be experimental or yet to be supported by any external library. Based on what we discussed earlier, Python development is independent of time-consuming build and link processes. Using the REPL shell, IDEs, and notebooks, you can rapidly build and iterate over multiple research and development cycles, and all the changes can be readily made and tested.
Easy to Collaborate

Data science solutions are rarely a one-man job. Often, a lot of collaboration is required in a data science team to develop a great analytical solution. Luckily, Python provides tools that make it extremely easy for a diverse team to collaborate. One of the most liked features that empowers this collaboration is Jupyter notebooks. Notebooks are a novel concept that allows data scientists to share the code, data, and insightful results in a single place. This makes for an easily reproducible research tool. We consider this to be a very important feature and will devote an entire section to covering the advantages offered by the use of notebooks.

One-Stop Solution

In the first chapter we explored how data science as a field is interconnected with various domains. A typical project will have an iterative lifecycle that involves data extraction, data manipulation, data analysis, feature engineering, modeling, evaluation, solution development, deployment, and continued updating of the solution. Python, as a multi-purpose programming language, is extremely diverse and it allows developers to address all these assorted operations from a common platform. Using Python libraries you can consume data from a multitude of sources, apply different data wrangling operations to that data, apply machine learning algorithms on the processed data, and deploy the developed solution. This makes Python extremely useful, as no interfacing is required, i.e., you don't need to port any part of the whole pipeline to some different programming language. Also, enterprise-level data science projects often require interfacing with different programming languages, which is also achievable by using Python. For example, suppose some enterprise uses a custom-made Java library for some esoteric data manipulation; then you can use the Jython implementation of Python to use that Java library without writing custom code for the interfacing layer.

Large and Active Community Support

The Python developer community is very active and humongous in number. This large community ensures that the core Python language and packages remain efficient and bug free. A developer can seek support about a Python issue using a variety of platforms like the Python mailing lists, Stack Overflow, blogs, and usenet groups. This large support ecosystem is also one of the reasons Python is a favored language for data science.

Introducing the Python Machine Learning Ecosystem

In this section, we address the important components of the Python machine learning ecosystem and give a small introduction to each of them. These components are a few of the reasons why Python is an important language for data science. This section is structured to give you a gentle introduction and acquaint you with these core data science libraries. Covering all of them in depth would be impractical and beyond the current scope, since we will be using them in detail in subsequent chapters. Another advantage of having a great community of Python developers is the rich content that can be found about each one of these libraries with a simple search. The list of components that we cover is by no means exhaustive, but we have shortlisted them on the basis of their importance in the whole ecosystem.

Jupyter Notebooks

Jupyter notebooks, formerly known as IPython notebooks, are an interactive computational environment that can be used to develop Python based data science analyses, which emphasize reproducible research. The interactive environment is great for development and enables us to easily share the notebook
and hence the code among peers, who can replicate our research and analyses by themselves. These Jupyter notebooks can contain code, text, images, output, etc., and can be arranged in a step by step manner to give a complete step by step illustration of the whole analysis process. This capability makes notebooks a valuable tool for reproducible analyses and research, especially when you want to share your work with a peer. While developing your analyses, you can document your thought process and capture the results as part of the notebook. This seamless intertwining of documentation, code, and results makes Jupyter notebooks a valuable tool for every data scientist. We will be using Jupyter notebooks, which are installed by default with our Anaconda distribution. This is similar to the IPython shell, with the difference that it can be used for different programming backends, i.e., not just Python; but the functionality is similar for both of these, with the added advantage of displaying interactive visualizations and much more in Jupyter notebooks.

Installation and Execution

We don't require any additional installation for Jupyter notebooks, as they are already installed by the Anaconda distribution. We can invoke the Jupyter notebook by executing the following command at the command prompt or terminal:

C:\>jupyter notebook

This will start a notebook server at a localhost address on your machine. An important point to note here is that you access the notebook using a browser, so you can even initiate it on a remote server and use it locally using techniques like SSH tunneling. This feature is extremely useful in case you have a powerful computing resource that you can only access remotely but that lacks a GUI. Jupyter notebook allows you to access those resources in a visually interactive shell. Once you invoke this command, you can navigate to the notebook server address in your browser to find the landing page depicted in the following figure, which can be used to access existing notebooks or create new ones.

Figure: Jupyter notebook landing page

On the landing page, we can initiate a new notebook by clicking the New button on the top right. By default it will use the default kernel (i.e., the Python kernel), but we can also associate the notebook with
a different kernel (for example, another Python kernel), if installed in your system. A notebook is just a collection of cells. There are three major types of cells in a notebook:

- Code cells: Just like the name suggests, these are the cells that you can use to write your code and associated comments. The contents of these cells are sent to the kernel associated with the notebook and the computed outputs are displayed as the cells' outputs.
- Markdown cells: Markdown can be used to intelligently notate the computation process. These can contain simple text comments, HTML tags, images, and even LaTeX equations. They will come in very handy when we are dealing with a new and non-standard algorithm and we also want to capture the stepwise math and logic related to the algorithm.
- Raw cells: These are the simplest of the cells and they display the text written in them as is. They can be used to add text that you don't want to be converted by the conversion mechanism of the notebooks.

The following figure shows a sample Jupyter notebook, which touches on the ideas we just discussed in this section.

Figure: A sample Jupyter notebook
NumPy

NumPy is the backbone of machine learning in Python. It is one of the most important libraries in Python for numerical computations. It adds support to core Python for multi-dimensional arrays (and matrices) and fast vectorized operations on these arrays. The present day NumPy library is a successor of an early library, Numeric, which was created by Jim Hugunin and some other developers. Travis Oliphant, Anaconda's president and co-founder, took the Numeric library as a base and added a lot of modifications to launch the present day NumPy library. It is a major open source project and is one of the most popular Python libraries. It's used in almost all machine learning and scientific computing libraries. The extent of the popularity of NumPy is verified by the fact that major OS distributions, like Linux and macOS, bundle NumPy as a default package instead of considering it an add-on package.

NumPy ndarray

All of the numeric functionality of NumPy is orchestrated by two important constituents of the NumPy package: ndarray and ufuncs (universal functions). The NumPy ndarray is a multi-dimensional array object which is the core data container for all of the NumPy operations. Universal functions are the functions which operate on ndarrays in an element by element fashion. These are the lesser known members of the NumPy package and we will give a brief introduction to them at a later stage of this section. We will mostly be learning about ndarrays in the following sections (we will refer to them as arrays from now on for simplicity's sake). Arrays (or matrices) are one of the fundamental representations of data. Mostly, an array will be of a single data type (homogeneous) and possibly multi-dimensional. The NumPy ndarray is a generalization of the same. Let's get started with the introduction by creating an array.

In [ ]: import numpy as np
        arr = np.array([1, 2, 3, 4, 5])
        arr
Out[ ]: array([1, 2, 3, 4, 5])

In [ ]: arr.shape
Out[ ]: (5,)

In [ ]: arr.dtype
Out[ ]: dtype('int32')

In the previous example, we created a one-dimensional array from a normal list containing integers. The shape attribute of the array object tells us about the dimensions of the array. The data type was picked up from the elements, as they were all integers; the data type here is int32 (it may be int64 on some platforms). One important thing to keep in mind is that all the elements in an array must have the same data type. If you try to initialize an array in which the elements are mixed, i.e., you mix some strings with the numbers, then all of the elements will get converted into a string type and we won't be able to perform most of the NumPy numeric operations on that array. So a simple rule of thumb is to deal only with numeric data. You are encouraged to type in the following code in an IPython shell to look at the error message that comes up in such a scenario.

In [ ]: arr = np.array([1, 'st', 'er', 3])
        arr.dtype
Out[ ]: dtype('<U11')

In [ ]: np.sum(arr)
Creating Arrays

Arrays can be created in multiple ways in NumPy. One of the ways was demonstrated earlier, to create a single-dimensional array. Similarly, we can stack up multiple lists to create a multi-dimensional array.

In [ ]: arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
        arr.shape
Out[ ]: (3, 3)

In [ ]: arr
Out[ ]: array([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

In addition to this, we can create arrays using a bunch of special functions provided by NumPy.

np.zeros: Creates a matrix of specified dimensions containing only zeroes:

In [ ]: arr = np.zeros((2, 3))
        arr
Out[ ]: array([[ 0.,  0.,  0.],
               [ 0.,  0.,  0.]])

np.ones: Creates a matrix of specified dimensions containing only ones:

In [ ]: arr = np.ones((2, 3))
        arr
Out[ ]: array([[ 1.,  1.,  1.],
               [ 1.,  1.,  1.]])

np.identity: Creates an identity matrix of specified dimensions:

In [ ]: arr = np.identity(3)
        arr
Out[ ]: array([[ 1.,  0.,  0.],
               [ 0.,  1.,  0.],
               [ 0.,  0.,  1.]])

Often, an important requirement is to initialize an array of a specified dimension with random values. This can be done easily by using the randn function from the numpy.random package:

In [ ]: arr = np.random.randn(3, 4)
        arr

The output is a 3x4 array of values drawn from a standard normal distribution (the actual values will differ on every run).
In practice, most arrays are created while reading in data. We will cover the text data retrieval operations of NumPy very briefly, as we will generally use pandas for our data ingestion process (more on this in a later part of the chapter). One of the functions that we can use to read data from a text file into a NumPy array is genfromtxt. This function can open a text file and read in data delimited by any character (the delimiter for a comma separated file is ","). Since it is not our preferred way of retrieving data, we will give only a brief example of the function here.

In [ ]: from io import BytesIO
        b = BytesIO(b"1,2,3\n4,5,6\n7,8,9")
        arr = np.genfromtxt(b, delimiter=",")
        arr
Out[ ]: array([[ 1.,  2.,  3.],
               [ 4.,  5.,  6.],
               [ 7.,  8.,  9.]])

Accessing Array Elements

Once we have created an array by reading in our data, the next important part is to access that data using a wide variety of mechanisms. NumPy provides a lot of ways in which array elements can be accessed. We will cover the most popular and useful ways that facilitate this.

Basic Indexing and Slicing

An ndarray can leverage the basic indexing operations that are followed by the list class, i.e., list_object[obj]. If obj is not an ndarray object, then the indexing is said to be basic indexing.

Note: One important point to remember is that basic indexing will always return a view of the original array. It means that it only refers to the original array, and any change in values will be reflected in the original array also.

For example, if we want to access the complete second row of the array in one of the earlier examples, we can simply refer to it using arr[1].

In [ ]: arr[1]
Out[ ]: array([ 4.,  5.,  6.])

This access becomes interesting in the case of an array having more than two dimensions. Consider the following code snippet.

In [ ]: arr = np.arange(12).reshape(2, 2, 3)

In [ ]: arr
Out[ ]: array([[[ 0,  1,  2],
                [ 3,  4,  5]],

               [[ 6,  7,  8],
                [ 9, 10, 11]]])
In [ ]: arr[0]
Out[ ]: array([[0, 1, 2],
               [3, 4, 5]])

Here we see that, using a similar indexing scheme as above, we get an array having one lesser dimension than the original array.

The next important concept in accessing arrays is the concept of slicing arrays. Suppose we want a collection of elements only, instead of all the elements; then we can use slicing to access the elements. We will demonstrate the concept with a one-dimensional array.

In [ ]: arr = np.arange(10)
        arr[5:]
Out[ ]: array([5, 6, 7, 8, 9])

In [ ]: arr[5:8]
Out[ ]: array([5, 6, 7])

In [ ]: arr[:-5]
Out[ ]: array([0, 1, 2, 3, 4])

If the number of dimensions in the object supplied is less than the number of dimensions of the array being accessed, then the colon (:) is assumed for all the remaining dimensions. Consider the following example.

In [ ]: arr = np.arange(12).reshape(2, 2, 3)
        arr
Out[ ]: array([[[ 0,  1,  2],
                [ 3,  4,  5]],

               [[ 6,  7,  8],
                [ 9, 10, 11]]])

In [ ]: arr[1:]
Out[ ]: array([[[ 6,  7,  8],
                [ 9, 10, 11]]])

Another way to access an array is to use dots based indexing. Suppose in a three-dimensional array we want to access the values of only one column. We can do it in two ways.

In [ ]: arr = np.arange(27).reshape(3, 3, 3)
        arr
Out[ ]: array([[[ 0,  1,  2],
                [ 3,  4,  5],
                [ 6,  7,  8]],

               [[ 9, 10, 11],
                [12, 13, 14],
                [15, 16, 17]],

               [[18, 19, 20],
                [21, 22, 23],
                [24, 25, 26]]])
15,829 | [[ ][ ][ ]]]now if we want to access the third columnwe can use two different notations to access that columnin [ ]arr[:,:, out[ ]array([ ][ ][ ]]we can also use dot notation in the following way both of the methods gets us the same value but the dot notation is concise the dot notation stands for as many colons as required to complete an indexing operation in [ ]arr, out[ ]array([ ][ ][ ]]advanced indexing the difference in advanced indexing and basic indexing comes from the type of object being used to reference the array if the object is an ndarray object (data type int or boolor non-tuple sequence object or tuple object containing an ndarray (data type integer or bool)then the indexing being done on the array is said to be advanced indexing ##note advanced indexing will always return the copy of the original array data integer array indexingthis advanced indexing occurs when the reference object is also an array the simplest type of indexing is when we provide an array that' equal in dimensions to the array being accessed for examplein [ ]arr np arange( reshape( , arr out[ ]array([[ ][ ][ ]]in [ ]arr[[ , , ],[ , , ]out[ ]array([ ]in this example we have provided an array in which the first part identifies the rows we want to access and the second identifies the columns which we want to address this is quite similar to providing collective element-wise address |
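To make the earlier note about views versus copies concrete, here is a small sketch (the array values are arbitrary): basic slicing hands back a view into the original data, while integer array (advanced) indexing hands back an independent copy.

import numpy as np

arr = np.arange(9).reshape(3, 3)

# basic slicing returns a view: writing through it modifies the original array
row_view = arr[0, :]
row_view[0] = 100
print(arr[0, 0])          # 100 -- the original array changed

# advanced (integer array) indexing returns a copy: the original stays intact
picked = arr[[1, 2], [1, 2]]
picked[0] = -1
print(arr[1, 1])          # still 4 -- unaffected by the write above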
15,830 | boolean indexingthis advanced indexing occurs when the reference object is an array of boolean values this is used when we want to access data based on some conditionsin that caseboolean indexing can be used we will illustrate it with an example suppose in one arraywe have the names of some cities and in another arraywe have some data related to those cities in [ ]cities np array(["delhi","bangalore","mumbai","chennai","bhopal"]city_data np random randn( , city_data out[ ]:array([ - - ] - ][- - - ][- - - ] ]]in [ ]city_data[cities =="delhi"out[ ]array([ - - ]]we can also use boolean indexing for selecting some elements of an array that satisfy particular condition for examplein the previous array suppose we want to only select non-zero elements we can do that easily using the following code in [ ]city_data[city_data > out[ ]array( ]we observe that the shape of the array is not maintained so we directly cannot always use this indexing method but this method is quite useful in doing conditional data substitution suppose in the previous casewe want to substitute all the non-zero values with we can achieve that operation by the following code in [ ]city_data[city_data > city_data out[ ]array([ - - ] - ][- - - ][- - - ] ]]operations on arrays at the start of this sectionwe mentioned the concept of universal functions (ufuncsin this sub-sectionwe learn some of the functionalities provided by those functions most of the operations on the numpy arrays is achieved by using these functions numpy provides rich set of functions that we can leverage for various operations on arrays we cover some of those functions in briefbut we recommend you to always refer to the official documentation of the project to learn more and leverage them in your own projects universal functions are functions that operate on arrays in an element by element fashion the implementation of ufunc is vectorizedwhich means that the execution of ufuncs on arrays is quite fast the ufuncs implemented in the numpy package are implemented in compiled code for speed and efficiency but it is possible to write custom functions by extending the numpy ufunc class of the numpy package |
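As a quick illustration of getting ufunc-like, element-by-element behavior from a plain Python function without touching compiled code, NumPy provides np.frompyfunc. This is only a sketch; the capping function below is hypothetical, and ufuncs written in compiled code will be much faster than this wrapper.

import numpy as np

# a plain Python function that works on a single scalar
def cap_at_ten(x):
    return min(x, 10)

# wrap it as an elementwise function: one input argument, one output
cap_ufunc = np.frompyfunc(cap_at_ten, 1, 1)

arr = np.array([3, 25, 7, 100])
print(cap_ufunc(arr))     # [3 10 7 10], applied element by element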
15,831 | ufuncs are simple and easy to understand once you are able to relate the output they produce on particular array in [ ]arr np arange( reshape( , arr out[ ]array([ ] ][ ]]in [ ]arr out[ ]array([ ][ ][ ]]in [ ]arr out[ ]array([ ][ ][ ]]we see that the standard operators when used in conjunction with arrays work element-wise some ufuncs will take two arrays as input and output single arraywhile rare few will output two arrays also in [ ]arr np arange( reshape( , arr np arange( reshape( , arr arr out[ ]array([ ] ] ][ ][ ]]in [ ]arr out[ ]array([ ] ] ] ][ ]]in [ ]arr out[ ] |
15,832 | array([[ ][ ][ ][ ][ ]]here we see that we were able to add up two arrays even when they were of different sizes this is achieved by the concept of broadcasting we will conclude this brief discussion on operations on arrays by demonstrating function that will return two arrays in [ ]arr np random randn( , arr out[ ]array([[- - - ] - ] ] - ] - - ]]in [ ]np modf(arr out[ ](array([[- - - ] - ] ] - ] - - ]])array([[- - - ] - ] ] - ] - - ]])the function modf will return the fractional and the integer part of the input supplied to it hence it will return two arrays of the same size we tried to give you basic idea of the operations on arrays provided by the numpy package but this list is not exhaustivefor the complete list you can refer to the reference page for ufuncs at linear algebra using numpy linear algebra is an integral part of the domain of machine learning most of the algorithms we will deal with can be concisely expressed using the operations of linear algebra numpy was initially built to provide the functions similar to matlab and hence linear algebra functions on arrays were always an important part of it in this sectionwe learn bit about performing linear algebra on ndarrays using the functions implemented in the numpy package one of the most widely used operations in linear algebra is the dot product this can be performed on two compatible (brush up on your matrices and array skills if you need to know which arrays are compatible for dot productndarrays by using the dot function in [ ] np array([[ , , ],[ , , ],[ , , ]] np array([[ , , ],[ , , ],[ , , ]] |
15,833 | in [ ] dot(bout[ ]array([ ] ][ ]]similarlythere are functions implemented for finding different products of matrices like innerouterand so on another popular matrix operation is transpose of matrix this can be easily achieved by using the function in [ ] np arange( reshape( , in [ ] out[ ]array([ ] ] ] ] ]]oftentimeswe need to find out decomposition of matrix into its constituents factors this is called matrix factorization this can be achieved by the appropriate functions popular matrix factorization method is svd factorization (covered briefly in concepts)which returns decomposition of matrix into three different matrices this can be done using linalg svd function in [ ]np linalg svd(aout[ ](array([[- ][- - ][- - ]])array( + + - ])array([[- - - - - ][- - - ] - - ][- - - ][- - - ]])linear algebra is often also used to solve system of equations using the matrix notation of system of equations and the provided function of numpywe can easily solve such system of equation consider the system of equations - - this can be represented as two matricesthe coefficient matrix ( in the exampleand the constants vector ( in the examplein [ ] np array([[ , ,- ][ ,- , ],[ , ,- ]] np array([ ,- , ] np linalg solve(abx out[ ]array( ] |
15,834 | we can also check if the solution is correct using the np allclose function in [ ]np allclose(np dot(ax)bout[ ]true similarlyfunctions are there for finding the inverse of matrixeigen vectors and eigen values of matrixnorm of matrixdeterminant of matrixand so onsome of which we covered in detail in take look at the details of the function implemented at reference/routines linalg html pandas pandas is an important python library for data manipulationwranglingand analysis it functions as an intuitive and easy-to-use set of tools for performing operations on any kind of data initial work for pandas was done by wes mckinney in while he was developer at aqr capital management since thenthe scope of the pandas project has increased lot and it has become popular library of choice for data scientists all over the world pandas allows you to work with both cross-sectional data and time series based data so let' get started exploring pandasdata structures of pandas all the data representation in pandas is done using two primary data structuresseries dataframes series series in pandas is one-dimensional ndarray with an axis label it means that in functionalityit is almost similar to simple array the values in series will have an index that needs to be hashable this requirement is needed when we perform manipulation and summarization on data contained in series data structure series objects can be used to represent time series data also in this casethe index is datetime object ataframe dataframe is the most important and useful data structurewhich is used for almost all kind of data representation and manipulation in pandas unlike numpy arrays (in generala dataframe can contain heterogeneous data typically tabular data is represented using dataframeswhich is analogous to an excel sheet or sql table this is extremely useful in representing raw datasets as well as processed feature sets in machine learning and data science all the operations can be performed along the axesrowsand columnsin dataframe this will be the primary data structure which we will leveragein most of the use cases in our later |
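Before moving on to reading in external data, here is a minimal sketch of building both structures by hand; the city names and numbers below are made up purely for illustration.

import pandas as pd

# a Series: one-dimensional values plus a hashable index
pop = pd.Series([18.9, 12.4, 8.4], index=['delhi', 'mumbai', 'bangalore'])
print(pop['mumbai'])               # label-based access via the index

# a DataFrame: tabular, and each column can carry its own data type
df = pd.DataFrame({'city': ['delhi', 'mumbai', 'bangalore'],
                   'pop_millions': [18.9, 12.4, 8.4],
                   'is_capital': [True, False, False]})
print(df.dtypes)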
15,835 | data retrieval pandas provides numerous ways to retrieve and read in data we can convert data from csv filesdatabasesflat filesand so on into dataframes we can also convert list of dictionaries (python dictinto dataframe the sources of data which pandas allows us to handle cover almost all the major data sources for our introductionwe will cover three of the most important data sourceslist of dictionaries csv files databases ist of dictionaries to dataframe this is one of the simplest methods to create dataframe it is useful in scenarios where we arrive at the data we want to analyzeafter performing some computations and manipulations on the raw data this allows us to integrate pandas based analysis into data being generated by other python processing pipelines in[ ]import pandas as pd in[ ] [{'city':'delhi',"data": }{'city':'bangalore',"data": }{'city':'mumbai',"data": }in[ ]pd dataframe(dout[ ]city data delhi bangalore mumbai in[ ]df pd dataframe(din[ ]df out[ ]city data delhi bangalore mumbai here we provided list of python dictionaries to the dataframe class of the pandas library and the dictionary was converted into dataframe two important things to note herefirst the keys of dictionary are picked up as the column names in the dataframe (we can also supply some other name as arguments for different column names)secondly we didn' supply an index and hence it picked up the default index of normal arrays csv files to dataframe csv (comma separated filesfiles are perhaps one of the most widely used ways of creating dataframe we can easily read in csvor any delimited file (like tsv)using pandas and convert into dataframe for our example we will read in the following file and convert into dataframe by using python the data in figure - is sample slice of csv file containing the data of cities of the world from |
15,836 | figure - sample csv file we can convert this file into dataframe with the help of the following code leveraging pandas in [ ]import pandas as pd in [ ]city_data pd read_csv(filepath_or_buffer='simplemaps-worldcities-basic csv'in [ ]city_data head( = out[ ]city city_ascii lat lng pop country qal eh-ye now qal eh-ye afghanistan chaghcharan chaghcharan afghanistan lashkar gah lashkar gah afghanistan zaranj zaranj afghanistan tarin kowt tarin kowt afghanistan zareh sharan zareh sharan afghanistan asadabad asadabad afghanistan taloqan taloqan afghanistan mahmud- eraqi mahmud- eraqi afghanistan mehtar lam mehtar lam afghanistan iso iso province af afg badghis af afg ghor af afg hilmand af afg nimroz af afg uruzgan af afg paktika af afg kunar af afg takhar af afg kapisa af afg laghman as the file we supplied had header includedthose values were used as the name of the columns in the resultant dataframe this is very basic yet core usage of the function pandas read_csv the function comes with multitude of parameters that can be used to modify its behavior as required we will not cover |
15,837 | the entire gamut of parameters available and you are encouraged to read the documentation of this function as this is one of the starting point of most python based data analysis databases to dataframe the most important data source for data scientists is the existing data sources used by their organizations relational databases (dbsand data warehouses are the de facto standard of data storage in almost all of the organizations pandas provides capabilities to connect to these databases directlyexecute queries on them to extract dataand then convert the result of the query into structured dataframe the pandas from_sql function combined with python' powerful database library implies that the task of getting data from dbs is simple and easy due to this capabilityno intermediate steps of data extraction are required we will now take an example of reading data from microsoft sql server database the following code will achieve this task server 'xxxxxxxxaddress of the database server user 'xxxxxxthe username for the database server password 'xxxxxpassword for the above user database 'xxxxxdatabase in which the table is present conn pymssql connect(server=serveruser=userpassword=passworddatabase=databasequery "select from some_tabledf pd read_sql(queryconnthe important to thing to notice here is the connection object (conn in the codethis object is the one which will identify the database server information and the type of database to pandas based on the endpoint database server we will change the connection object for example we are using the pymssql library for access to microsoft sql server here if our data source is changed to postgres databasethe connection object will change but the rest of the procedure will be similar this facility is really handy when we need to perform similar analyses on data originating from different sources once againthe read_sql function of pandas provides lot of parameters that allow us to control its behavior we also recommend you to check out the sqlalchemy librarywhich makes creating connection objects easier irrespective of the type of database vendor and also provides lot of other utilities data access the most important part after reading in our data is that of accessing that data using the data structure' access mechanisms accessing data in the pandas dataframe and series objects is very much similar to the access mechanism that exist for python lists or numpy arrays but they also offer some extra methods for data access specific to dataframe/series head and tail in the previous section we witnessed the method head it gives us the first few rows (by default of the data corresponding function is tailwhich gives us the last few rows of the dataframe these are one of the most widely used pandas functionsas we often need to take peek at our data as and when we apply different operations/selections on it we already have seen the output of headso we'll use the tail function on the same dataframe and see its output in [ ]city_data tail(out[ ]city city_ascii lat lng pop country mutare mutare - zimbabwe |
15,838 | kadoma kadoma - zimbabwe chitungwiza chitungwiza - zimbabwe harare harare - zimbabwe bulawayo bulawayo - zimbabwe iso iso province zw zwe manicaland zw zwe mashonaland west zw zwe harare zw zwe harare zw zwe bulawayo slicing and dicing the usual rules of slicing and dicing data that we used in python lists apply to the series object as well in [ ]series_es city_data lat in [ ]type(series_esout[ ]pandas core series series in [ ]series_es[ : : out[ ] namelatdtypefloat in [ ]series_es[: out[ ] namelatdtypefloat in [ ]series_es[:- out[ ] namelatdtypefloat |
15,839 | the examples given here are self-explanatory and you can refer to the numpy section for more details similar slicing rules apply for dataframes also but the only difference is that now simple slicing refers to the slicing of rows and all the other columns will end up in the result consider the following example in [ ]city_data[: out[ ]city city_ascii lat lng pop country qal eh-ye now qal eh-ye afghanistan chaghcharan chaghcharan afghanistan lashkar gah lashkar gah afghanistan zaranj zaranj afghanistan tarin kowt tarin kowt afghanistan zareh sharan zareh sharan afghanistan asadabad asadabad afghanistan iso iso province af afg badghis af afg ghor af afg hilmand af afg nimroz af afg uruzgan af afg paktika af afg kunar for providing access to specific rows and specific columnspandas provides useful functions like iloc and loc which can be used to refer to specific rows and columns in dataframe there is also the ix function but we recommend using either loc or iloc the following examples leverages the iloc function provided by pandas this allows us to select the rows and columns using structure similar to array slicing in the examplewe will only pick up the first five rows and the first four columns in [ ]city_data iloc[: ,: out[ ]city city_ascii lat lng qal eh-ye now qal eh-ye chaghcharan chaghcharan lashkar gah lashkar gah zaranj zaranj tarin kowt tarin kowt another access mechanism is boolean based access to the dataframe rows or columns this is particularly important for dataframesas it allows us to work with specific set of rows and columns let' consider the following example in which we want to select cities that have population of more than million and select columns that start with the letter lin [ ]city_data[city_data['pop' ][city_data columns[pd series(city_data columnsstr startswith(' ')]out[ ]lat lng |
15,840 | - - - - - - when we select data based on some conditionwe always get the part of dataframe that satisfies the condition supplied sometimes we want to test condition against dataframe but want to preserve the shape of the dataframe in these caseswe can use the where function (check out numpy' where function also to see the analogy!we'll illustrate this function with an example in which we will try to select all the cities that have population greater than million in [ ]city_greater_ mil city_data[city_data['pop' in [ ]city_greater_ mil where(city_greater_ mil population out[ ]city city_ascii lat lng population country iso iso nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan mumbai mumbai india in ind tokyo tokyo japan jp jpn nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan province nan nan nan nan maharashtra tokyo nan nan nan nan nan here we see that we get the output dataframe of the same size but the rows that don' conform to the condition are replaced with nan |
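Chained selections like the one above work, but pandas also lets us express the row condition and the column choice in a single .loc call, which is generally cleaner. A small sketch, assuming city_data is the DataFrame loaded earlier and that its population column is named pop as in the previous listings:

# rows with population above ten million, and only a few columns of interest
subset = city_data.loc[city_data['pop'] > 10000000, ['city', 'country', 'pop']]
print(subset.head())

# the column choice can itself be computed, e.g. all columns starting with 'l'
l_cols = [c for c in city_data.columns if c.startswith('l')]
subset_l = city_data.loc[city_data['pop'] > 10000000, l_cols]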
15,841 | in this sectionwe learned some of the core data access mechanisms of pandas dataframes the data access mechanism of pandas are as simple and extensive to use as with numpy this ensures that we have various way to access our data data operations in subsequent of our bookthe pandas dataframe will be our data structure of choice for most data processing and wrangling operations so we would like to spend some more time exploring some important operations that can be performed on dataframes using specific supplied functions values attribute each pandas dataframe will have certain attributes one of the important attributes is values it is important as it allows us access to the raw values stored in the dataframe and if they all homogenous of the same kind then we can use numpy operations on them this becomes important when our data is mix of numeric and other data types and after some selections and computationswe arrive at the required subset of numeric data using the values attribute of the output dataframewe can treat it in the same way as numpy array this is very useful when working with feature sets in machine learning traditionallynumpy vectorized operations are much faster than function based operations on dataframes in [ ]df pd dataframe(np random randn( )columns=[' '' '' ']in [ ]df out[ ] - - - - - - - - - - in [ ]nparray df values in [ ]type(nparrayout[ ]numpy ndarray missing data and the fillna function in real-world datasetsthe data is seldom clean and polished we usually will have lot of issues with data quality (missing valueswrong values and so onone of the most common data quality issues is that of missing data pandas provides us with convenient function that allows us to handle the missing values of dataframe for demonstrating the use of the fillna functionwe will use the dataframe we created in the previous example and introduce missing values in it in [ ]df iloc[ , na in [ ]df out[ ] |
15,842 | - - - - - - - nan - - - in [ ]df fillna ( out[ ] - - - - - - - - - - here we have substituted the missing value with default value we can use variety of methods to arrive at the substituting value (meanmedianand so onwe will see more methods of missing value treatment (like imputationin subsequent descriptive statistics functions general practice of dealing with datasets is to know as much about them as possible descriptive statistics of dataframe give data scientists comprehensive look into important information about any attributes and features in the dataset pandas packs bunch of functionswhich facilitate easy access to these statistics consider the cities dataframe (city_datathat we consulted in the earlier section we will use pandas functions to gather some descriptive statistical information about the attributes of that dataframe as we only have three numeric columns in that particular dataframewe will deal with subset of the dataframe which contains only those three values in [ ]columns_numeric ['lat','lng','pop'in [ ]city_data[columns_numericmean(out[ ]lat lng pop dtypefloat in [ ]city_data[columns_numericsum(out[ ]lat + lng + pop + dtypefloat |
15,843 | in [ ]city_data[columns_numericcount(out[ ]lat lng pop dtypeint in [ ]city_data[columns_numericmedian(out[ ]lat lng pop dtypefloat in [ ]city_data[columns_numericquantile( out[ ]lat lng pop dtypefloat all these operations were applied to each of the columnsthe default behavior we can also get all these statistics for each row by using different axis this will give us the calculated statistics for each row in the dataframe in [ ]city_data[columns_numericsum(axis out[ ] + + + + + pandas also provides us with another very handy function called describe this function will calculate the most important statistics for numerical data in one go so that we don' have to use individual functions in [ ]city_data[columns_numericdescribe(out[ ]lat lng pop count + mean + std + min - - - + - - + + + max + |
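These summary statistics pair naturally with the fillna function discussed earlier: instead of a constant, a per-column statistic can be used as the fill value. A minimal sketch on a small hypothetical frame:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [4.0, 5.0, np.nan]})

# fillna accepts a Series: each column is filled with its own mean
df_imputed = df.fillna(df.mean())
print(df_imputed)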
15,844 | concatenating dataframes most data science projects will have data from more than one data source these data sources will mostly have data that' related in some way to each other and the subsequent steps in data analysis will require them to be concatenated or joined pandas provides rich set of functions that allow us to merge different data sources we cover small subset of such methods in this sectionwe explore and learn about two methods that can be used to perform all kinds of amalgamations of dataframes concatenating using the concat method the first method to concatenate different dataframes in pandas is by using the concat method the majority of the concatenation operations on dataframes will be possible by tweaking the parameters of the concat method let' look at couple of examples to understand how the concat method works the simplest scenario of concatenating is when we have more than one fragment of the same dataframe (which may happen if you are reading it from stream or in chunksin that casewe can just supply the constituent dataframes to the concat function as follows in [ ]city_data city_data sample( in [ ]city_data city_data sample( in [ ]city_data_combine pd concat([city_data ,city_data ]in [ ]city_data_combine out[ ]city city_ascii lat lng pop groningen groningen tambov tambov karibib karibib - focsani focsani pleven pleven indianapolis indianapolis - country iso iso province netherlands nl nld groningen russia ru rus tambov namibia nan nam erongo romania ro rou vrancea bulgaria bg bgr pleven united states of america us usa indiana another common scenario of concatenating is when we have information about the columns of same dataframe split across different dataframes then we can use the concat method again to combine all the dataframes consider the following example in [ ]df pd dataframe({'col '['col ''col ''col ''col ']'col '['col ''col ''col ''col ']'col '['col ''col ''col ''col ']'col '['col ''col ''col ''col ']}index=[ ] |
15,845 | in [ ]df out[ ]col col col col col col col col col col col col col col col col col col col col in [ ]df pd dataframe({'col '['col ''col ''col ''col ']'col '['col ''col ''col ''col ']'col '['col ''col ''col ''col ']}index=[ ]in [ ]pd concat([df ,df ]axis= out[ ]col col col col col col col col col col col nan nan nan col col col col nan nan nan col col col col col col col col col col col col col col nan nan nan nan col col col nan nan nan nan col col col database style concatenations using the merge command the most familiar way to concatenate data (for those acquainted with relational databasesis using the join operation provided by the databases pandas provides database friendly set of join operations for dataframes these operations are optimized for high performance and are often the preferred method for joining disparate dataframes joining by columnsthis is the most natural way of joining two dataframes in this methodwe have two dataframes sharing common column and we can join the two dataframes using that column the pandas library has full range of join operations (innerouterleftrightetc and we will demonstrate the use of inner join in this sub-section you can easily figure out how to do the rest of join operations by checking out the pandas documentation for this examplewe will break our original cities data into two different dataframesone having the city information and the other having the country information thenwe can join them using one of the shared common columns in [ ]country_data city_data[['iso ','country']drop_duplicates(in [ ]country_data shape out[ ]( in [ ]country_data head(out[ ]iso country afg afghanistan ald aland alb albania dza algeria asm american samoa |
15,846 | in [ ]del(city_data['country']in [ ]city_data merge(country_data'inner'head(out[ ]city city_ascii lat lng pop iso iso qal eh-ye now qal eh-ye af afg chaghcharan chaghcharan af afg lashkar gah lashkar gah af afg zaranj zaranj af afg tarin kowt tarin kowt af afg province country badghis afghanistan ghor afghanistan hilmand afghanistan nimroz afghanistan uruzgan afghanistan here we had common column in both the dataframesiso which the merge function was able to pick up automatically in case of the absence of such common nameswe can provide the column names to join onby using the parameter on of the merge function the merge function provides rich set of parameters that can be used to change its behavior as and when required we will leave it on you to discover more about the merge function by trying out few examples scikit-learn scikit-learn is one of the most important and indispensable python frameworks for data science and machine learning in python it implements wide range of machine learning algorithms covering major areas of machine learning like classificationclusteringregressionand so on all the mainstream machine learning algorithms like support vector machineslogistic regressionrandom forestsk-means clusteringhierarchical clusteringand many many moreare implemented efficiently in this library perhaps this library forms the foundation of applied and practical machine learning besides thisits easy-to-use api and code design patterns have been widely adopted across other frameworks toothe scikit-learn project was initiated as google summer of code project by david cournapeau the first public release of the library was in late it is one of the most active python projects and is still under active development with new capabilities and existing enhancements being added constantly scikit-learn is mostly written in python but for providing better performance some of the core code is written in cython it also uses wrappers around popular implementations of learning algorithms like logistic regression (using liblinearand support vector machine (using libsvmin our introduction of scikit-learn we will first go through the basic design principles of the library and then build on this theoretical knowledge of the package we will implement some of the algorithms on sample data to get you acquainted with the basic syntax we leverage scikit-learn extensively in subsequent so the intent here is to acquaint you with how the library is structured and its core components |
15,847 | core apis scikit-learn is an evolving and active projectas witnessed by its github repository statistics this framework is built on quite small and simple list of core api ideas and design patterns in this section we will briefly touch on the core apis on which the central operations of scikit-learn are based dataset representationthe data representation of most machine learning tasks are quite similar to each other very often we will have collection of data points represented by stacking of data point vectors basically considering dataseteach row in the dataset represents vector for specific data point observation data point vector contains multiple independent variables (or featuresand one or more dependent variables (response variablesfor exampleif we have linear regression problem which can be represented as [( xn)( )where the independent variables (featuresare represented by the xs and the dependent variable (response variableis represented by the idea is to predict by fitting model on the features this data representation resembles matrix (considering multiple data point vectors)and natural way to depict it is by using numpy arrays this choice of data representation is quite simple yet powerful as we are able to access the powerful functionalities and the efficient nature of vectorized numpy array operations in fact recent updates of scikit-learn even accept pandas dataframes as inputs instead of explicitly needing you to convert them to feature arraysestimatorsthe estimator interface is one of the most important components of the scikit-learn library all the machine learning algorithms in the package implement the estimator interface the learning process is handled in two-step process the first step is the initialization of the estimator objectthis involves selecting the appropriate class object for the algorithm and supplying the parameters or hyperparameters for it the second step is applying the fit function to the data supplied (feature set and response variablesthe fit function will learn the output parameters of the machine learning algorithm and expose them as public attributes of the object for easy inspection of the final model the data to the fit function is generally supplied in the form of an input-output matrix pair in addition to the machine learning algorithmsseveral data transformation mechanisms are also implemented using the estimators apis (for examplescaling of featurespcaetc this allows for simple data transformation and simple mechanism to expose transformation mechanisms in consistent way predictorsthe predictor interface is implemented to generate predictionsforecastsetc using learned estimator for unknown data for examplein the case of supervised learning problemthe predictor interface will provide predicted classes for the unknown test array supplied to it predictor interface also contains support for providing quantified values of the output it supplies requirement of predictor implementation is to provide score functionthis function will provide scalar value for the test input provided to it which will quantify the effectiveness of the model used such values will be used in the future for tuning our machine learning models |
15,848 | transformerstransformation of input data before learning of model is very common task in machine learning some data transformations are simplefor example replacing some missing data with constanttaking log transformwhile some data transformations are similar to learning algorithms themselves (for examplepcato simplify the task of such transformationssome estimator objects will implement the transformer interface this interface allows us to perform non-trivial transformation on the input data and supply the output to our actual learning algorithm since the transformer object will retain the estimator used for transformationit becomes very easy to apply the same transformation to unknown test data using the transform function advanced apis in the earlier section we saw some of the most basic tenets of the scikit-learn package in this section we will briefly touch on the advanced constructs that are built on those basics these advanced set of apis will often help data scientists in expressing complex set of essential operations using simple and stream-lined syntax meta estimatorsthe meta estimator interface (implemented using the multiclass interfaceis collection of estimators which can be composed by accumulating simple binary classifiers it allows us to extend the binary classifiers to implement multi-classmulti-labelmulti-regressionand multi-class-multi-label classifications this interface is important as these scenarios are common in modern day machine learning and the capability to implement this out-of-the-box reduces the programming requirements for data scientists we should also remember that most binary estimators in the scikit-learn library have multiclass capabilities built in and we won' be using the meta-estimators unless we need custom behavior pipeline and feature unionsthe steps of machine learning are mostly sequential in nature we will read in the dataapply some simple or complex transformationsfit an appropriate modeland predict using the model for unseen data another hallmark of the machine learning process is the iteration of these steps multiple times due to its iterative natureto arrive at the best possible model and then deploy the same it is convenient to chain these operations together and repeat them as single unit instead of applying operations piecemeal this concept is also known as machine learning pipelines scikit-learn provides pipeline api to achieve similar purpose pipeline(object from the pipeline module can chain multiple estimators together (transformationsmodelingetc and the resultant object can be used as an estimator itself in addition to the pipeline apiwhich applies these estimators in sequential methodwe also have access to featureunion apiwhich will perform specified set of operation in parallel and show the output of all the parallel operations the use of pipelines is fairly advanced topic and it will be made clearerwhen we specifically see an example in the subsequent |
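As a quick preview of the pipeline idea, the following sketch chains a feature scaling step and a classifier into a single estimator-like object. The dataset and the choice of scaler and classifier here are arbitrary, picked only to show the mechanics.

from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# the pipeline behaves like any other estimator: fit, predict, score
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X[:400], y[:400])
print(pipe.score(X[400:], y[400:]))    # accuracy on the held-out slice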
15,849 | model tuning and selectioneach learning algorithm will have bunch of parameters or hyperparameters associated with it the iterative process of machine learning aims at finding the best set of parameters that give us the model having the best performance for examplethe process of tuning various hyperparameters of random forest algorithmto find the set which gives the best prediction accuracy (or any other performance metricthis process sometimes involves traversing through the parameter spacesearching for the best parameter set do note that even though we mention the term parameter herewe typically indicate the hyperparameters of model scikit-learn provides useful apis that help us navigate this parameter space easily to find the best possible parameter combinations we can use two metaestimators--gridsearchcv and randomizedsearchcv--for facilitating the search of the best parameters gridsearchcvas the name suggestsinvolves providing grid of possible parameters and trying each possible combination among them to arrive at the best one an optimized approach often is to use random search through the possible parameter setthis approach is provided by the randomizedsearchcv api it samples the parameters and avoids the combinatorial explosions that can result in the case of higher number of parameters in addition to the parameter searchthese model selection methods also allow us to use different cross-validation schemes and score functions to measure performance scikit-learn exampleregression models in the first we discussed an example which involved the task of classification in this sectionwe will tackle another interesting machine learning problemthat of regression keep in mind the focus here is to introduce you to the basic steps involved in using some of the scikit-learn library apis we will not try to over-engineer our solution to arrive at the best model future will focus on those aspects with realworld datasets for our regression examplewe will use one of the datasets bundled with the scikit-learn librarythe diabetes dataset the dataset the diabetes dataset is one of the bundled datasets with the scikit-learn library this small dataset allows the new users of the library to learn and experiment various machine learning conceptswith well-known dataset it contains observations of baseline variablesagesexbody mass indexaverage blood pressure and six blood serum measurements for diabetes patients the dataset bundled with the package is already standardized (scaled) they have zero mean and unit norm the response (or target variableis quantitative measure of disease progression one year after baseline the dataset can be used to answer two questionswhat is the baseline prediction of disease progression for future patientswhich independent variables (featuresare important factors for predicting disease progressionwe will try to answer the first question here by building simple linear regression model let' get started by loading the data in [ ]from sklearn import datasets in [ ]diabetes datasets load_diabetes(in [ ] diabetes target in [ ] diabetes data |
15,850 | in [ ] shape out[ ]( lin [ ] [: out[ ]array([ - - - - - ][- - - - - - - - - ] - - - - - - ][- - - - - - ] - - - - - ]in [ ] [: out[ ]array( ]since we are using the data in the form of numpy arrayswe don' get the name of the features in the data itself but we will keep the reference to the variable names as they may be needed later in our process or just for future reference in [ ]feature_names=['age''sex''bmi''bp'' '' '' '' '' '' 'for prediction of the response variable herewe will learn lasso model lasso model is an extension of the normal linear regression model which allows us to apply regularization to the model simply puta lasso regression will try to minimize the number of independent variables in the final model this will give us the model with the most important variables only (feature selectionin [ ]from sklearn import datasets from sklearn linear_model import lasso import numpy as np from sklearn import linear_modeldatasets from sklearn model_selection import gridsearchcv we will split our data into separate test and train sets of data (train is used to train the model and test is used for model performance testing and evaluationin [ ]diabetes datasets load_diabetes(x_train diabetes data[: y_train diabetes target[: x_test diabetes data[ :y_test diabetes data[ : |
15,851 | then we will define the model we want to use and the parameter space for one of the model' hyperparameters here we will search the parameter alpha of the lasso model this parameter basically controls the strictness our regularization in [ ]lasso lasso(random_state= alphas np logspace(- - then we will initialize an estimator that will identify the model to be used here we notice that the process is identical for both learning single model and grid search of modelsi they both are objects of the estimator class in [ ]estimator gridsearchcv(lassodict(alpha=alphas)in [ ]estimator fit(x_trainy_trainout[ ]gridsearchcv(cv=noneerror_score='raise'estimator=lasso(alpha= copy_x=truefit_intercept=truemax_iter= normalize=falsepositive=falseprecompute=falserandom_state= selection='cyclic'tol= warm_start=false)fit_params={}iid=truen_jobs= param_grid={'alpha'array( - - - - - - - ])}pre_dispatch=' *n_jobs'refit=truereturn_train_score=truescoring=noneverbose= this will take our train set and learn group of lasso models by varying the value of the alpha hyperparameter the gridsearchcv object will also score the models that we are learning and we can us the best_estimator_ attribute to identify the model and the optimal value of the hyperparameter that gave us the best score also we can directly use the same object for predicting with the best model on unknown data in [ ]estimator best_score_ out[ ] in [ ]estimator best_estimator_ out[ ]lasso(alpha= copy_x=truefit_intercept=truemax_iter= normalize=falsepositive=falseprecompute=falserandom_state= selection='cyclic'tol= warm_start=falsein [ ]estimator predict(x_testout[ ]array( ]the next steps involve reiterating the whole process making changes to the data transformationmachine learning algorithmtuning hyperparameters of the algorithm etc but the basic steps will remain the same we will go into the elaborate details of these processes in future of the book here we will conclude our introduction to the scikit-learn framework and encourage you to check out their extensive documentation at stable version of scikit-learn |
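As a closing note on this example, one natural next step that the walkthrough stops short of is scoring the tuned estimator on the held-out test split. The sketch below assumes the variables from the walkthrough (diabetes, x_train, x_test, estimator) are still in scope; also note that the listing above appears to take the test targets from diabetes.data, whereas the response values live in diabetes.target, so the sketch rebuilds y_test from there.

from sklearn.metrics import mean_squared_error, r2_score

# use the same split point as x_test, but take the response values
y_test = diabetes.target[x_train.shape[0]:]

test_predictions = estimator.predict(x_test)
print('MSE:', mean_squared_error(y_test, test_predictions))
print('R2 :', r2_score(y_test, test_predictions))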
15,852 | neural networks and deep learning deep learning has become one of the most well-known representations of machine learning in the recent years deep learning applications have achieved remarkable accuracy and popularity in various fields especially in image and audio related domains python is the language of choice when it comes to learning deep networks and complex representations of data in this sectionwe briefly discuss anns (artificial neural networksand deep learning networks then we will move on to the popular deep learning frameworks for python sincethe mathematics involved behind anns is quite advanced we will keep our introduction minimal and focused on the practical aspects of learning neural network we recommend you check out some standard literature on the theoretical aspects of deep learning and neural networks like deep learning by goodfellow and bengioif you are more interested in its internal implementations the following section gives brief refresher on neural networks and deep learning based on what we covered in detail in artificial neural networks deep learning can be considered as an extension of artificial neural networks (annsneural networks were first introduced as method of learning by frank rosenblatt in although the learning model called perceptron was different from modern day neural networkswe can still regard the perceptron as the first artificial neural network artificial neural networks loosely work on the principle of learning distributed distribution of data the underlying assumption is that the generated data is result of nonlinear combination of set of latent factors and if we are able to learn this distributed representation then we can make accurate predictions about new set of unknown data the simplest neural network will have an input layera hidden layer ( result of applying nonlinear transformation to the input data)and an output layer the parameters of the ann model are the weights of each connection that exist in the network and sometimes bias parameter this simple neural network is represented as shown in figure - |
figure - simple neural network

This network has an input vector feeding into a single hidden layer, which in turn feeds a binary output layer. The process of learning an ANN involves the following steps. First, define the structure or architecture of the network we want to use. This is critical because if we choose a very extensive network containing a lot of neurons/units (each circle in figure - can be labeled as a neuron or unit), then we can overfit our training data and our model won't generalize well. Second, choose the nonlinear transformation to be applied to each connection; this transformation controls the activeness of each neuron in the network.

15,854 | decide on loss function we will use for the output layer this is applicable in the case when we have supervised learning problemi we have an output label associated with each of the input data points learning the parameters of the neural network determine the values of each connection weight each arrow in figure - carries connection weight we will learn these weights by optimizing our loss function using some optimization algorithm and method called backpropagation we will not go into the details of backpropagation hereas it is beyond the scope of the present we will extend these topics when we actually use neural networks deep neural networks deep neural networks are an extension of normal artificial neural networks there are two major differences that deep neural networks haveas compared to normal neural networks number of layers normal neural networks are shallowwhich means that they will have at max one or two hidden layers whereas the major difference in deep neural networks is that they have lot more hidden layers and this number is usually very large for examplethe google brain project used neural network that had millions of neurons diverse architectures based on what we discussed in we have wide variety of deep neural network architectures ranging from dnnscnnsrnnsand lstms recent research have even given us attention based networks to place special emphasis on specific parts of deep neural network hence with deep learningwe have definitely gone past the traditional ann architecture computation power the larger the network and the more layers it hasthe more complex the network becomes and training it takes lot of time and resources deep neural networks work best on gpu based architectures and take far less time to train than on traditional cpusalthough recent improvements have vastly decreased training times python libraries for deep learning python is language of choiceacross both academia and enterprisesto develop and use normal/deep neural networks we will learn about two packages--theano and tensorflow--which will allow us to build neural network based models on datasets in addition to these we will learn to use keraswhich is high level interface to building neural networks easily and has concise apicapable of running on top of both tensorflow and theano besides thesethere are some more excellent frameworks for deep learning we also recommend you to check out pytorchmxnetcaffe (recently caffe was released)and lasagne |
15,855 | theano the first library popularly used for learning neural networks is theano although by itselftheano is not traditional machine learning or neural network learning frameworkwhat it provides is powerful set of constructs that can be used to train both normal machine learning models and neural networks theano allows us to symbolically define mathematical functions and automatically derive their gradient expression this is one of the frequently used steps in learning any machine learning model using theanowe can express our learning process with normal symbolic expressions and then theano can generate optimized functions that carry out those steps training of machine learning models is computationally intensive process especially neural networks have steep computational requirements due to both the number of learning steps involved and the non-linearity involved in them this problem is increased manifold when we decide to learn deep neural network one of the important reasons of theano being important for neural network learning is due to its capability to generate code which executes seamlessly on both cpus and gpus thus if we specify our machine learning models using theanowe are also able to get the speed advantage offered by modern day gpus in the rest of this sectionwe see how we can install theano and learn very simple neural network using the expressions provided by theano installation theano can be easily installed by using the python package manager pip or conda pip install theano often the pip installer fails on windowshence we recommend using conda install theano on the windows platform we can verify the installation by importing our newly installed package in python shell in [ ]import theano if you get no errorsthen this indicates you have successfully installed the theano library in your system theano basics (barebones versionin this sectionwe discuss some basics of the symbolic abilities offered by theano and how those can be leveraged to build some simple learning models we will not directly use theano to build neural network in this sectionbut you will know how to carry out symbolic operations in theano besides thisyou will see in the coming section that building neural networks is much easier when we use higher level library such as keras theano expresses symbolical expressions using something called tensors tensor in its simplest definition is multi-dimensional array so zero-order tensor array is scalara one-order tensor is vectorand two-order tensor is matrix now we look at how we can work on zero-order tensor or scalar by using constructs provided by theano in [ ]import numpy import theano tensor as from theano import function dscalar(' ' dscalar(' ' function([xy]zf( out[ ]array( |
15,856 | herewe defined symbolical operation (denoted by the symbol zand then bound the input and the operations in function this was achieved by using the function construct provided by theano contrast it with the normal programming paradigm and we would need to define the whole function by ourselves this is one of the most powerful aspects of using symbolical mathematical package like theano using construct similar to thesewe can define complex set of operations graph structuretheano represents symbolical mathematical operations as graphs so when we define an operation like zas depicted in the earlier exampleno calculation happens instead what we get is graph representation of the expression these graphs are made up of applyopand variable nodes the apply node represents application of some op on some set of variable nodes so if we wanted to visualize the operation we defined in the preceding step as graphit would look like the depiction in figure - (sourcefigure - graph structure of theano operation theano has various low-level tensor apis for building neural network architectures using tensor arithmetic and ops this is available in the theano tensor nnet module and you can check out relevant functions at include conv for convolutional neural networks and nnet for regular neural network operations this concludes our basic introduction to theano we kept it simple because we will rarely be using theano directly and instead rely on high-level libraries like keras to build powerful deep neural networks with minimal code and focus more on solving problems efficiently and effectively |
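Before leaving Theano entirely, one construct deserves a parting glance because it is what makes symbolic graphs so useful for training models: automatic differentiation. A minimal sketch of taking a symbolic gradient:

import theano
import theano.tensor as T

x = T.dscalar('x')
y = x ** 2                       # a symbolic expression; nothing is computed yet
gy = T.grad(y, x)                # symbolic derivative dy/dx

f = theano.function([x], gy)     # compile the graph into a callable
print(f(4.0))                    # 8.0 -- gradient of x**2 evaluated at x = 4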
tensorflow

TensorFlow is an open source software library for machine learning released by Google in November 2015. TensorFlow is based on the internal system that Google uses to power its research and production systems. It is quite similar to Theano and can be considered as Google's attempt to provide an upgrade to Theano, offering easy-to-use interfaces into deep learning, neural networks, and machine learning with a strong focus on rapid prototyping and model deployment. Like Theano, it provides constructs for symbolical mathematics, which are then translated into computational graphs; these graphs are then compiled into lower-level code and executed efficiently. Like Theano, TensorFlow also supports CPUs and GPUs seamlessly; in fact, TensorFlow works best on the TPU, known as the Tensor Processing Unit, which was developed by Google. In addition to having a Python API, TensorFlow also exposes APIs for the C++, Haskell, Java, and Go languages. One of the major differences TensorFlow has as compared to Theano is its support for higher-level operations, which ease the process of machine learning, and its focus on model development as well as deployment to production and model serving via multiple mechanisms. Theano is not as intuitive to use, which is another area TensorFlow aims to fill with its easy-to-understand implementations and extensive documentation. The constructs provided by TensorFlow are quite similar to those of Theano, so we will not be reiterating them; you can always refer to the official TensorFlow documentation for more details.

installation

TensorFlow works well on Linux and Mac systems but was not directly available on Windows due to internal dependencies on Bazel. The good news is that it has since been successfully launched for Windows platforms too. It requires a reasonably recent version of Python for its execution. The library can be installed by using pip or by using the conda install function. Note that for a successful installation of TensorFlow, we will also require updated dask and pandas libraries on our system.

conda install tensorflow

Once we have installed the library, we can verify a successful install by checking it in the IPython console with the following commands.

In [1]: import tensorflow as tf
        hello = tf.constant('Hello, TensorFlow!')
        sess = tf.Session()
        print(sess.run(hello))

b'Hello, TensorFlow!'

The message verifies our successful install of the TensorFlow library. You are also likely to see a bunch of warning messages, but you can safely ignore them. The reason for those messages is that the default TensorFlow build is not built with support for some CPU instruction sets, which may slow down the process of learning a bit.
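To see the same define-then-run workflow on a slightly bigger graph, here is a minimal sketch using the TensorFlow 1.x style API shown above; the constant values are arbitrary.

import tensorflow as tf

# build a small symbolic graph; no computation happens here
a = tf.constant(5.0)
b = tf.constant(3.0)
c = a * b + 2.0

# execution happens inside a session, which runs the compiled graph
with tf.Session() as sess:
    print(sess.run(c))           # 17.0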
15,858 | keras keras is high-level deep learning framework for pythonwhich is capable of running on top of both theano and tensorflow developed by francois cholletthe most important advantage of using keras is the time saved by its easy-to-use but powerful high level apis that enable rapid prototyping for an idea keras allows us to use the constructs offered by tensorflow and theano in much more intuitive and easy-to-use way without writing excess boilerplate code for building neural network based models this ease of flexibility and simplicity is the major reason for popularity of keras in addition to providing an easy access to both of these somewhat esoteric librarieskeras ensures that we are still able to take the advantages that these libraries offer in this sectionyou learn how to install keraslearn about the basics of model development using kerasand then learn how to develop an example neural network model using keras and tensorflow installation keras is easy to install using the familiar pip or conda command we will assume that we have both tensorflow and theano installedas they will be required to be used as backend for keras model development conda install keras we can check for the successful installation of keras in our environment by importing it in ipython upon successful import it will display the current backendwhich is usually theano by default so you need to go to the keras json fileavailable under the keras directory under your user account directory our config file contents are as follows {"epsilon" - "floatx""float ""backend""tensorflow""image_data_format""channels_last"you can refer to keras from theano to tensorflow once the backend in specified in the config fileon importing kerasyou should see the following message in your ipython shell in [ ]import keras using tensorflow backend keras basics the main abstraction for neural network is model in keras model is collection of neurons that will define the structure of neural network there are two different types of models sequential modelsequential models are just stacks of layers these layers can together define neural network if you refer back to figure - when we introduced neural networksthat network can be defined by specifying three layers in sequential keras model we will see an example of sequential model later in this section functional api modelsequential models are very useful but sometimes our requirement will exceed the constructs possible using sequential models this is where the function model apis will come in to the picture this api allows us to specify complex networks networks that can have multiple outputsnetworks with shared layersetc these kinds of models are needed when we need to use advanced neural networks like convolutional neural networks or recurrent neural networks |
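For comparison with the sequential style, the following sketch expresses a small network using the functional API (assuming a Keras 2 style interface). The layer sizes are placeholders that happen to match the breast cancer example built later in this section, which has 30 input features.

from keras.models import Model
from keras.layers import Input, Dense

# layers are called on tensors, and a Model ties inputs to outputs
inputs = Input(shape=(30,))
hidden = Dense(15, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(hidden)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='binary_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])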
15,859 | model building the model building process with keras is three-step process the first step is specifying the structure of the model this is done by configuring the base model that we want to usewhich is either sequential model or functional model once we have identified base model for our problem we will further enrich that model by adding layers to the model we will start with the input layerto which we will feed our input data feature vectors the subsequent layers to be added to the model are based on requirements of the model keras provides bunch of layers which can be added to the model (hidden layersfully connectedcnnlstmrnnand so on)we will describe some of them while running through our neural network example we can stack these layers together in complex manner and add the final output layerto arrive at our overall model architecture the next step in the model learning process is the compilation of the model architecture that we defined in the first step based on what we learned in the preceding sections on theano and tensorflowmost of the model building steps are symbolic and the actual learning is deferred until later in the compilation stepwe configure the learning process the learning processin addition to the structure of the modelneeds to specify the following additional three important parametersoptimizerwe learned in the first that the simplest explanation of learning process is the optimization of loss function once we have the model and the loss functionwe can specify the optimizer that will identify the actual optimization algorithm or program we will useto train the model and minimize the loss or error this could be string identifier to the already implemented optimizersa functionor an object to the optimizer class that we can implement loss functiona loss functionalso known as an objective functionwill specify the objective of minimizing loss/errorwhich our model will leverage to get the best performance over multiple epochs\iterations it again can be string identifier to some pre-implemented loss functions like cross-entropy loss (classificationor mean squared error (regressionor it can be custom loss function that we can develop performance metricsa metric is quantifiable measure of the learning process while compiling modelwe can specify performance metric we want to track (for exampleaccuracy for classification model)which will educate us about the effectiveness of the learning process this helps in evaluating model performance the last step in the model building process is executing the compiled method to start the training process this will execute the lower level compiled code to find out the necessary parameters and weights of our model during the training process in keraslike scikit-learnit is achieved by calling the fit function on our model we can control the behavior of the function by supplying appropriate arguments you can learn about these arguments at learning an example neural network we will conclude this section by building simple working neural network model on one of the datasets that comes bundled with the scikit-learn package we will use the tensorflow backend in our examplebut you can try to use theano backend and verify the execution of model on both the backends for our examplewe will use the wisconsin breast cancer datasetwhich is bundled with the scikit-learn library the dataset contains attribute drawn from digitized image of fine needle aspirate of breast mass they describe characteristics of the cell nuclei present in the image on 
the basis of those attributes, the mass can be marked as malignant or benign. the goal of our classification system is to predict that label, so let's get started by loading the dataset.
15,860 | in [ ]from sklearn datasets import load_breast_cancer cancer load_breast_cancer(x_train cancer data[: y_train cancer target[: x_test cancer data[ :y_test cancer target[ :the next step of the process is to define the model architecture using the keras model class we see that our input vector is having attributes so we will have shallow network having one hidden layer of half the units (neurons) we will have units in the hidden layer we add one unit output layer to predict either or based on whether the input data point is benign or malignant this is simple neural network and doesn' involve deep learning in [ ]import numpy as np from keras models import sequential from keras layers import densedropout in [ ]model sequential(model add(dense( input_dim= activation='relu')model add(dense( activation='sigmoid')here we have defined sequential keras modelwhich is having dense hidden layer of units the dense layer means fully connected layer so it means that each of those units (neuronsis fully connected to the input features the output layer for our example is dense layer with the sigmoid activation the sigmoid activation is used to convert real valued input into binary output ( or once we have defined the model we will then compile the model by supplying the necessary optimizerloss functionand the metric on which we want to evaluate the model performance in [ ]model compile(loss='binary_crossentropy'optimizer='rmsprop'metrics=['accuracy']here we used loss function of binary_crossentropywhich is standard loss function for binary classification problems for the optimizerwe used rmspropwhich is an upgrade from the normal gradient descent algorithm the next step is to fit the model using the fit function in [ ]model fit(x_trainy_trainepochs= batch_size= epoch / / [============================== loss acc epoch / / [============================== loss acc epoch / / [============================== loss acc epoch / / [============================== loss acc herethe epochs parameter indicates one complete forward and backward pass of all the training examples the batch_size parameter indicates the total number of samples which are propagated through the nn model at time for one backward and forward pass for training the model and updating the gradient |
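since several numeric values in the snippets above (the layer sizes, the train/test split index, and the epochs and batch_size arguments) were lost during extraction, the following is a minimal self-contained sketch of the same workflow. the 30 input features come from the breast cancer dataset itself and the 15-unit hidden layer follows the "half the units" description in the text; the split index, epochs, and batch size shown here are illustrative assumptions, not the original values.

from sklearn.datasets import load_breast_cancer
from keras.models import Sequential
from keras.layers import Dense

# load the dataset; each sample has 30 numeric features
cancer = load_breast_cancer()
split = 400  # assumption: an illustrative train/test split index
x_train, y_train = cancer.data[:split], cancer.target[:split]
x_test, y_test = cancer.data[split:], cancer.target[split:]

# shallow network: one hidden layer with half as many units as input features
model = Sequential()
model.add(Dense(15, input_dim=30, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# configure the learning process: loss, optimizer, and the metric to track
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# train; epochs and batch_size here are assumptions for illustration only
model.fit(x_train, y_train, epochs=20, batch_size=32)

# equivalent of the text's predict_classes() call on newer keras versions
predictions = (model.predict(x_test) > 0.5).astype(int).ravel()

with this in place, the evaluation shown next (using scikit-learn's metrics module) can be run on the predictions array unchanged.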
15,861 | thus if you have observations and your batch size is each epoch will consist of iterations where observations (data pointswill be passed through the network at time and the weights on the hidden layer units will be updated however we can see that the overall loss and training accuracy remains the same which means the model isn' really learning anything from the looks of itthe api for keras again follows the convention for scikit-learn modelshence we can use the predict function to predict for the data points in the test set in fact we use predict_classes to get the actual class label predicted for each test data instance in [ ]predictions model predict_classes(x_test / [===============eta let' evaluate the model performance by looking at the test data accuracy and other performance metrics like precisionrecalland score do not despair if you do not understand some of these termsas we will be covering them in detail in for nowyou should know that scores closer to indicate better results an accuracy of would indicate model accuracywhich is perfection luckilyscikit-learn provides us with necessary performance metric measuring apis in [ ]from sklearn import metrics print('accuracy:'metrics accuracy_score(y_true=y_testy_pred=predictions)print(metrics classification_report(y_true=y_testy_pred=predictions))score accuracy precision recall -score support avg total from the previous performance metricswe can see that even though model accuracy is %for data points having cancer (malignanti label it misclassifies them as ( instancesand remaining instances where class label is (benign)it classifies them perfectly thus this model hasn' learned much and predicts every response as benign (label can we do better than thisthe power of deep learning the idea of deep learning is to use multiple hidden layers to learn latent and complex data patternsrelationshipsand representations to build model that learns and generalizes well on the underlying data let' take the previous example and convert it to fully connected deep neural network (dnnby introducing two more hidden layers the following snippet builds and trains dnn with the same configuration as our previous experiment only with the addition of two new hidden layers in [ ]model sequential(model add(dense( input_dim= activation='relu')model add(dense( activation='relu')model add(dense( activation='relu')model add(dense( activation='sigmoid')model compile(loss='binary_crossentropy'optimizer='rmsprop'metrics=['accuracy']model fit(x_trainy_train |
15,862 | epochs= batch_size= epoch / / [============================== loss acc epoch / / [============================== loss acc epoch / / [============================== loss acc epoch / / [============================== loss acc epoch / / [============================== loss acc we see remarkable jump in the training accuracy and drop in the loss based on the preceding training output this is indeed excellent and seems promisinglet' check out our model performance on the test data now in [ ]predictions model predict_classes(x_testprint('accuracy:'metrics accuracy_score(y_true=y_testy_pred=predictions)print(metrics classification_report(y_true=y_testy_pred=predictions))score accuracy precision recall -score support avg total we achieve an overall accuracy and score of and we can see that we also have an score of as compared to from the previous modelfor class label (malignantthus you can clearly get feel of the power of deep learningwhich is evident by just introducing more hidden layers in our networkwhich enabled our model to learn better representations of our data try experimenting with other architectures or even introducing regularization aspects like dropout thusin this sectionyou learned about some of the important frameworks relevant to neural networks and deep learning we will revisit the more advanced aspects of these frameworks in subsequent when we work on real-world case studies text analytics and natural language processing in the sections till now we have mostly dealt with structured data formats and datasets data in which we have the observations occurring as rows and the features or attributes for each of those observations occurring as columns this format is most convenient for machine learning algorithms but the problem is that raw data is not always available in this easy-to-interpret format this is the case with unstructured data formats like audiovideotextual datasets in this sectionwe try to get brief overview of the frameworks we can use to solve this problem if the data that we are working with is unstructured text data we will not go into detailed examples of using these frameworks and if you are interestedwe recommend checking out of this bookwhich deals with real-world case study on analyzing text data |
15,863 | the natural language tool kit
perhaps the most important python library for working with text data is nltk, or the natural language tool kit. this section introduces nltk and its important modules: we go over the installation procedure of the library and a brief description of its important modules.
installation and introduction
the nltk package can be installed in the same way as most of the other packages used in this book, which is by using the pip or conda command:
conda install nltk
we can verify the installation by importing the package in an ipython/python shell:
in [ ]: import nltk
there's an important difference for the nltk library as compared to other standard libraries. in the case of other libraries, in general, we don't need to download any auxiliary data, but for the nltk library to work to its full potential, we require some auxiliary data, which is mostly various corpora. this data is leveraged by multiple functions and modules in the library. we can download this data by executing the following command in the python shell:
in [ ]: nltk.download()
this command will give us the screen shown in figure - , where we can select the additional data we want to install and select the installation location. we will select to install all the additional data and packages available.
figure - nltk download option
15,864 | you can also choose to download all necessary datasets without the gui by using the following command from the ipython or python shell nltk download('all'halt_on_error=falseonce the download is finished we will be able to use all the necessary functionalities and the bundled data of the nltk package we will now take look at the major modules of nltk library and introduce the functionality that each of them provides corpora the starting point of any text analytics process is the process of collecting the documents of interest in single dataset this dataset is central to the next steps of processing and analysis this collection of documents is generally called corpus multiple corpus datasets are called corpora the nltk module nltk corpus provides necessary functions that can be used to read corpus files in variety of formats it supports the reading of corpora from the datasets bundled in nltk package as well as external corpora tokenization tokenization is one of the core steps in text pre-processing and normalization each text document has several components like paragraphssentencesand words that together make up the document the process of tokenization is used to break down the document into these smaller components this tokenization can be into sentenceswordsclausesand so on the most popular way to tokenize any document is by using sentence tokenization and\or word tokenization the nltk tokenize module of the nltk library provides functionality that enables efficient tokenization of any textual data tagging text document is constructed based on various grammatical rules and constructs the grammar depends on the language of the text document each language' grammar will contain different entities and parts of speech like nounspronounsadjectivesadverbsand so on the process of tagging will involve getting text corpustokenizing the text and assigning metadata information like tags to each word in the corpora the nltk tag module contains implementation of different algorithms that can be used for such tagging and other related activities stemming and lemmatization word can have several different forms based on what part of speech it is representing consider the word flyit can be present in various forms in the same textlike flyingfliesflyerand so on the process of stemming is used to convert all the different forms of word in to the base formwhich is known as the root step lemmatization is similar to stemming but the base form is known as the root word and it' always semantically and lexicographically correct word this conversion is crucialas lot of times the core word contains more information about the documentwhich can be diluted by these different forms the nltk module nltk stem contains different techniques that can be used for stemming and lemmatizing corpus |
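to make the tokenization, tagging, and stemming/lemmatization modules described above concrete, here is a small hedged sketch. the sample sentence is made up, and it assumes the auxiliary data (the punkt tokenizer models, the perceptron tagger, and wordnet) has already been downloaded via nltk.download().

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The flying foxes were flying over the flyer."  # made-up sample sentence

# tokenization: break the text into sentences and words
sentences = nltk.sent_tokenize(text)
words = nltk.word_tokenize(text)

# tagging: attach part-of-speech tags to each token
pos_tags = nltk.pos_tag(words)

# stemming and lemmatization: reduce words to their base forms
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
stems = [stemmer.stem(w) for w in words]
lemmas = [lemmatizer.lemmatize(w, pos='v') for w in words]

print(sentences)
print(pos_tags)
print(stems)
print(lemmas)

note how the stemmer may produce truncated roots while the lemmatizer returns lexicographically valid words, which is exactly the distinction made in the text above.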
15,865 | chunking chunking is process which is similar to parsing or tokenization but the major difference is that instead of trying to parse each wordwe will target phrases present in the document consider the sentence "the brown fox saw the yellow dogin this sentencewe have two phrases which are of interest the first is the phrase "the brown fox,which is noun phrase and the second one is the phrase "the yellow dog,which again is noun phrase by using the process of chunkingwe are able to tag phrases with additional parts of speech informationwhich is important for understanding the structure of the document the nltk module nltk chunk consists of necessary techniques that can be used for applying the chunking process to our corpora entiment sentiment or emotion analysis is one of the most recognizable applications on text data sentiment analysis is the process of taking text document and trying to determine the opinion and polarity being represented by that document polarity in the reference of text document can mean the emotione positivenegativeor neutral being represented by the data the sentiment analysis on textual data can be done using different algorithms and at different levels of text segmentation the nltk sentiment package is the module that can be used to perform different sentiment analyses on text documents check out for real-world case study on sentiment analysis lassification/clustering classification of text documents is supervised learning problemas we explained in the first classification of text documents may involve learning the sentimenttopicthemecategoryand so on of several text documents (corpusand then using the trained model to label unknown documents in the future the major difference from normal structured data comes in the form of feature representations of unstructured text we will be using clustering involves grouping together similar documents based on some similarity measurelike cosine similaritybm distanceor even semantic similarity the nltk classify and nltk cluster modules are typically used to perform these operations once we do the necessary feature engineering and extraction other text analytics frameworks typicallynltk is our go-to library for dealing with text databut the python ecosystem also contains other libraries that can be useful in dealing with textual data we will briefly mention some of these libraries so that you get good grasp of the toolkit that you can arm yourself with when dealing with unstructured textual data patternthe pattern framework is web mining module for the python programming language it has tools for web mining (extracting data from googletwittera web crawleror an html dom parser)information retrievalnlpmachine learningsentiment analysis and network analysisand visualization unfortunatelypattern currently works best on python and there is no official port for python gensimthe gensim frameworkwhich stands for generate similaris python library that has core purpose of topic modeling at scalethis can be used to extract semantic topics from documents the focus of gensim is on providing efficient topic modeling and similarity analysis it also contains python implementation of google' popular word vec model |
15,866 | textblobthis is another python library that promises simplified text processing it provides simple api for doing common text processing tasks including parts of speech taggingtokenizationphrase extractionsentiment analysisclassificationtranslationand much morespacythis is recent addition to the python text processing landscape but an excellent and robust framework nonetheless the focus of spacy is industrial strength natural language processingso it targets efficient text analytics for largescale corpora it achieves this efficiency by leveraging carefully memory-managed operations in cython we recommend using spacy for natural language processing and you will also see it being used extensively for our text normalization process in statsmodels statsmodels is library for statistical and econometric analysis in python the advantage of languages like is that it' statistically focused language with lot of capabilities it consists of easy-to-use yet powerful models that can be used for statistical analysis and modeling however from deploymentintegrationand performance aspectsdata scientists and engineers often prefer python but it doesn' have the power of easy-to-use statistical functions and libraries like the statsmodels library aims to bridge this gap for python users it provides the capabilities for statisticalfinancial and econometric operations with the aim of combining the advantages of python with the statistical powers of languages like hence users familiar with rsasstataspssand so on who might want similar functionality in python can use statsmodels the initial statsmodel package was developed by jonathan taylora statistician at stanfordas part of scipy under the name models improving this codebase was then accepted as scipy-focused project for the google summer of code in and again in the current package is available as scikit or an addon package for scipy we recommend you to check out the paper by seaboldskipperand josef perktold"statsmodelseconometric and statistical modeling with python,proceedings of the th python in science conference installation the package can be installed using pip or conda install and the following commands pip install statsmodels conda install - conda-forge statsmodels modules in this sectionwe briefly cover the important modules that comprise the statsmodel package and the capability those models provides this should give you enough idea of what to leverage to build statistical models and perform statistical analysis and inference distributions one of the central ideas in statistics is the distributions of statistical datasets distributions are listing or function that assigns probability value to all the possible values of the data the distributions module of the statsmodels package implements some important functions related to statistical distribution including sampling from the distributiontransformations of distributionsgenerating cumulative distribution functions of important distributionsand so on |
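as a small illustration of the distributions module just described, the following hedged sketch builds an empirical cumulative distribution function (ecdf) from randomly generated data; the sample data and the evaluation points are assumptions for demonstration only.

import numpy as np
from statsmodels.distributions.empirical_distribution import ECDF

# generate a made-up sample from a standard normal distribution
np.random.seed(42)
sample = np.random.normal(loc=0.0, scale=1.0, size=1000)

# build the empirical cumulative distribution function from the sample
ecdf = ECDF(sample)

# evaluate the cdf at a few points of interest
points = [-2.0, -1.0, 0.0, 1.0, 2.0]
print([round(float(ecdf(p)), 3) for p in points])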
15,867 | inear regression linear regression is the simplest form of statistical modeling for modeling the relationship between response dependent variable and one or more independent variables such that the response variable typically follows normal distribution the statsmodels regression module allows us to learn linear models on data with iid independently and identically distributed errors this module allows us to use different methods like ordinary least squares (ols)weighted least squares (wls)generalized least squares (gls)and so onfor the estimation of the linear model parameters eneralized linear models normal linear regression can be generalized if the dependent variable follows different distribution than the normal distribution the statsmodels genmod module allows us to extend the normal linear models to different response variables this allows us to predict the linear relationship between the independent and dependent variable when the dependent variable follows distributions other than normal distributions anova analysis of variance is process of statistical processes used to analyze the difference between group means and associated procedures anova analysis is an important way to test whether the means of several groups are equal or unequal this is an extremely powerful tool in hypothesis testing and statistical inference and is implemented in the anova_lm module of the statsmodel package time series analysis time series analysis is an important part of data analytics lot of data sources like stock pricesrainfallpopulation statisticsetc are periodic in nature time series analysis is used find structurestrendsand patterns in these streams of data these trends can be used to understand the underlying phenomena using mathematical model and even make predictions and forecasts about future events basic time series models include univariate autoregressive models (ar)vector autoregressive models (var)univariate autoregressive moving average models (arma)as well as the very popular autoregressive integrated moving average (arimamodel the tsa module of the statsmodels package provides implementation of time series models and also provides tools for time series data manipulation statistical inference an important part of traditional statistical inference is the process of hypothesis testing statistical hypothesis is an assumption about population parameter hypothesis testing is the formal process of accepting or rejecting the assumption made about the data on the basis of observational data collected from samples taken from the population the stats stattools module of statsmodels package implements the most important of the hypothesis tests some of these tests are independent of any modelwhile some are tied to particular model only nonparametric methods nonparametric statistics refers to statistics that is not based on any parameterized family of probability distributions when we make an assumption about the distribution of random variable we assign the number of parameters required to ascertain its behavior for exampleif we say that some metric of interest follows normal distribution it means that we can understand its behavior if we are able to determine |
15,868 | the mean and variance of that metric this is the key difference in non-parametric methodsi we don' have fixed number of parameters that are required to describe an unknown random variable instead the number of parameters are dependent on the amount of training data the module nonparametric in the statsmodels library will help us perform non-parametric analysis on our data it includes kernel density estimation for univariate and multivariate datakernel regressionand locally weighted scatterplot smoothing summary this introduced select group of packages that we will use routinely to processanalyzeand model our data you can consider these libraries and frameworks as the core tools of data scientist' toolbox the list of packages we covered is far from exhaustive but they certainly are the most important packages we strongly suggest you get more familiar with the packages by going through their documentation and relevant tutorials we will keep introducing and explaining other important features and aspects of these frameworks in future the examples in this along with the conceptual knowledge provided in the first should give you good grasp toward understanding machine learning and solving problems in simple and concise way we will observein the subsequent that often the process of learning models on our data is reiteration of these simple steps and concepts in the next you learn how to wield the set of tools to solve bigger and complex problems in the areas of data processingwranglingand visualization |
15,869 | the machine learning pipeline |
15,870 | processing, wrangling, and visualizing data
the world around us has changed tremendously since computers and the internet became mainstream. with ubiquitous mobile phones and now internet-enabled devices, the line between the digital and physical worlds is more blurred than it ever was. at the heart of all this is data. data is at the center of everything around us, be it finance, supply chains, medical science, space exploration, communication, and what not. it is not surprising that a large share of the world's data has been generated in just the last few years, and this is just the beginning. rightly, data is being termed as the oil of the 21st century.
the last couple of chapters introduced the concepts of machine learning and the python ecosystem to get started. this chapter introduces the core entity upon which the machine learning world relies to show its magic and wonders. everything digital has data at its core in some form or the other. data is generated at various rates by numerous sources across the globe in numerous formats. before we dive into the specifics of machine learning, we will spend some time and effort understanding this central entity called data. it is important that we understand various aspects of it and get equipped with different techniques to handle it based on requirements.
in this chapter we will cover the journey data takes through a typical machine learning related use case, where it goes from its initial raw form to a form where it can be used by machine learning algorithms/models to work upon. we cover various data formats, processing, and wrangling techniques to get the data into a form where it can be utilized by machine learning algorithms for analysis. we also learn about different visualization techniques to better understand the data at hand. together these techniques will help us be prepared for the problems to be solved in the coming chapters as well as in real-world scenarios.
an earlier chapter introduced the crisp-dm methodology; it is one of the standard workflows followed by data science teams across the world. in the coming sections of this chapter we will concentrate on the following sub-sections of this methodology:
data collection: to understand different data retrieval mechanisms for different data types
data description: to understand various attributes and properties of the data collected
data wrangling: to prepare data for consumption in the modeling steps
data visualization: to visualize different attributes for sharing results, better understanding, and so on
the code samples, jupyter notebooks, and sample datasets for this chapter are available in the github repository for this book, under the directory/folder for this chapter.
15,871 | data collection data collection is where it all begins though listed as step that comes post business understanding and problem definitiondata collection often happens in parallel this is done in order to assist in augmenting the business understanding process with facts like availabilitypotential valueand so on before complete use case can be formed and worked upon of coursedata collection takes formal and better form once the problem statement is defined and the project gets underway data is at the center of everything around uswhich is tremendous opportunity yet this also presents the fact that it must be present in different formatsshapesand sizes its omnipresence also means that it exists in systems such as legacy machines (say mainframes)web (say web sites and web applications)databasesflat filessensorsmobile devicesand so on let' look at some of the most commonly occurring data formats and ways of collecting such data csv csv data file is one of the most widely available formats of data it is also one of the oldest formats still used and preferred by different systems across domains comma separated values (csvare data files that contain data with each of its attributes delimited by ",( commafigure - depicts quick snapshot of how typical csv file looks the sample csv shows how data is typically arranged it contains attributes of different data types separated/delimited by comma csv may contain an optional header row (as shown in the examplecsvs may also optionally enclose each of the attributes in single or double quotes to better demarcate though usually csvs are used to store tabular datai data in the form of rows and columnsthis is not the only way figure - sample csv file csvs come in different variations and just changing the delimiter to tab makes one tsv (or tab separated valuesfile the basic ideology here is to use unique symbol to delimit/separate different attributes now that we know how csv lookslet' employ some python magic to read/extract this data for use one of the advantages of using language like python is its ability to abstract and handle whole lot of stuff unlike other languages where specific libraries or lot of code is required to get basic stuff donepython handles it with elan along the same lines is reading csv file the simplest way to read csv is through the python csv module this module provides an abstraction function called the reader( |
15,872 | the reader function takes file object as input to return an iterator containing the information read from the csv file the following code snippet uses the csv reader(function to read given file csv_reader csv reader(open(file_name'rb')delimiter=','once the iterator is returnedwe can easily iterate through the contents and get the data in the form/format required for the sake of completeness let' go through an example where we read the contents of the csv shown in figure - using the csv module we will then extract each of its attributes and convert the data into dict with keys representing them the following snippet forms the actions csv_rows list(csv_attr_dict dict(csv_reader none read csv csv_reader csv reader(open(file_name'rb')delimiter=delimiteriterate and extract data for row in csv_readerprint(rowcsv_rows append(rowiterate and add data to attribute lists for row in csv_rows[ :]csv_attr_dict['sno'append(row[ ]csv_attr_dict['fruit'append(row[ ]csv_attr_dict['color'append(row[ ]csv_attr_dict['price'append(row[ ]the output is dict containing each attribute as key with values and as an ordered list of values read from the csv file csv attributes:{'color'['red''yellow''yellow''orange''green''yellow''green']'fruit'['apple''banana''mango''orange''kiwi''pineapple''guava']'price'[' '' '' '' '' '' '' ']'sno'[' '' '' '' '' '' '' ']the extraction of data from csv and its transformation depends on the use case requirements the conversion of our sample csv into dict of attributes is one way we may choose different output format depending on the data and our requirements though the workflow to handle and read csv file is pretty straightforward and easy to usewe would like to standardize and speed up our process alsomore often than notit is easier to understand data in tabular format we were introduced to the pandas library in the previous with some amazing capabilities let' now utilize pandas to read csv as well the following snippet shows how pandas makes reading and extracting data from csv that' simpler and consistent as compared to the csv module df pd read_csv(file_name,sep=delimiter |
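because the punctuation in the two snippets above was lost during extraction, here is a minimal runnable sketch of the same two approaches; the file name is a placeholder, and the column order (sno, fruit, color, price) follows the sample csv shown earlier.

import csv
import pandas as pd

file_name = 'sample_fruits.csv'  # assumption: path to the sample csv shown earlier
delimiter = ','

# approach 1: the csv module returns an iterator of rows
csv_rows = []
with open(file_name, 'r') as f:
    csv_reader = csv.reader(f, delimiter=delimiter)
    for row in csv_reader:
        csv_rows.append(row)

# build a dict of attribute lists, skipping the header row;
# the index positions follow the column order of the sample csv
csv_attr_dict = {'sno': [], 'fruit': [], 'color': [], 'price': []}
for row in csv_rows[1:]:
    csv_attr_dict['sno'].append(row[0])
    csv_attr_dict['fruit'].append(row[1])
    csv_attr_dict['color'].append(row[2])
    csv_attr_dict['price'].append(row[3])

# approach 2: pandas does the same in a single call
df = pd.read_csv(file_name, sep=delimiter)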
15,873 | with single line and few optional parameters (as per requirements)pandas extracts data from csv file into dataframewhich is tabular representation of the same data one of the major advantages of using pandas is the fact that it can handle lot of different variations in csv filessuch as files with or without headersattribute values enclosed in quotesinferring data typesand many more alsothe fact that various machine learning libraries have the capability to directly work on pandas dataframesmakes it virtually de facto standard package to handle csv files the previous snippet generates the following output dataframesno fruit color price apple red banana yellow mango yellow orange orange kiwi green pineapple yellow guava green ##note pandas makes the process of reading csv files breezeyet the csv module comes in handy when we need more flexibility for examplenot every use case requires data in tabular form or the data might not be consistently formatted and requires flexible library like csv to enable custom logic to handle such data along the same linesdata from flat files containing delimiters other than ',(commalike tabs or semicolons can be easily handled with these two modules we will use these utilities while working on specific use cases in further until thenyou are encouraged to explore and play around with these for better understanding json java script object notation (jsonis one of the most widely used data interchange formats across the digital realm json is lightweight alternative to legacy formats like xml (we shall discuss this format nextjson is text format that is language independent with certain defined conventions json is human-readable format that is easy/simple to parse in most programming/scripting languages json file/object is simply collection of name(key)-value pairs such key-value pair structures have corresponding data structures available in programming languages in the form of dictionaries (python dict)structobjectrecordkeyed listsand so on more details are available at the json standard defines the structureas depicted in figure - figure - json object structure (reference |
15,874 | figure - is sample json depicting record of glossary with various attributes of different data types figure - sample json (referencejsons are widely to send information across systems the python equivalent of json object is the dict data typewhich itself is key-value pair structure python has various json related libraries that provide abstractions and utility functions the json library is one such option that allows us to handle json files/objects let' first take look at our sample json file and then use this library to bring this data into python for use figure - sample json with nested attributes |
15,875 | the json object in figure - depicts fairly nested structure that contains values of stringnumericand array type json also supports objectsbooleansand other data types as values as well the following snippet reads the contents of the file and then utilizes json loads(utility to parse and convert it into standard python dict json_filedata open(file_nameread(json_data json loads(json_filedatajson_data is python dict with keys and values of the json file parsed and type casted as python data types the json library also provides utilities to write back python dictionaries as json files with capabilities of error checking and typecasting the output of the previous operation is as follows outer_col_ nested_inner_col_ val_ nested_inner_col_ nested_inner_col_ val_ nested_inner_col_ outer_col_ inner_col_ outer_col_ before we move on to our next formatit is worth noting that pandas also provides utilities to parse jsons the pandas read_json(is very powerful utility that provides multiple options to handle jsons created in different styles figure - depicts sample json representing multiple data pointseach with two attributes listed as col_ and col_ |
15,876 | figure - sample json depicting records with similar attributes we can easily parse such json using pandas by setting the orientation parameter to "records"as shown here df pd read_json(file_name,orient="records"the output is tabular dataframe with each data point represented by two attribute values as follows col_ col_ |
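a minimal hedged sketch of the records-oriented parsing just shown follows; the records and their values below are made up, since the figure's actual contents are not reproduced here.

import pandas as pd
from io import StringIO

# made-up records-style json: a list of objects sharing the same attributes
records_json = StringIO("""
[{"col_1": 10, "col_2": "a"},
 {"col_1": 20, "col_2": "b"},
 {"col_1": 30, "col_2": "c"}]
""")

# orient="records" tells pandas that each list element is one row
df = pd.read_json(records_json, orient="records")
print(df)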
15,877 | you are encouraged to read more about pandas read_json(at xml having covered two of the most widely used data formatsso now let' take look at xml xmls are quite dated format yet is used by lot many systems xml or extensible markup language is markup language that defines rules for encoding data/documents to be shared across the internet like jsonxml is also text format that is human readable its design goals involved strong support for various human languages (via unicode)platform independenceand simplicity xmls are widely used for representing data of varied shapes and sizes xmls are widely used as configuration formats by different systemsmetadataand data representation format for services like rsssoapand many more xml is language with syntactic rules and schemas defined and refined over the years the most import components of an xml are as followstaga markup construct denoted by strings enclosed with angled braces (""contentany data not marked within the tag syntax is the content of the xml file/object elementa logical construct of an xml an element may be defined with start and an end tag with or without attributesor it may be simply an empty tag attributekey-value pairs that represent the properties or attributes of the element in consideration these are enclosed within start or an empty tag figure - is sample xml depicting various components of the extensible markup language more details on key concepts and details can be browsed at figure - sample xml annotated with key components xmls can be viewed as tree structuresstarting with one root element that branches off into various elementseach with their own attributes and further branchesthe content being at leaf nodes |
15,878 | most xml parsers use this tree-like structure to read xml content the following are the two major types of xml parsersdom parserthe document object model parser is the closest form of tree representation of an xml it parses the xml and generates the tree structure one big disadvantage with dom parsers is their instability with huge xml files sax parserthe simple api for xml (or sax for shortis variant widely used on the web this is an event-based parser that parses an xml element by element and provides hooks to trigger events based on tags this overcomes the memory-based restrictions of dom but lacks overall representation power there are multiple variants available that derive from these two types to begin withlet' take look at the elementtree parser available from python' xml library the elementtree parser is an optimization over the dom parser and it utilizes python data structures like lists and dicts to handle data in concise manner the following snippet uses the elementtree parser to load and parse the sample xml file we saw previously the parse(function returns tree objectwhich has various attributesiteratorsand utilities to extract root and further components of the parsed xml tree et parse(file_nameroot tree getroot(print("root tag:{ }format(root tag)print("attributes of root:{ }format(root attrib)the two print statements provide us with values related to the root tag and its attributes (if there are anythe root object also has an iterator attached to it which can be used to extract information related to all child nodes the following snippet iterates the root object to print the contents of child nodes for child in xml:rootprint("{ }tag:{ }attribute:{ }format"\ "*indent_levelchild tagchild attrib)print("{ }tag data:{ }format("\ "*indent_levelchild text)the final output generated by parsing the xml using elementtree is as follows we used custom print utility to make the output more readablethe code for which is available on the repository root tag:records attributes of root:{'attr''sample xml records'tag:recordattribute:{'name''rec_ 'tag datatag:sub_elementattribute:{tag datatag:detail attribute:{tag data:attribute tag:detail attribute:{tag data: |
15,879 | tag:sub_element_with_attrattribute:{'attr''complex'tag datasub_element_text tag:sub_element_only_attrattribute:{'attr_val''only_attr'tag data:none tag:recordattribute:{'name''rec_ 'tag datatag:sub_elementattribute:{tag datatag:detail attribute:{tag data:attribute tag:detail attribute:{tag data: tag:sub_element_with_attrattribute:{'attr''complex'tag datasub_element_text tag:sub_element_only_attrattribute:{'attr_val''only_attr'tag data:none the xml library provides very useful utilities exposed through the elementtree parseryet it lacks lot of fire power another python libraryxmltodictprovides similar capabilities but uses python' native data structures like dicts to provide more pythonic way to handle xmls the following is quick snippet to parse the same xml unlike elementtreethe parse(function of xmltodict reads file object and converts the contents into nested dictionaries xml_filedata open(file_nameread(ordered_dict xmltodict parse(xml_filedatathe output generated is similar to the one generated using elementtree with the exception that xmltodict uses the symbol to mark elements and attributes automatically the following is the sample output records @attr sample xml records record @name rec_ sub_element detail attribute detail sub_element_with_attr @attr complex #text sub_element_text sub_element_only_attr @attr_val only_attr |
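since the punctuation in the elementtree and xmltodict snippets above was stripped during extraction, here is a compact runnable sketch of both approaches on a small made-up xml document; the element and attribute names below loosely mirror the sample output but are assumptions, not the book's actual sample file.

import xml.etree.ElementTree as ET
import xmltodict

# a small made-up xml document with elements, attributes, and content
xml_data = """
<records attr="sample xml records">
    <record name="rec_1">
        <detail>attribute 1</detail>
        <sub_element_with_attr attr="complex">sub_element_text</sub_element_with_attr>
    </record>
</records>
"""

# approach 1: elementtree parses the xml into a tree of elements
root = ET.fromstring(xml_data)
print("root tag:", root.tag, "attributes:", root.attrib)
for child in root:
    print("tag:", child.tag, "attribute:", child.attrib)

# approach 2: xmltodict converts the same document into nested dictionaries,
# marking attributes with '@' and text content with '#text'
ordered_dict = xmltodict.parse(xml_data)
print(ordered_dict['records']['record']['@name'])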
15,880 | html and scraping we began the talking about the immense amount of information/data being generated at breakneck speeds the internet or the web is one of the driving forces for this revolution coupled with immense reach due to computerssmartphones and tablets the internet is huge interconnected web of information connected through hyperlinks large amount of data on the internet is in the form of web pages these web pages are generatedupdatedand consumed millions of times day in and day out with information residing in these web pagesit is imperative that we must learn how to interact and extract this information/data as well so far we have dealt with formats like csvjsonand xmlwhich can be made available/extracted through various methods like manual downloadsapisand so on with web pagesthe methods change in this section we will discuss the html format (the most common form of web page related formatand web-scraping techniques tml the hyper text markup language (htmlis markup language similar to xml html is mainly used by web browsers and similar applications to render web pages for consumption html defines rules and structure to describe web pages using markup the following are standard components of an html pageelementlogical constructs that form the basic building blocks of an html page tagsa markup construct defined by angled braces (some of the important tags arethis pair of tags contains the whole of html document it marks the start and end of the html page this pair of tags contains the main content of the html page rendered by the browser there are many more standard set of tags defined in the html standardfurther information is available at the following is snippet to generate an html page that' rendered by web browseras shown in the screenshot in figure - sample html page sample webpage html has been rendered |
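the markup of the snippet above was stripped during extraction, leaving only its text content; a plausible reconstruction that produces the same rendered text is given below (the exact tags used in the original are an assumption).

<!DOCTYPE html>
<html>
    <head>
        <title>Sample HTML Page</title>
    </head>
    <body>
        <!-- content below is what the browser actually renders -->
        <h1>Sample WebPage</h1>
        <p>HTML has been rendered</p>
    </body>
</html>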
15,881 | figure - sample html page as rendered in browser browsers use markup tags to understand special instructions like text formattingpositioninghyperlinksand so on but only renders the content for the end user to see for use cases where datainformation resides in html pageswe need special techniques to extract this content web scraping web scraping is technique to scrape or extract data from the webparticularly from web pages web scraping may involve manually copying the data or using automation to crawlparseand extract information from web pages in most contextsweb scraping refers to automatically crawling particular web site or portion of the web to extract and parse information that can be later on used for analytics or other use cases typical web scraping flow can be summarized as followscrawla bot or web crawler is designed to query web server using the required set of urls to fetch the web pages crawler may employ sophisticated techniques to fetch information from pages linked from the urls in question and even parse information to certain extent web sites maintain file called robots txt to employ what is called as the "robots exclusion protocolto restrict/provide access to their content more details are available at scrapeonce the raw web page has been fetchedthe next task is to extract information from it the task of scraping involves utilizing techniques like regular expressionsextraction based on xpathor specific tags and so on to narrow down to the required information on the page web scraping involves creativity from the point of view of narrowing down to the exact piece of information required with web sites changing constantly and web pages becoming dynamic (see aspjspetc )presence of access controls (username/passwordcaptchaand so oncomplicate the task even more python is very powerful programming languagewhich should be evident by nowand scraping the web is another task for which it provides multiple utilities let' begin with extracting blog post' text from the apress blog to better understand web scraping the first task is to identify the url we are interested in for our current examplewe concentrate on the first blog post of the day on apress web site' blog page at clicking on the top most blog post takes us to the main article in consideration the article is shown in the screen in figure - |
15,882 | figure - blog post on apress com now that we have the required page and its urlwe will use the requests library to query the required url and get response the following snippet does the same base_url "blog_suffix "/wannacry-how-to-prepare/ response requests get(base_url+blog_suffixif the get request is successfulthe response object' status_code attribute contains value of (equivalent to html success codeupon getting successful responsethe next task is to devise method to extract the required information since in this case we are interested in the blog post' actual contentlet' analyze the html behind the page and see if we can find specific tags of interest ##note most modern browsers come with html inspection tools built-in if you are using google chromepress or right-click on the page and select inspect or view source this opens the html code for you to analyze |
15,883 | figure - depicts snapshot of the html behind the blog post we are interested in figure - inspecting the html content of blog post on apress com upon careful inspectionwe can clearly see text of the blog post is contained within the div tag now that we have narrowed down to the tag of interestwe use python' regular expression library re to search and extract data contained within these tags only the following snippet utilizes re compile(to compile regular expression and then uses re findall(to extract the information from the fetched response content_pattern re compile( '*?)'result re findall(content_patterncontentthe output of the find operation is the required text from the blog post of courseit still contains html tags interlaced between the actual text we can perform further clean up to reach the required levelsyet this is good start the following is snapshot of information extracted using regular expressions out[ ]'by mike halseyit was perfectly ordinary friday when the wannacry ransomware struck in may the malware spread around the world to more than countries in just matter of few hoursaffecting the national health service in the uktelecoms provider telefonica in spainand many other organisations and businesses in the usacanadachinajapanrussiaand right across europethe middle-eastand asia the malware was reported to have been stolen in an attack on the us national security agency (nsa)though the nsa denied thisand exploited vulnerabilities in the microsoft windows operating system microsoft had been aware of the vulnerabilities since early in the yearand had patched them back in march |
15,884 | this was straightforward and very basic approach to get the required data what if we want to go step further and extract information related to all blog posts on the page and perform better cleanupfor such taskwe utilize the beautifulsoup library beautifulsoup is the go-to standard library for web scraping and related tasks it provides some amazing functionality to ease out the scraping process for the task at handour process would be to first crawl the index page and extract the urls to all the blog post links listed on the page for this we would use the requests get(function to extract the content and then utilize beautifulsoup' utilities to get the content from the urls the following snippet showcases the function get_post_mapping()which parses the home page content to extract the blog post headings and corresponding urls into dictionary the function finally returns list of such dictionaries def get_post_mapping(content)"""this function extracts blog post title and url from response object argscontent (request content)string content returned from requests get returnslista list of dictionaries with keys title and url ""post_detail_list [post_soup beautifulsoup(content,"lxml" _content post_soup find_all(" "for in _contentpost_detail_list append{'title': get_text(),'url': attrs get('href')return post_detail_list the pervious function first creates an object of beautifulsoup specifying lxml as its parser it then uses the tag and regex based search to extract the required list of tags (we got to the tag by the same inspect element approach we utilized previouslythe next task was to simply iterate through the list of the tags and utilize the get_text(utility function from beautifulsoup to get the blog post heading and its corresponding url the list returned from the function is as follows [{'title' "wannacrywhy it' only the beginningand how to prepare for what comes next"'url''/in/blog/all-blog-posts/wannacry-how-to-prepare/ '}{'title' 'reusing ngrx/effects in angular (communicating between reducers)''url''/in/blog/all-blog-posts/reusing-ngrx-effects-in-angular/ '}{'title' 'interview with tony smith author and sharepoint expert''url''/in/blog/all-blog-posts/interview-with-tony-smith-author-and-sharepoint-expert/ '}{'title' 'making sense of sensors \ types and levels of recognition''url''/in/blog/all-blog-posts/making-sense-of-sensors/ '}{'title' 'vs net coreand javascript frameworksoh my!''url''/in/blog/all-blog-posts/vs- -net-core-and-javascript-frameworks-oh-my/ '} |
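because the tag names and regular expression patterns in the scraping snippets above were partially lost during extraction, the following is a generic, hedged sketch of the same crawl-and-scrape flow; the url and the div class used here are placeholders, not the actual apress page structure (the class actually used for the apress posts appears in the function that follows).

import requests
from bs4 import BeautifulSoup

# placeholder url; substitute the blog post url identified earlier
url = 'https://www.example.com/some-blog-post/'

response = requests.get(url)
if response.status_code == 200:
    # parse the fetched html with beautifulsoup
    soup = BeautifulSoup(response.content, 'lxml')

    # narrow down to the tag(s) of interest identified by inspecting the page;
    # the class name here is a placeholder assumption
    post_divs = soup.find_all('div', {'class': 'post-content'})

    # get_text() strips the html tags and returns plain text
    plain_text = ''
    for div in post_divs:
        plain_text += div.get_text()
    print(plain_text[:500])
else:
    print('request failed with status code:', response.status_code)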
15,885 | now that we have the listthe final step it to iterate through this list of urls and extract each blog post' text the following function showcases how beautifulsoup simplifies the task as compared to our previous method of using regular expressions the method of identifying the required tag remains the samethough we utilize the power of this library to get text that is free from all html tags def get_post_content(content)"""this function extracts blog post content from response object argscontent (request content)string content returned from requests get returnsstrblog' content in plain text ""plain_text "text_soup beautifulsoup(content,"lxml"para_list text_soup find_all("div"{'class':'cms-richtext'}for in para_list[ ]plain_text + gettext(return plain_text the following output is the content from one of the posts pay attention to the cleaner text in this case as compared to our previous approach by mike halseyit was perfectly ordinary friday when the wannacry ransomware struck in may the malware spread around the world to more than countries in just matter of few hoursaffecting the national health service in the uktelecoms provider telefonica in spainand many other organisations and businesses in the usacanadachinajapanrussiaand right across europethe middle-eastand asia the malware was reported to have been stolen in an attack on the us national security agency (nsa)though the nsa denied thisand exploited vulnerabilities in the microsoft windows operating system microsoft had been aware of the vulnerabilities since early in the yearand had patched them back in march through these two methodswe crawled and extracted information related to blog posts from our web site of interest you are encouraged to experiment with other utilities from beautifulsoup along with other web sites for better understanding of coursedo read the robots txt and honor the rules set by the webmaster sql databases date back to the and represent large volume of data stored in relational form data available in the form of tables in databasesor to be more specificrelational databasescomprise of another format of structured data that we encounter when working on different use cases over the yearsthere have been various flavors of databases availablemost of them conforming to the sql standard the python ecosystem handles data from databases in two major ways the first and the most common way used while working on data science and related use cases is to access data using sql queries directly to access data using sql queriespowerful libraries like sqlalchemy and pyodbc provide convenient interfaces to connectextractand manipulate data from variety of relational databases like ms sql servermysql |
15,886 | oracleand so on the sqlite library provides lightweight easy-to-use interface to work with sqlite databasesthough the same can be handled by the other two libraries as well the second way of interacting with databases is the orm or the object relational mapper method this method is synonymous to the object oriented model of datai relational data is mapped in terms of objects and classes sqlalchemy provides high-level interface to interact with databases in the orm fashion we will explore more on these based on the use cases in the subsequent data description in the previous sectionwe discussed various data formats and ways of extracting information from them each of the data formats comprised of data points with attributes of diverse types these data types in their raw data forms form the basis of input features utilized by machine learning algorithms and other tasks in the overall data science workflow in this sectionwe touch upon major data types we deal with while working on different use cases numeric this is simplest of the data types available it is also the type that is directly usable and understood by most algorithms (though this does not imply that we use numeric data in its raw formnumeric data represents scalar information about entities being observedfor instancenumber of visits to web siteprice of productweight of personand so on numeric values also form the basis of vector featureswhere each dimension is represented by scalar value the scalerangeand distribution of numeric data has an implicit effect on the algorithm and/or the overall workflow for handling numeric datawe use techniques such as normalizationbinningquantizationand many more to transform numeric data as per our requirements text data comprising of unstructuredalphanumeric content is one of most common data types textual data when representing human language content contains implicit grammatical structure and meaning this type of data requires additional care and effort for transformation and understanding we cover aspects of transforming and using textual data in the coming categorical this data type stands in between the numeric and text categorical variables refer to categories of entities being observed for instancehair color being blackbrownblonde and red or economic status as lowmediumor high the values may be represented as numeric or alphanumericwhich describe properties of items in consideration based on certain characteristicscategorical variables can be seen asnominalthese define only the category of the data point without any ordering possible for instancehair color can be blackbrownblondeetc but there cannot be any order to these categories ordinalthese define category but can also be ordered based on rules on the context for examplepeople categorized by economic status of lowmediumor high can be clearly ordered/sorted in the respective order |
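as a quick, hedged illustration of nominal versus ordinal attributes in pandas (the hair color and economic status values below are the made-up examples from the text), one common approach is to mark such columns as categorical and, for ordinal ones, supply an explicit ordering:

import pandas as pd

df = pd.DataFrame({
    'hair_color': ['black', 'brown', 'blonde', 'red', 'brown'],    # nominal
    'economic_status': ['low', 'high', 'medium', 'low', 'medium']  # ordinal
})

# nominal: categories without any implied order
df['hair_color'] = df['hair_color'].astype('category')

# ordinal: categories with a meaningful order (low < medium < high)
df['economic_status'] = pd.Categorical(df['economic_status'],
                                       categories=['low', 'medium', 'high'],
                                       ordered=True)

# one-hot encoding is a common way to feed nominal attributes to ml algorithms
print(pd.get_dummies(df['hair_color']).head())
print(df['economic_status'].sort_values())

this is only meant to ground the definitions above; ways of handling categorical data are discussed in more depth later.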
15,887 | it is important to note that standard mathematical operations likeadditionsubtractionmultiplicationetc do not carry meaning for categorical variables even though that may be allowed syntactically (categorical variables represented as numbersthus is it important to handle categorical variables with care and we will see couple of ways of handling categorical data in the coming section different data types form the basis of features that are ingested by algorithms for analysis of data at hand in the coming sections and especially feature engineering and selectionyou will learn more on how to work with specific data types data wrangling so far in this we discussed data formats and data types and learned about ways of collecting data from different sources now that we have an understanding of the initial process of collecting and understanding datathe next logical step is to be able to use it for analysis using various machine learning algorithms based upon the use case at hand but before we reach the stage where this "rawdata is anywhere close to be useable for the algorithms or visualizationswe need to polish and shape it up data wrangling or data munging is the process of cleaningtransformingand mapping data from one form to another to utilize it for tasks such as analyticssummarizationreportingvisualizationand so on understanding data data wrangling is one of most important and involving steps in the whole data science workflow the output of this process directly impacts all downstream steps such as explorationsummarizationvisualizationanalysis and even the final result this clearly shows why data scientists spend lot of time in data collection and wrangling there are lot many surveys which help in bringing this fact out that more than oftendata scientists end up spending of their time in data processing and wranglingso before we get started with actual use cases and algorithms in the coming it is imperative that we understand and learn how to wrangle our data and transform it into useable form to begin withlet' first describe the dataset at hand for the sake of simplicitywe prepare sample dataset describing product purchase transactions by certain users since we already discussed ways of collecting/extracting datawe will skip that step for this section figure - shows snapshot of dataset figure - sample dataset |
15,888 | ##note the dataset in consideration has been generated using standard python libraries like randomdatetimenumpypandasand so on this dataset has been generated using utility function called generate_sample_data(available in the code repository for this book the data has been randomly generated and is for representational purposes only the dataset describes transactions having the following attributes/features/propertiesdatethe date of the transaction pricethe price of the product purchased product idproduct identification number quantity purchasedthe quantity of product purchased in this transaction serial nothe transaction serial number user ididentification number for user performing the transaction user typethe type of user let' now begin our wrangling/munging process and understand various methods/tricks to cleantransformand map our dataset to bring it into useable form the first and the foremost step usually is to get quick peak into the number of records/rowsthe number of columns/attributescolumn/attribute namesand their data types for the majority of this section and subsequent oneswe will be relying on pandas and its utilities to perform the required tasks the following snippet provides the details on row countsattribute countsand details print("number of rows::",df shape[ ]print("number of columns::",df shape[ print("column names::",df columns values tolist()print("column data types::\ ",df dtypesthe required information is available straight from the pandas dataframe itself the shape attribute is two-value tuple representing the row count and column countrespectively the column names are available through the columns attributeswhile the dtypes attribute provides us with the data type of each of the columns in the dataset the following is the output generated by this snippet number of rows: number of columns: column names:['date''price''product id''quantity purchased''serial no''user id''user type'column data types:date object price float |
15,889 | product id            int64
quantity purchased    int64
serial no             int64
user id               int64
user type             object
dtype: object

the column names are clearly listed and have been explained previously. upon inspecting the data types, we can clearly see that the date attribute is represented as an object. before we move on to transformations and cleanup, let's dig in further and collect more information to understand and prepare a strategy for the required dataset wrangling tasks. the following snippet helps get information related to attributes/columns containing missing values, the count of rows with missing values, and sample indices that have missing values in them:

print("columns with missing values::", df.columns[df.isnull().any()].tolist())
print("number of rows with missing values::", len(pd.isnull(df).any(1).nonzero()[0].tolist()))
print("sample indices with missing data::", pd.isnull(df).any(1).nonzero()[0].tolist()[0:5])
# on newer pandas, Series.nonzero() has been removed; df.index[df.isnull().any(axis=1)] is an equivalent

with pandas, subscripting works with both rows and columns (covered earlier in this book). we use isnull() to identify columns containing missing values. the utilities any() and nonzero() provide nice abstractions to identify any row/column conforming to a condition (in this case pointing to rows/columns having missing values). the output is as follows:

columns with missing values:: ['date', 'price', 'user type']
number of rows with missing values:: ...
sample indices with missing data:: [...]

let's also do a quick fact check to get details on non-null rows for each column and the amount of memory consumed by this dataframe. we also get some basic summary statistics like min, max, and so on; these will be useful in coming tasks. for the first task we use the info() utility, while the summary statistics are provided by the describe() function. the following snippet does this:

print("general stats::")
print(df.info())

print("summary stats::")
print(df.describe())

the following is the output generated using the info() and describe() utilities. it shows that date and price both have fewer non-null rows than the total row count, while the dataset consumes only a modest amount of memory (on the order of kilobytes). the summary stats are self-explanatory and drop the non-numeric columns like date and user type from the output.

general stats::
rangeindex: ... entries, 0 to ...
data columns (total 7 columns):
date                  non-null object
price                 non-null float64
product id            non-null int64
quantity purchased    non-null int64
15,890 | serial no non-null int user id non-null int user type non-null object dtypesfloat ( )int ( )object( memory usage kb none summary stats:price product id quantity purchased serial no user id count mean std min - - max filtering data we have completed our first pass of the dataset at hand and understood what it has and what is missing the next stage is about cleanup cleaning dataset involves tasks such as removing/handling incorrect or missing datahandling outliersand so on cleaning also involves standardizing attribute column names to make them more readableintuitiveand conforming to certain standards for everyone involved to understand to perform this taskwe write small function and utilize the rename(utility of pandas to complete this step the rename(function takes dict with keys representing the old column names while values point to newer ones we can also decide to modify the existing dataframe or generate new one by setting the inplace flag appropriately the following snippet showcases this function def cleanup_column_names(df,rename_dict={},do_inplace=true)"""this function renames columns of pandas dataframe it converts column names to snake case if rename_dict is not passed argsrename_dict (dict)keys represent old column names and values point to newer ones do_inplace (bool)flag to update existing dataframe or return new one returnspandas dataframe if do_inplace is set to falsenone otherwise ""if not rename_dictreturn df rename(columns={colcol lower(replace(',' 'for col in df columns values tolist()}inplace=do_inplaceelsereturn df rename(columns=rename_dict,inplace=do_inplaceupon using this function on our dataframe in considerationthe output in figure - is generated since we do not pass any dict with old and new column namesthe function updates all columns to snake case |
15,891 | figure - dataset with columns renamed

for different algorithms, analyses, and even visualizations, we often require only a subset of attributes to work with. with pandas, we can vertically slice (select a subset of columns) in a variety of ways. pandas provides different ways to suit different scenarios, as we shall see in the following snippet:

print("using column index::")
print(df[[3]].values[:10])   # positional selection; on newer pandas prefer df.iloc[:, [3]]

print("using column name::")
print(df.quantity_purchased.values)

print("using column data type::")
print(df.select_dtypes(include=['float64']).values[:, 0])

in this snippet, we have performed attribute selection in three different ways. the first method utilizes the column index number to get the required information; in this case we wanted to work with only the field quantity_purchased, hence index number 3 (pandas columns are 0-indexed). the second method also extracts data for the same attribute by directly referring to the column name in dot notation. while the first method is very handy when working in loops, the second one is more readable and blends well when we are utilizing the object oriented nature of python. yet there are times when we would need to get attributes based on their data types alone. the third method makes use of the select_dtypes() utility to get this job done; it provides ways of both including and excluding columns based on data types alone. in this example we selected the column(s) with data type float64 (the price column in our dataset). the output from this snippet lists the selected values under each of the three headings (using column index, using column name, and using column data type).
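a couple of additional, commonly used selection patterns are sketched below for completeness; these are not from the original snippet and assume the snake_case column names introduced earlier:

print(df[['price', 'quantity_purchased']].head())       # several columns at once by name
print(df.loc[:, 'date':'quantity_purchased'].head())    # label-based column slicing with loc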
15,892 | selecting specific attributes/columns is one of the ways of subsetting a dataframe. there may be a requirement to horizontally split a dataframe as well. to work with a subset of rows, pandas provides the ways outlined in the following snippet:

print("select specific row indices::")
print(df.iloc[[5, 10, 15]])   # example row indices

print("excluding specific row indices::")
print(df.drop([0, 1, 2], axis=0).head())   # example indices to drop

print("subsetting based on logical condition(s)::")
print(df[df.quantity_purchased > 25].head())   # example threshold

print("subsetting based on offset from top (bottom)::")
print(df[100:].head())   # example offset; df.tail(-100) achieves the same from the bottom

the first method utilizes iloc (integer index/location) based selection; we need to specify a list of indices we need from the dataframe. the second method allows removing/filtering out specific row indices from the dataframe itself; this comes in handy in scenarios where rows not satisfying certain criteria need to be filtered out. the third method showcases conditional logic based filtering of rows. the final method filters based on an offset from the top of the dataframe; a similar method, called tail(), can be used to offset from the bottom as well. the output generated is depicted in figure -

figure - different ways of subsetting rows
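rows and columns can also be subset in a single step; the following sketch (not from the original text, with an arbitrary example threshold) combines a boolean row mask with a list of column labels using loc:

print(df.loc[df.quantity_purchased > 25, ['date', 'price', 'quantity_purchased']].head())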
15,893 | typecasting

typecasting, or converting data into appropriate data types, is an important part of cleanup and wrangling in general. often data gets converted into wrong data types while being extracted or converted from one form to another. also, different platforms and systems handle each data type differently, and thus getting the right data type is important. while starting the wrangling discussion, we checked the data types of all the columns of our dataset. if you remember, the date column was marked as an object. though it may not be an issue if we are not going to work with dates, in cases where we need date and related attributes, having them as objects/strings can pose problems. moreover, it is difficult to handle date operations if they are available as strings. to fix our dataframe, we use the to_datetime() function from pandas. this is a very flexible utility that allows us to set different attributes like date time formats, timezone, and so on. since in our case the values are just dates, we use the function as follows with defaults:

df['date'] = pd.to_datetime(df.date)
print(df.dtypes)

similarly, we can convert numeric columns marked as strings using to_numeric(), along with direct python style typecasting as well. upon checking the data types now, we clearly see the date column in the correct datetime data type:

date                  datetime64[ns]
price                 float64
product_id            int64
quantity_purchased    int64
serial_no             int64
user_id               int64
user_type             object
dtype: object

transformations

another common task in data wrangling is to transform existing columns or derive new attributes based on requirements of the use case or the data itself. to derive or transform columns, pandas provides three different utilities--apply(), applymap(), and map(). the apply() function is used to perform actions on the whole object, depending upon the axis (default is on all rows). the applymap() and map() functions work element-wise, with map() coming from the pandas series hierarchy. as an example to understand these three utilities, let's derive some new attributes. first, let's expand the user_type attribute using the map() function. we write a small function to map each of the distinct user_type codes into their corresponding user classes as follows:

def expand_user_type(u_type):
    # note: 'a', 'b', 'c', 'd' are assumed placeholder codes for the user types
    if u_type in ['a', 'b']:
        return 'new'
    elif u_type == 'c':
        return 'existing'
    elif u_type == 'd':
        return 'loyal_existing'
    else:
        return 'error'

df['user_class'] = df['user_type'].map(expand_user_type)
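a small follow-up sketch (not part of the original snippet) to verify the derived attribute, along with a commented-out dictionary-based alternative; the single-character codes remain assumed placeholders:

print(df['user_class'].value_counts(dropna=False))   # distribution of the derived classes

# the same logic can be expressed as a plain dictionary mapping
# (unmapped codes would become NaN instead of 'error'):
# user_class_map = {'a': 'new', 'b': 'new', 'c': 'existing', 'd': 'loyal_existing'}
# df['user_class'] = df['user_type'].map(user_class_map)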
15,894 | along the same lines, we use the applymap() function to perform another element-wise operation to get the week of the transaction from the date attribute. for this case, we use a lambda function to get the job done quickly (refer to a previous chapter for more details on lambda functions). the following snippet gets us the week for each of the transactions:

df['purchase_week'] = df[['date']].applymap(lambda dt: dt.week if not pd.isnull(dt.week) else 0)  # 0 used as fill for missing dates

figure - depicts our dataframe with two additional attributes--user_class and purchase_week.

figure - dataframe with derived attributes using map and applymap

let's now use the apply() function to perform an action on the whole of the dataframe object itself. the following snippet uses the apply() function to get the range (maximum value to minimum value) for all numeric attributes. we use the previously discussed select_dtypes() and a lambda function to complete the task:

df.select_dtypes(include=[np.number]).apply(lambda x: x.max() - x.min())

the output is a reduced pandas series object showcasing the range values for each of the numeric columns: price, product_id, quantity_purchased, serial_no, user_id, and purchase_week.

imputing missing values

missing values can lead to all sorts of problems when dealing with machine learning and data science related use cases. not only can they cause problems for algorithms, they can mess up calculations and even final outcomes. missing values also pose the risk of being interpreted in non-standard ways, leading to confusion and more errors. hence, imputing missing values carries a lot of weight in the overall data wrangling process. one of the easiest ways of handling missing values is to ignore or remove them altogether from the dataset. when the dataset is fairly large and we have enough samples of the various types required, this option can be safely exercised. we use the dropna() function from pandas in the following snippet to remove rows of data where the date of the transaction is missing:

print("drop rows with missing dates::")
df_dropped = df.dropna(subset=['date'])
print("shape::", df_dropped.shape)
15,895 | the result is a dataframe whose rows no longer have any missing dates. the output dataframe is depicted in figure -

figure - dataframe without any missing date information

often, dropping rows is a very expensive and unfeasible option. in many scenarios, missing values are imputed using the help of other values in the dataframe. one commonly used trick is to replace missing values with a central tendency measure like mean or median; one may also choose other, more sophisticated measures/statistics as well. in our dataset, the price column seems to have some missing data. we utilize the fillna() method from pandas to fill these values with the mean price value from our dataframe. along the same lines, we use the ffill and bfill strategies to impute missing values for the user_type attribute. since user_type is a string type attribute, we use a proximity based solution to handle missing values in this case. the ffill and bfill strategies copy forward the data from the previous row (forward fill) or copy the value from the next row (backward fill). the following snippet showcases the three approaches:

print("fill missing price values with mean price::")
df_dropped['price'].fillna(value=np.round(df.price.mean(), decimals=2), inplace=True)  # rounding to 2 decimals (assumed)

print("fill missing user_type values with value from previous row (forward fill)::")
df_dropped['user_type'].fillna(method='ffill', inplace=True)

print("fill missing user_type values with value from next row (backward fill)::")
df_dropped['user_type'].fillna(method='bfill', inplace=True)

apart from these ways, there are certain conditions where a record is not of much use if it has more than a certain threshold of attribute values missing. for instance, if in our dataset a transaction has less than three attributes as non-null, the transaction might be almost unusable. in such a scenario, it might be advisable to drop that data point itself. we can filter out such data points using the function dropna() with the parameter thresh set to the threshold of non-null attributes. more details are available on the official documentation page.
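to make the thresh idea concrete, here is a small sketch (not from the original text) that keeps only rows with at least three non-null attribute values, matching the example threshold discussed above:

df_thresholded = df.dropna(thresh=3)   # a row survives only if it has >= 3 non-null values
print("shape after threshold-based dropna::", df_thresholded.shape)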
15,896 | handling duplicates

another issue with many datasets is the presence of duplicates. while data is important (and the more the merrier), duplicates do not add much value per se; if anything, duplicates help us identify potential areas of error in recording/collecting the data itself. to identify duplicates, we have a utility called duplicated() that can be applied on the whole dataframe as well as on a subset of it. we may handle duplicates by fixing the errors identified via the duplicated() function, although we may also choose to drop the duplicate data points altogether. to drop duplicates, we use the method drop_duplicates(). the following snippet showcases both functions discussed here:

df_dropped[df_dropped.duplicated(subset=['serial_no'])]
df_dropped.drop_duplicates(subset=['serial_no'], inplace=True)

the output of identifying the subset of the dataframe having duplicate values for the field serial_no is depicted in figure - ; the second line in the previous snippet simply drops those duplicates.

figure - dataframe with duplicate serial_no values

handling categorical data

as discussed in the section "data description," categorical attributes consist of data that can take a limited number of values (though not always). here in our dataset, the attribute user_type is a categorical variable that can take only a limited number of values from a small allowed set of codes. the algorithms that we will be learning and utilizing in the coming chapters mostly work with numerical data, and categorical variables may pose some issues. with pandas, we can handle categorical variables in a couple of different ways. the first one is using the map() function, where we simply map each value from the allowed set to a numeric value. though this may be useful, this approach should be handled with care and caveats; for instance, statistical operations like addition, mean, and so on, though syntactically valid, should be avoided for obvious reasons (more on this in the coming chapters). the second method is to convert the categorical variable into indicator variables using the get_dummies() function. the function is simply a wrapper to generate one hot encoding for the variable in consideration. one hot encoding and other encodings can be handled using libraries like sklearn as well (we will see more examples in the coming chapters). the following snippet showcases both of the methods discussed previously:

# using map to dummy encode
# note: the single-character keys below are assumed placeholder codes for user_type
type_map = {'a': 0, 'b': 1, 'c': 2, 'd': 3, np.nan: -1}
df['encoded_user_type'] = df.user_type.map(type_map)
print(df.head())

# using get_dummies to one hot encode
print(pd.get_dummies(df, columns=['user_type']).head())
15,897 | the output is generated as depicted in figure - and figure - . the first figure shows the output of dummy encoding with the map() approach; we keep the number of features in check, yet have to be careful about the caveats mentioned in this section.

figure - dataframe with user_type attribute dummy encoded

the second figure showcases the output of one hot encoding the user_type attribute. we discuss these approaches in more detail when we cover feature engineering.

figure - dataframe with user_type attribute one hot encoded

normalizing values

attribute normalization is the process of standardizing the range of values of attributes. machine learning algorithms in many cases utilize distance metrics, and attributes or features of different scales/ranges might adversely affect the calculations or bias the outcomes. normalization is also called feature scaling. there are various ways of scaling/normalizing features; some of them are rescaling, standardization (or zero-mean unit variance), unit scaling, and many more. we may choose a normalization technique based upon the feature, algorithm, and use case at hand. this will become clearer when we work on use cases. we also cover feature scaling strategies in detail in feature engineering and selection. the following snippet showcases a quick example of using the min-max scaler, available from the preprocessing module of sklearn, which rescales attributes to a desired given range:

df_normalized = df.dropna().copy()
min_max_scaler = preprocessing.MinMaxScaler()
np_scaled = min_max_scaler.fit_transform(df_normalized['price'].values.reshape(-1, 1))
df_normalized['normalized_price'] = np_scaled.reshape(-1)

figure - showcases the unscaled price values and the normalized price values, which have been scaled to a range of [0, 1].

figure - original and normalized values for price
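along the same lines, here is a quick sketch (not from the original text) of the zero-mean, unit-variance standardization mentioned above, using sklearn's StandardScaler; the derived column name is chosen arbitrarily:

from sklearn import preprocessing

std_scaler = preprocessing.StandardScaler()
df_normalized['standardized_price'] = std_scaler.fit_transform(
    df_normalized['price'].values.reshape(-1, 1)).reshape(-1)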
15,898 | string manipulations

raw data presents all sorts of issues and complexities before it can be used for analysis. strings are another class of raw data that needs special attention and treatment before our algorithms can make sense out of them. as mentioned while discussing wrangling methods for categorical data, there are limitations and issues with directly using string data in algorithms. string data representing natural language is highly noisy and requires its own set of steps for wrangling. though most of these steps are use case dependent, it is worth mentioning them here (we will cover these in detail along with use cases for better clarity). string data usually undergoes wrangling steps such as:

tokenization: splitting of string data into constituent units, for example splitting sentences into words or words into characters.
stemming and lemmatization: these are normalization methods to bring words into their root or canonical forms. while stemming is a heuristic process to achieve the root form, lemmatization utilizes rules of grammar and vocabulary to derive the root.
stopword removal: text contains words that occur at high frequency yet do not convey much information (punctuation, conjunctions, and so on). these words/phrases are usually removed to reduce dimensionality and noise from the data.

apart from the three common steps mentioned previously, there are other manipulations like pos tagging, hashing, indexing, and so on. each of these is required and tuned based on the data and problem statement at hand. stay tuned for more details on these in the coming chapters.

data summarization

data summarization refers to the process of preparing a compact representation of the raw data at hand. this process involves aggregation of data using different statistical, mathematical, and other methods. summarization is helpful for visualization, compressing raw data, and better understanding of its attributes. the pandas library provides various powerful summarization techniques to suit different requirements; we will cover a couple of them here. the most widely used form of summarization is to group values based on certain conditions or attributes. the following snippet illustrates one such summarization:

print(df['price'][df['user_type'] == 'a'].mean())   # 'a' is an assumed placeholder code for one user type
print(df['purchase_week'].value_counts())
15,899 | the first statement calculates the mean price for all transactions of a given user_type, while the second one counts the number of transactions per week. though these calculations are helpful, grouping data based on attributes helps us get a better understanding of it. the groupby() function helps us perform the same, as shown in the following snippet:

print(df.groupby(['user_class'])['quantity_purchased'].sum())

this statement generates a tabular output representing the sum of quantities purchased by each user_class. the output generated is as follows:

user_class
existing          ...
loyal_existing    ...
new               ...
name: quantity_purchased, dtype: int64

the groupby() function is a powerful interface that allows us to perform complex groupings and aggregations. in the previous example we grouped on only a single attribute and performed a single aggregation (i.e., sum). with groupby(), we can perform multi-attribute groupings and apply multiple aggregations across attributes. the following snippet showcases three variants of groupby() usage and their corresponding outputs:

# variant-1: multiple aggregations on a single attribute
df.groupby(['user_class'])['quantity_purchased'].agg([np.sum, np.mean, np.count_nonzero])

# variant-2: different aggregation functions for each attribute
df.groupby(['user_class', 'user_type']).agg({'price': np.mean, 'quantity_purchased': np.max})

# variant-3: named aggregations per attribute
df.groupby(['user_class', 'user_type']).agg({'price': {'total_price': np.sum,
                                                       'mean_price': np.mean,
                                                       'variance_price': np.std,
                                                       'count': np.count_nonzero},
                                             'quantity_purchased': np.sum})

the three different variants can be explained as follows. variant-1: here we apply three different aggregations on quantity_purchased, which is grouped by user_class (see figure - ).

figure - groupby with multiple aggregations on a single attribute
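as a side note, recent pandas releases have deprecated and removed the nested-dict style used in variant-3; if you are on a newer version, the same summary can be expressed with named aggregation (a sketch under that assumption, not from the original text):

# named aggregation, available in pandas 0.25 and later
df.groupby(['user_class', 'user_type']).agg(
    total_price=('price', 'sum'),
    mean_price=('price', 'mean'),
    variance_price=('price', 'std'),
    count=('price', 'count'),
    total_quantity=('quantity_purchased', 'sum')
)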