Load CSV with Pandas
Another approach to loading a CSV data file is with Pandas and its read_csv() function. This is a very flexible function that returns a pandas DataFrame, which can be used immediately for plotting. The following examples load CSV data files with its help.

Example
Here, we will implement two Python scripts: the first uses the Iris dataset, which has headers, and the second uses the Pima Indians Diabetes dataset, a numeric dataset with no header. Both datasets can be downloaded into a local directory.

Script 1: The following Python script loads the Iris CSV data file using Pandas.

from pandas import read_csv
path = ":\iris.csv"
data = read_csv(path)
print(data.shape)
print(data.head())

Output: the shape of the dataset, followed by its first few rows (sepal_length, sepal_width, petal_length, petal_width).

Script 2: The following Python script loads the Pima Indians Diabetes CSV data file using Pandas, while also supplying the header names.

from pandas import read_csv
path = ":\pima-indians-diabetes.csv"
headernames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(path, names=headernames)
print(data.shape)
print(data.head())

Output: the shape (768, 9), followed by the first few rows with columns preg, plas, pres, skin, test, mass, pedi, age, class.

The difference between the three approaches used for loading a CSV data file can easily be understood from the examples above.
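Since read_csv() is very configurable, the following is a minimal sketch, not taken from the source, showing a few commonly used options; the file name, delimiter, column names and missing-value marker here are hypothetical.

from pandas import read_csv

# Hypothetical file: semicolon-separated, no header row, '?' marks missing values
data = read_csv(
    "data.csv",              # illustrative path
    sep=";",                 # custom field delimiter
    header=None,             # the file has no header row
    names=["a", "b", "c"],   # assign our own column names
    na_values=["?"],         # treat '?' as NaN
    usecols=["a", "c"],      # load only selected columns
)
print(data.dtypes)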
Machine Learning with Python – Understanding Data with Statistics

Introduction
While working on machine learning projects, we usually ignore two of the most important parts: mathematics and data. We know that ML is a data-driven approach, and our ML model will produce only as good or as bad results as the data we provide to it. In the previous chapter we discussed how to load CSV data into our ML project, but it is good to understand the data before using it. We can understand the data in two ways: with statistics and with visualization. In this chapter, with the help of the following Python recipes, we are going to understand ML data with statistics.

Looking at Raw Data
The very first recipe is looking at your raw data. This is important because the insight we get from the raw data improves our chances of better pre-processing and handling of the data for ML projects. The following Python script uses the head() function of the pandas DataFrame on the Pima Indians Diabetes dataset to look at the first few rows and get a better understanding of it.

Example
from pandas import read_csv
path = ":\pima-indians-diabetes.csv"
headernames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(path, names=headernames)
print(data.head())

Output: the first few rows of the dataset with columns preg, plas, pres, skin, test, mass, pedi, age, class.
We can observe from the above output that the first column gives the row number, which can be very useful for referencing a specific observation.

Checking Dimensions of Data
It is always good practice to know how much data, in terms of rows and columns, we have for our ML project. The reasons are:
- If we have too many rows and columns, it would take a long time to run the algorithm and train the model.
- If we have too few rows and columns, we would not have enough data to train the model well.

The following Python script prints the shape property of the pandas DataFrame. We will run it on the Iris dataset to get its total number of rows and columns.

Example
from pandas import read_csv
path = ":\iris.csv"
data = read_csv(path)
print(data.shape)

Output: a tuple giving the number of rows and columns of the Iris dataset (150 rows).

We can easily observe from the output how many rows and columns the Iris dataset we are going to use contains.

Getting Each Attribute's Data Type
It is another good practice to know the data type of each attribute. The reason is that, depending on the requirement, we may sometimes need to convert one data type to another, for example a string into a floating point number, or an integer to represent categorical or ordinal values. We can get an idea of an attribute's data type by looking at the raw data, but another way is to use the dtypes property of the pandas DataFrame.
With the help of the dtypes property, we can categorize each attribute's data type. It can be understood with the following Python script.

Example
from pandas import read_csv
path = ":\iris.csv"
data = read_csv(path)
print(data.dtypes)

Output
sepal_length    float64
sepal_width     float64
petal_length    float64
petal_width     float64
dtype: object

From the above output, we can easily get the data type of each attribute.

Statistical Summary of Data
We have discussed the Python recipe to get the shape (number of rows and columns) of data, but many times we need to review summaries beyond that shape. This can be done with the describe() function of the pandas DataFrame, which provides the following statistical properties of every data attribute:
- Count
- Mean
- Standard deviation
- Minimum value
- 25th, 50th (median) and 75th percentiles
- Maximum value

Example
from pandas import read_csv
from pandas import set_option
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(path, names=names)
15,306
set_option('display width' set_option('precision' print(data shapeprint(data describe()output ( preg plas pres skin test mass pedi age class count mean std min max from the above outputwe can observe the statistical summary of the data of pima indian diabetes dataset along with shape of data reviewing class distribution class distribution statistics is useful in classification problems where we need to know the balance of class values it is important to know class value distribution because if we have highly imbalanced class distribution one class is having lots more observations than other classthen it may need special handling at data preparation stage of our ml project we can easily get class distribution in python with the help of pandas dataframe example from pandas import read_csv path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namescount_class data groupby('class'size(print(count_classoutputclass
From the above output, it can be clearly seen that the number of observations with class 0 (500) is almost double the number of observations with class 1 (268).

Reviewing Correlation Between Attributes
The relationship between two variables is called correlation. In statistics, the most common method for calculating correlation is Pearson's correlation coefficient. It can take the following values:
- Coefficient value = 1: full positive correlation between the variables.
- Coefficient value = -1: full negative correlation between the variables.
- Coefficient value = 0: no correlation at all between the variables.

It is always good to review the pairwise correlations of the attributes in our dataset before using it in an ML project, because some machine learning algorithms, such as linear regression and logistic regression, perform poorly if we have highly correlated attributes. In Python, we can easily calculate the correlation matrix of the dataset attributes with the corr() function on the pandas DataFrame.

Example
from pandas import read_csv
from pandas import set_option
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(path, names=names)
set_option('display.width', 100)
set_option('display.precision', 2)
correlations = data.corr(method='pearson')
print(correlations)

Output: a 9 x 9 table giving the Pearson correlation between every pair of attributes (preg, plas, pres, skin, test, mass, pedi, age, class).
The matrix in the above output gives the correlation between all pairs of attributes in the dataset.

Reviewing Skew of Attribute Distributions
Skewness may be defined as a distribution that is assumed to be Gaussian but appears distorted or shifted in one direction or another, either to the left or to the right. Reviewing the skewness of attributes is an important task for the following reasons:
- The presence of skewness in the data requires correction at the data preparation stage so that we can get more accuracy from our model.
- Most ML algorithms assume that the data has a Gaussian distribution, i.e. either a normal or a bell-curved distribution.

In Python, we can easily calculate the skew of each attribute using the skew() function on the pandas DataFrame.

Example
from pandas import read_csv
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(path, names=names)
print(data.skew())

Output: the skew value of each attribute (preg, plas, pres, skin, test, mass, pedi, age, class); some values are positive and some negative.

From the above output, a positive or negative skew can be observed for each attribute. If the value is closer to zero, the attribute shows less skew.
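The text above notes that skewness usually has to be corrected at the data preparation stage, but the source does not show how. The following is a minimal sketch of one common correction, a log transform with numpy; the positively skewed column used here is made up for illustration.

import numpy as np
import pandas as pd

# Hypothetical, positively skewed column of non-negative values
df = pd.DataFrame({"test": [0, 1, 2, 3, 5, 8, 13, 90, 150, 400]})

print("skew before:", df["test"].skew())

# log1p = log(1 + x); it is defined at 0 and compresses large values,
# which typically reduces positive (right) skew
df["test_log"] = np.log1p(df["test"])

print("skew after:", df["test_log"].skew())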
Machine Learning with Python – Understanding Data with Visualization

Introduction
In the previous chapter, we discussed the importance of data for machine learning algorithms, along with some Python recipes to understand the data with statistics. There is another way, called visualization, to understand the data. With the help of data visualization, we can see how the data looks and what kind of correlation is held by the attributes of the data. It is the fastest way to see whether the features correspond to the output. With the help of the following Python recipes, we can understand ML data with visualization.

Data visualization techniques:
- Univariate plots: histograms, density plots, box plots
- Multivariate plots: correlation matrix plots, scatter matrix plots

Univariate Plots: Understanding Attributes Independently
The simplest type of visualization is single-variable, or "univariate", visualization. With the help of univariate visualization, we can understand each attribute of our dataset independently. The following are some techniques in Python to implement univariate visualization.

Histograms
Histograms group the data into bins and are the fastest way to get an idea of the distribution of each attribute in a dataset. The following are some of the characteristics of histograms:
- They give us a count of the number of observations in each bin created for the visualization.
15,311
from the shape of the binwe can easily observe the distribution weather it is gaussianskewed or exponential histograms also help us to see possible outliers example the code shown below is an example of python script creating the histogram of the attributes of pima indian diabetes dataset herewe will be using hist(function on pandas dataframe to generate histograms and matplotlib for ploting them from matplotlib import pyplot from pandas import read_csv path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namesdata hist(pyplot show(output
15,312
the above output shows that it created the histogram for each attribute in the dataset from thiswe can observe that perhaps agepedi and test attribute may have exponential distribution while mass and plas have gaussian distribution density plots another quick and easy technique for getting each attributes distribution is density plots it is also like histogram but having smooth curve drawn through the top of each bin we can call them as abstracted histograms example in the following examplepython script will generate density plots for the distribution of attributes of pima indian diabetes dataset from matplotlib import pyplot from pandas import read_csv path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namesdata plot(kind='density'subplots=truelayout=( , )sharex=falsepyplot show(output
15,313
from the above outputthe difference between density plots and histograms can be easily understood box and whisker plots box and whisker plotsalso called boxplots in shortis another useful technique to review the distribution of each attribute' distribution the following are the characteristics of this techniqueit is univariate in nature and summarizes the distribution of each attribute it draws line for the middle value for median it draws box around the and it also draws whiskers which will give us an idea about the spread of the data the dots outside the whiskers signifies the outlier values outlier values would be times greater than the size of the spread of the middle data example in the following examplepython script will generate density plots for the distribution of attributes of pima indian diabetes dataset from matplotlib import pyplot from pandas import read_csv path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namesdata plot(kind='box'subplots=truelayout=( , )sharex=false,sharey=falsepyplot show(
15,314
output from the above plot of attribute' distributionit can be observed that agetest and skin appear skewed towards smaller values multivariate plotsinteraction among multiple variables another type of visualization is multi-variable or "multivariatevisualization with the help of multivariate visualizationwe can understand interaction between multiple attributes of our dataset the following are some techniques in python to implement multivariate visualizationcorrelation matrix plot correlation is an indication about the changes between two variables in our previous we have discussed pearson' correlation coefficients and the importance of correlation too we can plot correlation matrix to show which variable is having high or low correlation in respect to another variable example in the following examplepython script will generate and plot correlation matrix for the pima indian diabetes dataset it can be generated with the help of corr(function on pandas dataframe and plotted with the help of pyplot
15,315
from matplotlib import pyplot from pandas import read_csv import numpy path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namescorrelations data corr(fig pyplot figure(ax fig add_subplot( cax ax matshow(correlationsvmin=- vmax= fig colorbar(caxticks numpy arange( , , ax set_xticks(ticksax set_yticks(ticksax set_xticklabels(namesax set_yticklabels(namespyplot show(output
15,316
from the above output of correlation matrixwe can see that it is symmetrical the bottom left is same as the top right it is also observed that each variable is positively correlated with each other scatter matrix plot scatter plots shows how much one variable is affected by another or the relationship between them with the help of dots in two dimensions scatter plots are very much like line graphs in the concept that they use horizontal and vertical axes to plot data points example in the following examplepython script will generate and plot scatter matrix for the pima indian diabetes dataset it can be generated with the help of scatter_matrix(function on pandas dataframe and plotted with the help of pyplot from matplotlib import pyplot from pandas import read_csv from pandas tools plotting import scatter_matrix path " :\pima-indians-diabetes csvnames ['preg''plas''pres''skin''test''mass''pedi''age''class'data read_csv(pathnames=namesscatter_matrix(datapyplot show(
15,317
output
15,318
Machine Learning with Python – Preparing Data

Introduction
Machine learning algorithms are completely dependent on data, because it is the most crucial aspect that makes model training possible. On the other hand, if we cannot make sense of that data before feeding it to ML algorithms, a machine will be useless. In simple words, we always need to feed the right data, i.e. data in the correct scale and format and containing meaningful features, for the problem we want the machine to solve. This makes data preparation the most important step in the ML process. Data preparation may be defined as the procedure that makes our dataset more appropriate for the ML process.

Why Data Pre-processing?
After selecting the raw data for ML training, the most important task is data pre-processing. In a broad sense, data pre-processing converts the selected data into a form we can work with or can feed to ML algorithms. We always need to pre-process our data so that it meets the expectations of the machine learning algorithm.

Data Pre-processing Techniques
We have the following data pre-processing techniques that can be applied to a dataset to produce data for ML algorithms.

Scaling
Most probably our dataset comprises attributes with varying scales, but we cannot provide such data directly to an ML algorithm; it requires rescaling. Data rescaling makes sure that the attributes are on the same scale. Generally, attributes are rescaled into the range of 0 and 1. ML algorithms like gradient descent and k-nearest neighbours require scaled data. We can rescale the data with the MinMaxScaler class of the scikit-learn Python library.

Example
In this example, we will rescale the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in the previous chapters) and then, with the help of the MinMaxScaler class, it will be rescaled into the range of 0 and 1. The first few lines of the following script are the same as the ones we wrote in previous chapters while loading the CSV data.

from pandas import read_csv
from numpy import set_printoptions
from sklearn import preprocessing
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Now, we can use the MinMaxScaler class to rescale the data into the range of 0 and 1.

data_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
data_rescaled = data_scaler.fit_transform(array)

We can also summarize the data for the output as per our choice. Here, we limit the displayed precision and show the first 10 rows of the output.

set_printoptions(precision=2)
print("\nScaled data:\n", data_rescaled[0:10])

Output: the first 10 rows of the dataset, with every value rescaled into the range of 0 and 1.

From the above output, all the data got rescaled into the range of 0 and 1.

Normalization
Another useful data pre-processing technique is normalization. This is used to rescale each row of data to have a length of 1. It is mainly useful in sparse datasets where we have lots of zeros. We can rescale the data with the Normalizer class of the scikit-learn Python library.
Types of Normalization
In machine learning, there are two types of normalization pre-processing techniques, as follows.

L1 Normalization
It may be defined as the normalization technique that modifies the dataset values so that, in each row, the sum of the absolute values is always up to 1. It is also called Least Absolute Deviations.

Example
In this example, we use the L1 normalization technique to normalize the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then, with the help of the Normalizer class, it will be normalized. The first few lines of the following script are the same as the ones we wrote in previous chapters while loading the CSV data.

from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Normalizer
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Now, we can use the Normalizer class with norm='l1' to normalize the data.

data_normalizer = Normalizer(norm='l1').fit(array)
data_normalized = data_normalizer.transform(array)

We can also summarize the data for the output as per our choice. Here, we limit the displayed precision and show the first 3 rows of the output.

set_printoptions(precision=2)
print("\nNormalized data:\n", data_normalized[0:3])

Output: the first 3 rows of the dataset, rescaled so that the absolute values in each row sum to 1.
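To see that L1 normalization really does what the definition above says, the following small sketch (using a tiny made-up array rather than the diabetes data) checks that the absolute values in each row sum to 1 after the transformation.

import numpy as np
from sklearn.preprocessing import Normalizer

# Tiny made-up array with rows on very different scales
X = np.array([[1.0, -2.0, 2.0],
              [4.0,  0.0, -4.0]])

X_l1 = Normalizer(norm='l1').fit_transform(X)

print(X_l1)
# Each row of the L1-normalized data sums to 1 in absolute value
print(np.abs(X_l1).sum(axis=1))   # -> [1. 1.]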
L2 Normalization
It may be defined as the normalization technique that modifies the dataset values so that, in each row, the sum of the squares is always up to 1. It is also called Least Squares.

Example
In this example, we use the L2 normalization technique to normalize the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in previous chapters) and then, with the help of the Normalizer class, it will be normalized. The first few lines of the following script are the same as the ones we wrote in previous chapters while loading the CSV data.

from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Normalizer
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Now, we can use the Normalizer class with norm='l2' to normalize the data.

data_normalizer = Normalizer(norm='l2').fit(array)
data_normalized = data_normalizer.transform(array)

We can also summarize the data for the output as per our choice. Here, we limit the displayed precision and show the first 3 rows of the output.

set_printoptions(precision=2)
print("\nNormalized data:\n", data_normalized[0:3])

Output: the first 3 rows of the dataset, rescaled so that the squared values in each row sum to 1.

Binarization
As the name suggests, this is the technique with the help of which we can make our data binary. We can use a binary threshold for making our data binary: the values above the threshold will be converted to 1 and those below it will be converted to 0.
For example, if we choose the threshold value 0.5, then every dataset value above it will become 1 and every value below it will become 0. That is why we can call it binarizing the data, or thresholding the data. This technique is useful when we have probabilities in our dataset and want to convert them into crisp values. We can binarize the data with the Binarizer class of the scikit-learn Python library.

Example
In this example, we will rescale the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then, with the help of the Binarizer class, it will be converted into binary values, 0 and 1, depending upon the threshold value; here we take 0.5 as the threshold value. The first few lines of the following script are the same as the ones we wrote in previous chapters while loading the CSV data.

from pandas import read_csv
from sklearn.preprocessing import Binarizer
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Now, we can use the Binarizer class to convert the data into binary values.

binarizer = Binarizer(threshold=0.5).fit(array)
data_binarized = binarizer.transform(array)

Here, we are showing the first 5 rows of the output.

print("\nBinary data:\n", data_binarized[0:5])

Output: the first 5 rows of the dataset, with every value converted to either 0 or 1.
Standardization
This is another useful data pre-processing technique, basically used to transform data attributes to a standard Gaussian distribution with a mean of 0 and an SD (standard deviation) of 1. This technique is useful in ML algorithms like linear regression and logistic regression that assume a Gaussian distribution in the input dataset and produce better results with rescaled data. We can standardize the data (mean = 0 and SD = 1) with the StandardScaler class of the scikit-learn Python library.

Example
In this example, we will rescale the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then, with the help of the StandardScaler class, it will be converted into a Gaussian distribution with mean 0 and SD 1. The first few lines of the following script are the same as the ones we wrote in previous chapters while loading the CSV data.

from sklearn.preprocessing import StandardScaler
from pandas import read_csv
from numpy import set_printoptions
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Now, we can use the StandardScaler class to rescale the data.

data_scaler = StandardScaler().fit(array)
data_rescaled = data_scaler.transform(array)

We can also summarize the data for the output as per our choice. Here, we limit the displayed precision and show the first few rows of the output.

set_printoptions(precision=2)
print("\nRescaled data:\n", data_rescaled[0:5])

Output: the first rows of the dataset, with every attribute standardized to mean 0 and standard deviation 1 (so both positive and negative values appear).
Data Labeling
We discussed the importance of good data for ML algorithms, as well as some techniques to pre-process the data before sending it to ML algorithms. One more aspect in this regard is data labeling: it is also very important that the data sent to ML algorithms has proper labeling. For example, in the case of classification problems, lots of labels in the form of words, numbers, etc. are present in the data.

What is Label Encoding?
Most sklearn functions expect data with number labels rather than word labels. Hence, we need to convert such word labels into number labels. This process is called label encoding. We can perform label encoding with the LabelEncoder() function of the scikit-learn Python library.

Example
In the following example, the Python script performs label encoding. First, import the required Python libraries as follows:

import numpy as np
from sklearn import preprocessing

Now, we need to provide the input labels, as follows:

input_labels = ['red', 'black', 'red', 'green', 'black', 'yellow', 'white']

The next lines of code create the label encoder and train it:

encoder = preprocessing.LabelEncoder()
encoder.fit(input_labels)

The next lines of the script check the performance by encoding a randomly ordered list:

test_labels = ['green', 'red', 'black']
encoded_values = encoder.transform(test_labels)
print("\nLabels =", test_labels)
print("Encoded values =", list(encoded_values))

We can also decode a list of encoded values back into labels:

encoded_values = [3, 0, 4, 1]
decoded_list = encoder.inverse_transform(encoded_values)

Finally, we can print the encoded values and decoded labels with the help of the following Python script:
print("\nEncoded values =", encoded_values)
print("\nDecoded labels =", list(decoded_list))

Output
Labels = ['green', 'red', 'black']
Encoded values = [1, 2, 0]
Encoded values = [3, 0, 4, 1]
Decoded labels = ['white', 'black', 'yellow', 'green']
Machine Learning with Python – Data Feature Selection

In the previous chapter, we saw in detail how to pre-process and prepare data for machine learning. In this chapter, let us understand in detail data feature selection and the various aspects involved in it.

Importance of Data Feature Selection
The performance of a machine learning model is directly proportional to the data features used to train it. The performance of an ML model will be affected negatively if the data features provided to it are irrelevant. On the other hand, the use of relevant data features can increase the accuracy of your ML model, especially for linear and logistic regression.

Now the question arises: what is automatic feature selection? It may be defined as the process with the help of which we select those features in our data that are most relevant to the output or prediction variable in which we are interested. It is also called attribute selection. The following are some of the benefits of automatic feature selection before modeling the data:
- Performing feature selection before data modeling will reduce overfitting.
- Performing feature selection before data modeling will increase the accuracy of the ML model.
- Performing feature selection before data modeling will reduce the training time.

Feature Selection Techniques
The following are automatic feature selection techniques that we can use to model ML data in Python.

Univariate Selection
This feature selection technique is very useful in selecting, with the help of statistical testing, those features that have the strongest relationship with the prediction variable. We can implement the univariate feature selection technique with the SelectKBest class of the scikit-learn Python library.

Example
In this example, we will use the Pima Indians Diabetes dataset to select the attributes with the best features with the help of the chi-square statistical test.

from pandas import read_csv
from numpy import set_printoptions
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Next, we will separate the array into input and output components:

X = array[:, 0:8]
Y = array[:, 8]

The following lines of code will select the best features from the dataset:

test = SelectKBest(score_func=chi2, k=4)   # k, the number of features to keep, chosen here for illustration
fit = test.fit(X, Y)

We can also summarize the data for the output as per our choice. Here, we limit the displayed precision and show the data attributes with the best features along with the best score of each attribute:

set_printoptions(precision=2)
print(fit.scores_)
featured_data = fit.transform(X)
print("\nFeatured data:\n", featured_data[0:4])

Output: the chi-square score of each attribute, followed by the first rows of the dataset reduced to the selected best-scoring attributes.
Recursive Feature Elimination
As the name suggests, the RFE (Recursive Feature Elimination) feature selection technique removes attributes recursively and builds the model with the remaining attributes. We can implement the RFE feature selection technique with the RFE class of the scikit-learn Python library.

Example
In this example, we will use RFE with the logistic regression algorithm to select the 3 best-featured attributes from the Pima Indians Diabetes dataset.

from pandas import read_csv
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Next, we will separate the array into its input and output components:

X = array[:, 0:8]
Y = array[:, 8]

The following lines of code will select the best features from the dataset:

model = LogisticRegression()
rfe = RFE(model, n_features_to_select=3)
fit = rfe.fit(X, Y)
print("Number of features: %d" % fit.n_features_)
print("Selected features: %s" % fit.support_)
print("Feature ranking: %s" % fit.ranking_)

Output
Number of features: 3
Selected features: [ True False False False False  True  True False]
Feature ranking: the selected attributes get rank 1; the remaining attributes get higher ranks.

We can see in the above output that RFE chose preg, mass and pedi as the first 3 best features; they are marked True (rank 1) in the output.
Principal Component Analysis (PCA)
PCA, generally called a data reduction technique, is a very useful feature selection technique, since it uses linear algebra to transform the dataset into a compressed form. We can implement the PCA feature selection technique with the PCA class of the scikit-learn Python library, and we can select the number of principal components in the output.

Example
In this example, we will use PCA to select the best principal components from the Pima Indians Diabetes dataset.

from pandas import read_csv
from sklearn.decomposition import PCA
path = ":\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Next, we will separate the array into input and output components:

X = array[:, 0:8]
Y = array[:, 8]

The following lines of code will extract features from the dataset:

pca = PCA(n_components=3)   # number of components chosen here for illustration
fit = pca.fit(X)
print("Explained variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)

Output: the proportion of variance explained by each selected principal component, followed by the component matrix (each row is a principal component expressed as weights on the original attributes, with both positive and negative entries).
We can observe from the above output that the principal components bear little resemblance to the source data.

Feature Importance
As the name suggests, the feature importance technique is used to choose the most important features. It basically uses a trained supervised classifier to select features. We can implement this feature selection technique with the ExtraTreesClassifier class of the scikit-learn Python library.

Example
In this example, we will use ExtraTreesClassifier to select features from the Pima Indians Diabetes dataset.

from pandas import read_csv
from sklearn.ensemble import ExtraTreesClassifier
path = ":\desktop\pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values

Next, we will separate the array into input and output components:

X = array[:, 0:8]
Y = array[:, 8]

The following lines of code will extract features from the dataset:

model = ExtraTreesClassifier()
model.fit(X, Y)
print(model.feature_importances_)

Output: an importance score for each of the input attributes.

From the output, we can observe that there is a score for each attribute. The higher the score, the higher the importance of that attribute.
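Because feature_importances_ is just an array of numbers in the same order as the input columns, it can be easier to read when paired with the attribute names and sorted. The following is a small sketch of that step; it assumes the fitted model and the names list from the example above are still in scope.

# Pair each importance score with its attribute name and sort, highest first
importances = sorted(zip(model.feature_importances_, names[0:8]), reverse=True)
for score, name in importances:
    print("%-6s %.3f" % (name, score))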
Machine Learning Algorithms – Classification
Classification – Introduction

Introduction to Classification
Classification may be defined as the process of predicting a class or category from observed values or given data points. The categorized output can have a form such as "black" or "white", or "spam" or "no spam".

Mathematically, classification is the task of approximating a mapping function f from input variables (X) to output variables (Y). It basically belongs to supervised machine learning, in which targets are provided along with the input dataset.

An example of a classification problem is spam detection in emails. There can be only two categories of output, "spam" and "no spam", hence this is a binary classification. To implement this classification, we first need to train the classifier; for this example, "spam" and "no spam" emails would be used as the training data. After successfully training the classifier, it can be used to detect an unknown email.

Types of Learners in Classification
We have two types of learners with respect to classification problems:

Lazy learners: as the name suggests, such learners wait for the testing data to appear after storing the training data. Classification is done only after getting the testing data. They spend less time on training but more time on predicting. Examples of lazy learners are k-nearest neighbour and case-based reasoning.

Eager learners: as opposed to lazy learners, eager learners construct the classification model without waiting for the testing data to appear after storing the training data. They spend more time on training but less time on predicting. Examples of eager learners are decision trees, naive Bayes and artificial neural networks (ANN).

Building a Classifier in Python
Scikit-learn, a Python library for machine learning, can be used to build a classifier in Python. The steps for building a classifier in Python are as follows.

Step 1: Importing the necessary Python package
To build a classifier using scikit-learn, we need to import it. We can import it with the following script:
import sklearn

Step 2: Importing the dataset
After importing the necessary package, we need a dataset to build the classification prediction model. We can import it from the sklearn datasets or use another one as per our requirement. We are going to use sklearn's Breast Cancer Wisconsin Diagnostic Database. We can import it with the following script:

from sklearn.datasets import load_breast_cancer

The following script will load the dataset:

data = load_breast_cancer()

We also need to organize the data, which can be done with the following scripts:

label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']

The following command will print the names of the labels, 'malignant' and 'benign', in the case of our database:

print(label_names)

The output of the above command is the names of the labels:

['malignant' 'benign']

These labels are mapped to binary values 0 and 1: malignant cancer is represented by 0 and benign cancer is represented by 1.

The feature names and feature values can be seen with the help of the following commands:

print(feature_names[0])

The output of the above command is the name of the first feature:

mean radius

Similarly, the name of the next feature can be produced as follows:

print(feature_names[1])
The output of the above command is:

mean texture

We can print the feature values with the help of the following command:

print(features[0])

This prints the 30 feature values of the first instance as an array of numbers.

We can print the feature values of another instance with the following command:

print(features[1])

This again prints an array of 30 feature values.

Step 3: Organizing data into training and testing sets
As we need to test our model on unseen data, we will divide our dataset into two parts: a training set and a test set. We can use the train_test_split() function of the sklearn Python package to split the data into sets. The following command will import the function:

from sklearn.model_selection import train_test_split

Now, the next command will split the data into training and testing data. In this example, we hold out a portion of the data for testing and use the rest for training:

train, test, train_labels, test_labels = train_test_split(
    features, labels, test_size=0.40, random_state=42)   # split parameters chosen for illustration
Step 4: Model evaluation
After dividing the data into training and testing sets, we need to build the model. We will be using the Naive Bayes algorithm for this purpose. The following command will import the GaussianNB module:

from sklearn.naive_bayes import GaussianNB

Now, initialize the model as follows:

gnb = GaussianNB()

Next, with the help of the following command, we can train the model:

model = gnb.fit(train, train_labels)

Now, for evaluation purposes, we need to make predictions. It can be done using the predict() function as follows:

preds = gnb.predict(test)
print(preds)

This prints a series of 0s and 1s, which are the predicted values for the malignant and benign tumor classes.

Step 5: Finding accuracy
We can find the accuracy of the model built in the previous step by comparing the two arrays, test_labels and preds. We will be using the accuracy_score() function to determine the accuracy.

from sklearn.metrics import accuracy_score
print(accuracy_score(test_labels, preds))

The above output shows that our NaiveBayes classifier is highly accurate on the held-out test set.
Classification – Evaluation Metrics
The job is not done even when you have finished the implementation of your machine learning application or model. We must find out how effective our model is. There can be different evaluation metrics, but we must choose them carefully, because the choice of metrics influences how the performance of a machine learning algorithm is measured and compared. The following are some of the important classification evaluation metrics, among which you can choose based upon your dataset and kind of problem.

Confusion Matrix
It is the easiest way to measure the performance of a classification problem where the output can be of two or more types of classes. A confusion matrix is nothing but a table with two dimensions, "Actual" and "Predicted", whose cells contain the counts of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN), as shown below:

                 Predicted: 1            Predicted: 0
Actual: 1        True Positives (TP)     False Negatives (FN)
Actual: 0        False Positives (FP)    True Negatives (TN)

The explanation of the terms associated with the confusion matrix is as follows:
- True Positives (TP): the case when both the actual class and the predicted class of a data point is 1.
- True Negatives (TN): the case when both the actual class and the predicted class of a data point is 0.
- False Positives (FP): the case when the actual class of a data point is 0 and the predicted class is 1.
- False Negatives (FN): the case when the actual class of a data point is 1 and the predicted class is 0.

We can find the confusion matrix with the confusion_matrix() function of sklearn. With the help of the following script, we can find the confusion matrix of the binary classifier built above.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_labels, preds))

Output: a 2 x 2 array with the counts of correct and incorrect predictions for each class.

Accuracy
It may be defined as the number of correct predictions made by our ML model. We can easily calculate it from the confusion matrix with the help of the following formula:

Accuracy = (TP + TN) / (TP + FP + FN + TN)

For the binary classifier built above, substituting the counts from its confusion matrix into this formula gives the same accuracy we obtained earlier with accuracy_score().

Precision
Precision, used in document retrieval, may be defined as the number of correct documents returned by our ML model. We can easily calculate it from the confusion matrix with the help of the following formula:

Precision = TP / (TP + FP)

Recall or Sensitivity
Recall may be defined as the number of positives returned by our ML model. We can easily calculate it from the confusion matrix with the help of the following formula:

Recall = TP / (TP + FN)
Specificity
Specificity, in contrast to recall, may be defined as the number of negatives returned by our ML model. We can easily calculate it from the confusion matrix with the help of the following formula:

Specificity = TN / (TN + FP)

For the binary classifier built above, the precision, recall and specificity can likewise be obtained by substituting the corresponding counts from its confusion matrix into these formulas; a consolidated sketch that computes all of these metrics with scikit-learn is given after the lists below.

Various ML Classification Algorithms
The following are some important ML classification algorithms:
- Logistic Regression
- Support Vector Machine (SVM)
- Decision Tree
- Naive Bayes
- Random Forest

We will be discussing all these classification algorithms in detail in further chapters.

Applications
Some of the most important applications of classification algorithms are as follows:
- Speech recognition
- Handwriting recognition
- Biometric identification
- Document classification
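The following is a minimal sketch, using two small hand-made label arrays rather than the breast cancer predictions above, that shows how accuracy, precision and recall can be obtained directly from scikit-learn and how specificity (which has no dedicated sklearn function) can be derived from the confusion matrix.

from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# Hand-made example labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("Accuracy   :", accuracy_score(y_true, y_pred))     # (TP + TN) / total
print("Precision  :", precision_score(y_true, y_pred))    # TP / (TP + FP)
print("Recall     :", recall_score(y_true, y_pred))       # TP / (TP + FN)
print("Specificity:", tn / (tn + fp))                     # TN / (TN + FP)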
Classification Algorithms – Logistic Regression

Introduction to Logistic Regression
Logistic regression is a supervised learning classification algorithm used to predict the probability of a target variable. The nature of the target or dependent variable is dichotomous, which means there are only two possible classes. In simple words, the dependent variable is binary in nature, having data coded as either 1 (which stands for success/yes) or 0 (which stands for failure/no).

Mathematically, a logistic regression model predicts P(Y = 1) as a function of X. It is one of the simplest ML algorithms and can be used for various classification problems such as spam detection, diabetes prediction, cancer detection, etc.

Types of Logistic Regression
Generally, logistic regression means binary logistic regression, having a binary target variable, but there are two more categories of target variables that can be predicted by it. Based on the number of categories, logistic regression can be divided into the following types:

Binary or binomial: in this kind of classification, the dependent variable has only two possible types, either 1 or 0. For example, these variables may represent success or failure, yes or no, win or loss, etc.

Multinomial: in this kind of classification, the dependent variable can have 3 or more possible unordered types, i.e. types having no quantitative significance. For example, these variables may represent "Type A", "Type B" or "Type C".

Ordinal: in this kind of classification, the dependent variable can have 3 or more possible ordered types, i.e. types having quantitative significance. For example, these variables may represent "poor", "good", "very good" or "excellent", and each category can have a score like 0, 1, 2, 3.

Logistic Regression Assumptions
Before diving into the implementation of logistic regression, we must be aware of the following assumptions about it.
- In the case of binary logistic regression, the target variables must always be binary, and the desired outcome is represented by the factor level 1.
- There should not be any multicollinearity in the model, which means the independent variables must be independent of each other.
- We must include meaningful variables in our model.
- We should choose a large sample size for logistic regression.

The Binary Logistic Regression Model
The simplest form of logistic regression is binary or binomial logistic regression, in which the target or dependent variable can have only 2 possible types, either 1 or 0. It allows us to model the relationship between multiple predictor variables and a binary/binomial target variable. In logistic regression, the linear function is used as input to another function, g, in the following relationship:

hθ(x) = g(θᵀx), where 0 ≤ hθ(x) ≤ 1

Here, g is the logistic or sigmoid function, which can be given as follows:

g(z) = 1 / (1 + e^(-z)), where z = θᵀx

The sigmoid curve can be represented as an S-shaped graph: the values on the y-axis lie between 0 and 1, and the curve crosses the axis at 0.5.

The classes can be divided into positive or negative. The output is interpreted as the probability of the positive class if it lies between 0 and 1. For our implementation, we interpret the output of the hypothesis function as positive if it is greater than or equal to 0.5, otherwise negative.

We also need to define a loss function to measure how well the algorithm performs using the weights on the functions, represented by theta:

J(θ) = (1/m) Σ [ -y log(hθ(x)) - (1 - y) log(1 - hθ(x)) ]
Now, after defining the loss function, our prime goal is to minimize it. This can be done by fitting the weights, i.e. by increasing or decreasing them. With the help of the derivative of the loss function with respect to each weight, we can know which parameters should have a high weight and which should have a smaller one.

The following gradient descent equation tells us how the loss would change if we modified the parameters:

∂J(θ)/∂θⱼ = (1/m) Xᵀ (g(Xθ) - y)

Implementation in Python
Now we will implement the above concept of binomial logistic regression in Python. For this purpose, we are using the multivariate flower dataset named 'iris', which has 3 classes of 50 instances each; we will use only the first two feature columns. Every class represents a type of iris flower.

First, we need to import the necessary libraries as follows:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets

Next, load the iris dataset as follows:

iris = datasets.load_iris()
X = iris.data[:, :2]
y = (iris.target != 0) * 1

We can plot our training data as follows:

plt.figure(figsize=(10, 6))
plt.scatter(X[y == 0][:, 0], X[y == 0][:, 1], color='b', label='0')
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='r', label='1')
plt.legend()
Next, we will define the sigmoid function, the loss function and gradient descent as follows:

class LogisticRegression:
    def __init__(self, lr=0.01, num_iter=100000, fit_intercept=True, verbose=False):
        self.lr = lr                      # learning rate (default chosen for illustration)
        self.num_iter = num_iter          # number of gradient descent iterations
        self.fit_intercept = fit_intercept
        self.verbose = verbose

    def __add_intercept(self, X):
        intercept = np.ones((X.shape[0], 1))
        return np.concatenate((intercept, X), axis=1)

    def __sigmoid(self, z):
        return 1 / (1 + np.exp(-z))

    def __loss(self, h, y):
        return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()

    def fit(self, X, y):
        if self.fit_intercept:
            X = self.__add_intercept(X)
Now, initialize the weights as follows:

        self.theta = np.zeros(X.shape[1])

        for i in range(self.num_iter):
            z = np.dot(X, self.theta)
            h = self.__sigmoid(z)
            gradient = np.dot(X.T, (h - y)) / y.size
            self.theta -= self.lr * gradient

            z = np.dot(X, self.theta)
            h = self.__sigmoid(z)
            loss = self.__loss(h, y)

            if self.verbose == True and i % 10000 == 0:
                print(f'loss: {loss} \t')

With the help of the following script, we can predict the output probabilities:

    def predict_prob(self, X):
        if self.fit_intercept:
            X = self.__add_intercept(X)
        return self.__sigmoid(np.dot(X, self.theta))

    def predict(self, X):
        return self.predict_prob(X).round()

Next, we can evaluate the model and plot it as follows:

model = LogisticRegression(lr=0.1, num_iter=300000)
model.fit(X, y)
preds = model.predict(X)
print((preds == y).mean())

plt.figure(figsize=(10, 6))
plt.scatter(X[y == 0][:, 0], X[y == 0][:, 1], color='b', label='0')
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='r', label='1')
plt.legend()
x1_min, x1_max = X[:, 0].min(), X[:, 0].max()
x2_min, x2_max = X[:, 1].min(), X[:, 1].max()
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
probs = model.predict_prob(grid).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors='red')

Output: a scatter plot of the two classes with the learned decision boundary drawn in red.

Multinomial Logistic Regression Model
Another useful form of logistic regression is multinomial logistic regression, in which the target or dependent variable can have 3 or more possible unordered types, i.e. types having no quantitative significance.

Implementation in Python
Now we will implement the above concept of multinomial logistic regression in Python. For this purpose, we are using a dataset from sklearn named digits.

First, we need to import the necessary libraries as follows:

import sklearn
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn.model_selection import train_test_split

Next, we need to load the digits dataset:

digits = datasets.load_digits()

Now, define the feature matrix (X) and response vector (y) as follows:

X = digits.data
y = digits.target

With the help of the next line of code, we can split X and y into training and testing sets:

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=1)   # split parameters chosen for illustration

Now create an object of logistic regression as follows:

digreg = linear_model.LogisticRegression()

Now, we need to train the model by using the training set as follows:

digreg.fit(X_train, y_train)

Next, make predictions on the testing set as follows:

y_pred = digreg.predict(X_test)

Next, print the accuracy of the model as follows:

print("Accuracy of Logistic Regression model is:",
      metrics.accuracy_score(y_test, y_pred) * 100)

Output: the accuracy of the logistic regression model, as a percentage.

From the above output, we can see that our model reaches a high accuracy on the held-out digits; the exact figure depends on the train/test split.
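Accuracy alone can hide which digits the model confuses with which. As a follow-up to the example above (it reuses y_test and y_pred from it), the following small sketch prints scikit-learn's per-class report and the confusion matrix; this goes beyond what the source shows.

from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision, recall and F1 for the ten digit classes
print(classification_report(y_test, y_pred))

# Rows are the actual digits, columns the predicted digits;
# off-diagonal counts show which digits get confused with which
print(confusion_matrix(y_test, y_pred))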
Classification Algorithms – Support Vector Machine (SVM)

Introduction to SVM
Support Vector Machines (SVMs) are powerful yet flexible supervised machine learning algorithms which are used both for classification and regression, but generally they are used for classification problems. SVMs were first introduced in the 1960s and were later refined, around 1990. SVMs have their own unique way of implementation compared to other machine learning algorithms. Lately, they have become extremely popular because of their ability to handle multiple continuous and categorical variables.

Working of SVM
An SVM model is basically a representation of different classes by a hyperplane in a multidimensional space. The hyperplane is generated in an iterative manner by the SVM so that the error can be minimized. The goal of SVM is to divide the dataset into classes by finding a maximum marginal hyperplane (MMH).

(Figure: two classes separated by a hyperplane, with the margin and the support vectors marked.)

The following are important concepts in SVM:
- Support vectors: the data points that are closest to the hyperplane are called support vectors. The separating line is defined with the help of these data points.
- Hyperplane: as we can see in the above diagram, it is the decision plane or space which divides a set of objects belonging to different classes.
- Margin: it may be defined as the gap between two lines drawn at the closest data points of different classes. It can be calculated as the perpendicular distance from the line to the support vectors. A large margin is considered a good margin and a small margin is considered a bad margin.
The main goal of SVM is to divide the dataset into classes by finding a maximum marginal hyperplane (MMH), which is done in the following two steps:
- First, SVM generates hyperplanes iteratively that segregate the classes in the best way.
- Then, it chooses the hyperplane that separates the classes correctly.

Implementing SVM in Python
For implementing SVM in Python, we start with the standard library imports as follows:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns; sns.set()

Next, we create a sample dataset having linearly separable data, from sklearn.datasets, for classification using SVM:

from sklearn.datasets.samples_generator import make_blobs   # in newer scikit-learn: from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=100, centers=2, random_state=0, cluster_std=0.50)   # sizes chosen for illustration
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='summer')

The output after generating the sample dataset is a scatter plot of two linearly separable clusters.

We know that SVM supports discriminative classification. It divides the classes from each other by simply finding a line in the case of two dimensions, or a manifold in the case of multiple dimensions. It is implemented on the above dataset as follows:

xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='summer')
plt.plot([0.6], [2.1], 'x', color='black', markeredgewidth=2, markersize=10)   # mark an example point (coordinates illustrative)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:   # hand-picked slopes and intercepts (values illustrative)
    plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5)

The output is as follows: three different straight lines, each of which perfectly discriminates between the two clusters of samples.

We can see from the above output that there are three different separators that perfectly discriminate the above samples. As discussed, the main goal of SVM is to divide the dataset into classes by finding a maximum marginal hyperplane (MMH). Hence, rather than drawing a zero-width line between the classes, we can draw around each line a margin of some width, up to the nearest point. It can be done as follows:

xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='summer')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:   # slope, intercept and margin width (values illustrative)
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5)
The output shows the three candidate separators together with the shaded margins around each of them.

From the above image in the output, we can easily observe the "margins" within the discriminative classifiers. SVM will choose the line that maximizes the margin.

Next, we will use scikit-learn's support vector classifier to train an SVM model on this data. Here, we are using a linear kernel to fit the SVM, as follows:

from sklearn.svm import SVC   # "Support Vector Classifier"
model = SVC(kernel='linear', C=1E10)   # a very large C approximates a hard margin (value illustrative)
model.fit(X, y)

The output is the fitted classifier, printed with all of its parameters, for example:

SVC(C=10000000000.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto_deprecated',
    kernel='linear', max_iter=-1, probability=False, random_state=None,
    shrinking=True, tol=0.001, verbose=False)

Now, for better understanding, the following will plot the decision function of the SVC:

def decision_function(model, ax=None, plot_support=True):
    if ax is None:
        ax = plt.gca()
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()

For evaluating the model, we need to create a grid as follows:

    x = np.linspace(xlim[0], xlim[1], 30)
    y = np.linspace(ylim[0], ylim[1], 30)
    Y, X = np.meshgrid(y, x)
    xy = np.vstack([X.ravel(), Y.ravel()]).T
    P = model.decision_function(xy).reshape(X.shape)

Next, we need to plot the decision boundaries and margins as follows:

    ax.contour(X, Y, P, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--'])

Now, similarly, plot the support vectors as follows:

    if plot_support:
        ax.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1],
                   s=300, linewidth=1, facecolors='none')
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)

Now, use this function to plot our fitted model as follows:

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='summer')
decision_function(model)
Output: the scatter plot with the SVM decision boundary (solid line), the margins (dashed lines) and the support vectors circled.

We can observe from the above output that the SVM classifier fits the data with margins, shown as dashed lines, and support vectors, the pivotal elements of this fit, touching the dashed lines. These support vector points are stored in the support_vectors_ attribute of the classifier, as follows:

model.support_vectors_

The output is an array containing the coordinates of the three support vectors.

SVM Kernels
In practice, the SVM algorithm is implemented with a kernel that transforms the input data space into the required form. SVM uses a technique called the kernel trick, in which the kernel takes a low-dimensional input space and transforms it into a higher-dimensional space. In simple words, a kernel converts non-separable problems into separable problems by adding more dimensions. It makes SVM more powerful, flexible and accurate. The following are some of the types of kernels used by SVM.

Linear kernel
It can be used as the dot product between any two observations. The formula of the linear kernel is as follows:

K(x, xi) = sum(x * xi)

From the above formula, we can see that the product between two vectors x and xi is the sum of the multiplication of each pair of input values.

Polynomial kernel
It is a more generalized form of the linear kernel and can distinguish curved or nonlinear input spaces. The following is the formula for the polynomial kernel:

K(x, xi) = 1 + sum(x * xi)^d

Here d is the degree of the polynomial, which we need to specify manually in the learning algorithm.

Radial Basis Function (RBF) kernel
The RBF kernel, mostly used in SVM classification, maps the input space into an indefinite-dimensional space. The following formula explains it mathematically:

K(x, xi) = exp(-gamma * sum((x - xi)^2))
Here, gamma ranges from 0 to 1 and we need to specify it manually in the learning algorithm. A good default value of gamma is 0.1.

As we implemented SVM for linearly separable data, we can also implement it in Python for data that is not linearly separable. This can be done by using kernels.

Example
The following is an example of creating an SVM classifier by using kernels. We will be using the iris dataset from scikit-learn.

We will start by importing the following packages:

import pandas as pd
import numpy as np
from sklearn import svm, datasets
import matplotlib.pyplot as plt

Now, we need to load the input data:

iris = datasets.load_iris()

From this dataset, we are taking the first two features as follows:

X = iris.data[:, :2]
y = iris.target

Next, we will plot the SVM boundaries with the original data as follows:

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = (x_max - x_min) / 100
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
X_plot = np.c_[xx.ravel(), yy.ravel()]

Now, we need to provide the value of the regularization parameter as follows:

C = 1.0

Next, the SVM classifier object can be created as follows:

svc_classifier = svm.SVC(kernel='linear', C=C).fit(X, y)
Z = svc_classifier.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(10, 6))
plt.subplot(111)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('Support Vector Classifier with linear kernel')

Output
Text(0.5, 1.0, 'Support Vector Classifier with linear kernel')

The plot shows the iris samples over the decision regions learned by the linear-kernel classifier.

For creating an SVM classifier with an RBF kernel, we can change the kernel to rbf as follows:

svc_classifier = svm.SVC(kernel='rbf', gamma='auto', C=C).fit(X, y)
Z = svc_classifier.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(10, 6))
plt.subplot(111)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('Support Vector Classifier with rbf kernel')
Output
Text(0.5, 1.0, 'Support Vector Classifier with rbf kernel')

The plot now shows the curved decision regions produced by the RBF kernel.

We set the value of gamma to 'auto', but you can also provide a value between 0 and 1.

Pros and Cons of SVM Classifiers

Pros of SVM classifiers:
- SVM classifiers offer great accuracy and work well with high-dimensional spaces.
- SVM classifiers basically use a subset of the training points, and hence use very little memory.

Cons of SVM classifiers:
- They have high training time, and hence in practice are not suitable for large datasets.
- Another disadvantage is that SVM classifiers do not work well with overlapping classes.
Classification Algorithms – Decision Tree

Introduction to Decision Tree
In general, decision tree analysis is a predictive modelling tool that can be applied across many areas. Decision trees can be constructed by an algorithmic approach that splits the dataset in different ways based on different conditions. Decision trees are among the most powerful algorithms that fall under the category of supervised algorithms. They can be used for both classification and regression tasks.

The two main entities of a tree are decision nodes, where the data is split, and leaves, where we get the outcome. An example of a binary tree for predicting whether a person is fit or unfit, using information such as age, eating habits and exercise habits, is given below:

- Is the person's age above the threshold?
  - No: Does the person eat lots of fast food?
    - Yes: Unfit
    - No: Fit
  - Yes: Does the person exercise regularly?
    - Yes: Fit
    - No: Unfit

In the above decision tree, the questions are decision nodes and the final outcomes are leaves. (A small runnable sketch of such a tree is given after the list of tree types below.)

We have the following two types of decision trees:
- Classification decision trees: in this kind of decision tree, the decision variable is categorical. The above decision tree is an example of a classification decision tree.
- Regression decision trees: in this kind of decision tree, the decision variable is continuous.
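To make the fit/unfit example above concrete, the following is a minimal sketch, not from the source, that trains a scikit-learn DecisionTreeClassifier on a tiny made-up dataset (the feature encoding and records are mine) and prints the learned rules with export_text.

from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: age, eats_fast_food (1 = yes), exercises (1 = yes); made-up records
X = [[25, 1, 0], [22, 0, 0], [45, 0, 1], [52, 0, 0], [35, 1, 1], [28, 1, 1]]
y = ['unfit', 'fit', 'fit', 'unfit', 'fit', 'fit']   # made-up labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Print the decision nodes and leaves the tree learned from this toy data
print(export_text(tree, feature_names=['age', 'eats_fast_food', 'exercises']))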
Implementing Decision Tree Algorithm

Gini Index
It is the name of the cost function that is used to evaluate the binary splits in the dataset and works with the categorical target variable "Success" or "Failure". Higher the value of the Gini index, higher the homogeneity; with the formula below, the value for a 2-class problem lies between 0.5 (an even split) and 1 (a pure node).

The Gini index for a split can be calculated with the help of the following steps:
First, calculate the Gini index for the sub-nodes by using the formula p^2 + q^2, which is the sum of the squares of the probabilities of success and failure.
Next, calculate the Gini index for the split using the weighted Gini score of each node of that split.
The Classification and Regression Tree (CART) algorithm uses the Gini method to generate binary splits.

Split Creation
A split is basically including an attribute in the dataset and a value. We can create a split in a dataset with the help of the following three parts:
Part 1 - Calculating Gini Score: We have just discussed this part in the previous section.
Part 2 - Splitting a dataset: It may be defined as separating a dataset into two lists of rows, given the index of an attribute and a split value of that attribute. After getting the two groups, right and left, from the dataset, we can calculate the value of the split by using the Gini score calculated in the first part. The split value will decide in which group the attribute will reside.
Part 3 - Evaluating all splits: The next part after finding the Gini score and splitting the dataset is the evaluation of all splits. For this purpose, first, we must check every value associated with each attribute as a candidate split. Then we need to find the best possible split by evaluating the cost of the split. The best split will be used as a node in the decision tree.

Building a Tree
As we know, a tree has a root node and terminal nodes. After creating the root node, we can build the tree in two parts:
Part 1 - Terminal node creation
While creating terminal nodes of a decision tree, one important point is to decide when to stop growing the tree or creating further terminal nodes. It can be done by using two criteria, namely maximum tree depth and minimum node records, as follows:
Maximum Tree Depth: As the name suggests, this is the maximum number of nodes in a tree after the root node. We must stop adding terminal nodes once the tree reaches maximum depth, i.e. once the tree has got the maximum number of terminal nodes.
Minimum Node Records: It may be defined as the minimum number of training patterns that a given node is responsible for. We must stop adding terminal nodes once the tree reaches these minimum node records or falls below this minimum. A terminal node is used to make the final prediction.
Part 2 - Recursive splitting
As we understood when to create terminal nodes, now we can start building our tree. Recursive splitting is a method to build the tree. In this method, once a node is created, we can create the child nodes (nodes added to an existing node) recursively on each group of data, generated by splitting the dataset, by calling the same function again and again.

Prediction
After building a decision tree, we need to make predictions with it. Basically, prediction involves navigating the decision tree with the specifically provided row of data. We can make a prediction with the help of a recursive function, as above. The same prediction routine is called again with the left or the right child nodes.

Assumptions
The following are some of the assumptions we make while creating a decision tree:
While preparing decision trees, the training set is taken as the root node.
The decision tree classifier prefers the feature values to be categorical. In case you want to use continuous values, they must be discretized prior to model building.
Based on the attribute's values, the records are recursively distributed.
A statistical approach will be used to place attributes at any node position, i.e. as the root node or an internal node.

Implementation in Python
Example
In the following example, we are going to implement a Decision Tree classifier on the Pima Indian Diabetes dataset.

First, start with importing the necessary Python packages:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
Next, load the Pima Indians Diabetes dataset into a pandas DataFrame as follows:

col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv(r" :\pima-indians-diabetes.csv", header=None, names=col_names)
pima.head()

The first five rows are displayed, with the columns pregnant, glucose, bp, skin, insulin, bmi, pedigree, age and label.

Now, split the dataset into features and target variable as follows:

feature_cols = ['pregnant', 'insulin', 'bmi', 'age', 'glucose', 'bp', 'pedigree']
X = pima[feature_cols]   # features
y = pima.label           # target variable

Next, we will divide the data into a train and test split. The following code will split the dataset into 70% training data and 30% testing data:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

Next, train the model with the help of the DecisionTreeClassifier class of sklearn as follows:

clf = DecisionTreeClassifier()
clf = clf.fit(X_train, y_train)

At last we need to make predictions. It can be done with the help of the following script:

y_pred = clf.predict(X_test)

Next, we can get the accuracy score, confusion matrix and classification report as follows:

from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
result = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, y_pred)
print("Classification Report:")
print(result1)
result2 = accuracy_score(y_test, y_pred)
print("Accuracy:", result2)

Output
The script prints the confusion matrix, the per-class precision, recall, f1-score and support, and the overall accuracy for the test split.

Visualizing Decision Tree
The above decision tree can be visualized with the help of the following code:

from sklearn.tree import export_graphviz
from sklearn.externals.six import StringIO
from IPython.display import Image
import pydotplus

dot_data = StringIO()
export_graphviz(clf, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('Pima_diabetes_Tree.png')
Image(graph.create_png())
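Before moving on, the Gini calculation and weighted split score described at the beginning of this chapter can be sketched in plain Python. This is a from-scratch illustration of the formula p^2 + q^2 for a two-class problem only; it is not part of the scikit-learn workflow above.

def gini_for_node(labels):
    # labels: list of 0/1 class labels in one group (sub-node)
    if len(labels) == 0:
        return 0.0
    p = sum(labels) / len(labels)   # proportion of class 1
    q = 1.0 - p                     # proportion of class 0
    return p * p + q * q

def gini_for_split(left_labels, right_labels):
    # weighted Gini score of the two sub-nodes created by a split
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n) * gini_for_node(left_labels) + \
           (len(right_labels) / n) * gini_for_node(right_labels)

# a pure split scores higher than a mixed one with this formulation
print(gini_for_split([0, 0, 0], [1, 1, 1]))   # both sub-nodes are pure
print(gini_for_split([0, 1, 0], [1, 0, 1]))   # both sub-nodes are mixed, lower score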
Classification Algorithms - Naive Bayes

Introduction to Naive Bayes Algorithm
Naive Bayes algorithms are a classification technique based on applying Bayes' theorem with the strong assumption that all the predictors are independent of each other. In simple words, the assumption is that the presence of a feature in a class is independent of the presence of any other feature in the same class. For example, a phone may be considered smart if it has a touch screen, internet facility, good camera etc. Though all these features are dependent on each other, they contribute independently to the probability that the phone is a smart phone.

In Bayesian classification, the main interest is to find the posterior probabilities, i.e. the probability of a label given some observed features, P(L | features). With the help of Bayes theorem, we can express this in quantitative form as follows:

P(L | features) = P(L) * P(features | L) / P(features)

Here, P(L | features) is the posterior probability of the class.
P(L) is the prior probability of the class.
P(features | L) is the likelihood, which is the probability of the predictor given the class.
P(features) is the prior probability of the predictor.

Building model using Naive Bayes in Python
The Python library Scikit-learn is the most useful library that helps us to build a Naive Bayes model in Python. We have the following three types of Naive Bayes models under the Scikit-learn Python library:

Gaussian Naive Bayes
It is the simplest Naive Bayes classifier, having the assumption that the data from each label is drawn from a simple Gaussian distribution.

Multinomial Naive Bayes
Another useful Naive Bayes classifier is Multinomial Naive Bayes, in which the features are assumed to be drawn from a simple multinomial distribution. Such kind of Naive Bayes is most appropriate for features that represent discrete counts.

Bernoulli Naive Bayes
Another important model is Bernoulli Naive Bayes, in which features are assumed to be binary (0s and 1s). Text classification with a 'bag of words' model can be an application of Bernoulli Naive Bayes.
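Since bag-of-words text classification is mentioned above as a typical application of Bernoulli Naive Bayes, here is a minimal sketch of that idea before the Gaussian example that follows. The example sentences and labels are made up for illustration only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

# tiny made-up corpus; 1 = positive review, 0 = negative review
texts = ["great phone with a good camera",
         "bad battery and a poor screen",
         "good camera and a great screen",
         "poor camera and a bad battery"]
labels = [1, 0, 1, 0]

# binary=True produces 0/1 word-presence features, matching the Bernoulli model
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(texts)

model = BernoulliNB()
model.fit(X, labels)

print(model.predict(vectorizer.transform(["good screen and great battery"])))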
example depending on our data setwe can choose any of the naive bayes model explained above herewe are implementing gaussian naive bayes model in pythonwe will start with required imports as followsimport numpy as np import matplotlib pyplot as plt import seaborn as snssns set(nowby using make_blobs(function of scikit learnwe can generate blobs of points with gaussian distribution as followsfrom sklearn datasets import make_blobs xy make_blobs( centers= random_state= cluster_std= plt scatter( [: ] [: ] =ys= cmap='summer')nextfor using gaussiannb modelwe need to import and make its object as followsfrom sklearn naive_bayes import gaussiannb model_gbn gaussiannb(model_gnb fit(xy)nowwe have to do prediction it can be done after generating some new data as followsrng np random randomstate( xnew [- - [ rng rand( ynew model_gnb predict(xnewnextwe are plotting new data to find its boundariesplt scatter( [: ] [: ] =ys= cmap='summer'lim plt axis(plt scatter(xnew[: ]xnew[: ] =ynews= cmap='summer'alpha= plt axis(lim)nowwith the help of following line of codeswe can find the posterior probabilities of first and second labelyprob model_gnb predict_proba(xnewyprob[- :round(
Output
The posterior probabilities of the two labels for the last ten generated points are printed as an array with one row per point, rounded to two decimals.

Pros & Cons
Pros
The following are some pros of using Naive Bayes classifiers:
Naive Bayes classification is easy to implement and fast.
It will converge faster than discriminative models like logistic regression.
It requires less training data.
It is highly scalable in nature; it scales linearly with the number of predictors and data points.
It can make probabilistic predictions and can handle continuous as well as discrete data.
The Naive Bayes classification algorithm can be used for binary as well as multi-class classification problems.

Cons
The following are some cons of using Naive Bayes classifiers:
One of the most important cons of Naive Bayes classification is its strong feature-independence assumption, because in real life it is almost impossible to have a set of features which are completely independent of each other.
Another issue with Naive Bayes classification is 'zero frequency', which means that if a categorical variable has a category that was not observed in the training data set, then the Naive Bayes model will assign zero probability to it and it will be unable to make a prediction.
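The 'zero frequency' issue mentioned above is usually handled with additive (Laplace) smoothing; in scikit-learn this is controlled by the alpha parameter of the multinomial and Bernoulli models. The following is a small sketch with a made-up count matrix to show the effect; the numbers are illustrative only.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

# tiny count matrix: the second feature never occurs in the class-0 rows
X = np.array([[3, 0], [2, 0], [0, 2], [1, 3]])
y = np.array([0, 0, 1, 1])

smoothed = MultinomialNB(alpha=1.0).fit(X, y)      # Laplace (add-one) smoothing
unsmoothed = MultinomialNB(alpha=1e-10).fit(X, y)  # effectively no smoothing

# class probabilities for a sample where only the second feature is present
print(smoothed.predict_proba([[0, 1]]))     # class 0 keeps a small non-zero probability
print(unsmoothed.predict_proba([[0, 1]]))   # class 0 is driven towards zero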
Applications of Naive Bayes classification
The following are some common applications of Naive Bayes classification:
Real-time prediction: Due to its ease of implementation and fast computation, it can be used to do prediction in real time.
Multi-class prediction: The Naive Bayes classification algorithm can be used to predict the posterior probability of multiple classes of the target variable.
Text classification: Due to the feature of multi-class prediction, Naive Bayes classification algorithms are well suited for text classification. That is why they are also used to solve problems like spam filtering and sentiment analysis.
Recommendation system: Along with algorithms like collaborative filtering, Naive Bayes makes a recommendation system which can be used to filter unseen information and to predict whether a user would like the given resource or not.
Classification Algorithms - Random Forest

Introduction
Random forest is a supervised learning algorithm which is used for both classification as well as regression. However, it is mainly used for classification problems. As we know, a forest is made up of trees and more trees means a more robust forest. Similarly, the random forest algorithm creates decision trees on data samples, then gets the prediction from each of them and finally selects the best solution by means of voting. It is an ensemble method which is better than a single decision tree because it reduces the over-fitting by averaging the result.

Working of Random Forest Algorithm
We can understand the working of the Random Forest algorithm with the help of the following steps:
Step 1: First, start with the selection of random samples from a given dataset.
Step 2: Next, this algorithm will construct a decision tree for every sample. Then it will get the prediction result from every decision tree.
Step 3: In this step, voting will be performed for every predicted result.
Step 4: At last, select the most voted prediction result as the final prediction result.
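Before the workflow diagram and the scikit-learn implementation below, the four steps can be sketched directly in Python: draw bootstrap samples, fit one decision tree per sample, and take a majority vote. This is only a toy illustration of the idea; the number of trees and the use of the iris data are arbitrary choices for the sketch.

import numpy as np
from scipy import stats
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
trees = []

# Steps 1 and 2: bootstrap samples, one decision tree per sample
for _ in range(25):
    idx = rng.randint(0, len(X), size=len(X))   # sample rows with replacement
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Steps 3 and 4: collect every tree's prediction and take a majority vote
all_preds = np.array([tree.predict(X) for tree in trees])
majority_vote = stats.mode(all_preds, axis=0)[0].ravel()

print("training accuracy of the voted ensemble:", np.mean(majority_vote == y))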
The workflow can be pictured as follows: the training set is divided into several training samples, a decision tree is built on each sample, and the individual predictions for the test set are combined by voting into the final prediction.

Implementation in Python
First, start with importing the necessary Python packages:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

Next, download the iris dataset from its weblink as follows:

path = ""   # web link of the iris dataset

Next, we need to assign column names to the dataset as follows:

headernames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']

Now, we need to read the dataset into a pandas DataFrame as follows:

dataset = pd.read_csv(path, names=headernames)
dataset.head()
sepallength sepalwidth petallength petal-width class iris-setosa iris-setosa iris-setosa iris-setosa iris-setosa data preprocessing will be done with the help of following script linesx dataset iloc[::- values dataset iloc[: values nextwe will divide the data into train and test split the following code will split the dataset into training data and of testing datafrom sklearn model_selection import train_test_split x_trainx_testy_trainy_test train_test_split(xytest_size= nexttrain the model with the help of randomforestclassifier class of sklearn as followsfrom sklearn ensemble import randomforestclassifier classifier randomforestclassifier(n_estimators= classifier fit(x_trainy_trainat lastwe need to make prediction it can be done with the help of following scripty_pred classifier predict(x_testnextprint the results as followsfrom sklearn metrics import classification_reportconfusion_matrixaccuracy_score result confusion_matrix(y_testy_predprint("confusion matrix:"print(resultresult classification_report(y_testy_predprint("classification report:",print (result result accuracy_score(y_test,y_predprint("accuracy:",result
Output
The script prints the confusion matrix, the per-class precision, recall, f1-score and support for Iris-setosa, Iris-versicolor and Iris-virginica, and the overall accuracy for the test split.

Pros and Cons of Random Forest
Pros
The following are the advantages of the Random Forest algorithm:
It overcomes the problem of overfitting by averaging or combining the results of different decision trees.
Random forests work well for a larger range of data items than a single decision tree does.
Random forest has less variance than a single decision tree.
Random forests are very flexible and possess very high accuracy.
Scaling of data is not required in the random forest algorithm. It maintains good accuracy even after providing data without scaling.
Random forest algorithms maintain good accuracy even when a large proportion of the data is missing.

Cons
The following are the disadvantages of the Random Forest algorithm:
Complexity is the main disadvantage of random forest algorithms.
Construction of random forests is much harder and more time-consuming than of decision trees.
More computational resources are required to implement the random forest algorithm.
It is less intuitive when we have a large collection of decision trees.
The prediction process using random forests is very time-consuming in comparison with other algorithms.
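One practical way to keep a large forest interpretable, which is a concern raised in the cons above, is to inspect its feature importances. The following is a brief sketch on the iris data; feature_importances_ is a standard attribute of a fitted scikit-learn forest, while the number of trees and the random seed are arbitrary choices for the illustration.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(iris.data, iris.target)

# importance scores sum to 1; higher means the feature contributed more to the splits
for name, score in zip(iris.feature_names, model.feature_importances_):
    print(name, round(score, 3))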
machine learning algorithms regression
Regression Algorithms - Overview

Introduction to Regression
Regression is another important and broadly used statistical and machine learning tool. The key objective of regression-based tasks is to predict output labels or responses, which are continuous numeric values, for the given input data. The output will be based on what the model has learned in the training phase. Basically, regression models use the input data features (independent variables) and their corresponding continuous numeric output values (dependent or outcome variables) to learn the specific association between inputs and corresponding outputs. Here, the output variables (Y) are dependent on the input, while the input variables (X) are independent in nature.
Types of Regression Models
Regression models can be grouped into simple models (univariate features) and multiple models (multiple features). In other words, regression models are of the following two types:
Simple regression model: This is the most basic regression model, in which predictions are formed from a single, univariate feature of the data.
Multiple regression model: As the name implies, in this regression model the predictions are formed from multiple features of the data.

Building a Regressor in Python
A regressor model in Python can be constructed just like we constructed the classifier. Scikit-learn, a Python library for machine learning, can also be used to build a regressor in Python.

In the following example, we will be building a basic regression model that will fit a line to the data, i.e. a linear regressor. The necessary steps for building a regressor in Python are as follows:

Step 1: Importing necessary python packages
For building a regressor using scikit-learn, we need to import it along with other necessary packages. We can import them by using the following script:

import numpy as np
from sklearn import linear_model
import sklearn.metrics as sm
import matplotlib.pyplot as plt

Step 2: Importing dataset
After importing the necessary packages, we need a dataset to build the regression prediction model. We can import it from a sklearn dataset or use another one as per our requirement. We are going to use our saved input data. We can import it with the help of the following script:

input = r' :\linear.txt'

Next, we need to load this data. We are using the np.loadtxt function to load it.
input_data = np.loadtxt(input, delimiter=',')
X, y = input_data[:, :-1], input_data[:, -1]

Step 3: Organizing data into training and testing sets
As we need to test our model on unseen data, we will divide our dataset into two parts: a training set and a test set. The following command will perform it:

training_samples = int(0.6 * len(X))
testing_samples = len(X) - training_samples
X_train, y_train = X[:training_samples], y[:training_samples]
X_test, y_test = X[training_samples:], y[training_samples:]

Step 4: Model evaluation and prediction
After dividing the data into training and testing we need to build the model. We will be using the LinearRegression() function of scikit-learn for this purpose. The following command will create a linear regressor object:

reg_linear = linear_model.LinearRegression()

Next, train this model with the training samples as follows:

reg_linear.fit(X_train, y_train)

Now, at last we need to do the prediction with the testing data:

y_test_pred = reg_linear.predict(X_test)

Step 5: Plot and visualization
After prediction, we can plot and visualize it with the help of the following script:

plt.scatter(X_test, y_test, color='red')
plt.plot(X_test, y_test_pred, color='black', linewidth=2)
plt.xticks(())
plt.yticks(())
plt.show()
Output
In the above output, we can see the regression line between the data points.

Step 6: Performance computation
We can also compute the performance of our regression model with the help of various performance metrics as follows:

print("Regressor model performance:")
print("Mean absolute error(MAE) =", round(sm.mean_absolute_error(y_test, y_test_pred), 2))
print("Mean squared error(MSE) =", round(sm.mean_squared_error(y_test, y_test_pred), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, y_test_pred), 2))
print("Explain variance score =", round(sm.explained_variance_score(y_test, y_test_pred), 2))
print("R2 score =", round(sm.r2_score(y_test, y_test_pred), 2))

Output
Regressor model performance:
The mean absolute error, mean squared error, median absolute error, explained variance score and R2 score are printed for the test set.
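The metrics printed above can also be written out by hand, which makes their definitions explicit. The following is a small sketch with made-up arrays; the numbers are illustrative only.

import numpy as np

y_true = np.array([2.5, 0.0, 2.1, 7.8])
y_pred = np.array([3.0, -0.5, 2.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))        # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)         # mean squared error

ss_res = np.sum((y_true - y_pred) ** 2)               # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot                              # R2 score

print(round(mae, 2), round(mse, 2), round(r2, 2))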
Types of ML Regression Algorithms
The most useful and popular ML regression algorithm is the linear regression algorithm, which is further divided into two types, namely:
Simple Linear Regression algorithm
Multiple Linear Regression algorithm
We will discuss them and implement them in Python in the next chapter.

Applications
The applications of ML regression algorithms are as follows:
Forecasting or predictive analysis: One of the important uses of regression is forecasting or predictive analysis. For example, we can forecast GDP, oil prices or, in simple words, quantitative data that changes with the passage of time.
Optimization: We can optimize business processes with the help of regression. For example, a store manager can create a statistical model to understand the peak time of arrival of customers.
Error correction: In business, taking correct decisions is equally important as optimizing the business process. Regression can help us to take correct decisions as well as to correct already implemented decisions.
Economics: It is the most used tool in economics. We can use regression to predict supply, demand, consumption, inventory investment etc.
Finance: A financial company is always interested in minimizing the risk portfolio and wants to know the factors that affect the customers. All these can be predicted with the help of the regression model.
Regression Algorithms - Linear Regression

Introduction to Linear Regression
Linear regression may be defined as the statistical model that analyzes the linear relationship between a dependent variable and a given set of independent variables. A linear relationship between variables means that when the value of one or more independent variables changes (increases or decreases), the value of the dependent variable also changes accordingly (increases or decreases).

Mathematically, the relationship can be represented with the help of the following equation:

Y = mX + b

Here, Y is the dependent variable we are trying to predict.
X is the independent variable we are using to make predictions.
m is the slope of the regression line, which represents the effect X has on Y.
b is a constant, known as the Y-intercept; if X = 0, Y would be equal to b.

Furthermore, the linear relationship can be positive or negative in nature, as explained below:

Positive Linear Relationship
A linear relationship will be called positive if both the independent and the dependent variable increase. It can be understood with the help of the following graph, in which the regression line slopes upward.
Negative Linear Relationship
A linear relationship will be called negative if the independent variable increases and the dependent variable decreases. It can be understood with the help of the following graph, in which the regression line slopes downward.

Types of Linear Regression
Linear regression is of the following two types:
Simple Linear Regression
Multiple Linear Regression

Simple Linear Regression (SLR)
It is the most basic version of linear regression, which predicts a response using a single feature. The assumption in SLR is that the two variables are linearly related.

Python implementation
We can implement SLR in Python in two ways: one is to provide your own dataset and the other is to use a dataset from the scikit-learn Python library.

Example
In the following Python implementation example, we are using our own dataset.

First, we will start with importing the necessary packages as follows:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Next, define a function which will calculate the important values for SLR:

def coef_estimation(x, y):

The following script line will give the number of observations n:

    n = np.size(x)

The means of the x and y vectors can be calculated as follows:

    m_x, m_y = np.mean(x), np.mean(y)

We can find the cross-deviation and the deviation about x as follows:

    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x

Next, the regression coefficients can be calculated as follows:

    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x
    return (b_0, b_1)

Next, we need to define a function which will plot the regression line as well as predict the response vector:

def plot_regression_line(x, y, b):

The following script line will plot the actual points as a scatter plot:

    plt.scatter(x, y, color="m", marker="o")

The following script line will predict the response vector:

    y_pred = b[0] + b[1]*x

The following script lines will plot the regression line and will put the labels on it:

    plt.plot(x, y_pred, color="g")
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()

At last, we need to define a main() function for providing the dataset and calling the functions we defined above.
def main() np array([ ] np array([ ] coef_estimation(xyprint("estimated coefficients:\nb_ {\nb_ {}format( [ ] [ ])plot_regression_line(xybif __name__ ="__main__"main(output estimated coefficientsb_ b_ example in the following python implementation examplewe are using diabetes dataset from scikit-learn firstwe will start with importing necessary packages as follows%matplotlib inline import matplotlib pyplot as plt import numpy as np from sklearn import datasetslinear_model from sklearn metrics import mean_squared_errorr _score
nextwe will load the diabetes dataset and create its objectdiabetes datasets load_diabetes(as we are implementing slrwe will be using only one feature as followsx diabetes data[:np newaxis nextwe need to split the data into training and testing sets as followsx_train [:- x_test [- :nextwe need to split the target into training and testing sets as followsy_train diabetes target[:- y_test diabetes target[- :nowto train the model we need to create linear regression object as followsregr linear_model linearregression(nexttrain the model using the training sets as followsregr fit(x_trainy_trainnextmake predictions using the testing set as followsy_pred regr predict(x_testnextwe will be printing some coefficient like msevariance score etc as followsprint('coefficients\ 'regr coef_print("mean squared error fmean_squared_error(y_testy_pred)print('variance score fr _score(y_testy_pred)nowplot the outputs as followsplt scatter(x_testy_testcolor='blue'plt plot(x_testy_predcolor='red'linewidth= plt xticks(()plt yticks(()plt show(
Output
The coefficient, the mean squared error and the variance score of the fitted model are printed, followed by the plot of the test data and the fitted regression line.

Multiple Linear Regression (MLR)
It is the extension of simple linear regression that predicts a response using two or more features. Mathematically, we can explain it as follows.

Consider a dataset having n observations, p features i.e. independent variables, and y as one response i.e. dependent variable. The regression line for p features can be calculated as follows:

h(x_i) = b_0 + b_1*x_i1 + b_2*x_i2 + ... + b_p*x_ip

Here, h(x_i) is the predicted response value and b_0, b_1, b_2, ..., b_p are the regression coefficients.

Multiple linear regression models always include the errors in the data, known as residual error, which changes the calculation as follows:

h(x_i) = b_0 + b_1*x_i1 + b_2*x_i2 + ... + b_p*x_ip + e_i

We can also write the above equation as follows:

y_i = h(x_i) + e_i   or   e_i = y_i - h(x_i)
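Before the scikit-learn implementation below, the coefficients b_0 ... b_p in the equation above can also be obtained directly with NumPy's least-squares solver. The following is a minimal sketch on made-up data; the numbers are illustrative only.

import numpy as np

# made-up data: 5 observations, 2 features
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([8.0, 7.0, 15.0, 14.0, 20.0])

# add a column of ones so that the intercept b_0 is estimated as well
X1 = np.hstack([np.ones((X.shape[0], 1)), X])

# ordinary least-squares solution of X1 * b = y
b, residuals, rank, sv = np.linalg.lstsq(X1, y, rcond=None)
print("estimated coefficients b_0, b_1, b_2:", np.round(b, 3))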
python implementation in this examplewe will be using boston housing dataset from scikit learnfirstwe will start with importing necessary packages as follows%matplotlib inline import matplotlib pyplot as plt import numpy as np from sklearn import datasetslinear_modelmetrics nextload the dataset as followsboston datasets load_boston(return_x_y=falsethe following script lines will define feature matrixx and response vectoryx boston data boston target nextsplit the dataset into training and testing sets as followsfrom sklearn model_selection import train_test_split x_trainx_testy_trainy_test train_test_split(xytest_size= random_state= nowcreate linear regression object and train the model as followsreg linear_model linearregression(reg fit(x_trainy_trainprint('coefficients\ 'reg coef_print('variance score{}format(reg score(x_testy_test))plt style use('fivethirtyeight'plt scatter(reg predict(x_train)reg predict(x_trainy_traincolor "green" label 'train data'plt scatter(reg predict(x_test)reg predict(x_testy_test
color "blue" label 'test data'plt hlines( xmin xmax linewidth plt legend(loc 'upper right'plt title("residual errors"plt show(output coefficients[- - - - - + + - - + - - - - - + - - - variance score assumptions the following are some assumptions about dataset that is made by linear regression modelmulti-collinearitylinear regression model assumes that there is very little or no multicollinearity in the data basicallymulti-collinearity occurs when the independent variables or features have dependency in them
auto-correlationanother assumption linear regression model assumes is that there is very little or no auto-correlation in the data basicallyauto-correlation occurs when there is dependency between residual errors relationship between variableslinear regression model assumes that the relationship between response and feature variables must be linear
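The multi-collinearity assumption above can be checked quickly with a correlation matrix of the features: as a rough rule, very high pairwise correlations signal a problem. The following is a minimal sketch on the iris data; the 0.8 threshold is an arbitrary illustrative choice.

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
features = pd.DataFrame(iris.data, columns=iris.feature_names)

corr = features.corr()          # pairwise Pearson correlations
print(corr.round(2))

# flag feature pairs whose absolute correlation exceeds 0.8
for i in range(len(corr.columns)):
    for j in range(i + 1, len(corr.columns)):
        if abs(corr.iloc[i, j]) > 0.8:
            print("highly correlated:", corr.columns[i], "and", corr.columns[j])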
machine learning algorithms clustering
clustering algorithmsmachine overview introduction to clustering clustering methods are one of the most useful unsupervised ml methods these methods are used to find similarity as well as the relationship patterns among data samples and then cluster those samples into groups having similarity based on features clustering is important because it determines the intrinsic grouping among the present unlabeled data they basically make some assumptions about data points to constitute their similarity each assumption will construct different but equally valid clusters for examplebelow is the diagram which shows clustering system grouped together the similar kind of data in different clustersclustering system cluster formation methods it is not necessary that clusters will be formed in spherical form followings are some other cluster formation methodsdensity-based in these methodsthe clusters are formed as the dense region the advantage of these methods is that they have good accuracy as well as good ability to merge two clusters ex density-based spatial clustering of applications with noise (dbscan)ordering points to identify clustering structure (opticsetc hierarchical-based in these methodsthe clusters are formed as tree type structure based on the hierarchy they have two categories namelyagglomerative (bottom up approachand divisive (top down approachex clustering using representatives (cure)balanced iterative reducing clustering using hierarchies (birchetc partitioning
In these methods, the clusters are formed by partitioning the objects into k clusters. The number of clusters will be equal to the number of partitions. Examples: K-means, Clustering LARge Applications based upon RANdomized Search (CLARANS).

Grid
In these methods, the clusters are formed as a grid-like structure. The advantage of these methods is that all the clustering operations done on these grids are fast and independent of the number of data objects. Examples: STatistical INformation Grid (STING), CLustering In QUEst (CLIQUE).

Measuring Clustering Performance
One of the most important considerations regarding an ML model is assessing its performance, or you can say the model's quality. In the case of supervised learning algorithms, assessing the quality of our model is easy because we already have labels for every example. On the other hand, in the case of unsupervised learning algorithms we are not that much blessed because we deal with unlabeled data. But still we have some metrics that give the practitioner an insight about the change in clusters depending on the algorithm.

Before we deep dive into such metrics, we must understand that these metrics only evaluate the comparative performance of models against each other rather than measuring the validity of the model's prediction. The following are some of the metrics that we can deploy on clustering algorithms to measure the quality of the model:

Silhouette Analysis
Silhouette analysis is used to check the quality of a clustering model by measuring the distance between the clusters. It basically provides us a way to assess parameters like the number of clusters with the help of a silhouette score. This score measures how close each point in one cluster is to points in the neighboring clusters.

Analysis of silhouette score
The range of the silhouette score is [-1, 1]. Its analysis is as follows:
+1 score: A silhouette score near +1 indicates that the sample is far away from its neighboring cluster.
0 score: A silhouette score of 0 indicates that the sample is on or very close to the decision boundary separating two neighboring clusters.
-1 score: A silhouette score of -1 indicates that the samples have been assigned to the wrong clusters.

The calculation of the silhouette score can be done by using the following formula:

silhouette score = (p - q) / max(p, q)

Here, p is the mean distance to the points in the nearest cluster and q is the mean intra-cluster distance to all the points.
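The silhouette score defined above is available directly in scikit-learn. The following is a minimal sketch on generated blobs; the blob parameters and the number of clusters are illustrative choices.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)
labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)

# values closer to +1 mean the samples sit well inside their own clusters
print(round(silhouette_score(X, labels), 3))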
Davis-Bouldin Index
The DB index is another good metric to perform the analysis of clustering algorithms. With the help of the DB index, we can understand the following points about a clustering model:
Whether the clusters are well-spaced from each other or not?
How dense the clusters are?
We can calculate the DB index with the help of the following formula:

DB = (1/n) * sum over i = 1 to n of max (for j not equal to i) of (sigma_i + sigma_j) / d(c_i, c_j)

Here, n = number of clusters, sigma_i = average distance of all points in cluster i from the cluster centroid c_i, and d(c_i, c_j) = distance between the centroids of clusters i and j.
The less the DB index, the better the clustering model is.

Dunn Index
It works the same as the DB index, but there are the following points in which the two differ:
The Dunn index considers only the worst case, i.e. the clusters that are close together, while the DB index considers dispersion and separation of all the clusters in the clustering model.
The Dunn index increases as the performance increases, while the DB index gets better when clusters are well-spaced and dense.
We can calculate the Dunn index with the help of the following formula:

D = min over 1 <= i < j <= n of p(i, j) / max over 1 <= k <= n of q(k)

Here, i, j, k are indices for clusters, p is the inter-cluster distance and q is the intra-cluster distance.

Types of ML Clustering Algorithms
The following are the most important and useful ML clustering algorithms:

K-means Clustering
This clustering algorithm computes the centroids and iterates until it finds the optimal centroid. It assumes that the number of clusters is already known. It is also called the flat clustering algorithm. The number of clusters identified from data by the algorithm is represented by 'K' in K-means.

Mean-Shift Algorithm
It is another powerful clustering algorithm used in unsupervised learning. Unlike K-means clustering, it does not make any assumptions; hence it is a non-parametric algorithm.
hierarchical clustering it is another unsupervised learning algorithm that is used to group together the unlabeled data points having similar characteristics we will be discussing all these algorithms in detail in the upcoming applications of clustering we can find clustering useful in the following areasdata summarization and compressionclustering is widely used in the areas where we require data summarizationcompression and reduction as well the examples are image processing and vector quantization collaborative systems and customer segmentationsince clustering can be used to find similar products or same kind of usersit can be used in the area of collaborative systems and customer segmentation serve as key intermediate step for other data mining taskscluster analysis can generate compact summary of data for classificationtestinghypothesis generationhenceit serves as key intermediate step for other data mining tasks also trend detection in dynamic dataclustering can also be used for trend detection in dynamic data by making various clusters of similar trends social network analysisclustering can be used in social network analysis the examples are generating sequences in imagesvideos or audios biological data analysisclustering can also be used to make clusters of imagesvideos hence it can successfully be used in biological data analysis
Clustering Algorithms - K-Means Algorithm

Introduction to K-Means Algorithm
The K-means clustering algorithm computes the centroids and iterates until it finds the optimal centroid. It assumes that the number of clusters is already known. It is also called the flat clustering algorithm. The number of clusters identified from data by the algorithm is represented by 'K' in K-means.

In this algorithm, the data points are assigned to a cluster in such a manner that the sum of the squared distance between the data points and the centroid would be minimum. It is to be understood that less variation within the clusters will lead to more similar data points within the same cluster.

Working of K-Means Algorithm
We can understand the working of the K-Means clustering algorithm with the help of the following steps:
Step 1: First, we need to specify the number of clusters, K, to be generated by this algorithm.
Step 2: Next, randomly select K data points and assign each data point to a cluster. In simple words, classify the data based on the number of data points.
Step 3: Now it will compute the cluster centroids.
Step 4: Next, keep iterating the following until we find the optimal centroid, which is the assignment of data points to clusters that are not changing any more:
First, the sum of squared distances between data points and centroids is computed.
Now, we have to assign each data point to the cluster that is closer than the other clusters (centroids).
At last, compute the centroids for the clusters by taking the average of all data points of that cluster.

K-means follows an Expectation-Maximization approach to solve the problem. The Expectation step is used for assigning the data points to the closest cluster and the Maximization step is used for computing the centroid of each cluster.

While working with the K-means algorithm we need to take care of the following things:
While working with clustering algorithms including K-means, it is recommended to standardize the data because such algorithms use distance-based measurement to determine the similarity between data points.
due to the iterative nature of -means and random initialization of centroidskmeans may stick in local optimum and may not converge to global optimum that is why it is recommended to use different initializations of centroids implementation in python the following two examples of implementing -means clustering algorithm will help us in its better understandingexample it is simple example to understand how -means works in this examplewe are going to first generate dataset containing different blobs and after that will apply -means algorithm to see the result firstwe will start by importing the necessary packages%matplotlib inline import matplotlib pyplot as plt import seaborn as snssns set(import numpy as np from sklearn cluster import kmeans the following code will generate the dcontaining four blobsfrom sklearn datasets samples_generator import make_blobs xy_true make_blobs(n_samples= centers= cluster_std= random_state= nextthe following code will help us to visualize the datasetplt scatter( [: ] [: ] = )plt show(
nextmake an object of kmeans along with providing number of clusterstrain the model and do the prediction as followskmeans kmeans(n_clusters= kmeans fit(xy_kmeans kmeans predict(xnowwith the help of following code we can plot and visualize the cluster' centers picked by -means python estimatorplt scatter( [: ] [: ] =y_kmeanss= cmap='summer'centers kmeans cluster_centers_ plt scatter(centers[: ]centers[: ] ='blue' = alpha= )plt show(
example let us move to another example in which we are going to apply -means clustering on simple digits dataset -means will try to identify similar digits without using the original label information firstwe will start by importing the necessary packages%matplotlib inline import matplotlib pyplot as plt import seaborn as snssns set(import numpy as np from sklearn cluster import kmeans nextload the digit dataset from sklearn and make an object of it we can also find number of rows and columns in this dataset as followsfrom sklearn datasets import load_digits digits load_digits(digits data shape output ( the above output shows that this dataset is having samples with features we can perform the clustering as we did in example abovekmeans kmeans(n_clusters= random_state= clusters kmeans fit_predict(digits datakmeans cluster_centers_ shape output ( the above output shows that -means created clusters with features figax plt subplots( figsize=( )centers kmeans cluster_centers_ reshape( for axicenter in zip(ax flatcenters)axi set(xticks=[]yticks=[]axi imshow(centerinterpolation='nearest'cmap=plt cm binary
output as outputwe will get following image showing clusters centers learned by -means the following lines of code will match the learned cluster labels with the true labels found in themfrom scipy stats import mode labels np zeros_like(clustersfor in range( )mask (clusters =ilabels[maskmode(digits target[mask])[ nextwe can check the accuracy as followsfrom sklearn metrics import accuracy_score accuracy_score(digits targetlabelsoutput the above output shows that the accuracy is around advantages and disadvantages advantages the following are some advantages of -means clustering algorithmsit is very easy to understand and implement if we have large number of variables thenk-means would be faster than hierarchical clustering on re-computation of centroidsan instance can change the cluster tighter clusters are formed with -means as compared to hierarchical clustering disadvantages
the following are some disadvantages of -means clustering algorithmsit is bit difficult to predict the number of clusters the value of output is strongly impacted by initial inputs like number of clusters (value of korder of data will have strong impact on the final output it is very sensitive to rescaling if we will rescale our data by means of normalization or standardizationthen the output will completely change it is not good in doing clustering job if the clusters have complicated geometric shape applications of -means clustering algorithm the main goals of cluster analysis areto get meaningful intuition from the data we are working with cluster-then-predict where different models will be built for different subgroups to fulfill the above-mentioned goalsk-means clustering is performing well enough it can be used in following applicationsmarket segmentation document clustering image segmentation image compression customer segmentation analyzing the trend on dynamic data
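A common answer to the difficulty of predicting the number of clusters, mentioned in the disadvantages above, is the elbow method: run K-means for several values of K and look for the point where the within-cluster sum of squares (the model's inertia) stops dropping sharply. The following is a minimal sketch; the generated blobs and the range of K values are illustrative choices.

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=4, cluster_std=0.7, random_state=1)

inertias = []
k_values = range(1, 10)
for k in k_values:
    inertias.append(KMeans(n_clusters=k, random_state=1).fit(X).inertia_)

plt.plot(list(k_values), inertias, marker='o')
plt.xlabel('number of clusters K')
plt.ylabel('inertia (within-cluster sum of squares)')
plt.show()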
clustering algorithms mean shift algorithm introduction to mean-shift algorithm as discussed earlierit is another powerful clustering algorithm used in unsupervised learning unlike -means clusteringit does not make any assumptionshence it is nonparametric algorithm mean-shift algorithm basically assigns the datapoints to the clusters iteratively by shifting points towards the highest density of datapoints cluster centroid the difference between -means algorithm and mean-shift is that later one does not need to specify the number of clusters in advance because the number of clusters will be determined by the algorithm data working of mean-shift algorithm we can understand the working of mean-shift clustering algorithm with the help of following stepsstep firststart with the data points assigned to cluster of their own step nextthis algorithm will compute the centroids step in this steplocation of new centroids will be updated step nowthe process will be iterated and moved to the higher density region step at lastit will be stopped once the centroids reach at position from where it cannot move further implementation in python it is simple example to understand how mean-shift algorithm works in this examplewe are going to first generate dataset containing different blobs and after that will apply mean-shift algorithm to see the result %matplotlib inline import numpy as np from sklearn cluster import meanshift import matplotlib pyplot as plt from matplotlib import style style use("ggplot"from sklearn datasets samples_generator import make_blobs centers [[ , , ],[ , , ],[ , , ]x_ make_blobs(n_samples centers centerscluster_std
plt scatter( [:, ], [:, ]plt show(ms meanshift(ms fit(xlabels ms labels_ cluster_centers ms cluster_centers_ print(cluster_centersn_clusters_ len(np unique(labels)print("estimated clusters:"n_clusters_colors *[' ',' ',' ',' ',' ',' ',' 'for in range(len( ))plt plot( [ ][ ] [ ][ ]colors[labels[ ]]markersize plt scatter(cluster_centers[:, ],cluster_centers[:, ]marker=",color=' ' = linewidths zorder= plt show(output [ ]estimated clusters
advantages and disadvantages advantages the following are some advantages of mean-shift clustering algorithmit does not need to make any model assumption as like in -means or gaussian mixture it can also model the complex clusters which have nonconvex shape it only needs one parameter named bandwidth which automatically determines the number of clusters there is no issue of local minima as like in -means no problem generated from outliers disadvantages the following are some disadvantages of mean-shift clustering algorithmmean-shift algorithm does not work well in case of high dimensionwhere number of clusters changes abruptly we do not have any direct control on the number of clusters but in some applicationswe need specific number of clusters it cannot differentiate between meaningful and meaningless modes
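The bandwidth parameter mentioned in the advantages above does not have to be guessed by hand; scikit-learn can estimate it from the data. The following is a brief sketch; the generated blobs and the quantile value are illustrative choices.

from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

bandwidth = estimate_bandwidth(X, quantile=0.2)   # estimated from the data itself
ms = MeanShift(bandwidth=bandwidth).fit(X)

print("estimated clusters:", len(ms.cluster_centers_))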
clustering algorithms hierarchical clustering introduction to hierarchical clustering hierarchical clustering is another unsupervised learning algorithm that is used to group together the unlabeled data points having similar characteristics hierarchical clustering algorithms falls into following two categoriesagglomerative hierarchical algorithmsin agglomerative hierarchical algorithmseach data point is treated as single cluster and then successively merge or agglomerate (bottom-up approachthe pairs of clusters the hierarchy of the clusters is represented as dendrogram or tree structure divisive hierarchical algorithmson the other handin divisive hierarchical algorithmsall the data points are treated as one big cluster and the process of clustering involves dividing (top-down approachthe one big cluster into various small clusters steps to perform agglomerative hierarchical clustering we are going to explain the most used and important hierarchical clustering agglomerative the steps to perform the same is as followsstep treat each data point as single cluster hencewe will be havingsay clusters at start the number of data points will also be at start step nowin this step we need to form big cluster by joining two closet datapoints this will result in total of - clusters step nowto form more clusters we need to join two closet clusters this will result in total of - clusters step nowto form one big cluster repeat the above three steps until would become no more data points left to join step at lastafter making one single big clusterdendrograms will be used to divide into multiple clusters depending upon the problem role of dendrograms in agglomerative hierarchical clustering as we discussed in the last stepthe role of dendrogram starts once the big cluster is formed dendrogram will be used to split the clusters into multiple cluster of related data points depending upon our problem it can be understood with the help of following exampleexample to understandlet us start with importing the required libraries as follows%matplotlib inline