The final output layer consists of two units with the softmax activation function. The softmax function is basically a generalization of the logistic function we saw earlier, and it can be used to represent a probability distribution over the possible class outcomes. In our case the class can either be positive or negative, and the softmax probabilities help us determine the same. The binary softmax classifier is also interchangeably known as the binary logistic regression function.

The compile() method is used to configure the learning or training process of the DNN model before we actually train it. This involves providing a cost or loss function in the loss parameter; this is the objective which the model will try to minimize. There are various loss functions based on the type of problem you want to solve, for example mean squared error for regression and categorical cross-entropy for classification (check out the Keras documentation for the full list). We will be using categorical_crossentropy, which helps us minimize the error or loss from the softmax output. We also need an optimizer to help our model converge and minimize the loss or error function. Gradient descent or stochastic gradient descent is a popular optimizer; we will be using the Adam optimizer, which requires only first-order gradients and very little memory. Adam also uses momentum, where each update is based not only on the gradient computation at the current point but also includes a fraction of the previous update; this helps with faster convergence. You can refer to the original paper on Adam by Kingma and Ba for further details. The metrics parameter is used to specify the model performance metrics that are used to evaluate the model during training (but are not used to modify the training loss itself).

Let's now build a DNN model based on our word2vec input feature representations for our training reviews.

In [ ]: w2v_dnn = construct_deepnn_architecture(num_input_features=...)

You can also visualize the DNN model architecture with the help of Keras, similar to what we did earlier, by using the following code (see the figure that follows).

In [ ]: from IPython.display import SVG
        from keras.utils.vis_utils import model_to_dot

        SVG(model_to_dot(w2v_dnn, show_shapes=True, show_layer_names=False,
                         rankdir='TB').create(prog='dot', format='svg'))

Figure: Visualizing the DNN model architecture using Keras

We will now train our model on our training reviews dataset of averaged word2vec features, represented by avg_wv_train_features. We will be using the fit() function from Keras for the training process, and there are some parameters you should be aware of. The epochs parameter indicates one complete forward and backward pass of all the training examples through the network. The batch_size parameter indicates the total number of samples which are propagated through the DNN model at a time for one forward and backward pass, after which the gradients are computed and the weights updated. Thus, if you have N observations and your batch size is B, each epoch will consist of N/B iterations, where B observations are passed through the network at a time and the weights on the hidden layer units are updated.
We also specify a validation_split to extract a fraction of the training data and use it as a validation dataset for evaluating model performance at each epoch. The shuffle parameter shuffles the samples in each epoch when training the model.

In [ ]: batch_size = ...
        w2v_dnn.fit(avg_wv_train_features, y_train, epochs=5, batch_size=batch_size,
                    shuffle=True, validation_split=..., verbose=...)

Train on ... samples, validate on ... samples
Epoch 1/5 to Epoch 5/5: loss, acc, val_loss and val_acc reported for each of the five epochs

The preceding snippet tells us that we have trained our DNN model on the training data for five epochs with the chosen batch size. We get a validation accuracy of close to %, which is quite good. Time now to put our model to the real test! Let's evaluate our model performance on the test review word2vec features.

In [ ]: y_pred = w2v_dnn.predict_classes(avg_wv_test_features)
        predictions = le.inverse_transform(y_pred)
        meu.display_model_performance_metrics(true_labels=test_sentiments,
                                              predicted_labels=predictions,
                                              classes=['positive', 'negative'])

Figure: Model performance metrics for the deep neural network on word2vec features

The results depicted in the figure show that we have obtained a model accuracy and F1-score of close to %, which is great!
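The construct_deepnn_architecture() helper called above is defined earlier in the chapter and is not shown in this excerpt. Purely for reference, a minimal sketch of such a fully connected architecture in Keras might look like the following; the layer sizes and dropout rates here are illustrative assumptions rather than the book's exact values, while the loss, optimizer, and two-unit softmax output follow the discussion above.

from keras.models import Sequential
from keras.layers import Dense, Dropout

def construct_deepnn_architecture(num_input_features):
    # fully connected feed-forward network: dense hidden layers with ReLU
    # activations and dropout, ending in a 2-unit softmax output layer
    dnn_model = Sequential()
    dnn_model.add(Dense(512, activation='relu',
                        input_shape=(num_input_features,)))
    dnn_model.add(Dropout(0.2))
    dnn_model.add(Dense(512, activation='relu'))
    dnn_model.add(Dropout(0.2))
    dnn_model.add(Dense(2, activation='softmax'))
    # categorical cross-entropy loss with the Adam optimizer, tracking accuracy
    dnn_model.compile(loss='categorical_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
    return dnn_model

Note that with categorical_crossentropy and a two-unit softmax output, the labels passed to fit() need to be one-hot encoded.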
You can use a similar workflow to build and train a DNN model for our GloVe-based features and evaluate the model performance. The following snippet depicts the workflow for the model training and evaluation steps of our text classification system blueprint.

# build DNN model
glove_dnn = construct_deepnn_architecture(num_input_features=...)
# train DNN model on GloVe training features
batch_size = ...
glove_dnn.fit(train_glove_features, y_train, epochs=..., batch_size=batch_size,
              shuffle=True, validation_split=..., verbose=...)
# get predictions on test reviews
y_pred = glove_dnn.predict_classes(test_glove_features)
predictions = le.inverse_transform(y_pred)
# evaluate model performance
meu.display_model_performance_metrics(true_labels=test_sentiments,
                                      predicted_labels=predictions,
                                      classes=['positive', 'negative'])

We obtained an overall model accuracy and F1-score of close to % with the GloVe features, which is still good but not better than what we obtained using our word2vec features. You can refer to the Sentiment Analysis - Supervised.ipynb Jupyter notebook to see the step-by-step outputs obtained for the previous code. This concludes our discussion on building text sentiment classification systems leveraging newer deep learning models and methodologies. Onwards to learning about advanced deep learning models!

Advanced Supervised Deep Learning Models

We used a fully connected deep neural network and word embeddings in the previous section. Another new and interesting approach toward supervised deep learning is the use of recurrent neural networks (RNNs) and long short-term memory networks (LSTMs), which also consider the sequence of the data (words, events, and so on). These are more advanced models than regular fully connected deep networks and usually take more time to train. We will leverage Keras on top of TensorFlow to build an LSTM-based classification model and use word embeddings as our features. You can refer to the Python file titled sentiment_analysis_adv_deep_learning.py for all the code used in this section, or use the Jupyter notebook titled Sentiment Analysis - Advanced Deep Learning.ipynb for a more interactive experience.

We will be working on our normalized and pre-processed train and test review datasets, norm_train_reviews and norm_test_reviews, which we created in our previous analyses. Assuming you have them loaded up, we will first tokenize these datasets such that each text review is decomposed into its corresponding tokens.

In [ ]: tokenized_train = [tn.tokenizer.tokenize(text) for text in norm_train_reviews]
        tokenized_test = [tn.tokenizer.tokenize(text) for text in norm_test_reviews]

For feature engineering we will be creating word embeddings. However, we will create them ourselves using Keras instead of using pre-built ones like word2vec or GloVe, which we used earlier. Word embeddings vectorize text documents into fixed-sized vectors that try to capture contextual and semantic information.

For generating the embeddings, we will use the Embedding layer from Keras, which requires documents to be represented as tokenized, numeric vectors. We already have tokenized text vectors in our tokenized_train and tokenized_test variables; however, we need to convert them into numeric representations. Besides this, we also need the vectors to be of uniform size, even though the tokenized text reviews are of variable length due to the difference in the number of tokens in each review. For this, one strategy could be to take the length of the longest review (the one with the maximum number of tokens/words) and set it as the vector size; let's call this max_len. Reviews of shorter length can be padded with a PAD term at the beginning to increase their length to max_len.

We need to create a word-to-index vocabulary mapping for representing each tokenized text review in numeric form. Do note that you also need to create a numeric mapping for the padding term, which we shall call PAD_INDEX and assign a reserved numeric index. For unknown terms, in case they are encountered later on in the test dataset or in newer, previously unseen reviews, we need to assign an index too, because we vectorize, engineer features, and build models only on the training data. Hence, if some new term should come up in the future (which was originally not part of the model training), we will consider it an out-of-vocabulary (OOV) term and assign it to a constant index (we will name this term NOT_FOUND_INDEX and assign it an index just beyond the existing vocabulary). The following snippet helps us create this vocabulary from our tokenized_train corpus of training text reviews.
In [ ]: from collections import Counter

        # build word to index vocabulary
        token_counter = Counter([token for review in tokenized_train
                                     for token in review])
        vocab_map = {item[0]: index+1
                         for index, item in enumerate(dict(token_counter).items())}
        max_index = np.max(list(vocab_map.values()))
        vocab_map['PAD_INDEX'] = 0
        vocab_map['NOT_FOUND_INDEX'] = max_index + 1
        vocab_size = len(vocab_map)
        # view vocabulary size and part of the vocabulary map
        print('Vocabulary Size:', vocab_size)
        print('Sample slice of vocabulary map:',
              dict(list(vocab_map.items())[:10]))

Vocabulary Size: ...
Sample slice of vocabulary map: {'martyrdom': ..., 'palmira': ..., 'servility': ..., 'gardening': ..., 'melodramatically': ..., 'renfro': ..., 'carlin': ..., 'overtly': ..., 'rend': ..., 'anticlimactic': ...}

In this case we have used all the terms in our vocabulary; you can easily filter and use more relevant terms here (based on their frequency) by using the most_common(count) function from Counter and taking the first count terms from the list of unique terms in the training corpus. We will now encode the tokenized text reviews based on the previous vocab_map. Besides this, we will also encode the text sentiment class labels into numeric representations.

In [ ]: from keras.preprocessing import sequence
        from sklearn.preprocessing import LabelEncoder

        # get max length of train corpus and initialize label encoder
        le = LabelEncoder()
        num_classes = 2  # positive -> 1, negative -> 0
        max_len = np.max([len(review) for review in tokenized_train])

        ## train reviews data corpus
        # convert tokenized text reviews to numeric vectors
        train_X = [[vocab_map[token] for token in tokenized_review]
                       for tokenized_review in tokenized_train]
        train_X = sequence.pad_sequences(train_X, maxlen=max_len)  # pad
        ## train prediction class labels
        # convert text sentiment labels (negative/positive) to binary encodings (0/1)
        train_y = le.fit_transform(train_sentiments)

        ## test reviews data corpus
        # convert tokenized text reviews to numeric vectors
        test_X = [[vocab_map[token] if vocab_map.get(token)
                       else vocab_map['NOT_FOUND_INDEX']
                       for token in tokenized_review]
                       for tokenized_review in tokenized_test]
        test_X = sequence.pad_sequences(test_X, maxlen=max_len)
        ## test prediction class labels
        # convert text sentiment labels (negative/positive) to binary encodings (0/1)
        test_y = le.transform(test_sentiments)

        # view vector shapes
        print('Max length of train review vectors:', max_len)
        print('Train review vectors shape:', train_X.shape,
              ' Test review vectors shape:', test_X.shape)

Max length of train review vectors: ...
Train review vectors shape: (...)   Test review vectors shape: (...)

From the preceding code snippet and the output, it is clear that we encoded each text review into a numeric sequence vector whose size is the maximum length of the reviews from the training dataset. We pad shorter reviews and truncate extra tokens from longer reviews, so that the shape of each review vector is constant, as depicted in the output. We can now proceed with the feature engineering and modeling parts of the classification workflow by introducing the Embedding layer and coupling it with a deep network architecture based on LSTMs.

from keras.models import Sequential
from keras.layers import Dense, Embedding, Dropout, SpatialDropout1D
from keras.layers import LSTM

EMBEDDING_DIM = ...  # dimension for dense embeddings for each token
LSTM_DIM = ...       # total LSTM units

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=EMBEDDING_DIM,
                    input_length=max_len))
model.add(SpatialDropout1D(...))
model.add(LSTM(LSTM_DIM, dropout=..., recurrent_dropout=...))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

The Embedding layer helps us generate the word embeddings from scratch. This layer is initialized with some weights, and these get updated by our optimizer, similar to the weights on the neuron units in other layers, as the network tries to minimize the loss in each epoch. Thus, the embedding layer tries to optimize its weights such that we get the best word embeddings, which generate minimum error in the model and also capture semantic similarity and relationships among words. How do we get the embeddings? Let's consider we have a review with the terms ['movie', 'was', 'good'] and a vocab_map consisting of word-to-index mappings for these words. The word embeddings would be generated somewhat as depicted in the following figure.
Figure: Understanding how word embeddings are generated

Based on our model architecture, the Embedding layer takes in three parameters: input_dim, which is equal to the vocabulary size (vocab_size); output_dim, which is the dimension of the dense embeddings (depicted by the rows of the embedding layer in the figure); and input_length, which specifies the length of the input sequences (the movie review sequence vectors). In the example depicted in the figure we have one review, so its dimension is (1, 3). This review is converted into a numeric sequence based on the vocab_map, and then the specific rows corresponding to the indices in the review sequence are selected from the embedding layer to generate the final word embeddings. This gives us an embedding matrix with one row per word of the review sequence.

Many deep learning frameworks like Keras represent the embedding dimensions as (m, n), where m represents all the unique terms in our vocabulary and n represents the output_dim. Consider a transposed version of the layer depicted in the figure and you are good to go! Usually, if you have the encoded review-terms sequence vector represented in one-hot encoded format and perform a matrix multiplication with the embedding layer represented as (m, n), where each row represents the embedding for a word in the vocabulary, you will directly obtain the word embeddings for the review sequence vector. The weights in the embedding layer get updated and optimized in each epoch based on the input data when it is propagated through the whole network, like we mentioned earlier, such that the overall loss and error are minimized for maximum model performance.

These dense word embeddings are then passed to the LSTM layer. We already introduced the LSTM architecture briefly in an earlier chapter, in the subsection titled "Long Short-Term Memory Networks" in the "Important Concepts" section under "Deep Learning". LSTMs basically try to overcome the shortcomings of RNN models, especially with regard to handling long-term dependencies and the problems which occur when the weight matrix associated with the units (neurons) becomes too small (leading to vanishing gradients) or too large (leading to exploding gradients). These architectures are more complex than regular deep networks, and going into their detailed internals and math concepts would be out of the current scope, but we will try to cover the essentials here without making it math heavy. If you're interested in researching the internals of LSTMs, check out the original paper which inspired it all: Hochreiter, S. and Schmidhuber, J. (1997), "Long Short-Term Memory", Neural Computation 9(8), 1735-1780. We depict the basic architecture of RNNs and compare it with LSTMs in the following figure.
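To make the lookup concrete, here is a small illustrative sketch; the vocabulary indices and embedding values below are made up purely for this example and are not part of our trained model. Selecting rows of an embedding matrix by token index is exactly what produces the embeddings for the review ['movie', 'was', 'good'].

import numpy as np

# hypothetical word-to-index mapping and a tiny random embedding matrix
toy_vocab_map = {'PAD_INDEX': 0, 'movie': 1, 'was': 2, 'good': 3}
embedding_matrix = np.random.rand(len(toy_vocab_map), 4)  # shape: (vocab_size, embedding_dim)

review = ['movie', 'was', 'good']
sequence = [toy_vocab_map[token] for token in review]     # [1, 2, 3]
review_embeddings = embedding_matrix[sequence]            # select rows 1, 2 and 3
print(review_embeddings.shape)                            # (3, 4)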
Figure: Basic structure of RNN and LSTM units (source: Christopher Olah's blog, colah.github.io)

RNN units usually have a chain of repeating modules (this happens when we unroll the loop; refer to the earlier figure where we talk about this), such that each module has a simple structure of maybe one layer with a tanh activation. LSTMs are a special type of RNN with a similar chain-like structure, but the LSTM unit has four neural network layers instead of just one. The detailed architecture of an LSTM cell is shown in the next figure.
Figure: Detailed architecture of an LSTM cell (source: Christopher Olah's blog, colah.github.io)

The detailed architecture of an LSTM cell is depicted in the figure. The subscript t indicates one time step, C depicts the cell states, and h indicates the hidden states. The gates i, f and o help in removing or adding information to the cell state; they represent the input, forget and output gates respectively, and each of them is modulated by a sigmoid layer which outputs numbers from 0 to 1, controlling how much of the output from these gates should pass. This helps in protecting and controlling the cell state. The detailed workflow of how information flows through the LSTM cell is depicted in the next figure in four steps.

The first step concerns the forget gate layer f_t, which helps us decide what information should be thrown away from the cell state. This is done by looking at the previous hidden state h_(t-1) and the current input x_t, as depicted in the corresponding equation; the sigmoid layer controls how much of this should be kept or forgotten. The second step depicts the input gate layer i_t, which helps decide what information will be stored in the current cell state. The sigmoid layer in the input gate decides which values will be updated based on h_(t-1) and x_t, and a tanh layer creates a vector of new candidate values, C̃_t, again based on h_(t-1) and x_t, which can be added to the current cell state. Thus the tanh layer creates the candidate values and the input gate with its sigmoid layer chooses which values should be updated. The third step involves updating the old cell state C_(t-1) to the new cell state C_t by leveraging what we obtained in the first two steps: we multiply the old cell state by the forget gate, f_t * C_(t-1), and then add the new candidate values scaled by the input gate, i_t * C̃_t. The fourth and final step decides what the final output should be, which is basically a filtered version of the cell state. The output gate o_t with its sigmoid layer selects which parts of the cell state will pass to the final output; this is multiplied with the cell state values passed through a tanh layer to give us the final hidden state values, h_t = o_t * tanh(C_t).
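For completeness, the four steps described above correspond to the standard LSTM update equations, written here in the common textbook form with sigmoid gates, weight matrices W and biases b (this is the general formulation rather than anything specific to our Keras model):

f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)
i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)
\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t
o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)
h_t = o_t \odot \tanh(C_t)

where \sigma is the sigmoid function and \odot denotes element-wise multiplication.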
All these steps in this detailed workflow are depicted in the next figure with the necessary annotations and equations. We would like to thank our good friend Christopher Olah for providing us detailed information as well as the images depicting the internal workings of LSTM networks; we recommend checking out Christopher's blog at colah.github.io. A shout out also goes to Edwin Chen for explaining RNNs and LSTMs in an easy-to-understand format; we recommend referring to Edwin's blog for more information on the workings of RNNs and LSTMs.

Figure: Walkthrough of data flow in an LSTM cell (source: Christopher Olah's blog, colah.github.io)

The final layer in our deep network is a Dense layer with 1 unit and the sigmoid activation function. We use the binary_crossentropy loss function with the adam optimizer, since this is a binary classification problem and the model ultimately predicts a 0 or a 1, which we can decode back to a negative or positive sentiment prediction with our label encoder. You could also use the categorical_crossentropy loss function here, but you would then need a Dense layer with 2 units and a softmax activation instead. Now that our model is compiled and ready, we can head on to the training step of our classification workflow and actually train the model. We use a strategy similar to our previous deep network models, where we train on the training data for five epochs with a chosen batch size of reviews and a validation split of the training data to measure validation accuracy.
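If you prefer the categorical formulation mentioned above, a minimal sketch of the changed output configuration might look like the following; this assumes the same model as before with only the final layer and the compile step replaced, and the use of keras.utils.to_categorical for one-hot encoding the labels is an illustrative assumption rather than code from the book.

from keras.layers import Dense
from keras.utils import to_categorical

# replace the final Dense(1, activation='sigmoid') layer and the compile step with:
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# the 0/1 integer labels then need to be one-hot encoded before calling fit()
train_y_onehot = to_categorical(train_y, num_classes=2)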
In [ ]: batch_size = ...
        model.fit(train_X, train_y, epochs=5, batch_size=batch_size,
                  shuffle=True, validation_split=..., verbose=...)

Train on ... samples, validate on ... samples
Epoch 1/5 to Epoch 5/5: loss, acc, val_loss and val_acc reported for each of the five epochs

Training LSTMs on a CPU is notoriously slow; as you can see, my model took several hours to train for just five epochs on a 3rd-gen Intel CPU-based machine. Of course, a cloud-based environment like Google Cloud Platform or AWS with a GPU took approximately less than an hour to train the same model, so I would recommend you choose a GPU-based deep learning environment, especially when working with RNNs or LSTM-based network architectures. Based on the preceding output, we can see that with just five epochs we have a decent validation accuracy, but the training accuracy starts shooting up, indicating that some over-fitting might be happening. Ways to overcome this include adding more data or increasing the dropout rate. Do give it a shot and see if it works! Time to put our model to the test! Let's see how well it predicts the sentiment for our test reviews, using the same model evaluation framework we used for our previous models.

In [ ]: # predict sentiments on test data
        pred_test = model.predict_classes(test_X)
        predictions = le.inverse_transform(pred_test.flatten())
        # evaluate model performance
        meu.display_model_performance_metrics(true_labels=test_sentiments,
                                              predicted_labels=predictions,
                                              classes=['positive', 'negative'])

Figure: Model performance metrics for the LSTM-based deep learning model on word embeddings

The results depicted in the figure show that we have obtained a model accuracy and F1-score of close to %, which is quite good! With more quality data you can expect to get even better results. Try experimenting with different architectures and see if you get better results!
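As a quick illustration of the dropout suggestion above, one could simply bump up the dropout fractions when defining the model; the exact values below are arbitrary choices to experiment with, not recommendations from the book, and the imports and constants are re-used from the earlier model definition.

# same architecture as before, with more aggressive regularization
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=EMBEDDING_DIM,
                    input_length=max_len))
model.add(SpatialDropout1D(0.4))  # drops entire embedding channels at random
model.add(LSTM(LSTM_DIM, dropout=0.4, recurrent_dropout=0.4))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])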
Analyzing Sentiment Causation

We built both supervised and unsupervised models to predict the sentiment of movie reviews based on the review text content. While feature engineering and modeling is definitely the need of the hour, you also need to know how to analyze and interpret the root cause behind how model predictions work. In this section, we analyze sentiment causation. The idea is to determine the root cause or key factors causing positive or negative sentiment. The first area of focus will be model interpretation, where we will try to understand, interpret, and explain the mechanics behind predictions made by our classification models. The second area of focus is to apply topic modeling and extract key topics from positive and negative sentiment reviews.

Interpreting Predictive Models

One of the challenges with machine learning models is the transition from the pilot or proof-of-concept phase to the production phase. Business and key stakeholders often perceive machine learning models as complex black boxes and pose the question, "Why should I trust your model?" Explaining complex mathematical or theoretical concepts to them doesn't serve the purpose. Is there some way in which we can explain these models in an easy-to-interpret manner? This topic has in fact gained extensive attention very recently. Refer to the original research paper by Ribeiro, Singh, and Guestrin titled "Why Should I Trust You?: Explaining the Predictions of Any Classifier" to understand more about model interpretation and the LIME framework, and check out the chapter where we cover the Skater framework in detail, which performs excellent interpretations of various models.

There are various ways to interpret the predictions made by our predictive sentiment classification models. We want to understand more about why a positive review was correctly predicted as having positive sentiment, or a negative review as having negative sentiment. Besides this, no model is always accurate, so we would also want to understand the reason for mis-classifications or wrong predictions. The code used in this section is available in the file named sentiment_causal_model_interpretation.py, or you can also refer to the Jupyter notebook named Sentiment Causal Analysis - Model Interpretation.ipynb for an interactive experience.

Let's first build a basic text classification pipeline for the model that has worked best for us so far: the logistic regression model based on the bag-of-words features. We will leverage the pipeline module from scikit-learn to build this machine learning pipeline using the following code.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# build BOW features on train reviews
cv = CountVectorizer(binary=False, min_df=..., max_df=..., ngram_range=(...))
cv_train_features = cv.fit_transform(norm_train_reviews)
# build logistic regression model
lr = LogisticRegression()
lr.fit(cv_train_features, train_sentiments)
# build text classification pipeline
lr_pipeline = make_pipeline(cv, lr)
# save the list of prediction classes (positive, negative)
classes = list(lr_pipeline.classes_)
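Since this fitted pipeline is also what we would ship to production, a minimal sketch of persisting and reloading it with joblib could look like the following; the file name is an arbitrary choice for illustration.

from sklearn.externals import joblib  # in newer scikit-learn versions, use `import joblib`

# persist the fitted pipeline (vectorizer + classifier) to disk
joblib.dump(lr_pipeline, 'lr_sentiment_pipeline.pkl')

# later, reload it and predict on new reviews
lr_pipeline = joblib.load('lr_sentiment_pipeline.pkl')
print(lr_pipeline.predict(['what a wonderful movie']))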
We build our model based on norm_train_reviews, which contains the normalized training reviews that we have used in all our earlier analyses. Now that we have our classification pipeline ready, you can actually deploy the model by using pickle or joblib to save the classifier and feature objects, similar to what we discussed in the "Model Deployment" section earlier in the book. Assuming our pipeline is in production, how do we use it for new movie reviews? Let's try to predict the sentiment for two new sample reviews (which were not used in training the model).

In [ ]: lr_pipeline.predict(['the lord of the rings is an excellent movie',
                            'i hated the recent movie on tv, it was so bad'])
Out[ ]: array(['positive', 'negative'], dtype=object)

Our classification pipeline predicts the sentiment of both reviews correctly! This is a good start, but how do we interpret the model predictions? One way is to use the model's prediction class probabilities as a measure of confidence. You can use the following code to get the prediction probabilities for our sample reviews.

In [ ]: pd.DataFrame(lr_pipeline.predict_proba(['the lord of the rings is an excellent movie',
                                               'i hated the recent movie on tv, it was so bad']),
                     columns=classes)
Out[ ]: (a dataframe with the negative and positive class probabilities for each of the two reviews)

Thus, we can say that the first movie review has a high prediction probability of having positive sentiment, while the second movie review correspondingly has a high probability of having negative sentiment. Let's now kick it up a notch; instead of playing around with toy examples, we will run the same analysis on actual reviews from the test_reviews dataset (we will use norm_test_reviews, which has the normalized text reviews). Besides prediction probabilities, we will be using the Skater framework for easy interpretation of the model decisions, similar to what we have done earlier under the section "Model Interpretation". You need to load the following dependencies from the skater package first. We also define a helper function which takes in a document index, a corpus, its response predictions, and an explainer object, and helps us with our model interpretation analysis.

from skater.core.local_interpretation.lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=classes)

# helper function for model interpretation
def interpret_classification_model_prediction(doc_index, norm_corpus, corpus,
                                               prediction_labels, explainer_obj):
    # display model prediction and actual sentiments
    print("Test document index: {index}\nActual sentiment: {actual}"
          "\nPredicted sentiment: {predicted}".format(
              index=doc_index, actual=prediction_labels[doc_index],
              predicted=lr_pipeline.predict([norm_corpus[doc_index]])))
    # display actual review content
    print("\nReview:", corpus[doc_index])
    # display prediction probabilities
    print("\nModel prediction probabilities:")
    for probs in zip(classes, lr_pipeline.predict_proba([norm_corpus[doc_index]])[0]):
        print(probs)
    # display model prediction interpretation
    exp = explainer_obj.explain_instance(norm_corpus[doc_index],
                                         lr_pipeline.predict_proba,
                                         num_features=..., labels=[...])
    exp.show_in_notebook()

The preceding snippet leverages Skater to explain our text classifier and analyze its decision-making process in an easy-to-interpret form. Even though the model might be a complex one from a global perspective, it is easier to explain and approximate its behavior on local instances. This is done by learning the model around the vicinity of the data point of interest X, by sampling instances around X and assigning weightages based on their proximity to X. Thus, these locally learned linear models help in explaining complex models in a more easily interpretable way, with class probabilities and the contributions of the top features to the class probabilities that aid in the decision-making process. Let's take a movie review from our test dataset where both the actual and the predicted sentiment are negative and analyze it with the helper function we created in the preceding snippet.

In [ ]: doc_index = ...
        interpret_classification_model_prediction(doc_index=doc_index,
            norm_corpus=norm_test_reviews, corpus=test_reviews,
            prediction_labels=test_sentiments, explainer_obj=explainer)

Test document index: ...
Actual sentiment: negative
Predicted sentiment: ['negative']

Review: Worst movie, (with the best reviews given it) I've ever seen. Over the top dialog, acting, and direction. more slasher flick than thriller. With all the great reviews this movie got I'm appalled that it turned out so silly. shame on you Martin Scorsese

Model prediction probabilities:
('negative', ...)
('positive', ...)

Figure: Model interpretation for our classification model's correct prediction of a negative review
The results depicted in the figure show us the class prediction probabilities and also the top features that contributed the most to the prediction decision-making process. These key features are also highlighted in the normalized movie review text. Our model performs quite well in this scenario, and we can see the key features that contributed to the negative sentiment of this review, including bad, silly, dialog, and shame, which make sense. Besides this, the word great contributed the most to the positive probability, and in fact, if we had removed this word from our review text, the positive probability would have dropped significantly. The following code runs a similar analysis on a test movie review where both the actual and the predicted sentiment are positive.

In [ ]: doc_index = ...
        interpret_classification_model_prediction(doc_index=doc_index,
            norm_corpus=norm_test_reviews, corpus=test_reviews,
            prediction_labels=test_sentiments, explainer_obj=explainer)

Test document index: ...
Actual sentiment: positive
Predicted sentiment: ['positive']

Review: I really liked the movie "Joe". It has really become a cult classic among certain age groups. The producer of this movie is a personal friend of mine. He is my stepson's father-in-law. He lives in Manhattan's West Side, and has a bungalow in Southampton, Long Island. His son-in-law lives next door to his bungalow. Presently, he does not do any producing, but dabbles in a business with HBO movies. As a person, Mr. Gil is a real gentleman and I wish he would have continued in the production business of movie making.

Model prediction probabilities:
('negative', ...)
('positive', ...)

Figure: Model interpretation for our classification model's correct prediction of a positive review
The results depicted in the figure show the top features responsible for the model deciding to predict this review as positive. Based on the content, the reviewer really liked this movie, and it has also become a real cult classic among certain age groups. In our final analysis, we will look at the model interpretation of an example where the model makes a wrong prediction.

In [ ]: doc_index = ...
        interpret_classification_model_prediction(doc_index=doc_index,
            norm_corpus=norm_test_reviews, corpus=test_reviews,
            prediction_labels=test_sentiments, explainer_obj=explainer)

Test document index: ...
Actual sentiment: negative
Predicted sentiment: ['positive']

Review: When I first saw this film in cinema years ago, I loved it. I still think the directing and cinematography are excellent, as is the music. But it's really the script that has over the time started to bother me more and more. I find Emma Thompson's writing self-absorbed and unfaithful to the original book; she has reduced Marianne to a side-character, a second fiddle to her much too old, much too severe Elinor. She in the movie is given many sort of 'focus moments', and often they appear to be there just to show off Thompson herself. I do understand her cutting off several characters from the book, but leaving out the one scene where Willoughby in the book is redeemed? For someone who read and cherished the book long before the movie, those are the things always difficult to digest. As for the actors, I love Kate Winslet as Marianne. She is not given the best script in the world to work with but she still pulls it up gracefully, without too much sentimentality. Alan Rickman is great, a bit old perhaps, but he plays the role beautifully. And Elizabeth Spriggs, she is absolutely fantastic as always.

Model prediction probabilities:
('negative', ...)
('positive', ...)

Figure: Model interpretation for an incorrect prediction by our classification model
The preceding output tells us that our model predicted the movie review as indicating positive sentiment when, in fact, the actual sentiment label of the review is negative. The results depicted in the figure tell us that the reviewer does show signs of positive sentiment in the movie review, especially in parts where they tell us that "I loved it. I still think the directing and cinematography are excellent, as is the music... Alan Rickman is great, a bit old perhaps, but he plays the role beautifully. And Elizabeth Spriggs, she is absolutely fantastic as always," and feature words from these parts are depicted among the top features contributing to positive sentiment. The model interpretation also correctly identifies the aspects of the review contributing to negative sentiment, like "But it's really the script that has over the time started to bother me more and more." Hence, this is one of the more complex reviews, which indicates both positive and negative sentiment, and the final interpretation would be in the reader's hands. You can now use this same framework to interpret your own classification models in the future and understand where your model might be performing well and where it might need improvements!

Analyzing Topic Models

Another way of analyzing the key terms, concepts, or topics responsible for sentiment is to use a different approach known as topic modeling. We have already covered some basics of topic modeling in the section titled "Topic Models" under "Feature Engineering on Text Data". The main aim of topic models is to extract and depict key topics or concepts which are otherwise latent and not very prominent in huge corpora of text documents. We have already seen the use of latent Dirichlet allocation (LDA) for topic modeling; in this section, we use another topic modeling technique called non-negative matrix factorization. Refer to the Python file named sentiment_causal_topic_models.py or the Jupyter notebook titled Sentiment Causal Analysis - Topic Models.ipynb for a more interactive experience.

The first step in this analysis is to combine all our normalized train and test reviews and separate these reviews into positive and negative sentiment reviews. Once we do this, we will extract features from these two datasets using a TF-IDF feature vectorizer. The following snippet helps us achieve this.

In [ ]: from sklearn.feature_extraction.text import TfidfVectorizer

        # consolidate all normalized reviews
        norm_reviews = norm_train_reviews + norm_test_reviews
        # get tf-idf features for only positive reviews
        positive_reviews = [review for review, sentiment in zip(norm_reviews, sentiments)
                                if sentiment == 'positive']
        ptvf = TfidfVectorizer(use_idf=True, min_df=..., max_df=...,
                               ngram_range=(...), sublinear_tf=True)
        ptvf_features = ptvf.fit_transform(positive_reviews)
        # get tf-idf features for only negative reviews
        negative_reviews = [review for review, sentiment in zip(norm_reviews, sentiments)
                                if sentiment == 'negative']
        ntvf = TfidfVectorizer(use_idf=True, min_df=..., max_df=...,
                               ngram_range=(...), sublinear_tf=True)
        ntvf_features = ntvf.fit_transform(negative_reviews)
        # view feature set dimensions
        print(ptvf_features.shape, ntvf_features.shape)
From the preceding output dimensions, you can see that we have filtered out a lot of the features we used previously when building our classification models, by using stricter min_df and max_df thresholds. This is to speed up the topic modeling process and remove features that occur either too frequently or too rarely. Let's now import the necessary dependencies for the topic modeling process.

In [ ]: import pyLDAvis
        import pyLDAvis.sklearn
        from sklearn.decomposition import NMF
        import topic_model_utils as tmu

        pyLDAvis.enable_notebook()
        total_topics = ...

The NMF class from scikit-learn will help us with topic modeling. We also use pyLDAvis for building interactive visualizations of topic models. The core principle behind non-negative matrix factorization (NNMF) is to apply matrix decomposition (similar to SVD) to a non-negative feature matrix V, such that the decomposition can be represented as V ≈ WH, where W and H are both non-negative matrices which, if multiplied, should approximately reconstruct the feature matrix. A cost function like the L2 norm can be used for obtaining this approximation. Let's now apply NNMF to get topics from our positive sentiment reviews. We will also leverage some utility functions from our topic_model_utils module to display the results in a clean format.

In [ ]: # build topic model on positive sentiment review features
        pos_nmf = NMF(n_components=total_topics, random_state=...,
                      alpha=..., l1_ratio=...)
        pos_nmf.fit(ptvf_features)
        # extract features and component weights
        pos_feature_names = ptvf.get_feature_names()
        pos_weights = pos_nmf.components_
        # extract and display topics and their components
        pos_topics = tmu.get_topics_terms_weights(pos_weights, pos_feature_names)
        tmu.print_topics_udf(topics=pos_topics, total_topics=total_topics,
                             num_terms=15, display_weights=False)

Topic #... without weights
['like', 'not', 'think', 'really', 'say', 'would', 'get', 'know', 'thing', 'much', 'bad', 'go', 'lot', 'could', 'even']
Topic #... without weights
['movie', 'see', 'watch', 'great', 'good', 'one', 'not', 'time', 'ever', 'enjoy', 'recommend', 'make', 'acting', 'like', 'first']
Topic #... without weights
['show', 'episode', 'series', 'tv', 'watch', 'dvd', 'first', 'see', 'time', 'one', 'good', 'year', 'remember', 'ever', 'would']
Topic #... without weights
['performance', 'role', 'play', 'actor', 'cast', 'good', 'well', 'great', 'character', 'excellent', 'give', 'also', 'support', 'star', 'job']
Topic #... without weights
['love', 'fall', 'song', 'wonderful', 'beautiful', 'music', 'heart', 'girl', 'would', 'watch', 'great', 'favorite', 'always', 'family', 'woman']
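Formally, the NNMF decomposition described above can be written as the following standard optimization problem (scikit-learn's NMF additionally supports L1/L2 regularization of W and H through the alpha and l1_ratio parameters used in the snippet):

\min_{W \ge 0,\; H \ge 0} \; \lVert V - WH \rVert_F^2

where V is the (documents x terms) TF-IDF feature matrix, W is the (documents x topics) document-topic weight matrix, H is the (topics x terms) topic-term component matrix, and \lVert \cdot \rVert_F denotes the Frobenius norm.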
We depict some of the topics out of the total topics generated in the preceding output. You can now leverage pyLDAvis to visualize these topics in an interactive visualization (see the figure below).

In [ ]: pyLDAvis.sklearn.prepare(pos_nmf, ptvf_features, ptvf, R=...)

Figure: Visualizing topic models on positive sentiment movie reviews

The visualization depicted in the figure shows us the topics from positive movie reviews, and we can see the top relevant terms for one of the topics highlighted in the output. From the topics and the terms, we can see that terms like movie, cast, actors, performance, play, characters, music, wonderful, good, and so on have contributed toward positive sentiment in various topics. This is quite interesting and gives you good insight into the components of the reviews that contribute toward the positive sentiment of the reviews.

This visualization is completely interactive if you are using the Jupyter notebook: you can click on any of the bubbles representing topics in the intertopic distance map on the left and see the most relevant terms for that topic in the bar chart on the right. The plot on the left is rendered using multi-dimensional scaling (MDS); similar topics should be close to one another and dissimilar topics should be far apart, and the size of each topic bubble is based on the frequency of that topic and its components in the overall corpus. The visualization on the right shows the top terms; when no topic is selected, it shows the most salient terms in the corpus. A term's saliency is defined as a measure of both how frequently the term appears in the corpus and its distinguishing power when used to discriminate between topics. When some topic is selected, the chart changes to show the top most relevant terms for that topic. The relevance metric is controlled by λ, which can be changed with a slider on top of the bar chart (refer to the notebook to interact with this). If you're interested in the mathematical theory behind these visualizations, you are encouraged to check out the details vignette of the LDAvis R package, which has been ported to Python as pyLDAvis.
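For reference, the two measures mentioned above are defined in the LDAvis paper by Sievert and Shirley (saliency itself comes from earlier work by Chuang et al.) roughly as follows, where \phi_{t,w} is the probability of term w in topic t and p_w is the overall probability of term w in the corpus:

\mathrm{saliency}(w) = p_w \times \sum_{t} P(t \mid w)\,\log \frac{P(t \mid w)}{P(t)}

\mathrm{relevance}(w \mid t) = \lambda\,\log \phi_{t,w} + (1 - \lambda)\,\log \frac{\phi_{t,w}}{p_w}

Setting λ = 1 ranks terms purely by their probability within the topic, while smaller values of λ favor terms that are distinctive to that topic.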
Let's now extract topics and run the same analysis on our negative sentiment reviews from the movie reviews dataset.

In [ ]: # build topic model on negative sentiment review features
        neg_nmf = NMF(n_components=total_topics, random_state=...,
                      alpha=..., l1_ratio=...)
        neg_nmf.fit(ntvf_features)
        # extract features and component weights
        neg_feature_names = ntvf.get_feature_names()
        neg_weights = neg_nmf.components_
        # extract and display topics and their components
        neg_topics = tmu.get_topics_terms_weights(neg_weights, neg_feature_names)
        tmu.print_topics_udf(topics=neg_topics, total_topics=total_topics,
                             num_terms=15, display_weights=False)

Topic #... without weights
['get', 'go', 'kill', 'guy', 'scene', 'take', 'end', 'back', 'start', 'around', 'look', 'one', 'thing', 'come', 'first']
Topic #... without weights
['bad', 'movie', 'ever', 'acting', 'see', 'terrible', 'one', 'plot', 'effect', 'awful', 'not', 'even', 'make', 'horrible', 'special']
Topic #... without weights
['waste', 'time', 'money', 'watch', 'minute', 'hour', 'movie', 'spend', 'not', 'life', 'save', 'even', 'worth', 'back', 'crap']

In [ ]: pyLDAvis.sklearn.prepare(neg_nmf, ntvf_features, ntvf, R=...)

Figure: Visualizing topic models on negative sentiment movie reviews
The visualization depicted in the figure shows us the topics from negative movie reviews, and we can see the top relevant terms for one of the topics highlighted in the output. From the topics and the terms, we can see that terms like waste, time, money, crap, plot, terrible, acting, and so on have contributed toward negative sentiment in various topics. Of course, there are high chances of overlap between the topics from positive and negative sentiment reviews, but there will be distinguishable, distinct topics that further help us with interpretation and causal analysis.

Summary

This case-study oriented chapter introduced the IMDb movie review dataset with the objective of predicting the sentiment of the reviews based on their textual content. We covered concepts and techniques from natural language processing (NLP), text analytics, machine learning, and deep learning. We covered multiple aspects of NLP, including text pre-processing, normalization, and feature engineering, as well as text classification. Unsupervised learning techniques using sentiment lexicons like AFINN, SentiWordNet, and VADER were covered in extensive detail, to show how we can analyze sentiment in the absence of labeled training data, which is a very valid problem in today's organizations. Detailed workflow diagrams depicting text classification as a supervised machine learning problem helped us relate NLP with machine learning, so that we can use machine learning techniques and methodologies to solve this problem of predicting sentiment when labeled data is available.

The focus on supervised methods was two-fold. This included traditional machine learning approaches and models like logistic regression and support vector machines, and newer deep learning models including deep neural networks, RNNs, and LSTMs. Detailed concepts, workflows, hands-on examples, and comparative analyses with multiple supervised models and different feature engineering techniques have been covered for the purpose of predicting sentiment from movie reviews with maximum model performance. The final section of this chapter covered a very important aspect of machine learning that is often neglected in our analyses: we looked at ways to analyze and interpret the cause of positive or negative sentiment. Analyzing and visualizing model interpretations and topic models have been covered with several examples, to give you good insight into how you can re-use these frameworks on your own datasets. The frameworks and methodologies used in this chapter should be useful for tackling similar problems on your own text data in the future.
Customer Segmentation and Effective Cross Selling

Money makes the world go round, and in the current ecosystem of data-intensive business practices, it is safe to claim that data also makes the world go round. A very important skill set for data scientists is to match the technical aspects of analytics with its business value, i.e., its monetary value. This can be done in a variety of ways and is very much dependent on the type of business and the data available. In earlier chapters we covered problems that can be framed as business problems (leveraging the CRISP-DM model) and linked to revenue generation. In this chapter, we will focus directly on two very important problems that can have a positive impact on the revenue streams of businesses and establishments, particularly from the retail domain. This chapter is also unique in the way that we address a different paradigm of machine learning algorithms altogether, focusing more on tasks pertaining to pattern recognition and unsupervised learning.

In this chapter, we first digress from our usual technical focus and try to gather some business and domain knowledge. This knowledge is quite important, as often this is the stumbling block for many data scientists in scenarios where a perfectly developed machine learning solution is not productionalized due to a lack of focus on the actual value obtained from the solution based on business demands. A firm grasp of the underlying business (and monetary) motivation helps data scientists with defining the value aspect of their solutions, and hence ensures that they are deployed and contribute to the generation of realizable value for their employers. For achieving this objective, we will start with a retail transactions dataset sourced from the UCI Machine Learning Repository and use this dataset to target two fairly simple but important problems. The detailed code, notebooks, and datasets used in this chapter are available in the respective directory for this chapter in the GitHub repository.

Customer segmentation: Customer segmentation is the problem of uncovering information about a firm's customer base, based on their interactions with the business. In most cases this interaction is in terms of their purchase behavior and patterns. We explore some of the ways in which this can be used.

Market basket analysis: Market basket analysis is a method to gain insights into the granular behavior of customers. This is helpful in devising strategies which uncover a deeper understanding of the purchase decisions taken by the customers. This is interesting, as a lot of times even the customers will be unaware of such biases or trends in their purchasing behavior.
Online Retail Transactions Dataset

The online retail transactions dataset is available from the UCI Machine Learning Repository. We have already used some datasets from this repository in earlier chapters, which should underline the importance of this repository to the users. The dataset we will be using for our analysis is quite simple. Based on its description on the UCI web site, it contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retailer. From the web site, we also learn that the company sells unique all-occasion gift items and that a lot of customers of the organization are wholesalers. The last piece of information is particularly important, as it gives us an opportunity to explore the purchase behaviors of large-scale customers instead of normal retail customers only. The dataset does not have any information that will help us distinguish between a wholesale purchase and a retail purchase.

Before we get started, make sure you load the following dependencies.

import pandas as pd
import datetime
import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
%matplotlib inline

Note: We encourage you to check out the UCI Machine Learning Repository and the page for this particular dataset, which also lists research papers that use the same dataset. We believe the papers, along with the analysis performed in this chapter, will make an interesting read for all our readers.

Exploratory Data Analysis

We have always maintained that, irrespective of the actual use case or the algorithm we intend to implement, the standard analysis workflow should always start with exploratory data analysis (EDA). So, following tradition, we will start with EDA on our dataset. The first thing you should notice about the dataset is its format: unlike most of the datasets that we have handled in this book, the dataset is not in CSV format and instead comes as an Excel file. In some other languages (or even frameworks) this could have been a cause of problems, but with Python and particularly pandas we don't face any such problem; we can read the dataset using the read_excel function provided by the pandas library. We also take a look at some of the lines in the dataset.

In [ ]: cs_df = pd.read_excel(io='Online Retail.xlsx')
The first few lines of the dataset give us information about its attributes, as shown in the figure.

Figure: Sample transactions from the online retail transactions dataset

The attributes of the dataset are easily identifiable from their names, and we know right away what each of these fields might mean. For the sake of completeness, we include a description of each column here:

InvoiceNo: A unique identifier for the invoice. An invoice number shared across rows means that those transactions were performed in a single invoice (multiple purchases).
StockCode: Identifier for the items contained in an invoice.
Description: Textual description of each of the stock items.
Quantity: The quantity of the item purchased.
InvoiceDate: Date of purchase.
UnitPrice: Value of each item.
CustomerID: Identifier for the customer making the purchase.
Country: Country of the customer.

Let's analyze this data and first determine which are the top countries the retailer is shipping its items to, and what the sales volumes for those countries are.

In [ ]: cs_df.Country.value_counts().reset_index().head(n=10)
Out[ ]: (transaction counts by country, in order: United Kingdom, Germany, France, EIRE, Spain, Netherlands, Belgium, Switzerland, Portugal, Australia)

This shows us that the bulk of the ordering takes place in its home country only, which is not surprising. We also notice the odd country name EIRE, which is a little concerning, but a quick web search indicates that it is just an old name for Ireland, so no harm done! Interestingly, Australia is also in the top-ten list of sales by country.
Next, we might be interested in how many unique customers the retailer has and how they stack up in the number of orders they make. We are also interested in knowing what percentage of orders is made by the top customers of the retailer. This information is interesting, as it would tell us whether the user base of the firm is distributed relatively uniformly.

In [ ]: cs_df.CustomerID.unique().shape
Out[ ]: (...,)

In [ ]: (cs_df.CustomerID.value_counts()/sum(cs_df.CustomerID.value_counts())*100).head(n=...).cumsum()
Out[ ]: (cumulative percentages of transactions for the top customers)
        Name: CustomerID, dtype: float64

This tells us that we have several thousand unique customers, but a noticeable share of total sales is contributed by only a small number of customers (based on the cumulative percentage aggregation in the preceding output). This is expected, given the fact that we have both wholesale and retail customers. The next thing we want to determine is how many unique items the firm is selling, and to check whether we have an equal number of descriptions for them.

In [ ]: cs_df.StockCode.unique().shape
Out[ ]: (...,)
In [ ]: cs_df.Description.unique().shape
Out[ ]: (...,)

We have a mismatch between the number of StockCode and Description values: there are more item descriptions than stock code values, which means that we have multiple descriptions for some of the stock codes. Although this is not going to interfere with our analysis, we would like to dig a little deeper to find out what may have caused this issue, or what kind of duplicated descriptions are present in the data.

In [ ]: cat_des_df = cs_df.groupby(["StockCode", "Description"]).count().reset_index()
        cat_des_df.StockCode.value_counts()[cat_des_df.StockCode.value_counts() > 1].reset_index().head()
Out[ ]: (stock codes that have more than one associated description, with their counts)
In [ ]: cs_df[cs_df['StockCode'] ==
               cat_des_df.StockCode.value_counts()[cat_des_df.StockCode.value_counts() > 1
                   ].reset_index()['index'][...]]['Description'].unique()
Out[ ]: array(['mistletoe heart wreath cream', 'mistletoe heart wreath white',
               'mistletoe heart wreath cream', '?', 'had been put aside', nan], dtype=object)

This gives the multiple descriptions for one of those items, and we witness the simple ways in which data quality can be corrupted in any dataset. A simple spelling mistake can end up reducing data quality and lead to an erroneous analysis. In an enterprise-level scenario, dedicated people work toward restoring data quality manually over time. Since the intent of this section is to focus on customer segmentation, we will skip this tedious activity for now. Let's now verify the sanctity of the Quantity and UnitPrice attributes, as those are the attributes we will be using in our analysis.

In [ ]: cs_df.Quantity.describe()
Out[ ]: (summary statistics for Quantity, including a negative minimum value)

In [ ]: cs_df.UnitPrice.describe()
Out[ ]: (summary statistics for UnitPrice, including a negative minimum value)

We can observe from the preceding output that both of these attributes have negative values, which may mean that we have some return transactions in our data as well. This scenario is quite common for any retailer, but we need to handle these before we proceed to our analysis. These are some of the data quality issues we found in our dataset. In the real world, datasets are generally messy and have considerable data quality issues, so it is always good practice to explicitly verify the information at hand before performing any kind of analysis. We encourage you to try and find similar issues with any dataset you might want to analyze in the future!
Customer Segmentation

Segmentation is the process of segregating any aggregated entity into separate parts or groups (segments). These parts may or may not share something in common. Customer segmentation is similarly the process of dividing an organization's customer base into different sections or segments based on various customer attributes. It is driven by the belief that customers are inherently different and that this difference is exemplified by their behavior. A deep understanding of an organization's customer base and their behavior is often the focus of any customer segmentation project.

The process of customer segmentation is based on the premise of finding differences among the customers' behavior and patterns. These differences can be on the basis of their buying behavior, their demographic information, their geographic information, their psychographic attributes, and so on.

Objectives

Customer segmentation can help an organization in a multitude of ways. Before we describe the various ways it can be done, we want to enumerate the major objectives and benefits behind the motivation for customer segmentation.

Customer Understanding

One of the primary objectives of the customer segmentation process is a deeper understanding of a firm's customers and their attributes and behavior. These insights into the customer base can be used in different ways, as we will discuss shortly, but the information is useful by itself. One of the most widely accepted business paradigms is "know your customer", and segmentation of the customer base allows for a perfect dissection of this paradigm. This understanding and its exploitation is what forms the basis of the other benefits of customer segmentation.

Target Marketing

The most visible reason for customer segmentation is the ability to focus marketing efforts effectively and efficiently. If a firm knows the different segments of its customer base, it can devise better marketing campaigns which are tailor-made for each segment. Consider the example of a travel company: if it knows that the major segments of its customers are budget travelers and luxury travelers, it can run two separate marketing campaigns, one for each group. One can focus on the higher value-for-money aspects of the company's offerings relevant to budget deals, while the other campaign deals with luxurious offerings. Although the example seems quite trivial, the same logic can be extended in a number of ways to arrive at better marketing practices. A good segmentation model allows for a better understanding of customer requirements and hence increases the chances of success for any marketing campaign developed by the organization.

Optimal Product Placement

A good customer segmentation strategy can also help the firm with developing or offering new products. This benefit is highly dependent on the way the segmentation process is leveraged. Consider a very simple example in which an online retailer finds out that a major section of its customers are buying makeup products together. This may prompt it to bundle those products together as a combined offering, which may increase sales margins for the retailer and make the buying process more streamlined for the customer.
finding latent customer segments customer segmentation process helps the organization with knowing its customer base an obvious side effect of any such practice is finding out which segment of customers it might be missing this can help in identifying untapped customer segments by focused on marketing campaigns or new product development higher revenue this is the most obvious requirement of any customer segmentation project the reason being that customer segmentation can lead to higher revenue due to the combined effects of all the advantages identified in this section strategies the easy answer to the question of "how to do customer segmentation?would be "in any way you deem fitand it would be perfectly acceptable answer the reason this is the right answeris because of the original definition of customer segmentation it is just way of differentiating between the customers it can be as simple as making groups of customers based on age groups or other attributes using manual process or as complex as using sophisticated algorithm for finding out those segments in an automated fashion since our book is all about machine learningwe describe how customer segmentation can be translated into core machine learning task the detailed code for this section is available in the notebook titled customer segmentation ipynb clustering we are dealing with an unlabeledunsupervised transactional dataset from which we want to find out customer segments thusthe most obvious method to perform customer segmentation is using unsupervised machine learning methods like clustering hencethis will be the method that we use for customer segmentation in this the method is as simple as collecting as much data about the customers as possible in the form of features or attributes and then finding out the different clusters that can be obtained from that data finallywe can find traits of customer segments by analyzing the characteristics of the clusters exploratory data analysis using exploratory data analysis is another way of finding out customer segments this is usually done by analysts who have good knowledge about the domain relevant to both products and customers it can be done flexibly to include the top decision points in an analysis for examplefinding out the range of spends by customers will give rise to customer segments based on spends we can proceed likewise on important attributes of customers until we get segments of customer that have interesting characteristics clustering vs customer segmentation in our use casewe will be using clustering based model to find out interesting segments of customers before we go on to modify our data for the model there is an interesting point we would like to clarify lot of people think that clustering is equivalent to customer segmentation although it is true that clustering is one of the most suitable techniques for segmentationit is not the only technique besides thisit is just method that is "appliedto extract segments
customer segmentation is just the task of segmenting customers it can be solved in several ways and it need not always be complex model clustering provides mathematical framework which can be leveraged for finding out such segment boundaries in the data clustering is especially useful when we have lot of attributes about the customer on which we can make different segments alsooften it is observed that clustering-based segmentation will be superior to an arbitrary segmentation process and it will often encompass the segments that can be devised using such an arbitrary process clustering strategy now that we have some information about what customer segmentation isvarious strategies and how it can be usefulwe can start with the process of finding out customer segments in our online retail dataset the dataset that we have consists of only the sales transactions of the customers and no other information about themi no other additional attributes usually in larger organizations we will usually have more attributes of information about the customers that can help in clustering howeverit will be interesting and definitely challenge to work with this limited attribute datasetso we will use rfm--recencyfrequency and monetary value--based model of customer value for finding our customer segments rfm model for customer value the rfm model is popular model in marketing and customer segmentation for determining customer' value the rfm model will take the transactions of customer and calculate three important informational attributes about each customerrecencythe value of how recently customer purchased at the establishment frequencyhow frequent the customer' transactions are at the establishment monetary valuethe dollar (or pounds in our casevalue of all the transactions that the customer made at the establishment combination of these three values can be used to assign value to the customer we can directly think of some desirable segments that we would want on such model for examplea high value customer is one who buys frequentlyjust bought something recentlyand spends high amount whenever he buys or shopsdata cleaning we hinted in the "exploratory data analysissection about the return transactions that we have in our dataset before proceeding with our analysis workflowwe will find out all such transactions and remove them another possibility is to remove the matching buying transactions also from the dataset but we will assume that those transactions are still important hence we will keep them intact another data cleaning operation is to separate transactions for particular geographical region onlyas we don' want data from germany' customers to affect the analysis for another country' customers the following snippet of code achieves both these tasks we focus on uk customerswhich are notably the largest segment (based on country!in [ ]separate data for one geography cs_df cs_df[cs_df country ='united kingdom'separate attribute for total amount cs_df['amount'cs_df quantity*cs_df unitprice remove negative or return transactions
cs_df cs_df[~(cs_df amount< )cs_df head(out[ ]invoiceno stockcode description quantity white hanging heart -light holder white metal lantern cream cupid hearts coat hanger knitted union flag hot water bottle red woolly hottie white heart invoicedate unitprice customerid country amount : : united kingdom : : united kingdom : : united kingdom : : united kingdom : : united kingdom the data now has only buying transactions from united kingdom we will now remove all the transactions that have missing value for the customerid field as all our subsequent transactions will be based on the customer entities in [ ]cs_df cs_df[~(cs_df customerid isnull())in [ ]cs_df shape out[ ]( the next step is creating the recencyfrequencyand monetary value features for each of the customers that exist in our dataset recency to create the recency feature variablewe need to decide the reference date for our analysis for our use casewe will define the reference date as one day after the last transaction in our dataset in [ ]refrence_date cs_df invoicedate max(refrence_date refrence_date datetime timedelta(days refrence_date out[ ]timestamp( : : 'we will construct the recency variable as the number of days before the reference date when customer last made purchase the following snippet of code will create this variable for us in [ ]cs_df['days_since_last_purchase'refrence_date cs_df invoicedate cs_df['days_since_last_purchase_num'cs_df['days_since_last_purchase'astype('timedelta [ ]'in [ ]customer_history_df cs_df groupby("customerid"min(reset_index([['customerid''days_since_last_purchase_num']customer_history_df rename(columns={'days_since_last_purchase_num':'recency'}inplace=true
before we proceedlet' examine how the distribution of customer recency looks for our data (see figure - customer_history_df recency mu np mean(customer_history_df recencysigma math sqrt(np var(customer_history_df recency)nbinspatches plt hist( facecolor='green'alpha= add 'best fitline mlab normpdfbinsmusigmal plt plot(binsy' --'linewidth= plt xlabel('recency in days'plt ylabel('number of transactions'plt title( '$\mathrm{histogramofsalesrecency}$'plt grid(truefigure - distribution of sales recency the histogram in figure - tells us that we have skewed distribution of sales recency with much higher number of frequent transactions and fairly uniform number of less recent transactions frequency and monetary value using similar methodswe can create frequency and monetary value variable for our dataset we will create these variables separately and then merge all the dataframes to arrive at the customer value dataset we will perform our clustering-based customer segmentation on this dataframe the following snippet will create both these variables and the final merged dataframe
in [ ]customer_monetary_val cs_df[['customerid''amount']groupby("customerid"sum(reset_index(customer_history_df customer_history_df merge(customer_monetary_valhow='outer'customer_history_df amount customer_history_df amount+ customer_freq cs_df[['customerid''amount']groupby("customerid"count(reset_index(customer_freq rename(columns={'amount':'frequency'},inplace=truecustomer_history_df customer_history_df merge(customer_freqhow='outer'the input dataframe for clustering will look like the dataframe depicted in figure - notice that we have added small figure to the customer monetary valueas we will be transforming our values to the log scale and presence of zeroes in our data may lead to an error figure - customer value dataframe data preprocessing once we have created our customer value dataframewe will perform some preprocessing on the data for our clusteringwe will be using the -means clustering algorithmwhich we discussed in the earlier one of the requirements for proper functioning of the algorithm is the mean centering of the variable values mean centering of variable value means that we will replace the actual value of the variable with standardized valueso that the variable has mean of and variance of this ensures that all the variables are in the same range and the difference in ranges of values doesn' cause the algorithm to not perform well this is akin to feature scaling another problem that you can investigate about is the huge range of values each variable can take this problem is particularly noticeable for the monetary amount variable to take care of this problemwe will transform all the variables on the log scale this transformationalong with the standardizationwill ensure that the input to our algorithm is homogenous set of scaled and transformed values an important point about the data preprocessing step is that sometimes we need it to be reversible in our casewe will have the clustering results in terms of the log transformed and scaled variable but to make inferences in terms of the original datawe will need to reverse transform all the variable so that we get back the actual rfm figures this can be done by using the preprocessing capabilities of python in [ ]from sklearn import preprocessing import math customer_history_df['recency_log'customer_history_df['recency'apply(math logcustomer_history_df['frequency_log'customer_history_df['frequency'apply(math logcustomer_history_df['amount_log'customer_history_df['amount'apply(math logfeature_vector ['amount_log''recency_log','frequency_log'
customer_history_df[feature_vectoras_matrix(scaler preprocessing standardscaler(fit(xx_scaled scaler transform(xthe previous code snippet will create log valued and mean centered version of our dataset we can visualize the results of our preprocessing by inspecting the variable with the widest range of values the following code snippet will help us visualize this in [ ] customer_history_df amount_log nbinspatches plt hist( facecolor='green'alpha= plt xlabel('log of sales amount'plt ylabel('probability'plt title( '$\mathrm{histogramoflogtransformedcustomermonetaryvalue}$'plt grid(trueplt show(the resulting graph is distribution resembling normal distribution with mean and variance of as clearly depicted in figure - figure - scaled and log transformed sales amount
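Since we noted that the preprocessing needs to be reversible, the small sketch below checks the round trip: applying the scaler's inverse transform and then exponentiating should recover the original recency, frequency, and amount values. It relies on the scaler, x_scaled, and feature_vector objects defined above (use whatever names you gave them in your own code).

# Sanity check that the preprocessing is reversible (a sketch; uses the
# scaler, x_scaled and feature_vector objects created above).
import numpy as np
import pandas as pd

recovered = np.exp(scaler.inverse_transform(x_scaled[:5]))
recovered_df = pd.DataFrame(recovered,
                            columns=[f.replace('_log', '') for f in feature_vector])
print(recovered_df)
# should match the first few rows of the original (un-logged) features
print(customer_history_df[['amount', 'recency', 'frequency']].head())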
let' try to visualize our three main features (rfand mon three-dimensional plot to see if we can understand any interesting patterns that the data distribution is showing from mpl_toolkits mplot import axes fig plt figure(figsize=( )ax fig add_subplot( projection=' 'xs =customer_history_df recency_log ys customer_history_df frequency_log zs customer_history_df amount_log ax scatter(xsyszss= ax set_xlabel('recency'ax set_ylabel('frequency'ax set_zlabel('monetary'figure - customer value dataframe the obvious patterns we can see from the plot in figure - is that people who buy with higher frequency and more recency tend to spend more based on the increasing trend in monetary value with corresponding increasing and decreasing trend for frequency and recencyrespectively do you notice any other interesting patterns
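One quick way to quantify the visual trend we just described is to compute pairwise correlations between the three log-scaled features. This is a small sketch on the dataframe built above; keep in mind that a negative correlation with recency is expected, since smaller recency values mean more recent purchases.

# Pairwise (rank) correlations between the log-scaled RFM features
# (a sketch using the customer_history_df built above).
print(customer_history_df[['recency_log', 'frequency_log', 'amount_log']]
      .corr(method='spearman'))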
Clustering for Segments

We will be using the K-means clustering algorithm for finding out clusters (or segments) in our data. It is one of the simplest clustering algorithms we can employ and hence it is widely used in practice. We will give you a brief primer of the algorithm before we go on to using it for finding segments in our data.

K-means Clustering

The K-means algorithm belongs to the partition-based (centroid-based) family of clustering algorithms. The steps that happen in the K-means algorithm for partitioning the data are as follows:

1. The algorithm starts with random initializations of the required number of centers. The "K" in K-means stands for the number of clusters.
2. In the next step, each data point is assigned to the center closest to it. The distance metric used in K-means clustering is the normal Euclidean distance.
3. Once the data points are assigned, the centers are recalculated by averaging the dimensions of the points belonging to each cluster.
4. The process is repeated with the new centers until we reach a point where the assignments become stable. In this case, the algorithm terminates.

We will adapt the code given in the scikit-learn documentation at http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html and use the silhouette score for finding out the optimal number of clusters during our clustering process. We leave it as an exercise for you to adapt that code and create the visualization shown in the following figure. You are encouraged to modify the code in the documentation to not only build the visualization, but also to capture the centers and the silhouette score of each cluster configuration in a dictionary, as we will need to refer to those for performing our analysis of the customer segments obtained. Of course, in case you find it overwhelming, you can always refer to the detailed code snippet in the customer segmentation ipynb notebook, and to the sketch below for one possible way to structure that dictionary.
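Here is a minimal sketch of how that dictionary could be built, assuming the scaled feature matrix x_scaled from the preprocessing step; the notebook's actual implementation (adapted from the scikit-learn example) also draws the silhouette plots, which we omit here.

# A minimal sketch of silhouette-based model selection for K-means
# (assumes x_scaled from the preprocessing step; the notebook's version
# also produces the silhouette plots shown in the figure).
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

cluster_centers = {}
for n_clusters in range(3, 6, 2):   # evaluate 3 and 5 clusters
    km = KMeans(n_clusters=n_clusters, random_state=42)
    labels = km.fit_predict(x_scaled)
    score = silhouette_score(x_scaled, labels)
    # capture everything we will need later for the cluster analysis
    cluster_centers[n_clusters] = {
        'cluster_center': km.cluster_centers_,
        'labels': labels,
        'silhouette_score': score
    }
    print('{} clusters: silhouette score = {:.4f}'.format(n_clusters, score))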
figure - silhouette analysis with three and five clusters in the visualization depicted in figure - we plotted the silhouette score of each cluster along with the center of each of the cluster discovered we will use this information in the next section on cluster analysis although we have to keep in mind that in several cases and scenariossometimes we may have to drop the mathematical explanation given by the algorithm and look at the business relevance of the results obtained cluster analysis before we proceed to the analysis of clusters such obtainedlet' look at the cluster center values after retransforming them to normal values from the log and scaled version the following code helps us convert the center values to the reversed transformed values in [ ]for in range( , , )print("for {number of clustersformat( )
cent_transformed scaler inverse_transform(cluster_centers[ ]['cluster_center']print(pd dataframe(np exp(cent_transformed),columns=feature_vector)print("silhouette score for cluster {is {}format(icluster_centers[ ]['silhouette_score'])print(for number of clusters amount_log recency_log frequency_log silhouette score for cluster is for number of clusters amount_log recency_log frequency_log silhouette score for cluster is when we look at the results of the clustering processwe can infer some interesting insights consider the three-cluster configuration and try to understand the following insights we get three clusters with stark differences in the monetary value of the customer cluster is the cluster of high value customer who shops frequently and is certainly an important segment for each business in the similar way we obtain customer groups with low and medium spends in clusters with labels and respectively frequency and recency correlate perfectly to the monetary value based on the trend we talked about in figure - (high monetary-low recency-high frequencythe five-cluster configuration results are more surprisingwhen we go looking for more segmentswe find out that our high valued customer base is comprised of two subgroupsthose who shop often and with high amount (represented by cluster those who have decent spend but are not as frequent (represented by cluster this is in direct conflict with the result we obtain from the silhouette score matrix which says the fivecluster segments are less optimal then the three cluster segments of courseremember you must not strictly go after mathematical metrics all the time and think about the business aspects too besides thisthere could be more insights that are uncovered as you visualize the data based on these segments which might prove that in-fact the three-cluster segmentation was far better for instanceif you check the right-side plot in figure - you can see that segments with five clusters have too much overlap among themas compared to segments with three clusters
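Another quick check that can inform this comparison is the size of each segment. The sketch below counts the customers per cluster for both configurations, reusing the labels captured in the cluster_centers dictionary (an assumption based on how that dictionary is built in the notebook).

# Segment sizes for the three- and five-cluster configurations (a sketch;
# reuses the labels stored in the cluster_centers dictionary).
import numpy as np

for k in (3, 5):
    sizes = np.bincount(cluster_centers[k]['labels'])
    print('{} clusters -> customers per segment: {}'.format(k, list(sizes)))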
cluster descriptions on the basis of eyeballing the cluster centerswe can figure out that we have good difference in the customer value in the segments as defined in terms of recencyamount and frequency to further drill down on this point and find out the quality of these differencewe can label our data with the corresponding cluster label and then visualize these differences we will do this visualization for probably one of the most important customer value indicatorsthe total dollar value sales to arrive at such distinction based summary computationswe will first label each data row in our customer summary dataframe with the corresponding label as returned by our clustering algorithm note that you have to modify the code you are using if you want to try different configuration of let' say two or four clusters we will have to make changes so that we capture the labels for each different cluster configuration we encourage you to try other cluster configurations to see if you get even better segmentsthe following code will extract the clustering label and attach it with our customer summary dataframe labels cluster_centers[ ]['labels'customer_history_df['num_cluster _labels'labels labels cluster_centers[ ]['labels'customer_history_df['num_cluster _labels'labels once we have the labels assigned to each of the customersour task is simple now we want to find out how the summary of customer in each group is varying if we can visualize that information we will able to find out the differences in the clusters of customers and we can modify our strategy on the basis of those differences we have used lot of matplotlib and seaborn so farwe will be using plotly in this section for creating some interactive plots that you can play around with in your jupyter notebook##note while plotly provides excellent interactive visualizations in jupyter notebooksyou might come across some notebook rendering issues that come up as popups when you open the notebook to fix this problemupgrade your nbformat library by running the conda update nbformat command and re-open the notebook the problem should disappear the following code leverages plotly and will take the cluster labels we got for the configuration of five clusters and create boxplots that will show how the medianminimummaximumhighestand lowest values are varying in the five groups note that we want to avoid the extremely high outlier values of each groupas they will interfere in making good observation (due to noisearound the central tendencies of each cluster so we will restrict the data such that only data points which are less than th percentile of the cluster is used this will give us good information about the majority of the users in that cluster segment the following code will help us create this plot for the total sales value import plotly as py import plotly graph_objs as go py offline init_notebook_mode(x_data ['cluster ','cluster ','cluster ','cluster ''cluster 'cutoff_quantile field_to_plot 'amounty =customer_history_df[customer_history_df['num_cluster _labels']== ][field_to_plotvalues [ <np percentile( cutoff_quantile) =customer_history_df[customer_history_df['num_cluster _labels']== ][field_to_plotvalues
[ <np percentile( cutoff_quantile) =customer_history_df[customer_history_df['num_cluster _labels']== ][field_to_plotvalues [ <np percentile( cutoff_quantile) =customer_history_df[customer_history_df['num_cluster _labels']== ][field_to_plotvalues [ <np percentile( cutoff_quantile) =customer_history_df[customer_history_df['num_cluster _labels']== ][field_to_plotvalues [ <np percentile( cutoff_quantile)y_data [ , , , , colors ['rgba( )''rgba( )''rgba( )''rgba( )''rgba( )''rgba( )'traces [for xdydcls in zip(x_datay_datacolors)traces append(go boxy=ydname=xdboxpoints=falsejitter= whiskerwidth= fillcolor=clsmarker=dict(size= ,)line=dict(width= ))layout go layouttitle='difference in sales {from cluster to clusterformat(field_to_plot)yaxis=dictautorange=trueshowgrid=truezeroline=truedtick= gridcolor='black'gridwidth= zerolinecolor='rgb( )'zerolinewidth= )margin=dict( = , = = = )paper_bgcolor='white'plot_bgcolor='white'showlegend=false fig go figure(data=traceslayout=layoutpy offline iplot(fig
figure - difference in sales amount values across the five segments let' also take look at the plot that is generated as result of this code snippet in figure - we can see that the clusters and have higher average sales amountthus being the highest spenders although we don' see much difference in the sales values of clusters and we do see markedly smaller sales amount in cluster this gives us an indication that we can merge the candidates of clusters and togetherat least on the basis of sales amount you can plot similar figures for recency and frequency also to figure out differences in the cluster on the basis of those values detailed code and visualizations are present in the notebook for threeand five-cluster based segments plotly enables us to interact with the plots to see the central tendency values in each boxplot in the notebook we show the difference in sales amount across three segments based on the three-cluster configuration just so you can compare it with figure - the detailed visualization is shown in figure - which talks about difference in the sales\revenue across the three segments it is clear that this is much more distinguishable and we also have lesser overlap as compared to figure - where clusters and were very similar this doesn' mean that the previous method was wrongit' just one of the dimensions where some segments were similar to each other
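The notebook uses plotly for these interactive plots, but if you just want a quick static comparison of recency or frequency across the segments (as suggested above), a seaborn boxplot on the labeled dataframe is enough. The following is only a sketch: it assumes the five-cluster label column created earlier (called num_cluster5_labels here; use whatever name you gave it) and trims each segment at its 80th percentile, as we did above. With that noted, let's look at the corresponding three-segment sales plot.

# A simpler static alternative to the plotly boxplots (a sketch; assumes a
# cluster label column such as 'num_cluster5_labels' created earlier).
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

def plot_segment_boxplot(df, field, label_col='num_cluster5_labels',
                         cutoff_quantile=80):
    # drop the extreme outliers within each segment before plotting
    trimmed = df.groupby(label_col, group_keys=False).apply(
        lambda g: g[g[field] < np.percentile(g[field], cutoff_quantile)])
    sns.boxplot(x=label_col, y=field, data=trimmed)
    plt.title('Difference in {} across segments'.format(field))
    plt.show()

plot_segment_boxplot(customer_history_df, 'recency')
plot_segment_boxplot(customer_history_df, 'frequency')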
figure - difference in sales amount values across the three segments we can further improve the quality of clustering by adding relevant features to the dataset that we have created often firms will buy data regarding their customers from external data vendors and use it to enhance the segmentation process we had limitation of only having around year worth of transaction data but even mid-size organization may have multiple years of transaction data which can improve the results another dimension to explore can be trying out different algorithms for performing the segmentation for instance hierarchical clusteringwhich we explored in some of the earlier good segmentation process will encompass all these avenues to arrive at optimal segments that provide valuable insight we encourage you to get creative with the process and build your own examples the important caveat from this analysis is that when it comes to correlating the actual value of results with the mathematical metrics we cannot always rely on the metrics we need to include this habit of including business metrics and domain insights in our modeling process as often times this becomes the difference between an implemented high value project and forgotten data-focused solution once we obtain these resultswe can further discuss them with the marketing team of the organization to come up with appropriate practices for each of the segment identified cross selling what is cross sellingin the simplest termscross selling is the ability to sell more products to customer by analyzing the customer' shopping trends as well as general shopping trends and patterns which are in common with the customer' shopping patterns this simple definition captures the essential idea of cross sellingbut it is not as descriptive as we would like it to be we will illustrate the idea with an example say we are concerned about our health (which everyone should beand decide to buy protein supplement on our favorite -commerce site in most scenariosthe moment you get to your product pageyou will have section that will tell you other products that you can buy along with the product(sof your choice see figure -
figure - cross selling example

More often than not, these recommended products would be very appealing. For example, if I am in the market for a protein supplement, it will definitely be a good idea for me to buy a vitamin supplement too. The retailer will often offer you a bundle of products with some attractive offer, and it is highly likely that we will end up buying the bundled products instead of just the original item. This is the simple but powerful concept of cross selling: we research the customer transactions, find out potential additions to the customer's original needs, and offer them to the customer as suggestions, in the hope and intent that they buy them, benefiting both the customer as well as the retail establishment. Cross selling is ubiquitous in both the online and offline retail worlds. The simplicity and the effectiveness of the idea make it an essential and powerful marketing tool for all types of retailers. The idea of cross selling can be extended to any organization, irrespective of whether it is an online or offline retailer or whether it is selling its products to end users or wholesalers. In this section, we explore association rule-mining, a powerful technique that can be used for cross selling, then we illustrate the concept of market basket analysis on a toy dataset, and finally we apply the same concepts to our retail transactions dataset.

Market Basket Analysis with Association Rule-Mining

Before we go on to understand how to generate association rules from our transactional data, let's try to understand how we will use those rules. A famous story in data analytics circles is the story of "beer and diapers". The basic crux of the story is that a major retailer, upon analyzing its customer transaction behavior, discovered a strong association between sales of beer and diapers. The retailer was able to exploit this association by moving the beer section close to the diapers section, leading to higher sales volume. (The origins of the story may have some fact to them, but the whole concept is debatable; we encourage you to go through the discussions available online about this story.) Although the beer and diaper story may or may not have been a myth, the concept of finding sales associations in customer behavior is an important and inspiring one. Suppose we can mathematically capture the significance of these associations; then it would be a good idea to try to exploit the rules that are likely to be correct. The whole concept of association rule-mining is based on the idea that customer purchase behavior has a pattern which can be exploited for selling more items to the customer in the future. An association rule usually has the following structure:

{item 1, item 2, item 3, ...} => {item K}
This rule can be read in the obvious manner: when the customer bought the items on the left hand side (LHS) of the rule, he is likely to also buy item K. In the later sections, we define mathematical metrics that capture the strength of such rules. We can use these rules in a variety of ways. The most obvious way is to develop bundles of products which make it convenient for the customer to buy these items together. Another way is to bundle products along with some discounts on other relevant products in the bundle, hence ensuring that the customer becomes more likely to buy more items. Some less obvious ways to use association rules are for designing better web site navigational structure, intrusion detection, bioinformatics, and so on.

Association Rule-Mining Basics

Before proceeding to explore the association rules in our dataset, we will go through some essential concepts for association rule-mining. These terms and concepts will help you in the later analysis and also in understanding the rules that the algorithm will generate. Consider the following example transaction set with some toy transaction data.

Trans ID    Items
1           {milk, bread}
2           {butter}
3           {beer, diaper}
4           {milk, bread, butter}
5           {bread}

Each row in the table is a transaction; for example, the customer bought milk and bread in the first transaction. Following are some vital concepts pertaining to association rule-mining:

Itemset: An itemset is just a collection of one or more items that occur together in a transaction. For example, {milk, bread} is an example of an itemset.

Support: Support is defined as the fraction of transactions in which an itemset appears in the dataset. Mathematically, it is defined as:

supp({beer, diaper}) = (number of transactions with beer and diaper) / (total transactions)

In the previous example, support(beer, diaper) = 1/5 = 0.2.

Confidence: Confidence is a measure of how often a rule is found to hold in the dataset. For a rule which states {beer} => {diaper}, the confidence is mathematically defined as:

confidence({beer} => {diaper}) = supp({beer, diaper}) / supp({beer})
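To make these definitions concrete, here is a tiny self-contained sketch that computes the support and confidence of the {beer} => {diaper} rule directly on the toy transactions above.

# Support and confidence for {beer} => {diaper} on the toy transactions
transactions = [
    {'milk', 'bread'},
    {'butter'},
    {'beer', 'diaper'},
    {'milk', 'bread', 'butter'},
    {'bread'},
]
n = len(transactions)

def support(itemset):
    # fraction of transactions containing every item in the itemset
    return sum(1 for t in transactions if itemset <= t) / n

supp = support({'beer', 'diaper'})                      # 1/5 = 0.2
conf = support({'beer', 'diaper'}) / support({'beer'})  # 0.2 / 0.2 = 1.0
print('support = {:.2f}, confidence = {:.2f}'.format(supp, conf))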
Lift: The lift of a rule is defined as the ratio of the observed support to the support that would be expected if the elements of the rule were independent. For the previous set of transactions, if a rule is defined as X => Y, then the lift of the rule is defined as:

lift(X => Y) = supp(X ∪ Y) / (supp(X) * supp(Y))

Frequent itemset: Frequent itemsets are itemsets whose support is greater than a user-defined support threshold.

FP Growth

The most famous algorithm for association rule-mining is the Apriori algorithm, for which you will find a lot of code and resources on the web and in standard data mining literature. However, here we will use a different and more efficient algorithm, the FP growth algorithm, for finding our association rules. The major bottleneck in any association rule-mining algorithm is the generation of frequent itemsets. If the transaction dataset has d unique products, then potentially we have 2^d possible itemsets. The Apriori algorithm will first generate these candidate itemsets and then proceed to finding the frequent ones among them. This is a huge performance bottleneck, as even for a modest number of unique products the possible number of itemsets is enormous, which makes the Apriori algorithm prohibitively computationally expensive. The FP growth algorithm is superior to the Apriori algorithm as it doesn't need to generate all the candidate itemsets. The algorithm uses a special data structure that helps it retain itemset association information. An example of this data structure is depicted in the following figure.

figure - an example of an fp-tree
we will not go into detailed mathematical descriptions of the algorithm hereas the intent is to not to keep this section math heavy but to focus on how it can be leveraged to find patterns in this data howeverwe will explain it in brief so you can understand the core concepts in this method the fp growth algorithm uses divide-and-conquer strategy and leverages special data structure called the fp-treeas depicted in figure - to find frequent itemsets without generating all itemsets the core steps of the algorithm are as follows take in the transactional database and create an fp-tree structure to represent frequent itemsets divide this compressed representation into multiple conditional datasets such that each one is associated with frequent pattern mine for patterns in each such dataset so that shorter patterns can be recursively concatenated to longer patternshence making it more efficient if you are interested in finding about ityou can refer to the wikibooks link at org/wiki/data_mining_algorithms_in_r/frequent_pattern_mining/the_fp-growth_algorithmwhich talks about fp growth and the fp tree structure in detail association rule-mining in action we will illustrate association rule-mining using the famous grocery dataset the dataset is available by default in the language' base package to use in in pythonyou can obtain it from stedy/machine-learning-with- -datasets/blob/master/groceries csv or even from our official github repository mentioned at the start of this this dataset consists of collection of transactions that are sourced from grocery retailer we will use this data as the basis of our analysis and build our rule-mining work flow using this data once we have grasped the basics of association rule-mining on the grocery datasetwe will leave it as an exercise for you to apply the same concepts on our transaction dataset that we used in the customer segmentation section remember to load the following dependencies before getting started import csv import pandas as pd import matplotlib pyplot as plt import orange from orange data import domaindiscretevariablecontinuousvariable from orangecontrib associate fpgrowth import %matplotlib inline check out the "mining rulessection for details on how to install the orange framework dependencies the code for this section is available in the cross selling ipynb notebook exploratory data analysis the grocery dataset that we mentioned earlier is arranged so that each of the line occurring in the dataset is transaction the items given in each row are comma-separated and are the items in that particular transaction take look at the first few lines of the dataset depicted in figure -
figure - grocery dataset transactions the first observation that we make from this dataset is that it is not available in completely structuredeasy-to-analyze format this limitation will mean that the first thing we will have to do is to write custom code that will convert the raw file into data structure we can use since we have done most of our analysis until now using the pandas dataframewe will convert this data into similar data structure the following code snippet will perform the conversion for us grocery_items set(with open("grocery_dataset txt"as freader csv reader(fdelimiter=","for iline in enumerate(reader)grocery_items update(lineoutput_list list(with open("grocery_dataset txt"as freader csv reader(fdelimiter=","for iline in enumerate(reader)row_val {item: for item in grocery_itemsrow_val update({item: for item in line}output_list append(row_valgrocery_df pd dataframe(output_listin [ ]grocery_df shape out[ ]( the conversion gives us dataframe of dimension (num_transactiontotal_items)where each transaction row has columns corresponding to its constituent items as for examplefor row in figure - we will have the column for whole milk as and the rest of columns will be all although this data structure is sparsemeaning it has lot of zerosour framework that extracts association rules will take care of this sparseness before we proceed to building association rules on our datasetwe will explore some salient features of our dataset we already know that we have , total transactions and total of items in the dataset but what are the top items that occur in the dataset and how much of the total sales they account for we can plot simple histogram that will help us extract this information in [ ]total_item_count sum(grocery_df sum()print(total_item_countitem_summary_df grocery_df sum(sort_values(ascending falsereset _index(head( = item_summary_df rename(columns={item_summary_df columns[ ]:'item_name'
item_summary_df columns[ ]:'item_count'}inplace=trueitem_summary_df head( out[ ]item_name item_count whole milk other vegetables rolls/buns soda yogurt for creating the histogramwe will create summary dataframe using the previous code this tells us that we have total of , items occurring in total in all those transactions and we also see the top five most sold items let' use this dataframe to plot the top most sold items the following snippet of code will help us create the required bar graph objects (list(item_summary_df['item_name'head( = ))y_pos np arange(len(objects)performance list(item_summary_df['item_count'head( = )plt bar(y_posperformancealign='center'alpha= plt xticks(y_posobjectsrotation='vertical'plt ylabel('item count'plt title('item sales distribution'the bar graph depicting the item sales is depicted in figure - it indicates that surprisingly large share of total items is claimed by only these items
figure - grocery dataset top items based on sales let' also find out how much percentage of total sales is explained by these items alone we will use the cumulative sum function offered by pandas (cumsumto find this out we will create two columns in our dataframe one will tell how much percentage of total sales can be attributed to particular item and the other will keep cumulative sum of this sales percentage item_summary_df['item_perc'item_summary_df['item_count']/total_item_count item_summary_df['total_perc'item_summary_df item_perc cumsum(item_summary_df head( item_name item_count item_perc total_perc whole milk other vegetables rolls/buns soda yogurt in [ ]item_summary_df[item_summary_df total_perc < shape out[ ](
this shows us that the top five items are responsible for of the entire sales and only the top items are responsible for over of the salesthis is important for usas we don' want to find association rules for items which are bought very infrequently with this information we can limit the items we want to explore for creating our association rules this also helps us in keeping our possible itemset number to manageable figure mining rules we will be using the orange and the orange -associate frameworkswhich can be installed using the commands conda install orange and pip install orange -associate the orange -associate package contains the implementation of fp growth provided by the group who developed for the orange data mining packagethe bioinformatics laboratory at the university of ljubljanaslovenia (##note we encourage you to experiment with the orange packagewhich is available at biolab siit is gui-driven data mining framework written in python and highly conducive for learning data analysis in an interactive way before we go on using the package for finding association ruleswe will discuss about the way data is represented in orange library the data representation is little tricky but we will help you modify the existing data into the format required by orange primarily we will focus on how to convert our pandas dataframes to the orange table data structure orange table data structure the table data structure is the primary way to represent any tabular data in orange although it is similar in some way to numpy array or pandas dataframeit differs from them in the way it stores metadata about the actual data in our case we can easily convert our pandas dataframe to the table data structure by providing the metadata about our columns we need to define the domain for each of our variables the domain means the possible set of values that each of our variables can use this information will be stored as metadata and will be used in later transformation of the data as our columns are only having binary values-- either or --we can easily create the domain by using this information the following code snippet helps us convert our dataframe to an orange table from orange data import domaindiscretevariablecontinuousvariable from orangecontrib associate fpgrowth import input_assoc_rules grocery_df domain_grocery domain([discretevariable make(name=itemvalues=[' '' ']for item in input_assoc_rules columns]data_gro_ orange data table from_numpy(domain=domain_groceryx=input_assoc_rules as_matrix(),ynonehere we defined the domain of our data by specifying each variable as discretevariable having values as ( then using this domainwe created our table structure for our data
using the fp growth algorithm now we have all the pieces required to perform our rule-mining but before proceedingwe want to take care of one more important aspect of the analysis we saw in the earlier section how only handful of items are responsible for bulk of our sales so we want to prune our dataset to reflect this information for this we have created function prune_dataset (check out the notebook)which will help us reduce the size of our dataset based on our requirements the function can be used for performing two types of pruningpruning based on percentage of total salesthe parameter total_sales_perc will help us select the number of items that will explain the required percentage of sales the default value is or pruning based on ranks of itemsanother way to perform the pruning is to specify the starting and the ending rank of the items for which we want to prune our dataset by defaultwe will only look for transactions which have at least two itemsas transactions with only one item are counter to the whole concept of association rule-mining the following code snippet will help us select only that subset of data which explain of the total sales by leveraging our pruning function output_dfitem_counts prune_dataset(input_df=grocery_dflength_trans= ,total_sales_perc= print(output_df shapeprint(list(output_df columns)( ['whole milk''other vegetables''rolls/buns''soda''yogurt''bottled water''root vegetables''tropical fruit''shopping bags''sausage''pastry''citrus fruit''bottled beer'so we find out that we have only items responsible for of sales and transactions that have those items along with other items and we can also see what those items are the next step is to convert this selected data into the required table data structure input_assoc_rules output_df domain_grocery domain([discretevariable make(name=item,values=[' '' ']for item in input_assoc_rules columns]data_gro_ orange data table from_numpy(domain=domain_groceryx=input_assoc_rules as_matrix()ynonedata_gro_ _enmapping onehot encode(data_gro_ include_class=falsethe new addition to the previous code is the last line this is required for coding our input so that the entire domain is represented as binary variables this will complete all the parsing and data manipulation required for our rule-mining phewthe final step is creating our rules we need to specify two pieces of information for generating our rulessupport and confidence we have already defined both of them conceptually earlierso we will not be defining them again an important piece of information is to start with higher supportas lower support will mean higher number of frequent itemsets and hence longer execution time we will specify minsupport of -- transactions at least--and see the number of frequent itemsets that we get before we specify confidence and generate our rules min_support print("num of required transactions "int(input_assoc_rules shape[ ]*min_support)num_trans input_assoc_rules shape[ ]*min_support itemsets dict(frequent_itemsets(data_gro_ _enmin_support=min_support)
num of required transactions in [ ]len(itemsetsout[ ] so we get whopping , itemsets for support of only %this will increase exponentially if we decrease the support or if we increase the number of items in our dataset the next step is specifying confidence value and generating our rules we have written code snippet that will take confidence value and generate the rules that fulfill our specified support and confidence criteria the rules generated are then decoded using the mapping and variable names orange -associate also provides helper function that will help us extract metrics about each of these rules the following code snippet will perform rule generation and decoding of rulesand then compile it all in neat dataframe that we can use for further analysis confidence rules_df pd dataframe(if len(itemsets rules [(pqsuppconffor pqsuppconf in association_rules(itemsetsconfidenceif len( = names {item'{}={}format(var namevalfor itemvarval in onehot decode(mappingdata_gro_ mapping)eligible_ante [ for , in names items(if endswith(" ") input_assoc_rules shape[ rule_stats list(rules_stats(rulesitemsetsn)rule_list_df [for ex_rule_frm_rule_stat in rule_statsante ex_rule_frm_rule_stat[ cons ex_rule_frm_rule_stat[ named_cons names[next(iter(cons))if named_cons in eligible_anterule_lhs [names[ ][:- for in ante if names[iin eligible_anteante_rule 'join(rule_lhsif ante_rule and len(rule_lhs)> rule_dict {'supportex_rule_frm_rule_stat[ ]'confidenceex_rule_frm_rule_stat[ ]'coverageex_rule_frm_rule_stat[ ]'strengthex_rule_frm_rule_stat[ ]'liftex_rule_frm_rule_stat[ ]'leverageex_rule_frm_rule_stat[ ]'antecedent'ante_rule'consequent':named_cons[:- rule_list_df append(rule_dictrules_df pd dataframe(rule_list_dfprint("raw rules data frame of {rules generatedformat(rules_df shape[ ])if not rules_df emptypruned_rules_df rules_df groupby(['antecedent','consequent']max(reset_index(elseprint("unable to generate any rule"raw rules data frame of rules generated
The output of this code snippet consists of the association rules dataframe that we can use for our analysis. You can play around with the item number, consequent, antecedent, support, and confidence values to generate different rules. Let's take some sample rules generated using the transactions that explain the chosen share of total sales, the min-support set earlier, and the confidence threshold specified above. Here, we have collected the rules having maximum lift for each of the items that can be a consequent (that appear on the right side) by using the following code:

(pruned_rules_df[['antecedent', 'consequent',
                  'support', 'confidence', 'lift']]
     .groupby('consequent')
     .max()
     .reset_index()
     .sort_values(['lift', 'support', 'confidence'], ascending=False))

figure - association rules on the grocery dataset

Let's interpret the first rule, which states that:

{yogurt, whole milk, tropical fruit} => {root vegetables}

The pattern that the rule states is easy to understand: people who bought yogurt, whole milk, and tropical fruit also tend to buy root vegetables. Let's try to understand the metrics. The support of the rule tells us in what fraction of the transactions in the dataset all of these items appear together. The confidence of the rule tells us that, in that percentage of the transactions where the antecedent items occurred, we also had the consequent in the transaction, i.e., that often, customers who bought the left side items also bought root vegetables. Another important metric shown in the figure is lift. The lift value of this rule means that the probability of finding root vegetables in the transactions which have yogurt, whole milk, and tropical fruit is greater than the normal probability
of finding root vegetables in the previous transactions ( typicallya lift value of indicates that the probability of occurrence of the antecedent and consequent together are independent of each other hencethe idea is to look for rules having lift much greater than in our caseall the previously mentioned rules are good quality rules this is significant piece of informationas this can prompt retailer to bundle specific products like these together or run marketing scheme that offers discount on buying root vegetables along with these other three products we encourage you to try similar analyses with your own datasets in the future and also with the online retail transactions dataset that we used for our market segmentation case study considering the dataset online retail from market segmentationthe workflow for that particular analysis will be very similar the only difference among these two datasets is the way in which they are represented you can leverage the following code snippets to analyze patterns from the united kingdom in that dataset cs_mba pd read_excel(io= 'online retail xlsx'cs_mba_uk cs_mba[cs_mba country ='united kingdom'remove returned items cs_mba_uk cs_mba_uk[~(cs_mba_uk invoiceno str contains(" "=true)cs_mba_uk cs_mba_uk[~cs_mba_uk quantity< create transactional database items list(cs_mba_uk description unique()grouped cs_mba_uk groupby('invoiceno'transaction_level_df_uk grouped aggregate(lambda xtuple( )reset_index([['invoiceno','description']transaction_dict {item: for item in itemsoutput_dict dict(temp dict(for rec in transaction_level_df_uk to_dict('records')invoice_num rec['invoiceno'items_list rec['description'transaction_dict {item: for item in itemstransaction_dict update({item: for item in items if item in items_list}temp update({invoice_num:transaction_dict}new [ for , in temp items()tranasction_df pd dataframe(newdel(tranasction_df[tranasction_df columns[ ]]once you build the transactional datasetyou can choose your own configuration based on which you want to extract and mine rules for instancethe following code mines for patterns on the top most sold products with min-support of (min transactions and minimum confidence of output_df_uk_nitem_counts_n prune_dataset(input_df=tranasction_dflength_trans= start_item= end_item= input_assoc_rules output_df_uk_n domain_transac domain([discretevariable make(name=item,values=[' '' ']for item in input_assoc_rules columns]data_tran_uk orange data table from_numpy(domain=domain_transacx=input_assoc_rules as_matrix(),ynonedata_tran_uk_enmapping onehot encode(data_tran_ukinclude_class=true
support num_trans input_assoc_rules shape[ ]*support itemsets dict(frequent_itemsets(data_tran_uk_ensupport)confidence rules_df pd dataframe(rest of the code similar to what we did earlier the rest of the analysis can be performed using the same workflow which we used for the groceries dataset feel free to check out the cross selling ipynb notebook in case you get stuck figure - shows some patterns from the previous analysis on our online retail dataset figure - association rules on the online retail dataset for uk customers it is quite evident from the metrics in figure - that these are excellent quality rules we can see that items relevant to baking are purchased together and items like bags are purchased together try changing the previously mentioned parameters and see if you can find more interesting patternssummary in this we read about some simple yet high-value case studies the crux of the was to realize that the most important part about any analytics or machine learning-based solution is the value it can deliver to the organization being an analytics or data science professionalwe must always try to balance the value aspect of our work with its technical complexity we learned some important methods that have the potential to directly contribute to the revenue generation of organizations and retail establishments we looked at ideas pertaining to customer segmentationits impactand explored novel way of using unsupervised learning to find out customer segments and view interesting patterns and behavior cross selling introduced us to the world of pattern-mining and rule-based frameworks like association rulemining and principles like market basket analysis we utilized framework that was entirely different from the ones that we have used until now and understood the value of data parsing and pre-processing besides regular modeling and analysis in subsequent of this bookwe increase the technical complexity of our case studiesbut we urge you to always have an eye out for defining the value and impact of these solutions stay tuned
Analyzing Wine Types and Quality

In the last chapter, we looked at specific case studies leveraging unsupervised machine learning techniques like clustering and rule-mining frameworks. In this chapter, we focus on some more case studies relevant to supervised machine learning algorithms and predictive analytics. We have looked at classification based problems in an earlier chapter, where we built sentiment classifiers based on text reviews to predict the sentiment of movie reviews. In this chapter, the problem at hand is to analyze, model, and predict the type and quality of wine using its physicochemical attributes. Wine is a pleasant tasting alcoholic beverage, loved by millions across the globe. Indeed, many of us love to celebrate our achievements or even unwind at the end of a tough day with a glass of wine. The following quote from Francis Bacon should whet your appetite about wine and its significance:

"Age appears best in four things: old wood to burn, old wine to drink, old friends to trust, and old authors to read." --Francis Bacon

Regardless of whether you like and consume wine or not, it will definitely be interesting to analyze the physicochemical attributes of wine and understand their relationships and significance with wine quality and types. Since we will be trying to predict wine types and quality, the supervised machine learning task involved here is classification. In this chapter, we look at various ways to analyze and visualize wine data attributes and features. We focus on univariate as well as multivariate analyses. For predicting wine types and quality, we will be building classifiers based on state-of-the-art supervised machine learning techniques, including logistic regression, deep neural networks, decision trees, and ensemble models like random forests and gradient boosting, to name a few. Special emphasis is on analyzing, visualizing, and modeling data such that you can emulate similar principles on your own classification based real-world problems in the future. We would like to thank the UC Irvine ML Repository for the dataset. Special mention also goes to DataCamp and Karlijn Willems, a notable data science journalist, who has done some excellent work in analyzing the wine quality dataset and has written an article on her findings; we have taken a couple of analyses and explanations from this article as an inspiration for this chapter, and Karlijn has been more than helpful in sharing the same with us.

Problem Statement

"Given a dataset, or in this case two datasets that deal with the physicochemical properties of wine, can you guess the wine type and quality?" This is the main objective of this chapter. Of course, this doesn't mean the entire focus will be only on leveraging machine learning to build predictive models. We will process, analyze, visualize, and model our dataset based on standard machine learning and data mining workflow models like the CRISP-DM model.
the datasets used in this are available in the very popular uci machine learning repository under the name of wine quality data set you can access more details at ml/datasets/wine+qualitywhich gives you access to the raw datasets as well as details about the various features in the datasets there are two datasetsone for red wines and the other for white wines to be more specificthe wine datasets are related to red and white vinho verde wine samplesfrom the north of portugal another file in the same web page talks about the details for the datasets including attribute information credits for the datasets go out to corteza cerdeiraf almeidat matosand reis and you can get more details in their paper"modeling wine preferences by data mining from physicochemical propertiesin decision support systemselsevier ( ) - to summarize our main objectiveswe will be trying to solve the following major problems by leveraging machine learning and data analysis on our wine quality dataset predict if each wine sample is red or white wine predict the quality of each wine samplewhich can be lowmediumor high let' get started by setting up the necessary dependencies before moving on to accessing and analyzing our datasetting up dependencies we will be using several python libraries and frameworks specific to machine learning and deep learning just like in our previous you need to make sure you have pandasnumpyscipyand scikit-learn installedwhich will be used for data processing and machine learning we will also use matplotlib and seaborn extensively for exploratory data analysis and visualizations deep learning frameworks used in this include keras with the tensorflow backendbut you can also use theano as the backend if you choose to do so we also use the xgboost library for the gradient boosting ensemble model utilities related to supervised model fittingpredictionand evaluation are present in model_evaluation_utils pyso make sure you have these modules in the same directory and the other python files and jupyter notebooks for this which you can obtain from the relevant directory for this on github at com/dipanjans/practical-machine-learning-with-python getting the data the datasets will be available along with the code files for this in the github repository for this book at folder for the following files refer to the datasets of interest the file named winequality-red csv contains the dataset pertaining to records of red wine samples the file named winequality-white csv contains the dataset pertaining to records of white wine samples the file named winequality names consists of detailed information and the data dictionary pertaining to the datasets you can also download the same data from wine+quality if needed once you have the csv fileyou can easily load it in python using the read_ csvutility function from pandas
exploratory data analysis standard machine learning and analytics workflow recommend processingcleaninganalyzingand visualizing your data before moving on toward modeling your data we will also follow the same workflow we used in all our other you can refer to the python file titled exploratory_data_analysis py for all the code used in this section or use the jupyter notebook titled exploratory data analysis ipynb for more interactive experience process and merge datasets let' load the following necessary dependencies and configuration settings import pandas as pd import matplotlib pyplot as plt import matplotlib as mpl import numpy as np import seaborn as sns %matplotlib inline we will now process the datasets (red and white wineand add some additional variables that we would want to predict in future sections the first variable we will add is wine_typewhich would be either red or white wine based on the dataset and the wine sample the second variable we will add is quality_labelwhich is qualitative measure of the quality of the wine sample based on the quality variable score the rules used for mapping quality to quality_label are described as follows wine quality scores of and are mapped to low quality wines under the quality_label attribute wine quality scores of and are mapped to medium quality wines under the quality_label attribute wine quality scores of and are mapped to high quality wines under the quality_label attribute after adding these attributeswe also merge the two datasets for red and white wine together to create single dataset and we use pandas to merge and shuffle the records of the data frame the following snippet helps us in achieving this in [ ]white_wine pd read_csv('winequality-white csv'sep=';'red_wine pd read_csv('winequality-red csv'sep=';'store wine type as an attribute red_wine['wine_type''redwhite_wine['wine_type''whitebucket wine quality scores into qualitative quality labels red_wine['quality_label'red_wine['quality'apply(lambda value'lowif value < else 'mediumif value < else 'high'red_wine['quality_label'pd categorical(red_wine['quality_label']categories=['low''medium''high']white_wine['quality_label'white_wine['quality'apply(lambda value'low
if value < else 'mediumif value < else 'high'white_wine['quality_label'pd categorical(white_wine['quality_label']categories=['low''medium''high']merge red and white wine datasets wines pd concat([red_winewhite_wine]re-shuffle records just to randomize data points wines wines sample(frac= random_state= reset_index(drop=trueour objective in future sections would be to predict wine_type and quality_label based on other features in the wines dataset let' now try to understand more about our dataset and its features understanding dataset features the wines dataframe we obtained in the previous section is our final dataset we will be using for our analysis and modeling we will also be using the red_wine and white_wine dataframes where necessary for basic exploratory analysis and visualizations let' start by looking at the total number of data samples we are dealing with and also the different features in our dataset in [ ]print(white_wine shapered_wine shapeprint(wines info()( ( rangeindex entries to data columns (total columns)fixed acidity non-null float volatile acidity non-null float citric acid non-null float residual sugar non-null float chlorides non-null float free sulfur dioxide non-null float total sulfur dioxide non-null float density non-null float ph non-null float sulphates non-null float alcohol non-null float quality non-null int wine_type non-null object quality_label non-null category dtypescategory( )float ( )int ( )object( memory usage kb this information tells us that we have white wine data points and red wine data points the merged dataset contains total of data points and we also get an idea of numeric and categorical attributes let' take peek at our dataset to see some sample data points in [ ]wines head(
figure - sample data points from the wine quality dataset the output depicted in figure - shows us sample wine records for our wine quality dataset looking at the valueswe can get an idea of numeric as well as categorical features let' now try to gain some domain knowledge about wine and its attributes domain knowledge is essential and always recommendedespecially if you are trying to analyze and model data from diverse domains wine is an alcoholic beverage made by the process of fermentation of grapeswithout the addition of sugarsacidsenzymeswateror other nutrients red and white wine are two variants usuallyred wine is made from dark red and black grapes the color ranges from various shades of redbrownand violet this is produced with whole grapesincluding the skinwhich adds to the color and flavor of red wines giving it rich flavor white wine is made from white grapes with no skins or seeds the color is usually straw-yellowyellow-greenor yellow-gold most white wines have light and fruity flavor as compared to richer red wines let' now dive into details for each feature in the dataset credits once again go to karlijn for some of the attribute descriptions our dataset has total of attributes and they are described as follows fixed acidityacids are one of the fundamental properties of wine and contribute greatly to the taste of the wine reducing acids significantly might lead to wines tasting flat fixed acids include tartaricmaliccitricand succinic acidswhich are found in grapes (except succinicthis variable is usually expressed in tartaricacid in the dataset dm volatile aciditythese acids are to be distilled out from the wine before completing the production process it is primarily constituted of acetic acidthough other acids like lacticformicand butyric acids might also be present excess of volatile acids are undesirable and lead to unpleasant flavor in the united statesthe legal limits of volatile acidity are / for red table wine and / for white table wine the volatile acidity is expressed in aceticacid in the dataset dm citric acidthis is one of the fixed acids that gives wine its freshness usually most of it is consumed during the fermentation process and sometimes it is added separately to give the wine more freshness it' usually expressed in in the dm dataset residual sugarthis typically refers to the natural sugar from grapes that remains after the fermentation process stopsor is stopped it' usually expressed in dm in the dataset
chloridesthis is usually major contributor to saltiness in wine it' usually expressed in sodiumchloride in the dataset dm free sulfur dioxidethis is the part of the sulfur dioxide thatwhen added to wineis said to be free after the remaining part binds winemakers will always try to get the highest proportion of free sulfur to bind they are also known as sulfites and too much is undesirable and gives pungent odor this variable is expressed in mg in the dataset dm total sulfur dioxidethis is the sum total of the bound and the free sulfur mg this is mainly added to kill harmful dioxide (so hereit' expressed in dm bacteria and preserve quality and freshness there are usually legal limits for sulfur levels in wines and excess of it can even kill good yeast and produce an undesirable odor densitythis can be represented as comparison of the weight of specific volume of wine to an equivalent volume of water it is generally used as measure of the conversion of sugar to alcohol hereit' expressed in cm phalso known as the potential of hydrogenthis is numeric scale to specify the acidity or basicity the wine fixed acidity contributes the most toward the ph of wines you might knowsolutions with ph less than are acidicwhile solutions with ph greater than are basic with ph of pure water is neutral most wines have ph between and and are therefore acidic sulphatesthese are mineral salts containing sulfur sulphates are to wine as gluten is to food they are regular part of the winemaking around the world and are considered essential they are connected to the fermentation process and affect the potassiumsulphate wine aroma and flavor herethey are expressed in in the dm dataset alcoholwine is an alcoholic beverage alcohol is formed as result of yeast converting sugar during the fermentation process the percentage of alcohol can vary from wine to wine hence it is not surprise for this attribute to be part of this dataset it' usually measured in vol or alcohol by volume (abvqualitywine experts graded the wine quality between (very badand (very excellentthe eventual quality score is the median of at least three evaluations made by the same wine experts wine_typesince we originally had two datasets for red and white winewe introduced this attribute in the final merged datasetwhich indicates the type of wine for each data point wine can be red or white wine one of the predictive models we will build in this would be such that we can predict the type of wine by looking at other wine attributes
quality_labelthis is derived attribute from the quality attribute we bucket or group wine quality scores into three qualitative bucketsnamely lowmediumand high wines with quality score of and are low qualityscores of and are medium qualityand scores of and are high quality wines we will also build another model in this to predict this wine quality label based on other wine attributes now that you have solid foundation on the dataset as well as its featureslet' analyze and visualize various features and their interactions descriptive statistics we will start by computing some descriptive statistics of our various features of interest in our dataset this involves computing aggregation metrics like meanmedianstandard deviationand so on if you remember one of our primary objectives is to build model that can correctly predict if wine is red or white wine based on its attributes let' build descriptive summary table on various wine attributes separated by wine type in [ ]subset_attributes ['residual sugar''total sulfur dioxide''sulphates''alcohol''volatile acidity''quality'rs round(red_wine[subset_attributesdescribe(), ws round(white_wine[subset_attributesdescribe(), pd concat([rsws]axis= keys=['red wine statistics''white wine statistics']figure - descriptive statistics for wine attributes separated by wine type the summary table depicted in figure - shows us descriptive statistics for various wine attributes do you notice any interesting propertiesfor startersmean residual sugar and total sulfur dioxide content in white wine seems to be much higher than red wine alsothe mean value of sulphates and volatile acidity seem to be higher in red wine as compared to white wine try including other features too and see if you can find more interesting comparisonsconsidering wine quality levels as data subsetslet' build some descriptive summary statistics with the following snippet in [ ]subset_attributes ['alcohol''volatile acidity''ph''quality'ls round(wines[wines['quality_label'='low'][subset_attributesdescribe(), ms round(wines[wines['quality_label'='medium'][subset_attributesdescribe(), hs round(wines[wines['quality_label'='high'][subset_attributesdescribe(), pd concat([lsmshs]axis= keys=['low quality wine''medium quality wine''high quality wine']
Figure: Descriptive statistics for wine attributes separated by wine quality

The summary table depicted in the preceding figure shows us descriptive statistics for various wine attributes subset by wine quality ratings. Interestingly, mean alcohol levels seem to increase with the rating of the wine quality. We also see that pH levels are almost consistent across the wine samples of varying quality. Is there any way to statistically prove this? We will see that in the following section.

Inferential Statistics

The general notion of inferential statistics is to draw inferences and propositions about a population using a data sample. The idea is to use statistical methods and models to draw statistical inferences from given hypotheses. Each hypothesis consists of a null hypothesis and an alternative hypothesis. Based on statistical test results, if the result is statistically significant based on a pre-set significance level (i.e., if the obtained p-value is less than the significance level, commonly set at 0.05), we reject the null hypothesis in favor of the alternative hypothesis. Otherwise, if the result is not statistically significant, we conclude that our null hypothesis was correct.

Coming back to our problem from the previous section: given multiple data groups or subsets of wine samples based on wine quality rating, is there any way to prove that mean alcohol levels or pH levels vary significantly among the data groups? A great statistical model to prove or disprove the difference in means among subsets of data is the one-way ANOVA test. ANOVA stands for "analysis of variance," which is a nifty statistical model that can be used to analyze statistically significant differences among means or averages of various groups. This is basically achieved using a statistical test that helps us determine whether or not the means of several groups are equal.

Usually the null hypothesis is represented as

H_0: \mu_1 = \mu_2 = \mu_3 = \dots = \mu_n

where n is the number of data groups or subsets, and it indicates that the group means for the various groups are not very different from each other based on statistical significance levels. The alternative hypothesis, H_A, tells us that there exist at least two group means that are statistically significantly different from each other. Usually the F-statistic and the p-value associated with it are used to determine statistical significance. Typically a p-value less than 0.05 is taken to be a statistically significant result, in which case we reject the null hypothesis in favor of the alternative. We recommend reading up on a standard book on inferential statistics to gain more in-depth knowledge regarding these concepts.
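As a brief aside, the F-statistic mentioned above is essentially a ratio of the variability between the group means to the variability within the groups. One standard textbook form of it (general notation, not specific to this chapter) is

F = \frac{\sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2 / (k - 1)}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2 / (N - k)}

where k is the number of groups, n_i and \bar{x}_i are the size and mean of group i, \bar{x} is the overall mean, and N is the total number of samples. The larger the F value, the more the group means differ relative to the variation within each group, and the smaller the associated p-value tends to be.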
for our scenariothree data subsets or groups from the data are created based on wine quality ratings the mean values in the first test would be based on the wine alcohol content and the second test would be based on the wine ph levels also let' assume the null hypothesis is that the group means for lowmediumand high quality wine is same and the alternate hypothesis would be that there is difference (statistically significantbetween at least two group means the following snippet helps us perform the one-way anova test in [ ]from scipy import stats fp stats f_oneway(wines[wines['quality_label'='low']['alcohol']wines[wines['quality_label'='medium']['alcohol']wines[wines['quality_label'='high']['alcohol']print('anova test for mean alcohol levels across wine samples with different quality ratings'print(' statistic:' '\tp-value:'pfp stats f_oneway(wines[wines['quality_label'='low']['ph']wines[wines['quality_label'='medium']['ph']wines[wines['quality_label'='high']['ph']print('\nanova test for mean ph levels across wine samples with different quality ratings'print(' statistic:' '\tp-value:'panova test for mean alcohol levels across wine samples with different quality ratings statistic -value - anova test for mean ph levels across wine samples with different quality ratings statistic -value from the preceding results we can clearly see we have -value much less than in the first test and greater than in the second test this tells us that there is statistically significant difference in alcohol level means for at least two groups out of the three (rejecting the null hypothesis in favor of the alternativehoweverin case of ph level meanswe do not reject the null hypothesis and thus we conclude that the ph level means across the three groups are not statistically significantly different we can even visualize these two features and observe the means using the following snippet (ax ax plt subplots( figsize=( ) suptitle('wine quality alcohol content/ph'fontsize= subplots_adjust(top= wspace= sns boxplot( ="quality_label" ="alcohol"data=winesax=ax ax set_xlabel("wine quality class",size ,alpha= ax set_ylabel("wine alcohol %",size ,alpha= sns boxplot( ="quality_label" ="ph"data=winesax=ax ax set_xlabel("wine quality class",size ,alpha= ax set_ylabel("wine ph",size ,alpha=
figure - visualizing wine alcohol content and ph level distributions based on quality ratings the boxplots depicted in figure - show us stark differences in wine alcohol content distributions based on wine quality as compared to ph levelswhich look to be between and in fact if you look at the mean and median values for ph levels across the three groupsit is approximately across the three groups as compared to alcohol %which varies significantly can you find our more interesting patterns and hypothesis with other features from this datagive it tryunivariate analysis this is perhaps one of the easiest yet core foundational step in exploratory data analysis univariate analysis involves analyzing data such that at any instance of analysis we are only dealing with one variable or feature no relationships or correlations are analyzed among multiple variables the simplest way to easily visualize all the variables in your data is to build some histograms the following snippet helps visualize distributions of data values for all features while histogram may not be an appropriate visualization in many casesit is good one to start with for numeric data red_wine hist(bins= color='red'edgecolor='black'linewidth= xlabelsize= ylabelsize= grid=falseplt tight_layout(rect=( )rt plt suptitle('red wine univariate plots' = = fontsize= white_wine hist(bins= color='white'edgecolor='black'linewidth= xlabelsize= ylabelsize= grid=falseplt tight_layout(rect=( )wt plt suptitle('white wine univariate plots' = = fontsize=
figure - univariate plots depicting feature distributions for the wine quality dataset the power of packages like matplotlib and pandas enable you to easily plot variable distributions as depicted in figure - using minimal code do you notice any interesting patterns across the two wine typeslet' take the feature named residual sugar and plot the distributions across data pertaining to red and white wine samples fig plt figure(figsize ( , )title fig suptitle("residual sugar content in wine"fontsize= fig subplots_adjust(top= wspace= ax fig add_subplot( , ax set_title("red wine"ax set_xlabel("residual sugar"ax set_ylabel("frequency"ax set_ylim([ ]ax text( '$\mu$='+str(round(red_wine['residual sugar'mean(), ))fontsize= r_freqr_binsr_patches ax hist(red_wine['residual sugar']color='red'bins= edgecolor='black'linewidth= ax fig add_subplot( , ax set_title("white wine"ax set_xlabel("residual sugar"ax set_ylabel("frequency"ax set_ylim([ ]ax text( '$\mu$='+str(round(white_wine['residual sugar'mean(), ))fontsize= w_freqw_binsw_patches ax hist(white_wine['residual sugar']color='white'bins= edgecolor='black'linewidth=
figure - residual sugar distribution for red and white wine samples we can notice easily from the visualization in figure - that residual sugar content in white wine samples seems to be more as compared to red wine samples you can reuse the plotting template in the preceding code snippet and visualize more features some plots are depicted as follows (detailed code is present in the jupyter notebookfigure - distributions for sulphate content and alcohol content for red and white wine samples the plots depicted in figure - show us that the sulphate content is slightly more in red wine samples as compared to white wine samples and alcohol content is almost similar in both types on an average of coursefrequency counts are higher in all cases for white wine because we have more white wine sample records as compared to red wine nextwe plot the distributions of the quality and quality_label categorical features to get an idea of the class distributionswhich we will be predicting later on
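In case you want to reproduce such bar plots yourself, the following is a minimal sketch of one possible way to do this with seaborn's countplot; this is illustrative only, the notebook accompanying the chapter may build these plots differently, and the hex color values here are arbitrary choices.

import matplotlib.pyplot as plt
import seaborn as sns

f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))
# frequency of each raw quality score, split by wine type
sns.countplot(x='quality', hue='wine_type', data=wines,
              palette={'red': '#FF9999', 'white': '#FFE888'}, ax=ax1)
ax1.set_title('Wine quality score distribution by type')
# frequency of each derived quality class, split by wine type
sns.countplot(x='quality_label', hue='wine_type', data=wines,
              order=['low', 'medium', 'high'],
              palette={'red': '#FF9999', 'white': '#FFE888'}, ax=ax2)
ax2.set_title('Wine quality class distribution by type')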
figure - distributions for wine quality for red and white wine samples the bar plots depicted in figure - show us the distribution of wine samples based on type and quality it is quite evident that high quality wine samples are far less as compared to low and medium quality wine samples multivariate analysis analyzing multiple feature variables and their relationships is what multivariate analysis is all about we would want to see if there are any interesting patterns and relationships among the physicochemical attributes of our wine sampleswhich might be helpful in our modeling process in the future one of the best ways to analyze features is to build pairwise correlation plot depicting the correlation coefficient between each pair of features in the dataset the following snippet helps us build correlation matrix and plot the same in the form of an easy-to-interpret heatmap fax plt subplots(figsize=( )corr wines corr(hm sns heatmap(round(corr, )annot=trueax=axcmap="coolwarm",fmt= 'linewidths subplots_adjust(top= tf suptitle('wine attributes correlation heatmap'fontsize=
figure - correlation heatmap for features in the wine quality dataset while most of the correlations are weakas observed in figure - we can see strong negative correlation between density and alcohol and strong positive correlation between total and free sulfur dioxidewhich is expected you can also visualize patterns and relationships among multiple variables using pairwise plots and use different hues for the wine types essentially plotting three variables at time the following snippet depicts sample pairwise plot for some features in our dataset cols ['wine_type''quality''sulphates''volatile acidity'pp sns pairplot(wines[cols]hue='wine_type'size= aspect= palette={"red""#ff ""white""#ffe "}plot_kws=dict(edgecolor="black"linewidth= )fig pp fig fig subplots_adjust(top= wspace= fig suptitle('wine attributes pairwise plots'fontsize=
figure - pairwise plots by wine type for features in the wine quality dataset from the plots in figure - we can notice several interesting patternswhich are in alignment with some insights we obtained earlier these observations include the followingpresence of higher sulphate levels in red wines as compared to white wines lower sulphate levels in wines with high quality ratings lower levels of volatile acids in wines with high quality ratings presence of higher volatile acid levels in red wines as compared to white wines you can use similar plots on other variables and features to discover more patterns and relationships to observe relationships among features with more microscopic viewjoint plots are excellent visualization tools specifically for multivariate visualizations the following snippet depicts the relationship between wine typessulphatesand quality ratings rj sns jointplot( ='quality' ='sulphates'data=red_winekind='reg'ylim=( )color='red'space= size= ratio= rj ax_joint set_xticks(list(range( , ))fig rj fig fig subplots_adjust(top= fig suptitle('red wine sulphates quality'fontsize= wj sns jointplot( ='quality' ='sulphates'data=white_winekind='reg'ylim=( )color='#ffe 'space= size= ratio= wj ax_joint set_xticks(list(range( , ))fig wj fig fig subplots_adjust(top= fig suptitle('white wine sulphates quality'fontsize=
figure - visualizing relationships between wine typessulphates and quality with joint plots while there seems to be some pattern depicting lower sulphate levels for higher quality rated wine samplesthe correlation is quite weak (see figure - howeverwe do see clearly that sulphate levels for red wine are much higher as compared to the ones in white wine in this case we have visualized three features (typequalityand sulphateswith the help of two plots what if we wanted to visualize higher number of features and determine patterns from themthe seaborn framework provides facet grids that help us visualize higher number of variables in two-dimensional plots let' try to visualize relationships between wine typequality ratingsvolatile acidityand alcohol volume levels sns facetgrid(winescol="wine_type"hue='quality_label'col_order=['red''white']hue_order=['low''medium''high']aspect= size= palette=sns light_palette('navy' ) map(plt scatter"volatile acidity""alcohol"alpha= edgecolor='white'linewidth= fig fig fig subplots_adjust(top= wspace= fig suptitle('wine type alcohol quality acidity'fontsize= add_legend(title='wine quality class'
figure - visualizing relationships between wine typesalcoholqualityand acidity levels the plot in figure - shows us some interesting patterns not only are we able to successfully visualize four variablesbut also we can see meaningful relationships among them higher quality wine samples (depicted by darker shadeshave lower levels of volatile acidity and higher levels of alcohol content as compared to wine samples with medium and low ratings besides thiswe can also see that volatile acidity levels are slightly lower in white wine samples as compared to red wine samples let' now build similar visualization howeverin this scenariowe want to analyze patterns in wine typesqualitysulfur dioxideand acidity levels we can use the same framework as our last code snippet to achieve this sns facetgrid(winescol="wine_type"hue='quality_label'col_order=['red''white']hue_order=['low''medium''high']aspect= size= palette=sns light_palette('green' ) map(plt scatter"volatile acidity""total sulfur dioxide"alpha= edgecolor='white'linewidth= fig fig fig subplots_adjust(top= wspace= fig suptitle('wine type sulfur dioxide acidity quality'fontsize= add_legend(title='wine quality class'
figure - visualizing relationships between wine typesqualitysulfur dioxideand acidity levels we can easily interpret from figure - that volatile acidity as well as total sulfur dioxide is considerably lower in high quality wine samples alsototal sulfur dioxide is considerable more in white wine samples as compared to red wine samples howevervolatile acidity levels are slightly lower in white wine samples as compared to red wine samples we also observed in the previous plot nice way to visualize numerical features segmented by groups (categorical variablesis to use box plots in our datasetwe have already discussed the relationship of higher alcohol levels with higher quality ratings for wine samples in the "inferential statisticssection let' try to visualize the relationship between wine alcohol levels grouped by wine quality ratings we will generate two plots for wine alcohol content versus both wine quality and quality_label (ax ax plt subplots( figsize=( ) suptitle('wine type quality alcohol content'fontsize= sns boxplot( ="quality" ="alcohol"hue="wine_type"data=winespalette={"red""#ff ""white""white"}ax=ax ax set_xlabel("wine quality",size ,alpha= ax set_ylabel("wine alcohol %",size ,alpha= sns boxplot( ="quality_label" ="alcohol"hue="wine_type"data=winespalette={"red""#ff ""white""white"}ax=ax ax set_xlabel("wine quality class",size ,alpha= ax set_ylabel("wine alcohol %",size ,alpha= plt legend(loc='best'title='wine type'
figure - visualizing relationships between wine typesquality and alcohol content based on our earlier analysis for wine quality versus alcohol volume in the "inferential statisticssectionthese results look consistent each box plot in figure - depicts the distribution of alcohol level for particular wine quality rating separated by wine types the box itself depicts the inter-quartile range and the line inside depicts the median value of alcohol whiskers indicate the minimum and maximum value with outliers often depicted by individual points we can clearly observe the wine alcohol by volume distribution has an increasing trend based on higher quality rated wine samples similarly we can also using violin plots to visualize distributions of numeric features over categorical features let' build visualization for analyzing the fixed acidity of wine sample by quality ratings (ax ax plt subplots( figsize=( ) suptitle('wine type quality acidity'fontsize= sns violinplot( ="quality" ="volatile acidity"hue="wine_type"data=winessplit=trueinner="quart"linewidth= palette={"red""#ff ""white""white"}ax=ax ax set_xlabel("wine quality",size ,alpha= ax set_ylabel("wine fixed acidity",size ,alpha= sns violinplot( ="quality_label" ="volatile acidity"hue="wine_type"data=winessplit=trueinner="quart"linewidth= palette={"red""#ff ""white""white"}ax=ax ax set_xlabel("wine quality class",size ,alpha= ax set_ylabel("wine fixed acidity",size ,alpha= plt legend(loc='upper right'title='wine type'
figure - visualizing relationships between wine typesquality and acidity in figure - each violin plot typically depicts the inter-quartile range with the median which is shown with dotted lines in this figure you can also visualize the distribution of data with the density plots where width depicts frequency thus in addition to the information you get from box plotsyou can also visualize the distribution of data with violin plots in fact we have built split-violin plot in this case depicting both types of wine it is quite evident that red wine samples have higher acidity as compared to its white wine counterparts also we can see an overall decrease in acidity with higher quality wine for red wine samples but not so much for white wine samples these code snippets and examples should give you some good frameworks and blueprints to perform effective exploratory data analysis on your datasets in the future predictive modeling we will now focus on our main objectives of building predictive models to predict the wine types and quality ratings based on other features we will be following the standard classification machine learning pipeline in this case there will be two main classification systems we will be building in this section prediction system for wine type (red or white wineprediction system for wine quality rating (lowmediumor highwe will be using the wines data frame from the previous sections the entire code for this section is available in the python file titled predictive_analytics py or you can use the jupyter notebook titled predictive analytics ipynb for more interactive experience to start withlet' load the following necessary dependencies and settings import pandas as pd import numpy as np import matplotlib pyplot as plt import model_evaluation_utils as meu from sklearn model_selection import train_test_split from collections import counter from sklearn preprocessing import standardscaler from sklearn preprocessing import labelencoder %matplotlib inline
do remember to have the model_evaluation_utils py module in the same directory where you are running your code since we will be using it for evaluating our predictive models let' briefly look at the workflow we will be following for our predictive systems we will focus on two major phases--model training and model predictions and evaluation figure - workflow blueprint for our wine type and quality classification system from figure - we can see that training data and testing data refer to the wine quality dataset features since we already have the necessary wine attributeswe won' be building additional hand-crafted features labels can be either wine types or quality ratings based on the classification system in the training phasefeature selection will mostly involve selecting all the necessary wine physicochemical attributes and then after necessary scaling we will be training our predictive models for prediction and evaluation in the prediction phase predicting wine types in our wine quality datasetwe have two variants or types of wine--red and white wine the main task of our classification system in this section is to predict the wine type based on other features to start withwe will first select our necessary features and separate out the prediction class labels and prepare train and test datasets we use the prefix wtp_ in our variables to easily identify them as neededwhere wtp depicts wine type prediction in [ ]wtp_features wines iloc[:,:- wtp_feature_names wtp_features columns wtp_class_labels np array(wines['wine_type']
wtp_train_xwtp_test_xwtp_train_ywtp_test_y train_test_split(wtp_featureswtp_class_labelstest_size= random_state= print(counter(wtp_train_y)counter(wtp_test_y)print('features:'list(wtp_feature_names)counter({'white' 'red' }counter({'white' 'red' }features['fixed acidity''volatile acidity''citric acid''residual sugar''chlorides''free sulfur dioxide''total sulfur dioxide''density''ph''sulphates''alcohol'the numbers show us the wine samples for each class and we can also see the feature names which will be used in our feature set let' move on to scaling our features we will be using standard scaler in this scenario in [ ]define the scaler wtp_ss standardscaler(fit(wtp_train_xscale the train set wtp_train_sx wtp_ss transform(wtp_train_xscale the test set wtp_test_sx wtp_ss transform(wtp_test_xsince we are dealing with binary classification problemone of the traditional machine learning algorithms we can use is the logistic regression model if you remember we had talked about this in detail in feel free to skim through the "traditional supervised machine learning modelssection in to refresh your memory on logistic regression or you can refer to any standard text book or material on classification models let' now train model on our training dataset and labels using logistic regression in [ ]from sklearn linear_model import logisticregression wtp_lr logisticregression(wtp_lr fit(wtp_train_sxwtp_train_yout[ ]logisticregression( = class_weight=nonedual=falsefit_intercept=trueintercept_scaling= max_iter= multi_class='ovr'n_jobs= penalty=' 'random_state=nonesolver='liblinear'tol= verbose= warm_start=falsenow that our model is readylet' predict the wine types for our test data samples and evaluate the performance in [ ]wtp_lr_predictions wtp_lr predict(wtp_test_sxmeu display_model_performance_metrics(true_labels=wtp_test_ypredicted_labels=wtp_lr_predictionsclasses=['red''white']figure - model performance metrics for logistic regression for wine type predictive model
we get an overall score and model accuracy of %as depicted in figure - which is really amazingin spite of low samples of red winewe seem to do pretty well in case your models do not perform well on other datasets due to class imbalance problemyou can consider over-sampling or under-sampling techniques including sample selection as well as smote coming back to our classification problemwe have really good modelbut can we do betterwhile that seems to be far-fetched dreamlet' try modeling the data using fully connected deep neural network (dnnwith three hidden layers refer to the "newer supervised deep learning modelssection in to refresh your memory on fully-connected dnns and mlps deep learning frameworks like keras on top of tensorflow prefer if your output response labels are encoded to numeric forms which are easier to work with the following snippet encodes our wine type class labels in [ ]le labelencoder(le fit(wtp_train_yencode wine type labels wtp_train_ey le transform(wtp_train_ywtp_test_ey le transform(wtp_test_ylet' build the architecture for our three-hidden layer dnn where each hidden layer has units (the input layer has units for the featuresand the output layer has unit to predict or which maps back to red or white wine in [ ]from keras models import sequential from keras layers import dense wtp_dnn_model sequential(wtp_dnn_model add(dense( activation='relu'input_shape=( ,))wtp_dnn_model add(dense( activation='relu')wtp_dnn_model add(dense( activation='relu')wtp_dnn_model add(dense( activation='sigmoid')wtp_dnn_model compile(loss='binary_crossentropy'optimizer='adam'metrics=['accuracy']using tensorflow backend you can see that we are using keras on top of tensorflowand for our optimizerwe have chosen the adam optimizer with binary cross-entropy loss you can also use categorical cross-entropy if neededwhich is especially useful when you have more than two classes the following snippet helps train our dnn in [ ]history wtp_dnn_model fit(wtp_train_sxwtp_train_eyepochs= batch_size= shuffle=truevalidation_split= verbose= train on samplesvalidate on samples epoch / / loss acc val_loss val_acc epoch / / loss acc val_loss val_acc epoch / / loss acc val_loss val_acc epoch / / loss acc val_loss val_acc we use of the training data for validation set while training the model to see how it performs at each epoch let' now predict and evaluate our model on the actual test dataset
in [ ]wtp_dnn_ypred wtp_dnn_model predict_classes(wtp_test_sxwtp_dnn_predictions le inverse_transform(wtp_dnn_ypredmeu display_model_performance_metrics(true_labels=wtp_test_ypredicted_labels=wtp_dnn_predictionsclasses=['red''white']figure - model performance metrics for deep neural network for wine type predictive model we get an overall score and model accuracy of %as depicted in figure - which is even better than our previous modelthis goes to prove you don' always need big data but good quality data and features even for deep learning models the loss and accuracy measures at each epoch are depicted in figure - with the detailed code present in the notebook figure - model performance metrics for dnn model per epoch now that we have working wine type classification systemlet' try to interpret one of these predictive models one of the key aspects in model interpretation is to try to understand the importance of each feature from the dataset we will be using the skater package that we used in the previous for our model interpretation needs the following code helps visualize feature importances for our logistic regression model
In [ ]: from skater.core.explanations import Interpretation
        from skater.model import InMemoryModel

        wtp_interpreter = Interpretation(wtp_test_sx, feature_names=wtp_features.columns)
        wtp_im_model = InMemoryModel(wtp_lr.predict_proba, examples=wtp_train_sx,
                                     target_names=wtp_lr.classes_)
        plots = wtp_interpreter.feature_importance.plot_feature_importance(wtp_im_model,
                                                                           ascending=False)

Figure: Feature importances obtained from our logistic regression model

We can see in the preceding figure that density, total sulfur dioxide, and residual sugar are the top three features that contributed toward classifying wine samples as red or white. Another way of understanding how well a model is performing, besides looking at metrics, is to plot a receiver operating characteristic curve, also popularly known as the ROC curve. This curve can be plotted using the true positive rate (TPR) and the false positive rate (FPR) of a classifier. TPR is known as sensitivity or recall, which is the proportion of correct positive results predicted among all the positive samples in the dataset. FPR is known as the false alarm rate or (1 - specificity), determining the proportion of incorrect positive predictions among all negative samples in the dataset. Hence the ROC curve is also sometimes known as a sensitivity versus (1 - specificity) plot. The following code uses our model evaluation utilities module to plot the ROC curve for our logistic regression model in the ROC space.

In [ ]: meu.plot_model_roc_curve(wtp_lr, wtp_test_sx, wtp_test_y)

Typically, in any ROC curve, the ROC space lies between the points (0, 0) and (1, 1). Each prediction result from the confusion matrix occupies one point in this ROC space. Ideally, the best prediction model would give a point in the top-left corner (0, 1), indicating perfect classification (100% sensitivity and 100% specificity). A diagonal line depicts a classifier that makes random guesses. If your ROC curve occurs in the top half of the graph, you have a decent classifier, which is better than average. The ROC curve for our logistic regression model, shown in the figure that follows, makes this clearer.
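As a side note, if you would rather build such an ROC plot from scratch instead of relying on the utility function, here is a minimal sketch using scikit-learn and matplotlib directly. It reuses the fitted wtp_lr model, the scaled test features wtp_test_sx, and the test labels wtp_test_y from above, and treats 'white' as the positive class purely for illustration.

from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

# probability scores for the chosen positive class ('white' here)
pos_idx = list(wtp_lr.classes_).index('white')
y_scores = wtp_lr.predict_proba(wtp_test_sx)[:, pos_idx]

# FPR is (1 - specificity), TPR is sensitivity, computed over varying thresholds
fpr, tpr, _ = roc_curve(wtp_test_y, y_scores, pos_label='white')
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label='ROC curve (AUC = {:.2f})'.format(roc_auc))
plt.plot([0, 1], [0, 1], linestyle='--', label='random guess')
plt.xlabel('False Positive Rate (1 - specificity)')
plt.ylabel('True Positive Rate (sensitivity)')
plt.legend(loc='lower right')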
figure - roc curve for our logistic regression model we achieved almost accuracy if you remember for this model and hence the roc curve is almost perfect where we also see that the area under curve (aucis which is perfect finallyfrom our feature importance ranks we obtained earlierlet' see if we can visualize the model' decision surface or decision boundarywhich basically gives us visual depiction of how well the model is able to learn data points pertaining to each class and separate points belonging to different classes this surface is basically the hypersurfacewhich helps in separating the underlying vector space of data samples based on their features (feature spaceif this surface is linearthe classification problem is linear and the hypersurface is also known as hyperplane our model evaluation utilities module helps us plot this with the help of an easy-touse function (do note this works only for scikit estimators at the moment since there was no clone function for keras based estimators and it just got published last month as of writing this bookwe might push change sometime in the future once it is stablein [ ]feature_indices [ for ifeature in enumerate(wtp_feature_namesif feature in ['density''total sulfur dioxide']meu plot_model_decision_surface(clf=wtp_lrtrain_features=wtp_train_sx[:feature_indices]train_labels=wtp_train_yplot_step= cmap=plt cm wistia_rmarkers=[','' ']alphas=[ ]colors=[' '' ']since we would want to plot the decision surface on the underlying feature spacevisualizing this becomes extremely difficult when you have more than two features hence for the sake of simplicity and ease of interpretationwe will use the top two most important features (density and total sulfur dioxideto visualize the model decision surface this is done by fitting cloned model of the original model estimator on those two features and then plotting the decision surface based on what it has learned check out the plot_model_decision_surfacefunction for more low-level details on how we visualize the decision surfaces
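To give a rough idea of what such a decision-surface plot involves under the hood, here is a hedged sketch of the general recipe (this is not the actual implementation inside plot_model_decision_surface): clone the estimator, refit it on just the two selected features, evaluate it over a dense grid, and shade the predicted regions. The variable names reused here (wtp_lr, wtp_train_sx, wtp_train_y, feature_indices, wtp_feature_names) come from the surrounding code; everything else is illustrative.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.base import clone

# refit a clone of the logistic regression model on the two selected features only
X2 = wtp_train_sx[:, feature_indices]
clf2 = clone(wtp_lr).fit(X2, wtp_train_y)

# build a dense grid spanning the 2D feature space
x_min, x_max = X2[:, 0].min() - 1, X2[:, 0].max() + 1
y_min, y_max = X2[:, 1].min() - 1, X2[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))

# predict a class for every grid point and shade the resulting decision regions
Z = clf2.predict(np.c_[xx.ravel(), yy.ravel()])
Z = (Z == 'white').astype(int).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3, cmap=plt.cm.coolwarm)
plt.scatter(X2[:, 0], X2[:, 1], c=(wtp_train_y == 'white').astype(int),
            cmap=plt.cm.coolwarm, edgecolor='black', s=20)
plt.xlabel(wtp_feature_names[feature_indices[0]] + ' (scaled)')
plt.ylabel(wtp_feature_names[feature_indices[1]] + ' (scaled)')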
figure - visualizing the model decision surface for our logistic regression model the plot depicted in figure - reinforces the fact that our model has learned the underlying patterns quite well based on just the two most important featureswhich it has used to separate out majority of the red wine samples from the white wine samples depicted by the scatter dots there are very few misclassifications here and therewhich are evident based on the statistics we obtained earlier in the confusion matrices predicting wine quality in our wine quality datasetwe have several quality rating classes ranging from to what we will be focusing on is the quality_label variable that classifies wine into lowmediumand high ratings based on the underlying quality variable based on the mapping we created in the "exploratory data analysissection this is done because several rating scores have very few wine samples and hence similar quality ratings were clubbed together into one quality class rating we use the prefix wqp_ for all variables and models involved in prediction of wine quality to distinguish it from other analysis the prefix wqp stands for wine quality prediction we will evaluate and look at tree based classification models as well as ensemble models in this section the following code helps us prepare our train and test datasets for modeling in [ ]wqp_features wines iloc[:,:- wqp_class_labels np array(wines['quality_label']wqp_label_names ['low''medium''high'wqp_feature_names list(wqp_features columnswqp_train_xwqp_test_xwqp_train_ywqp_test_y train_test_split(wqp_featureswqp_class_labelstest_size= random_state= print(counter(wqp_train_y)counter(wqp_test_y)print('features:'wqp_feature_names
counter({'medium' 'low' 'high' }counter({'medium' 'low' 'high' }features['fixed acidity''volatile acidity''citric acid''residual sugar''chlorides''free sulfur dioxide''total sulfur dioxide''density''ph''sulphates''alcohol'from the preceding outputit is evident we use the same physicochemical wine features the number of samples in each quality rating class is also depicted it is quite evident we have very few wine samples of high class rating and lot of medium quality wine samples we move on to the next step of feature scaling in [ ]define the scaler wqp_ss standardscaler(fit(wqp_train_xscale the train set wqp_train_sx wqp_ss transform(wqp_train_xscale the test set wqp_test_sx wqp_ss transform(wqp_test_xlet' train tree based model on this data the decision tree classifier is an excellent example of classic tree model this is based on the concept of decision treeswhich focus on using tree-like graph or flowchart to model decisions and their possible outcomes each decision node in the tree represents decision test on specific data attribute edges or branches from each node represent possible outcomes of the decision test each leaf node represents predicted class label to get all the end-to-end classification rulesyou need to consider the paths from the root node to the leaf nodes decision tree models in the context of machine learning are non-parametric supervised learning methodswhich use these decision tree based structures for classification and regression tasks the core objective is to build model such that we can predict the value of target response variable by leveraging decision tree based structures to learn decision rules from the input data features the main advantage of decision tree based models is model interpretabilitysince it is quite easy to understand and interpret the decision rules which led to specific model prediction besides thisother advantages include the model' ability to handle both categorical and numeric data with ease as well as multi-class classification problems trees can be even visualized to understand and interpret decision rules better the following snippet leverages the decisiontreeclassifier estimator to build decision tree model and predict the wine quality ratings of our wine samples in [ ]from sklearn tree import decisiontreeclassifier train the model wqp_dt decisiontreeclassifier(wqp_dt fit(wqp_train_sxwqp_train_ypredict and evaluate performance wqp_dt_predictions wqp_dt predict(wqp_test_sxmeu display_model_performance_metrics(true_labels=wqp_test_ypredicted_labels=wqp_dt_predictionsclasses=wqp_label_namesfigure - model performance metrics for decision tree for wine quality predictive model
we get an overall score and model accuracy of approximately %as depicted in figure - which is not bad for start looking at the class based statisticswe can see the recall for the high quality wine samples is pretty bad since lot of them have been misclassified into medium and low quality ratings this is kind of expected since we do not have lot of training samples for high quality wine if you remember our training sample sizes from earlier considering low and high quality rated wine sampleswe should at least try to see if we can prevent our model from predicting low quality wine as high and similarly prevent predicting high quality wine as low interpreting this modelyou can use the following code to look at the feature importance scores based on the patterns learned by our model in [ ]wqp_dt_feature_importances wqp_dt feature_importances_ wqp_dt_feature_nameswqp_dt_feature_scores zip(*sorted(zip(wqp_feature_nameswqp_dt_feature_importances)key=lambda xx[ ])y_position list(range(len(wqp_dt_feature_names))plt barh(y_positionwqp_dt_feature_scoresheight= align='center'plt yticks(y_position wqp_dt_feature_namesplt xlabel('relative importance score'plt ylabel('feature' plt title('feature importances for decision tree'figure - feature importances obtained from our decision tree model we can clearly observe from figure - that the most important features have hanged as compared to our previous model alcohol and volatile acidity occupy the top two ranks and total sulfur dioxide seems to be one of the most important features for classifying both wine type and quality (as observed in figure - if you rememberwe mentioned earlier that you can also easily visualize the decision tree structure from decision tree models and check out the decision rules that it learned from the underlying features used in prediction for new data samples the following code helps us visualize decision trees
In [ ]: from graphviz import Source
        from sklearn import tree
        from IPython.display import Image

        graph = Source(tree.export_graphviz(wqp_dt, out_file=None,
                                            class_names=wqp_label_names,
                                            filled=True, rounded=True,
                                            special_characters=False,
                                            feature_names=wqp_feature_names,
                                            max_depth=3))
        png_data = graph.pipe(format='png')
        with open('dtree_structure.png', 'wb') as f:
            f.write(png_data)
        Image(png_data)

Figure: Visualizing our decision tree model

Our decision tree model has a huge number of nodes and branches, hence we visualized the tree only up to a max depth of three using the preceding snippet. You can start observing the decision rules from the tree in the figure, where the starting split is determined by a threshold rule on the alcohol feature, and with each yes/no decision branch split we have further decision nodes as we descend into the tree at each depth level. The class variable is what we are trying to predict, i.e. wine quality being low, medium, or high, and value gives the total number of samples of each class present in the current decision node. The gini parameter is the criterion used to determine and measure the quality of the split at each decision node. Best splits can be determined by metrics like gini impurity/gini index or information gain. Just to give you some context, the gini impurity is a metric that helps in minimizing the probability of misclassification. It is usually mathematically denoted as

Gini = \sum_{i=1}^{C} p_i (1 - p_i) = 1 - \sum_{i=1}^{C} p_i^2

where we have C classes to predict, p_i is the fraction of items labeled with class i (or the probability of an instance with class label i being chosen), and (1 - p_i) is the probability of miscategorizing that item, i.e. the misclassification measure. Thus the gini impurity/index is computed by summing the square of the fraction of classified instances for each class label over the C classes and subtracting the result from 1. Interested readers can check out some standard literature on decision trees to dive deeper into the differences between entropy and gini or to understand more intricate mathematical details.
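To make the gini computation concrete, here is a tiny, self-contained sketch (purely illustrative and not taken from the chapter's code) that computes the gini impurity of a decision node from its per-class sample counts.

import numpy as np

def gini_impurity(class_counts):
    """Gini impurity of a node given the sample count of each class."""
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()        # fraction of samples belonging to each class
    return 1.0 - np.sum(p ** 2)      # equivalent to sum(p_i * (1 - p_i))

# hypothetical node with 30 low, 50 medium and 20 high quality samples
print(round(gini_impurity([30, 50, 20]), 3))   # 0.62 -- an impure, mixed node
# a pure node (all samples from a single class) has gini impurity 0
print(gini_impurity([0, 100, 0]))              # 0.0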
moving forward with our mission of improving our wine quality predictive modellet' look at some ensemble modeling methods ensemble models are typically machine learning models that combine or take weighted (average\majorityvote of the predictions of each of the individual base model estimators that have been built using supervised methods of their own the ensemble is expected to generalize better over underlying databe more robustand make superior predictions as compared to each individual base model ensemble models can be categorized under three major families bagging methodsthe term bagging stands for bootstrap aggregatingwhere the ensemble model tries to improve prediction accuracy by combining predictions of individual base models trained over randomly generated training samples bootstrap samplesi independent samples with replacementare taken from the original training dataset and several base models are built on these sampled datasets at any instancean average of all predictions from the individual estimators is taken for the ensemble model to make its final prediction random sampling tries to reduce model variancereduce overfittingand boost prediction accuracy examples include the very popular random forests boosting methodsin contrast to bagging methodswhich operate on the principle of combining or averagingin boosting methodswe build the ensemble model incrementally by training each base model estimator sequentially training each model involves putting special emphasis on learning the instances which it previously misclassified the idea is to combine several weak base learners to form powerful ensemble weak learners are trained sequentially over multiple iterations of the training data with weight modifications inserted at each retrain phase at each re-training of weak base learnerhigher weights are assigned to those training instances which were misclassified previously thusthese methods try to focus on training instances which it wrongly predicted in the previous training sequence boosted models are prone to over-fitting so one should be very careful examples include gradient boostingadaboostand the very popular xgboost stacking methodsin stacking based methodswe first build multiple base models over the training data then the final ensemble model is built by taking the output predictions from these models as its additional inputs for training to make the final prediction let' now try building model using random forestsa very popular bagging method in the random forest modeleach base learner is decision tree model trained on bootstrap sample of the training data besides thiswhen we want to split decision node in the treethe split is chosen from random subset of all the features instead of taking the best split from all the features due to the introduction of this randomnessbias increases and when we average the result from all the trees in the forestthe overall variance decreasesgiving us robust ensemble model which generalizes well we will be using the randomforestclassifier from scikit-learnwhich averages the probabilistic prediction from all the trees in the forest for the final prediction instead of taking the actual prediction votes and then averaging it in [ ]from sklearn ensemble import randomforestclassifier train the model wqp_rf randomforestclassifier(wqp_rf fit(wqp_train_sxwqp_train_ypredict and evaluate performance wqp_rf_predictions wqp_rf predict(wqp_test_sxmeu 
display_model_performance_metrics(true_labels=wqp_test_ypredicted_labels=wqp_rf_predictionsclasses=wqp_label_names
figure - model performance metrics for random forest for wine quality predictive model the model prediction results on the test dataset depict an overall score and model accuracy of approximately %as seen in figure - this is definitely an improvement of from what we obtained with just decision trees proving that ensemble learning is working better another way to further improve on this result is model tuning to be more specificmodels have hyperparameters that can be tunedas we have discussed previously in detail in the "model tuning and hyperparameter tuningsections in hyperparameters are also known as meta-parameters and are usually set before we start the model training process these hyperparameters do not have any dependency on being derived from the underlying data on which the model is trained usually these hyperparameters represent some high level concepts or knobswhich can be used to tweak and tune the model during training to improve its performance our random forest model has several hyperparameters and you can view its default values as follows in [ ]print(wqp_rf get_params(){'bootstrap'true'random_state'none'verbose' 'min_samples_leaf' 'min_weight_ fraction_leaf' 'max_depth'none'class_weight'none'max_leaf_nodes'none'oob_score'false'criterion''gini''n_estimators' 'max_features''auto''min_ impurity_split' - 'n_jobs' 'warm_start'false'min_samples_split' from the preceding outputyou can see number of hyperparameters we recommend checking out the official documentation at randomforestclassifier html to learn more about each parameter for hyperparameter tuningwe will keep things simple and focus our attention on n_estimators which represents the total number of base tree models in the forest ensemble model and max_features which represents the number of features to consider during each best split we use standard grid search method with five-fold cross validation to select the best hyperparameters in [ ]from sklearn model_selection import gridsearchcv param_grid 'n_estimators'[ ]'max_features'['auto'none'log 'wqp_clf gridsearchcv(randomforestclassifier(random_state= )param_gridcv= scoring='accuracy'wqp_clf fit(wqp_train_sxwqp_train_yprint(wqp_clf best_params_{'max_features''auto''n_estimators' we can see the chosen value of the hyperparameters obtained after the grid search in the preceding output we have estimators and auto maximum features which represents the square root of the total number of features to be considered during the best split operations the scoring parameter was set to
accuracy to evaluate the model for best accuracy you can set it to other parameters to evaluate the model on other metrics like scoreprecisionrecall and so on check out modules/model_evaluation html#scoring-parameter for further details you can view the grid search results for all the hyperparameter combinations as follows in [ ]results wqp_clf cv_results_ for paramscore_meanscore_sd in zip(results['params']results['mean_test_score']results['std_test_score'])print(paramround(score_mean )round(score_sd ){'max_features''auto''n_estimators' {'max_features''auto''n_estimators' {'max_features''auto''n_estimators' {'max_features''auto''n_estimators' {'max_features'none'n_estimators' {'max_features'none'n_estimators' {'max_features'none'n_estimators' {'max_features'none'n_estimators' {'max_features''log ''n_estimators' {'max_features''log ''n_estimators' {'max_features''log ''n_estimators' {'max_features''log ''n_estimators' the preceding output depicts the selected hyperparameter combinations and its corresponding mean accuracy and standard deviation values across the grid let' train new random forest model with the tuned hyperparameters and evaluate its performance on the test data in [ ]wqp_rf randomforestclassifier(n_estimators= max_features='auto'random_state= wqp_rf fit(wqp_train_sxwqp_train_ywqp_rf_predictions wqp_rf predict(wqp_test_sxmeu display_model_performance_metrics(true_labels=wqp_test_ypredicted_labels=wqp_rf_predictionsclasses=wqp_label_namesfigure - model performance metrics for tuned random forest for wine quality predictive model the model prediction results on the test dataset depict an overall score and model accuracy of approximately %as seen in figure - this is quite good considering we got an improvement of from the initial random forest model before tuning and overall we got an improvement of from the base decision tree model also we can see that no low quality wine sample has been misclassified as high similarly no high quality wine sample has been misclassified as low there is considerable overlap between medium and high\low quality wine samples but that is expected given the nature of the data and class distribution
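As a quick follow-up to the earlier note on the scoring parameter, the following sketch shows how the same grid search could be re-run to optimize a different metric; the choice of 'f1_macro' and the random seed here are illustrative assumptions, not values we actually tuned for in this chapter. It reuses param_grid and the scaled training data from earlier.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# same hyperparameter grid as before, but optimizing macro-averaged F1 instead of accuracy
wqp_clf_f1 = GridSearchCV(RandomForestClassifier(random_state=42),
                          param_grid, cv=5, scoring='f1_macro')
wqp_clf_f1.fit(wqp_train_sx, wqp_train_y)
print(wqp_clf_f1.best_params_, round(wqp_clf_f1.best_score_, 4))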
Another way of modeling ensemble based methods is boosting. A very popular method is XGBoost, which stands for extreme gradient boosting; it is a variant of the gradient boosting machines (GBM) model. This model is extremely popular in the data science community owing to its superior performance in several data science challenges and competitions, especially on Kaggle. To use this model, you can install the xgboost package in Python. For details on this framework, including installation, model tuning, and much more, feel free to check out its official web site. Credit goes to the Distributed Machine Learning Community, popularly known as DMLC, for creating the XGBoost framework along with the popular MXNet deep learning framework. Gradient boosting uses the principles of the boosting methodology for ensembling, which we discussed earlier, and it uses gradient descent to minimize the error or loss when adding new weak base learners. Going into the details of the model internals would be out of the current scope, but we recommend checking out the official introduction to boosted trees and the principles behind XGBoost. We trained a basic XGBoost model first on our data to obtain a baseline accuracy. After tuning the model with grid search, we trained the model with the following parameter values and evaluated its performance on the test data (detailed step-by-step snippets are available in the Jupyter notebook).

In [ ]: import os
        mingw_path = ':\mingw- \mingw \bin'  # path to the MinGW bin directory (needed by xgboost on Windows)
        os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
        import xgboost as xgb

        # train the model on tuned hyperparameters
        wqp_xgb_model = xgb.XGBClassifier(seed= , max_depth= , learning_rate= , n_estimators= )
        wqp_xgb_model.fit(wqp_train_sx, wqp_train_y)

        # evaluate and predict performance
        wqp_xgb_predictions = wqp_xgb_model.predict(wqp_test_sx)
        meu.display_model_performance_metrics(true_labels=wqp_test_y,
                                              predicted_labels=wqp_xgb_predictions,
                                              classes=wqp_label_names)

Figure -. Model performance metrics for the tuned XGBoost model for wine quality prediction

The model prediction results on the test dataset depict the overall F1-score and model accuracy, as seen in Figure -. Though the random forest performs slightly better, XGBoost definitely performs better than a basic model like a decision tree. Try adding more hyperparameters to tune the model and see if you can get a better model; if you do find one, feel free to send a pull request to our repository!
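Since the exact tuning snippets live in the notebook, the following is a minimal sketch (with assumed, illustrative grid values) of how the XGBoost classifier could be tuned using the same grid search with cross validation workflow we applied to the random forest. xgb.XGBClassifier follows the scikit-learn estimator API, so GridSearchCV works on it directly.

import xgboost as xgb
from sklearn.model_selection import GridSearchCV

# illustrative candidate values; the book's actual tuned values are not reproduced here
xgb_param_grid = {
    'n_estimators': [100, 200, 300],
    'max_depth': [3, 5, 7],
    'learning_rate': [0.05, 0.1, 0.3]
}

xgb_search = GridSearchCV(xgb.XGBClassifier(seed=42),   # assumed seed
                          xgb_param_grid, cv=5, scoring='accuracy')
xgb_search.fit(wqp_train_sx, wqp_train_y)
print(xgb_search.best_params_, round(xgb_search.best_score_, 4))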
We have successfully built a decent wine quality classifier using several techniques and also seen the importance of model tuning and validation. Let's take our best model and run some model interpretation tasks on it to try to understand it better. To start with, we can look at the importance ranks assigned to the various features in the dataset. The following snippet shows comparative feature importance plots using Skater as well as the default feature importances obtained from the scikit-learn model itself. We build Interpretation and InMemoryModel objects using Skater, which will be useful for our future analyses in model interpretation.

In [ ]: from skater.core.explanations import Interpretation
        from skater.model import InMemoryModel

        # leveraging skater for feature importances
        interpreter = Interpretation(wqp_test_sx, feature_names=wqp_feature_names)
        wqp_im_model = InMemoryModel(wqp_rf.predict_proba, examples=wqp_train_sx,
                                     target_names=wqp_rf.classes_)

        # retrieving feature importances from the scikit-learn estimator
        wqp_rf_feature_importances = wqp_rf.feature_importances_
        wqp_rf_feature_names, wqp_rf_feature_scores = zip(*sorted(zip(wqp_feature_names,
                                                                      wqp_rf_feature_importances),
                                                                  key=lambda x: x[1]))

        # plot the feature importance plots
        f, (ax1, ax2) = plt.subplots(1, 2, figsize=( , ))
        f.suptitle('feature importances for random forest', fontsize= )
        f.subplots_adjust(top= , wspace= )

        y_position = list(range(len(wqp_rf_feature_names)))
        ax1.barh(y_position, wqp_rf_feature_scores, height= , align='center',
                 tick_label=wqp_rf_feature_names)
        ax1.set_title("scikit-learn")
        ax1.set_xlabel('relative importance score')
        ax1.set_ylabel('feature')

        # skater plot
        interpreter.feature_importance.plot_feature_importance(wqp_im_model, ascending=False, ax=ax2)
        ax2.set_title("skater")
        ax2.set_xlabel('relative importance score')
        ax2.set_ylabel('feature')

Figure -. Comparative feature importance analysis obtained from our tuned random forest model

We can clearly observe from Figure - that the most important features are consistent across the two plots, which is expected considering we are just using different interfaces on the same model. The top two features are alcohol by volume and volatile acidity content; we will be using them shortly for further analysis. Right now, let's look at the model's ROC curve and the area under curve (AUC) statistics. Plotting a binary classifier's ROC curve is easy, but what do you do when you are dealing with a multi-class classifier (3-class in our case)? There are several ways to do this. You would need to binarize the output; once this is done, you can plot one ROC curve per class label. Besides this, you can also follow two aggregation metrics for computing average ROC measures. Micro-averaging involves plotting an ROC curve over the entire prediction space by considering each predicted element as a binary prediction.
Hence, equal weight is given to each prediction/classification decision. Macro-averaging involves giving equal weight to each class label when averaging. Our model_evaluation_utils module has a nifty customizable function, plot_model_roc_curve(), which can help plot multi-class classifier ROC curves with both micro- and macro-averaging capabilities. We recommend you check out the code, which is pretty self-explanatory. Let's now plot the ROC curve for our random forest classifier.

In [ ]: meu.plot_model_roc_curve(wqp_rf, wqp_test_sx, wqp_test_y)

Figure -. ROC curve for our tuned random forest model

You can see the various ROC plots (per-class and averaged) in Figure - for our tuned random forest model. The AUC is pretty good based on what we see. The dotted lines indicate the per-class ROC curves, and the lines in bold are the macro- and micro-average ROC curves.
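If you want to reproduce these measures without the model_evaluation_utils helper, the core idea is to binarize the class labels and compute one ROC curve per class from the predicted probabilities. The following is a minimal sketch (using plain scikit-learn rather than the helper module) of the per-class and micro-averaged AUC computations described above.

import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

class_labels = wqp_rf.classes_
y_test_bin = label_binarize(wqp_test_y, classes=class_labels)
y_scores = wqp_rf.predict_proba(wqp_test_sx)   # probability per class, columns follow classes_

# one ROC curve (and AUC) per class label
for idx, label in enumerate(class_labels):
    fpr, tpr, _ = roc_curve(y_test_bin[:, idx], y_scores[:, idx])
    print(label, 'AUC:', round(auc(fpr, tpr), 4))

# micro-average: every (sample, class) cell is treated as one binary prediction
fpr_micro, tpr_micro, _ = roc_curve(y_test_bin.ravel(), y_scores.ravel())
print('micro-average AUC:', round(auc(fpr_micro, tpr_micro), 4))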
Let's now revisit our top two most important features, alcohol and volatile acidity, and use them to plot the decision surface/boundary of our random forest model, similar to what we did earlier for our logistic regression based wine type classifier.

In [ ]: feature_indices = [i for i, feature in enumerate(wqp_feature_names)
                           if feature in ['alcohol', 'volatile acidity']]
        meu.plot_model_decision_surface(clf=wqp_rf,
                                        train_features=wqp_train_sx[:, feature_indices],
                                        train_labels=wqp_train_y,
                                        plot_step= , cmap=plt.cm.RdYlBu,
                                        markers=[',', ' ', '+'],
                                        alphas=[ ], colors=[' ', ' ', ' '])

Figure -. Visualizing the model decision surface for our tuned random forest model

The plot depicted in Figure - shows us that the three classes are definitely not as easily distinguishable as in our wine type classifier for red and white wine. Of course, visualizing hypersurfaces with multiple features is difficult compared to visualizing with just the two most important features, but the plot should give you a good idea that the model is able to distinguish reasonably well between the classes, although there is a certain amount of overlap, especially of the medium quality wine samples with the high and low quality rated samples. Let's look at some model prediction interpretations, similar to what we did in the chapter on analyzing movie review sentiments. For this, we will be leveraging Skater to look at model predictions and try to interpret why the model predicted a class label and which features were influential in its decision. First, we build a LimeTabularExplainer object using the following snippet, which will help us in interpreting and explaining predictions.

In [ ]: from skater.core.local_interpretation.lime.lime_tabular import LimeTabularExplainer

        exp = LimeTabularExplainer(wqp_train_sx, feature_names=wqp_feature_names,
                                   discretize_continuous=True,
                                   class_names=wqp_rf.classes_)

Let's now look at two wine sample instances from our test dataset. The first instance is a wine with a low quality rating. We show the interpretation for the predicted class with the maximum probability/confidence using the top_labels parameter; you can set it to 3 to view the same for all three class labels.

In [ ]: exp.explain_instance(wqp_test_sx[ ], wqp_rf.predict_proba,
                             top_labels=1).show_in_notebook()
Figure -. Model interpretation for our wine quality model's prediction of a low quality wine

The results depicted in Figure - show us the features that were primarily responsible for the model predicting the wine quality as low. We can see that the most important feature was alcohol, which makes sense considering what we obtained so far from the feature importance and model decision surface analyses. The values depicted for each corresponding feature are the scaled values obtained after feature scaling. Let's interpret another prediction, this time for a wine of high quality.

In [ ]: exp.explain_instance(wqp_test_sx[ ], wqp_rf.predict_proba,
                             top_labels=1).show_in_notebook()

Figure -. Model interpretation for our wine quality model's prediction of a high quality wine

From the interpretation in Figure -, we can see the features responsible for the model correctly predicting the wine quality as high; the primary feature was again alcohol by volume (besides other features like density, volatile acidity, and so on). You can also notice a stark difference in the scaled values of alcohol for the two instances depicted in Figure - and Figure -.
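show_in_notebook() renders the explanation inline; if you want the feature contributions as data (for logging, or for aggregating across many samples), the underlying LIME explanation object exposes them programmatically. The following is a minimal sketch, assuming Skater's LimeTabularExplainer returns a standard LIME Explanation object with available_labels() and as_list() methods; the row index used here is purely illustrative.

explanation = exp.explain_instance(wqp_test_sx[0],        # illustrative test row index
                                   wqp_rf.predict_proba,
                                   top_labels=1)
predicted_label = explanation.available_labels()[0]       # label with the highest probability
for feature_rule, weight in explanation.as_list(label=predicted_label):
    # positive weights push the prediction toward the label, negative ones away from it
    print(feature_rule, round(weight, 4))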
To wrap up our discussion on model interpretation, we will talk about partial dependence plots and how they are useful in our scenario. In general, partial dependence describes the marginal impact or influence of a feature on the model prediction decision while holding the other features constant. Because it is very difficult to visualize high dimensional feature spaces, typically one or two influential and important features are used to visualize partial dependence plots. The scikit-learn framework has functions like partial_dependence() and plot_partial_dependence(), but unfortunately, as of the time of writing this book, these functions only work on boosting models like GBM. The beauty of Skater is that we can build partial dependence plots for any model, including our tuned random forest model. We will leverage Skater's Interpretation object, interpreter, and the InMemoryModel object, wqp_im_model, based on our random forest model, which we created earlier when we computed the feature importances. The following code depicts one-way partial dependence plots for our model prediction function based on the most important feature, alcohol.

In [ ]: axes_list = interpreter.partial_dependence.plot_partial_dependence(['alcohol'],
                                                                           wqp_im_model,
                                                                           grid_resolution= ,
                                                                           with_variance=True,
                                                                           figsize=( , ))
        axs = axes_list[ ][ :]
        [ax.set_ylim( ) for ax in axs]

Figure -. One-way partial dependence plots for our random forest model predictor based on alcohol

From the plots in Figure -, we can see that with an increase in alcohol content, the confidence/probability of the model predictor increases for predicting the wine to be either medium or high quality, and similarly the probability of the wine being of low quality decreases. This shows there is definitely some relationship between the class predictions and the alcohol content. Also, the influence of alcohol on predictions of the class high is pretty low, which is expected considering there are fewer training samples for high quality wine.
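Under the hood, a one-way partial dependence computation is conceptually simple: sweep the feature of interest over a grid of values, hold every other feature at its observed values, and average the model's predicted class probabilities at each grid point. The following is a minimal sketch of that idea using only NumPy and our trained model; it does not use Skater and assumes wqp_test_sx is a NumPy array of scaled features and wqp_feature_names a matching list of column names, as in the earlier snippets.

import numpy as np

alcohol_idx = list(wqp_feature_names).index('alcohol')
grid = np.linspace(wqp_test_sx[:, alcohol_idx].min(),
                   wqp_test_sx[:, alcohol_idx].max(), num=20)

for value in grid:
    X_mod = wqp_test_sx.copy()
    X_mod[:, alcohol_idx] = value                     # vary alcohol, hold the rest constant
    avg_probs = wqp_rf.predict_proba(X_mod).mean(axis=0)
    print(round(float(value), 2), dict(zip(wqp_rf.classes_, np.round(avg_probs, 3))))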
Let's now plot two-way partial dependence plots to interpret our random forest predictor's dependence on alcohol and volatile acidity, the top two influential features.

In [ ]: plots_list = interpreter.partial_dependence.plot_partial_dependence([('alcohol', 'volatile acidity')],
                                                                            wqp_im_model,
                                                                            n_samples= ,
                                                                            figsize=( , ),
                                                                            grid_resolution= )
        axs = plots_list[ ][ :]
        [ax.set_zlim( ) for ax in axs]

Figure -. Two-way partial dependence plots for our random forest model predictor based on alcohol and volatile acidity

The plots in Figure - bear some resemblance to the one-way plots in Figure -. For predicting high quality wine, some dependency is visible with an increase in alcohol and a corresponding decrease in volatile acidity, but due to the lack of training data it is quite weak, as we can see in the leftmost plot. There also seems to be a strong dependency for the low wine quality class prediction with a corresponding decrease in alcohol and an increase in volatile acidity levels; this is clearly visible in the rightmost plot. The plot in the middle depicts medium wine quality class predictions, where we can observe a strong dependency with a corresponding increase in alcohol and a decrease in volatile acidity levels. This should give you a good foundation for leveraging partial dependence plots to dive deeper into model interpretation.

Summary

In this case study oriented chapter, we processed, analyzed, and modeled a dataset pertaining to wine samples with a focus on type and quality ratings. Special emphasis was placed on exploratory data analysis, which is often overlooked by data scientists in a hurry to build and deploy models. Some background in the domain was explored with regard to the various features in our wine dataset, which were explained in detail. We recommend always exploring the domain in which you are solving problems and taking the help of subject matter experts whenever needed, besides focusing on math, machine learning, and analysis. We looked at multiple ways to analyze and visualize our data and its features, including descriptive and inferential statistics as well as univariate and multivariate analysis. Special techniques for visualizing categorical and multi-dimensional data were explained in detail. The intent is for you to build on these principles and re-use similar principles and code to visualize attributes and relationships on your own datasets in the future. The two main objectives of this chapter were to build predictive models for predicting wine type and quality based on various physicochemical wine attributes. We covered a variety of predictive models, including linear models like logistic regression and complex models including deep neural networks. Besides this, we also covered tree based models like decision trees and ensemble models like random forests and the very popular extreme gradient boosting model. Various aspects of model training, prediction, evaluation, tuning, and interpretation were covered in detail. We recommend that you not only build models but also evaluate them thoroughly with validation metrics, use hyperparameter tuning where necessary, and leverage ensemble modeling to build robust, generalized, and superior models. Special focus was also given to concepts and techniques for interpreting models, including analyzing feature importances, visualizing model ROC curves and decision surfaces, explaining model predictions, and visualizing partial dependence plots.
Analyzing Music Trends and Recommendations

Recommendation engines are probably one of the most popular and well known machine learning applications. A lot of people who don't belong to the machine learning community often assume that recommendation engines are its only use. Although we know that machine learning has a vast subspace where recommendation engines are just one of the candidates, there is no denying their popularity. One of the reasons for this popularity is their ubiquitous nature: anyone who is online, in any way, has been in touch with a recommendation engine in some form or the other. They are used for recommending products on ecommerce sites, travel destinations on travel portals, songs/videos on streaming sites, restaurants on food aggregator portals, etc. This long list underlines their universal application. The popularity of recommendation engines stems from two very important points about them:

- They are easy to implement: Recommendation engines are easy to integrate into an already existing workflow. All we need is to collect some data regarding our user trends and patterns, which normally can be extracted from the business' transactional database.
- They work: This statement is equally true for all the other machine learning solutions that we have discussed, but an important distinction comes from the fact that they have a very limited downside. For example, consider a travel portal that suggests a set of the most popular locations from its dataset. The recommendation system can be a trivial one, but its mere presence in front of the user is likely to generate user interest. The firm can definitely gain by building a sophisticated recommendation engine, but even a very simple one is guaranteed to pay some dividends with minimal investment. This point makes them a very attractive proposition.

This chapter studies how we can use transactional data to develop different types of recommendation engines. We will learn about an auxiliary dataset of a very interesting dataset, "The Million Song Dataset". We will go through user listening history and then use it to develop multiple recommendation engines with varying levels of sophistication.
The Million Song Dataset Taste Profile

The Million Song Dataset is a very popular dataset and is available on the Million Song Dataset website. The original dataset contains quantified audio features of around a million songs ranging over multiple years. The dataset was created as a collaborative project between The Echo Nest and LabROSA. We will not be using this dataset directly, but we will be using some parts of it.

Several other datasets were spawned from the original Million Song Dataset. One of those datasets was called The Echo Nest Taste Profile Subset. This particular dataset was created by The Echo Nest with some undisclosed partners. It contains play counts by anonymous users for songs contained in the Million Song Dataset. The Taste Profile dataset is quite big, as it contains tens of millions of rows of triplets. The triplets contain the following information: (user id, song id, play count). Each row gives the play count of a song, identified by the song ID, for the user identified by the user ID. The overall dataset contains around a million unique users, and several hundred thousand songs from the Million Song Dataset are covered by it.

You can download the dataset from the challenge page of the Million Song Dataset site (the train_triplets.txt.zip file). The compressed dataset is several hundred megabytes in size and, upon decompression, requires a few gigabytes of space. Once you have the data downloaded and uncompressed, you will see how to subset the dataset to reduce its size.

Note: The Million Song Dataset has several other useful auxiliary datasets. We will not be covering them in detail here, but we encourage the reader to explore these datasets and use their imagination for developing innovative use cases.

Exploratory Data Analysis

Exploratory data analysis is an important part of any data analysis workflow; by this time, we have established this fact firmly in the mindset of our readers. Exploratory data analysis becomes even more important in the case of large datasets, as it will often lead us to information that we can use to trim down the dataset a little. As we will see, sometimes we will also go beyond the traditional data access tools to bypass the problems posed by the large size of the data.

Loading and Trimming Data

The first step in the process is loading the data from the uncompressed files. As the data size is a few gigabytes, we will not load the complete data; we will only load a specified number of rows from the dataset. This can be achieved by using the nrows parameter of the read_csv function provided by pandas.

In [ ]: triplet_dataset = pd.read_csv(filepath_or_buffer=data_home+'train_triplets.txt',
                                      nrows= , sep='\t', header=None,
                                      names=['user', 'song', 'play_count'])
Since the dataset doesn't have a header, we also provided the column names to the function. A subset of the data is shown in Figure -.

Figure -. Sample rows from The Echo Nest Taste Profile dataset

The first thing we may want to do with a dataset of this size is determine how many unique users (or songs) we should consider. The original dataset has around a million users, but we want to determine how many of them we actually need to consider. For example, if a small percentage of all the users accounts for a large share of the total play counts, then it would be a good idea to focus our analysis on those users. Usually this can be done by summarizing the dataset by users (or by songs) and taking a cumulative sum of the play counts; then we can find out how many users account for most of the play counts, and so on. But due to the size of the data, the cumulative summation approach in pandas will run into trouble, so we will write code that reads the file line by line and extracts the play count information per user (or per song). This will also serve as a possible method that readers can use in case the dataset size exceeds the memory available on their systems. The code snippet that follows reads the file line by line, extracts the total play count of each user, and persists that information for later use.

In [ ]: output_dict = {}
        with open(data_home+'train_triplets.txt') as f:
            for line_number, line in enumerate(f):
                user = line.split('\t')[0]
                play_count = int(line.split('\t')[2])
                if user in output_dict:
                    play_count += output_dict[user]
                    output_dict.update({user: play_count})
                output_dict.update({user: play_count})
        output_list = [{'user': k, 'play_count': v} for k, v in output_dict.items()]
        play_count_df = pd.DataFrame(output_list)
        play_count_df = play_count_df.sort_values(by='play_count', ascending=False)
        play_count_df.to_csv(path_or_buf='user_playcount_df.csv', index=False)
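We will also need the analogous per-song play counts shortly (they appear below as song_count_df and song_subset). The exact snippet is not reproduced in the text, so the following is a minimal sketch mirroring the user-level pass above; the head() cutoff is an assumed, illustrative value, and pandas is assumed to be imported as pd.

# build per-song play counts with the same line-by-line pass
song_play_counts = {}
with open(data_home + 'train_triplets.txt') as f:
    for line in f:
        song = line.split('\t')[1]                  # second field of the triplet is the song id
        play_count = int(line.split('\t')[2])
        song_play_counts[song] = song_play_counts.get(song, 0) + play_count

song_count_df = pd.DataFrame([{'song': k, 'play_count': v}
                              for k, v in song_play_counts.items()])
song_count_df = song_count_df.sort_values(by='play_count', ascending=False)
song_count_df.to_csv(path_or_buf='song_playcount_df.csv', index=False)

# keep only the most played songs; the cutoff below is illustrative, not the book's value
song_count_subset = song_count_df.head(n=30000)
song_subset = list(song_count_subset.song)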
The persisted dataframes can then be loaded and used based on our requirements; as shown in the sketch above, the same line-by-line strategy gives us play counts for each of the songs. A few lines from the user play count dataframe are shown in Figure -.

Figure -. Play counts for some users

The first thing we want to find out about our dataset is the number of users we need to account for a sizable fraction of the play counts. We have arbitrarily chosen a cutoff to keep the dataset size manageable; you can experiment with these figures to get different sized datasets, and even leverage big data processing and analysis frameworks like Spark on top of Hadoop to analyze the complete dataset. The following code snippet determines the subset of users that accounts for this fraction of the data; in our case, a relatively small group of the heaviest listeners accounts for it, hence we will subset those users.

In [ ]: total_play_count = sum(song_count_df.play_count)
        (float(play_count_df.head(n= ).play_count.sum())/total_play_count)*100
        play_count_subset = play_count_df.head(n= )
        user_subset = list(play_count_subset.user)   # user IDs we will keep (used below to filter the triplets)

In a similar way, we can determine the number of unique songs required to explain the bulk of the total play count. In our case, we will find that a relatively small subset of songs accounts for most of the play count. This is already a great find, as a small fraction of the songs contributes to the majority of the plays. Using a code snippet similar to the one given previously, we can determine the subset of such songs. With these song and user subsets, we can trim our original dataset so that it contains only the filtered users and songs. The code snippet that follows uses these dataframes to filter the original dataset and then persists the resultant dataset for future use.

In [ ]: triplet_dataset = pd.read_csv(filepath_or_buffer=data_home+'train_triplets.txt',
                                      sep='\t', header=None,
                                      names=['user', 'song', 'play_count'])
        triplet_dataset_sub = triplet_dataset[triplet_dataset.user.isin(user_subset)]
        del(triplet_dataset)
        triplet_dataset_sub_song = triplet_dataset_sub[triplet_dataset_sub.song.isin(song_subset)]
        del(triplet_dataset_sub)
        triplet_dataset_sub_song.to_csv(path_or_buf=data_home+'triplet_dataset_sub_song.csv',
                                        index=False)

This subsetting gives us a dataframe with several million rows of triplets. We will use this as the starting dataset for all our future analyses. You can play around with these numbers to arrive at different datasets and possibly different results.

Enhancing the Data

The data we loaded is just the triplet data, so we are not able to see the song name, the artist name, or the album name. We can enhance our data by adding this information about the songs, which is part of the Million Song Dataset and is provided as a SQLite database file. First we download the track_metadata.db file from the Million Song Dataset web page (pages/getting-dataset#subset). The next step is to read this SQLite database into a dataframe and extract the track information by merging it with our triplet dataframe. We will also drop some extra columns that we won't be using for our analysis. The code snippet that follows loads the metadata, joins it with our subsetted triplet data, and drops the extra columns.

In [ ]: import sqlite3

        conn = sqlite3.connect(data_home+'track_metadata.db')
        cur = conn.cursor()
        cur.execute("select name from sqlite_master where type='table'")
        cur.fetchall()

Out[ ]: [('songs',)]

The output of the preceding snippet shows that the database contains a table named songs. We will get all the rows from this table and read them into a dataframe.

In [ ]: # read the songs table into a dataframe and restrict it to our song subset
        # (these two lines are implied by the text above; the exact original code is not shown)
        track_metadata_df = pd.read_sql('select * from songs', con=conn)
        track_metadata_df_sub = track_metadata_df[track_metadata_df.song_id.isin(song_subset)]

        del(track_metadata_df_sub['track_id'])
        del(track_metadata_df_sub['artist_mbid'])
        track_metadata_df_sub = track_metadata_df_sub.drop_duplicates(['song_id'])
        triplet_dataset_sub_song_merged = pd.merge(triplet_dataset_sub_song, track_metadata_df_sub,
                                                   how='left', left_on='song', right_on='song_id')
        triplet_dataset_sub_song_merged.rename(columns={'play_count': 'listen_count'}, inplace=True)
        del(triplet_dataset_sub_song_merged['song_id'])
        del(triplet_dataset_sub_song_merged['artist_id'])
        del(triplet_dataset_sub_song_merged['duration'])
        del(triplet_dataset_sub_song_merged['artist_familiarity'])
        del(triplet_dataset_sub_song_merged['artist_hotttnesss'])
        del(triplet_dataset_sub_song_merged['track_7digitalid'])
        del(triplet_dataset_sub_song_merged['shs_perf'])
        del(triplet_dataset_sub_song_merged['shs_work'])
The final dataset, merged with the triplets dataframe, looks similar to the depiction in Figure -. This will form the starting dataframe for our exploratory data analysis.

Figure -. Play counts dataset merged with songs metadata

Visual Analysis

Before we start developing various recommendation engines, let's do some visual analysis of our dataset. We will try to see what the different trends are regarding the songs, albums, and releases.

Most Popular Songs

The first information that we can plot for our data concerns the popularity of the different songs in the dataset. We will try to determine the top songs in our dataset; a slight modification of this popularity ranking will also serve as our most basic recommendation engine. The following code snippet gives us the most popular songs in our dataset.

In [ ]: import matplotlib.pyplot as plt
        import numpy as np
        plt.rcdefaults()

        popular_songs = triplet_dataset_sub_song_merged[['title', 'listen_count']].groupby('title').sum().reset_index()
        popular_songs_top_ = popular_songs.sort_values('listen_count', ascending=False).head(n= )

        objects = (list(popular_songs_top_['title']))
        y_pos = np.arange(len(objects))
        performance = list(popular_songs_top_['listen_count'])

        plt.bar(y_pos, performance, align='center', alpha= )
        plt.xticks(y_pos, objects, rotation='vertical')
        plt.ylabel('item count')
        plt.title('most popular songs')
        plt.show()
The plot generated by the code snippet is shown in Figure -. It shows that the most popular song in our dataset is "You're the One". We can also search through our track metadata dataframe to see that the band responsible for that particular track is The Black Keys.

Figure -. Most popular songs
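The same groupby-aggregate-sort pattern extends to the other trends mentioned above, such as the most popular artists. The following is a minimal sketch of that idea; it assumes the artist_name column from the songs metadata survived our column pruning, which is an assumption about the merged dataframe rather than something shown explicitly above.

# aggregate listen counts by artist and keep the most played ones
popular_artists = triplet_dataset_sub_song_merged[['artist_name', 'listen_count']] \
                      .groupby('artist_name').sum().reset_index()
popular_artists_top = popular_artists.sort_values('listen_count', ascending=False).head(n=20)
print(popular_artists_top)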