In another variant, we apply different aggregation functions to two different attributes. The agg() function takes a dictionary as input, with attribute names as keys and aggregation functions as values (see the figure below).

Figure: Groupby with different aggregation functions for different attributes

A further variant combines the previous two approaches: we apply multiple aggregations on the price field while applying only a single one on quantity_purchased. Again, a dictionary is passed, as shown in the snippet, and the output is shown in the figure below; a brief illustrative sketch of both variants appears after this discussion.

Figure: Groupby showcasing a complex operation

Apart from groupby() based summarization, other functions such as pivot(), pivot_table(), stack(), unstack(), crosstab(), and melt() provide capabilities to reshape a pandas DataFrame as per requirements. A complete description of these methods, with examples, is available as part of the pandas documentation.
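As a quick illustration of the two agg() variants just described, consider the following minimal, hedged sketch; the toy DataFrame and its column names (user_class, price, quantity_purchased) are assumptions standing in for the chapter's transaction dataset.

    import pandas as pd

    # toy stand-in for the wrangled transaction DataFrame used in this chapter
    df_toy = pd.DataFrame({'user_class': ['new', 'new', 'existing', 'loyal_existing'],
                           'price': [20.0, 35.5, 12.0, 99.9],
                           'quantity_purchased': [1, 3, 2, 5]})

    # different aggregation functions for different attributes
    print(df_toy.groupby('user_class').agg({'price': 'mean',
                                            'quantity_purchased': 'sum'}))

    # multiple aggregations on price, a single one on quantity_purchased
    print(df_toy.groupby('user_class').agg({'price': ['mean', 'max', 'min'],
                                            'quantity_purchased': 'sum'}))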
Data Visualization

Data science is a type of storytelling that involves data as its lead character. As data science practitioners, we work with loads of data that undergo processing, wrangling, and analysis day in and day out for various use cases. Augmenting this storytelling with visual aspects like charts, graphs, maps, and so on not only helps in improving the understanding of the data (and in turn the use case or business problem) but also provides opportunities to find hidden patterns and potential insights.

Data visualization is thus the process of visually representing information in the form of charts, graphs, pictures, and so on for a better and universally consistent understanding. We mention "universally consistent understanding" to point out a very common issue with human languages. Human languages are inherently complex, and depending on the intentions and skills of the writer, the audience may perceive written information in different ways (causing all sorts of problems). Presenting data visually thus provides us with a consistent language to present and understand information (though this, too, is not free from misinterpretation, it does provide a certain consistency). In this section, we begin by utilizing pandas and its capabilities to visually understand data through different visualizations. We then introduce visualizations from the matplotlib perspective.

Note: Data visualization is in itself a popular and deep field of study, utilized across domains. This chapter and section present only a few topics to get us started; this is by no means a comprehensive and detailed guide on data visualization. Interested readers may explore further, though the topics covered here and in coming chapters should be enough for the most common visualization tasks.

Visualizing with Pandas

Data visualization is a diverse field and a science on its own, and the selection of the type of visualization highly depends on the data, the audience, and more. We will continue with our product transaction dataset from the previous section to understand and visualize it. As a quick recap, the dataset at hand consists of transactions indicating the purchase of products by certain users. Each transaction has the following attributes:
- date: the date of the transaction
- price: the price of the product purchased
- product_id: the product identification number
- quantity_purchased: the quantity of product purchased in this transaction
- serial_no: the transaction serial number
- user_id: the identification number of the user performing the transaction
- user_type: the type of user

We wrangle our dataset to clean up the column names, convert attributes to the correct data types, and derive the additional attributes user_class and purchase_week, as discussed in the previous section.

Pandas is a very popular and powerful library, examples of which we have been seeing throughout this chapter. Visualization is another important and widely used feature of pandas. It exposes its visualization capabilities through the plot interface and closely follows matplotlib-style visualization syntax.

Line Charts

We begin by looking at the purchase patterns of the user who has the maximum number of transactions (we leave it as an exercise for you to identify such a user). A trend is best visualized using a line chart. Simply subsetting the DataFrame to the required fields, the plot() interface charts a line chart by default. The following snippet shows the price-wise trend for the given user.

    df[df.user_id == max_user_id][['price']].plot(style='blue')
    plt.title('Price Trends for Particular User')
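The snippet above assumes max_user_id has already been identified (the exercise mentioned earlier). One hedged way to compute it, assuming the wrangled transaction DataFrame df from this section, might be:

    # user with the maximum number of transactions (assumes the wrangled df)
    max_user_id = df['user_id'].value_counts().idxmax()
    print(max_user_id)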
The plt alias refers to matplotlib.pyplot; we will discuss this more in the coming section. For now, assume we require it to add enhancements to plots generated by pandas; in this case, we use it to add a title to our plot. The plot generated is depicted in the following figure.

Figure: Line chart showing the price trend for a user

Though we can see a visual representation of the prices of different transactions by this user, it is not helping us much. Let's now use the line chart again to understand how his/her purchases trend over time (remember, we have the date of each transaction available in the dataset). We use the same plot interface after subsetting the DataFrame to the two required attributes. The following code snippet outlines the process.

    df[df.user_id == max_user_id].plot(x='date', y='price', style='blue')
    plt.title('Price Trends for Particular User Over Time')

This time, since we have two attributes, we inform pandas to use date as our x-axis and price as the y-axis. The plot interface handles datetime data types with elan, as is evident in the output depicted in the following figure.
Figure: Price trend over time for the given user

This time our visualization clearly helps us see the purchase pattern for this user. Though we could discuss insights from this visualization at length, a quick inference is clearly visible: as the plot shows, the user seems to have purchased high-valued items at the start of the year, with a decreasing trend as the year progressed. Also, the transactions at the beginning of the year are more frequent and closer together compared to the rest of the year. We can correlate such details with more data to identify patterns and behaviors; we shall cover more such aspects in coming chapters.

Bar Plots

Having seen trends for a particular user, let's take a look at our dataset at an aggregated level. Since we already have a derived attribute called purchase_week, let's use it to aggregate quantities purchased by users over time. We first aggregate the data at the week level using the groupby() function and then aggregate the attribute quantity_purchased. The final step is to plot the aggregation as a bar plot. The following snippet plots this information.

    df[['purchase_week', 'quantity_purchased']].groupby('purchase_week').sum().plot.barh(color='orange')
    plt.title('Quantities Purchased per Week')

We use the barh() function to prepare a horizontal bar chart. It is similar to the standard bar() plot in terms of the way it represents information; the difference is in the orientation of the plot. The following figure shows the generated output.
Figure: Bar plot representing quantities purchased at a weekly level

Histograms

One of the most important aspects of exploratory data analysis (EDA) is to understand the distribution of the various numerical attributes of a given dataset. The simplest and most common way of visualizing a distribution is through histograms. We plot the price distribution of the products purchased in our dataset as shown in the following snippet.

    df.price.hist(color='green')
    plt.title('Price Distribution')

We use the hist() function to plot the price distribution in a single line of code. The output is depicted in the following figure.

Figure: Histogram representing the price distribution
The output clearly shows a skewed and tailed distribution. This information will be useful when using such attributes in our algorithms; more will become clear when we work on actual use cases. We can take this a step further and try to visualize the price distribution on a per-week basis. We do so by using the parameter by in the hist() function. This parameter groups the data by the attribute mentioned as by and then generates a subplot for each such grouping. In our case, we group by purchase_week, as shown in the following snippet.

    df[['price', 'purchase_week']].hist(by='purchase_week', sharex=True)

The output, depicted in the following figure, showcases the distribution of price on a weekly basis, with the highest bin clearly marked in a different color.

Figure: Histograms on a weekly basis

Pie Charts

One of the most common questions while understanding data or extracting insights is to know which type or category contributes the most. To visualize percentage distributions, pie charts are best utilized. For our dataset, the following snippet helps us visualize which user type purchased how much.

    class_series = df.groupby('user_class').size()
    class_series.name = 'User Class Distribution'
    class_series.plot.pie(autopct='%.2f')
    plt.title('User Class Share')
    plt.show()
The previous snippet uses groupby() to extract a series representing the number of transactions at a per-user_class level. We then use the pie() function to plot the percentage distribution, and the autopct parameter to annotate the plot with the actual percentage contribution of each user_class. The following figure depicts the output pie chart.

Figure: Pie chart representing the user class transaction distribution

The plot clearly points out that new users account for the largest share of total transactions, while existing and loyal_existing users make up the remainder. We do not recommend using pie charts, especially when you have more than three or four categories; use bar charts instead.

Box Plots

Box plots are important visualizations that help us understand the quartile distribution of numerical data. A box plot, or box-whisker plot, is a concise representation that helps us understand the different quartiles, skewness, dispersion, and outliers in the data.
We'll look at the attributes quantity_purchased and purchase_week using box plots. The following snippet generates the required plot.

    df[['quantity_purchased', 'purchase_week']].plot.box()
    plt.title('Quantity and Week Value Distribution')

Now we'll look at the plots generated (see the following figure). The bottom edge of the box in a box plot marks the first quartile, while the top one marks the third. The line in the middle of the box marks the second quartile, or the median. The top and bottom whiskers extending from the box mark the range of values; outliers are marked beyond the whisker boundaries. In our example, for quantity_purchased the median is quite close to the middle of the box, while for purchase_week it sits toward the bottom (clearly pointing out the skewness in the data). You are encouraged to read more about box plots for an in-depth understanding.

Figure: Box plots using pandas

Scatter Plots

Scatter plots are another class of visualizations, usually used to identify correlations or patterns between attributes. Like most visualizations we have seen so far, scatter plots are also available through the plot() interface of pandas. To understand scatter plots, we first need to perform a couple of data wrangling steps to get our data into the required shape. We first encode user_class with dummy encoding (as discussed in the previous section) using map(), and then get the mean price and count of transactions at a per-week, per-user_class level using groupby(). The following snippet prepares the DataFrame.

    # encoding values shown here are illustrative
    uclass_map = {'new': 1, 'existing': 2, 'loyal_existing': 3, 'error': 0}
    df['enc_uclass'] = df.user_class.map(uclass_map)
    bubble_df = df[['enc_uclass', 'purchase_week',
                    'price', 'product_id']].groupby(['purchase_week',
                                                     'enc_uclass']).agg({'price': 'mean',
                                                                         'product_id': 'count'}).reset_index()
    bubble_df.rename(columns={'product_id': 'total_transactions'}, inplace=True)

Figure: DataFrame aggregated at a per-week, per-user_class level

The preceding figure showcases the resultant DataFrame. Now let's visualize this data using a scatter plot. The following snippet does the job for us.

    bubble_df.plot.scatter(x='purchase_week', y='price')
    plt.title('Purchase Week vs. Price')
    plt.show()
This generates the plot in the following figure, showcasing an almost random spread of data across weeks and average price, with some slight concentration in the top left of the plot.

Figure: Scatter plot showing the spread of data across purchase_week and price

A scatter plot also provides the capability to visualize more than these basic dimensions. We can plot third and fourth dimensions using color and size. The following snippet helps us understand the spread, with color denoting the user_class and the size of each bubble indicating the number of transactions.

    bubble_df.plot.scatter(x='purchase_week',
                           y='price',
                           c=bubble_df['enc_uclass'],
                           s=bubble_df['total_transactions'] * 10)
    plt.title('Purchase Week vs. Price per User Class Based on Tx')
The parameters are self-explanatory: c represents color, while s stands for the size of the bubble. Such plots are also called bubble charts. The output generated is shown in the following figure.

Figure: Scatter plot visualizing multi-dimensional data

In this section we utilized pandas to plot all sorts of visualizations. These were some of the most widely used visualizations, and pandas provides a lot of flexibility to do more with them. There is also an extended list of plots that can be generated using pandas; the complete information is available in the pandas documentation.

Visualizing with Matplotlib

Matplotlib is a popular plotting library. It provides interfaces and utilities to generate publication-quality visualizations. Since its first version, matplotlib has been continuously improved by its active developer community, and it forms the base and inspiration of many other plotting libraries. As discussed in the previous section, pandas, along with SciPy (another popular Python library for scientific computing), provides wrappers over matplotlib implementations for ease of visualizing data.

Matplotlib provides two primary modules to work with: pylab and pyplot. In this section we will concentrate only on the pyplot module (the use of pylab is not much encouraged). The pyplot interface is an object-oriented interface that favors explicit instantiations, as opposed to pylab's implicit ones.

In the previous section we briefly introduced different visualizations and saw a few ways of tweaking them as well. Since pandas visualizations are derived from matplotlib itself, we will cover additional concepts and capabilities of matplotlib. This will enable you not only to use matplotlib with ease but also to improve the visualizations generated using pandas.
Figures and Subplots

First things first: the base of any matplotlib-style visualization begins with figure and subplot objects. The figure module helps matplotlib generate the plotting-window object and its associated elements; in short, it is the top-level container for all visualization components. In matplotlib syntax, a figure is the top-most container, and within one figure we have the flexibility to visualize multiple plots. Thus, subplots are the plots within the high-level figure container.

Let's get started with a simple example and the required imports. We will then build on the example to better understand the concept of figures and subplots. The following snippet imports the pyplot module of matplotlib and plots a simple sine curve, using numpy to generate the x and y values.

    import numpy as np
    import matplotlib.pyplot as plt

    # sample plot
    x = np.linspace(-10, 10, 50)
    y = np.sin(x)
    plt.plot(x, y)
    plt.title('Sine Curve using matplotlib')
    plt.xlabel('x-axis')
    plt.ylabel('y-axis')

The pyplot module exposes methods such as plot() to generate visualizations. In the example, with plt.plot(x, y), matplotlib works behind the scenes to generate the figure and axes objects needed to output the plot shown in the following figure. For completeness' sake, the statements plt.title(), plt.xlabel(), and so on provide ways to set the figure title and axis labels, respectively.

Figure: Sample plot
Now that we have a sample plot done, let's look at how different objects interact in the matplotlib universe. As mentioned, the figure object is the top-most container of all elements. Before complicating things, we begin by plotting different figures, i.e., each figure containing only a single plot. The following snippet plots a sine and a cosine wave in two different figures using numpy and matplotlib.

    # first figure
    plt.figure(1)
    plt.plot(x, y)
    plt.title('Fig1: Sine Curve')
    plt.xlabel('x-axis')
    plt.ylabel('y-axis')

    # second figure
    plt.figure(2)
    y = np.cos(x)
    plt.plot(x, y)
    plt.title('Fig2: Cosine Curve')
    plt.xlabel('x-axis')
    plt.ylabel('y-axis')

The statement plt.figure() creates an instance of type figure; the number passed in as a parameter is the figure identifier, which is helpful for referring to the same figure when multiple figures exist. The rest of the statements are similar to our sample plot, with pyplot always drawing to the current figure object. Note that the moment a new figure is instantiated, pyplot refers to the newly created object unless specified otherwise. The output generated is shown in the following figure.
Figure: Multiple figures using matplotlib

We plot multiple figures while telling the data story for a use case. Yet there are cases where we need multiple plots in the same figure; this is where the concept of subplots comes into the picture. A subplot divides a figure into a grid of specified rows and columns and also provides interfaces to interact with the plotting elements. Subplots can be generated in a few different ways, and their use depends on personal preference and use-case demands.

We begin with the most intuitive one, the add_subplot() method. This method is exposed through the figure object itself; its parameters help define the grid layout and other properties. The following snippet generates four subplots in a figure.

    y = np.sin(x)
    figure_obj = plt.figure(figsize=(8, 6))
    ax1 = figure_obj.add_subplot(2, 2, 1)
    ax1.plot(x, y)
    ax2 = figure_obj.add_subplot(2, 2, 2)
    ax3 = figure_obj.add_subplot(2, 2, 3)
    ax4 = figure_obj.add_subplot(2, 2, 4)
    ax4.plot(x + 10, y)

This snippet first defines a figure object using plt.figure(). We then get an axes object pointing to the first subplot, generated using the statement figure_obj.add_subplot(2, 2, 1). This statement divides the figure into two rows and two columns; the last parameter (value 1) points to the first subplot in this grid. The snippet simply plots the sine curve in the top-left subplot (identified as 2, 2, 1) and another sine curve, shifted along the x-axis, in the fourth subplot (identified as 2, 2, 4). The output generated is shown in the following figure.

Figure: Subplots using the add_subplot method

The second method for generating subplots is through the pyplot module directly. The pyplot module exposes a method subplots(), which returns a figure object and a list of axes objects, each of which points to a subplot in the layout mentioned in the subplots() parameters. This method is useful when we know up front how many subplots will be required. The following snippet showcases the same.

    fig, ax_list = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
    y = np.sin(x)
    ax_list[0].plot(x, y)
    y = np.cos(x)
    ax_list[1].plot(x, y)

The statement plt.subplots(2, 1, sharex=True) does three things in one go. First, it generates a figure object, which is divided into two rows and one column (i.e., two subplots in total). Second, the two subplots are returned in the form of a list of axes objects. The third thing is the sharing of the x-axis, which we achieve using the parameter sharex. This sharing of the x-axis allows all subplots in this figure
to have the same x-axis. This lets us view the data on the same scale, along with aesthetic improvements. The output, depicted in the following figure, showcases sine and cosine curves on the same x-axis.

Figure: Subplots using the subplots() method

Another variant is the subplot() function, which is also exposed through the pyplot module directly. It closely emulates the add_subplot() method of the figure object; you can find examples in the code for this chapter.

Before moving on to other concepts, we quickly touch on the subplot2grid() function, also exposed through the pyplot module. This function provides capabilities similar to the ones already discussed, along with finer control to define grid layouts where subplots can span an arbitrary number of rows and columns. The following snippet showcases a grid with subplots of different sizes.

    y = np.abs(x)
    z = x ** 2
    plt.subplot2grid((4, 3), (0, 0), rowspan=4, colspan=2)
    plt.plot(x, y, 'b', x, z, 'r')
    ax2 = plt.subplot2grid((4, 3), (0, 2), rowspan=2)
    plt.plot(x, y, 'b')
    plt.setp(ax2.get_xticklabels(), visible=False)
    plt.subplot2grid((4, 3), (2, 2), rowspan=2)
    plt.plot(x, z, 'r')
The subplot2grid() function takes a number of parameters, explained as follows:
- shape: a tuple representing the rows and columns in the grid, as (rows, columns)
- loc: a tuple representing the location of the subplot; this parameter is 0-indexed
- rowspan: the number of rows the subplot covers
- colspan: the number of columns the subplot extends to

In the output generated by the snippet (see the following figure), one subplot covers four rows and two columns and contains two functions; the other two subplots cover two rows and one column each.

Figure: Subplots using subplot2grid()

Plot Formatting

Formatting a plot is another important aspect of storytelling, and matplotlib provides plenty of features here. From changing colors to markers and more, matplotlib provides easy-to-use, intuitive interfaces. We begin with the color attribute, which is available as part of the plot() interface. The color attribute works on the RGBA specification, allowing us to provide alpha and color values as strings ('red', 'green', and so on), as single letters ('r', 'g', and so on), and even as hex values. More details are available in the matplotlib documentation. The following example (output in the figure that follows) illustrates how easy it is to set the color and alpha properties of plots.

    # color
    ax1 = plt.subplot(121)
    plt.plot(x, y, color='green')
    ax1.set_title('Line Color')

    # alpha
    ax2 = plt.subplot(122, sharex=ax1)
    alpha = plt.plot(x, y)
    alpha[0].set_alpha(0.3)
    ax2.set_title('Line Alpha')
    plt.setp(ax2.get_yticklabels(), visible=False)

Figure: Setting the color and alpha properties of a plot

Along the same lines, we also have options to use different shapes to mark data points, as well as different styles to draw lines. These options come in handy when representing different attributes/classes on the same plot. The following snippet (output in the figure that follows) showcases the same.

    # markers: '+', '.', 'o', '*', '^', and so on
    ax3 = plt.subplot(121, sharex=ax1)
    plt.plot(x, y, marker='*')
    ax3.set_title('Point Marker')

    # linestyles: '-', '--', '-.', ':', 'steps'
    ax4 = plt.subplot(122, sharex=ax1)
    plt.plot(x, y, linestyle='--')
    ax4.set_title('Line Style')
    plt.setp(ax4.get_yticklabels(), visible=False)
Figure: Setting the marker and line style properties of a plot

Though there are many more fine-tuning options available, you are encouraged to go through the documentation for detailed information. We conclude the formatting section with two final tricks, related to line width and a shorthand notation to do it all quickly, as shown in the following snippet.

    # line width
    ax5 = plt.subplot(121, sharex=ax1)
    line = plt.plot(x, y)
    line[0].set_linewidth(3.0)
    ax5.set_title('Line Width')

    # combine color, marker, and linestyle shorthand
    ax6 = plt.subplot(122, sharex=ax1)
    plt.plot(x, y, 'b^')
    ax6.set_title('Styling Shorthand')
    plt.setp(ax6.get_yticklabels(), visible=False)

This snippet uses the line object returned by the plot() function to set the line width. The second part of the snippet showcases the shorthand notation to set the line color and data point marker in one go, as 'b^'. The output shown in the following figure helps illustrate this effect.
Figure: Example showing line width and shorthand notation

Legends

A graph legend is a key that helps us map colors/shapes or other attributes to the different attributes being visualized by the plot. Though in most cases matplotlib does a wonderful job of preparing and showing legends, there are times when we require a finer level of control. The legend of a plot can be controlled using the legend() function, available through the pyplot module directly. We can set the location, size, and other formatting attributes through this function. The following example shows the legend being placed in the best possible location.

    y1 = x ** 2
    y2 = x ** 3
    plt.plot(x, y1, 'g', label='y = x^2')
    plt.plot(x, y2, 'b:', label='y = x^3')
    plt.legend(loc="best")
    plt.title('Legend Sample')
Figure: Sample plot with legend

One of the primary goals of matplotlib is to provide publication-quality visualizations. Matplotlib supports LaTeX-style formatting of legends to cleanly visualize mathematical symbols and equations; the $ symbol is used to mark the start and end of LaTeX-style formatting. The same is shown in the following snippet and output plot.

    # legend with latex formatting
    plt.plot(x, y1, 'g', label='$y = x^2$')
    plt.plot(x, y2, 'b:', linewidth=3, label='$y = x^3$')
    plt.legend(loc="best", fontsize='x-large')
    plt.title('Legend with $LaTeX$ formatting')

Figure: Sample plot with LaTeX-formatted legend
Axis Controls

The next feature from matplotlib is the ability to control the x- and y-axes of a plot. Apart from basic features like setting the axis labels and colors using the methods set_xlabel() and set_ylabel(), there are finer controls available as well. Let's first see how to add a secondary y-axis. There are many scenarios where we plot data related to different features (having values at different scales) on the same plot; to get a proper understanding, it usually helps to have each feature on its own y-axis (each scaled to its respective range). To get an additional y-axis, we use the function twinx(), exposed through the axes object. The following snippet outlines the scenario.

    # axis controls: secondary y-axis
    fig, ax1 = plt.subplots()
    ax1.plot(x, y, 'g')
    ax1.set_ylabel("Primary y-axis", color="green")
    ax2 = ax1.twinx()
    ax2.plot(x, y2, 'b:', linewidth=3)
    ax2.set_ylabel("Secondary y-axis", color="blue")
    plt.title('Secondary Axis')

At first it may sound odd to have a function named twinx() generate a secondary y-axis. Smartly, matplotlib names it this way to point out that the additional y-axis shares the same x-axis, hence the name twinx(). Along the same lines, an additional x-axis is obtained using the function twiny(). The output plot is depicted in the following figure.

Figure: Sample plot with a secondary y-axis
By default, matplotlib identifies the range of values being plotted and adjusts the ticks and the range of both the x- and y-axes. It also provides the capability to set these manually through the axis() function. Through this function, we can set the axis range using predefined keywords like 'tight', 'scaled', and 'equal', or by passing a list that marks the values as [xmin, xmax, ymin, ymax]. The following snippet shows how to adjust the axis range manually.

    # manual axis range (assumes x contains positive values for the log curves)
    y = np.log(x)
    z = np.log2(x)
    w = np.log10(x)
    plt.plot(x, y, 'r', x, z, 'g', x, w, 'b')
    plt.axis([0, 2, -1, 2])
    plt.title('Manual Axis Range')

The output in the following figure showcases the plot generated without any axis adjustment on the left, while the right one shows the axis adjustment done in the previous snippet.

Figure: Plots showcasing the default axis and a manually adjusted axis

Now that we have seen how to set the axis range, we will quickly touch on setting the ticks, or axis markers, manually as well. For axis ticks, we have two separate functions available: one for setting the range of the ticks, and a second for setting the tick labels. The functions are intuitively named set_ticks() and set_ticklabels(), respectively. In the following example, we set the ticks to be marked for the x-axis, while for the y-axis we set both the tick range and the labels using the appropriate functions.

    # manual ticks
    plt.plot(x, y)
    ax = plt.gca()
    ax.xaxis.set_ticks(np.arange(-2, 2))
    ax.yaxis.set_ticks(np.arange(0, 5))
    ax.yaxis.set_ticklabels(["min", 1, 2, 3, "max"])
    plt.grid(True)
    plt.title("Manual ticks on the x-axis")
The output is a plot with the x-axis ticks restricted to a small range, while the y-axis has its range and labels changed manually. The output plot is shown in the following figure.

Figure: Plot showcasing axes with manual ticks

Before we move on to the next set of features, it is worth noting that matplotlib also gives us the capability to scale an axis based on the data range in a standard manner, apart from setting it manually (as seen previously). The following is a quick snippet scaling the y-axis on a log scale; the output is shown in the figure that follows.

    # scaling: possible values include 'log', 'logit', and 'symlog'
    plt.plot(x, y)
    ax = plt.gca()
    ax.set_yscale("log")
    plt.grid(True)
    plt.title("Log Scaled Axis")

Figure: Plot showcasing a log-scaled y-axis
Annotations

The text() interface from the pyplot module exposes the annotation capabilities of matplotlib. We can annotate any part of the figure/plot/subplot using this interface. It takes the x and y coordinates, the text to be displayed, alignment, and fontsize parameters as inputs to place the annotations at the desired place on the plot. The following snippet annotates the minima of a parabolic plot.

    # annotations
    y = x ** 2
    min_x = 0
    min_y = min_x ** 2
    plt.plot(x, y, "b-", min_x, min_y, "ro")
    plt.axis([-10, 10, -25, 100])
    plt.text(0, 60, "Parabola\n$y = x^2$", fontsize=15, ha="center")
    plt.text(min_x, min_y + 2, "Minima", ha="center")
    plt.text(min_x, min_y - 6, "(%0.1f, %0.1f)" % (min_x, min_y), ha='center', color='gray')
    plt.title("Annotated Plot")

The text() interface provides many more capabilities and formatting features; you are encouraged to go through the official documentation and examples for details. The output plot showcasing the annotated parabola is shown in the following figure.

Figure: Plot showcasing annotations
Global Parameters

To maintain consistency, we usually try to keep plot sizes, fonts, and colors the same across a visual story. Setting each of these attributes for every plot creates complexity and makes the code base difficult to maintain. To overcome these issues, we can set formatting settings globally, as shown in the following snippet.

    # global formatting params
    params = {'legend.fontsize': 'large',
              'figure.figsize': (10, 10),
              'axes.labelsize': 'large',
              'axes.titlesize': 'large',
              'xtick.labelsize': 'large',
              'ytick.labelsize': 'large'}
    plt.rcParams.update(params)

Once set using rcParams.update(), the attributes provided in the params dictionary are applied to every plot generated. You are encouraged to apply these settings and generate the plots discussed in this section again to understand the difference.

Python Visualization Ecosystem

The matplotlib library is, without a doubt, a very powerful and popular visualization/plotting library. It provides most of the tools and tricks required to plot any type of data, with the capability to control even the finest elements. Yet matplotlib leaves a lot to be desired, even by pro users. Being a low-level API, it requires a lot of boilerplate code, interactivity is limited, and the styling and other formatting defaults seem dated. To address these issues and provide high-level interfaces that work with the current Python ecosystem, the Python universe has quite a few visualization libraries to choose from; some of the most popular and powerful ones are bokeh, seaborn, ggplot, and plotly. Each of these libraries builds on the understanding and feature set of matplotlib while providing its own set of features and easy-to-use wrappers to plug the gaps. You are encouraged to explore these libraries and understand the differences; we will introduce some of them in the coming chapters as and when required. Though different, most libraries work on concepts similar to matplotlib, so the learning curve is shorter if you're well versed in matplotlib.

Summary

This chapter covered quite a lot of ground in terms of understanding, processing, and wrangling data. We covered major data formats like flat files (CSV, JSON, XML, HTML, etc.) and used standard libraries to extract and collect data. We touched on standard data types and their importance in the overall process of data science. A major part of this chapter covered data wrangling tasks to transform, clean, and process data so as to bring it into usable form. Though the techniques were explained using the pandas library, the concepts are universal and apply to most data science related use cases; you may use these techniques as pointers that can easily be applied using different libraries and programming/scripting languages. We covered the major plots using sample datasets, describing their usage, and also touched on the basics and some powerful tricks of matplotlib. We strongly encourage you to read the referenced links for an in-depth understanding. This chapter covers the initial steps of the CRISP-DM model: data collection, processing, and visualization. In the coming chapters we build on these concepts and apply them to solving specific real-world problems. Stay tuned!
Feature Engineering and Selection

Building machine learning systems and pipelines takes significant effort, which is evident from the knowledge you gained in the previous chapters. In the first chapter, we presented a high-level architecture for building machine learning pipelines. The path from data to insights and information is not an easy and direct one; it is tough and iterative in nature, requiring data scientists and analysts to reiterate through several steps multiple times to get to the right model and derive correct insights. A limitation of machine learning algorithms is that they can only understand numerical values as inputs. This is because, at the heart of any algorithm, we usually have multiple mathematical equations, constraints, optimizations, and computations. Hence it is almost impossible for us to feed raw data into any algorithm and expect results. This is where features and attributes are extremely helpful in building models on top of our data.

Building machine intelligence is a multi-layered process with multiple facets. In this book, so far, we have already explored how you can retrieve, process, wrangle, and visualize data. Exploratory data analysis and visualization are the first steps toward understanding your data better. Understanding your data involves understanding the complete scope encompassing your data, including the domain, constraints, caveats, quality, and available attributes. From the previous chapters you might remember that data is comprised of multiple fields, attributes, or variables. Each attribute by itself is an inherent feature of the data. You can then derive further features from these inherent features, and this itself forms a major part of feature engineering. Feature selection is another important task that goes hand in hand with feature engineering, where the data scientist is tasked with selecting the best possible subset of features and attributes that would help in building the right model.

An important point to remember is that feature engineering and selection is not a one-time process to be carried out in an ad hoc manner. The nature of building machine learning systems is iterative (following the CRISP-DM principle), and hence extracting and engineering features from the dataset is not a one-time task; you may need to extract new features and try out multiple selections each time you build a model to get the best and most optimal model for your problem.

Data processing and feature engineering is often described by data scientists as the toughest task or step in building any machine learning system. With the need for both domain knowledge and mathematical transformations, feature engineering is often said to be both an art and a science. The obvious complexities involve dealing with diverse types of data and variables. Besides this, each machine learning problem or task needs specific features, and there is no one-size-fits-all solution in the case of feature engineering. This makes feature engineering all the more difficult and complex. Hence we follow a properly structured approach in this chapter, covering the following three major areas of the feature engineering workflow:
- Feature extraction and engineering
- Feature scaling
- Feature selection
This chapter covers essential concepts for all three major areas mentioned above. Techniques for feature engineering will be covered in detail for diverse data types, including numeric, categorical, temporal, text, and image data. We would like to thank our good friend and fellow data scientist, Gabriel Moreira, for helping us with some excellent compilations of feature engineering techniques over these diverse data types. We also cover different feature scaling methods, typically used as part of the feature engineering process to normalize values, preventing higher-valued features from taking unnecessary prominence. Several feature selection techniques, like filter, wrapper, and embedded methods, will also be covered. Techniques and concepts will be supplemented with sufficient hands-on examples and code snippets. Remember to check out the relevant code for this chapter in the GitHub repository at practical-machine-learning-with-python, which contains the necessary code, notebooks, and data. This will make things easier to understand, help you gain enough knowledge to know which technique should be used in which scenario, and thus help you get started on your own journey toward feature engineering for building machine learning models!

Features: Understand Your Data Better

The essence of any machine learning model is comprised of two components, namely data and algorithms. You might remember the same from the machine learning paradigm introduced in the first chapter. Any machine learning algorithm is, at its essence, a combination of mathematical functions, equations, and optimizations, often augmented with business logic as needed. These algorithms are not intelligent enough to process raw data and discover the latent patterns that would be used to train the system. Hence we need better data representations for building machine learning models; these are also known as data features or attributes. Let's look at some important concepts associated with data and features in this section.

Data and Datasets

Data is essential for analytics and machine learning; without data we are literally powerless to implement any intelligent system. The formal definition of data would be a collection or set of qualitative and/or quantitative variables containing values based on observations. Typically, data is measured and collected from various observations and then stored in its raw form, which can be processed further and analyzed as required. Typically, in any analytics or machine learning system, you might need multiple sources of data, and processed data from one component can be fed as raw data to another component for further processing. Data can be structured, having definite rows and columns indicating observations and attributes, or unstructured, like free textual data.

A dataset can be defined as a collection of data. Typically, this indicates data present in the form of flat files like CSV files or MS Excel files, relational database tables or views, or even raw two-dimensional data matrices. Sample datasets that are quite popular in machine learning are available in the scikit-learn package so you can quickly get started; the sklearn.datasets module has these sample datasets readily available, along with other utilities pertaining to loading and handling datasets. You can find more details and best practices for handling and loading data in the scikit-learn documentation. Another popular resource for machine learning datasets is the UC Irvine Machine Learning Repository (archive.ics.uci.edu/ml/index.php), which contains a wide variety of datasets from real-world problems, scenarios, and devices.
In fact, the popular machine learning and predictive analytics competition platform Kaggle also features some datasets from UCI, along with other datasets pertaining to various competitions. Feel free to check out these resources; we will in fact be using some datasets from them in this chapter as well as in subsequent chapters. A minimal sketch of loading one of the scikit-learn sample datasets follows.
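The sketch below, for illustration only, loads the classic Iris dataset bundled with scikit-learn and inspects its feature matrix and target labels; load_iris() is a standard sklearn.datasets loader, and the attributes printed are the ones exposed by the returned Bunch object.

    from sklearn.datasets import load_iris

    # load a bundled sample dataset and inspect its contents
    iris = load_iris()
    print(iris.feature_names)   # names of the four numeric features
    print(iris.data.shape)      # feature matrix: (150 observations, 4 features)
    print(iris.target[:5])      # class labels for the first five observations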
Features

Raw data is hardly ever used directly to build a machine learning model, mostly because algorithms can't work with data that has not been properly processed and wrangled into the desired format. Features are attributes or properties obtained from raw data; each feature is a specific representation on top of the raw data. Typically, each feature is an individual measurable attribute, usually depicted by a column in a two-dimensional dataset. Each observation is depicted by a row, and each feature has a specific value for an observation. Thus each row typically indicates a feature vector, and the entire set of features across all observations forms a two-dimensional feature matrix, also known as a feature set. Features are extremely important for building machine learning models; each feature represents a specific chunk of representation and information from the data that is used by the model. Both the quality and the quantity of features influence the performance of the model.

Features can be of two major types based on the dataset. Inherent raw features are obtained directly from the dataset with no extra data manipulation or engineering. Derived features are usually what we obtain from feature engineering, where we extract features from existing data attributes. A simple example would be creating a new feature, age, from an employee dataset containing birth dates, by simply subtracting the birth date from the current date. The next major section covers more details on how to handle, extract, and engineer features based on diverse data types.

Models

Features are better representations of the underlying raw data, which act as inputs to any machine learning model. Typically, a model is comprised of data features, optional class labels or numeric responses for supervised learning, and a machine learning algorithm. The algorithm is chosen based on the type of problem we want to solve, after converting it into a specific machine learning task. Models are built by training the system on data features iteratively until we get the desired performance. Thus, a model basically represents the relationships among the various features of our data. The process of modeling typically involves multiple major steps. Model building focuses on training the model on data features. Model tuning and optimization involves tuning specific model parameters, known as hyperparameters, and optimizing the model to get the best one. Model evaluation involves using standard performance metrics, like accuracy, to evaluate model performance. Model deployment is usually the final step where, once we have selected the most suitable model, we deploy it live in production, which usually involves building an entire system around this model based on the CRISP-DM methodology. Later chapters will focus on these aspects in further detail.

Revisiting the Machine Learning Pipeline

We covered the standard machine learning pipeline, based on the CRISP-DM standard, in detail earlier. Let's refresh our memory by looking at the following figure, which depicts our standard, generic machine learning pipeline with the major components identified by their building blocks.
Figure: Revisiting our standard machine learning pipeline

The figure clearly depicts the main components in the pipeline, which you should already be well versed in by now. These components are mentioned once more for ease of understanding:
- Data retrieval
- Data preparation
- Modeling
- Model evaluation and tuning
- Model deployment and monitoring

Our area of focus in this chapter falls under the "data preparation" block. We already covered processing and wrangling data in detail; here, we focus on the three major steps essential to handling data features:
- Feature extraction and engineering
- Feature scaling
- Feature selection

These blocks are essential to the process of transforming processed data into features. By processed, we mean raw data after going through the necessary preprocessing and wrangling operations. The sequence of steps usually followed in the pipeline for transforming processed data into features is depicted in a more detailed view in the following figure.

Figure: Standard pipeline for feature engineering, scaling, and selection
It is quite evident from the sequence of steps depicted in the figure that features are first crafted and engineered, necessary normalization and scaling is performed, and finally the most relevant features are selected to give us the final set of features. We will cover these three components in detail in subsequent sections, following the same sequence as depicted in the figure.

Feature Extraction and Engineering

The process of feature extraction and engineering is perhaps the most important one in the entire machine learning pipeline. Good features, depicting the most suitable representations of the data, help in building effective machine learning models. In fact, more often than not, it's not the algorithms but the features that determine the effectiveness of the model. In simple words, good features give good models. A data scientist spends the majority of his or her time on data processing, wrangling, and feature engineering when building any machine learning model. Hence it's of paramount importance to understand all aspects pertaining to feature engineering if you want to be proficient in machine learning.

Typically, feature extraction and feature engineering are synonyms that indicate the process of using a combination of domain knowledge, hand-crafted techniques, and mathematical transformations to convert data into features. Henceforth we will use the term feature engineering to refer to all aspects concerning the task of extracting or creating new features from data. While the choice of machine learning algorithm is very important when building a model, more often than not the choice and number of features tend to have more impact on model performance. In this section, we look to answer the what, why, and how of feature engineering to gain a more in-depth understanding of it.

What Is Feature Engineering?

We already informally explained the core concept behind feature engineering: we use specific components of domain knowledge and specific techniques to transform data into features. Data in this case is raw data after the necessary pre-processing and wrangling mentioned earlier; this includes dealing with bad data, imputing missing values, transforming specific values, and so on. Features are the final end result of the process of feature engineering, depicting various representations of the underlying data.

Let's now look at a couple of definitions and quotes relevant to feature engineering from several renowned people in the world of data science. Renowned computer and data scientist Andrew Ng says the following about machine learning and feature engineering:

"Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." --Prof. Andrew Ng

This reinforces what we mentioned earlier about data scientists spending most of their time engineering features, which is a difficult and time-consuming process requiring both domain knowledge and mathematical computations. Besides this, practical or applied machine learning is mostly feature engineering, because the time taken in building and evaluating models is considerably less than the total time spent on feature engineering. However, this doesn't mean that modeling and evaluation are any less important than feature engineering.
We will now look at a definition of feature engineering by Dr. Jason Brownlee, a data scientist and ML practitioner who provides a lot of excellent resources on machine learning and data science at machinelearningmastery.com. Dr. Brownlee defines feature engineering as follows:

"Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data." --Dr. Jason Brownlee

Let's spend some more time on this definition. It tells us that the process of feature engineering involves transforming data into features while taking into account several aspects pertaining to the problem, the model, performance, and the data. These aspects, highlighted in the definition, are explained in further detail as follows:
- Raw data: This is data in its native form after retrieval from the source. Typically some amount of data processing and wrangling is done before the actual process of feature engineering.
- Features: These are the specific representations obtained from the raw data after the process of feature engineering.
- The underlying problem: This refers to the specific business problem or use case we want to solve with the help of machine learning. The business problem is typically converted into a machine learning task.
- The predictive models: Typically, feature engineering is used for extracting features to build machine learning models that learn about the data and the problem to be solved from these features. Supervised predictive models are widely used for solving diverse problems.
- Model accuracy: This refers to the model performance metrics used to evaluate the model.
- Unseen data: This is new data that was not used previously to build or train the model. The model is expected to learn and generalize well to unseen data based on good quality features.

Thus feature engineering is the process of transforming data into features that act as inputs for machine learning models, such that good quality features help in improving overall model performance. Features are also very much dependent on the underlying problem. Thus, even though the machine learning task might be the same in different scenarios, like classifying emails into spam and non-spam or classifying handwritten digits, the features extracted in each scenario will be very different.

By now you must be getting a good grasp of the idea and significance of feature engineering. Always remember that for solving any machine learning problem, feature engineering is the key! This is in fact reinforced by Prof. Pedro Domingos of the University of Washington, in his paper titled "A Few Useful Things to Know about Machine Learning" (cacm12.pdf), which tells us the following:

"At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." --Prof. Pedro Domingos
Feature engineering is indeed both an art and a science for transforming data into features that feed into models. Sometimes you need a combination of domain knowledge, experience, intuition, and mathematical transformations to give you the features you need. By solving more problems over time, you will gain the experience to know which features might be best suited to a problem. Hence do not be overwhelmed; practice will make you master feature engineering with time. The following list depicts some examples of engineered features:
- Deriving a person's age from the birth date and the current date
- Getting the average and median view counts of specific songs and music videos
- Extracting word and phrase occurrence counts from text documents
- Extracting pixel information from raw images
- Tabulating occurrences of various grades obtained by students

The final quote to whet your appetite for feature engineering is from renowned Kaggler Xavier Conort. Most of you already know that tough machine learning problems are regularly posted on Kaggle, which is usually open to everyone. Xavier's thoughts on feature engineering are as follows:

"The algorithms we used are very standard for Kagglers. We spent most of our efforts in feature engineering. We were also very careful to discard features likely to expose us to the risk of over-fitting our model." --Xavier Conort

This should give you a good idea of what feature engineering is, the various aspects surrounding it, and a very basic introduction to why we really need it. In the following section, we expand more on why we need feature engineering, and its benefits and advantages.

Why Feature Engineering?

We defined feature engineering in the previous section and also touched upon the basics of its importance. Let's now look at why we need feature engineering and how it can be an advantage when building machine learning models and working with data.
- Better representation of data: Features are basically various representations of the underlying raw data. These representations can be better understood by machine learning algorithms. Besides this, we can also often easily visualize these representations. A simple example would be visualizing the frequent word occurrences of a newspaper article as opposed to being totally perplexed about what to do with the raw text.
- Better performing models: The right features tend to give models that outperform other models, no matter how complex the algorithm is. In general, if you have the right feature set, even a simple model will perform well and give the desired results. In short, better features make better models.
- Essential for model building and evaluation: We have mentioned this numerous times by now: raw data cannot be used to build machine learning models. Get your data, extract features, and start building models! Also, when evaluating model performance and tuning the models, you can reiterate over your feature set to choose the right set of features to get the best model.
- More flexibility on data types: While it is definitely easier to use numeric data types directly with machine learning algorithms with little or no data transformation, the real challenge is to build models on more complex data types like text, images, and even videos. Feature engineering helps us build models on diverse data types by applying the necessary transformations, and enables us to work even with complex unstructured data.
- Emphasis on the business and domain: Data scientists and analysts are usually busy processing and cleaning data and building models as part of their day-to-day tasks. This often creates a gap between the business stakeholders and the technical/analytics team. Feature engineering involves and enables data scientists to take a step back and try to understand the domain and the business better, by taking valuable inputs from the business and subject matter experts. This is necessary to create and select features that might be useful for building the right model to solve the problem. Pure statistical and mathematical knowledge is rarely sufficient to solve a complex real-world problem. Hence feature engineering emphasizes focusing on the business and the domain of the problem when building features.

This list, though not an exhaustive one, gives us pretty good insight into the importance of feature engineering and how it is an essential aspect of building machine learning models. The importance of the problem to be solved and of the domain also matters in feature engineering.

How Do You Engineer Features?

There are no fixed rules for engineering features. It involves using a combination of domain knowledge, business constraints, hand-crafted transformations, and mathematical transformations to transform the raw data into the desired features. Different data types have different techniques for feature extraction; hence in this chapter we focus on feature engineering techniques and strategies for the following major data types:
- Numeric data
- Categorical data
- Text data
- Temporal data
- Image data

Subsequent sections of this chapter focus on dealing with these diverse data types and the specific techniques that can be applied to engineer features. You can use them as a reference and guidebook for engineering features from your own datasets in the future. Another aspect of feature engineering has recently gained prominence: here, you do not use hand-crafted features, but rather make the machine itself detect patterns and extract useful data representations from the raw data, which can then be used as features. This process is also known as auto feature generation. Deep learning has proved to be extremely effective in this area, and neural network architectures like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are extensively used for auto feature engineering and extraction. Let's dive into the world of feature engineering now with some real-world datasets and examples; a tiny warm-up sketch of a hand-crafted derived feature follows.
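As a quick warm-up, the sketch below derives the age feature mentioned earlier from a birth date attribute; the DataFrame and its birth_date column are hypothetical stand-ins for an employee dataset.

    import pandas as pd

    # hypothetical employee data with a birth_date attribute
    emp_df = pd.DataFrame({'name': ['Alice', 'Bob'],
                           'birth_date': pd.to_datetime(['1990-05-15', '1984-11-02'])})

    # derived feature: approximate age in years from birth date and the current date
    today = pd.Timestamp.today()
    emp_df['age'] = ((today - emp_df['birth_date']).dt.days // 365).astype(int)
    print(emp_df)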
Feature Engineering on Numeric Data

Numeric data, fields, variables, or features typically represent data in the form of scalar information that denotes an observation, recording, or measurement. Of course, numeric data can also be represented as a vector of scalars, where each specific entity in the vector is a numeric data point in itself. Integers and floats are the most common and widely used numeric data types. Besides this, numeric data is perhaps the easiest to process and is often used directly by machine learning models. If you remember, we talked about numeric data previously in the "Data Description" section.

Even though numeric data can be directly fed into machine learning models, you still need to engineer features relevant to the scenario, problem, and domain before building a model; hence the need for feature engineering remains. Important aspects of numeric features include feature scale and distribution, and you will observe some of these aspects in the examples in this section. In some scenarios we need to apply specific transformations to change the scale of numeric values, and in other scenarios we need to change the overall distribution of the numeric values, like transforming a skewed distribution to a normal distribution.

The code used for this section is available in the code files for this chapter. You can load feature_engineering_numeric.py directly and start running the examples, or use the Jupyter notebook, Feature Engineering on Numeric Data.ipynb, for a more interactive experience. Before we begin, let's load the following dependencies and configuration settings.

    In [ ]: import pandas as pd
       ...: import matplotlib.pyplot as plt
       ...: import matplotlib as mpl
       ...: import numpy as np
       ...: import scipy.stats as spstats
       ...: %matplotlib inline
       ...: mpl.style.reload_library()
       ...: mpl.style.use('classic')
       ...: mpl.rcParams['figure.facecolor'] = (1, 1, 1, 0)
       ...: mpl.rcParams['figure.figsize'] = [6.0, 4.0]
       ...: mpl.rcParams['figure.dpi'] = 100

Now that we have the initial dependencies loaded, let's look at some ways to engineer features from numeric data in the following sections.

Raw Measures

Just as we mentioned earlier, numeric features can often be fed directly to machine learning models, since they are in a format that can be easily understood, interpreted, and operated on. Raw measures typically indicate using numeric variables directly as features, without any transformation or engineering. Typically these features can indicate values or counts.
values usuallyscalar values in its raw form indicate specific measurementmetricor observation belonging to specific variable or field the semantics of this field is usually obtained from the field name itself or data dictionary if present let' load dataset now about pokemonthis dataset is also available on kaggle if you do not knowpokemon is huge media franchise surrounding fictional characters called pokemon which stands for pocket monsters in shortyou can think of them as fictional animals with superpowersthe following snippet gives us an idea about this dataset in [ ]poke_df pd read_csv('datasets/pokemon csv'encoding='utf- 'poke_df head(figure - raw data from the pokemon dataset if you observe the dataset depicted in figure - there are several attributes there which represent numeric raw values which can be used directly the following snippet depicts some of these features with more emphasis in [ ]poke_df[['hp''attack''defense']head(out[ ]hp attack defense you can directly use these attributes as features that are depicted in the previous dataframe these include each pokemon' hp (hit points)attackand defense stats in factwe can also compute some basic statistical measures on these fields using the following code in [ ]poke_df[['hp''attack''defense']describe(out[ ]hp attack defense count mean std min max
we can see multiple statistical measures like countaveragestandard deviationand quartiles for each of the numeric features in this output try plotting their distributions if possiblec ounts raw numeric measures can also indicate countsfrequencies and occurrences of specific attributes let' look at sample of data from the million-song datasetwhich depicts counts or frequencies of songs that have been heard by various users in [ ]popsong_df pd read_csv('datasets/song_views csv'encoding='utf- 'popsong_df head( figure - song listen counts as numeric feature we can see that the listen_count field in the data depicted in figure - can be directly used as count/frequency based numeric feature binarization often raw numeric frequencies or counts are not necessary in building models especially with regard to methods applied in building recommender engines for example if want to know if person is interested or has listened to particular songi do not need to know the total number of times he/she has listened to the same song am more concerned about the various songs he/she has listened to in this casea binary feature is preferred as opposed to count based feature we can binarize our listen_count field from our earlier dataset in the following way in [ ]watched np array(popsong_df['listen_count']watched[watched > popsong_df['watched'watched you can also use scikit-learn' binarizer class here from its preprocessing module to perform the same task instead of numpy arraysas depicted in the following code
in [ ]from sklearn preprocessing import binarizer bn binarizer(threshold= pd_watched bn transform([popsong_df['listen_count']])[ popsong_df['pd_watched'pd_watched popsong_df head( figure - binarizing song counts you can clearly see from figure - that both the methods have produced the same results depicted in features watched and pd_watched thuswe have the song listen counts as binarized feature indicating if the song was listened to or not by each user rounding often when dealing with numeric attributes like proportions or percentageswe may not need values with high amount of precision hence it makes sense to round off these high precision percentages into numeric integers these integers can then be directly used as raw numeric values or even as categorical (discreteclass basedfeatures let' try applying this concept in dummy dataset depicting store items and their popularity percentages in [ ]items_popularity pd read_csv('datasets/item_popularity csv'encoding='utf- 'rounding off percentages items_popularity['popularity_scale_ 'np array(np round((items_popularity['pop_percent' ))dtype='int'items_popularity['popularity_scale_ 'np array(np round((items_popularity['pop_percent' ))dtype='int'items_popularity out[ ]item_id pop_percent popularity_scale_ popularity_scale_ it_ it_ it_ it_
it_...    it_...    it_...

Thus, after our rounding operations, you can see the new features in the data depicted in the previous dataframe. Basically, we tried two forms of rounding: the features now depict the item popularities on two different scales, as per the two rounding operations applied earlier. You can use these values either as numerical or as categorical features, depending on the scenario and problem.

interactions

A model is usually built in such a way that we try to model the output responses (discrete classes or continuous values) as a function of the input feature variables. For example, a simple linear regression equation can be depicted as y = c1x1 + c2x2 + ... + cnxn, where the input features are depicted by the variables {x1, x2, ..., xn} having weights or coefficients {c1, c2, ..., cn} respectively, and the goal is to predict the response y. In this case, this simple linear model depicts the relationship between the output and the inputs purely based on the individual, separate input features. However, in several real-world datasets and scenarios it often makes sense to also capture the interactions between these feature variables as part of the input feature set. A simple depiction of the extension of the above linear regression formulation with interaction features would be y = c1x1 + c2x2 + ... + cnxn + c11x1^2 + c12x1x2 + ..., where features like {x1x2, x1^2, ...} denote the interaction features. Let's try engineering some interaction features on our Pokemon dataset now.

in [ ]: atk_def = poke_df[['attack', 'defense']]
        atk_def.head()

out[ ]: attack    defense
        ...       ...

We can see in this output the two numeric features depicting Pokemon attack and defense. The following code helps us build interaction features from these two features. We will build features up to the second degree using the PolynomialFeatures class from scikit-learn's API.

in [ ]: from sklearn.preprocessing import PolynomialFeatures
        pf = PolynomialFeatures(degree=2, interaction_only=False, include_bias=False)
        res = pf.fit_transform(atk_def)
        res

out[ ]: array([[ ..., ..., ..., ..., ... ],
               ...,
               [ ..., ..., ..., ..., ... ]])
we can clearly see from this output that we have total of five features including the new interaction features we can see the degree of each feature in the matrixusing the following snippet in [ ]pd dataframe(pf powers_columns=['attack_degree''defense_degree']out[ ]attack_degree defense_degree now that we know what each feature actually represented from the degrees depictedwe can assign name to each feature as follows to get the updated feature set in [ ]intr_features pd dataframe(rescolumns=['attack''defense''attack^ ''attack defense''defense^ ']intr_features head( out[ ]attack defense attack^ attack defense defense^ thus we can see our original and interaction features in figure - the fit_transformapi function from scikit-learn is useful to build feature engineering representation object on the training datawhich can be reused on new data during model predictions by calling on the transformfunction let' take some sample new observations for pokemon attack and defense features and try to transform them using this same mechanism in [ ]new_df pd dataframe([[ ],[ ][ ]]columns=['attack''defense']new_df out[ ]attack defense we can now use the pf object that we created earlier and transform these input features to give us the interaction features as follows in [ ]new_res pf transform(new_dfnew_intr_features pd dataframe(new_rescolumns=['attack''defense''attack^ ''attack defense''defense^ ']new_intr_features out[ ]
attack defense attack^ attack defense defense^ thus you can see that we have successfully obtained the necessary interaction features for the new dataset try building interaction features on three or more features nowbinning often when working with numeric datayou might come across features or attributes which depict raw measures such as values or frequencies in many casesoften the distributions of these attributes are skewed in the sense that some sets of values will occur lot and some will be very rare besides thatthere is also the added problem of varying range of these values suppose we are talking about song or video view counts in some casesthe view counts will be abnormally large and in some cases very small directly using these features in modeling might cause issues metrics like similarity measurescluster distancesregression coefficients and more might get adversely affected if we use raw numeric features having values which range across multiple orders of magnitude there are various ways to engineer features from these raw values so we can these issues these methods include transformationsscaling and binning/quantization in this sectionwe will talk about binning which is also known as quantization the operation of binning is used for transforming continuous numeric values into discrete ones these discrete numbers can be thought of as bins into which the raw values or numbers are binned or grouped into each bin represents specific degree of intensity and has specific range of values which must fall into that bin there are various ways of binning data which include fixed-width and adaptive binning specific techniques can be employed for each binning process we will use dataset extracted from the freecodecamp developer/coder survey which talks about various attributes pertaining to coders and software developers you can check it out yourself at dataset and take peek at some interesting attributes in [ ]fcc_survey_df pd read_csv('datasets/fcc_ _coder_survey_subset csv'encoding='utf- 'fcc_survey_df[['id ''employmentfield''age''income']head(figure - important attributes from the fcc coder survey dataset the dataframe depicted in figure - shows us some interesting attributes of the coder survey datasetsome of which we will be analyzing in this section the id variable is basically unique identifier for each coder/developer who took the survey and the other fields are pretty self-explanatory
fixed-width binning in fixed-width binningas the name indicateswe have specific fixed widths for each of the binswhich are usually pre-defined by the user analyzing the data each bin has pre-fixed range of values which should be assigned to that bin on the basis of some business or custom logicrulesor necessary transformations binning based on rounding is one of the wayswhere you can use the rounding operation that we discussed earlier to bin raw values let' consider the age feature from the coder survey dataset the following code shows the distribution of developer ages who took the survey in [ ]figax plt subplots(fcc_survey_df['age'hist(color='# 'ax set_title('developer age histogram'fontsize= ax set_xlabel('age'fontsize= ax set_ylabel('frequency'fontsize= figure - histogram depicting developer age distribution the histogram in figure - depicts the distribution of developer ageswhich is slightly right skewed as expected let' try to assign these raw age values into specific bins based on the following logic age rangebin
and so on we can easily do this using what we learned in the "roundingsection earlier where we round off these raw age values by taking the floor value after dividing it by the following code depicts the same in [ ]fcc_survey_df['age_bin_round'np array(np floor(np array(fcc_survey_df['age'] )fcc_survey_df[['id ''age''age_bin_round']iloc[ : out[ ]id age age_bin_round aa fdb de fe fa fdd bd bf dc cdf dad aec ffbb fc ec we take specific slice of the dataset (rows - to depict users of varying ages you can see the corresponding bins for each age have been assigned based on rounding but what if we need more flexibilitywhat if want to decide and fix the bin widths myselfbinning based on custom ranges is the answer to the all our questions about fixed-width binningsome of which just mentioned let' define some custom age ranges for binning developer ages using the following scheme age range bin based on this custom binning schemewe will now label the bins for each developer age value with the help of the following code we will store both the bin range as well as the corresponding label in [ ]bin_ranges [ bin_names [ fcc_survey_df['age_bin_custom_range'pd cut(np array(fcc_survey_df['age'])bins=bin_rangesfcc_survey_df['age_bin_custom_label'pd cut(np array(fcc_survey_df['age'])bins=bin_rangeslabels=bin_namesfcc_survey_df[['id ''age''age_bin_round''age_bin_custom_range''age_bin_custom_label']iloc[ :
figure - custom age binning for developer ages we can see from the dataframe output in figure - that the custom bins based on our scheme have been assigned for each developer' age try out some of your own binning schemesa daptive binning so farwe have decided the bin width and ranges in fixed-width binning howeverthis technique can lead to irregular bins that are not uniform based on the number of data points or values which fall in each bin some of the bins might be densely populated and some of them might be sparsely populated or even be emptyadaptive binning is safer and better approach where we use the data distribution itself to decide what should be the appropriate bins quantile based binning is good strategy to use for adaptive binning quantiles are specific values or cut-points which help in partitioning the continuous valued distribution of specific numeric field into discrete contiguous bins or intervals thusq-quantiles help in partitioning numeric attribute into equal partitions popular examples of quantiles include the -quantile known as the median which divides the data distribution into two equal bins -quantiles known as the quartileswhich divide the data into four equal bins and -quantiles also known as the deciles which create equal width bins let' now look at slice of data pertaining to developer income values in our coder survey dataset in [ ]fcc_survey_df[['id ''age''income']iloc[ : out[ ]id age income cdb bec dd eab bbf aa fd ffede dff db bddc dc ed eb ab bb the slice of data depicted by the dataframe shows us the income values for each developer in our dataset let' look at the whole data distribution for this income variable now using the following code in [ ]figax plt subplots(fcc_survey_df['income'hist(bins= color='# 'ax set_title('developer income histogram'fontsize= ax set_xlabel('developer income'fontsize= ax set_ylabel('frequency'fontsize=
figure - histogram depicting developer income distribution we can see from the distribution depicted in figure - that as expected there is right skew with lesser developers earning more money and vice versa let' take -quantile or quartile based adaptive binning scheme the following snippet helps us obtain the income values that fall on the four quartiles in the distribution in [ ]quantile_list [ quantiles fcc_survey_df['income'quantile(quantile_listquantiles out[ ] to visualize the quartiles obtained in this output betterwe can plot them in our data distribution using the following code snippet in [ ]figax plt subplots(fcc_survey_df['income'hist(bins= color='# 'for quantile in quantilesqvl plt axvline(quantilecolor=' 'ax legend([qvl]['quantiles']fontsize= ax set_title('developer income histogram with quantiles'fontsize= ax set_xlabel('developer income'fontsize= ax set_ylabel('frequency'fontsize=
figure - histogram depicting developer income distribution with quartile values the -quantile values for the income attribute are depicted by red vertical lines in figure - let' now use quantile binning to bin each of the developer income values into specific bins using the following code in [ ]quantile_labels [' - '' - '' - '' - 'fcc_survey_df['income_quantile_range'pd qcut(fcc_survey_df['income'] =quantile_listfcc_survey_df['income_quantile_label'pd qcut(fcc_survey_df['income'] =quantile_listlabels=quantile_labelsfcc_survey_df[['id ''age''income''income_quantile_range''income_quantile_label']iloc[ : figure - quantile based bin ranges and labels for developer incomes
the result dataframe depicted in figure - clearly shows the quantile based bin range and corresponding label assigned for each developer income value in the income_quantile_range and income_quantile_labels featuresrespectively statistical transformations let' look at different strategy of feature engineering on numerical data by using statistical or mathematical transformations in this sectionwe will look at the log transform as well as the box-cox transform both of these transform functions belong to the power transform family of functions these functions are typically used to create monotonic data transformationsbut their main significance is that they help in stabilizing varianceadhering closely to the normal distribution and making the data independent of the mean based on its distribution several transformations are also used as part of feature scalingwhich we cover in future section log transform the log transform belongs to the power transform family of functions this function can be defined as logb(xwhich reads as log of to the base is equal to this translates to by xwhich indicates as to what power must the base be raised to in order to get the natural logarithm uses the base where popularly known as euler' number you can also use base used popularly in the decimal system log transforms are useful when applied to skewed distributions as they tend to expand the values which fall in the range of lower magnitudes and tend to compress or reduce the values which fall in the range of higher magnitudes this tends to make the skewed distribution as normal-like as possible let' use log transform on our developer income feature from our coder survey dataset in [ ]fcc_survey_df['income_log'np log(( fcc_survey_df['income'])fcc_survey_df[['id ''age''income''income_log']iloc[ : out[ ]id age income income_log cdb bec dd eab bbf aa fd ffede dff db bddc dc ed eb ab bb the dataframe obtained in this output depicts the log transformed income feature in the income_log field let' now plot the data distribution of this transformed feature using the following code in [ ]income_log_mean np round(np mean(fcc_survey_df['income_log']) figax plt subplots(fcc_survey_df['income_log'hist(bins= color='# 'plt axvline(income_log_meancolor=' 'ax set_title('developer income histogram after log transform'fontsize= ax set_xlabel('developer income (log scale)'fontsize= ax set_ylabel('frequency'fontsize= ax text( '$\mu$='+str(income_log_mean)fontsize=
figure - histogram depicting developer income distribution after log transform

Thus we can clearly see that the original developer income distribution, which was right skewed in figure -, is more Gaussian or normal-like in figure - after applying the log transform.

box-cox transform

Let's now look at the Box-Cox transform, another popular function belonging to the power transform family of functions. This function has a prerequisite that the numeric values to be transformed must be positive (similar to what the log transform expects); in case they are negative, shifting the values using a constant helps. Mathematically, the Box-Cox transform function can be defined as

    y(x, λ) = (x^λ - 1) / λ    when λ ≠ 0
    y(x, λ) = log_e(x)         when λ = 0

such that the resulting transformed output y is a function of the input x and the transformation parameter λ. When λ = 0, the resultant transform is the natural log transform, which we discussed earlier. The optimal value of λ is usually determined using maximum likelihood or log-likelihood estimation. Let's apply the Box-Cox transform on our developer income feature. To do this, first we get the optimal lambda value from the data distribution after removing the null (missing) values, using the following code.
in [ ]get optimal lambda value from non null income values income np array(fcc_survey_df['income']income_clean income[~np isnan(income)lopt_lambda spstats boxcox(income_cleanprint('optimal lambda value:'opt_lambdaoptimal lambda value now that we have obtained the optimal valuelet' use the box-cox transform for two values of such that loptimal and transform the raw numeric values pertaining to developer incomes in [ ]fcc_survey_df['income_boxcox_lambda_ 'spstats boxcox(( +fcc_survey_df['income'])lmbda= fcc_survey_df['income_boxcox_lambda_opt'spstats boxcox(fcc_survey_df['income']lmbda=opt_lambdafcc_survey_df[['id ''age''income''income_log''income_boxcox_lambda_ ''income_boxcox_lambda_opt']iloc[ : figure - dataframe depicting developer income distribution after box-cox transform the dataframe obtained in the output shown in figure - depicts the income feature after applying the box-cox transform for and loptimal in the income_boxcox_lambda_ and income_boxcox_lambda_ opt fields respectively also as expectedthe income_log field has the same values as the box-cox transform with let' now plot the data distribution for the box-cox transformed developer values with optimal lambda see figure - in [ ]income_boxcox_mean np round(np mean(fcc_survey_df['income_boxcox_lambda_opt']) figax plt subplots(fcc_survey_df['income_boxcox_lambda_opt'hist(bins= color='# 'plt axvline(income_boxcox_meancolor=' 'ax set_title('developer income histogram after box-cox transform'fontsize= ax set_xlabel('developer income (box-cox transform)'fontsize= ax set_ylabel('frequency'fontsize= ax text( '$\mu$='+str(income_boxcox_mean)fontsize=
figure - histogram depicting developer income distribution after box-cox transform ( loptimalthe distribution of the transformed numeric values for developer income after the box-cox distribution also look similar to the one we had obtained after the log transform such that it is more normal-like and the extreme right skew that was present in the raw data has been minimized here feature engineering on categorical data so farwe have been working on continuous numeric data and you have also seen various techniques for engineering features from the same we will now look at another structured data typewhich is categorical data any attribute or feature that is categorical in nature represents discrete values that belong to specific finite set of categories or classes category or class labels can be text or numeric in nature usually there are two types of categorical variables--nominal and ordinal nominal categorical features are such that there is no concept of ordering among the valuesi it does not make sense to sort or order them movie or video game genresweather seasonsand country names are some examples of nominal attributes ordinal categorical variables can be ordered and sorted on the basis of their values and hence these values have specific significance such that their order makes sense examples of ordinal attributes include clothing sizeeducation leveland so on in this sectionwe look at various strategies and techniques for transforming and encoding categorical features and attributes the code used for this section is available in the code files for this you can load feature_engineering_categorical py directly and start running the examples or use the jupyter notebookfeature engineering on categorical data ipynbfor more interactive experience before we beginlet' load the following dependencies in [ ]import pandas as pd import numpy as np
once you have these dependencies loadedlet' get started and engineer some features from categorical data transforming nominal features nominal features or attributes are categorical variables that usually have finite set of distinct discrete values often these values are in string or text format and machine learning algorithms cannot understand them directly hence usually you might need to transform these features into more representative numeric format let' look at new dataset pertaining to video game sales this dataset is also available on kaggle (convenience the following code helps us load this dataset and view some of the attributes of our interest in [ ]vg_df pd read_csv('datasets/vgsales csv'encoding='utf- 'vg_df[['name''platform''year''genre''publisher']iloc[ : out[ ]name platform year genre publisher super mario bros nes platform nintendo mario kart wii wii racing nintendo wii sports resort wii sports nintendo pokemon red/pokemon blue gb role-playing nintendo tetris gb puzzle nintendo new super mario bros ds platform nintendo the dataset depicted in this dataframe shows us various attributes pertaining to video games features like platformgenreand publisher are nominal categorical variables let' now try to transform the video game genre feature into numeric representation do note here that this doesn' indicate that the transformed feature will be numeric feature it will still be discrete valued categorical feature with numbers instead of text for each genre the following code depicts the total distinct genre labels for video games in [ ]genres np unique(vg_df['genre']genres out[ ]array(['action''adventure''fighting''misc''platform''puzzle''racing''role-playing''shooter''simulation''sports''strategy']dtype=objectthis output tells us we have distinct video game genres in our dataset let' transform this feature now using mapping scheme in the following code in [ ]from sklearn preprocessing import labelencoder gle labelencoder(genre_labels gle fit_transform(vg_df['genre']genre_mappings {indexlabel for indexlabel in enumerate(gle classes_)genre_mappings out[ ]{ 'action' 'adventure' 'fighting' 'misc' 'platform' 'puzzle' 'racing' 'role-playing' 'shooter' 'simulation' 'sports' 'strategy'
from the outputwe can see that mapping scheme has been generated where each genre value is mapped to number with the help of the labelencoder object gle the transformed labels are stored in the genre_labels value let' write it back to the original dataframe and view the results in [ ]vg_df['genrelabel'genre_labels vg_df[['name''platform''year''genre''genrelabel']iloc[ : out[ ]name platform year genre genrelabel super mario bros nes platform mario kart wii wii racing wii sports resort wii sports pokemon red/pokemon blue gb role-playing tetris gb puzzle new super mario bros ds platform the genrelabel field depicts the mapped numeric labels for each of the genre labels and we can clearly see that this adheres to the mappings that we generated earlier transforming ordinal features ordinal features are similar to nominal features except that order matters and is an inherent property with which we can interpret the values of these features like nominal featureseven ordinal features might be present in text form and you need to map and transform them into their numeric representation let' now load our pokemon dataset that we used earlier and look at the various values of the generation attribute for each pokemon in [ ]poke_df pd read_csv('datasets/pokemon csv'encoding='utf- 'poke_df poke_df sample(random_state= frac= reset_index(drop=truenp unique(poke_df['generation']out[ ]array(['gen ''gen ''gen ''gen ''gen ''gen ']dtype=objectwe resample the dataset in this code just so we can get good slice of data later on that represents all the distinct values which we are looking for from this output we can see that there are total of six generations of pokemon this attribute is definitely ordinal because pokemon belonging to generation were introduced earlier in the video games and the television shows than generation and so on hence they have sense of order among them unfortunatelysince there is specific logic or set of rules involved in case of each ordinal variablethere is no generic module or function to map and transform these features into numeric representations hence we need to hand-craft this using our own logicwhich is depicted in the following code snippet in [ ]gen_ord_map {'gen ' 'gen ' 'gen ' 'gen ' 'gen ' 'gen ' poke_df['generationlabel'poke_df['generation'map(gen_ord_mappoke_df[['name''generation''generationlabel']iloc[ : out[ ]name generation generationlabel octillery gen helioptile gen dialga gen
deoxysdefense forme gen rapidash gen swanna gen thusyou can see that it is really easy to build your own transformation mapping scheme with the help of python dictionaries and use the mapfunction from pandas to transform the ordinal feature encoding categorical features we have mentioned several times in the past that machine learning algorithms usually work well with numerical values you might now be wondering we already transformed and mapped the categorical variables into numeric representations in the previous sections so why would we need more levels of encoding againthe answer to this is pretty simple if we directly fed these transformed numeric representations of categorical features into any algorithmthe model will essentially try to interpret these as raw numeric features and hence the notion of magnitude will be wrongly introduced in the system simple example would be from our previous output dataframea model fit on generationlabel would think that value and so on while order is important in the case of pokemon generations (ordinal variable)there is no notion of magnitude here generation is not larger than generation and generation is not smaller than generation hence models built using these features directly would be sub-optimal and incorrect models there are several schemes and strategies where dummy features are created for each unique value or label out of all the distinct categories in any feature in the subsequent sectionswe will discuss some of these schemes including one hot encodingdummy codingeffect codingand feature hashing schemes one hot encoding scheme considering we have numeric representation of any categorical feature with labelsthe one hot encoding schemeencodes or transforms the feature into binary featureswhich can only contain value of or each observation in the categorical feature is thus converted into vector of size with only one of the values as (indicating it as activelet' take our pokemon dataset and perform some one hot encoding transformations on some of its categorical features in [ ]poke_df[['name''generation''legendary']iloc[ : out[ ]name generation legendary octillery gen false helioptile gen false dialga gen true deoxysdefense forme gen true rapidash gen false swanna gen false considering the dataframe depicted in the outputwe have two categorical featuresgeneration and legendarydepicting the pokemon generations and their legendary status firstwe need to transform these text labels into numeric representations the following code helps us achieve this in [ ]from sklearn preprocessing import onehotencoderlabelencoder transform and map pokemon generations gen_le labelencoder(
gen_labels gen_le fit_transform(poke_df['generation']poke_df['gen_label'gen_labels transform and map pokemon legendary status leg_le labelencoder(leg_labels leg_le fit_transform(poke_df['legendary']poke_df['lgnd_label'leg_labels poke_df_sub poke_df[['name''generation''gen_label''legendary''lgnd_label']poke_df_sub iloc[ : out[ ]name generation gen_label legendary lgnd_label octillery gen false helioptile gen false dialga gen true deoxysdefense forme gen true rapidash gen false swanna gen false the features gen_label and lgnd_label now depict the numeric representations of our categorical features let' now apply the one hot encoding scheme on these features using the following code in [ ]encode generation labels using one-hot encoding scheme gen_ohe onehotencoder(gen_feature_arr gen_ohe fit_transform(poke_df[['gen_label']]toarray(gen_feature_labels list(gen_le classes_gen_features pd dataframe(gen_feature_arrcolumns=gen_feature_labelsencode legendary status labels using one-hot encoding scheme leg_ohe onehotencoder(leg_feature_arr leg_ohe fit_transform(poke_df[['lgnd_label']]toarray(leg_feature_labels ['legendary_'+str(cls_labelfor cls_label in leg_le classes_leg_features pd dataframe(leg_feature_arrcolumns=leg_feature_labelsnowyou should remember that you can always encode both the features together using the fit_ transformfunction by passing it two-dimensional array of the two features but we are depicting this encoding for each feature separatelyto make things easier to understand besides thiswe can also create separate dataframes and label them accordingly let' now concatenate these feature frames and see the final result in [ ]poke_df_ohe pd concat([poke_df_subgen_featuresleg_features]axis= columns sum([['name''generation''gen_label'],gen_feature_labels['legendary''lgnd_label'],leg_feature_labels][]poke_df_ohe[columnsiloc[ :
figure - feature set depicting one hot encoded features for pokemon generation and legendary status from the result feature set depicted in figure - we can clearly see the new one hot encoded features for gen_label and lgnd_label each of these one hot encoded features is binary in nature and if they contain the value it means that feature is active for the corresponding observation for examplerow indicates the pokemon dialga is gen pokemon having gen_label (mapping starts from and the corresponding one hot encoded feature gen has the value and the remaining one hot encoded features are similarlyits legendary status is truecorresponding lgnd_label is and the one hot encoded feature legendary_true is also indicating it is active suppose we used this data in training and building model but now we have some new pokemon data for which we need to engineer the same features before we want to run it by our trained model we can use the transformfunction for our labelencoder and onehotencoder objectswhich we have previously constructed to engineer the features from the training data the following code shows us two dummy data points pertaining to new pokemon in [ ]new_poke_df pd dataframe([['pikazoom''gen 'true]['charmytoast''gen 'false]]columns=['name''generation''legendary']new_poke_df out[ ]name generation legendary pikazoom gen true charmytoast gen false we will follow the same process as before of first converting the text categories into numeric representations using our previously built labelencoder objectsas depicted in the following code in [ ]new_gen_labels gen_le transform(new_poke_df['generation']new_poke_df['gen_label'new_gen_labels new_leg_labels leg_le transform(new_poke_df['legendary']new_poke_df['lgnd_label'new_leg_labels new_poke_df[['name''generation''gen_label''legendary''lgnd_label']out[ ]name generation gen_label legendary lgnd_label pikazoom gen true charmytoast gen false we can now use our previously built labelencoder objects and perform one hot encoding on these new data observations using the following code see figure -
in [ ]new_gen_feature_arr gen_ohe transform(new_poke_df[['gen_label']]toarray(new_gen_features pd dataframe(new_gen_feature_arrcolumns=gen_feature_labelsnew_leg_feature_arr leg_ohe transform(new_poke_df[['lgnd_label']]toarray(new_leg_features pd dataframe(new_leg_feature_arrcolumns=leg_feature_labelsnew_poke_ohe pd concat([new_poke_dfnew_gen_featuresnew_leg_features]axis= columns sum([['name''generation''gen_label']gen_feature_labels['legendary''lgnd_label']leg_feature_labels][]new_poke_ohe[columnsfigure - feature set depicting one hot encoded features for new pokemon data points thusyou can see how we used the fit_transformfunctions to engineer features on our dataset and then we were able to use the encoder objects to engineer features on new data using the transformfunction based on the data what it observed previouslyspecifically the distinct categories and their corresponding labels and one hot encodings you should always follow this workflow in the future for any type of feature engineering when you deal with training and test datasets when you build models pandas also provides wonderful function called to_dummies)which helps us easily perform one hot encoding the following code depicts how to achieve this in [ ]gen_onehot_features pd get_dummies(poke_df['generation']pd concat([poke_df[['name''generation']]gen_onehot_features]axis= iloc[ : out[ ]name generation gen gen gen gen gen gen octillery gen helioptile gen dialga gen deoxysdefense forme gen rapidash gen swanna gen the output depicts the one hot encoding scheme for pokemon generation values similar to what we depicted in our previous analyses dummy coding scheme the dummy coding scheme is similar to the one hot encoding schemeexcept in the case of dummy coding schemewhen applied on categorical feature with distinct labelswe get - binary features thus each value of the categorical variable gets converted into vector of size - the extra feature is completely disregarded and thus if the category values range from { - the th or the - th feature is usually represented by vector of all zeros (
the following code depicts the dummy coding scheme on pokemon generation by dropping the first level binary encoded feature (gen in [ ]gen_dummy_features pd get_dummies(poke_df['generation']drop_first=truepd concat([poke_df[['name''generation']]gen_dummy_features]axis= iloc[ : out[ ]name generation gen gen gen gen gen octillery gen helioptile gen dialga gen deoxysdefense forme gen rapidash gen swanna gen if you wantyou can also choose to drop the last level binary encoded feature (gen by using the following code in [ ]gen_onehot_features pd get_dummies(poke_df['generation']gen_dummy_features gen_onehot_features iloc[:,:- pd concat([poke_df[['name''generation']]gen_dummy_features]axis= iloc[ : out[ ]name generation gen gen gen gen gen octillery gen helioptile gen dialga gen deoxysdefense forme gen rapidash gen swanna gen thus from these outputs you can see that based on the encoded level binary feature which we dropthat particular categorical value is represented by vector/encoded featureswhich all represent for example in the previous result feature setpokemon heloptile belongs to gen and is represented by all in the encoded dummy features effect coding scheme the effect coding scheme is very similar to the dummy coding scheme in most aspects howeverthe encoded features or feature vectorfor the category values that represent all in the dummy coding schemeis replaced by - in the effect coding scheme the following code depicts the effect coding scheme on the pokemon generation feature in [ ]gen_onehot_features pd get_dummies(poke_df['generation']gen_effect_features gen_onehot_features iloc[:,:- gen_effect_features loc[np all(gen_effect_features = axis= )- pd concat([poke_df[['name''generation']]gen_effect_features]axis= iloc[ : out[ ]name generation gen gen gen gen gen octillery gen helioptile gen - - - - - dialga gen
deoxysdefense forme gen rapidash gen swanna gen we can clearly see from the output feature set that all have been replaced by - in case of values which were previously all in the dummy coding scheme bin-counting scheme the encoding schemes discovered so far work quite well on categorical data in generalbut they start causing problems when the number of distinct categories in any feature becomes very large essential for any categorical feature of distinct labelsyou get separate features this can easily increase the size of the feature set causing problems like storage issuesmodel training problems with regard to timespace and memory besides thiswe also have to deal with what is popularly known as the curse of dimensionality where basically with an enormous number of features and not enough representative samplesmodel performance starts getting affected hence we need to look toward other categorical data feature engineering schemes for features having large number of possible categories (like ip addressesthe bin-counting scheme is useful for dealing with categorical variables with many categories in this schemeinstead of using the actual label values for encodingwe use probability based statistical information about the value and the actual target or response value which we aim to predict in our modeling efforts simple example would be based on past historical data for ip addresses and the ones which were used in ddos attackswe can build probability values for ddos attack being caused by any of the ip addresses using this informationwe can encode an input feature which depicts that if the same ip address comes in the futurewhat is the probability value of ddos attack being caused this scheme needs historical data as pre-requisite and is an elaborate one depicting this with complete example is out of scope of this but there are several resources online that you can refer to feature hashing scheme the feature hashing scheme is another useful feature engineering scheme for dealing with large scale categorical features in this schemea hash function is typically used with the number of encoded features pre-set (as vector of pre-defined lengthsuch that the hashed values of the features are used as indices in this pre-defined vector and values are updated accordingly since hash function maps large number of values into small finite set of valuesmultiple different values might create the same hash which is termed as collisions typicallya signed hash function is used so that the sign of the value obtained from the hash is used as the sign of the value which is stored in the final feature vector at the appropriate index this should ensure lesser collisions and lesser accumulation of error due to collisions hashing schemes work on stringsnumbers and other structures like vectors you can think of hashed outputs as finite set of bins such that when hash function is applied on the same valuesthey get assigned to the same bin out of the bins based on the hash value we can assign the value of hwhich becomes the final size of the encoded feature vector for each categorical feature we encode using the feature hashing scheme thus even if we have over distinct categories in feature and we set the output feature set will still have only features as compared to features if we used one hot encoding scheme let' look at the following code snippetwhich shows us the number of distinct genres we have in our video game dataset in [ ]unique_genres np unique(vg_df[['genre']]print("total game 
genres:"len(unique_genres)print(unique_genres
total game genres ['action'adventure'fighting'misc'platform'puzzle'racing'role-playing'shooter'simulation'sports'strategy'we can clearly see from the output that there are distinct genres and if we used one hot encoding scheme on the genre featurewe would end up having binary features insteadwe will now use feature hashing scheme by leveraging scikit-learn' featurehasher classwhich uses signed -bit version of the murmurhash hash function the following code shows us how to use the feature hashing scheme where we will pre-set the feature vector size to be ( features instead of in [ ]from sklearn feature_extraction import featurehasher fh featurehasher(n_features= input_type='string'hashed_features fh fit_transform(vg_df['genre']hashed_features hashed_features toarray(pd concat([vg_df[['name''genre']]pd dataframe(hashed_features)]axis= iloc[ : out[ ]name genre super mario bros platform - mario kart wii racing - - wii sports resort sports - - pokemon red/pokemon blue role-playing - - tetris puzzle - - new super mario bros platform - thus we can clearly see from the result feature set that the genre categorical feature has been encoded using the hashing scheme into features instead of we can also see that rows and denote the same genre of gamesplatform which have been rightly encoded into the same feature vector as expected feature engineering on text data dealing with structured data attributes like numeric or categorical variables are usually not as challenging as unstructured attributes like text and images in case of unstructured data like text documentsthe first challenge is dealing with the unpredictable nature of the syntaxformatand content of the documentswhich make it challenge to extract useful information for building models the second challenge is transforming these textual representations into numeric representations that can be understood by machine learning algorithms there exist various feature engineering techniques employed by data scientists daily to extract numeric feature vectors from unstructured text in this sectionwe discuss several of these techniques before we get startedyou should remember that there are two aspects to execute feature engineering on text data pre-processing and normalizing text feature extraction and engineering without text pre-processing and normalizationthe feature engineering techniques will not work at their core efficiency hence it is of paramount importance to pre-process textual documents you can load feature_engineering_text py directly and start running the examples or use the jupyter notebookfeature engineering on text data ipynbfor more interactive experience let' load the following necessary dependencies before we start
in [ ]import pandas as pd import numpy as np import re import nltk let' now load some sample text documentsdo some basic pre-processingand learn about various feature engineering strategies to deal with text data the following code creates our sample text corpus ( collection of text documents)which we will use in this section in [ ]corpus ['the sky is blue and beautiful ''love this blue and beautiful sky!''the quick brown fox jumps over the lazy dog ''the brown fox is quick and the blue dog is lazy!''the sky is very blue and the sky is very beautiful today''the dog is lazy but the brown fox is quick!labels ['weather''weather''animals''animals''weather''animals'corpus np array(corpuscorpus_df pd dataframe({'document'corpus'category'labels}corpus_df corpus_df[['document''category']corpus_df out[ ]document category the sky is blue and beautiful weather love this blue and beautiful skyweather the quick brown fox jumps over the lazy dog animals the brown fox is quick and the blue dog is lazyanimals the sky is very blue and the sky is very beaut weather the dog is lazy but the brown fox is quickanimals we can see that we have total of six documentswhere three of them are relevant to weather and the other three talk about animals as depicted by the category class label text pre-processing before feature engineeringwe need to pre-processcleanand normalize the text like we mentioned before there are multiple pre-processing techniquessome of which are quite elaborate we will not be going into lot of details in this section but we will be covering lot of them in further detail in future when we work on text classification and sentiment analysis following are some of the popular pre-processing techniques text tokenization and lower casing removing special characters contraction expansion removing stopwords correcting spellings stemming lemmatization
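Two of the steps listed above, stemming and lemmatization, are not applied by the normalization function we build next, so here is a minimal, hedged sketch of what they look like with nltk; it assumes the relevant nltk corpora (such as wordnet) have already been downloaded.

# Minimal sketch of stemming and lemmatization with nltk (not applied by the
# normalizer below). Assumes nltk data such as 'wordnet' has been downloaded
# beforehand, e.g. via nltk.download('wordnet').
import nltk

stemmer = nltk.stem.PorterStemmer()
lemmatizer = nltk.stem.WordNetLemmatizer()

words = ['jumps', 'jumping', 'jumped']
print([stemmer.stem(w) for w in words])                   # ['jump', 'jump', 'jump']
print([lemmatizer.lemmatize(w, pos='v') for w in words])  # ['jump', 'jump', 'jump']

Unlike stemming, lemmatization always returns a valid dictionary word, which is why it is often preferred when readability of the resulting tokens matters.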
for more details on these topicsyou can jump ahead to of this book or refer to the section "text normalization,page of text analytics with python (apressdipanjan sarkar which covers each of these techniques in detail we will be normalizing our text here by lowercasingremoving special characterstokenizingand removing stopwords the following code helps us achieve this in [ ]wpt nltk wordpuncttokenizer(stop_words nltk corpus stopwords words('english'def normalize_document(doc)lower case and remove special characters\whitespaces doc re sub( '[^ -za- - \ ]'''docre idoc doc lower(doc doc strip(tokenize document tokens wpt tokenize(docfilter stopwords out of document filtered_tokens [token for token in tokens if token not in stop_wordsre-create document from filtered tokens doc join(filtered_tokensreturn doc normalize_corpus np vectorize(normalize_documentthe np vectorizefunction helps us run the same function over all elements of numpy array instead of writing loop we will now use this function to pre-process our text corpus in [ ]norm_corpus normalize_corpus(corpusnorm_corpus out[ ]array(['sky blue beautiful''love blue beautiful sky''quick brown fox jumps lazy dog''brown fox quick blue dog lazy''sky blue sky beautiful today''dog lazy brown fox quick']dtype='< 'you can compare each text document with its original form in our initial dataframe you will see that each document is in the lowercasespecial symbols have been removed and stopwords (words which carry little meaning like articlespronounsetc have been removed we can now engineer features from this preprocessed corpus bag of words model this is perhaps one of the simplest yet effective schemes of vectorizing features from unstructured text the core principle of this model is to convert text documents into numeric vectors the dimension or size of each vector is where indicates all possible distinct words across the corpus of documents each document once transformed is numeric vector of size where the values or weights in the vector indicate the frequency of each word in that specific document the following code helps us vectorize the text corpus into numeric feature vectors in [ ]from sklearn feature_extraction text import countvectorizer cv countvectorizer(min_df= max_df=
cv_matrix cv fit_transform(norm_corpuscv_matrix cv_matrix toarray(cv_matrix out[ ]array([[ ][ ][ ][ ][ ][ ]]dtype=int the output represents numeric term frequency based feature vector for each document like we mentioned before to understand it betterwe can represent it using the feature names and view it as dataframe in [ ]vocab cv get_feature_names(pd dataframe(cv_matrixcolumns=vocabout[ ]beautiful blue brown dog fox jumps lazy love quick sky today we can clearly see now that each row of the dataframe depicts the term frequency vector for each text document hence the name bag of words because this model represents unstructured text into bag of words without taking into account word positionssyntaxor semantics bag of -grams model we have used single word terms as features in the above mentioned bag of words model but what if we also wanted to take into account phrases or collection of words which occur in sequencen-grams help us achieve that an -gram is basically collection of word tokens from text document such that these tokens are contiguous and occur in sequence bi-grams indicate -grams of order (two words)tri-grams indicate -grams of order (three words)and so on we can easily extend the bag of words model to use bag of -grams model to give us -gram based feature vectors the following code computes bi-gram based features on our corpus in [ ]bv countvectorizer(ngram_range=( , )bv_matrix bv fit_transform(norm_corpusbv_matrix bv_matrix toarray(vocab bv get_feature_names(pd dataframe(bv_matrixcolumns=vocab
figure - bi-gram feature vectors for our corpus based on the bag of n-grams model

Figure - clearly shows our bi-gram feature vectors, where each feature is a bi-gram of two contiguous words and the values depict the frequency of that bi-gram in each document. You can use the ngram_range parameter to extend the n-gram range to get n-grams of higher orders. Typically, n-grams up to order three are sufficient for most tasks in machine learning and natural language processing.

tf-idf model

There are some potential problems which might arise with the bag of words model when it is used on large corpora. Since the feature vectors are based on absolute term frequencies, there might be some terms which occur frequently across all documents and these will tend to overshadow other terms in the feature set. The TF-IDF model tries to combat this issue by using a scaling or normalizing factor in its computation. TF-IDF stands for term frequency-inverse document frequency, which uses a combination of two metrics in its computation, namely term frequency (tf) and inverse document frequency (idf). This technique was developed for ranking results for queries in search engines and is now an indispensable model in the world of information retrieval and text analytics. Mathematically, we can define tf-idf as tfidf = tf x idf, which can be expanded further to be represented as follows:

    tfidf(w, D) = tf(w, D) x idf(w, D) = tf(w, D) x log(C / df(w))

Here, tfidf(w, D) is the TF-IDF score for word w in document D. The term tf(w, D) represents the term frequency of the word w in document D, which can be obtained from the bag of words model. The term idf(w, D) is the inverse document frequency for the term w, which can be computed as the log transform of the total number of documents in the corpus C divided by the document frequency of the word w, which is basically the frequency of documents in the corpus where the word w occurs. The following code depicts TF-IDF based feature engineering on our corpus.

in [ ]: from sklearn.feature_extraction.text import TfidfVectorizer
        tv = TfidfVectorizer(min_df=..., max_df=..., use_idf=True)
        tv_matrix = tv.fit_transform(norm_corpus)
        tv_matrix = tv_matrix.toarray()
        vocab = tv.get_feature_names()
        pd.DataFrame(np.round(tv_matrix, ...), columns=vocab)
out[ ]beautiful blue brown dog fox jumps lazy love quick sky today thusthe preceding output depicts the tf-idf based feature vectors for each of our text documents notice how this is scaled and normalized version as compared to the raw bag of words model interested readers who might want to dive into further details of how the internals of this model work can refer to page of text analytics with python (apressdipanjan sarkar document similarity you can even build on top of the tf-idf based features we engineered in the previous section and use them to generate new features which can be useful in multiple applications an example of this is computing document similarity this is very useful in domains like search enginesdocument clusteringand information retrieval document similarity is the process of using distance or similarity based metric that can be used to identify how similar text document is with another document based on features extracted from the documents like bag of words or tf-idf pairwise document similarity in corpus involves computing document similarity for each pair of documents in corpus thus if you have documents in corpusyou would end up with matrix such that each row and column represents the similarity score for pair of documentswhich represent the indices at the row and columnrespectively there are several similarity and distance metrics that are used to compute document similarity these include cosine distance/similaritybm distancehellinger-bhattacharya distancejaccard distanceand so on in our analysiswe will be using perhaps the most popular and widely used similarity metriccosine similarity cosine similarity basically gives us metric representing the cosine of the angle between the feature vector representations of two text documents figure - shows some typical feature vector alignments for text documents figure - cosine similarity depictions for text document feature vectors (sourcetext analytics with pythonapress
from figure - we can clearly see that feature vectors having similar orientation will be very close to one another and the angle between them will be closer to deg and thus cosine similarity would be cos deg when cosine similarity is close to cos deg the angle between the documents is closer to deg indicating they are far apart and hence not very similar similarity scores close to - indicate the documents have completely opposite orientation as the angle between them would be closer to deg the following code helps us compute pairwise cosine similarity for all the documents in our sample corpus in [ ]from sklearn metrics pairwise import cosine_similarity similarity_matrix cosine_similarity(tv_matrixsimilarity_df pd dataframe(similarity_matrixsimilarity_df out[ ] from the pairwise similarity matrix obtained in the preceding outputwe can clearly see that documents and have very strong similarity among one another also documents and have strong similarity among themselves this must indicate they all have some similar features this is perfect example of grouping or clustering that can be solved by unsupervised learning let' use -means clustering to try to use the features to see if we can actually cluster or group these documents based on their feature representations in -means clusteringwe have an input parameter kwhich specifies the number of clusters it will output using the document features this clustering method is centroid based clustering methodwhere it tries to cluster these documents into clusters of equal variance it tries to create these clusters by minimizing the within-cluster sum of squares measurealso known as inertia the following snippet builds clustering model using our similarity features to cluster our text documents in [ ]from sklearn cluster import kmeans km kmeans(n_clusters= km fit_transform(similarity_dfcluster_labels km labels_ cluster_labels pd dataframe(cluster_labelscolumns=['clusterlabel']pd concat([corpus_dfcluster_labels]axis= out[ ]document category clusterlabel the sky is blue and beautiful weather love this blue and beautiful skyweather the quick brown fox jumps over the lazy dog animals the brown fox is quick and the blue dog is lazyanimals the sky is very blue and the sky is very beaut weather the dog is lazy but the brown fox is quickanimals the output obtained clearly shows us that our -means clustering model has labeled our documents into two clusters with labels and we can also see that these labels are correct where labels with value indicate documents relevant to weather and labels with value indicate documents relevant to animals thus you can see how useful these features are in document clustering and categorization
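Building on the pairwise similarity matrix computed above, the following minimal sketch shows one more direct use of these similarity features: looking up the most similar document for each document in the corpus. It reuses the similarity_matrix object defined earlier and is an illustrative snippet, not part of the chapter's code files.

# Sketch: use the pairwise cosine similarities computed above to find, for each
# document, its most similar neighbor (ignoring the trivial self-similarity).
sim = similarity_matrix.copy()
np.fill_diagonal(sim, 0.0)
most_similar_idx = sim.argmax(axis=1)

for doc_idx, neighbor_idx in enumerate(most_similar_idx):
    print(doc_idx, '->', neighbor_idx,
          'similarity:', round(float(sim[doc_idx, neighbor_idx]), 3))

On our sample corpus, each weather document should point to another weather document and each animal document to another animal document, which is the same grouping the clustering step above recovers.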
topic models besides document termsphrases and similaritieswe can also use some summarization techniques to extract topic or concept based features from text documents the idea of topic models revolves around the process of extracting key themes or concepts from corpus of documents which are represented as topics each topic can be represented as bag or collection of words/terms from the document corpus togetherthese terms signify specific topictheme or concept and each topic can be easily distinguished from other topics by virtue of the semantic meaning conveyed by these terms these concepts can range from simple facts and statements to opinions and outlook topic models are extremely useful in summarizing large corpus of text documents to extract and depict key concepts they are also useful in extracting features from text data that capture latent patterns in the data there are various techniques for topic modeling and most of them involve some form of matrix decomposition some techniques like latent semantic indexing (lsiuse matrix decomposition operationsmore specifically singular valued decomposition (refer back to important mathematical concepts in )to split term-document matrix (transpose of our tf-idf document-term feature matrixinto three matricesus vt you can use the left singular vectors in matrix and multiply it by the singular vectors to get terms and their weights (signifying importanceper topic you can use scikit-learn or gensim to use lsi based topic modeling another technique is latent dirichlet allocation (lda)which uses generative probabilistic model where each document consists of combination of several topics and each term or word can be assigned to specific topic this is similar to plsi based model (probabilistic lsieach latent topic contains dirichlet prior over them in the case of lda the math behind this is pretty involving and it would not be possible to go into details in the current scope interested readers can refer to page of text analytics with python (apressdipanjan sarkar for further details on lda for the purpose of feature engineeringyou need to remember that when lda is applied on document-term matrix (tf-idf feature matrix)it gets decomposed into two main components document-topic matrixwhich would be the feature matrix we are looking for and topic-term matrixwhich helps us in looking at potential topics in the corpus the following code builds an lda model to extract features and topics from our sample corpus in [ ]from sklearn decomposition import latentdirichletallocation lda latentdirichletallocation(n_topics= max_iter= random_state= dt_matrix lda fit_transform(tv_matrixfeatures pd dataframe(dt_matrixcolumns=[' '' ']features out[ ] thusthe dt_matrix refers to the document-topic matrix giving us two features since we chose number of topics to be you can also use the other matrix obtained from the decompositionthe topic-term matrix to see the topics extracted from our corpus using the lda model using the following code in [ ]tt_matrix lda components_ for topic_weights in tt_matrixtopic [(tokenweightfor tokenweight in zip(vocabtopic_weights)topic sorted(topickey=lambda - [ ]
topic [item for item in topic if item[ print(topicprint([('fox' )('quick' )('dog' )('brown' )('lazy' )('jumps' )('blue' )[('sky' )('beautiful' )('blue' )('love' )('today' )the preceding output represents each of the two topics as collection of terms and their importance is depicted by the corresponding weight it is definitely interesting to see that the two topics are quite distinguishable from each other by looking at the terms the first topic shows terms relevant to animals and the second topic shows terms relevant to weather this is reinforced by applying our unsupervised -means clustering algorithm on our document-topic feature matrix (dt_matrixusing the following code snippet in [ ]km kmeans(n_clusters= km fit_transform(featurescluster_labels km labels_ cluster_labels pd dataframe(cluster_labelscolumns=['clusterlabel']pd concat([corpus_dfcluster_labels]axis= out[ ]document category clusterlabel the sky is blue and beautiful weather love this blue and beautiful skyweather the quick brown fox jumps over the lazy dog animals the brown fox is quick and the blue dog is lazyanimals the sky is very blue and the sky is very beaut weather the dog is lazy but the brown fox is quickanimals this clearly makes sense and we can see that by just using two topic-model based featureswe are still able to cluster our documents efficientlyword embeddings there are several advanced word vectorization models that have recently gained lot of prominence almost all of them deal with the concept of word embeddings basicallyword embeddings can be used for feature extraction and language modeling this representation tries to map each word or phrase into complete numeric vector such that semantically similar words or terms tend to occur closer to each other and these can be quantified using these embeddings the word vec model is perhaps one of the most popular neural network based probabilistic language models and can be used to learn distributed representational vectors for words word embeddings produced by word vec involve taking in corpus of text documentsrepresenting words in large high dimensional vector space such that each word has corresponding vector in that space and similar words (even semanticallyare located close to one anotheranalogous to what we observed in document similarity earlier the word vec model was released by google in and uses neural network based implementation with architectures like continuous bag of words and skip-grams to learn the distributed vector representations of words in corpus we will be using the gensim framework to implement the same model on our corpus to extract features some of the important parameters in the model are explained briefly as follows
sizerepresents the feature vector size for each word in the corpus when transformed windowsets the context window size specifying the length of the window of words to be taken into account as belonging to singlesimilar context when training min_countspecifies the minimum word frequency value needed across the corpus to consider the word as part of the final vocabulary during training the model sampleused to downsample the effects of words which occur very frequently the following snippet builds word vec embedding model on the documents of our sample corpus remember to tokenize each document before passing it to the model in [ ]from gensim models import word vec wpt nltk wordpuncttokenizer(tokenized_corpus [wpt tokenize(documentfor document in norm_corpusset values for various parameters feature_size word vector dimensionality window_context context window size min_word_count minimum word count sample - downsample setting for frequent words v_model word vec word vec(tokenized_corpussize=feature_sizewindow=window_contextmin_count min_word_countsample=sampleusing tensorflow backend each word in the corpus will essentially now be vector itself of size we can verify the same using the following code in [ ] v_model wv['sky'out[ ]array( - - - ]dtype=float question might arise in your mind now that so farwe had feature vectors for each complete documentbut now we have vectors for each word how on earth do we represent entire documents nowwe can do that using various aggregation and combinations simple scheme would be to use an averaged word vector representationwhere we simply sum all the word vectors occurring in document and then divide by the count of word vectors to represent an averaged word vector for the document the following code enables us to do the same in [ ]def average_word_vectors(wordsmodelvocabularynum_features)feature_vector np zeros((num_features,),dtype="float "nwords for word in wordsif word in vocabularynwords nwords
feature_vector np add(feature_vectormodel[word]if nwordsfeature_vector np divide(feature_vectornwordsreturn feature_vector def averaged_word_vectorizer(corpusmodelnum_features)vocabulary set(model wv index wordfeatures [average_word_vectors(tokenized_sentencemodelvocabularynum_featuresfor tokenized_sentence in corpusreturn np array(featuresin [ ] v_feature_array averaged_word_vectorizer(corpus=tokenized_corpusmodel= v_modelnum_features=feature_sizepd dataframe( v_feature_arrayfigure - averaged word vector feature set for our corpus documents thuswe have our averaged word vector based feature set for all our corpus documentsas depicted by the dataframe in figure - let' use different clustering algorithm this time known as affinity propagation to try to cluster our documents based on these new features affinity propagation is based on the concept of message passing and you do not need to specify the number of clusters beforehand like you did in -means clustering in [ ]from sklearn cluster import affinitypropagation ap affinitypropagation(ap fit( v_feature_arraycluster_labels ap labels_ cluster_labels pd dataframe(cluster_labelscolumns=['clusterlabel']pd concat([corpus_dfcluster_labels]axis= out[ ]document category clusterlabel the sky is blue and beautiful weather love this blue and beautiful skyweather the quick brown fox jumps over the lazy dog animals
the brown fox is quick and the blue dog is lazyanimals the sky is very blue and the sky is very beaut weather the dog is lazy but the brown fox is quickanimals the preceding output uses the averaged word vectors based on word embeddings to cluster the documents in our corpus and we can clearly see that it has obtained the right clustersthere are several other schemes of aggregating word vectors like using tf-idf weights along with the word vector representations besides this there have been recent advancements in the field of deep learning where architectures like rnns and lstms are also used for engineering features from text data feature engineering on temporal data temporal data involves datasets that change over period of time and time-based attributes are of paramount importance in these datasets usually temporal attributes include some form of datatimeand timestamp values and often optionally include other metadata like time zonesdaylight savings time informationand so on temporal dataespecially time-series based data is extensively used in multiple domains like stockcommodityand weather forecasting you can load feature_engineering_temporal py directly and start running the examples or use the jupyter notebookfeature engineering on temporal data ipynbfor more interactive experience let' load the following dependencies before we move on to acquiring some temporal data in [ ]import datetime import numpy as np import pandas as pd from dateutil parser import parse import pytz we will now use some sample time-based data as our source of temporal data by loading the following values in dataframe in [ ]time_stamps [ : : + : ' : :: ' : : + : ' : : + : 'df pd dataframe(time_stampscolumns=['time']df out[ ]time : : + : : :: : : + : : : + : of course by defaultthey are stored as strings or text in the dataframe so we can convert time into timestamp objects by using the following code snippet in [ ]ts_objs np array([pd timestamp(itemfor item in np array(df time)]df['ts_obj'ts_objs ts_objs out[ ]array([timestamp( : : + 'tz='utc')
timestamp( : :'tz='pytz fixedoffset(- )')timestamp( : : + 'tz='pytz fixedoffset( )')timestamp( : : + 'tz='pytz fixedoffset( )')]dtype=objectyou can clearly see from the temporal values that we have multiple components for each timestamp object which include datetimeand even time based offsetwhich can be used to identify the time zone also of course there is no way we can directly ingest or use these features in any machine learning model hence we need specific strategies to extract meaningful features from this data in the following sectionswe cover some of these strategies that you can start using on your own temporal data in the future date-based features each temporal value has date component that can be used to extract useful information and features pertaining to the date these include features and components like yearmonthdayquarterday of the weekday nameday and week of the yearand many more the following code depicts how we can obtain some of these features from our temporal data in [ ]df['year'df['ts_obj'apply(lambda dd yeardf['month'df['ts_obj'apply(lambda dd monthdf['day'df['ts_obj'apply(lambda dd daydf['dayofweek'df['ts_obj'apply(lambda dd dayofweekdf['dayname'df['ts_obj'apply(lambda dd weekday_namedf['dayofyear'df['ts_obj'apply(lambda dd dayofyeardf['weekofyear'df['ts_obj'apply(lambda dd weekofyeardf['quarter'df['ts_obj'apply(lambda dd quarterdf[['time''year''month''day''quarter''dayofweek''dayname''dayofyear''weekofyear']figure - date based features in temporal data the features depicted in figure - show some of the attributes we talked about earlier and have been derived purely from the date segment of each temporal value each of these features can be used as categorical features and further feature engineering can be done like one hot encodingaggregationsbinningand more
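As a small illustration of the further feature engineering mentioned above, the following hedged sketch one hot encodes the derived day name and adds a simple weekend indicator. It assumes the df built in the preceding snippets; the is_weekend column name is just an illustrative choice.

in [ ]:
# one hot encode the derived day name feature
dayname_dummies = pd.get_dummies(df['dayname'], prefix='day')

# binary flag for weekend values (pandas dayofweek: Monday=0 ... Sunday=6)
df['is_weekend'] = df['dayofweek'].isin([5, 6]).astype(int)

pd.concat([df[['time', 'dayname', 'is_weekend']], dayname_dummies], axis=1)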
time-based features each temporal value also has time component that can be used to extract useful information and features pertaining to the time these include attributes like hourminutesecondmicrosecondutc offsetand more the following code snippet extracts some of the previously mentioned time-based features from our temporal data in [ ]df['hour'df['ts_obj'apply(lambda dd hourdf['minute'df['ts_obj'apply(lambda dd minutedf['second'df['ts_obj'apply(lambda dd seconddf['musecond'df['ts_obj'apply(lambda dd microseconddf['utc_offset'df['ts_obj'apply(lambda dd utcoffset()df[['time''hour''minute''second''musecond''utc_offset']figure - time based features in temporal data the features depicted in figure - show some of the attributes we talked about earlier which have been derived purely from the time segment of each temporal value we can further engineer these features based on categorical feature engineering techniques and even derive other features like extracting time zones let' try to use binning to bin each temporal value into specific time of the day by leveraging the hour feature we just obtained in [ ]hour_bins [- bin_names ['late night''morning''afternoon''evening''night'df['timeofdaybin'pd cut(df['hour']bins=hour_binslabels=bin_namesdf[['time''hour''timeofdaybin']out[ ]time hour timeofdaybin : : + : morning : :: afternoon : : + : night : : + : late night thus you can see from the preceding output that based on hour ranges ( - - - - - we have assigned specific time of the day bin for each temporal value the utc offset component of the temporal data is very useful in knowing how far ahead or behind is that time value from the utc (coordinated universal time)which is the primary time standard that clocks and time are regulated from this information can also be used to engineer new features like potential time zones from which each temporal value might have been obtained the following code helps us achieve the same
in [ ]df['tz_info'df['ts_obj'apply(lambda dd tzinfodf['timezones'df['ts_obj'apply(lambda dlist({ astimezone(tztzname(for tz in map(pytz timezonepytz all_timezones_setif astimezone(tzutcoffset(= utcoffset()})df[['time''utc_offset''tz_info''timezones']figure - time zone relevant features in temporal data thus as we mentioned earlierthe features depicted in figure - show some of the attributes pertaining to time zone relevant information for each temporal value we can also get time components in other formatslike the epochwhich is basically the number of seconds that have elapsed since january (midnight utcand the gregorian ordinalwhere january st of year is represented as and so on the following code helps us extract these representations see figure - in [ ]df['timeutc'df['ts_obj'apply(lambda dd tz_convert(pytz utc)df['epoch'df['timeutc'apply(lambda dd timestamp()df['gregordinal'df['timeutc'apply(lambda dd toordinal()df[['time''timeutc''epoch''gregordinal']figure - time components depicted in various representations do note we converted each temporal value to utc before deriving the other features these alternate representations of time can be further used for easy date arithmetic the epoch gives us time elapsed in seconds and the gregorian ordinal gives us time elapsed in days we can use this to derive further features like time elapsed from the current time or time elapsed from major events of importance based on the problem we are trying to solve let' compute the time elapsed for each temporal value since the current time see figure - in [ ]curr_ts datetime datetime now(pytz utccompute days elapsed since today df['dayselapsedepoch'(curr_ts timestamp(df['epoch']( * df['dayselapsedordinal'(curr_ts toordinal(df['gregordinal']df[['time''timeutc''dayselapsedepoch''dayselapsedordinal']
figure - deriving elapsed time difference from current time based on our computationseach new derived feature should give us the elapsed time difference between the current time and the time value in the time column (actually timeutc since conversion to utc is necessaryboth the values are almost equal to one anotherwhich is expected thus you can use time and date arithmetic to extract and engineer more features which can help build better models alternate time representations enable you to do date time arithmetic directly instead of dealing with specific api methods of timestamp and datetime objects from python however you can use any method to get to the results you want it' all about ease of use and efficiencyfeature engineering on image data another very popular format of unstructured data is images sound and visual data in the form of imagesvideoand audio are very popular sources of data which pose lot of challenge to data scientists in terms of processingstoragefeature extraction and modeling however their benefits as sources of data are quite rewarding especially in the field of artificial intelligence and computer vision due to the unstructured nature of datait is not possible to directly use images for training models if you are given raw imageyou might have hard time trying to think of ways to represent it so that any machine learning algorithm can utilize it for model training there are various strategies and techniques that can be used in this case to engineer the right features from images one of the core principles to remember when dealing with images is that any image can be represented as matrix of numeric pixel values with that thought in mindlet' get startedyou can load feature_engineering_image py directly and start running the examples or use the jupyter notebookfeature engineering on image data ipynbfor more interactive experience let' start by loading the necessary dependencies and configuration settings in [ ]import skimage import numpy as np import pandas as pd import matplotlib pyplot as plt from skimage import io %matplotlib inline the scikit-image (skimagelibrary is an excellent framework consisting of several useful interfaces and algorithms for image processing and feature extraction besides thiswe will also leverage the mahotas frameworkwhich is useful in computer vision and image processing open cv is another useful framework that you can check out if interested in aspects pertaining to computer vision let' now look at ways to represent images as useful feature vector representations
image metadata features there are tons of useful features obtainable from the image metadata itself without even processing the image most of this information can be found from the exif datawhich is usually recorded for each image by the device when the picture is being taken following are some of the popular features that are obtainable from the image exif data image create date and time image dimensions image compression format device make and model image resolution and aspect ratio image artist flashaperturefocal lengthand exposure for more details on what other data points can be used as features from image exif metadatayou can refer to possible exif tags raw image and channel pixels an image can be represented by the value of each of its pixels as two dimensional array we can leverage numpy arrays for this howevercolor images usually have three components also known as channels the rgand channels stand for the redgreenand blue channelsrespectively this can be represented as three dimensional array (mncwhere indicates the number of rows in the imagen indicates the number of columns these are determined by the image dimensions the indicates which channel it represents (rg or blet' load some sample color images now and try to understand their representation in [ ]cat io imread('datasets/cat png'dog io imread('datasets/dog png'df pd dataframe(['cat''dog']columns=['image']print(cat shapedog shape( ( in [ ]fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(catax fig add_subplot( , ax imshow(dog
figure - our two sample color images we can clearly see from figure - that we have two images of cat and dog having dimensions pixels where each row and column denotes specific pixel of the image the third dimension indicates these are color images having three color channels let' now try to use numpy indexing to slice out and extract the three color channels separately for the dog image in [ ]dog_r dog copy(red channel dog_r[:,:, dog_r[:,:, set , pixels dog_g dog copy(green channel dog_g[:,:, dog_r[:,:, set , pixels dog_b dog copy(blue channel dog_b[:,:, dog_b[:,:, set , pixels plot_image np concatenate((dog_rdog_gdog_b)axis= plt figure(figsize ( , )plt imshow(plot_imagefigure - extracting redgreenand blue channels from our color rgb image we can clearly see from figure - how we can easily use numpy indexing and extract out the three color channels from the sample image you can now refer to any of these channel' raw image pixel matrix and even flatten it if needed to form feature vector in [ ]dog_r[:,:, out[ ]array([[ ][ ]
[ ][ ][ ]]dtype=uint this image pixel matrix is two-dimensional matrix so you can extract features from this further or even flatten it to one-dimensional vector to use as inputs for any machine learning algorithm grayscale image pixels if you are dealing with color imagesit might get difficult working with multiple channels and three-dimensional arrays hence converting images to grayscale is nice way of keeping the necessary pixel intensity values but getting an easy to process two-dimensional image grayscale images usually capture the luminance or intensity of each pixel such that each pixel value can be computed using the equation Y = 0.2125R + 0.7154G + 0.0721B, where R, G, and B are the pixel values of the three channels and Y captures the final pixel intensity information, which usually ranges from 0 (complete intensity absence, black) to 1 (complete intensity presence, white) the following snippet shows us how to convert rgb color images to grayscale and extract the raw pixel valueswhich can be used as features in [ ]from skimage color import rgb gray cgs rgb gray(catdgs rgb gray(dogprint('image shape:'cgs shape'\ ' pixel map print(' image pixel map'print(np round(cgs )'\ 'flattened pixel feature vector print('flattened pixel map:'(np round(cgs flatten() ))image shape( image pixel map [ ]flattened pixel map binning image intensity distribution we already obtained the raw image intensity values for the grayscale images in the previous section one approach would be to use these raw pixel values themselves as features another approach would be to bin the image intensity distribution based on intensity values using histogram and using the bins as features the following code snippet shows us how the image intensity distribution looks for the two sample images
in [ ]fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(cgscmap="gray"ax fig add_subplot( , ax imshow(dgscmap='gray'ax fig add_subplot( , c_freqc_binsc_patches ax hist(cgs flatten()bins= ax fig add_subplot( , d_freqd_binsd_patches ax hist(dgs flatten()bins= figure - binning image intensity distributions with histograms as we mentionedimage intensity ranges from to and is evident by the -axes depicted in figure - the -axes depict the frequency of the respective bins we can clearly see that the dog image has more concentration of the bin frequencies around indicating higher intensity and the reason for that being that the labrador dog is white in color and white has high intensity value like we mentioned in the previous section the variables c_freqc_binsand d_freqd_bins can be used to get the numeric values pertaining to the bins and used as features image aggregation statistics we already obtained the raw image intensity values for the grayscale images in the previous section one approach would be to use them as features directly or use some level of aggregations and statistical measures which can be obtained from the pixels and intensity we already saw an approach of binning intensity values using histograms in this sectionwe use descriptive statistical measures and aggregations to compute specific features from the image pixel values we can compute rgb ranges for each image by basically subtracting the maximum from the minimum value for pixel values in each channel the following code helps us achieve this in [ ]from scipy stats import describe cat_rgb cat reshape(( * ) dog_rgb dog reshape(( * )
cs describe(cat_rgbaxis= ds describe(dog_rgbaxis= cat_rgb_range cs minmax[ cs minmax[ dog_rgb_range ds minmax[ ds minmax[ rgb_range_df pd dataframe([cat_rgb_rangedog_rgb_range]columns=['r_range''g_range''b_range']pd concat([dfrgb_range_df]axis= out[ ]image r_range g_range b_range cat dog we can then use these range features as specific characteristic attributes of each image besides thiswe can also compute other metrics like meanmedianvarianceskewnessand kurtosis for each image channel as follows in [ ]cat_statsnp array([np round(cs mean ),np round(cs variance )np round(cs kurtosis ),np round(cs skewness )np round(np median(cat_rgbaxis= ) )]flatten(dog_statsnp array([np round(ds mean ),np round(ds variance )np round(ds kurtosis ),np round(ds skewness )np round(np median(dog_rgbaxis= ) )]flatten(stats_df pd dataframe([cat_statsdog_stats]columns=['r_mean''g_mean''b_mean''r_var''g_var''b_var''r_kurt''g_kurt''b_kurt''r_skew''g_skew''b_skew''r_med''g_med''b_med']pd concat([dfstats_df]axis= figure - image channel aggregation statistical features we can observe from the features obtained in figure - that the meanmedianand kurtosis values for the various channels for the dog image are mostly greater than corresponding ones in the cat image variance and skewness are however more for the cat image edge detection one of the more interesting and sophisticated techniques involve detecting edges in an image edge detection algorithms can be used to detect sharp intensity and brightness changes in an image and find areas of interest the canny edge detector algorithm developed by john canny is one of the most widely used edge detector algorithms today this algorithm typically involves using gaussian distribution with specific standard deviation (sigmato smoothen and denoise the image then we apply sobel filter to extract image intensity gradients norm value of this gradient is used to determine the edge strength
potential edges are thinned down to curves with width of pixel and hysteresis based thresholding is used to label all points above specific high threshold as edges and then recursively use the low threshold value to label points above the low threshold as edges connected to any of the previously labeled points the following code applied the canny edge detector to our sample images in [ ]from skimage feature import canny cat_edges canny(cgssigma= dog_edges canny(dgssigma= fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(cat_edgescmap='binary'ax fig add_subplot( , ax imshow(dog_edgescmap='binary'figure - canny edge detection to extract edge based features the image plots based on the edge feature arrays depicted in figure - clearly show the prominent edges of our cat and dog you can use these edge feature arrays (cat_edges and dog_edgesby flattening themextracting pixel values and positions pertaining to the edges (non-zero values)or even by aggregating them like finding out the total number of pixels making edgesmean valueand so on object detection another interesting technique in the world of computer vision is object detection where features useful in highlighting specific objects in the image are detected and extracted the histogram of oriented gradientsalso known as hogis one of the techniques that' extensively used in object detection going into the details of this technique would not be possible in the current scope but for the process of feature engineeringyou need to remember that the hog algorithm works by following sequence of steps similar to edge detection the image is normalized and denoised to remove excess illumination effects first order image gradients are computed to capture image attributes like contourtextureand so on gradient histograms are built on top of these gradients based on specific windows called cells finally these cells are normalized and flattened feature descriptor is obtainedwhich can be used as feature vector for our models the following code shows the hog object detection technique on our sample images in [ ]from skimage feature import hog from skimage import exposure
fd_catcat_hog hog(cgsorientations= pixels_per_cell=( )cells_per_block=( )visualise=truefd_dogdog_hog hog(dgsorientations= pixels_per_cell=( )cells_per_block=( )visualise=truerescaling intensity to get better plots cat_hogs exposure rescale_intensity(cat_hogin_range=( )dog_hogs exposure rescale_intensity(dog_hogin_range=( )fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(cat_hogscmap='binary'ax fig add_subplot( , ax imshow(dog_hogscmap='binary'figure - hog object detector to extract features based on object detection the image plots in figure - show us how the hog detector has identified the objects in our sample images you can also get the flattened feature descriptors as follows in [ ]print(fd_catfd_cat shape ( ,localized feature extraction we have talked about aggregating pixel values from two-dimensional image or feature matrices and also flattening them into feature vectors localized feature extraction based techniques are slightly better methods which try to detect and extract localized feature descriptors on various small localized regions of our input images this is hence rightly named localized feature extraction we will be using the popular and patented surf algorithm invented by herbert bayet al surf stands for speeded up robust features the main idea is to get scale invariant local feature descriptors from images which can be used later as image features this algorithm is similar to the popular sift algorithm there are mainly two major phases in this algorithm the first phase is to detect points of interest using square shaped filters and hessian matrices the second phase is to build feature descriptors by extracting localized features around these points of interest there are usually computed by taking localized square image region around point of interest and then aggregating haar wavelet responses at specific interval based sample points we use the mahotas python framework for extracting surf feature descriptors from our sample images
in [ ]from mahotas features import surf import mahotas as mh cat_mh mh colors rgb gray(catdog_mh mh colors rgb gray(dogcat_surf surf surf(cat_mhnr_octaves= nr_scales= initial_step_size= threshold= max_points= dog_surf surf surf(dog_mhnr_octaves= nr_scales= initial_step_size= threshold= max_points= fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(surf show_surf(cat_mhcat_surf)ax fig add_subplot( , ax imshow(surf show_surf(dog_mhdog_surf)figure - localized feature extraction with surf the square boxes in the image plots in figure - depict the square image regions around the points of interest which were used for localized feature extraction you can also use the surf densefunction to extract uniform dimensional feature descriptors at dense points with regular interval spacing in pixels the following code depicts how to achieve this in [ ]cat_surf_fds surf dense(cat_mhspacing= dog_surf_fds surf dense(dog_mhspacing= cat_surf_fds shape out[ ]( we see from the preceding output that we have obtained feature descriptors of size (elementseach you can further apply other schemes on this like aggregationflatteningand so on to derive further features another sophisticated technique that you can use to extract features on these surf feature descriptors is to use the visual bag of words modelwhich we discuss in the next section
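As one possible aggregation scheme over these dense SURF descriptors, the following minimal sketch mean- and std-pools each descriptor dimension to obtain a single fixed-length feature vector per image. It assumes cat_surf_fds and dog_surf_fds from the previous snippet (one row per sampled point); aggregate_surf_descriptors is just an illustrative helper name.

in [ ]:
# pool each descriptor dimension across all sampled points so that every
# image gets one fixed-length vector regardless of how many points were sampled
def aggregate_surf_descriptors(fds):
    return np.concatenate([fds.mean(axis=0), fds.std(axis=0)])

surf_agg_features = np.array([aggregate_surf_descriptors(cat_surf_fds),
                              aggregate_surf_descriptors(dog_surf_fds)])
surf_agg_features.shape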
visual bag of words model we have seen the effectiveness of the popular bag of words model in extracting meaningful features from unstructured text documents bag of words refers to the document being broken down into its constituentswords and computing frequency of occurrences or other measures like tf-idf similarlyin case of image raw pixel matrices or derived feature descriptors from other algorithmswe can apply bag of words principle however the constituents will not be words in this case but they will be subset of features/pixels extracted from images which are similar to each other imagine you have multiple pictures of octopuses and you were able to extract the dense surf features each having values in each feature vector you can now use an unsupervised learning algorithm like clustering to extract clusters of similar feature descriptors each cluster can be labeled as visual word or visual feature subsequentlyeach feature descriptor can be binned into one of these clusters or visual words thusyou end up getting one-dimensional visual bag of words vector with counts of number of feature descriptors assigned to each of the visual words for the feature descriptor matrix each feature or visual word tends to capture some portion of the images that are similar to each other like octopus eyestentaclessuckersand so onas depicted in figure - figure - visual bag of words (courtesy of ian londonimage classification in python with visual bag of wordsthe basic idea is hence to get feature descriptor matrix from using any algorithm like surfapply an unsupervised algorithm like -means clusteringand extract out bins or visual features/words and their counts (based on number of feature descriptors assigned to each binthen for each subsequent imageonce you extract the feature descriptorsyou can use the -means model to assign each feature descriptor to one of the visual feature clusters and get one-dimensional vector of counts this is depicted in figure - for sample octopus imageassuming our vbow (visual bag of wordsmodel has three bins of eyestentaclesand suckers
figure - transforming an image into vbow vector (courtesy of ian londonimage classification in pythonwith visual bag of wordsthus you can see from figure - how two-dimensional image and its corresponding feature descriptors can be easily transformed into one-dimensional vbow vector [ going into extensive details of the vbow model would not be possible in the current scopebut would like to thank my friend and fellow data scientistian londonfor helping me out with providing the two figures on vbow models would also recommend you to check out his wonderful blog article io/blog/visual-bag-of-words/which talks about using the vbow model for image classification we will now use our surf feature descriptors for our two sample images and use -means clustering on them and compute vbow vectors for each image by assigning each feature descriptor to one of the bins we will take = in this case see figure - in [ ]from sklearn cluster import kmeans km kmeans(kn_init= max_iter= surf_fd_features np array([cat_surf_fdsdog_surf_fds]km fit(np concatenate(surf_fd_features)vbow_features [for feature_desc in surf_fd_featureslabels km predict(feature_descvbow np bincount(labelsminlength=kvbow_features append(vbowvbow_df pd dataframe(vbow_featurespd concat([dfvbow_df]axis= figure - transforming surf descriptors into vbow vectors for sample images
you can see how easy it is to transform complex two-dimensional surf feature descriptor matrices into easy to interpret vbow vectors let' now take new image and think about how we could apply the vbow pipeline first we would need to extract the surf feature descriptors from the image using the following snippet (this is only to depict the localized image subsets used in surf we will actually use the dense features as before see figure - in [ ]new_cat io imread('datasets/new_cat png'newcat_mh mh colors rgb gray(new_catnewcat_surf surf surf(newcat_mhnr_octaves= nr_scales= initial_step_size= threshold= max_points= fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(surf show_surf(newcat_mhnewcat_surf)figure - localized feature extraction with surf for new image let' now extract the dense surf features and transform them into vbow vector using our previously trained vbow model the following code helps us achieve this see figure - in [ ]new_surf_fds surf dense(newcat_mhspacing= labels km predict(new_surf_fdsnew_vbow np bincount(labelsminlength=kpd dataframe([new_vbow]figure - transforming new image surf descriptors into vbow vector thus you can see the final vbow feature vector for the new image based on surf feature descriptors this is also an example of using an unsupervised machine learning model for feature engineering you can now compare the similarity of this new image with the other two sample images using some similarity metrics
in [ ]from sklearn metrics pairwise import euclidean_distancescosine_similarity eucdis euclidean_distances(new_vbow reshape( ,- vbow_featurescossim cosine_similarity(new_vbow reshape( ,- vbow_featuresresult_df pd dataframe({'euclideandistance'eucdis[ ]'cosinesimilarity'cossim[ ]}pd concat([dfresult_df]axis= out[ ]image cosinesimilarity euclideandistance cat dog based on the distance and similarity metricswe can see that our new image (of catis definitely closer to the cat image than the dog image try this out with bigger dataset to get better resultsautomated feature engineering with deep learning we have used lot of simple and sophisticated feature engineering techniques so far in this section building complex feature engineering systems and pipelines is time consuming and building algorithms for the same is even more tasking deep learning is novel and new approach toward automating this complex task of feature engineering by making the machine extract features automatically by learning multiple layered and complex representations of the underlying raw data convolutional neural networks or cnns are extensively used for automated feature extraction in images we have already covered the basic principles of cnns in go ahead and refresh your memory you heading to the "important conceptssub-section under the "deep learningsection in just like we mentioned beforethe idea of cnns operate on the principles of convolution and pooling besides your regular activation function layers convolutional layers typically slides or convolves learnable filters (also known as kernels or convolution matrixacross the entire width and height of the input image pixels dot products between the input pixels and the filter are computed at each position on sliding the filter two-dimensional activation maps for the filter get created and consequently the network is able to learn these filters when it activates on detecting specific features like edgescorners and so on if we take filterswe will get separate two-dimensional activation mapswhich can then be stacked along the depth dimension to get the output volume pooling is kind of aggregation or downsampling layer where typically non-linear downsampling operation is inserted between convolutional layers filters are applied here too they are slided along the convolution output matrix andfor each sliding operationalso known as strideelements in the slice of matrix covered by the pooling filter are either summed (sum poolingor averaged (mean poolingor the maximum value is selected (max poolingmore than often max pooling works really well in several realworld scenarios pooling helps in reducing feature dimensionality and control model overfitting let' now try to use deep learning for automated feature extraction on our sample images using cnns load the following dependencies necessary for building deep networks in [ ]from keras models import sequential from keras layers convolutional import conv from keras layers convolutional import maxpooling from keras import backend as using tensorflow backend
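Before wiring up the network, the following tiny numpy sketch illustrates the max pooling operation described earlier on a hypothetical 4x4 activation map with a 2x2 filter and a stride of 2. It is purely illustrative and not part of the CNN we build next.

in [ ]:
import numpy as np

# hypothetical 4x4 activation map
act_map = np.array([[1, 3, 2, 0],
                    [4, 6, 1, 2],
                    [0, 2, 5, 7],
                    [1, 2, 3, 1]])

# 2x2 max pooling with stride 2: each output element is the maximum
# of a non-overlapping 2x2 slice of the input
pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        pooled[i, j] = act_map[2*i:2*i+2, 2*j:2*j+2].max()
print(pooled)

The 4x4 map gets downsampled to a 2x2 map ([[6, 2], [2, 7]] for these toy values), which is exactly the dimensionality reduction effect of pooling described above.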
you can use theano or tensorflow as your backend deep learning framework for keras to work on am using tensorflow in this scenario let' build basic two-layer cnn now with max pooling layer between them in [ ]model sequential(model add(conv ( ( )input_shape=( )activation='relu'kernel_initializer='glorot_uniform')model add(maxpooling (pool_size=( ))model add(conv ( ( )activation='relu'kernel_initializer='glorot_uniform')we can actually visualize this network architecture using the following code snippet to understand the layers that have been used in this networkin better way in [ ]from ipython display import svg from keras utils vis_utils import model_to_dot svg(model_to_dot(modelshow_shapes=trueshow_layer_names=truerankdir='tb'create(prog='dot'format='svg')figure - visualizing our two-layer convolutional neural network architecture you can now understand from the depiction in figure - that we are using two two-dimensional convolutional layers containing four ( filters we also have max pool layer between them of size ( for some downsampling let' now build some functions to extract features from these intermediate network layers
in [ ]first_conv_layer function([model layers[ inputk learning_phase()][model layers[ output]second_conv_layer function([model layers[ inputk learning_phase()][model layers[ output]let' now use these functions to extract the feature representations learned in the convolutional layers and visualize these features to see what the network is trying to learn from the images in [ ]catr cat reshape( , extract features first_conv_features first_conv_layer([catr])[ ][ second_conv_features second_conv_layer([catr])[ ][ view feature representations fig plt figure(figsize ( , )ax fig add_subplot( , ax imshow(first_conv_features[:,:, ]ax fig add_subplot( , ax imshow(first_conv_features[:,:, ]ax fig add_subplot( , ax imshow(first_conv_features[:,:, ]ax fig add_subplot( , ax imshow(first_conv_features[:,:, ]ax fig add_subplot( , ax imshow(second_conv_features[:,:, ]ax fig add_subplot( , ax imshow(second_conv_features[:,:, ]ax fig add_subplot( , ax imshow(second_conv_features[:,:, ]ax fig add_subplot( , ax imshow(second_conv_features[:,:, ]figure - intermediate feature maps obtained after passing though convolutional layers
the feature map visualizations depicted in figure - are definitely interesting you can clearly see that each feature matrix produced by the convolutional neural network is trying to learn something about the image like its texturecornersedgesilluminationhuebrightnessand so on this should give you an idea of how these activation feature maps can then be used as features for images in fact you can stack the output of cnnflatten it if neededand pass it as an input layer to multi-layer fully connected perceptron neural network and use it to solve the problem of image classification this should give you head start on automated feature extraction with the power of deep learningdon' worry if you did not understand some of the terms mentioned in this sectionwe will cover deep learning and cnns in more depth in subsequent if can' wait to get started with deep learningyou can fire up the bonus notebook provided with this called bonus classifying handwritten digits using deep cnns ipynbfor complete real-world example of applying cnns and deep learning to classify hand-written digitsfeature scaling when dealing with numeric featureswe have specific attributes which may be completely unbounded in naturelike view counts of video or web page hits using the raw values as input features might make models biased toward features having really high magnitude values these models are typically sensitive to the magnitude or scale of features like linear or logistic regression other models like tree based methods can still work without feature scaling however it is still recommended to normalize and scale down the features with feature scalingespecially if you want to try out multiple machine learning algorithms on input features we have already seen some examples of scaling and transforming features using log and box-cox transforms earlier in this in this sectionwe look at some popular feature scaling techniques you can load feature_scaling py directly and start running the examples or use the jupyter notebookfeature scaling ipynb for more interactive experience let' start by loading the following dependencies and configurations in [ ]from sklearn preprocessing import standardscalerminmaxscalerrobustscaler import numpy as np import pandas as pd np set_printoptions(suppress=truelet' now load some sample data of user views pertaining to online videos the following snippet creates this sample dataset in [ ]views pd dataframe([ ]columns=['views']views out[ ]views from the preceding dataframe we can see that we have five videos that have been viewed by users and the total view count for each video is depicted by the feature views it is quite evident that some videos have been viewed lot more than the othersgiving rise to values of high scale and magnitude let' look at how we can scale this feature using several handy techniques
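As a quick aside before the individual scalers, the log transform mentioned earlier in the chapter is often a simple first option for compressing heavily skewed counts like these. The following minimal sketch assumes the views dataframe from the preceding snippet; the log_views column name is just illustrative.

in [ ]:
# log1p compresses the scale of large counts and handles zero counts safely
views['log_views'] = np.log1p(views['views'])
views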
standardized scaling the standard scaler tries to standardize each value in feature column by removing the mean and scaling the variance to 1 from the values this is also known as centering and scaling and can be denoted mathematically as SS(Xi) = (Xi - μX) / σX, where each value in feature X is subtracted by the mean μX and the resultant is divided by the standard deviation σX this is also popularly known as z-score scaling you can also divide the resultant by the variance instead of the standard deviation if needed the following snippet helps us achieve this in [ ]ss standardscaler(views['zscore'ss fit_transform(views[['views']]views out[ ]views zscore - - - - - we can see the standardized and scaled values in the zscore column in the preceding dataframe in factyou can manually use the formula we used earlier to compute the same result the following example computes the z-score mathematically in [ ]vw np array(views['views'](vw[ np mean(vw)np std(vwout[ ]- min-max scaling with min-max scalingwe can transform and scale our feature values such that each value is within the range of [0, 1] however the minmaxscaler class in scikit-learn also allows you to specify your own upper and lower bound in the scaled value range using the feature_range variable mathematically we can represent this scaler as MMS(Xi) = (Xi - min(X)) / (max(X) - min(X)), where we scale each value in the feature X by subtracting the minimum value in the feature, min(X), from it and dividing the resultant by the difference between the maximum and minimum values in the feature, max(X) - min(X) the following snippet helps us compute this in [ ]mms minmaxscaler(views['minmax'mms fit_transform(views[['views']]
views out[ ]views zscore minmax - - - - - the preceding output shows the min-max scaled values in the minmax column and as expectedthe maximum viewed video has a scaled value of 1 and the minimum viewed video has a scaled value of 0 you can also compute this mathematically using the following code (sample computation for the first rowin [ ](vw[ np min(vw)(np max(vwnp min(vw)out[ ] robust scaling the disadvantage of min-max scaling is that often the presence of outliers affects the scaled values for any feature robust scaling tries to use specific statistical measures to scale features without being affected by outliers mathematically this scaler can be represented as RS(Xi) = (Xi - median(X)) / IQR(1,3)(X), where we scale each value of feature X by subtracting the median of X and dividing the resultant by the IQR, also known as the inter-quartile range of X, which is the range (difference) between the third quartile (75th %ile) and the first quartile (25th %ile) the following code performs robust scaling on our sample feature in [ ]rs robustscaler(views['robust'rs fit_transform(views[['views']]views out[ ]views zscore minmax robust - - - - - - - - the scaled values are depicted in the robust column and you can compare them with the scaled features in the other columns you can also compute the same using the mathematical equation we formulated for the robust scaler as depicted in the following snippet (for the first row index valuein [ ]quartiles np percentile(vw( )
iqr quartiles[ quartiles[ (vw[ np median(vw)iqr out[ ] there are several other techniques for feature scaling and normalizationbut these should be sufficient to get you started and are used extensively in building machine learning systems always remember to check if you need to scale and standardize features whenever you are dealing with numerical features feature selection while it is good to try to engineering features that try to capture some latent representations and patterns in the underlying datait is not always good thing to deal with feature sets having maybe thousands of features or even more dealing with large number of features bring us to the concept of the curse of dimensionality which we mentioned earlier during the "bin countingsection in "feature engineering on categorical datamore features tend to make models more complex and difficult to interpret besides thisit can often lead to models over-fitting on the training data this basically leads to very specialized model tuned only to the data which it used for training and hence even if you get high model performanceit will end up performing very poorly on newpreviously unseen data the ultimate objective is to select an optimal number of features to train and build models that generalize very well on the data and prevent overfitting feature selection strategies can be divided into three main areas based on the type of strategy and techniques employed for the same they are described briefly as follows filter methodsthese techniques select features purely based on metrics like correlationmutual information and so on these methods do not depend on results obtained from any model and usually check the relationship of each feature with the response variable to be predicted popular methods include threshold based methods and statistical tests wrapper methodsthese techniques try to capture interaction between multiple features by using recursive approach to build multiple models using feature subsets and select the best subset of features giving us the best performing model methods like backward selecting and forward elimination are popular wrapper based methods embedded methodsthese techniques try to combine the benefits of the other two methods by leveraging machine learning models themselves to rank and score feature variables based on their importance tree based methods like decision trees and ensemble methods like random forests are popular examples of embedded methods the benefits of feature selection include better performing modelsless overfittingmore generalized modelsless time for computations and model trainingand to get good insight into understanding the importance of various features in your data in this sectionwe look at some of the most widely used techniques in feature selection you can load feature_selection py directly and start running the examples or use the jupyter notebookfeature selection ipynb for more interactive experience let' start by loading the following dependencies and configurations in [ ]import numpy as np import pandas as pd np set_printoptions(suppress=truept np get_printoptions()['threshold'
we will now look at various ways of selecting features including statistical and model based techniques by using some sample datasets threshold-based methods this is filter based feature selection strategywhere you can use some form of cut-off or thresholding for limiting the total number of features during feature selection thresholds can be of various forms some of them can be used during the feature engineering process itselfwhere you can specify threshold parameters simple example of this would be to limit feature terms in the bag of words modelwhich we used for text based feature engineering earlier the scikit-learn framework provides parameters like min_df and max_ df which can be used to specify thresholds for ignoring terms which have document frequency above and below user specified thresholds the following snippet depicts way to do this in [ ]from sklearn feature_extraction text import countvectorizer cv countvectorizer(min_df= max_df= max_features= cv out[ ]countvectorizer(analyzer='word'binary=falsedecode_error='strict'dtype=encoding='utf- 'input='content'lowercase=truemax_df= max_features= min_df= ngram_range=( )preprocessor=nonestop_words=nonestrip_accents=nonetoken_pattern='(? )\\ \\ \\ +\\ 'tokenizer=nonevocabulary=nonethis basically builds count vectorizer which ignores feature terms which occur in less than of the total corpus and also ignores terms which occur in more than of the total corpus besides this we also put hard limit of maximum features in the feature set another way of using thresholds is to use variance based thresholding where features having low variance (below user-specified thresholdare removed this signifies that we want to remove features that have values that are more or less constant across all the observations in our datasets we can apply this to our pokemon datasetwhich we used earlier in this first we convert the generation feature to categorical feature as follows in [ ]df pd read_csv('datasets/pokemon csv'poke_gen pd get_dummies(df['generation']poke_gen head(out[ ]gen gen gen gen gen gen nextwe want to remove features from the one hot encoded features where the variance is less than we can do this using the following snippet
in [ ]from sklearn feature_selection import variancethreshold vt variancethreshold(threshold vt fit(poke_genout[ ]variancethreshold(threshold= to view the variances as well as which features were finally selected by this algorithmwe can use the variances_ property and the get_supportfunction respectively the following snippet depicts this clearly in formatted dataframe in [ ]pd dataframe({'variance'vt variances_'select_feature'vt get_support()}index=poke_gen columnst out[ ]gen gen gen gen gen gen select_feature true false true false true false variance we can clearly see which features have been selected based on their true values and also their variance being above to get the final subset of selected featuresyou can use the following code in [ ]poke_gen_subset poke_gen iloc[:,vt get_support()head(poke_gen_subset out[ ]gen gen gen the preceding feature subset depicts that features gen gen and gen have been finally selected out of the original six features statistical methods another widely used filter based feature selection methodwhich is slightly more sophisticatedis to select features based on univariate statistical tests you can use several statistical tests for regression and classification based models including mutual informationanova (analysis of varianceand chi-square tests based on scores obtained from these statistical testsyou can select the best features on the basis of their score let' load sample dataset now with features this dataset is known as the wisconsin diagnostic breast cancer datasetwhich is also available in its native or raw format at ics uci edu/ml/datasets/breast+cancer+wisconsin+(diagnostic)which is the uci machine learning repository we will use scikit-learn to load the data features and the response class variable in [ ]from sklearn datasets import load_breast_cancer bc_data load_breast_cancer(bc_features pd dataframe(bc_data datacolumns=bc_data feature_namesbc_classes pd dataframe(bc_data targetcolumns=['ismalignant']
build featureset and response class labels bc_x np array(bc_featuresbc_y np array(bc_classest[ print('feature set shape:'bc_x shapeprint('response class shape:'bc_y shapefeature set shape( response class shape( ,we can clearly see thatas we mentioned beforethere are total of features in this dataset and total of rows of observations to get some more detail into the feature names and take peek at the data pointsyou can use the following code in [ ]np set_printoptions(threshold= print('feature set data [shape'+str(bc_x shape)+']'print(np round(bc_x )'\ 'print('feature names:'print(np array(bc_features columns)'\ 'print('response class label data [shape'+str(bc_y shape)+']'print(bc_y'\ 'print('response variable name:'np array(bc_classes columns)np set_printoptions(threshold=ptfeature set data [shape( )[ ]feature names['mean radius'mean texture'mean perimeter'mean area'mean smoothness'mean compactness'mean concavity'mean concave points'mean symmetry'mean fractal dimension'radius error'texture error'perimeter error'area error'smoothness error'compactness error'concavity error'concave points error'symmetry error'fractal dimension error'worst radius'worst texture'worst perimeter'worst area'worst smoothness'worst compactness'worst concavity'worst concave points'worst symmetry'worst fractal dimension'response class label data [shape( ,)[ response variable name['ismalignant'this gives us better perspective on the data we are dealing with the response class variable is binary class where indicates the tumor detected was benign and indicates it was malignant we can also see the features that are real valued numbers that describe characteristics of cell nuclei present in digitized images of breast mass let' now use the chi-square test on this feature set and select the top best features out of the features the following snippet helps us achieve this
In [ ]: from sklearn.feature_selection import chi2, SelectKBest

        skb = SelectKBest(score_func=chi2, k=15)
        skb.fit(bc_X, bc_y)

Out[ ]: SelectKBest(k=15, score_func=<function chi2 at 0x...>)

You can see that we have passed our input features (bc_X) and the corresponding response class outputs (bc_y) to the fit() function when computing the necessary metrics. The chi-square test computes statistics between each feature and the class variable (univariate tests). Selecting only the top 15 features removes the features with a low score; such features are most likely to be independent of the class variable and hence not useful in building models. We sort the scores to see the most relevant features using the following code.

In [ ]: feature_scores = [(item, score) for item, score in zip(bc_data.feature_names, skb.scores_)]
        sorted(feature_scores, key=lambda f: -f[1])[:10]

Out[ ]: [('worst area', ...), ('mean area', ...), ('area error', ...),
         ('worst perimeter', ...), ('mean perimeter', ...), ('worst radius', ...),
         ('mean radius', ...), ('perimeter error', ...), ('worst texture', ...),
         ('mean texture', ...)]

We can now create a subset of the selected features, obtained from our original feature set of 30 features with the help of the chi-square test, by using the following code.

In [ ]: select_features_kbest = skb.get_support()
        feature_names_kbest = bc_data.feature_names[select_features_kbest]
        feature_subset_df = bc_features[feature_names_kbest]
        bc_SX = np.array(feature_subset_df)

        print(bc_SX.shape)
        print(feature_names_kbest)

(569, 15)
['mean radius' 'mean texture' 'mean perimeter' 'mean area' 'mean concavity'
 'radius error' 'perimeter error' 'area error' 'worst radius' 'worst texture'
 'worst perimeter' 'worst area' 'worst compactness' 'worst concavity'
 'worst concave points']

Thus, from the preceding output, you can see that our new feature subset bc_SX has 569 observations of 15 features instead of 30, and we have also printed the names of the selected features for ease of understanding. To view the new feature set, you can use the following snippet.

In [ ]: np.round(feature_subset_df.iloc[20:25], 2)
Figure: Selected feature subset of the Wisconsin Diagnostic Breast Cancer dataset using chi-square tests

The dataframe with the top scoring features is depicted in the preceding figure. Let's now build a simple classification model using logistic regression on the original feature set of 30 features and compare its accuracy with that of a model built using our selected feature set of 15 features. For model evaluation, we will use the accuracy metric (percent of correct predictions) and a five-fold cross-validation scheme. We will be covering model evaluation and tuning strategies in detail in a later chapter, so do not despair if you cannot understand some of the terminology right now. The main idea here is to compare the model prediction performance between models trained on different feature sets.

In [ ]: from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # build logistic regression model
        lr = LogisticRegression()

        # evaluating accuracy for model built on full featureset
        full_feat_acc = np.average(cross_val_score(lr, bc_X, bc_y, scoring='accuracy', cv=5))
        # evaluating accuracy for model built on selected featureset
        sel_feat_acc = np.average(cross_val_score(lr, bc_SX, bc_y, scoring='accuracy', cv=5))

        print('Model accuracy statistics with 5-fold cross validation')
        print('Model accuracy with complete feature set', bc_X.shape, ':', full_feat_acc)
        print('Model accuracy with selected feature set', bc_SX.shape, ':', sel_feat_acc)

Model accuracy statistics with 5-fold cross validation
Model accuracy with complete feature set (569, 30) : ...
Model accuracy with selected feature set (569, 15) : ...

The accuracy metrics clearly show that the model trained on the selected feature subset of 15 features actually performs slightly better than the model built with the original set of 30 features. Try this out on your own datasets; do you see any improvements?

Recursive Feature Elimination

You can also rank and score features with the help of a machine learning based model estimator, recursively eliminating the lower scored features until you arrive at a specific feature subset count. Recursive feature elimination, also known as RFE, is a popular wrapper based feature selection technique that allows you to use this strategy. The basic idea is to start off with a specific machine learning estimator, like the logistic regression algorithm we used for our classification needs. Next we take the entire feature set of 30 features and the corresponding response class variable. RFE aims to assign weights to these features based on the model fit. Features with the smallest weights are pruned out and then a model is fit again on
the remaining features to obtain new weights or scores. This process is carried out recursively multiple times, and each time the features with the lowest scores/weights are eliminated, until the pruned feature subset contains the desired number of features that the user wanted to select (this is taken as an input parameter at the start). This strategy is also popularly known as backward elimination. Let's now select the top 15 features on our breast cancer dataset using RFE.

In [ ]: from sklearn.feature_selection import RFE

        lr = LogisticRegression()
        rfe = RFE(estimator=lr, n_features_to_select=15, step=1)
        rfe.fit(bc_X, bc_y)

Out[ ]: RFE(estimator=LogisticRegression(C=1.0, class_weight=None, dual=False,
                 fit_intercept=True, intercept_scaling=1, max_iter=100,
                 multi_class='ovr', n_jobs=1, penalty='l2', random_state=None,
                 solver='liblinear', tol=0.0001, verbose=0, warm_start=False),
            n_features_to_select=15, step=1, verbose=0)

We can now use the get_support() function to obtain the final selected features. This is depicted in the following snippet.

In [ ]: select_features_rfe = rfe.get_support()
        feature_names_rfe = bc_data.feature_names[select_features_rfe]
        print(feature_names_rfe)

['mean radius' 'mean texture' 'mean perimeter' 'mean smoothness'
 'mean concavity' 'mean concave points' 'mean symmetry' 'texture error'
 'worst radius' 'worst texture' 'worst smoothness' 'worst concavity'
 'worst concave points' 'worst symmetry' 'worst fractal dimension']

Can we compare this feature subset with the one we obtained using statistical tests in the previous section and see which features are common to both subsets? Of course we can! Let's use set operations to get the list of features that were selected by both techniques.

In [ ]: set(feature_names_kbest) & set(feature_names_rfe)

Out[ ]: {'mean concavity', 'mean perimeter', 'mean radius', 'mean texture',
         'worst concave points', 'worst concavity', 'worst radius', 'worst texture'}

Thus we can see that 8 of the 15 features are common to both feature selection techniques, which is definitely interesting!

Model-Based Selection

Tree based models like decision trees and ensemble models like random forests (an ensemble of trees) can be utilized not just for modeling but also for feature selection. These models can compute feature importances when building the model, which can in turn be used for selecting the best features and discarding irrelevant features with lower scores. Random forest is an ensemble model that can be used as an embedded feature selection method: each decision tree model in the ensemble is built by taking a training sample of data from the entire dataset. This sample is a bootstrap sample (a sample taken with replacement). Splits at any node are taken by choosing the best split from a random subset of the features rather than taking all the features into account. This randomness tends to reduce the variance of the model
at the cost of slightly increasing the bias. Overall, this produces a better and more generalized model. We will cover the bias-variance tradeoff in more detail in a later chapter. Let's now use a random forest model to score and rank the features based on their importance.

In [ ]: from sklearn.ensemble import RandomForestClassifier

        rfc = RandomForestClassifier()
        rfc.fit(bc_X, bc_y)

Out[ ]: RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
            oob_score=False, random_state=None, verbose=0, warm_start=False)

The following code uses this random forest estimator to score the features based on their importance, and we display the top 10 most important features based on this score.

In [ ]: importance_scores = rfc.feature_importances_
        feature_importances = [(feature, score) for feature, score in
                               zip(bc_data.feature_names, importance_scores)]
        sorted(feature_importances, key=lambda f: -f[1])[:10]

Out[ ]: [('worst area', ...), ('worst radius', ...), ('worst concavity', ...),
         ('worst concave points', ...), ('mean concave points', ...),
         ('mean concavity', ...), ('mean area', ...), ('worst perimeter', ...),
         ('worst texture', ...), ('mean radius', ...)]

You can now use a threshold based parameter to filter out the top features as needed, or you can make use of the SelectFromModel meta-transformer provided by scikit-learn by using it as a wrapper on top of this model. Can you find out how many of the higher ranked features from the random forest model are in common with the previous two feature selectors?

Dimensionality Reduction

Dealing with a lot of features can lead to issues like model overfitting, complex models, and many others that all roll up into what we have called the curse of dimensionality; refer to the "Dimensionality Reduction" section earlier in the book to refresh your memory. Dimensionality reduction is the process of reducing the total number of features in our feature set using strategies like feature selection or feature extraction. We have already talked about feature selection extensively in the previous section. We now cover feature extraction, where the basic objective is to extract new features from the existing set of features such that a higher-dimensional dataset with many features can be reduced into a lower-dimensional dataset of these newly created features. A very popular technique of linear data transformation from higher to lower dimensions is principal component analysis, also known as PCA. Let's try to understand more about PCA and how we can use it for feature extraction in the following sections.
Feature Extraction with Principal Component Analysis

Principal component analysis, popularly known as PCA, is a statistical method that uses a linear, orthogonal transformation to transform a higher-dimensional set of possibly correlated features into a lower-dimensional set of linearly uncorrelated features. These transformed, newly created features are also known as principal components or PCs. In any PCA transformation, the total number of PCs is always less than or equal to the initial number of features. The first principal component tries to capture the maximum variance of the original set of features, and each succeeding component tries to capture as much of the remaining variance as possible while staying orthogonal to the preceding components. An important point to remember is that PCA is sensitive to feature scaling.

Our main task is to take a set of initial features of dimension, let's say D, and reduce it to a subset of extracted principal components of a lower dimension d. The matrix decomposition process of singular value decomposition (SVD) is extremely useful in helping us obtain the principal components. You can quickly refresh your memory on SVD by referring to the sub-section "Singular Value Decomposition" under "Important Concepts" in the "Mathematics" section earlier in the book for the necessary mathematical formulas and concepts. Considering we have a data matrix F of shape (n x D), where we have n observations and D dimensions (features), we can depict the SVD of the feature matrix as

F_{(n \times D)} = U S V^{T}

such that all the principal components are contained in the component V^{T}, which can be depicted as follows:

V^{T}_{(D \times D)} = \begin{bmatrix} PC_{1\,(1 \times D)} \\ PC_{2\,(1 \times D)} \\ \vdots \\ PC_{D\,(1 \times D)} \end{bmatrix}

The principal components are represented by \{PC_{1}, PC_{2}, \ldots, PC_{D}\}, which are all one-dimensional vectors of dimension (1 x D). For working with the principal components as columns, we can first transpose this matrix to obtain the following representation:

PC_{(D \times D)} = \left( V^{T} \right)^{T} = \begin{bmatrix} PC_{1\,(D \times 1)} & PC_{2\,(D \times 1)} & \cdots & PC_{D\,(D \times 1)} \end{bmatrix}

Now we can extract out the first d principal components, such that d \leq D, and the reduced principal component set can be depicted as follows:

PC_{(D \times d)} = \begin{bmatrix} PC_{1\,(D \times 1)} & PC_{2\,(D \times 1)} & \cdots & PC_{d\,(D \times 1)} \end{bmatrix}

Finally, to perform dimensionality reduction, we can get the reduced feature set using the following mathematical transformation:

F_{(n \times d)} = F_{(n \times D)} \cdot PC_{(D \times d)}

where the dot product between the original feature matrix and the reduced subset of principal components gives us a reduced feature set of d features. A very important point to remember here is that you might need to center your initial feature matrix by removing the mean, because by default PCA assumes that your data is centered around the origin.
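To make the preceding transformation concrete, here is a minimal sketch, assuming the bc_X breast cancer feature matrix from earlier in this section, that performs the same SVD-based extraction with NumPy; the variable names and the choice of d = 3 components are illustrative only and are not part of the original text.

In [ ]: # illustrative sketch: PCA by hand via SVD on the breast cancer features
        import numpy as np

        # center the features, since PCA assumes data centered around the origin
        bc_X_centered = bc_X - bc_X.mean(axis=0)

        # SVD of the centered feature matrix: F = U S Vt
        U, S, Vt = np.linalg.svd(bc_X_centered, full_matrices=False)

        # the principal components are the rows of Vt; keep the first d = 3 as columns
        pc = Vt.T[:, :3]                      # shape (30, 3), i.e., PC(D x d)

        # reduced feature set: F(n x d) = F(n x D) . PC(D x d)
        bc_X_reduced = np.dot(bc_X_centered, pc)
        print(bc_X_reduced.shape)             # (569, 3)

scikit-learn's sklearn.decomposition.PCA wraps essentially this procedure, including the mean centering, behind a familiar fit/transform interface.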