11,900 | method description count number of non-na values describe compute set of summary statistics for series or each dataframe column minmax compute minimum and maximum values argminargmax compute index locations (integersat which minimum or maximum value obtainedrespectively idxminidxmax compute index values at which minimum or maximum value obtainedrespectively quantile compute sample quantile ranging from to sum sum of values mean mean of values median arithmetic median ( quantileof values mad mean absolute deviation from mean value var sample variance of values std sample standard deviation of values skew sample skewness ( rd momentof values kurt sample kurtosis ( th momentof values cumsum cumulative sum of values cummincummax cumulative minimum or maximum of valuesrespectively cumprod cumulative product of values diff compute st arithmetic difference (useful for time seriespct_change compute percent changes correlation and covariance some summary statisticslike correlation and covarianceare computed from pairs of arguments let' consider some dataframes of stock prices and volumes obtained from yahoofinanceimport pandas io data as web all_data {for ticker in ['aapl''ibm''msft''goog']all_data[tickerweb get_data_yahoo(ticker''''price dataframe({ticdata['adj close'for ticdata in all_data iteritems()}volume dataframe({ticdata['volume'for ticdata in all_data iteritems()} now compute percent changes of the pricesin [ ]returns price pct_change(in [ ]returns tail(summarizing and computing descriptive statistics www it-ebooks info |
11,901 | aapl goog ibm msft date - - - - - - - - the corr method of series computes the correlation of the overlappingnon-naaligned-by-index values in two series relatedlycov computes the covariancein [ ]returns msft corr(returns ibmout[ ] in [ ]returns msft cov(returns ibmout[ ] dataframe' corr and cov methodson the other handreturn full correlation or covariance matrix as dataframerespectivelyin [ ]returns corr(out[ ]aapl goog aapl goog ibm msft ibm msft in [ ]returns cov(out[ ]aapl goog aapl goog ibm msft ibm msft using dataframe' corrwith methodyou can compute pairwise correlations between dataframe' columns or rows with another series or dataframe passing series returns series with the correlation value computed for each columnin [ ]returns corrwith(returns ibmout[ ]aapl goog ibm msft passing dataframe computes the correlations of matching column names here compute correlations of percent changes with volumein [ ]returns corrwith(volumeout[ ]aapl - goog getting started with pandas www it-ebooks info |
11,902 | msft - - passing axis= does things row-wise instead in all casesthe data points are aligned by label before computing the correlation unique valuesvalue countsand membership another class of related methods extracts information about the values contained in one-dimensional series to illustrate theseconsider this examplein [ ]obj series([' '' '' '' '' '' '' '' '' ']the first function is uniquewhich gives you an array of the unique values in seriesin [ ]uniques obj unique(in [ ]uniques out[ ]array([cadb]dtype=objectthe unique values are not necessarily returned in sorted orderbut could be sorted after the fact if needed (uniques sort()relatedlyvalue_counts computes series containing value frequenciesin [ ]obj value_counts(out[ ] the series is sorted by value in descending order as convenience value_counts is also available as top-level pandas method that can be used with any array or sequencein [ ]pd value_counts(obj valuessort=falseout[ ] lastlyisin is responsible for vectorized set membership and can be very useful in filtering data set down to subset of values in series or column in dataframein [ ]mask obj isin([' '' ']in [ ]mask out[ ] true false false false false true true in [ ]obj[maskout[ ] summarizing and computing descriptive statistics www it-ebooks info |
11,903 | true true see table - for reference on these methods table - uniquevalue countsand binning methods method description isin compute boolean array indicating whether each series value is contained in the passed sequence of values unique compute array of unique values in seriesreturned in the order observed value_counts return series containing unique values as its index and frequencies as its valuesordered count in descending order in some casesyou may want to compute histogram on multiple related columns in dataframe here' an examplein [ ]data dataframe({'qu '[ ]'qu '[ ]'qu '[ ]}in [ ]data out[ ]qu qu qu passing pandas value_counts to this dataframe' apply function givesin [ ]result data apply(pd value_countsfillna( in [ ]result out[ ]qu qu qu handling missing data missing data is common in most data analysis applications one of the goals in designing pandas was to make working with missing data as painless as possible for exampleall of the descriptive statistics on pandas objects exclude missing data as you've seen earlier in the getting started with pandas www it-ebooks info |
11,904 | both floating as well as in non-floating point arrays it is just used as sentinel that can be easily detectedin [ ]string_data series(['aardvark''artichoke'np nan'avocado']in [ ]string_data out[ ] aardvark artichoke nan avocado in [ ]string_data isnull(out[ ] false false true false the built-in python none value is also treated as na in object arraysin [ ]string_data[ none in [ ]string_data isnull(out[ ] true false true false do not claim that pandas' na representation is optimalbut it is simple and reasonably consistent it' the best solutionwith good all-around performance characteristics and simple apithat could concoct in the absence of true na data type or bit pattern in numpy' data types ongoing development work in numpy may change this in the future table - na handling methods argument description dropna filter axis labels based on whether values for each label have missing datawith varying thresholds for how much missing data to tolerate fillna fill in missing data with some value or using an interpolation method such as 'ffillor 'bfillisnull return like-type object containing boolean values indicating which values are missing na notnull negation of isnull filtering out missing data you have number of options for filtering out missing data while doing it by hand is always an optiondropna can be very helpful on seriesit returns the series with only the non-null data and index valuesin [ ]from numpy import nan as na in [ ]data series([ na na ]in [ ]data dropna(out[ ]handling missing data www it-ebooks info |
11,905 | naturallyyou could have computed this yourself by boolean indexingin [ ]data[data notnull()out[ ] with dataframe objectsthese are bit more complex you may want to drop rows or columns which are all na or just those containing any nas dropna by default drops any row containing missing valuein [ ]data dataframe([[ ][ nana][nanana][na ]]in [ ]cleaned data dropna(in [ ]data out[ ] nan nan nan nan nan nan in [ ]cleaned out[ ] passing how='allwill only drop rows that are all nain [ ]data dropna(how='all'out[ ] nan nan nan dropping columns in the same way is only matter of passing axis= in [ ]data[ na in [ ]data out[ ] nan nan nan nan nan nan nan nan nan nan in [ ]data dropna(axis= how='all'out[ ] nan nan nan nan nan nan related way to filter out dataframe rows tends to concern time series data suppose you want to keep only rows containing certain number of observations you can indicate this with the thresh argumentin [ ]df dataframe(np random randn( )in [ ]df ix[: nadf ix[: na getting started with pandas www it-ebooks info |
11,906 | out[ ] - nan nan nan nan - nan nan - nan - nan - - - - - - in [ ]df dropna(thresh= out[ ] - - - - - filling in missing data rather than filtering out missing data (and potentially discarding other data along with it)you may want to fill in the "holesin any number of ways for most purposesthe fillna method is the workhorse function to use calling fillna with constant replaces missing values with that valuein [ ]df fillna( out[ ] - - - - - - - - - - calling fillna with dict you can use different fill value for each columnin [ ]df fillna({ - }out[ ] - nan nan - nan - - - - - - - - fillna returns new objectbut you can modify the existing object in placealways returns reference to the filled object in [ ] df fillna( inplace=truein [ ]df out[ ] - - - handling missing data www it-ebooks info |
11,907 | - - - - - the same interpolation methods available for reindexing can be used with fillnain [ ]df dataframe(np random randn( )in [ ]df ix[ : nadf ix[ : na in [ ]df out[ ] - nan nan - nan nan nan nan in [ ]df fillna(method='ffill'out[ ] - - - - in [ ]df fillna(method='ffill'limit= out[ ] - - nan - nan - with fillna you can do lots of other things with little creativity for exampleyou might pass the mean or median value of seriesin [ ]data series([ na na ]in [ ]data fillna(data mean()out[ ] see table - for reference on fillna table - fillna function arguments argument description value scalar value or dict-like object to use to fill missing values method interpolationby default 'ffillif function called with no other arguments axis axis to fill ondefault axis= inplace modify the calling object without producing copy limit for forward and backward fillingmaximum number of consecutive periods to fill getting started with pandas www it-ebooks info |
11,908 | hierarchical indexing is an important feature of pandas enabling you to have multiple (two or moreindex levels on an axis somewhat abstractlyit provides way for you to work with higher dimensional data in lower dimensional form let' start with simple examplecreate series with list of lists or arrays as the indexin [ ]data series(np random randn( )index=[[' '' '' '' '' '' '' '' '' '' '][ ]]in [ ]data out[ ] - - - - - - what you're seeing is prettified view of series with multiindex as its index the "gapsin the index display mean "use the label directly above"in [ ]data index out[ ]multiindex [(' ' (' ' (' ' (' ' (' ' (' ' (' ' (' ' (' ' (' ' )with hierarchically-indexed objectso-called partial indexing is possibleenabling you to concisely select subsets of the datain [ ]data[' 'out[ ] - - - in [ ]data[' ':' 'out[ ] - - - - - in [ ]data ix[[' '' ']out[ ] - - - selection is even possible in some cases from an "innerlevelin [ ]data[: out[ ] hierarchical indexing www it-ebooks info |
11,909 | - - hierarchical indexing plays critical role in reshaping data and group-based operations like forming pivot table for examplethis data could be rearranged into dataframe using its unstack methodin [ ]data unstack(out[ ] - - - - - - nan nan the inverse operation of unstack is stackin [ ]data unstack(stack(out[ ] - - - - - - stack and unstack will be explored in more detail in with dataframeeither axis can have hierarchical indexin [ ]frame dataframe(np arange( reshape(( ))index=[[' '' '' '' '][ ]]columns=[['ohio''ohio''colorado']['green''red''green']]in [ ]frame out[ ]ohio green red colorado green the hierarchical levels can have names (as strings or any python objectsif sothese will show up in the console output (don' confuse the index names with the axis labels!)in [ ]frame index names ['key ''key 'in [ ]frame columns names ['state''color'in [ ]frame getting started with pandas www it-ebooks info |
11,910 | state color key key ohio green red colorado green with partial column indexing you can similarly select groups of columnsin [ ]frame['ohio'out[ ]color green red key key multiindex can be created by itself and then reusedthe columns in the above dataframe with level names could be created like thismultiindex from_arrays([['ohio''ohio''colorado']['green''red''green']]names=['state''color']reordering and sorting levels at times you will need to rearrange the order of the levels on an axis or sort the data by the values in one specific level the swaplevel takes two level numbers or names and returns new object with the levels interchanged (but the data is otherwise unaltered)in [ ]frame swaplevel('key ''key 'out[ ]state ohio colorado color green red green key key sortlevelon the other handsorts the data (stablyusing only the values in single level when swapping levelsit' not uncommon to also use sortlevel so that the result is lexicographically sortedin [ ]frame sortlevel( out[ ]state ohio colorado color green red green key key in [ ]frame swaplevel( sortlevel( out[ ]state ohio colorado color green red green key key hierarchical indexing www it-ebooks info |
11,911 | objects if the index is lexicographically sorted starting with the outermost levelthat isthe result of calling sortlevel( or sort_index(summary statistics by level many descriptive and summary statistics on dataframe and series have level option in which you can specify the level you want to sum by on particular axis consider the above dataframewe can sum by level on either the rows or columns like soin [ ]frame sum(level='key 'out[ ]state ohio colorado color green red green key in [ ]frame sum(level='color'axis= out[ ]color green red key key under the hoodthis utilizes pandas' groupby machinery which will be discussed in more detail later in the book using dataframe' columns it' not unusual to want to use one or more columns from dataframe as the row indexalternativelyyou may wish to move the row index into the dataframe' columns here' an example dataframein [ ]frame dataframe({' 'range( )' 'range( - )' '['one''one''one''two''two''two''two']' '[ ]}in [ ]frame out[ ] one one one two two two two getting started with pandas www it-ebooks info |
11,912 | columns as the indexin [ ]frame frame set_index([' '' ']in [ ]frame out[ ] one two by default the columns are removed from the dataframethough you can leave them inin [ ]frame set_index([' '' ']drop=falseout[ ] one one one one two two two two two reset_indexon the other handdoes the opposite of set_indexthe hierarchical index levels are are moved into the columnsin [ ]frame reset_index(out[ ] one one one two two two two other pandas topics here are some additional topics that may be of use to you in your data travels integer indexing working with pandas objects indexed by integers is something that often trips up new users due to some differences with indexing semantics on built-in python data other pandas topics www it-ebooks info |
11,913 | to generate an errorser series(np arange( )ser[- in this casepandas could "fall backon integer indexingbut there' not safe and general way (that know ofto do this without introducing subtle bugs here we have an index containing but inferring what the user wants (label-based indexing or position-basedis difficult:in [ ]ser out[ ] on the other handwith non-integer indexthere is no potential for ambiguityin [ ]ser series(np arange( )index=[' '' '' ']in [ ]ser [- out[ ] to keep things consistentif you have an axis index containing indexersdata selection with integers will always be label-oriented this includes slicing with ixtooin [ ]ser ix[: out[ ] in cases where you need reliable position-based indexing regardless of the index typeyou can use the iget_value method from series and irow and icol methods from dataframein [ ]ser series(range( )index=[- ]in [ ]ser iget_value( out[ ] in [ ]frame dataframe(np arange( reshape( ))index=[ ]in [ ]frame irow( out[ ] name panel data while not major topic of this bookpandas has panel data structurewhich you can think of as three-dimensional analogue of dataframe much of the development focus of pandas has been in tabular data manipulations as these are easier to reason about getting started with pandas www it-ebooks info |
11,914 | of cases to create panelyou can use dict of dataframe objects or three-dimensional ndarrayimport pandas io data as web pdata pd panel(dict((stkweb get_data_yahoo(stk'''')for stk in ['aapl''goog''msft''dell'])each item (the analogue of columns in dataframein the panel is dataframein [ ]pdata out[ ]dimensions (itemsx (majorx (minoritemsaapl to msft major axis : : to : : minor axisopen to adj close in [ ]pdata pdata swapaxes('items''minor'in [ ]pdata['adj close'out[ ]datetimeindex entries : : to : : data columnsaapl non-null values dell non-null values goog non-null values msft non-null values dtypesfloat ( ix-based label indexing generalizes to three dimensionsso we can select all data at particular date or range of dates like soin [ ]pdata ix[:'':out[ ]open high low close aapl dell goog msft volume adj close in [ ]pdata ix['adj close'''::out[ ]aapl dell goog msft date other pandas topics www it-ebooks info |
11,915 | an alternate way to represent panel dataespecially for fitting statistical modelsis in "stackeddataframe formin [ ]stacked pdata ix[:''::to_frame(in [ ]stacked out[ ]major minor aapl dell goog msft aapl dell goog msft aapl dell goog msft open high low close volume adj close dataframe has related to_panel methodthe inverse of to_framein [ ]stacked to_panel(out[ ]dimensions (itemsx (majorx (minoritemsopen to adj close major axis : : to : : minor axisaapl to msft getting started with pandas www it-ebooks info |
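As a compact sketch of the Panel round trip described above: Panel existed in the pandas releases this book targets but was deprecated and later removed, so this only runs on an old pandas. The price data here is made up rather than fetched from Yahoo! Finance.

import numpy as np
import pandas as pd

dates = pd.date_range('2012-01-01', periods=4)
frames = {stk: pd.DataFrame(np.random.randn(4, 2), index=dates,
                            columns=['Open', 'Adj Close'])
          for stk in ['AAPL', 'MSFT']}

pdata = pd.Panel(frames)        # items x major_axis (dates) x minor_axis (fields)
stacked = pdata.to_frame()      # "stacked" DataFrame with a (date, item) row MultiIndex
roundtrip = stacked.to_panel()  # to_panel is the inverse of to_frame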
11,916 | data loadingstorageand file formats the tools in this book are of little use if you can' easily import and export data in python ' going to be focused on input and output with pandas objectsthough there are of course numerous tools in other libraries to aid in this process numpyfor examplefeatures low-level but extremely fast binary data loading and storageincluding support for memory-mapped array see for more on those input and output typically falls into few main categoriesreading text files and other more efficient on-disk formatsloading data from databasesand interacting with network sources like web apis reading and writing data in text format python has become beloved language for text and file munging due to its simple syntax for interacting with filesintuitive data structuresand convenient features like tuple packing and unpacking pandas features number of functions for reading tabular data as dataframe object table - has summary of all of themthough read_csv and read_table are likely the ones you'll use the most table - parsing functions in pandas function description read_csv load delimited data from fileurlor file-like object use comma as default delimiter read_table load delimited data from fileurlor file-like object use tab ('\ 'as default delimiter read_fwf read data in fixed-width column format (that isno delimitersread_clipboard version of read_table that reads data from the clipboard useful for converting tables from web pages www it-ebooks info |
11,917 | text data into dataframe the options for these functions fall into few categoriesindexingcan treat one or more columns as the returned dataframeand whether to get column names from the filethe useror not at all type inference and data conversionthis includes the user-defined value conversions and custom list of missing value markers datetime parsingincludes combining capabilityincluding combining date and time information spread over multiple columns into single column in the result iteratingsupport for iterating over chunks of very large files unclean data issuesskipping rows or footercommentsor other minor things like numeric data with thousands separated by commas type inference is one of the more important features of these functionsthat means you don' have to specify which columns are numericintegerbooleanor string handling dates and other custom types requires bit more effortthough let' start with small comma-separated (csvtext filein [ ]!cat ch /ex csv , , , ,message , , , ,hello , , , ,world , , , ,foo since this is comma-delimitedwe can use read_csv to read it into dataframein [ ]df pd read_csv('ch /ex csv'in [ ]df out[ ] message hello world foo we could also have used read_table and specifying the delimiterin [ ]pd read_table('ch /ex csv'sep=','out[ ] message hello world foo here used the unix cat shell command to print the raw contents of the file to the screen if you're on windowsyou can use type instead of cat to achieve the same effect data loadingstorageand file formats www it-ebooks info |
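A minimal, self-contained version of the read_csv and read_table calls shown above: instead of a file on disk it parses an in-memory stand-in for ex1.csv, and the numeric values are made up for illustration.

import pandas as pd
from io import StringIO

# Stand-in for the small comma-delimited file shown above
text = ("a,b,c,d,message\n"
        "1,2,3,4,hello\n"
        "5,6,7,8,world\n"
        "9,10,11,12,foo\n")

df = pd.read_csv(StringIO(text))                # comma is the default delimiter
same = pd.read_table(StringIO(text), sep=',')   # read_table needs the delimiter spelled out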
11,918 | in [ ]!cat ch /ex csv , , , ,hello , , , ,world , , , ,foo to read this inyou have couple of options you can allow pandas to assign default column namesor you can specify names yourselfin [ ]pd read_csv('ch /ex csv'header=noneout[ ] hello world foo in [ ]pd read_csv('ch /ex csv'names=[' '' '' '' ''message']out[ ] message hello world foo suppose you wanted the message column to be the index of the returned dataframe you can either indicate you want the column at index or named 'messageusing the index_col argumentin [ ]names [' '' '' '' ''message'in [ ]pd read_csv('ch /ex csv'names=namesindex_col='message'out[ ] message hello world foo in the event that you want to form hierarchical index from multiple columnsjust pass list of column numbers or namesin [ ]!cat ch /csv_mindex csv key ,key ,value ,value one, , , one, , , one, , , one, , , two, , , two, , , two, , , two, , , in [ ]parsed pd read_csv('ch /csv_mindex csv'index_col=['key ''key ']in [ ]parsed out[ ]reading and writing data in text format www it-ebooks info |
11,919 | one two value value in some casesa table might not have fixed delimiterusing whitespace or some other pattern to separate fields in these casesyou can pass regular expression as delimiter for read_table consider text file that looks like thisin [ ]list(open('ch /ex txt')out[ ][ \ ''aaa - - - \ ''bbb - \ ''ccc - - - \ ''ddd - - \ 'while you could do some munging by handin this case fields are separated by variable amount of whitespace this can be expressed by the regular expression \ +so we have thenin [ ]result pd read_table('ch /ex txt'sep='\ +'in [ ]result out[ ] aaa - - - bbb - ccc - - - ddd - - because there was one fewer column name than the number of data rowsread_table infers that the first column should be the dataframe' index in this special case the parser functions have many additional arguments to help you handle the wide variety of exception file formats that occur (see table - for exampleyou can skip the firstthirdand fourth rows of file with skiprowsin [ ]!cat ch /ex csv heya, , , ,message just wanted to make things more difficult for you who reads csv files with computersanyway , , , ,hello , , , ,world , , , ,foo in [ ]pd read_csv('ch /ex csv'skiprows=[ ]out[ ] message data loadingstorageand file formats www it-ebooks info |
11,920 | hello world foo handling missing values is an important and frequently nuanced part of the file parsing process missing data is usually either not present (empty stringor marked by some sentinel value by defaultpandas uses set of commonly occurring sentinelssuch as na- #indand nullin [ ]!cat ch /ex csv something, , , , ,message one, , , , ,na two, , ,, ,world three, , , , ,foo in [ ]result pd read_csv('ch /ex csv'in [ ]result out[ ]something one two nan three message nan world foo in [ ]pd isnull(resultout[ ]something false false false false false false false true false false false false message false true false false false false the na_values option can take either list or set of strings to consider missing valuesin [ ]result pd read_csv('ch /ex csv'na_values=['null']in [ ]result out[ ]something one two nan three message nan world foo different na sentinels can be specified for each column in dictin [ ]sentinels {'message'['foo''na']'something'['two']in [ ]pd read_csv('ch /ex csv'na_values=sentinelsout[ ]something message one nan nan nan world three nan reading and writing data in text format www it-ebooks info |
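Here is a self-contained sketch of the na_values behavior just described, using an in-memory stand-in for ex5.csv; the numbers are illustrative.

import pandas as pd
from io import StringIO

text = ("something,a,b,c,d,message\n"
        "one,1,2,3,4,NA\n"
        "two,5,6,,8,world\n"
        "three,9,10,11,12,foo\n")

pd.read_csv(StringIO(text), na_values=['NULL'])   # extra strings to treat as missing

# A dict applies different NA sentinels to different columns
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv(StringIO(text), na_values=sentinels)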
11,921 | argument description path string indicating filesystem locationurlor file-like object sep or delimiter character sequence or regular expression to use to split fields in each row header row number to use as column names defaults to (first row)but should be none if there is no header row index_col column numbers or names to use as the row index in the result can be single name/number or list of them for hierarchical index names list of column names for resultcombine with header=none skiprows number of rows at beginning of file to ignore or list of row numbers (starting from to skip na_values sequence of values to replace with na comment character or characters to split comments off the end of lines parse_dates attempt to parse data to datetimefalse by default if truewill attempt to parse all columns otherwise can specify list of column numbers or name to parse if element of list is tuple or listwill combine multiple columns together and parse to date (for example if date/time split across two columnskeep_date_col if joining columns to parse datedrop the joined columns default true converters dict containing column number of name mapping to functions for example {'foo'fwould apply the function to all values in the 'foocolumn dayfirst when parsing potentially ambiguous datestreat as international format ( -june default false date_parser function to use to parse dates nrows number of rows to read from beginning of file iterator return textparser object for reading file piecemeal chunksize for iterationsize of file chunks skip_footer number of lines to ignore at end of file verbose print various parser output informationlike the number of missing values placed in non-numeric columns encoding text encoding for unicode for example 'utf- for utf- encoded text squeeze if the parsed data only contains one column return series thousands separator for thousandse ',or reading text files in pieces when processing very large files or figuring out the right set of arguments to correctly process large fileyou may only want to read in small piece of file or iterate through smaller chunks of the file in [ ]result pd read_csv('ch /ex csv'in [ ]result out[ ] data loadingstorageand file formats www it-ebooks info |
11,922 | int index entries to data columnsone non-null values two non-null values three non-null values four non-null values key non-null values dtypesfloat ( )object( if you want to only read out small number of rows (avoiding reading the entire file)specify that with nrowsin [ ]pd read_csv('ch /ex csv'nrows= out[ ]one two three four key - - - - - - - - - - - to read out file in piecesspecify chunksize as number of rowsin [ ]chunker pd read_csv('ch /ex csv'chunksize= in [ ]chunker out[ ]the textparser object returned by read_csv allows you to iterate over the parts of the file according to the chunksize for examplewe can iterate over ex csvaggregating the value counts in the 'keycolumn like sochunker pd read_csv('ch /ex csv'chunksize= tot series([]for piece in chunkertot tot add(piece['key'value_counts()fill_value= tot tot order(ascending=falsewe have thenin [ ]tot[: out[ ] reading and writing data in text format www it-ebooks info |
11,923 | of an arbitrary size writing data out to text format data can also be exported to delimited format let' consider one of the csv files read abovein [ ]data pd read_csv('ch /ex csv'in [ ]data out[ ]something one two three nan message nan world foo using dataframe' to_csv methodwe can write the data out to comma-separated filein [ ]data to_csv('ch /out csv'in [ ]!cat ch /out csv ,something, , , , ,message ,one, , , , ,two, , ,, ,world ,three, , , , ,foo other delimiters can be usedof course (writing to sys stdout so it just prints the text result)in [ ]data to_csv(sys stdoutsep='|'|something| | | | |message |one| | | | |two| | || |world |three| | | | |foo missing values appear as empty strings in the output you might want to denote them by some other sentinel valuein [ ]data to_csv(sys stdoutna_rep='null',something, , , , ,message ,one, , , , ,null ,two, , ,null, ,world ,three, , , , ,foo with no other options specifiedboth the row and column labels are written both of these can be disabledin [ ]data to_csv(sys stdoutindex=falseheader=falseone, , , , two, , ,, ,world three, , , , ,foo you can also write only subset of the columnsand in an order of your choosing data loadingstorageand file formats www it-ebooks info |
11,924 | , , , , , , , series also has to_csv methodin [ ]dates pd date_range(''periods= in [ ]ts series(np arange( )index=datesin [ ]ts to_csv('ch /tseries csv'in [ ]!cat ch /tseries csv : : , : : , : : , : : , : : , : : , : : , with bit of wrangling (no headerfirst column as index)you can read csv version of series with read_csvbut there is also from_csv convenience method that makes it bit simplerin [ ]series from_csv('ch /tseries csv'parse_dates=trueout[ ] see the docstrings for to_csv and from_csv in ipython for more information manually working with delimited formats most forms of tabular data can be loaded from disk using functions like pan das read_table in some caseshoweversome manual processing may be necessary it' not uncommon to receive file with one or more malformed lines that trip up read_table to illustrate the basic toolsconsider small csv filein [ ]!cat ch /ex csv " "," "," " "," "," " "," "," "," for any file with single-character delimiteryou can use python' built-in csv module to use itpass any open file or file-like object to csv readerreading and writing data in text format www it-ebooks info |
11,925 | open('ch /ex csv'reader csv reader(fiterating through the reader like file yields tuples of values in each like with any quote characters removedin [ ]for line in readerprint line [' '' '' '[' '' '' '[' '' '' '' 'from thereit' up to you to do the wrangling necessary to put the data in the form that you need it for examplein [ ]lines list(csv reader(open('ch /ex csv'))in [ ]headervalues lines[ ]lines[ :in [ ]data_dict {hv for hv in zip(headerzip(*values))in [ ]data_dict out[ ]{' '(' '' ')' '(' '' ')' '(' '' ')csv files come in many different flavors defining new format with different delimiterstring quoting conventionor line terminator is done by defining simple subclass of csv dialectclass my_dialect(csv dialect)lineterminator '\ndelimiter ';quotechar '"reader csv reader(fdialect=my_dialectindividual csv dialect parameters can also be given as keywords to csv reader without having to define subclassreader csv reader(fdelimiter='|'the possible options (attributes of csv dialectand what they do can be found in table - table - csv dialect options argument description delimiter one-character string to separate fields defaults to ',lineterminator line terminator for writingdefaults to '\ \nreader ignores this and recognizes cross-platform line terminators quotechar quote character for fields with special characters (like delimiterdefault is '"quoting quoting convention options include csv quote_all (quote all fields)csv quote_minimal (only fields with special characters like the delimiter) data loadingstorageand file formats www it-ebooks info |
11,926 | description csv quote_nonnumericand csv quote_non (no quotingsee python' documentation for full details defaults to quote_minimal skipinitialspace ignore whitespace after each delimiter default false doublequote how to handle quoting character inside field if trueit is doubled see online documentation for full detail and behavior escapechar string to escape the delimiter if quoting is set to csv quote_none disabled by default for files with more complicated or fixed multicharacter delimitersyou will not be able to use the csv module in those casesyou'll have to do the line splitting and other cleanup using string' split method or the regular expression method re split to write delimited files manuallyyou can use csv writer it accepts an openwritable file object and the same dialect and format options as csv readerwith open('mydata csv'' 'as fwriter csv writer(fdialect=my_dialectwriter writerow(('one''two''three')writer writerow((' '' '' ')writer writerow((' '' '' ')writer writerow((' '' '' ')json data json (short for javascript object notationhas become one of the standard formats for sending data by http request between web browsers and other applications it is much more flexible data format than tabular text form like csv here is an exampleobj ""{"name""wes""places_lived"["united states""spain""germany"]"pet"null"siblings"[{"name""scott""age" "pet""zuko"}{"name""katie""age" "pet""cisco"}""json is very nearly valid python code with the exception of its null value null and some other nuances (such as disallowing trailing commas at the end of liststhe basic types are objects (dicts)arrays (lists)stringsnumbersbooleansand nulls all of the keys in an object must be strings there are several python libraries for reading and writing json data 'll use json here as it is built into the python standard library to convert json string to python formuse json loadsin [ ]import json reading and writing data in text format www it-ebooks info |
11,927 | in [ ]result out[ ]{ 'name' 'wes' 'pet'noneu'places_lived'[ 'united states' 'spain' 'germany'] 'siblings'[{ 'age' 'name' 'scott' 'pet' 'zuko'}{ 'age' 'name' 'katie' 'pet' 'cisco'}]json dumps on the other hand converts python object back to jsonin [ ]asjson json dumps(resulthow you convert json object or list of objects to dataframe or some other data structure for analysis will be up to you convenientlyyou can pass list of json objects to the dataframe constructor and select subset of the data fieldsin [ ]siblings dataframe(result['siblings']columns=['name''age']in [ ]siblings out[ ]name age scott katie for an extended example of reading and manipulating json data (including nested records)see the usda food database example in the next an effort is underway to add fast native json export (to_jsonand decoding (from_jsonto pandas this was not ready at the time of writing xml and htmlweb scraping python has many libraries for reading and writing data in the ubiquitous html and xml formats lxml (parsing very large files lxml has multiple programmer interfacesfirst 'll show using lxml html for htmlthen parse some xml using lxml objectify many websites make data available in html tables for viewing in browserbut not downloadable as an easily machine-readable format like jsonhtmlor xml noticed that this was the case with yahoofinance' stock options data if you aren' familiar with this dataoptions are derivative contracts giving you the right to buy (call optionor sell (put optiona company' stock at some particular price (the strikebetween now and some fixed point in the future (the expirypeople trade both call and put options across many strikes and expiriesthis data can all be found together in tables on yahoofinance data loadingstorageand file formats www it-ebooks info |
11,928 | parse the stream with lxml like sofrom lxml html import parse from urllib import urlopen parsed parse(urlopen('doc parsed getroot(using this objectyou can extract all html tags of particular typesuch as table tags containing the data of interest as simple motivating examplesuppose you wanted to get list of every url linked to in the documentlinks are tags in html using the document root' findall method along with an xpath ( means of expressing "querieson the document)in [ ]links doc findall(// 'in [ ]links[ : out[ ][but these are objects representing html elementsto get the url and link text you have to use each element' get method (for the urland text_content method (for the display text)in [ ]lnk links[ in [ ]lnk out[ ]in [ ]lnk get('href'out[ ]'in [ ]lnk text_content(out[ ]'special editionsthusgetting list of all urls in the document is matter of writing this list comprehensionin [ ]urls [lnk get('href'for lnk in doc findall(// ')in [ ]urls[- :out[ ]['''''''reading and writing data in text format www it-ebooks info |
11,929 | ''nowfinding the right tables in the document can be matter of trial and errorsome websites make it easier by giving table of interest an id attribute determined that these were the two tables containing the call data and put datarespectivelytables doc findall(//table'calls tables[ puts tables[ each table has header row followed by each of the data rowsin [ ]rows calls findall(//tr'for the header as well as the data rowswe want to extract the text from each cellin the case of the header these are th cells and td cells for the datadef _unpack(rowkind='td')elts row findall(//%skindreturn [val text_content(for val in eltsthuswe obtainin [ ]_unpack(rows[ ]kind='th'out[ ]['strike''symbol''last''chg''bid''ask''vol''open int'in [ ]_unpack(rows[ ]kind='td'out[ ][' ''aapl '' ' '' '' '' '' 'nowit' matter of combining all of these steps together to convert this data into dataframe since the numerical data is still in string formatwe want to convert somebut perhaps not all of the columns to floating point format you could do this by handbutluckilypandas has class textparser that is used internally in the read_csv and other parsing functions to do the appropriate automatic type conversionfrom pandas io parsers import textparser def parse_options_data(table)rows table findall(//tr'header _unpack(rows[ ]kind='th'data [_unpack(rfor in rows[ :]return textparser(datanames=headerget_chunk(finallywe invoke this parsing function on the lxml table objects and get dataframe results data loadingstorageand file formats www it-ebooks info |
11,930 | in [ ]put_data parse_options_data(putsin [ ]call_data[: out[ ]strike symbol aapl aapl aapl aapl aapl aapl aapl aapl aapl aapl last chg bid ask vol open int / / parsing xml with lxml objectify xml (extensible markup languageis another common structured data format supporting hierarchicalnested data with metadata the files that generate the book you are reading actually form series of large xml documents abovei showed the lxml library and its lxml html interface here show an alternate interface that' convenient for xml datalxml objectify the new york metropolitan transportation authority (mtapublishes number of data series about its bus and train services htmlhere we'll look at the performance data which is contained in set of xml files each train or bus service has different file (like performance_mnr xml for the metronorth railroadcontaining monthly data as series of xml records that look like this metro-north railroad escalator availability percent of the time that escalators are operational systemwide the availability rate is based on physical observations performed the morning of regular business days only this is new indicator the agency began reporting in service indicators reading and writing data in text format www it-ebooks info |
11,931 | file with getrootfrom lxml import objectify path 'performance_mnr xmlparsed objectify parse(open(path)root parsed getroot(root indicator return generator yielding each xml element for each recordwe can populate dict of tag names (like ytd_actualto data values (excluding few tags)data [skip_fields ['parent_seq''indicator_seq''desired_change''decimal_places'for elt in root indicatorel_data {for child in elt getchildren()if child tag in skip_fieldscontinue el_data[child tagchild pyval data append(el_datalastlyconvert this list of dicts into dataframein [ ]perf dataframe(datain [ ]perf out[ ]empty dataframe columnsarray([]dtype=int indexarray([]dtype=int xml data can get much more complicated than this example each tag can have metadatatoo consider an html link tag which is also valid xmlfrom stringio import stringio tag '< href="root objectify parse(stringio(tag)getroot(you can now access any of the fields (like hrefin the tag or the link textin [ ]root out[ ]in [ ]root get('href'out[ ]'in [ ]root text out[ ]'google data loadingstorageand file formats www it-ebooks info |
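A minimal sketch of the lxml.objectify pattern described in this section, using a single made-up record in the spirit of the MTA performance XML rather than the real file:

from io import StringIO
from lxml import objectify

xml = """<INDICATOR>
  <AGENCY_NAME>Metro-North Railroad</AGENCY_NAME>
  <INDICATOR_NAME>Escalator Availability</INDICATOR_NAME>
  <YTD_ACTUAL>97.0</YTD_ACTUAL>
</INDICATOR>"""

root = objectify.parse(StringIO(xml)).getroot()

# Walk the children, collecting tag -> Python value pairs (pyval does the type conversion)
record = {child.tag: child.pyval for child in root.getchildren()}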
11,932 | one of the easiest ways to store data efficiently in binary format is using python' builtin pickle serialization convenientlypandas objects all have save method which writes the data to disk as picklein [ ]frame pd read_csv('ch /ex csv'in [ ]frame out[ ] message hello world foo in [ ]frame save('ch /frame_pickle'you read the data back into python with pandas loadanother pickle convenience functionin [ ]pd load('ch /frame_pickle'out[ ] message hello world foo pickle is only recommended as short-term storage format the problem is that it is hard to guarantee that the format will be stable over timean object pickled today may not unpickle with later version of library have made every effort to ensure that this does not occur with pandasbut at some point in the future it may be necessary to "breakthe pickle format using hdf format there are number of tools that facilitate efficiently reading and writing large amounts of scientific data in binary format on disk popular industry-grade library for this is hdf which is library with interfaces in many other languages like javapythonand matlab the "hdfin hdf stands for hierarchical data format each hdf file contains an internal file system-like node structure enabling you to store multiple datasets and supporting metadata compared with simpler formatshdf supports on-the-fly compression with variety of compressorsenabling data with repeated patterns to be stored more efficiently for very large datasets that don' fit into memoryhdf is good choice as you can efficiently read and write small sections of much larger arrays there are not one but two interfaces to the hdf library in pythonpytables and pyeach of which takes different approach to the problem py provides directbut high-level interface to the hdf apiwhile pytables abstracts many of the details of binary data formats www it-ebooks info |
11,933 | and some support for out-of-core computations pandas has minimal dict-like hdfstore classwhich uses pytables to store pandas objectsin [ ]store pd hdfstore('mydata 'in [ ]store['obj 'frame in [ ]store['obj _col'frame[' 'in [ ]store out[ ]file pathmydata obj dataframe obj _col series objects contained in the hdf file can be retrieved in dict-like fashionin [ ]store['obj 'out[ ] message hello world foo if you work with huge quantities of datai would encourage you to explore pytables and py to see how they can suit your needs since many data analysis problems are io-bound (rather than cpu-bound)using tool like hdf can massively accelerate your applications hdf is not database it is best suited for write-onceread-many datasets while data can be added to file at any timeif multiple writers do so simultaneouslythe file can become corrupted reading microsoft excel files pandas also supports reading tabular data stored in excel (and higherfiles using the excelfile class interally excelfile uses the xlrd and openpyxl packagesso you may have to install them first to use excelfilecreate an instance by passing path to an xls or xlsx filexls_file pd excelfile('data xls'data stored in sheet can then be read into dataframe using parsetable xls_file parse('sheet ' data loadingstorageand file formats www it-ebooks info |
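Pulling the HDFStore and ExcelFile usage from this section into one runnable sketch; the file paths and sheet name are placeholders, and HDFStore requires the PyTables package (installed as `tables`).

import numpy as np
import pandas as pd

frame = pd.DataFrame({'a': np.random.randn(5)})

# Dict-like HDF5 storage backed by PyTables
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame             # store a whole DataFrame under a key
store['obj1_col'] = frame['a']    # or just a Series
frame_again = store['obj1']       # objects come back out the same dict-like way
store.close()

# Reading one sheet of an Excel workbook
xls_file = pd.ExcelFile('data.xls')
table = xls_file.parse('Sheet1')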
11,934 | many websites have public apis providing data feeds via json or some other format there are number of ways to access these apis from pythonone easy-to-use method that recommend is the requests package (for the words "python pandason twitterwe can make an http get request like soin [ ]import requests in [ ]url 'in [ ]resp requests get(urlin [ ]resp out[ ]the response object' text attribute contains the content of the get query many web apis will return json string that must be loaded into python objectin [ ]import json in [ ]data json loads(resp textin [ ]data keys(out[ ][ 'next_page' 'completed_in' 'max_id_str' 'since_id_str' 'refresh_url' 'results' 'since_id' 'results_per_page' 'query' 'max_id' 'page'the results field in the response contains list of tweetseach of which is represented as python dict that looks like{ 'created_at' 'mon jun : : + ' 'from_user' 'wesmckinn' 'from_user_id' 'from_user_id_str' ' ' 'from_user_name' 'wes mckinney' 'geo'noneu'id' 'id_str' ' ' 'iso_language_code' 'pt' 'metadata'{ 'result_type' 'recent'} 'source' '< href=" 'text' 'lunchtime pandas-fu 'to_user'noneu'to_user_id' interacting with html and web apis www it-ebooks info |
11,935 | 'to_user_name'nonewe can then make list of the tweet fields of interest then pass the results list to dataframein [ ]tweet_fields ['created_at''from_user''id''text'in [ ]tweets dataframe(data['results']columns=tweet_fieldsin [ ]tweets out[ ]int index entries to data columnscreated_at non-null values from_user non-null values id non-null values text non-null values dtypesint ( )object( each row in the dataframe now has the extracted data from each tweetin [ ]tweets ix[ out[ ]created_at thu jul : : + from_user deblike id text pandaspowerful python data analysis toolkit name with bit of elbow greaseyou can create some higher-level interfaces to common web apis that return dataframe objects for easy analysis interacting with databases in many applications data rarely comes from text filesthat being fairly inefficient way to store large amounts of data sql-based relational databases (such as sql serverpostgresqland mysqlare in wide useand many alternative non-sql (so-called nosqldatabases have become quite popular the choice of database is usually dependent on the performancedata integrityand scalability needs of an application loading data from sql into dataframe is fairly straightforwardand pandas has some functions to simplify the process as an examplei'll use an in-memory sqlite database using python' built-in sqlite driverimport sqlite query ""create table test ( varchar( ) varchar( ) reald integer );"" data loadingstorageand file formats www it-ebooks info |
11,936 | con execute(querycon commit(theninsert few rows of datadata [('atlanta''georgia' )('tallahassee''florida' )('sacramento''california' )stmt "insert into test values(????)con executemany(stmtdatacon commit(most python sql drivers (pyodbcpsycopg mysqldbpymssqletc return list of tuples when selecting data from tablein [ ]cursor con execute('select from test'in [ ]rows cursor fetchall(in [ ]rows out[ ][( 'atlanta' 'georgia' )( 'tallahassee' 'florida' )( 'sacramento' 'california' )you can pass the list of tuples to the dataframe constructorbut you also need the column namescontained in the cursor' description attributein [ ]cursor description out[ ]((' 'nonenonenonenonenonenone)(' 'nonenonenonenonenonenone)(' 'nonenonenonenonenonenone)(' 'nonenonenonenonenonenone)in [ ]dataframe(rowscolumns=zip(*cursor description)[ ]out[ ] atlanta georgia tallahassee florida sacramento california this is quite bit of munging that you' rather not repeat each time you query the database pandas has read_frame function in its pandas io sql module that simplifies the process just pass the select statement and the connection objectin [ ]import pandas io sql as sql in [ ]sql read_frame('select from test'conout[ ] atlanta georgia tallahassee florida sacramento california interacting with databases www it-ebooks info |
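The same SQLite workflow as a single runnable sketch: the city and state names follow the example above, the numeric values are made up, and the last line notes that later pandas releases expose the read_frame helper under the name read_sql.

import sqlite3
import pandas as pd

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE test (a VARCHAR(20), b VARCHAR(20), c REAL, d INTEGER)')
con.executemany('INSERT INTO test VALUES (?, ?, ?, ?)',
                [('Atlanta', 'Georgia', 1.25, 6),
                 ('Tallahassee', 'Florida', 2.6, 3),
                 ('Sacramento', 'California', 1.7, 5)])
con.commit()

cursor = con.execute('SELECT * FROM test')
rows = cursor.fetchall()

# cursor.description holds one tuple per column; its first field is the column name
frame = pd.DataFrame(rows, columns=[c[0] for c in cursor.description])

# Equivalent one-liner in later pandas: read_sql plays the role of read_frame
frame2 = pd.read_sql('SELECT * FROM test', con)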
11,937 | nosql databases take many different forms some are simple dict-like key-value stores like berkeleydb or tokyo cabinetwhile others are document-basedwith dict-like object being the basic unit of storage 've chosen mongodb (my example started mongodb instance locally on my machineand connect to it on the default port using pymongothe official driver for mongodbimport pymongo con pymongo connection('localhost'port= documents stored in mongodb are found in collections inside databases each running instance of the mongodb server can have multiple databasesand each database can have multiple collections suppose wanted to store the twitter api data from earlier in the firsti can access the (currently emptytweets collectiontweets con db tweets theni load the list of tweets and write each of them to the collection using tweets save (which writes the python dict to mongodb)import requestsjson url 'data json loads(requests get(urltextfor tweet in data['results']tweets save(tweetnowif wanted to get all of my tweets (if anyfrom the collectioni can query the collection with the following syntaxcursor tweets find({'from_user''wesmckinn'}the cursor returned is an iterator that yields each document as dict as above can convert this into dataframeoptionally extracting subset of the data fields in each tweettweet_fields ['created_at''from_user''id''text'result dataframe(list(cursor)columns=tweet_fields data loadingstorageand file formats www it-ebooks info |
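A condensed sketch of the MongoDB workflow above, assuming a server on localhost. The book uses the older pymongo.Connection class; current pymongo calls it MongoClient, and the document contents here are made up.

import pymongo
import pandas as pd

con = pymongo.MongoClient('localhost', 27017)
tweets = con.db.tweets                      # the "tweets" collection in the "db" database

tweets.insert_one({'created_at': '2012-06-25', 'from_user': 'wesmckinn',
                   'id': 1, 'text': 'lunchtime pandas-fu'})

cursor = tweets.find({'from_user': 'wesmckinn'})
tweet_fields = ['created_at', 'from_user', 'id', 'text']
result = pd.DataFrame(list(cursor), columns=tweet_fields)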
11,938 | data wranglingcleantransformmergereshape much of the programming work in data analysis and modeling is spent on data preparationloadingcleaningtransformingand rearranging sometimes the way that data is stored in files or databases is not the way you need it for data processing application many people choose to do ad hoc processing of data from one form to another using general purpose programminglike pythonperlror javaor unix text processing tools like sed or awk fortunatelypandas along with the python standard library provide you with high-levelflexibleand high-performance set of core manipulations and algorithms to enable you to wrangle data into the right form without much trouble if you identify type of data manipulation that isn' anywhere in this book or elsewhere in the pandas libraryfeel free to suggest it on the mailing list or github site indeedmuch of the design and implementation of pandas has been driven by the needs of real world applications combining and merging data sets data contained in pandas objects can be combined together in number of built-in wayspandas merge connects rows in dataframes based on one or more keys this will be familiar to users of sql or other relational databasesas it implements database join operations pandas concat glues or stacks together objects along an axis combine_first instance method enables splicing together overlapping data to fill in missing values in one object with values from another will address each of these and give number of examples they'll be utilized in examples throughout the rest of the book www it-ebooks info |
11,939 | merge or join operations combine data sets by linking rows using one or more keys these operations are central to relational databases the merge function in pandas is the main entry point for using these algorithms on your data let' start with simple examplein [ ]df dataframe({'key'[' '' '' '' '' '' '' ']'data 'range( )}in [ ]df dataframe({'key'[' '' '' ']'data 'range( )}in [ ]df out[ ]data key in [ ]df out[ ]data key this is an example of many-to-one merge situationthe data in df has multiple rows labeled and bwhereas df has only one row for each value in the key column calling merge with these objects we obtainin [ ]pd merge(df df out[ ]data key data note that didn' specify which column to join on if not specifiedmerge uses the overlapping column names as the keys it' good practice to specify explicitlythoughin [ ]pd merge(df df on='key'out[ ]data key data if the column names are different in each objectyou can specify them separatelyin [ ]df dataframe({'lkey'[' '' '' '' '' '' '' ']'data 'range( )} data wranglingcleantransformmergereshape www it-ebooks info |
11,940 | 'data 'range( )}in [ ]pd merge(df df left_on='lkey'right_on='rkey'out[ ]data lkey data rkey you probably noticed that the 'cand 'dvalues and associated data are missing from the result by default merge does an 'innerjointhe keys in the result are the intersection other possible options are 'left''right'and 'outerthe outer join takes the union of the keyscombining the effect of applying both left and right joinsin [ ]pd merge(df df how='outer'out[ ]data key data nan nan many-to-many merges have well-defined though not necessarily intuitive behavior here' an examplein [ ]df dataframe({'key'[' '' '' '' '' '' ']'data 'range( )}in [ ]df dataframe({'key'[' '' '' '' '' ']'data 'range( )}in [ ]df out[ ]data key in [ ]df out[ ]data key in [ ]pd merge(df df on='key'how='left'out[ ]data key data combining and merging data sets www it-ebooks info |
11,941 | nan many-to-many joins form the cartesian product of the rows since there were 'brows in the left dataframe and in the right onethere are 'brows in the result the join method only affects the distinct key values appearing in the resultin [ ]pd merge(df df how='inner'out[ ]data key data to merge with multiple keyspass list of column namesin [ ]left dataframe({'key '['foo''foo''bar']'key '['one''two''one']'lval'[ ]}in [ ]right dataframe({'key '['foo''foo''bar''bar']'key '['one''one''one''two']'rval'[ ]}in [ ]pd merge(leftrighton=['key ''key ']how='outer'out[ ]key key lval rval bar one bar two nan foo one foo one foo two nan to determine which key combinations will appear in the result depending on the choice of merge methodthink of the multiple keys as forming an array of tuples to be used as single join key (even though it' not actually implemented that waywhen joining columns-on-columnsthe indexes on the passed dataframe objects are discarded data wranglingcleantransformmergereshape www it-ebooks info |
11,942 | names while you can address the overlap manually (see the later section on renaming axis labels)merge has suffixes option for specifying strings to append to overlapping names in the left and right dataframe objectsin [ ]pd merge(leftrighton='key 'out[ ]key key _x lval key _y rval bar one one bar one two foo one one foo one one foo two one foo two one in [ ]pd merge(leftrighton='key 'suffixes=('_left''_right')out[ ]key key _left lval key _right rval bar one one bar one two foo one one foo one one foo two one foo two one see table - for an argument reference on merge joining on index is the subject of the next section table - merge function arguments argument description left dataframe to be merged on the left side right dataframe to be merged on the right side how one of 'inner''outer''leftor 'right'innerby default on column names to join on must be found in both dataframe objects if not specified and no other join keys givenwill use the intersection of the column names in left and right as the join keys left_on columns in left dataframe to use as join keys right_on analogous to left_on for left dataframe left_index use row index in left as its join key (or keysif multiindexright_index analogous to left_index sort sort merged data lexicographically by join keystrue by default disable to get better performance in some cases on large datasets suffixes tuple of string values to append to column names in case of overlapdefaults to ('_x''_y'for exampleif 'datain both dataframe objectswould appear as 'data_xand 'data_yin result copy if falseavoid copying data into resulting data structure in some exceptional cases by default always copies combining and merging data sets www it-ebooks info |
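Returning to the overlapping-column-name discussion above the table, this sketch shows the default '_x'/'_y' suffixes and the suffixes option; the key labels follow the example, while the lval and rval numbers are made up.

import pandas as pd

left = pd.DataFrame({'key1': ['foo', 'foo', 'bar'],
                     'key2': ['one', 'two', 'one'],
                     'lval': [1, 2, 3]})
right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
                      'key2': ['one', 'one', 'one', 'two'],
                      'rval': [4, 5, 6, 7]})

pd.merge(left, right, on='key1')                                # key2_x / key2_y
pd.merge(left, right, on='key1', suffixes=('_left', '_right'))  # custom suffixes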
11,943 | in some casesthe merge key or keys in dataframe will be found in its index in this caseyou can pass left_index=true or right_index=true (or bothto indicate that the index should be used as the merge keyin [ ]left dataframe({'key'[' '' '' '' '' '' ']'value'range( )}in [ ]right dataframe({'group_val'[ ]}index=[' '' ']in [ ]left out[ ]key value in [ ]right out[ ]group_val in [ ]pd merge(left right left_on='key'right_index=trueout[ ]key value group_val since the default merge method is to intersect the join keysyou can instead form the union of them with an outer joinin [ ]pd merge(left right left_on='key'right_index=truehow='outer'out[ ]key value group_val nan with hierarchically-indexed datathings are bit more complicatedin [ ]lefth dataframe({'key '['ohio''ohio''ohio''nevada''nevada']'key '[ ]'data'np arange( )}in [ ]righth dataframe(np arange( reshape(( ))index=[['nevada''nevada''ohio''ohio''ohio''ohio'][ ]]columns=['event ''event ']in [ ]lefth out[ ]in [ ]righth out[ ] data wranglingcleantransformmergereshape www it-ebooks info |
11,944 | data key ohio ohio ohio nevada nevada key nevada ohio event event in this caseyou have to indicate multiple columns to merge on as list (pay attention to the handling of duplicate index values)in [ ]pd merge(lefthrighthleft_on=['key ''key ']right_index=trueout[ ]data key key event event nevada ohio ohio ohio ohio in [ ]pd merge(lefthrighthleft_on=['key ''key ']right_index=truehow='outer'out[ ]data key key event event nan nevada nevada nevada nan nan ohio ohio ohio ohio using the indexes of both sides of the merge is also not an issuein [ ]left dataframe([[ ][ ][ ]]index=[' '' '' ']columns=['ohio''nevada']in [ ]right dataframe([[ ][ ][ ][ ]]index=[' '' '' '' ']columns=['missouri''alabama']in [ ]left out[ ]ohio nevada in [ ]right out[ ]missouri alabama in [ ]pd merge(left right how='outer'left_index=trueright_index=trueout[ ]ohio nevada missouri alabama nan nan nan nan nan nan combining and merging data sets www it-ebooks info |
11,945 | used to combine together many dataframe objects having the same or similar indexes but non-overlapping columns in the prior examplewe could have writtenin [ ]left join(right how='outer'out[ ]ohio nevada missouri alabama nan nan nan nan nan nan in part for legacy reasons (much earlier versions of pandas)dataframe' join method performs left join on the join keys it also supports joining the index of the passed dataframe on one of the columns of the calling dataframein [ ]left join(right on='key'out[ ]key value group_val nan lastlyfor simple index-on-index mergesyou can pass list of dataframes to join as an alternative to using the more general concat function described belowin [ ]another dataframe([[ ][ ][ ][ ]]index=[' '' '' '' ']columns=['new york''oregon']in [ ]left join([right another]out[ ]ohio nevada missouri alabama new york nan nan oregon in [ ]left join([right another]how='outer'out[ ]ohio nevada missouri alabama new york oregon nan nan nan nan nan nan nan nan nan nan nan nan nan nan data wranglingcleantransformmergereshape www it-ebooks info |
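A self-contained sketch of the join patterns in this section: index-on-index joins, joining on a column of the calling frame, and passing a list of frames. The labels mirror the example above; the numeric values are made up.

import pandas as pd

left2 = pd.DataFrame({'Ohio': [1., 3., 5.], 'Nevada': [2., 4., 6.]},
                     index=['a', 'c', 'e'])
right2 = pd.DataFrame({'Missouri': [7., 9.], 'Alabama': [8., 10.]},
                      index=['b', 'c'])
another = pd.DataFrame({'New York': [7., 8.], 'Oregon': [9., 10.]},
                       index=['a', 'c'])

left2.join(right2, how='outer')            # index-on-index join (left join by default)

left1 = pd.DataFrame({'key': ['a', 'b', 'a', 'c'], 'value': range(4)})
right1 = pd.DataFrame({'group_val': [3.5, 7.]}, index=['a', 'b'])
left1.join(right1, on='key')               # join right1's index against a column of left1

left2.join([right2, another], how='outer') # several index-aligned frames in one call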
11,946 | another kind of data combination operation is alternatively referred to as concatenationbindingor stacking numpy has concatenate function for doing this with raw numpy arraysin [ ]arr np arange( reshape(( )in [ ]arr out[ ]array([ ] ] ]]in [ ]np concatenate([arrarr]axis= out[ ]array([ ] ] ]]in the context of pandas objects such as series and dataframehaving labeled axes enable you to further generalize array concatenation in particularyou have number of additional things to think aboutif the objects are indexed differently on the other axesshould the collection of axes be unioned or intersecteddo the groups need to be identifiable in the resulting objectdoes the concatenation axis matter at allthe concat function in pandas provides consistent way to address each of these concerns 'll give number of examples to illustrate how it works suppose we have three series with no index overlapin [ ] series([ ]index=[' '' ']in [ ] series([ ]index=[' '' '' ']in [ ] series([ ]index=[' '' ']calling concat with these object in list glues together the values and indexesin [ ]pd concat([ ]out[ ] combining and merging data sets www it-ebooks info |
11,947 | result will instead be dataframe (axis= is the columns)in [ ]pd concat([ ]axis= out[ ] nan nan nan nan nan nan nan nan nan nan nan nan nan nan in this case there is no overlap on the other axiswhich as you can see is the sorted union (the 'outerjoinof the indexes you can instead intersect them by passing join='inner'in [ ] pd concat([ ]in [ ]pd concat([ ]axis= out[ ] nan nan in [ ]pd concat([ ]axis= join='inner'out[ ] you can even specify the axes to be used on the other axes with join_axesin [ ]pd concat([ ]axis= join_axes=[[' '' '' '' ']]out[ ] nan nan nan nan one issue is that the concatenated pieces are not identifiable in the result suppose instead you wanted to create hierarchical index on the concatenation axis to do thisuse the keys argumentin [ ]result pd concat([ ]keys=['one''two''three']in [ ]result out[ ]one two three much more on the unstack function later in [ ]result unstack(out[ ] data wranglingcleantransformmergereshape www it-ebooks info |
11,948 | one nan nan two nan nan three nan nan in the case of combining series along axis= the keys become the dataframe column headersin [ ]pd concat([ ]axis= keys=['one''two''three']out[ ]one two three nan nan nan nan nan nan nan nan nan nan nan nan nan nan the same logic extends to dataframe objectsin [ ]df dataframe(np arange( reshape( )index=[' '' '' ']columns=['one''two']in [ ]df dataframe( np arange( reshape( )index=[' '' ']columns=['three''four']in [ ]pd concat([df df ]axis= keys=['level ''level ']out[ ]level level one two three four nan nan if you pass dict of objects instead of listthe dict' keys will be used for the keys optionin [ ]pd concat({'level 'df 'level 'df }axis= out[ ]level level one two three four nan nan there are couple of additional arguments governing how the hierarchical index is created (see table - )in [ ]pd concat([df df ]axis= keys=['level ''level ']names=['upper''lower']out[ ]upper level level lower one two three four nan nan combining and merging data sets www it-ebooks info |
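A consolidated, runnable sketch of the keys behavior discussed above; the Series and frames are the same small objects used in the session, re-created here, and the exact output formatting may vary slightly with your pandas version:

import numpy as np
import pandas as pd

s1 = pd.Series([0, 1], index=['a', 'b'])
s2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])
s3 = pd.Series([5, 6], index=['f', 'g'])

# Along the default axis the keys become the outer level of a hierarchical index
result = pd.concat([s1, s2, s3], keys=['one', 'two', 'three'])
result.unstack()

# Along axis=1 the same keys become the column headers
pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])

# With DataFrames, a dict works too: its keys play the role of the keys option,
# and names labels the levels of the resulting hierarchical column index
df1 = pd.DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
                   columns=['one', 'two'])
df2 = pd.DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
                   columns=['three', 'four'])
pd.concat({'level1': df1, 'level2': df2}, axis=1, names=['upper', 'lower'])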
11,949 | the context of the analysisin [ ]df dataframe(np random randn( )columns=[' '' '' '' ']in [ ]df dataframe(np random randn( )columns=[' '' '' ']in [ ]df out[ ] - - - - in [ ]df out[ ] - - in this caseyou can pass ignore_index=truein [ ]pd concat([df df ]ignore_index=trueout[ ] - - - - nan - nan - table - concat function arguments argument description objs list or dict of pandas objects to be concatenated the only required argument axis axis to concatenate alongdefaults to join one of 'inner''outer'defaulting to 'outer'whether to intersection (inneror union (outertogether indexes along the other axes join_axes specific indexes to use for the other - axes instead of performing union/intersection logic keys values to associate with objects being concatenatedforming hierarchical index along the concatenation axis can either be list or array of arbitrary valuesan array of tuplesor list of arrays (if multiple level arrays passed in levelslevels specific indexes to use as hierarchical index level or levels if keys passed names names for created hierarchical levels if keys and or levels passed verify_integrity check new axis in concatenated object for duplicates and raise exception if so by default (falseallows duplicates ignore_index do not preserve indexes along concatenation axisinstead producing new range(total_lengthindex combining data with overlap another data combination situation can' be expressed as either merge or concatenation operation you may have two datasets whose indexes overlap in full or part as motivating exampleconsider numpy' where functionwhich expressed vectorized if-else data wranglingcleantransformmergereshape www it-ebooks info |
11,950 | index=[' '' '' '' '' '' ']in [ ] series(np arange(len( )dtype=np float )index=[' '' '' '' '' '' ']in [ ] [- np nan in [ ] out[ ] nan nan nan in [ ] out[ ] nan in [ ]np where(pd isnull( )baout[ ] nan series has combine_first methodwhich performs the equivalent of this operation plus data alignmentin [ ] [:- combine_first( [ :]out[ ] nan with dataframescombine_first naturally does the same thing column by columnso you can think of it as "patchingmissing data in the calling object with data from the object you passin [ ]df dataframe({' '[ np nan np nan]' '[np nan np nan ]' 'range( )}in [ ]df dataframe({' '[ np nan ]' '[np nan ]}in [ ]df combine_first(df out[ ] nan nan reshaping and pivoting there are number of fundamental operations for rearranging tabular data these are alternatingly referred to as reshape or pivot operations reshaping and pivoting www it-ebooks info |
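Before moving on to reshaping, here is a consolidated, runnable sketch of the overlap-combination pattern from the section just above. The Series and frames are small hypothetical stand-ins:

import numpy as np
import pandas as pd

a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
              index=['f', 'e', 'd', 'c', 'b', 'a'])
b = pd.Series(np.arange(len(a), dtype=np.float64),
              index=['f', 'e', 'd', 'c', 'b', 'a'])
b.iloc[-1] = np.nan

# np.where is a vectorized if-else: take b where a is null, otherwise a
np.where(pd.isnull(a), b, a)

# combine_first does the same, but aligns on the index and works on DataFrames too
b.combine_first(a)

df1 = pd.DataFrame({'a': [1., np.nan, 5.], 'b': [np.nan, 2., np.nan]})
df2 = pd.DataFrame({'a': [5., 4., np.nan], 'b': [np.nan, 3., 4.]})
df1.combine_first(df2)   # patch df1's holes with values from df2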
11,951 | hierarchical indexing provides consistent way to rearrange data in dataframe there are two primary actionsstackthis "rotatesor pivots from the columns in the data to the rows unstackthis pivots from the rows into the columns 'll illustrate these operations through series of examples consider small dataframe with string arrays as row and column indexesin [ ]data dataframe(np arange( reshape(( ))index=pd index(['ohio''colorado']name='state')columns=pd index(['one''two''three']name='number')in [ ]data out[ ]number one state ohio colorado two three using the stack method on this data pivots the columns into the rowsproducing seriesin [ ]result data stack(in [ ]result out[ ]state number ohio one two three colorado one two three from hierarchically-indexed seriesyou can rearrange the data back into dataframe with unstackin [ ]result unstack(out[ ]number one two three state ohio colorado by default the innermost level is unstacked (same with stackyou can unstack different level by passing level number or namein [ ]result unstack( out[ ]state ohio colorado number one in [ ]result unstack('state'out[ ]state ohio colorado number one data wranglingcleantransformmergereshape www it-ebooks info |
11,952 | three two three unstacking might introduce missing data if all of the values in the level aren' found in each of the subgroupsin [ ] series([ ]index=[' '' '' '' ']in [ ] series([ ]index=[' '' '' ']in [ ]data pd concat([ ]keys=['one''two']in [ ]data unstack(out[ ] one nan two nan nan stacking filters out missing data by defaultso the operation is easily invertiblein [ ]data unstack(stack(out[ ]one two in [ ]data unstack(stack(dropna=falseout[ ]one nan two nan nan when unstacking in dataframethe level unstacked becomes the lowest level in the resultin [ ]df dataframe({'left'result'right'result }columns=pd index(['left''right']name='side')in [ ]df out[ ]side state number ohio one two three colorado one two three left right in [ ]df unstack('state'out[ ]side left right state ohio colorado ohio number one two colorado in [ ]df unstack('state'stack('side'out[ ]state ohio colorado number side one left right two left reshaping and pivoting www it-ebooks info |
11,953 | three right left right pivoting "longto "wideformat common way to store multiple time series in databases and csv is in so-called long or stacked formatin [ ]ldata[: out[ ]date : : : : : : : : : : : : : : : : : : : : item realgdp infl unemp realgdp infl unemp realgdp infl unemp realgdp value data is frequently stored this way in relational databases like mysql as fixed schema (column names and data typesallows the number of distinct values in the item column to increase or decrease as data is added or deleted in the table in the above example date and item would usually be the primary keys (in relational database parlance)offering both relational integrity and easier joins and programmatic queries in many cases the downsideof courseis that the data may not be easy to work with in long formatyou might prefer to have dataframe containing one column per distinct item value indexed by timestamps in the date column dataframe' pivot method performs exactly this transformationin [ ]pivoted ldata pivot('date''item''value'in [ ]pivoted head(out[ ]item infl realgdp unemp date the first two values passed are the columns to be used as the row and column indexand finally an optional value column to fill the dataframe suppose you had two value columns that you wanted to reshape simultaneouslyin [ ]ldata['value 'np random randn(len(ldata)in [ ]ldata[: out[ ] data wranglingcleantransformmergereshape www it-ebooks info |
11,954 | : : : : : : : : : : : : : : : : : : : : item realgdp infl unemp realgdp infl unemp realgdp infl unemp realgdp value value - - - - by omitting the last argumentyou obtain dataframe with hierarchical columnsin [ ]pivoted ldata pivot('date''item'in [ ]pivoted[: out[ ]value value item infl realgdp unemp infl realgdp unemp date - - - - - - - in [ ]pivoted['value'][: out[ ]item infl realgdp unemp date note that pivot is just shortcut for creating hierarchical index using set_index and reshaping with unstackin [ ]unstacked ldata set_index(['date''item']unstack('item'in [ ]unstacked[: out[ ]value item infl realgdp date unemp value infl realgdp unemp - - - - - - - - - - - reshaping and pivoting www it-ebooks info |
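A self-contained sketch of the long-to-wide pivot; the tiny long-format table here is a hypothetical stand-in for the macroeconomic ldata used above, and the keyword form of pivot is the one accepted by current pandas releases:

import pandas as pd

ldata = pd.DataFrame({
    'date': pd.to_datetime(['1959-03-31'] * 3 + ['1959-06-30'] * 3),
    'item': ['realgdp', 'infl', 'unemp'] * 2,
    'value': [2710.349, 0.0, 5.8, 2778.801, 2.34, 5.1],
})

# One column per distinct 'item', indexed by 'date'
pivoted = ldata.pivot(index='date', columns='item', values='value')

# pivot is shorthand for set_index followed by unstack
same = ldata.set_index(['date', 'item'])['value'].unstack('item')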
11,955 | so far in this we've been concerned with rearranging data filteringcleaningand other tranformations are another class of important operations removing duplicates duplicate rows may be found in dataframe for any number of reasons here is an examplein [ ]data dataframe({' '['one' ['two' ' '[ ]}in [ ]data out[ ] one one one two two two two the dataframe method duplicated returns boolean series indicating whether each row is duplicate or notin [ ]data duplicated(out[ ] false true false false true false true relatedlydrop_duplicates returns dataframe where the duplicated array is truein [ ]data drop_duplicates(out[ ] one one two two both of these methods by default consider all of the columnsalternatively you can specify any subset of them to detect duplicates suppose we had an additional column of values and wanted to filter duplicates only based on the ' columnin [ ]data[' 'range( in [ ]data drop_duplicates([' '] data wranglingcleantransformmergereshape www it-ebooks info |
11,956 | one two duplicated and drop_duplicates by default keep the first observed value combination passing take_last=true will return the last onein [ ]data drop_duplicates([' '' ']take_last=trueout[ ] one one two two transforming data using function or mapping for many data setsyou may wish to perform some transformation based on the values in an arrayseriesor column in dataframe consider the following hypothetical data collected about some kinds of meatin [ ]data dataframe({'food'['bacon''pulled pork''bacon''pastrami''corned beef''bacon''pastrami''honey ham''nova lox']'ounces'[ ]}in [ ]data out[ ]food bacon pulled pork bacon pastrami corned beef bacon pastrami honey ham nova lox ounces suppose you wanted to add column indicating the type of animal that each food came from let' write down mapping of each distinct meat type to the kind of animalmeat_to_animal 'bacon''pig''pulled pork''pig''pastrami''cow''corned beef''cow''honey ham''pig''nova lox''salmondata transformation www it-ebooks info |
11,957 | but here we have small problem in that some of the meats above are capitalized and others are not thuswe also need to convert each value to lower casein [ ]data['animal'data['food'map(str lowermap(meat_to_animalin [ ]data out[ ]food bacon pulled pork bacon pastrami corned beef bacon pastrami honey ham nova lox ounces animal pig pig pig cow cow pig cow pig salmon we could also have passed function that does all the workin [ ]data['food'map(lambda xmeat_to_animal[ lower()]out[ ] pig pig pig cow cow pig cow pig salmon namefood using map is convenient way to perform element-wise transformations and other data cleaning-related operations replacing values filling in missing data with the fillna method can be thought of as special case of more general value replacement while mapas you've seen abovecan be used to modify subset of values in an objectreplace provides simpler and more flexible way to do so let' consider this seriesin [ ]data series([ - - - ]in [ ]data out[ ] - - - data wranglingcleantransformmergereshape www it-ebooks info |
11,958 | values that pandas understandswe can use replaceproducing new seriesin [ ]data replace(- np nanout[ ] nan nan - if you want to replace multiple values at onceyou instead pass list then the substitute valuein [ ]data replace([- - ]np nanout[ ] nan nan nan to use different replacement for each valuepass list of substitutesin [ ]data replace([- - ][np nan ]out[ ] nan nan the argument passed can also be dictin [ ]data replace({- np nan- }out[ ] nan nan renaming axis indexes like values in seriesaxis labels can be similarly transformed by function or mapping of some form to produce newdifferently labeled objects the axes can also be modified in place without creating new data structure here' simple examplein [ ]data dataframe(np arange( reshape(( ))index=['ohio''colorado''new york']columns=['one''two''three''four']data transformation www it-ebooks info |
11,959 | in [ ]data index map(str upperout[ ]array([ohiocoloradonew york]dtype=objectyou can assign to indexmodifying the dataframe in placein [ ]data index data index map(str upperin [ ]data out[ ]one two ohio colorado new york three four if you want to create transformed version of data set without modifying the originala useful method is renamein [ ]data rename(index=str titlecolumns=str upperout[ ]one two three four ohio colorado new york notablyrename can be used in conjunction with dict-like object providing new values for subset of the axis labelsin [ ]data rename(index={'ohio''indiana'}columns={'three''peekaboo'}out[ ]one two peekaboo four indiana colorado new york rename saves having to copy the dataframe manually and assign to its index and col umns attributes should you wish to modify data set in placepass inplace=truealways returns reference to dataframe in [ ] data rename(index={'ohio''indiana'}inplace=truein [ ]data out[ ]one two indiana colorado new york three four data wranglingcleantransformmergereshape www it-ebooks info |
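The three transformation tools covered above, map for values, replace for sentinel substitution, and rename for axis labels, restated as one compact, runnable sketch; the objects are hypothetical stand-ins:

import numpy as np
import pandas as pd

# map: element-wise transformation through a function and/or dict
food = pd.Series(['bacon', 'Pulled pork', 'BACON', 'nova lox'])
meat_to_animal = {'bacon': 'pig', 'pulled pork': 'pig', 'nova lox': 'salmon'}
food.map(str.lower).map(meat_to_animal)

# replace: substitute sentinel values, given as a list or a dict
data = pd.Series([1., -999., 2., -999., -1000., 3.])
data.replace({-999: np.nan, -1000: 0})

# rename: transform axis labels with a function, or relabel a subset with a dict
frame = pd.DataFrame(np.arange(12).reshape(3, 4),
                     index=['ohio', 'colorado', 'new york'],
                     columns=['one', 'two', 'three', 'four'])
frame.rename(index=str.title, columns=str.upper)
frame.rename(index={'ohio': 'INDIANA'}, columns={'three': 'peekaboo'})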
11,960 | continuous data is often discretized or otherwised separated into "binsfor analysis suppose you have data about group of people in studyand you want to group them into discrete age bucketsin [ ]ages [ let' divide these into bins of to to to and finally and older to do soyou have to use cuta function in pandasin [ ]bins [ in [ ]cats pd cut(agesbinsin [ ]cats out[ ]categoricalarray([( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]]dtype=objectlevels ( )index([( ]( ]( ]( ]]dtype=objectthe object pandas returns is special categorical object you can treat it like an array of strings indicating the bin nameinternally it contains levels array indicating the distinct category names along with labeling for the ages data in the labels attributein [ ]cats labels out[ ]array([ ]in [ ]cats levels out[ ]index([( ]( ]( ]( ]]dtype=objectin [ ]pd value_counts(catsout[ ]( ( ( ( consistent with mathematical notation for intervalsa parenthesis means that the side is open while the square bracket means it is closed (inclusivewhich side is closed can be changed by passing right=falsein [ ]pd cut(ages[ ]right=falseout[ ]categoricalarray([[ )[ )[ )[ )[ )[ )[ )[ )[ )[ )[ )[ )]dtype=objectlevels ( )index([[ )[ )[ )[ )]dtype=objectyou can also pass your own bin names by passing list or array to the labels optionin [ ]group_names ['youth''youngadult''middleaged''senior'in [ ]pd cut(agesbinslabels=group_namesout[ ]data transformation www it-ebooks info |
11,961 | array([youthyouthyouthyoungadultyouthyouthmiddleagedyoungadultseniormiddleagedmiddleagedyoungadult]dtype=objectlevels ( )index([youthyoungadultmiddleagedsenior]dtype=objectif you pass cut integer number of bins instead of explicit bin edgesit will compute equal-length bins based on the minimum and maximum values in the data consider the case of some uniformly distributed data chopped into fourthsin [ ]data np random rand( in [ ]pd cut(data precision= out[ ]categoricalarray([( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]( ]]dtype=objectlevels ( )index([( ]( ]( ]( ]]dtype=objecta closely related functionqcutbins the data based on sample quantiles depending on the distribution of the datausing cut will not usually result in each bin having the same number of data points since qcut uses sample quantiles insteadby definition you will obtain roughly equal-size binsin [ ]data np random randn( normally distributed in [ ]cats pd qcut(data cut into quartiles in [ ]cats out[ ]categoricalarray([(- ][- - ]( ](- - ]( ](- - ]]dtype=objectlevels ( )index([[- - ](- - ](- ]( ]]dtype=objectin [ ]pd value_counts(catsout[ ][- - ( (- - (- similar to cut you can pass your own quantiles (numbers between and inclusive)in [ ]pd qcut(data[ ]out[ ]categoricalarray([(- ](- - ](- ](- - ](- ](- - ]]dtype=objectlevels ( )index([[- - ](- - ](- ]( ]]dtype=object data wranglingcleantransformmergereshape www it-ebooks info |
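A runnable sketch of cut and qcut. Note that current pandas exposes the per-value bin assignments as .codes and the distinct bins as .categories; the .labels and .levels attributes shown in the session above belong to older releases:

import numpy as np
import pandas as pd

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]

cats = pd.cut(ages, bins)                  # right-closed intervals by default
cats.codes                                 # integer bin assignment per value
cats.categories                            # the distinct intervals
pd.Series(cats).value_counts()             # bin frequencies

pd.cut(ages, bins, right=False)            # left-closed intervals instead
pd.cut(ages, bins, labels=['Youth', 'YoungAdult', 'MiddleAged', 'Senior'])

# qcut bins on sample quantiles, so each bin gets roughly the same count
data = np.random.randn(1000)
pd.Series(pd.qcut(data, 4)).value_counts()
pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])      # custom quantile edges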
11,962 | as these discretization functions are especially useful for quantile and group analysis detecting and filtering outliers filtering or transforming outliers is largely matter of applying array operations consider dataframe with some normally distributed datain [ ]np random seed( in [ ]data dataframe(np random randn( )in [ ]data describe(out[ ] count mean - std min - - - - - max - - - - - - suppose you wanted to find values in one of the columns exceeding three in magnitudein [ ]col data[ in [ ]col[np abs(col out[ ] - - name to select all rows having value exceeding or - you can use the any method on boolean dataframein [ ]data[(np abs(data any( )out[ ] - - - - - - - - - - - - - - - - - - - - - - values can just as easily be set based on these criteria here is code to cap values outside the interval - to data transformation www it-ebooks info |
11,963 | in [ ]data describe(out[ ] count mean - std min - - - - - max - - - - - - the ufunc np sign returns an array of and - depending on the sign of the values permutation and random sampling permuting (randomly reorderinga series or the rows in dataframe is easy to do using the numpy random permutation function calling permutation with the length of the axis you want to permute produces an array of integers indicating the new orderingin [ ]df dataframe(np arange( reshape( )in [ ]sampler np random permutation( in [ ]sampler out[ ]array([ ]that array can then be used in ix-based indexing or the take functionin [ ]df out[ ] in [ ]df take(samplerout[ ] to select random subset without replacementone way is to slice off the first elements of the array returned by permutationwhere is the desired subset size there are much more efficient sampling-without-replacement algorithmsbut this is an easy strategy that uses readily available toolsin [ ]df take(np random permutation(len(df))[: ]out[ ] to generate sample with replacementthe fastest way is to use np random randint to draw random integers data wranglingcleantransformmergereshape www it-ebooks info |
11,964 | in [ ]sampler np random randint( len(bag)size= in [ ]sampler out[ ]array([ ]in [ ]draws bag take(samplerin [ ]draws out[ ]array( - - - ]computing indicator/dummy variables another type of transformation for statistical modeling or machine learning applications is converting categorical variable into "dummyor "indicatormatrix if column in dataframe has distinct valuesyou would derive matrix or dataframe containing columns containing all ' and ' pandas has get_dummies function for doing thisthough devising one yourself is not difficult let' return to an earlier example dataframein [ ]df dataframe({'key'[' '' '' '' '' '' ']'data 'range( )}in [ ]pd get_dummies(df['key']out[ ] in some casesyou may want to add prefix to the columns in the indicator dataframewhich can then be merged with the other data get_dummies has prefix argument for doing just thisin [ ]dummies pd get_dummies(df['key']prefix='key'in [ ]df_with_dummy df[['data ']join(dummiesin [ ]df_with_dummy out[ ]data key_a key_b key_c data transformation www it-ebooks info |
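A minimal, self-contained version of the dummy-variable recipe above, using the same hypothetical frame:

import pandas as pd

df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                   'data1': range(6)})

# One indicator column per distinct value of 'key'
pd.get_dummies(df['key'])

# Prefix the indicator columns, then join them back onto the other data
dummies = pd.get_dummies(df['key'], prefix='key')
df_with_dummy = df[['data1']].join(dummies)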
11,965 | in [ ]mnames ['movie_id''title''genres'in [ ]movies pd read_table('ch /movies dat'sep='::'header=nonenames=mnamesin [ ]movies[: out[ ]movie_id title genres toy story ( animation|children' |comedy jumanji ( adventure|children' |fantasy grumpier old men ( comedy|romance waiting to exhale ( comedy|drama father of the bride part ii ( comedy heat ( action|crime|thriller sabrina ( comedy|romance tom and huck ( adventure|children' sudden death ( action goldeneye ( action|adventure|thriller adding indicator variables for each genre requires little bit of wrangling firstwe extract the list of unique genres in the dataset (using nice set union trick)in [ ]genre_iter (set( split('|')for in movies genresin [ ]genres sorted(set union(*genre_iter)nowone way to construct the indicator dataframe is to start with dataframe of all zerosin [ ]dummies dataframe(np zeros((len(movies)len(genres)))columns=genresnowiterate through each movie and set entries in each row of dummies to in [ ]for igen in enumerate(movies genres)dummies ix[igen split('|') thenas aboveyou can combine this with moviesin [ ]movies_windic movies join(dummies add_prefix('genre_')in [ ]movies_windic ix[ out[ ]movie_id title toy story ( genres animation|children' |comedy genre_action genre_adventure genre_animation genre_children' genre_comedy genre_crime genre_documentary genre_drama genre_fantasy data wranglingcleantransformmergereshape www it-ebooks info |
11,966 | genre_horror genre_musical genre_mystery genre_romance genre_sci-fi genre_thriller genre_war genre_western name for much larger datathis method of constructing indicator variables with multiple membership is not especially speedy lower-level function leveraging the internals of the dataframe could certainly be written useful recipe for statistical applications is to combine get_dummies with discretization function like cutin [ ]values np random rand( in [ ]values out[ ]array( ] in [ ]bins [ in [ ]pd get_dummies(pd cut(valuesbins)out[ ]( ( ( ( ( string manipulation python has long been popular data munging language in part due to its ease-of-use for string and text processing most text operations are made simple with the string object' built-in methods for more complex pattern matching and text manipulationsregular expressions may be needed pandas adds to the mix by enabling you to apply string and regular expressions concisely on whole arrays of dataadditionally handling the annoyance of missing data string manipulation www it-ebooks info |
11,967 | in many string munging and scripting applicationsbuilt-in string methods are sufficient as an examplea comma-separated string can be broken into pieces with splitin [ ]val ' ,bguidoin [ ]val split(','out[ ][' '' 'guido'split is often combined with strip to trim whitespace (including newlines)in [ ]pieces [ strip(for in val split(',')in [ ]pieces out[ ][' '' ''guido'these substrings could be concatenated together with two-colon delimiter using additionin [ ]firstsecondthird pieces in [ ]first '::second '::third out[ ]' :: ::guidobutthis isn' practical generic method faster and more pythonic way is to pass list or tuple to the join method on the string '::'in [ ]'::join(piecesout[ ]' :: ::guidoother methods are concerned with locating substrings using python' in keyword is the best way to detect substringthough index and find can also be usedin [ ]'guidoin val out[ ]true in [ ]val index(','out[ ] in [ ]val find(':'out[ ]- note the difference between find and index is that index raises an exception if the string isn' found (versus returning - )in [ ]val index(':'valueerror traceback (most recent call lastin (---- val index(':'valueerrorsubstring not found relatedlycount returns the number of occurrences of particular substringin [ ]val count(','out[ ] replace will substitute occurrences of one pattern for another this is commonly used to delete patternstooby passing an empty string data wranglingcleantransformmergereshape www it-ebooks info |
11,968 | out[ ]' :: :guidoin [ ]val replace(','''out[ ]'ab guidoregular expressions can also be used with many of these operations as you'll see below table - python built-in string methods argument description count return the number of non-overlapping occurrences of substring in the string endswithstartswith returns true if string ends with suffix (starts with prefixjoin use string as delimiter for concatenating sequence of other strings index return position of first character in substring if found in the string raises valueer ror if not found find return position of first character of first occurrence of substring in the string like indexbut returns - if not found rfind return position of first character of last occurrence of substring in the string returns - if not found replace replace occurrences of string with another string striprstriplstrip trim whitespaceincluding newlinesequivalent to strip((and rstriplstriprespectivelyfor each element split break string into list of substrings using passed delimiter lowerupper convert alphabet characters to lowercase or uppercaserespectively ljustrjust left justify or right justifyrespectively pad opposite side of string with spaces (or some other fill characterto return string with minimum width regular expressions regular expressions provide flexible way to search or match string patterns in text single expressioncommonly called regexis string formed according to the regular expression language python' built-in re module is responsible for applying regular expressions to stringsi'll give number of examples of its use here the art of writing regular expressions could be of its own and thus is outside the book' scope there are many excellent tutorials and references on the internetsuch as zed shaw' learn regex the hard way (the re module functions fall into three categoriespattern matchingsubstitutionand splitting naturally these are all relateda regex describes pattern to locate in the textwhich can then be used for many purposes let' look at simple examplesuppose wanted to split string with variable number of whitespace characters (tabsspacesand newlinesthe regex describing one or more whitespace characters is \ +string manipulation www it-ebooks info |
11,969 | in [ ]text "foo bar\ baz \tquxin [ ]re split('\ +'textout[ ]['foo''bar''baz''qux'when you call re split('\ +'text)the regular expression is first compiledthen its split method is called on the passed text you can compile the regex yourself with re compileforming reusable regex objectin [ ]regex re compile('\ +'in [ ]regex split(textout[ ]['foo''bar''baz''qux'ifinsteadyou wanted to get list of all patterns matching the regexyou can use the findall methodin [ ]regex findall(textout[ ][''\ '\ 'to avoid unwanted escaping with in regular expressionuse raw string literals like ' :\xinstead of the equivalent ' :\\xcreating regex object with re compile is highly recommended if you intend to apply the same expression to many stringsdoing so will save cpu cycles match and search are closely related to findall while findall returns all matches in stringsearch returns only the first match more rigidlymatch only matches at the beginning of the string as less trivial examplelet' consider block of text and regular expression capable of identifying most email addressestext """dave dave@google com steve steve@gmail com rob rob@gmail com ryan ryan@yahoo com ""pattern '[ - - %+-]+@[ - - -]+[ - ]{ , }re ignorecase makes the regex case-insensitive regex re compile(patternflags=re ignorecaseusing findall on the text produces list of the -mail addressesin [ ]regex findall(textout[ ]['dave@google com''steve@gmail com''rob@gmail com''ryan@yahoo com'search returns special match object for the first email address in the text for the above regexthe match object can only tell us the start and end position of the pattern in the string data wranglingcleantransformmergereshape www it-ebooks info |
11,970 | in [ ] out[ ]in [ ]text[ start(): end()out[ ]'dave@google comregex match returns noneas it only will match if the pattern occurs at the start of the stringin [ ]print regex match(textnone relatedlysub will return new string with occurrences of the pattern replaced by the new stringin [ ]print regex sub('redacted'textdave redacted steve redacted rob redacted ryan redacted suppose you wanted to find email addresses and simultaneously segment each address into its componentsusernamedomain nameand domain suffix to do thisput parentheses around the parts of the pattern to segmentin [ ]pattern '([ - - %+-]+)@([ - - -]+)([ - ]{ , })in [ ]regex re compile(patternflags=re ignorecasea match object produced by this modified regex returns tuple of the pattern components with its groups methodin [ ] regex match('wesm@bright net'in [ ] groups(out[ ]('wesm''bright''net'findall returns list of tuples when the pattern has groupsin [ ]regex findall(textout[ ][('dave''google''com')('steve''gmail''com')('rob''gmail''com')('ryan''yahoo''com')sub also has access to groups in each match using special symbols like \ \ etc in [ ]print regex sub( 'username\ domain\ suffix\ 'textdave usernamedavedomaingooglesuffixcom steve usernamestevedomaingmailsuffixcom rob usernamerobdomaingmailsuffixcom ryan usernameryandomainyahoosuffixcom string manipulation www it-ebooks info |
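The regular-expression session above, cleaned up into one runnable sketch; the sample text and the email pattern are the same ones used in the text:

import re

text = """Dave dave@google.com
Steve steve@gmail.com
Rob rob@gmail.com
Ryan ryan@yahoo.com"""

# Compile once if the pattern will be applied to many strings
pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
regex = re.compile(pattern, flags=re.IGNORECASE)

regex.findall(text)        # list of (username, domain, suffix) tuples
m = regex.search(text)     # match object for the first address only
m.groups()                 # ('dave', 'google', 'com')
regex.match(text)          # None: match only succeeds at the start of the string

# sub can refer to the groups of each match with \1, \2, ...
print(regex.sub(r'Username: \1, Domain: \2, Suffix: \3', text))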
11,971 | book' scope to give you flavorone variation on the above email regex gives names to the match groupsregex re compile( ""(? [ - - %+-]+(? [ - - -]+(? [ - ]{ , })"""flags=re ignorecase|re verbosethe match object produced by such regex can produce handy dict with the specified group namesin [ ] regex match('wesm@bright net'in [ ] groupdict(out[ ]{'domain''bright''suffix''net''username''wesm'table - regular expression methods argument description findallfinditer return all non-overlapping matching patterns in string findall returns list of all patterns while finditer returns them one by one from an iterator match match pattern at start of string and optionally segment pattern components into groups if the pattern matchesreturns match objectotherwise none search scan string for match to patternreturning match object if so unlike matchthe match can be anywhere in the string as opposed to only at the beginning split break string into pieces at each occurrence of pattern subsubn replace all (subor first occurrences (subnof pattern in string with replacement expression use symbols \ \ to refer to match group elements in the replacement string vectorized string functions in pandas cleaning up messy data set for analysis often requires lot of string munging and regularization to complicate mattersa column containing strings will sometimes have missing datain [ ]data {'dave''dave@google com''steve''steve@gmail com''rob''rob@gmail com''wes'np nanin [ ]data series(datain [ ]data out[ ]dave dave@google com rob rob@gmail com steve steve@gmail com wes nan in [ ]data isnull(out[ ]dave false rob false steve false wes true data wranglingcleantransformmergereshape www it-ebooks info |
11,972 | has concise methods for string operations that skip na values these are accessed through series' str attributefor examplewe could check whether each email address has 'gmailin it with str containsin [ ]data str contains('gmail'out[ ]dave false rob true steve true wes nan regular expressions can be usedtooalong with any re options like ignorecasein [ ]pattern out[ ]'([ - - %+-]+)@([ - - -]+)\([ - ]{ , })in [ ]data str findall(patternflags=re ignorecaseout[ ]dave [('dave''google''com')rob [('rob''gmail''com')steve [('steve''gmail''com')wes nan there are couple of ways to do vectorized element retrieval either use str get or index into the str attributein [ ]matches data str match(patternflags=re ignorecasein [ ]matches out[ ]dave ('dave''google''com'rob ('rob''gmail''com'steve ('steve''gmail''com'wes nan in [ ]matches str get( out[ ]dave google rob gmail steve gmail wes nan in [ ]matches str[ out[ ]dave dave rob rob steve steve wes nan you can similarly slice strings using this syntaxin [ ]data str[: out[ ]dave daverob rob@ steve steve wes nan string manipulation www it-ebooks info |
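A runnable sketch of the vectorized string methods shown above. One caveat: in current pandas, Series.str.match returns booleans rather than the matched groups shown in the session, so this sketch uses str.findall together with str.get to pull the groups out instead:

import re
import numpy as np
import pandas as pd

data = pd.Series({'Dave': 'dave@google.com', 'Steve': 'steve@gmail.com',
                  'Rob': 'rob@gmail.com', 'Wes': np.nan})

data.str.contains('gmail')      # NA entries propagate instead of raising

pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
matches = data.str.findall(pattern, flags=re.IGNORECASE).str.get(0)
matches.str.get(1)              # second group of each match (the domain)

data.str[:5]                    # vectorized slicing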
11,973 | method description cat concatenate strings element-wise with optional delimiter contains return boolean array if each string contains pattern/regex count count occurrences of pattern endswithstartswith equivalent to endswith(patternor startswith(patternfor each element findall compute list of all occurrences of pattern/regex for each string get index into each element (retrieve -th elementjoin join strings in each element of the series with passed separator len compute length of each string lowerupper convert casesequivalent to lower(or upper(for each element match use re match with the passed regular expression on each elementreturning matched groups as list pad add whitespace to leftrightor both sides of strings center equivalent to pad(side='both'repeat duplicate valuesfor example str repeat( equivalent to for each string replace replace occurrences of pattern/regex with some other string slice slice each string in the series split split strings on delimiter or regular expression striprstriplstrip trim whitespaceincluding newlinesequivalent to strip((and rstriplstriprespectivelyfor each element exampleusda food database the us department of agriculture makes available database of food nutrient information ashley williamsan english hackerhas made available version of this database in json format ("id" "description""kentucky fried chickenfried chickenextra crispywingmeat and skin with breading""tags"["kfc"]"manufacturer""kentucky fried chicken""group""fast foods""portions""amount" "unit""wingwith skin""grams" } data wranglingcleantransformmergereshape www it-ebooks info |
11,974 | ]"nutrients""value" "units"" ""description""protein""group""composition}each food has number of identifying attributes along with two lists of nutrients and portion sizes having the data in this form is not particularly amenable for analysisso we need to do some work to wrangle the data into better form after downloading and extracting the data from the link aboveyou can load it into python with any json library of your choosing 'll use the built-in python json modulein [ ]import json in [ ]db json load(open('ch /foodsjson')in [ ]len(dbout[ ] each entry in db is dict containing all the data for single food the 'nutrientsfield is list of dictsone for each nutrientin [ ]db[ keys(out[ ][ 'portions' 'description' 'tags' 'nutrients' 'group' 'id' 'manufacturer'in [ ]db[ ]['nutrients'][ out[ ]{ 'description' 'protein' 'group' 'composition' 'units' ' ' 'value' in [ ]nutrients dataframe(db[ ]['nutrients']in [ ]nutrients[: out[ ]description protein total lipid (fat carbohydrateby difference ash energy water energy group units value composition composition composition other energy kcal composition energy kj exampleusda food database www it-ebooks info |
11,975 | we'll take the food namesgroupidand manufacturerin [ ]info_keys ['description''group''id''manufacturer'in [ ]info dataframe(dbcolumns=info_keysin [ ]info[: out[ ]description cheesecaraway cheesecheddar cheeseedam cheesefeta cheesemozzarellapart skim milk group dairy and egg products dairy and egg products dairy and egg products dairy and egg products dairy and egg products id manufacturer in [ ]info out[ ]int index entries to data columnsdescription non-null values group non-null values id non-null values manufacturer non-null values dtypesint ( )object( you can see the distribution of food groups with value_countsin [ ]pd value_counts(info group)[: out[ ]vegetables and vegetable products beef products baked products breakfast cereals legumes and legume products fast foods lambvealand game products sweets pork products fruits and fruit juices nowto do some analysis on all of the nutrient datait' easiest to assemble the nutrients for each food into single large table to do sowe need to take several steps firsti'll convert each list of food nutrients to dataframeadd column for the food idand append the dataframe to list thenthese can be concatenated together with concatnutrients [for rec in dbfnuts dataframe(rec['nutrients']fnuts['id'rec['id'nutrients append(fnutsnutrients pd concat(nutrientsignore_index=true data wranglingcleantransformmergereshape www it-ebooks info |
11,976 | in [ ]nutrients out[ ]int index entries to data columnsdescription non-null values group non-null values units non-null values value non-null values id non-null values dtypesfloat ( )int ( )object( noticed thatfor whatever reasonthere are duplicates in this dataframeso it makes things easier to drop themin [ ]nutrients duplicated(sum(out[ ] in [ ]nutrients nutrients drop_duplicates(since 'groupand 'descriptionis in both dataframe objectswe can rename them to make it clear what is whatin [ ]col_mapping {'description'food''group'fgroup'in [ ]info info rename(columns=col_mappingcopy=falsein [ ]info out[ ]int index entries to data columnsfood non-null values fgroup non-null values id non-null values manufacturer non-null values dtypesint ( )object( in [ ]col_mapping {'description'nutrient''group'nutgroup'in [ ]nutrients nutrients rename(columns=col_mappingcopy=falsein [ ]nutrients out[ ]int index entries to data columnsnutrient non-null values nutgroup non-null values units non-null values value non-null values exampleusda food database www it-ebooks info |
11,977 | non-null values dtypesfloat ( )int ( )object( with all of this donewe're ready to merge info with nutrientsin [ ]ndata pd merge(nutrientsinfoon='id'how='outer'in [ ]ndata out[ ]int index entries to data columnsnutrient non-null values nutgroup non-null values units non-null values value non-null values id non-null values food non-null values fgroup non-null values manufacturer non-null values dtypesfloat ( )int ( )object( in [ ]ndata ix[ out[ ]nutrient folic acid nutgroup vitamins units mcg value id food ostrichtop loincooked fgroup poultry products manufacturer name the tools that you need to slice and diceaggregateand visualize this dataset will be explored in detail in the next two so after you get handle on those methods you might return to this dataset for examplewe could plot of median values by food group and nutrient type (see figure - )in [ ]result ndata groupby(['nutrient''fgroup'])['value'quantile( in [ ]result['zinczn'order(plot(kind='barh'with little clevernessyou can find which food is most dense in each nutrientby_nutrient ndata groupby(['nutgroup''nutrient']get_maximum lambda xx xs( value idxmax()get_minimum lambda xx xs( value idxmin()max_foods by_nutrient apply(get_maximum)[['value''food']make the food little smaller max_foods food max_foods food str[: data wranglingcleantransformmergereshape www it-ebooks info |
11,978 | the resulting dataframe is bit too large to display in the bookhere is just the 'amino acidsnutrient groupin [ ]max_foods ix['amino acids']['food'out[ ]nutrient alanine gelatinsdry powderunsweetened arginine seedssesame flourlow-fat aspartic acid soy protein isolate cystine seedscottonseed flourlow fat (glandlessglutamic acid soy protein isolate glycine gelatinsdry powderunsweetened histidine whalebelugameatdried (alaska nativehydroxyproline kentucky fried chickenfried chickenoriginal isoleucine soy protein isolateprotein technologies interna leucine soy protein isolateprotein technologies interna lysine sealbearded (oogruk)meatdried (alaska nativ methionine fishcodatlanticdried and salted phenylalanine soy protein isolateprotein technologies interna proline gelatinsdry powderunsweetened serine soy protein isolateprotein technologies interna threonine soy protein isolateprotein technologies interna tryptophan sea lionstellermeat with fat (alaska nativetyrosine soy protein isolateprotein technologies interna valine soy protein isolateprotein technologies interna namefood exampleusda food database www it-ebooks info |
11,980 | plotting and visualization making plots and static or interactive visualizations is one of the most important tasks in data analysis it may be part of the exploratory processfor examplehelping identify outliersneeded data transformationsor coming up with ideas for models for othersbuilding an interactive visualization for the web using toolkit like js (http// js org/may be the end goal python has many visualization tools (see the end of this but 'll be mainly focused on matplotlib netmatplotlib is (primarily ddesktop plotting package designed for creating publication-quality plots the project was started by john hunter in to enable matlab-like plotting interface in python hefernando perez (of ipython)and others have collaborated for many years since then to make ipython combined with matplotlib very functional and productive environment for scientific computing when used in tandem with gui toolkit (for examplewithin ipython)matplotlib has interactive features like zooming and panning it supports many different gui backends on all operating systems and additionally can export graphics to all of the common vector and raster graphics formatspdfsvgjpgpngbmpgifetc have used it to produce almost all of the graphics outside of diagrams in this book matplotlib has number of add-on toolkitssuch as mplot for plots and basemap for mapping and projections will give an example using basemap to plot data on map and to read shapefiles at the end of the to follow along with the code examples in the make sure you have started ipython in pylab mode (ipython --pylabor enabled gui event loop integration with the %gui magic brief matplotlib api primer there are several ways to interact with matplotlib the most common is through pylab mode in ipython by running ipython --pylab this launches ipython configured to be able to support the matplotlib gui backend of your choice (tkwxpythonpyqtmac www it-ebooks info |
11,981 | os nativegtkfor most usersthe default backend will be sufficient pylab mode also imports large set of modules and functions into ipython to provide more matlab-like interface you can test that everything is working by making simple plotplot(np arange( )if everything is set up righta new window should pop up with line plot you can close it by using the mouse or entering close(matplotlib api functions like plot and close are all in the matplotlib pyplot modulewhich is typically imported by convention asimport matplotlib pyplot as plt while the pandas plotting functions described later deal with many of the mundane details of making plotsshould you wish to customize them beyond the function options provided you will need to learn bit about the matplotlib api there is not enough room in the book to give comprehensive treatment to the breadth and depth of functionality in matplotlib it should be enough to teach you the ropes to get up and running the matplotlib gallery and documentation are the best resource for becoming plotting guru and using advanced features figures and subplots plots in matplotlib reside within figure object you can create new figure with plt figurein [ ]fig plt figure( plotting and visualization www it-ebooks info |
11,982 | ure has number of optionsnotably figsize will guarantee the figure has certain size and aspect ratio if saved to disk figures in matplotlib also support numbering scheme (for exampleplt figure( )that mimics matlab you can get reference to the active figure using plt gcf(you can' make plot with blank figure you have to create one or more subplots using add_subplotin [ ]ax fig add_subplot( this means that the figure should be and we're selecting the first of subplots (numbered from if you create the next two subplotsyou'll end up with figure that looks like figure - in [ ]ax fig add_subplot( in [ ]ax fig add_subplot( figure - an empty matplotlib figure with subplots when you issue plotting command like plt plot([ - ])matplotlib draws on the last figure and subplot used (creating one if necessary)thus hiding the figure and subplot creation thusif we run the following commandyou'll get something like figure - in [ ]from numpy random import randn in [ ]plt plot(randn( cumsum()' --'the ' --is style option instructing matplotlib to plot black dashed line the objects returned by fig add_subplot above are axessubplot objectson which you can directly plot on the other empty subplots by calling each one' instance methodssee figure - brief matplotlib api primer www it-ebooks info |
11,983 | figure - figure after additional plots in [ ] ax hist(randn( )bins= color=' 'alpha= in [ ]ax scatter(np arange( )np arange( randn( )you can find comprehensive catalogue of plot types in the matplotlib documentation since creating figure with multiple subplots according to particular layout is such common taskthere is convenience methodplt subplotsthat creates new figure and returns numpy array containing the created subplot objects plotting and visualization www it-ebooks info |
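The axes array shown next comes from a call along these lines; a minimal sketch, assuming nothing beyond matplotlib itself:

import matplotlib.pyplot as plt

# One call creates the figure and a 2 x 3 grid of subplots
fig, axes = plt.subplots(2, 3)
# axes is a NumPy array of subplot objects, so axes[0, 1] selects one of them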
11,984 | in [ ]axes out[ ]array([[axes( , ; )axes( , ; )axes( , ; )][axes( , ; )axes( , ; )axes( , ; )]]dtype=objectthis is very useful as the axes array can be easily indexed like two-dimensional arrayfor exampleaxes[ you can also indicate that subplots should have the same or axis using sharex and shareyrespectively this is especially useful when comparing data on the same scaleotherwisematplotlib auto-scales plot limits independently see table - for more on this method table - pyplot subplots options argument description nrows number of rows of subplots ncols number of columns of subplots sharex all subplots should use the same -axis ticks (adjusting the xlim will affect all subplotssharey all subplots should use the same -axis ticks (adjusting the ylim will affect all subplotssubplot_kw dict of keywords for creating the **fig_kw additional keywords to subplots are used when creating the figuresuch as plt subplots( figsize=( )adjusting the spacing around subplots by default matplotlib leaves certain amount of padding around the outside of the subplots and spacing between subplots this spacing is all specified relative to the height and width of the plotso that if you resize the plot either programmatically or manually using the gui windowthe plot will dynamically adjust itself the spacing can be most easily changed using the subplots_adjust figure methodalso available as top-level functionsubplots_adjust(left=nonebottom=noneright=nonetop=nonewspace=nonehspace=nonewspace and hspace controls the percent of the figure width and figure heightrespectivelyto use as spacing between subplots here is small example where shrink the spacing all the way to zero (see figure - )figaxes plt subplots( sharex=truesharey=truefor in range( )for in range( )axes[ijhist(randn( )bins= color=' 'alpha= plt subplots_adjust(wspace= hspace= brief matplotlib api primer www it-ebooks info |
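The spacing example above, restated as a runnable block since the extracted loop is hard to read:

import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for i in range(2):
    for j in range(2):
        axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5)

# Shrink the spacing between subplots to zero; tick labels will then overlap
plt.subplots_adjust(wspace=0, hspace=0)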
11,985 | you may notice that the axis labels overlap matplotlib doesn' check whether the labels overlapso in case like this you would need to fix the labels yourself by specifying explicit tick locations and tick labels more on this in the coming sections colorsmarkersand line styles matplotlib' main plot function accepts arrays of and coordinates and optionally string abbreviation indicating color and line style for exampleto plot versus with green dashesyou would executeax plot(xy' --'this way of specifying both color and linestyle in string is provided as conveniencein practice if you were creating plots programmatically you might prefer not to have to munge strings together to create plots with the desired style the same plot could also have been expressed more explicitly asax plot(xylinestyle='--'color=' 'there are number of color abbreviations provided for commonly-used colorsbut any color on the spectrum can be used by specifying its rgb value (for example'#cece ce'you can see the full set of linestyles by looking at the docstring for plot line plots can additionally have markers to highlight the actual data points since matplotlib creates continuous line plotinterpolating between pointsit can occasionally be unclear where the points lie the marker can be part of the style stringwhich must have color followed by marker type and line style (see figure - )in [ ]plt plot(randn( cumsum()'ko--' plotting and visualization www it-ebooks info |
11,986 | this could also have been written more explicitly asplot(randn( cumsum()color=' 'linestyle='dashed'marker=' 'for line plotsyou will notice that subsequent points are linearly interpolated by default this can be altered with the drawstyle optionin [ ]data randn( cumsum(in [ ]plt plot(data' --'label='default'out[ ][in [ ]plt plot(data' -'drawstyle='steps-post'label='steps-post'out[ ][in [ ]plt legend(loc='best'tickslabelsand legends for most kinds of plot decorationsthere are two main ways to do thingsusing the procedural pyplot interface (which will be very familiar to matlab usersand the more object-oriented native matplotlib api the pyplot interfacedesigned for interactive useconsists of methods like xlimxticksand xticklabels these control the plot rangetick locationsand tick labelsrespectively they can be used in two wayscalled with no arguments returns the current parameter value for example plt xlim(returns the current axis plotting range brief matplotlib api primer www it-ebooks info |
11,987 | called with parameters sets the parameter value so plt xlim([ ])sets the axis range to to all such methods act on the active or most recently-created axessubplot each of them corresponds to two methods on the subplot object itselfin the case of xlim these are ax get_xlim and ax set_xlim prefer to use the subplot instance methods myself in the interest of being explicit (and especially when working with multiple subplots)but you can certainly use whichever you find more convenient setting the titleaxis labelsticksand ticklabels to illustrate customizing the axesi'll create simple figure and plot of random walk (see figure - )in [ ]fig plt figure()ax fig add_subplot( in [ ]ax plot(randn( cumsum()to change the axis ticksit' easiest to use set_xticks and set_xticklabels the former instructs matplotlib where to place the ticks along the data rangeby default these locations will also be the labels but we can set any other values as the labels using set_xticklabelsin [ ]ticks ax set_xticks([ ]in [ ]labels ax set_xticklabels(['one''two''three''four''five']rotation= fontsize='small'lastlyset_xlabel gives name to the axis and set_title the subplot title plotting and visualization www it-ebooks info |
11,988 | in [ ]ax set_title('my first matplotlib plot'out[ ]in [ ]ax set_xlabel('stages'see figure - for the resulting figure modifying the axis consists of the same processsubstituting for in the above figure - simple plot for illustrating xticks brief matplotlib api primer www it-ebooks info |
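The tick, label, and title customizations above, gathered into one runnable sketch on a random-walk plot:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.random.randn(1000).cumsum())

# Choose tick locations along the data range, then relabel and style them
ax.set_xticks([0, 250, 500, 750, 1000])
ax.set_xticklabels(['one', 'two', 'three', 'four', 'five'],
                   rotation=30, fontsize='small')

ax.set_title('My first matplotlib plot')
ax.set_xlabel('Stages')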
11,989 | adding legends legends are another critical element for identifying plot elements there are couple of ways to add one the easiest is to pass the label argument when adding each piece of the plotin [ ]fig plt figure()ax fig add_subplot( in [ ]ax plot(randn( cumsum()' 'label='one'out[ ][in [ ]ax plot(randn( cumsum()' --'label='two'out[ ][in [ ]ax plot(randn( cumsum()' 'label='three'out[ ][once you've done thisyou can either call ax legend(or plt legend(to automatically create legendin [ ]ax legend(loc='best'see figure - the loc tells matplotlib where to place the plot if you aren' picky 'bestis good optionas it will choose location that is most out of the way to exclude one or more elements from the legendpass no label or label='_nolegend_annotations and drawing on subplot in addition to the standard plot typesyou may wish to draw your own plot annotationswhich could consist of textarrowsor other shapes plotting and visualization www it-ebooks info |
11,990 | text draws text at given coordinates (xyon the plot with optional custom stylingax text(xy'hello world!'family='monospace'fontsize= annotations can draw both text and arrows arranged appropriately as an examplelet' plot the closing & index price since (obtained from yahoofinanceand annotate it with some of the important dates from the - financial crisis see figure - for the resultfrom datetime import datetime fig plt figure(ax fig add_subplot( data pd read_csv('ch /spx csv'index_col= parse_dates=truespx data['spx'spx plot(ax=axstyle=' -'crisis_data (datetime( )'peak of bull market')(datetime( )'bear stearns fails')(datetime( )'lehman bankruptcy'for datelabel in crisis_dataax annotate(labelxy=(datespx asof(date )xytext=(datespx asof(date )arrowprops=dict(facecolor='black')horizontalalignment='left'verticalalignment='top'zoom in on - ax set_xlim(['''']ax set_ylim([ ]ax set_title('important dates in - financial crisis'see the online matplotlib gallery for many more annotation examples to learn from drawing shapes requires some more care matplotlib has objects that represent many common shapesreferred to as patches some of theselike rectangle and circle are found in matplotlib pyplotbut the full set is located in matplotlib patches to add shape to plotyou create the patch object shp and add it to subplot by calling ax add_patch(shp(see figure - )fig plt figure(ax fig add_subplot( rect plt rectangle(( ) color=' 'alpha= circ plt circle(( ) color=' 'alpha= pgon plt polygon([[ ][ ][ ]]color=' 'alpha= brief matplotlib api primer www it-ebooks info |
11,991 | ax add_patch(circax add_patch(pgonfigure - important dates in - financial crisis figure - figure composed from different patches if you look at the implementation of many familiar plot typesyou will see that they are assembled from patches plotting and visualization www it-ebooks info |
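A small sketch combining annotate with patches on synthetic data; the coordinates and the annotation label are hypothetical, chosen only so the block runs without the CSV file used in the crisis example above:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
data = np.random.randn(200).cumsum()
ax.plot(data, 'k-')

# annotate draws text (and optionally an arrow) at a point in data coordinates
peak = int(data.argmax())
ax.annotate('peak (hypothetical label)', xy=(peak, data[peak]),
            xytext=(peak, data[peak] + 3),
            arrowprops=dict(facecolor='black'),
            horizontalalignment='left', verticalalignment='top')

# Shapes are patch objects that you add to the subplot explicitly
rect = plt.Rectangle((10, data.min()), 40, 1.5, color='k', alpha=0.3)
circ = plt.Circle((120, data.mean()), 5, color='b', alpha=0.3)
ax.add_patch(rect)
ax.add_patch(circ)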
11,992 | the active figure can be saved to file using plt savefig this method is equivalent to the figure object' savefig instance method for exampleto save an svg version of figureyou need only typeplt savefig('figpath svg'the file type is inferred from the file extension so if you used pdf instead you would get pdf there are couple of important options that use frequently for publishing graphicsdpiwhich controls the dots-per-inch resolutionand bbox_incheswhich can trim the whitespace around the actual figure to get the same plot as png above with minimal whitespace around the plot and at dpiyou would doplt savefig('figpath png'dpi= bbox_inches='tight'savefig doesn' have to write to diskit can also write to any file-like objectsuch as stringiofrom io import stringio buffer stringio(plt savefig(bufferplot_data buffer getvalue(for examplethis is useful for serving dynamically-generated images over the web table - figure savefig options argument description fname string containing filepath or python file-like object the figure format is inferred from the file extensione pdf for pdf or png for png dpi the figure resolution in dots per inchdefaults to out of the box but can be configured facecoloredge color the color of the figure background outside of the subplots ' (white)by default format the explicit file format to use ('png''pdf''svg''ps''eps'bbox_inches the portion of the figure to save if 'tightis passedwill attempt to trim the empty space around the figure matplotlib configuration matplotlib comes configured with color schemes and defaults that are geared primarily toward preparing figures for publication fortunatelynearly all of the default behavior can be customized via an extensive set of global parameters governing figure sizesubplot spacingcolorsfont sizesgrid stylesand so on there are two main ways to interact with the matplotlib configuration system the first is programmatically from python using the rc method for exampleto set the global default figure size to be you could enterplt rc('figure'figsize=( ) brief matplotlib api primer www it-ebooks info |
11,993 | 'axes''xtick''ytick''grid''legendor many others after that can follow sequence of keyword arguments indicating the new parameters an easy way to write down the options in your program is as dictfont_options {'family'monospace''weight'bold''size'small'plt rc('font'**font_optionsfor more extensive customization and to see list of all the optionsmatplotlib comes with configuration file matplotlibrc in the matplotlib/mpl-data directory if you customize this file and place it in your home directory titled matplotlibrcit will be loaded each time you use matplotlib plotting functions in pandas as you've seenmatplotlib is actually fairly low-level tool you assemble plot from its base componentsthe data display (the type of plotlinebarboxscattercontouretc )legendtitletick labelsand other annotations part of the reason for this is that in many cases the data needed to make complete plot is spread across many objects in pandas we have row labelscolumn labelsand possibly grouping information this means that many kinds of fully-formed plots that would ordinarily require lot of matplotlib code can be expressed in one or two concise statements thereforepandas has an increasing number of high-level plotting methods for creating standard visualizations that take advantage of how data is organized in dataframe objects as of this writingthe plotting functionality in pandas is undergoing quite bit of work as part of the google summer of code programa student is working full time to add features and to make the interface more consistent and usable thusit' possible that this code may fall out-of-date faster than the other things in this book the online pandas documentation will be the best resource in that event line plots series and dataframe each have plot method for making many different plot types by defaultthey make line plots (see figure - )in [ ] series(np random randn( cumsum()index=np arange( )in [ ] plot(the series object' index is passed to matplotlib for plotting on the axisthough this can be disabled by passing use_index=false the axis ticks and limits can be adjusted using the xticks and xlim optionsand axis respectively using yticks and ylim see plotting and visualization www it-ebooks info |
11,994 | See the table of Series plot method arguments below for a full listing of plot options. I'll comment on a few more of them throughout this section and leave the rest to you to explore.

Most of pandas's plotting methods accept an optional ax parameter, which can be a matplotlib subplot object. This gives you more flexible placement of subplots in a grid layout. There will be more on this in the later section on the matplotlib API.

DataFrame's plot method plots each of its columns as a different line on the same subplot, creating a legend automatically:

In [ ]: df = DataFrame(np.random.randn(10, 4).cumsum(0),
                       columns=['A', 'B', 'C', 'D'],
                       index=np.arange(0, 100, 10))
In [ ]: df.plot()

Additional keyword arguments to plot are passed through to the respective matplotlib plotting function, so you can further customize these plots by learning more about the matplotlib API.

Table: Series.plot method arguments
label: Label for plot legend
ax: Matplotlib subplot object to plot on; if nothing passed, uses the active matplotlib subplot
style: Style string, like 'ko--', to be passed to matplotlib
alpha: The plot fill opacity (from 0 to 1)
kind: Can be 'line', 'bar', 'barh', 'kde'
logy: Use logarithmic scaling on the Y axis
use_index: Use the object index for tick labels
rot: Rotation of tick labels (0 through 360)
xticks: Values to use for X axis ticks
yticks: Values to use for Y axis ticks
xlim: X axis limits (e.g. [0, 10])
ylim: Y axis limits
grid: Display axis grid (on by default)
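A hedged sketch (not from the book) showing the ax argument together with a couple of the style options from the table above:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame(np.random.randn(10, 4).cumsum(0),
                      columns=['A', 'B', 'C', 'D'],
                      index=np.arange(0, 100, 10))

    fig, axes = plt.subplots(2, 1, sharex=True)
    df['A'].plot(ax=axes[0], style='ko--', alpha=0.7)   # one column, styled
    df[['B', 'C', 'D']].plot(ax=axes[1])                # the rest on the second subplot
    plt.show()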
11,995 | DataFrame has a number of options allowing some flexibility with how the columns are handled, for example whether to plot them all on the same subplot or to create separate subplots. See the table of DataFrame-specific plot arguments below for more on these.

Table: DataFrame-specific plot arguments
subplots: Plot each DataFrame column in a separate subplot
sharex: If subplots=True, share the same X axis, linking ticks and limits
sharey: If subplots=True, share the same Y axis
figsize: Size of figure to create as a tuple
title: Plot title as a string
legend: Add a subplot legend (True by default)
sort_columns: Plot columns in alphabetical order; by default uses the existing column order
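A minimal sketch of my own exercising a few of these DataFrame-specific options in a single call:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(10, 4).cumsum(0),
                      columns=['A', 'B', 'C', 'D'],
                      index=np.arange(0, 100, 10))

    # One subplot per column, shared X axis, a modest figure size and a title
    df.plot(subplots=True, sharex=True, figsize=(6, 8), title='Cumulative sums')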
11,996 | For time series plotting, see the chapter on time series.

Bar Plots

Making bar plots instead of line plots is as simple as passing kind='bar' (for vertical bars) or kind='barh' (for horizontal bars). In this case, the Series or DataFrame index will be used as the X (bar) or Y (barh) ticks:

In [ ]: fig, axes = plt.subplots(2, 1)
In [ ]: data = Series(np.random.rand(16), index=list('abcdefghijklmnop'))
In [ ]: data.plot(kind='bar', ax=axes[0], color='k', alpha=0.7)
In [ ]: data.plot(kind='barh', ax=axes[1], color='k', alpha=0.7)

For more on the plt.subplots function and matplotlib axes and figures, see the later section in this chapter.

With a DataFrame, bar plots group the values in each row together in a group of bars, side by side, with one bar per column:

In [ ]: df = DataFrame(np.random.rand(6, 4),
                       index=['one', 'two', 'three', 'four', 'five', 'six'],
                       columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus'))
In [ ]: df
Out[ ]:
Genus      A    B    C    D
one      ...  ...  ...  ...
two      ...  ...  ...  ...
three    ...  ...  ...  ...
four     ...  ...  ...  ...
five     ...  ...  ...  ...
six      ...  ...  ...  ...

In [ ]: df.plot(kind='bar')
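As a brief aside (my example, not the book's), a tiny frame makes that row/column grouping easy to see:

    import pandas as pd

    small = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['row1', 'row2'])

    # kind='bar' draws one cluster per row (row1, row2 on the X axis),
    # with one bar per column (A, B) inside each cluster
    small.plot(kind='bar', rot=0)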
11,997 | Note that the name "Genus" given to the DataFrame's columns is used to title the legend.

Stacked bar plots are created from a DataFrame by passing stacked=True, resulting in the value in each row being stacked together:

In [ ]: df.plot(kind='barh', stacked=True, alpha=0.5)

A useful recipe for bar plots (as seen in an earlier chapter) is to visualize a Series's value frequency using value_counts: s.value_counts().plot(kind='bar').

Returning to the tipping data set used earlier in the book, suppose we wanted to make a stacked bar plot showing the percentage of data points for each party size on each day. I load the data using read_csv and make a cross-tabulation by day and party size (a self-contained sketch of the same workflow follows the output below):

In [ ]: tips = pd.read_csv('ch08/tips.csv')
In [ ]: party_counts = pd.crosstab(tips.day, tips.size)
In [ ]: party_counts
Out[ ]:
size    1    2    3    4    5    6
day
Fri   ...  ...  ...  ...  ...  ...
Sat   ...  ...  ...  ...  ...  ...
Sun   ...  ...  ...  ...  ...  ...
Thur  ...  ...  ...  ...  ...  ...
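Here is that self-contained sketch; the tiny frame is a made-up stand-in for the tips data, and bracket indexing (tips['size']) is used because a column literally named 'size' collides with the DataFrame.size attribute in newer pandas releases:

    import pandas as pd

    tips = pd.DataFrame({'day':  ['Fri', 'Sat', 'Sat', 'Sun', 'Thur', 'Sun', 'Sat'],
                         'size': [2, 3, 2, 2, 4, 2, 5]})

    party_counts = pd.crosstab(tips['day'], tips['size'])
    party_counts.plot(kind='bar', stacked=True)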
11,998 | In [ ]: party_counts = party_counts.ix[:, 2:5]   # Not many 1- and 6-person parties

[Figure: DataFrame bar plot example]

[Figure: DataFrame stacked bar plot example]

Then, normalize so that each row sums to 1 (I have to cast to float to avoid integer division issues on Python 2.7) and make the plot:

# Normalize to sum to 1
In [ ]: party_pcts = party_counts.div(party_counts.sum(1).astype(float), axis=0)
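Restated as a stand-alone sketch with made-up counts, the idiom divides each row by its row sum, aligning the divisor on the row labels via axis=0:

    import numpy as np
    import pandas as pd

    counts = pd.DataFrame([[5, 2, 1, 0],
                           [9, 4, 3, 1],
                           [7, 6, 2, 2],
                           [8, 1, 1, 0]],
                          index=['Fri', 'Sat', 'Sun', 'Thur'],
                          columns=[2, 3, 4, 5])

    pcts = counts.div(counts.sum(1).astype(float), axis=0)
    assert np.allclose(pcts.sum(1), 1.0)    # every row now sums to 1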
11,999 | In [ ]: party_pcts
Out[ ]:
size     2    3    4    5
day
Fri    ...  ...  ...  ...
Sat    ...  ...  ...  ...
Sun    ...  ...  ...  ...
Thur   ...  ...  ...  ...

In [ ]: party_pcts.plot(kind='bar', stacked=True)

[Figure: Fraction of parties by size on each day]

So you can see that party sizes appear to increase on the weekend in this data set.

Histograms and Density Plots

A histogram, with which you may be well acquainted, is a kind of bar plot that gives a discretized display of value frequency. The data points are split into discrete, evenly spaced bins, and the number of data points in each bin is plotted. Using the tipping data from before, we can make a histogram of tip percentages of the total bill using the hist method on the Series:

In [ ]: tips['tip_pct'] = tips['tip'] / tips['total_bill']
In [ ]: tips['tip_pct'].hist(bins=50)
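A runnable sketch with synthetic data (the tips file is not bundled here) placing a histogram and a kernel density estimate of the same values side by side; the kde plot additionally requires SciPy:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    values = pd.Series(np.random.randn(1000))

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    values.hist(bins=50, ax=axes[0])        # binned frequency counts
    values.plot(kind='kde', ax=axes[1])     # smoothed density estimate
    plt.show()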