12,000 | related plot type is density plotwhich is formed by computing an estimate of continuous probability distribution that might have generated the observed data usual procedure is to approximate this distribution as mixture of kernelsthat issimpler distributions like the normal (gaussiandistribution thusdensity plots are also known as kde (kernel density estimateplots using plot with kind='kdemakes density plot using the standard mixture-of-normals kde (see figure - )in [ ]tips['tip_pct'plot(kind='kde'these two plot types are often plotted togetherthe histogram in normalized form (to give binned densitywith kernel density estimate plotted on top as an exampleconsider bimodal distribution consisting of draws from two different standard normal distributions (see figure - )in [ ]comp np random normal( size= ( in [ ]comp np random normal( size= ( in [ ]values series(np concatenate([comp comp ])in [ ]values hist(bins= alpha= color=' 'normed=trueout[ ]in [ ]values plot(kind='kde'style=' --'scatter plots scatter plots are useful way of examining the relationship between two one-dimensional data series matplotlib has scatter plotting method that is the workhorse of plotting functions in pandas www it-ebooks info |
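The numerals and styling arguments in the listing above did not survive; here is a minimal sketch of the same idea, with the component sizes assumed (recent matplotlib spells the histogram option density=True rather than the older normed=True):

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # a bimodal distribution: draws from two different normal distributions
    comp1 = np.random.normal(0, 1, size=200)     # N(0, 1)
    comp2 = np.random.normal(10, 2, size=200)    # N(10, 4)
    values = pd.Series(np.concatenate([comp1, comp2]))

    # normalized histogram with a kernel density estimate plotted on top
    values.hist(bins=100, alpha=0.3, color='k', density=True)
    values.plot(kind='kde', style='k--')
    plt.show()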
12,001 | figure - normalized histogram of normal mixture with density estimate making these kinds of plots to give an examplei load the macrodata dataset from the statsmodels projectselect few variablesthen compute log differencesin [ ]macro pd read_csv('ch /macrodata csv'in [ ]data macro[['cpi'' ''tbilrate''unemp']in [ ]trans_data np log(datadiff(dropna( plotting and visualization www it-ebooks info |
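A hedged reconstruction of the transformation just described, together with the simple scatter plot and the scatter_matrix pairs plot discussed on the next page; the file path and the 'm1' column are assumptions based on the statsmodels macrodata set (in older pandas, scatter_matrix lives in pandas.tools.plotting):

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import scatter_matrix

    macro = pd.read_csv('ch08/macrodata.csv')          # assumed path
    data = macro[['cpi', 'm1', 'tbilrate', 'unemp']]    # 'm1' is assumed
    trans_data = np.log(data).diff().dropna()           # log differences

    # simple scatter plot of two of the transformed series
    plt.scatter(trans_data['m1'], trans_data['unemp'])
    plt.title('Changes in log m1 vs. log unemp')

    # pairs plot / scatter plot matrix with density estimates on the diagonal
    scatter_matrix(trans_data, diagonal='kde', color='k', alpha=0.3)
    plt.show()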
12,002 | out[ ]cpi tbilrate - - - - - - unemp it' easy to plot simple scatter plot using plt scatter (see figure - )in [ ]plt scatter(trans_data[' ']trans_data['unemp']out[ ]in [ ]plt title('changes in log % vs log % (' ''unemp')figure - simple scatter plot in exploratory data analysis it' helpful to be able to look at all the scatter plots among group of variablesthis is known as pairs plot or scatter plot matrix making such plot from scratch is bit of workso pandas has scatter_matrix function for creating one from dataframe it also supports placing histograms or density plots of each variable along the diagonal see figure - for the resulting plotin [ ]scatter_matrix(trans_datadiagonal='kde'color=' 'alpha= plotting mapsvisualizing haiti earthquake crisis data ushahidi is non-profit software company that enables crowdsourcing of information related to natural disasters and geopolitical events via text message many of these data sets are then published on their website for analysis and visualization downloaded plotting mapsvisualizing haiti earthquake crisis data www it-ebooks info |
12,003 | the data collected during the haiti earthquake crisis and aftermathand 'll show you how prepared the data for analysis and visualization using pandas and other tools we have looked at thus far after downloading the csv file from the above linkwe can load it into dataframe using read_csvin [ ]data pd read_csv('ch /haiti csv'in [ ]data out[ ]int index entries to data columnsserial non-null values incident title non-null values incident date non-null values location non-null values description non-null values category non-null values latitude non-null values longitude non-null values approved non-null values verified non-null values dtypesfloat ( )int ( )object( it' easy now to tinker with this data set to see what kinds of things we might want to do with it each row represents report sent from someone' mobile phone indicating an emergency or some other problem each has an associated timestamp and location as latitude and longitudein [ ]data[['incident date''latitude''longitude']][: out[ ]incident date latitude longitude plotting and visualization www it-ebooks info |
12,004 | : : : : : : : : : : - - - - - - - the category field contains comma-separated list of codes indicating the type of messagein [ ]data['category'][: out[ ] urgences emergency public health urgences emergency urgences logistiques urgences logistiques vital lines autre urgences emergency urgences emergency communication lines downnamecategory if you notice above in the data summarysome of the categories are missingso we might want to drop these data points additionallycalling describe shows that there are some aberrant locationsin [ ]data describe(out[ ]serial latitude count mean std min max longitude - - - - - cleaning the bad locations and removing the missing categories is now fairly simplein [ ]data data[(data latitude (data latitude (data longitude - (data longitude - data category notnull()now we might want to do some analysis or visualization of this data by categorybut each category field may have multiple categories additionallyeach category is given as code plus an english and possibly also french code name thusa little bit of wrangling is required to get the data into more agreeable form firsti wrote these two functions to get list of all the categories and to split each category into code and an english namedef to_cat_list(catstr)stripped ( strip(for in catstr split(',')plotting mapsvisualizing haiti earthquake crisis data www it-ebooks info |
12,005 | def get_all_categories(cat_series)cat_sets (set(to_cat_list( )for in cat_seriesreturn sorted(set union(*cat_sets)def get_english(cat)codenames cat split('if '|in namesnames names split(')[ return codenames strip(you can test out that the get_english function does what you expectin [ ]get_english(' urgences logistiques vital lines'out[ ](' ''vital lines'nowi make dict mapping code to name because we'll use the codes for analysis we'll use this later when adorning plots (note the use of generator expression in lieu of list comprehension)in [ ]all_cats get_all_categories(data categorygenerator expression in [ ]english_mapping dict(get_english(xfor in all_catsin [ ]english_mapping[' 'out[ ]'food shortagein [ ]english_mapping[' 'out[ ]'earthquake and aftershocksthere are many ways to go about augmenting the data set to be able to easily select records by category one way is to add indicator (or dummycolumnsone for each category to do thatfirst extract the unique category codes and construct dataframe of zeros having those as its columns and the same index as datadef get_code(seq)return [ split(')[ for in seq if xall_codes get_code(all_catscode_index pd index(np unique(all_codes)dummy_frame dataframe(np zeros((len(data)len(code_index)))index=data indexcolumns=code_indexif all goes welldummy_frame should look something like thisin [ ]dummy_frame ix[:: out[ ]int index entries to data columns non-null values non-null values non-null values non-null values plotting and visualization www it-ebooks info |
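The category-parsing helpers are hard to read in this rendering; the following is a hedged reconstruction of their intent, assuming each category entry has the form 'code. english name | french name':

    def to_cat_list(catstr):
        # split a comma-separated category string into its non-empty pieces
        stripped = (x.strip() for x in catstr.split(','))
        return [x for x in stripped if x]

    def get_all_categories(cat_series):
        # sorted union of every category seen anywhere in the series
        cat_sets = (set(to_cat_list(x)) for x in cat_series)
        return sorted(set.union(*cat_sets))

    def get_english(cat):
        # keep the code and the English half of "code. english | french"
        code, names = cat.split('.')
        if '|' in names:
            names = names.split(' | ')[1]
        return code, names.strip()

    def get_code(seq):
        # reduce each entry to its leading numeric code
        return [x.split('.')[0] for x in seq if x]

With these in place, english_mapping can be built as dict(get_english(x) for x in get_all_categories(data.category)).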
12,006 | non-null values non-null values dtypesfloat ( as you recallthe trick is then to set the appropriate entries of each row to lastly joining this with datafor rowcat in zip(data indexdata category)codes get_code(to_cat_list(cat)dummy_frame ix[rowcodes data data join(dummy_frame add_prefix('category_')data finally now has new columns likein [ ]data ix[: : out[ ]int index entries to data columnscategory_ non-null values category_ non-null values category_ non-null values category_ non-null values category_ non-null values dtypesfloat ( let' make some plotsas this is spatial datawe' like to plot the data by category on map of haiti the basemap toolkit (to matplotlibenables plotting data on maps in python basemap provides many different globe projections and means for transforming projecting latitude and longitude coordinates on the globe onto two-dimensional matplotlib plot after some trial and error and using the above data as guidelinei wrote this function which draws simple black and white map of haitifrom mpl_toolkits basemap import basemap import matplotlib pyplot as plt def basic_haiti_map(ax=nonelllat= urlat= lllon=- urlon=- )create polar stereographic basemap instance basemap(ax=axprojection='stere'lon_ =(urlon lllon lat_ =(urlat lllat llcrnrlat=lllaturcrnrlat=urlatllcrnrlon=lllonurcrnrlon=urlonresolution=' 'draw coastlinesstate and country boundariesedge of map drawcoastlines( drawstates( drawcountries(return the ideanowis that the returned basemap objectknows how to transform coordinates onto the canvas wrote the following code to plot the data observations for number plotting mapsvisualizing haiti earthquake crisis data www it-ebooks info |
12,007 | of report categories for each categoryi filter down the data set to the coordinates labeled by that categoryplot basemap on the appropriate subplottransform the coordinatesthen plot the points using the basemap' plot methodfigaxes plt subplots(nrows= ncols= figsize=( )fig subplots_adjust(hspace= wspace= to_plot [' '' '' '' 'lllat= urlat= lllon=- urlon=- for codeax in zip(to_plotaxes flat) basic_haiti_map(axlllat=lllaturlat=urlatlllon=lllonurlon=urloncat_data data[data['category_%scode= compute map proj coordinates xy (cat_data longitudecat_data latitudem plot(xy' 'alpha= ax set_title('% % (codeenglish_mapping[code])the resulting figure can be seen in figure - it seems from the plot that most of the data is concentrated around the most populous cityport-au-prince basemap allows you to overlap additional map data which comes from what are called shapefiles first downloaded shapefile with roads in port-auprince (see conveniently has readshapefile method so thatafter extracting the road data archivei added just the following lines to my code plotting and visualization www it-ebooks info |
12,008 | shapefile_path 'ch /portauprince_roads/portauprince_roadsm readshapefile(shapefile_path'roads'after little more trial and error with the latitude and longitude boundariesi was able to make figure - for the "food shortagecategory python visualization tool ecosystem as is common with open sourcethere are plethora of options for creating graphics in python (too many to listin addition to open sourcethere are numerous commercial libraries with python bindings in this and throughout the booki have been primarily concerned with matplotlib as it is the most widely used plotting tool in python while it' an important part of the scientific python ecosystemmatplotlib has plenty of shortcomings when it comes to the creation and display of statistical graphics matlab users will likely find matplotlib familiarwhile users (especially users of the excellent ggplot and trel lis packagesmay be somewhat disappointed (at least as of this writingit is possible to make beautiful plots for display on the web in matplotlibbut doing so often requires significant effort as the library is designed for the printed page aesthetics asideit is sufficient for most needs in pandasialong with the other developershave sought to build convenient user interface that makes it easier to make most kinds of plots commonplace in data analysis there are number of other visualization tools in wide use list few of them here and encourage you to explore the ecosystem python visualization tool ecosystem www it-ebooks info |
12,009 | chaco (with matplotlibchaco has much better support for interacting with plot elements and rendering is very fastmaking it good choice for building interactive gui applications figure - chaco example plot mayavi the mayavi projectdeveloped by prabhu ramachandrangael varoquauxand othersis graphics toolkit built on the open source +graphics library vtk mayavilike matplotlibintegrates with ipython so that it is easy to use interactively the plots can be pannedrotatedand zoomed using the mouse and keyboard used mayavi to make one of the illustrations of broadcasting in while don' show any mayavi-using code herethere is plenty of documentation and examples available online in many casesi believe it is good alternative to technology like webglthough the graphics are harder to share in interactive form other packages of coursethere are numerous other visualization libraries and applications available in pythonpyqwtveuszgnuplot-pybigglesand others have seen pyqwt put to good use in gui applications built using the qt application framework using pyqt while many of these libraries continue to be under active development (some of them plotting and visualization www it-ebooks info |
12,010 | toward web-based technologies and away from desktop graphics 'll say few more words about this in the next section the future of visualization toolsvisualizations built on web technologies (that isjavascript-basedappear to be the inevitable future doubtlessly you have used many different kinds of static or interactive visualizations built in flash or javascript over the years new toolkits (such as js and its numerous off-shoot projectsfor building such displays are appearing all the time in contrastdevelopment in non web-based visualization has slowed significantly in recent years this holds true of python as well as other data analysis and statistical computing environments like the development challengethenwill be in building tighter integration between data analysis and preparation toolssuch as pandasand the web browser am hopeful that this will become fruitful point of collaboration between python and non-python users as well python visualization tool ecosystem www it-ebooks info |
Data Aggregation and Group Operations

Categorizing a data set and applying a function to each group, whether an aggregation or a transformation, is often a critical component of a data analysis workflow. After loading, merging, and preparing a data set, a familiar task is to compute group statistics or possibly pivot tables for reporting or visualization purposes. pandas provides a flexible and high-performance groupby facility, enabling you to slice and dice, and summarize data sets in a natural way.

One reason for the popularity of relational databases and SQL (which stands for "structured query language") is the ease with which data can be joined, filtered, transformed, and aggregated. However, query languages like SQL are rather limited in the kinds of group operations that can be performed. As you will see, with the expressiveness and power of Python and pandas, we can perform much more complex grouped operations by utilizing any function that accepts a pandas object or NumPy array. In this chapter, you will learn how to:

- Split a pandas object into pieces using one or more keys (in the form of functions, arrays, or DataFrame column names)
- Compute group summary statistics, like count, mean, or standard deviation, or a user-defined function
- Apply a varying set of functions to each column of a DataFrame
- Apply within-group transformations or other manipulations, like normalization, linear regression, rank, or subset selection
- Compute pivot tables and cross-tabulations
- Perform quantile analysis and other data-derived group analyses

Aggregation of time series data, a special use case of groupby, is referred
12,013 | to as resampling in this book and will receive separate treatment in groupby mechanics hadley wickhaman author of many popular packages for the programming languagecoined the term split-apply-combine for talking about group operationsand think that' good description of the process in the first stage of the processdata contained in pandas objectwhether seriesdataframeor otherwiseis split into groups based on one or more keys that you provide the splitting is performed on particular axis of an object for examplea dataframe can be grouped on its rows (axis= or its columns (axis= once this is donea function is applied to each groupproducing new value finallythe results of all those function applications are combined into result object the form of the resulting object will usually depend on what' being done to the data see figure - for mockup of simple group aggregation figure - illustration of group aggregation each grouping key can take many formsand the keys do not have to be all of the same typea list or array of values that is the same length as the axis being grouped value indicating column name in dataframe data aggregation and group operations www it-ebooks info |
12,014 | grouped and the group names function to be invoked on the axis index or the individual labels in the index note that the latter three methods are all just shortcuts for producing an array of values to be used to split up the object don' worry if this all seems very abstract throughout this will give many examples of all of these methods to get startedhere is very simple small tabular dataset as dataframein [ ]df dataframe({'key [' '' '' '' '' ']'key ['one''two''one''two''one']'data np random randn( )'data np random randn( )}in [ ]df out[ ]data - - - data key key one two one two one suppose you wanted to compute the mean of the data column using the groups labels from key there are number of ways to do this one is to access data and call groupby with the column ( seriesat key in [ ]grouped df['data 'groupby(df['key ']in [ ]grouped out[ ]this grouped variable is now groupby object it has not actually computed anything yet except for some intermediate data about the group key df['key 'the idea is that this object has all of the information needed to then apply some operation to each of the groups for exampleto compute group means we can call the groupby' mean methodin [ ]grouped mean(out[ ]key - lateri'll explain more about what' going on when you call mean(the important thing here is that the data ( serieshas been aggregated according to the group keyproducing new series that is now indexed by the unique values in the key column the result index has the name 'key because the dataframe column df['key 'did if instead we had passed multiple arrays as listwe get something differentin [ ]means df['data 'groupby([df['key ']df['key ']]mean(groupby mechanics www it-ebooks info |
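Since the numeric output above is unreadable here, this is a compact sketch of the mechanics being described, with random data standing in for the book's values:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a'],
                       'key2': ['one', 'two', 'one', 'two', 'one'],
                       'data1': np.random.randn(5),
                       'data2': np.random.randn(5)})

    # group data1 by the values in key1; nothing is computed until we aggregate
    grouped = df['data1'].groupby(df['key1'])
    grouped.mean()                     # one mean per unique value of key1

    # grouping by two keys yields a Series with a hierarchical index
    means = df['data1'].groupby([df['key1'], df['key2']]).mean()
    means.unstack()                    # pivot the inner index level into columns

    # column names can be passed directly as the group keys
    df.groupby(['key1', 'key2']).mean()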
12,015 | out[ ]key key one two one - two - in this casewe grouped the data using two keysand the resulting series now has hierarchical index consisting of the unique pairs of keys observedin [ ]means unstack(out[ ]key one two key - - in these examplesthe group keys are all seriesthough they could be any arrays of the right lengthin [ ]states np array(['ohio''california''california''ohio''ohio']in [ ]years np array([ ]in [ ]df['data 'groupby([statesyears]mean(out[ ]california - ohio - frequently the grouping information to be found in the same dataframe as the data you want to work on in that caseyou can pass column names (whether those are stringsnumbersor other python objectsas the group keysin [ ]df groupby('key 'mean(out[ ]data data key - in [ ]df groupby(['key ''key ']mean(out[ ]data data key key one two one - two - you may have noticed in the first case df groupby('key 'mean(that there is no key column in the result because df['key 'is not numeric datait is said to be nuisance columnwhich is therefore excluded from the result by defaultall of the data aggregation and group operations www it-ebooks info |
12,016 | see soon regardless of the objective in using groupbya generally useful groupby method is size which return series containing group sizesin [ ]df groupby(['key ''key ']size(out[ ]key key one two one two as of this writingany missing values in group key will be excluded from the result it' possible (andin factquite likely)that by the time you are reading this there will be an option to include the na group in the result iterating over groups the groupby object supports iterationgenerating sequence of -tuples containing the group name along with the chunk of data consider the following small example data setin [ ]for namegroup in df groupby('key ')print name print group data data key key - one two one data data key key - one - two in the case of multiple keysthe first element in the tuple will be tuple of key valuesin [ ]for ( )group in df groupby(['key ''key '])print print group one data data key key - one one two data data key key two one data data key key groupby mechanics www it-ebooks info |
12,017 | one two data data key key - two of courseyou can choose to do whatever you want with the pieces of data recipe you may find useful is computing dict of the data pieces as one-linerin [ ]pieces dict(list(df groupby('key '))in [ ]pieces[' 'out[ ]data data key key - one - two by default groupby groups on axis= but you can group on any of the other axes for examplewe could group the columns of our example df here by dtype like soin [ ]df dtypes out[ ]data float data float key object key object in [ ]grouped df groupby(df dtypesaxis= in [ ]dict(list(grouped)out[ ]{dtype('float ')data - - - dtype('object')key key one two one two onedata selecting column or subset of columns indexing groupby object created from dataframe with column name or array of column names has the effect of selecting those columns for aggregation this means thatdf groupby('key ')['data 'df groupby('key ')[['data ']are syntactic sugar fordf['data 'groupby(df['key ']df[['data ']groupby(df['key '] data aggregation and group operations www it-ebooks info |
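A short sketch pulling together the iteration recipes and the column-selection shorthand from the last two pages, continuing with the df defined earlier:

    # iterate over (group name, sub-DataFrame) pairs
    for name, group in df.groupby('key1'):
        print(name)
        print(group)

    # a one-liner for materializing the pieces as a dict keyed by group name
    pieces = dict(list(df.groupby('key1')))
    pieces['b']

    # axis=1 groups the columns rather than the rows, here by dtype
    dict(list(df.groupby(df.dtypes, axis=1)))

    # indexing a GroupBy object selects columns for aggregation
    df.groupby('key1')['data1']        # a grouped Series
    df.groupby('key1')[['data2']]      # a grouped DataFrame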
12,018 | examplein the above data setto compute means for just the data column and get the result as dataframewe could writein [ ]df groupby(['key ''key '])[['data ']mean(out[ ]data key key one two one two the object returned by this indexing operation is grouped dataframe if list or array is passed and grouped series is just single column name that is passed as scalarin [ ]s_grouped df groupby(['key ''key '])['data 'in [ ]s_grouped out[ ]in [ ]s_grouped mean(out[ ]key key one two one two namedata grouping with dicts and series grouping information may exist in form other than an array let' consider another example dataframein [ ]people dataframe(np random randn( )columns=[' '' '' '' '' ']index=['joe''steve''wes''jim''travis']in [ ]people ix[ : [' '' ']np nan add few na values in [ ]people out[ ] joe - steve - - - wes - nan nan - - jim travis - - - - - nowsuppose have group correspondence for the columns and want to sum together the columns by groupin [ ]mapping {' ''red'' ''red'' ''blue'' ''blue'' ''red'' 'orange'groupby mechanics www it-ebooks info |
12,019 | we can just pass the dictin [ ]by_column people groupby(mappingaxis= in [ ]by_column sum(out[ ]blue red joe steve - wes - - jim travis - - the same functionality holds for serieswhich can be viewed as fixed size mapping when used series as group keys in the above examplespandas doesin factinspect each series to ensure that its index is aligned with the axis it' groupingin [ ]map_series series(mappingin [ ]map_series out[ ] red red blue blue red orange in [ ]people groupby(map_seriesaxis= count(out[ ]blue red joe steve wes jim travis grouping with functions using python functions in what can be fairly creative ways is more abstract way of defining group mapping compared with dict or series any function passed as group key will be called once per index valuewith the return values being used as the group names more concretelyconsider the example dataframe from the previous sectionwhich has people' first names as index values suppose you wanted to group by the length of the namesyou could compute an array of string lengthsbut instead you can just pass the len functionin [ ]people groupby(lensum(out[ ] - - data aggregation and group operations www it-ebooks info |
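The people example lost its values above; here is a hedged sketch of grouping with a dict, a Series, and a plain function as keys (the NA positions are assumed):

    import numpy as np
    import pandas as pd

    people = pd.DataFrame(np.random.randn(5, 5),
                          columns=['a', 'b', 'c', 'd', 'e'],
                          index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
    people.iloc[2:3, [1, 2]] = np.nan          # add a few NA values

    # a dict mapping columns to group names; unused keys (like 'f') are ignored
    mapping = {'a': 'red', 'b': 'red', 'c': 'blue',
               'd': 'blue', 'e': 'red', 'f': 'orange'}
    people.groupby(mapping, axis=1).sum()

    # a Series behaves the same way: its index is aligned with the columns
    map_series = pd.Series(mapping)
    people.groupby(map_series, axis=1).count()

    # any function is called once per index value; here, group by name length
    people.groupby(len).sum()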
12,020 | - - - - - mixing functions with arraysdictsor series is not problem as everything gets converted to arrays internallyin [ ]key_list ['one''one''one''two''two'in [ ]people groupby([lenkey_list]min(out[ ] one - - - - two one - - - two - - - - - grouping by index levels final convenience for hierarchically-indexed data sets is the ability to aggregate using one of the levels of an axis index to do thispass the level number or name using the level keywordin [ ]columns pd multiindex from_arrays([['us''us''us''jp''jp'][ ]]names=['cty''tenor']in [ ]hier_df dataframe(np random randn( )columns=columnsin [ ]hier_df out[ ]cty us jp tenor - - - - - - - - - in [ ]hier_df groupby(level='cty'axis= count(out[ ]cty jp us data aggregation by aggregationi am generally referring to any data transformation that produces scalar values from arrays in the examples above have used several of themsuch as meancountmin and sum you may wonder what is going on when you invoke mean(on groupby object many common aggregationssuch as those found in table - have optimized implementations that compute the statistics on the dataset in place howeveryou are not limited to only this set of methods you can use aggregations of your data aggregation www it-ebooks info |
12,021 | object for exampleas you recall quantile computes sample quantiles of series or dataframe' columns in [ ]df out[ ]data - - - data key key one two one two one in [ ]grouped df groupby('key 'in [ ]grouped['data 'quantile( out[ ]key - while quantile is not explicitly implemented for groupbyit is series method and thus available for use internallygroupby efficiently slices up the seriescalls piece quantile( for each piecethen assembles those results together into the result object to use your own aggregation functionspass any function that aggregates an array to the aggregate or agg methodin [ ]def peak_to_peak(arr)return arr max(arr min(in [ ]grouped agg(peak_to_peakout[ ]data data key you'll notice that some methods like describe also workeven though they are not aggregationsstrictly speakingin [ ]grouped describe(out[ ]data data key count mean std min - note that quantile performs linear interpolation if there is no value at exactly the passed percentile data aggregation and group operations www it-ebooks info |
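To make the custom-aggregation discussion concrete, a brief sketch using the df grouped by key1 from earlier (0.9 is an assumed quantile; the numeric columns are selected explicitly so the example also works on recent pandas):

    grouped = df.groupby('key1')

    # quantile is a Series method; groupby applies it piece by piece
    grouped['data1'].quantile(0.9)

    # any function reducing an array to a scalar can be handed to agg
    def peak_to_peak(arr):
        return arr.max() - arr.min()

    grouped[['data1', 'data2']].agg(peak_to_peak)

    # non-aggregating methods such as describe also work
    grouped.describe()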
12,022 | max count mean - std min - - - - max - will explain in more detail what has happened here in the next major section on groupwise operations and transformations you may notice that custom aggregation functions are much slower than the optimized functions found in table - this is because there is significant overhead (function callsdata rearrangementin constructing the intermediate group data chunks table - optimized groupby methods function name description count number of non-na values in the group sum sum of non-na values mean mean of non-na values median arithmetic median of non-na values stdvar unbiased ( denominatorstandard deviation and variance minmax minimum and maximum of non-na values prod product of non-na values firstlast first and last non-na values to illustrate some more advanced aggregation featuresi'll use less trivial dataseta dataset on restaurant tipping obtained it from the reshape packageit was originally found in bryant smith' text on business statistics (and found in the book' github repositoryafter loading it with read_csvi add tipping percentage column tip_pct in [ ]tips pd read_csv('ch /tips csv'add tip percentage of total bill in [ ]tips['tip_pct'tips['tip'tips['total_bill'in [ ]tips[: out[ ]total_bill tip sex smoker female no male no day sun sun time size tip_pct dinner dinner data aggregation www it-ebooks info |
12,023 | male male female male no no no no sun sun sun sun dinner dinner dinner dinner column-wise and multiple function application as you've seen aboveaggregating series or all of the columns of dataframe is matter of using aggregate with the desired function or calling method like mean or std howeveryou may want to aggregate using different function depending on the column or multiple functions at once fortunatelythis is straightforward to dowhich 'll illustrate through number of examples firsti'll group the tips by sex and smokerin [ ]grouped tips groupby(['sex''smoker']note that for descriptive statistics like those in table - you can pass the name of the function as stringin [ ]grouped_pct grouped['tip_pct'in [ ]grouped_pct agg('mean'out[ ]sex smoker female no yes male no yes nametip_pct if you pass list of functions or function names insteadyou get back dataframe with column names taken from the functionsin [ ]grouped_pct agg(['mean''std'peak_to_peak]out[ ]mean std peak_to_peak sex smoker female no yes male no yes you don' need to accept the names that groupby gives to the columnsnotably lambda functions have the name 'which make them hard to identify (you can see for yourself by looking at function' __name__ attributeas suchif you pass list of (namefunctiontuplesthe first element of each tuple will be used as the dataframe column names (you can think of list of -tuples as an ordered mapping)in [ ]grouped_pct agg([('foo''mean')('bar'np std)]out[ ]foo bar sex smoker female no yes data aggregation and group operations www it-ebooks info |
12,024 | no yes with dataframeyou have more options as you can specify list of functions to apply to all of the columns or different functions per column to startsuppose we wanted to compute the same three statistics for the tip_pct and total_bill columnsin [ ]functions ['count''mean''max'in [ ]result grouped['tip_pct''total_bill'agg(functionsin [ ]result out[ ]sex smoker female no yes male no yes tip_pct count mean max total_bill count mean max as you can seethe resulting dataframe has hierarchical columnsthe same as you would get aggregating each column separately and using concat to glue the results together using the column names as the keys argumentin [ ]result['tip_pct'out[ ]count mean sex smoker female no yes male no yes max as abovea list of tuples with custom names can be passedin [ ]ftuples [('durchschnitt''mean')('abweichung'np var)in [ ]grouped['tip_pct''total_bill'agg(ftuplesout[ ]tip_pct total_bill durchschnitt abweichung durchschnitt abweichung sex smoker female no yes male no yes nowsuppose you wanted to apply potentially different functions to one or more of the columns the trick is to pass dict to agg that contains mapping of column names to any of the function specifications listed so farin [ ]grouped agg({'tipnp max'size'sum'}out[ ]size tip sex smoker data aggregation www it-ebooks info |
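A hedged reconstruction of the tipping-data setup and of applying several functions at once; the file path is an assumption, and peak_to_peak is the helper defined earlier:

    import numpy as np
    import pandas as pd

    tips = pd.read_csv('ch08/tips.csv')                 # assumed path
    tips['tip_pct'] = tips['tip'] / tips['total_bill']  # tip as a fraction of the bill

    grouped = tips.groupby(['sex', 'smoker'])
    grouped_pct = grouped['tip_pct']

    grouped_pct.agg('mean')                             # a single function by name

    # a list of functions gives a DataFrame with one column per function
    grouped_pct.agg(['mean', 'std', peak_to_peak])

    # (name, function) tuples control the resulting column names
    grouped_pct.agg([('foo', 'mean'), ('bar', np.std)])

    # different statistics per column via a dict
    grouped.agg({'tip': 'max', 'size': 'sum',
                 'tip_pct': ['min', 'max', 'mean', 'std']})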
12,025 | yes male no yes in [ ]grouped agg({'tip_pct['min''max''mean''std']'size'sum'}out[ ]tip_pct size min max mean std sum sex smoker female no yes male no yes dataframe will have hierarchical columns only if multiple functions are applied to at least one column returning aggregated data in "unindexedform in all of the examples up until nowthe aggregated data comes back with an indexpotentially hierarchicalcomposed from the unique group key combinations observed since this isn' always desirableyou can disable this behavior in most cases by passing as_index=false to groupbyin [ ]tips groupby(['sex''smoker']as_index=falsemean(out[ ]sex smoker total_bill tip size tip_pct female no female yes male no male yes of courseit' always possible to obtain the result in this format by calling reset_index on the result using groupby in this way is generally less flexibleresults with hierarchical columnsfor exampleare not currently implemented as the form of the result would have to be somewhat arbitrary group-wise operations and transformations aggregation is only one kind of group operation it is special case in the more general class of data transformationsthat isit accepts functions that reduce one-dimensional array to scalar value in this sectioni will introduce you to the transform and apply methodswhich will enable you to do many other kinds of group operations supposeinsteadwe wanted to add column to dataframe containing group means for each index one way to do this is to aggregatethen merge data aggregation and group operations www it-ebooks info |
12,026 | out[ ]data - - - data key key one two one two one in [ ] _means df groupby('key 'mean(add_prefix('mean_'in [ ] _means out[ ]mean_data mean_data key - in [ ]pd merge(dfk _meansleft_on='key 'right_index=trueout[ ]data data key key mean_data mean_data - one two one - one - - two - this worksbut is somewhat inflexible you can think of the operation as transforming the two data columns using the np mean function let' look back at the people dataframe from earlier in the and use the transform method on groupbyin [ ]key ['one''two''one''two''one'in [ ]people groupby(keymean(out[ ] one - - - - - two - in [ ]people groupby(keytransform(np meanout[ ] joe - - - - - steve - wes - - - - - jim - travis - - - - - as you may guesstransform applies function to each groupthen places the results in the appropriate locations if each group produces scalar valueit will be propagated (broadcastedsuppose instead you wanted to subtract the mean value from each group to do thiscreate demeaning function and pass it to transformin [ ]def demean(arr)return arr arr mean(group-wise operations and transformations www it-ebooks info |
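The assignment that produces demeaned falls at the page break above; here is a sketch of transform and the demeaning step, continuing with the people frame and an assumed five-element key:

    key = ['one', 'two', 'one', 'two', 'one']

    # transform broadcasts each group's result back to the original shape
    people.groupby(key).transform(np.mean)

    def demean(arr):
        # subtract the group mean from every value in the group
        return arr - arr.mean()

    demeaned = people.groupby(key).transform(demean)

    # sanity check: the demeaned data now has (essentially) zero group means
    demeaned.groupby(key).mean()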
12,027 | in [ ]demeaned out[ ] joe - steve - - - wes - nan nan - - jim - - travis - - - - you can check that demeaned now has zero group meansin [ ]demeaned groupby(keymean(out[ ] one - two - as you'll see in the next sectiongroup demeaning can be achieved using apply also applygeneral split-apply-combine like aggregatetransform is more specialized function having rigid requirementsthe passed function must either produce scalar value to be broadcasted (like np meanor transformed array of the same size the most general purpose groupby method is applywhich is the subject of the rest of this section as in figure - apply splits the object being manipulated into piecesinvokes the passed function on each piecethen attempts to concatenate the pieces together returning to the tipping data set abovesuppose you wanted to select the top five tip_pct values by group firstit' straightforward to write function that selects the rows with the largest values in particular columnin [ ]def top(dfn= column='tip_pct')return df sort_index(by=column)[- :in [ ]top(tipsn= out[ ]total_bill tip sex smoker female yes male yes male no female yes female yes male yes day sat sun sat sat sun sun time size tip_pct dinner dinner dinner dinner dinner dinner nowif we group by smokersayand call apply with this functionwe get the followingin [ ]tips groupby('smoker'apply(topout[ ]total_bill tip sex smoker smoker day data aggregation and group operations www it-ebooks info time size tip_pct |
12,028 | yes male male female male male female male female female male no thur lunch no sun dinner no sun dinner no thur lunch no sat dinner yes sat dinner yes sun dinner yes sat dinner yes sun dinner yes sun dinner what has happened herethe top function is called on each piece of the dataframethen the results are glued together using pandas concatlabeling the pieces with the group names the result therefore has hierarchical index whose inner level contains index values from the original dataframe if you pass function to apply that takes other arguments or keywordsyou can pass these after the functionin [ ]tips groupby(['smoker''day']apply(topn= column='total_bill'out[ ]total_bill tip sex smoker day time size tip_pct smoker day no fri female no fri dinner sat male no sat dinner sun male no sun dinner thur male no thur lunch yes fri male yes fri dinner sat male yes sat dinner sun male yes sun dinner thur female yes thur lunch beyond these basic usage mechanicsgetting the most out of apply is largely matter of creativity what occurs inside the function passed is up to youit only needs to return pandas object or scalar value the rest of this will mainly consist of examples showing you how to solve various problems using groupby you may recall above called describe on groupby objectin [ ]result tips groupby('smoker')['tip_pct'describe(in [ ]result out[ ]smoker no count mean std min max group-wise operations and transformations www it-ebooks info |
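A sketch of the top helper and of passing extra arguments through apply; sort_values stands in for the older sort_index(by=...) spelling used at the time:

    def top(df, n=5, column='tip_pct'):
        # the n rows with the largest values of the given column
        return df.sort_values(by=column)[-n:]

    top(tips, n=6)

    # apply calls top on each group's chunk and glues the results together
    tips.groupby('smoker').apply(top)

    # extra positional and keyword arguments are passed after the function
    tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')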
12,029 | count mean std min max in [ ]result unstack('smoker'out[ ]smoker no yes count mean std min max inside groupbywhen you invoke method like describeit is actually just shortcut forf lambda xx describe(grouped apply(fsuppressing the group keys in the examples aboveyou see that the resulting object has hierarchical index formed from the group keys along with the indexes of each piece of the original object this can be disabled by passing group_keys=false to groupbyin [ ]tips groupby('smoker'group_keys=falseapply(topout[ ]total_bill tip sex smoker day time size tip_pct male no thur lunch male no sun dinner female no sun dinner male no thur lunch male no sat dinner female yes sat dinner male yes sun dinner female yes sat dinner female yes sun dinner male yes sun dinner quantile and bucket analysis as you may recall from pandas has some toolsin particular cut and qcutfor slicing data up into buckets with bins of your choosing or by sample quantiles combining these functions with groupbyit becomes very simple to perform bucket or data aggregation and group operations www it-ebooks info |
12,030 | bucket categorization using cutin [ ]frame dataframe({'data 'np random randn( )'data 'np random randn( )}in [ ]factor pd cut(frame data in [ ]factor[: out[ ]categoricalarray([(- ](- - ](- ]( ](- ]( ](- ](- ]( ]( ]]dtype=objectlevels ( )index([(- - ](- ]( ]( ]]dtype=objectthe factor object returned by cut can be passed directly to groupby so we could compute set of statistics for the data column like soin [ ]def get_stats(group)return {'min'group min()'max'group max()'count'group count()'mean'group mean()in [ ]grouped frame data groupby(factorin [ ]grouped apply(get_statsunstack(out[ ]count max mean min data (- - - (- - - - ( - ( - these were equal-length bucketsto compute equal-size buckets based on sample quantilesuse qcut 'll pass labels=false to just get quantile numbers return quantile numbers in [ ]grouping pd qcut(frame data labels=falsein [ ]grouped frame data groupby(groupingin [ ]grouped apply(get_statsunstack(out[ ]count max mean min - - - - - - - - - - - - - - - group-wise operations and transformations www it-ebooks info |
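A hedged sketch of the bucketed statistics above, with the sample size and bucket counts assumed (four equal-length bins with cut, ten sample-quantile bins with qcut):

    frame = pd.DataFrame({'data1': np.random.randn(1000),
                          'data2': np.random.randn(1000)})

    # four equal-length buckets spanning the range of data1
    factor = pd.cut(frame.data1, 4)

    def get_stats(group):
        return {'min': group.min(), 'max': group.max(),
                'count': group.count(), 'mean': group.mean()}

    grouped = frame.data2.groupby(factor)
    grouped.apply(get_stats).unstack()

    # ten equal-size buckets based on sample quantiles; labels=False returns
    # the bucket number rather than an interval label
    grouping = pd.qcut(frame.data1, 10, labels=False)
    frame.data2.groupby(grouping).apply(get_stats).unstack()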
12,031 | when cleaning up missing datain some cases you will filter out data observations using dropnabut in others you may want to impute (fill inthe na values using fixed value or some value derived from the data fillna is the right tool to usefor example here fill in na values with the meanin [ ] series(np random randn( )in [ ] [:: np nan in [ ] out[ ] nan - nan - nan in [ ] fillna( mean()out[ ] - - - - - suppose you need the fill value to vary by group as you may guessyou need only group the data and use apply with function that calls fillna on each data chunk here is some sample data on some us states divided into eastern and western statesin [ ]states ['ohio''new york''vermont''florida''oregon''nevada''california''idaho'in [ ]group_key ['east' ['west' in [ ]data series(np random randn( )index=statesin [ ]data[['vermont''nevada''idaho']np nan in [ ]data out[ ]ohio new york - vermont nan florida - oregon nevada nan california idaho nan in [ ]data groupby(group_keymean(out[ ] data aggregation and group operations www it-ebooks info |
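A sketch of filling missing data with group-dependent values, continuing from the states example (the fill values in the second variant are assumptions):

    states = ['Ohio', 'New York', 'Vermont', 'Florida',
              'Oregon', 'Nevada', 'California', 'Idaho']
    group_key = ['East'] * 4 + ['West'] * 4

    data = pd.Series(np.random.randn(8), index=states)
    data[['Vermont', 'Nevada', 'Idaho']] = np.nan

    # fill each group's NAs with that group's own mean
    fill_mean = lambda g: g.fillna(g.mean())
    data.groupby(group_key).apply(fill_mean)

    # or use pre-defined fill values, looked up via the group's name attribute
    fill_values = {'East': 0.5, 'West': -1}
    fill_func = lambda g: g.fillna(fill_values[g.name])
    data.groupby(group_key).apply(fill_func)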
12,032 | west - we can fill the na values using the group means like soin [ ]fill_mean lambda gg fillna( mean()in [ ]data groupby(group_keyapply(fill_meanout[ ]ohio new york - vermont - florida - oregon nevada california idaho in another caseyou might have pre-defined fill values in your code that vary by group since the groups have name attribute set internallywe can use thatin [ ]fill_values {'east' 'west'- in [ ]fill_func lambda gg fillna(fill_values[ name]in [ ]data groupby(group_keyapply(fill_funcout[ ]ohio new york - vermont florida - oregon nevada - california idaho - examplerandom sampling and permutation suppose you wanted to draw random sample (with or without replacementfrom large dataset for monte carlo simulation purposes or some other application there are number of ways to perform the "draws"some are much more efficient than others one way is to select the first elements of np random permutation( )where is the size of your complete dataset and the desired sample size as more fun examplehere' way to construct deck of english-style playing cardsheartsspadesclubsdiamonds suits [' '' '' '' 'card_val (range( [ base_names [' 'range( [' '' '' 'cards [for suit in [' '' '' '' ']cards extend(str(numsuit for num in base_namesdeck series(card_valindex=cardsgroup-wise operations and transformations www it-ebooks info |
12,033 | the ones used in blackjack and other games (to keep things simplei just let the ace be )in [ ]deck[: out[ ]ah jh kh qh nowbased on what said abovedrawing hand of cards from the desk could be written asin [ ]def draw(deckn= )return deck take(np random permutation(len(deck))[: ]in [ ]draw(deckout[ ]ad kc suppose you wanted two random cards from each suit because the suit is the last character of each card namewe can group based on this and use applyin [ ]get_suit lambda cardcard[- last letter is suit in [ ]deck groupby(get_suitapply(drawn= out[ ] kd kh alternatively in [ ]deck groupby(get_suitgroup_keys=falseapply(drawn= out[ ]kc jc ad data aggregation and group operations www it-ebooks info |
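The deck construction lost its literals above; here is a hedged reconstruction of the card-sampling example, with Series.sample standing in for the book's permutation-and-take approach (face cards count 10, and the ace is kept at 1 for simplicity):

    suits = ['H', 'S', 'C', 'D']                      # hearts, spades, clubs, diamonds
    card_val = (list(range(1, 11)) + [10] * 3) * 4    # 13 card values per suit
    base_names = ['A'] + list(range(2, 11)) + ['J', 'K', 'Q']
    cards = []
    for suit in suits:
        cards.extend(str(num) + suit for num in base_names)

    deck = pd.Series(card_val, index=cards)

    def draw(deck, n=5):
        # a random hand of n cards, drawn without replacement
        return deck.sample(n)

    draw(deck)

    # two random cards from each suit: group on the last character of the name
    get_suit = lambda card: card[-1]
    deck.groupby(get_suit, group_keys=False).apply(draw, n=2)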
12,034 | ks examplegroup weighted average and correlation under the split-apply-combine paradigm of groupbyoperations between columns in dataframe or two seriessuch group weighted averagebecome routine affair as an exampletake this dataset containing group keysvaluesand some weightsin [ ]df dataframe({'category'[' '' '' '' '' '' '' '' ']'data'np random randn( )'weights'np random rand( )}in [ ]df out[ ]category data - - - - weights the group weighted average by category would then bein [ ]grouped df groupby('category'in [ ]get_wavg lambda gnp average( ['data']weights= ['weights']in [ ]grouped apply(get_wavgout[ ]category - as less trivial exampleconsider data set from yahoofinance containing end of day prices for few stocks and the & index (the spx ticker)in [ ]close_px pd read_csv('ch /stock_px csv'parse_dates=trueindex_col= in [ ]close_px out[ ]datetimeindex entries : : to : : data columnsaapl non-null values msft non-null values xom non-null values spx non-null values dtypesfloat ( group-wise operations and transformations www it-ebooks info |
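A hedged sketch of the group weighted average and of loading the stock-price data used next; the group sizes, column names, and file path are assumptions:

    df = pd.DataFrame({'category': ['a'] * 4 + ['b'] * 4,
                       'data': np.random.randn(8),
                       'weights': np.random.rand(8)})

    # weighted average of 'data' within each category, weighted by 'weights'
    get_wavg = lambda g: np.average(g['data'], weights=g['weights'])
    df.groupby('category').apply(get_wavg)

    # daily stock prices plus the S&P 500 (SPX) index, indexed by date
    close_px = pd.read_csv('ch09/stock_px.csv', parse_dates=True, index_col=0)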
12,035 | out[ ]aapl msft xom spx one task of interest might be to compute dataframe consisting of the yearly correlations of daily returns (computed from percent changeswith spx here is one way to do itin [ ]rets close_px pct_change(dropna(in [ ]spx_corr lambda xx corrwith( ['spx']in [ ]by_year rets groupby(lambda xx yearin [ ]by_year apply(spx_corrout[ ]aapl msft xom spx there isof coursenothing to stop you from computing inter-column correlationsannual correlation of apple with microsoft in [ ]by_year apply(lambda gg['aapl'corr( ['msft'])out[ ] examplegroup-wise linear regression in the same vein as the previous exampleyou can use groupby to perform more complex group-wise statistical analysisas long as the function returns pandas object or scalar value for examplei can define the following regress function (using the statsmo dels econometrics librarywhich executes an ordinary least squares (olsregression on each chunk of data data aggregation and group operations www it-ebooks info |
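A hedged reconstruction of the yearly correlations and of the regress function described here, using sm.OLS from statsmodels (the import style is an assumption):

    import statsmodels.api as sm

    # yearly correlation of each stock's daily returns with SPX
    rets = close_px.pct_change().dropna()
    spx_corr = lambda x: x.corrwith(x['SPX'])
    by_year = rets.groupby(lambda x: x.year)
    by_year.apply(spx_corr)

    # inter-column correlation works the same way, e.g. Apple with Microsoft
    by_year.apply(lambda g: g['AAPL'].corr(g['MSFT']))

    def regress(data, yvar, xvars):
        # ordinary least squares of yvar on xvars, fit separately per group
        Y = data[yvar]
        X = data[xvars].copy()
        X['intercept'] = 1.
        return sm.OLS(Y, X).fit().params

    # yearly regression of AAPL daily returns on SPX returns
    by_year.apply(regress, 'AAPL', ['SPX'])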
12,036 | def regress(datayvarxvars) data[yvarx data[xvarsx['intercept' result sm ols(yxfit(return result params nowto run yearly linear regression of aapl on spx returnsi executein [ ]by_year apply(regress'aapl'['spx']out[ ]spx intercept - pivot tables and cross-tabulation pivot table is data summarization tool frequently found in spreadsheet programs and other data analysis software it aggregates table of data by one or more keysarranging the data in rectangle with some of the group keys along the rows and some along the columns pivot tables in python with pandas are made possible using the groupby facility described in this combined with reshape operations utilizing hierarchical indexing dataframe has pivot_table methodand additionally there is top-level pandas pivot_table function in addition to providing convenience interface to groupbypivot_table also can add partial totalsalso known as margins returning to the tipping data setsuppose wanted to compute table of group means (the default pivot_table aggregation typearranged by sex and smoker on the rowsin [ ]tips pivot_table(rows=['sex''smoker']out[ ]size tip tip_pct total_bill sex smoker female no yes male no yes this could have been easily produced using groupby nowsuppose we want to aggregate only tip_pct and sizeand additionally group by day 'll put smoker in the table columns and day in the rowsin [ ]tips pivot_table(['tip_pct''size']rows=['sex''day']cols='smoker'out[ ]pivot tables and cross-tabulation www it-ebooks info |
12,037 | sex day female fri sat sun thur male fri sat sun thur tip_pct no yes size no yes this table could be augmented to include partial totals by passing margins=true this has the effect of adding all row and column labelswith corresponding values being the group statistics for all the data within single tier in this below examplethe all values are means without taking into account smoker vs non-smoker (the all columnsor any of the two levels of grouping on the rows (the all row)in [ ]tips pivot_table(['tip_pct''size']rows=['sex''day']cols='smoker'margins=trueout[ ]size tip_pct smoker no yes all no yes all sex day female fri sat sun thur male fri sat sun thur all to use different aggregation functionpass it to aggfunc for example'countor len will give you cross-tabulation (count or frequencyof group sizesin [ ]tips pivot_table('tip_pct'rows=['sex''smoker']cols='day'aggfunc=lenmargins=trueout[ ]day fri sat sun thur all sex smoker female no yes male no yes all if some combinations are empty (or otherwise na)you may wish to pass fill_valuein [ ]tips pivot_table('size'rows=['time''sex''smoker']cols='day'aggfunc='sum'fill_value= out[ ]day fri sat sun thur time sex smoker dinner female no data aggregation and group operations www it-ebooks info |
12,038 | no yes female no yes male no yes male lunch see table - for summary of pivot_table methods table - pivot_table options function name description values column name or names to aggregate by default aggregates all numeric columns rows column names or other group keys to group on the rows of the resulting pivot table cols column names or other group keys to group on the columns of the resulting pivot table aggfunc aggregation function or list of functions'meanby default can be any function valid in groupby context fill_value replace missing values in result table margins add row/column subtotals and grand totalfalse by default cross-tabulationscrosstab cross-tabulation (or crosstab for shortis special case of pivot table that computes group frequencies here is canonical example taken from the wikipedia page on crosstabulationin [ ]data out[ ]sample gender female male female male male male female female male female handedness right-handed left-handed right-handed right-handed left-handed right-handed right-handed left-handed right-handed right-handed as part of some survey analysiswe might want to summarize this data by gender and handedness you could use pivot_table to do thisbut the pandas crosstab function is very convenientin [ ]pd crosstab(data genderdata handednessmargins=trueout[ ]handedness left-handed right-handed all gender female male all pivot tables and cross-tabulation www it-ebooks info |
12,039 | arrays as in the tips datain [ ]pd crosstab([tips timetips day]tips smokermargins=trueout[ ]smoker no yes all time day dinner fri sat sun thur lunch fri thur all example federal election commission database the us federal election commission publishes data on contributions to political campaigns this includes contributor namesoccupation and employeraddressand contribution amount an interesting dataset is from the us presidential election (dataset for all states is megabyte csv file -all csvwhich can be loaded with pandas read_csvin [ ]fec pd read_csv('ch / -all csv'in [ ]fec out[ ]int index entries to data columnscmte_id non-null values cand_id non-null values cand_nm non-null values contbr_nm non-null values contbr_city non-null values contbr_st non-null values contbr_zip non-null values contbr_employer non-null values contbr_occupation non-null values contb_receipt_amt non-null values contb_receipt_dt non-null values receipt_desc non-null values memo_cd non-null values memo_text non-null values form_tp non-null values file_num non-null values dtypesfloat ( )int ( )object( sample record in the dataframe looks like thisin [ ]fec ix[ out[ ]cmte_id data aggregation and group operations www it-ebooks info |
12,040 | cand_nm contbr_nm contbr_city contbr_st contbr_zip contbr_employer contbr_occupation contb_receipt_amt contb_receipt_dt receipt_desc memo_cd memo_text form_tp file_num name obamabarack ellmanira tempe az arizona state university professor -dec- nan nan nan sa you can probably think of many ways to start slicing and dicing this data to extract informative statistics about donors and patterns in the campaign contributions 'll spend the next several pages showing you number of different analyses that apply techniques you have learned about so far you can see that there are no political party affiliations in the dataso this would be useful to add you can get list of all the unique political candidates using unique (note that numpy suppresses the quotes around the strings in the output)in [ ]unique_cands fec cand_nm unique(in [ ]unique_cands out[ ]array([bachmannmichelleromneymittobamabarackroemercharles 'buddyiiipawlentytimothyjohnsongary earlpaulronsantorumrickcainhermangingrichnewtmccotterthaddeus ghuntsmanjonperryrick]dtype=objectin [ ]unique_cands[ out[ ]'obamabarackan easy way to indicate party affiliation is using dict: parties {'bachmannmichelle''republican''cainherman''republican''gingrichnewt''republican''huntsmanjon''republican''johnsongary earl''republican''mccotterthaddeus ''republican''obamabarack''democrat''paulron''republican''pawlentytimothy''republican''perryrick''republican'"roemercharles 'buddyiii"'republican' this makes the simplifying assumption that gary johnson is republican even though he later became the libertarian party candidate example federal election commission database www it-ebooks info |
12,041 | 'santorumrick''republican'nowusing this mapping and the map method on series objectsyou can compute an array of political parties from the candidate namesin [ ]fec cand_nm[ : out[ ] obamabarack obamabarack obamabarack obamabarack obamabarack namecand_nm in [ ]fec cand_nm[ : map(partiesout[ ] democrat democrat democrat democrat democrat namecand_nm add it as column in [ ]fec['party'fec cand_nm map(partiesin [ ]fec['party'value_counts(out[ ]democrat republican couple of data preparation points firstthis data includes both contributions and refunds (negative contribution amount)in [ ](fec contb_receipt_amt value_counts(out[ ]true false to simplify the analysisi'll restrict the data set to positive contributionsin [ ]fec fec[fec contb_receipt_amt since barack obama and mitt romney are the main two candidatesi'll also prepare subset that just has contributions to their campaignsin [ ]fec_mrbo fec[fec cand_nm isin(['obamabarack''romneymitt'])donation statistics by occupation and employer donations by occupation is another oft-studied statistic for examplelawyers (attorneystend to donate more money to democratswhile business executives tend to donate more to republicans you have no reason to believe meyou can see for yourself in the data firstthe total number of donations by occupation is easy data aggregation and group operations www it-ebooks info |
12,042 | out[ ]retired information requested attorney homemaker physician information requested per best efforts engineer teacher consultant professor you will notice by looking at the occupations that many refer to the same basic job typeor there are several variants of the same thing here is code snippet illustrates technique for cleaning up few of them by mapping from one occupation to anothernote the "trickof using dict get to allow occupations with no mapping to "pass through"occ_mapping 'information requested per best efforts'not provided''information requested'not provided''information requested (best efforts)'not provided'' ''ceoif no mapping providedreturn lambda xocc_mapping get(xxfec contbr_occupation fec contbr_occupation map(fi'll also do the same thing for employersemp_mapping 'information requested per best efforts'not provided''information requested'not provided''self'self-employed''self employed'self-employed'if no mapping providedreturn lambda xemp_mapping get(xxfec contbr_employer fec contbr_employer map(fnowyou can use pivot_table to aggregate the data by party and occupationthen filter down to the subset that donated at least $ million overallin [ ]by_occupation fec pivot_table('contb_receipt_amt'rows='contbr_occupation'cols='party'aggfunc='sum'in [ ]over_ mm by_occupation[by_occupation sum( in [ ]over_ mm out[ ]party contbr_occupation democrat republican example federal election commission database www it-ebooks info |
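A hedged reconstruction of the occupation/employer cleanup and the by-party pivot; the exact mapping strings and the $2 million cutoff follow the text but should be treated as assumptions, and index=/columns= replace the older rows=/cols= pivot_table keywords:

    occ_mapping = {
        'INFORMATION REQUESTED PER BEST EFFORTS': 'NOT PROVIDED',
        'INFORMATION REQUESTED': 'NOT PROVIDED',
        'INFORMATION REQUESTED (BEST EFFORTS)': 'NOT PROVIDED',
        'C.E.O.': 'CEO',
    }
    # dict.get lets occupations with no mapping pass through unchanged
    f = lambda x: occ_mapping.get(x, x)
    fec.contbr_occupation = fec.contbr_occupation.map(f)

    emp_mapping = {
        'INFORMATION REQUESTED PER BEST EFFORTS': 'NOT PROVIDED',
        'INFORMATION REQUESTED': 'NOT PROVIDED',
        'SELF': 'SELF-EMPLOYED',
        'SELF EMPLOYED': 'SELF-EMPLOYED',
    }
    f = lambda x: emp_mapping.get(x, x)
    fec.contbr_employer = fec.contbr_employer.map(f)

    # total contribution amount by occupation and party, keeping only
    # occupations that gave at least $2 million overall
    by_occupation = fec.pivot_table('contb_receipt_amt',
                                    index='contbr_occupation',
                                    columns='party', aggfunc='sum')
    over_2mm = by_occupation[by_occupation.sum(axis=1) > 2000000]
    over_2mm.plot(kind='barh')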
12,043 | ceo consultant engineer executive homemaker investor lawyer manager not provided owner physician president professor real estate retired self-employed it can be easier to look at this data graphically as bar plot ('barhmeans horizontal bar plotsee figure - )in [ ]over_ mm plot(kind='barh'figure - total donations by party for top occupations you might be interested in the top donor occupations or top companies donating to obama and romney to do thisyou can group by candidate name and use variant of the top method from earlier in the def get_top_amounts(groupkeyn= )totals group groupby(key)['contb_receipt_amt'sum(order totals by key in descending order return totals order(ascending=false)[- : data aggregation and group operations www it-ebooks info |
12,044 | in [ ]grouped fec_mrbo groupby('cand_nm'in [ ]grouped apply(get_top_amounts'contbr_occupation' = out[ ]cand_nm contbr_occupation obamabarack retired attorney not provided homemaker physician lawyer consultant romneymitt retired not provided homemaker attorney president executive namecontb_receipt_amt in [ ]grouped apply(get_top_amounts'contbr_employer' = out[ ]cand_nm contbr_employer obamabarack retired self-employed not employed not provided homemaker student volunteer microsoft sidley austin llp refused romneymitt not provided retired homemaker self-employed student credit suisse morgan stanley goldman sach co barclays capital capital namecontb_receipt_amt bucketing donation amounts useful way to analyze this data is to use the cut function to discretize the contributor amounts into buckets by contribution sizein [ ]bins np array([ ]example federal election commission database www it-ebooks info |
12,045 | in [ ]labels out[ ]categorical:contb_receipt_amt array([( ]( ]( ]( ]( ]( ]]dtype=objectlevels ( )array([( ]( ]( ]( ]( ]( ]( ]( ]]dtype=objectwe can then group the data for obama and romney by name and bin label to get histogram by donation sizein [ ]grouped fec_mrbo groupby(['cand_nm'labels]in [ ]grouped size(unstack( out[ ]cand_nm obamabarack contb_receipt_amt ( ( ( ( ( ( ( ( romneymitt nan nan this data shows that obama has received significantly larger number of small donations than romney you can also sum the contribution amounts and normalize within buckets to visualize percentage of total donations of each size by candidatein [ ]bucket_sums grouped contb_receipt_amt sum(unstack( in [ ]bucket_sums out[ ]cand_nm obamabarack contb_receipt_amt ( ( ( ( ( ( ( ( romneymitt nan nan in [ ]normed_sums bucket_sums div(bucket_sums sum(axis= )axis= in [ ]normed_sums out[ ]cand_nm obamabarack contb_receipt_amt ( ( ( romneymitt data aggregation and group operations www it-ebooks info |
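A hedged sketch of bucketing the donation amounts and normalizing within each bucket; the bin edges (powers of ten up to ten million) are an assumption recovered from context:

    bins = np.array([0, 1, 10, 100, 1000, 10000,
                     100000, 1000000, 10000000])
    labels = pd.cut(fec_mrbo.contb_receipt_amt, bins)

    grouped = fec_mrbo.groupby(['cand_nm', labels])

    # number of donations per candidate in each size bucket
    grouped.size().unstack(0)

    # share of each bucket's total dollars going to each candidate
    bucket_sums = grouped.contb_receipt_amt.sum().unstack(0)
    normed_sums = bucket_sums.div(bucket_sums.sum(axis=1), axis=0)

    # exclude the two largest bins (not individual donations) and plot
    normed_sums[:-2].plot(kind='barh', stacked=True)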
12,046 | ( ( ( ( nan nan in [ ]normed_sums[:- plot(kind='barh'stacked=truei excluded the two largest bins as these are not donations by individuals see figure - for the resulting figure figure - percentage of total donations received by candidates for each donation size there are of course many refinements and improvements of this analysis for exampleyou could aggregate donations by donor name and zip code to adjust for donors who gave many small amounts versus one or more large donations encourage you to download it and explore it yourself donation statistics by state aggregating the data by candidate and state is routine affairin [ ]grouped fec_mrbo groupby(['cand_nm''contbr_st']in [ ]totals grouped contb_receipt_amt sum(unstack( fillna( in [ ]totals totals[totals sum( in [ ]totals[: out[ ]cand_nm obamabarack contbr_st romneymitt example federal election commission database www it-ebooks info |
12,047 | al ar az ca co ct dc de fl if you divide each row by the total contribution amountyou get the relative percentage of total donations by state for each candidatein [ ]percent totals div(totals sum( )axis= in [ ]percent[: out[ ]cand_nm obamabarack contbr_st ak al ar az ca co ct dc de fl romneymitt thought it would be interesting to look at this data plotted on mapusing ideas from after locating shape file for the state boundaries (atlasftp html?open=chpboundand learning bit more about matplotlib and its basemap toolkit ( was aided by blog posting from thomas lecocq) ended up with the following code for plotting these relative percentagesfrom mpl_toolkits basemap import basemapcm import numpy as np from matplotlib import rcparams from matplotlib collections import linecollection import matplotlib pyplot as plt from shapelib import shapefile import dbflib obama percent['obamabarack'fig plt figure(figsize=( )ax fig add_axes([ , , , ]lllat urlat lllon - urlon - data aggregation and group operations www it-ebooks info |
12,048 | lon_ =(urlon lllon lat_ =(urlat lllat llcrnrlat=lllaturcrnrlat=urlatllcrnrlon=lllonurcrnrlon=urlonresolution=' ' drawcoastlines( drawcountries(shp shapefile(/states/statesp 'dbf dbflib open(/states/statesp 'for npoly in range(shp info()[ ])draw colored polygons on the map shpsegs [shp_object shp read_object(npolyverts shp_object vertices(rings len(vertsfor ring in range(rings)lonslats zip(*verts[ring]xy (lonslatsshpsegs append(zip( , )if ring = shapedict dbf read_record(npolyname shapedict['state'lines linecollection(shpsegs,antialiaseds=( ,)state_to_code dicte 'alaska-'ak'omitted tryper obama[state_to_code[name upper()]except keyerrorcontinue lines set_facecolors(' 'lines set_alpha( pershrink the percentage bit lines set_edgecolors(' 'lines set_linewidth( ax add_collection(linesplt show(see figure - for the result example federal election commission database www it-ebooks info |
12,049 | [figure: map of relative Obama donation percentages by US state, produced by the plotting code on the previous page] |
12,050 | Time Series

Time series data is an important form of structured data in many different fields, such as finance, economics, ecology, neuroscience, or physics. Anything that is observed or measured at many points in time forms a time series. Many time series are fixed frequency, which is to say that data points occur at regular intervals according to some rule, such as every few seconds, every few minutes, or once per month. Time series can also be irregular, without a fixed unit of time or offset between units. How you mark and refer to time series data depends on the application, and you may have one of the following:

- Timestamps: specific instants in time
- Fixed periods: such as a particular month or a full calendar year
- Intervals of time: indicated by a start and end timestamp. Periods can be thought of as special cases of intervals
- Experiment or elapsed time: each timestamp is a measure of time relative to a particular start time. For example, the diameter of a cookie baking each second since being placed in the oven

In this chapter, I am mainly concerned with time series in the first three categories, though many of the techniques can be applied to experimental time series where the index may be an integer or floating point number indicating elapsed time from the start of the experiment. The simplest and most widely used kind of time series are those indexed by timestamp.

pandas provides a standard set of time series tools and data algorithms. With this, you can efficiently work with very large time series and easily slice and dice, aggregate, and resample irregular and fixed frequency time series. As you might guess, many of these tools are especially useful for financial and economics applications, but you could certainly use them to analyze server log data, too.
12,051 | this were derived from the now defunct scikits timeseries library date and time data types and tools the python standard library includes data types for date and time dataas well as calendar-related functionality the datetimetimeand calendar modules are the main places to start the datetime datetime typeor simply datetimeis widely usedin [ ]from datetime import datetime in [ ]now datetime now(in [ ]now out[ ]datetime datetime( in [ ]now yearnow monthnow day out[ ]( datetime stores both the date and time down to the microsecond datetime time delta represents the temporal difference between two datetime objectsin [ ]delta datetime( datetime( in [ ]delta out[ ]datetime timedelta( in [ ]delta days out[ ] in [ ]delta seconds out[ ] you can add (or subtracta timedelta or multiple thereof to datetime object to yield new shifted objectin [ ]from datetime import timedelta in [ ]start datetime( in [ ]start timedelta( out[ ]datetime datetime( in [ ]start timedelta( out[ ]datetime datetime( the data types in the datetime module are summarized in table - while this is mainly concerned with the data types in pandas and higher level time series manipulationyou will undoubtedly encounter the datetime-based types in many other places in python the wild time series www it-ebooks info |
12,052 | type description date store calendar date (yearmonthdayusing the gregorian calendar time store time of day as hoursminutessecondsand microseconds datetime stores both date and time timedelta represents the difference between two datetime values (as dayssecondsand microsecondsconverting between string and datetime datetime objects and pandas timestamp objectswhich 'll introduce latercan be formatted as strings using str or the strftime methodpassing format specificationin [ ]stamp datetime( in [ ]str(stampout[ ] : : in [ ]stamp strftime('% -% -% 'out[ ]'see table - for complete list of the format codes these same format codes can be used to convert strings to dates using datetime strptimein [ ]value 'in [ ]datetime strptime(value'% -% -% 'out[ ]datetime datetime( in [ ]datestrs [''''in [ ][datetime strptime( '% /% /% 'for in datestrsout[ ][datetime datetime( )datetime datetime( )datetime strptime is the best way to parse date with known format howeverit can be bit annoying to have to write format spec each timeespecially for common date formats in this caseyou can use the parser parse method in the third party dateutil packagein [ ]from dateutil parser import parse in [ ]parse(''out[ ]datetime datetime( dateutil is capable of parsing almost any human-intelligible date representationin [ ]parse('jan : pm'out[ ]datetime datetime( in international localesday appearing before month is very commonso you can pass dayfirst=true to indicate thisin [ ]parse(''dayfirst=trueout[ ]datetime datetime( date and time data types and tools www it-ebooks info |
12,053 | axis index or column in dataframe the to_datetime method parses many different kinds of date representations standard date formats like iso can be parsed very quickly in [ ]datestrs out[ ][''''in [ ]pd to_datetime(datestrsout[ ] : : : : length freqnonetimezonenone it also handles values that should be considered missing (noneempty stringetc )in [ ]idx pd to_datetime(datestrs [none]in [ ]idx out[ ] : : natlength freqnonetimezonenone in [ ]idx[ out[ ]nat in [ ]pd isnull(idxout[ ]array([falsefalsetrue]dtype=boolnat (not timeis pandas' na value for timestamp data dateutil parser is usefulbut not perfect tool notablyit will recognize some strings as dates that you might prefer that it didn'tlike ' will be parsed as the year with today' calendar date table - datetime format specification (iso compatibletype description % -digit year % -digit year % -digit month [ % -digit day [ % hour ( -hour clock[ % hour ( -hour clock[ % -digit minute [ % second [ (seconds account for leap seconds% weekday as integer [ (sunday) time series www it-ebooks info |
12,054 | description % week number of the year [ sunday is considered the first day of the weekand days before the first sunday of the year are "week % week number of the year [ monday is considered the first day of the weekand days before the first monday of the year are "week % utc time zone offset as +hhmm or -hhmmempty if time zone naive % shortcut for % -% -%dfor example % shortcut for % /% /%yfor example datetime objects also have number of locale-specific formatting options for systems in other countries or languages for examplethe abbreviated month names will be different on german or french systems compared with english systems table - locale-specific date formatting type description % abbreviated weekday name % full weekday name % abbreviated month name % full month name % full date and timefor example 'tue may : : pm% locale equivalent of am or pm % locale-appropriate formatted datee in us may yields '% locale-appropriate timee ' : : pmtime series basics the most basic kind of time series object in pandas is series indexed by timestampswhich is often represented external to pandas as python strings or datetime objectsin [ ]from datetime import datetime in [ ]dates [datetime( )datetime( )datetime( )datetime( )datetime( )datetime( )in [ ]ts series(np random randn( )index=datesin [ ]ts out[ ] - - time series basics www it-ebooks info |
12,055 | - - under the hoodthese datetime objects have been put in datetimeindexand the variable ts is now of type timeseriesin [ ]type(tsout[ ]pandas core series timeseries in [ ]ts index out[ ] : : : : length freqnonetimezonenone it' not necessary to use the timeseries constructor explicitlywhen creating series with datetimeindexpandas knows that the object is time series like other seriesarithmetic operations between differently-indexed time series automatically align on the datesin [ ]ts ts[:: out[ ] nan - nan - nan pandas stores timestamps using numpy' datetime data type at the nanosecond resolutionin [ ]ts index dtype out[ ]dtype('datetime [ns]'scalar values from datetimeindex are pandas timestamp objects in [ ]stamp ts index[ in [ ]stamp out[ ] timestamp can be substituted anywhere you would use datetime object additionallyit can store frequency information (if anyand understands how to do time zone conversions and other kinds of manipulations more on both of these things later indexingselectionsubsetting timeseries is subclass of series and thus behaves in the same way with regard to indexing and selecting data based on label time series www it-ebooks info |
12,056 | in [ ]ts[stampout[ ]- as convenienceyou can also pass string that is interpretable as datein [ ]ts[''out[ ]- in [ ]ts[' 'out[ ]- for longer time seriesa year or only year and month can be passed to easily select slices of datain [ ]longer_ts series(np random randn( )index=pd date_range(''periods= )in [ ]longer_ts out[ ] - - - - freqdlength in [ ]longer_ts[' 'out[ ]- - - - freqdlength in [ ]longer_ts[' - 'out[ ] - - - - - freqdlength slicing with dates works just like with regular seriesin [ ]ts[datetime( ):out[ ]- - - - because most time series data is ordered chronologicallyyou can slice with timestamps not contained in time series to perform range queryin [ ]ts out[ ] in [ ]ts['':''out[ ]- time series basics www it-ebooks info |
12,057 | - - - - - - as before you can pass either string datedatetimeor timestamp remember that slicing in this manner produces views on the source time series just like slicing numpy arrays there is an equivalent instance method truncate which slices timeseries between two datesin [ ]ts truncate(after=''out[ ] - - all of the above holds true for dataframe as wellindexing on its rowsin [ ]dates pd date_range(''periods= freq=' -wed'in [ ]long_df dataframe(np random randn( )index=datescolumns=['colorado''texas''new york''ohio']in [ ]long_df ix[' - 'out[ ]colorado texas new york ohio - - - - - - - - - - time series with duplicate indices in some applicationsthere may be multiple data observations falling on particular timestamp here is an examplein [ ]dates pd datetimeindex(['''''''''']in [ ]dup_ts series(np arange( )index=datesin [ ]dup_ts out[ ] we can tell that the index is not unique by checking its is_unique property time series www it-ebooks info |
12,058 | out[ ]false indexing into this time series will now either produce scalar values or slices depending on whether timestamp is duplicatedin [ ]dup_ts[''not duplicated out[ ] in [ ]dup_ts[''duplicated out[ ] suppose you wanted to aggregate the data having non-unique timestamps one way to do this is to use groupby and pass level= (the only level of indexing!)in [ ]grouped dup_ts groupby(level= in [ ]grouped mean(out[ ] in [ ]grouped count(out[ ] date rangesfrequenciesand shifting generic time series in pandas are assumed to be irregularthat isthey have no fixed frequency for many applications this is sufficient howeverit' often desirable to work relative to fixed frequencysuch as dailymonthlyor every minuteseven if that means introducing missing values into time series fortunately pandas has full suite of standard time series frequencies and tools for resamplinginferring frequenciesand generating fixed frequency date ranges for examplein the example time seriesconverting it to be fixed daily frequency can be accomplished by calling resamplein [ ]ts out[ ] - - - - in [ ]ts resample(' 'out[ ] nan nan nan - - nan - nan - freqd conversion between frequencies or resampling is big enough topic to have its own section later here 'll show you how to use the base frequencies and multiples thereof date rangesfrequenciesand shifting www it-ebooks info |
12,059 | while used it previously without explanationyou may have guessed that pan das date_range is responsible for generating datetimeindex with an indicated length according to particular frequencyin [ ]index pd date_range(''''in [ ]index out[ ] : : : : length freqdtimezonenone by defaultdate_range generates daily timestamps if you pass only start or end dateyou must pass number of periods to generatein [ ]pd date_range(start=''periods= out[ ] : : : : length freqdtimezonenone in [ ]pd date_range(end=''periods= out[ ] : : : : length freqdtimezonenone the start and end dates define strict boundaries for the generated date index for exampleif you wanted date index containing the last business day of each monthyou would pass the 'bmfrequency (business end of monthand only dates falling on or inside the date interval will be includedin [ ]pd date_range(''''freq='bm'out[ ] : : : : length freqbmtimezonenone date_range by default preserves the time (if anyof the start or end timestampin [ ]pd date_range( : : 'periods= out[ ] : : : : length freqdtimezonenone sometimes you will have start or end dates with time information but want to generate set of timestamps normalized to midnight as convention to do thisthere is normalize optionin [ ]pd date_range( : : 'periods= normalize=trueout[ ] time series www it-ebooks info |
12,060 | length freqdtimezonenone frequencies and date offsets frequencies in pandas are composed of base frequency and multiplier base frequencies are typically referred to by string aliaslike 'mfor monthly or 'hfor hourly for each base frequencythere is an object defined generally referred to as date offset for examplehourly frequency can be represented with the hour classin [ ]from pandas tseries offsets import hourminute in [ ]hour hour(in [ ]hour out[ ]you can define multiple of an offset by passing an integerin [ ]four_hours hour( in [ ]four_hours out[ ]in most applicationsyou would never need to explicitly create one of these objectsinstead using string alias like 'hor ' hputting an integer before the base frequency creates multiplein [ ]pd date_range('' : 'freq=' 'out[ ] : : : : length freq htimezonenone many offsets can be combined together by additionin [ ]hour( minute( out[ ]similarlyyou can pass frequency strings like ' minwhich will effectively be parsed to the same expressionin [ ]pd date_range(''periods= freq=' min'out[ ] : : : : length freq ttimezonenone some frequencies describe points in time that are not evenly spaced for example' (calendar month endand 'bm(last business/weekday of monthdepend on the number of days in month andin the latter casewhether the month ends on weekend or not for lack of better termi call these anchored offsets see table - for listing of frequency codes and date offset classes available in pandas date rangesfrequenciesand shifting www it-ebooks info |
12,061 | logic not available in pandasthough the full details of that are outside the scope of this book table - base time series frequencies alias offset type description day calendar daily businessday business daily hour hourly or min minute minutely second secondly or ms milli millisecond ( / th of secondu micro microsecond ( / th of secondm monthend last calendar day of month bm businessmonthend last business day (weekdayof month ms monthbegin first calendar day of month bms businessmonthbegin first weekday of month -monw-tueweek weekly on given day of weekmontuewedthufrisator sun wom- monwom- monweekofmonth generate weekly dates in the firstsecondthirdor fourth week of the month for examplewom- fri for the rd friday of each month -janq-febquarterend quarterly dates anchored on last calendar day of each monthfor year ending in indicated monthjanfebmaraprmayjunjulaugsepoctnovor dec bq-janbq-febbusinessquarterend quarterly dates anchored on last weekday day of each monthfor year ending in indicated month qs-janqs-febquarterbegin quarterly dates anchored on first calendar day of each monthfor year ending in indicated month bqs-janbqs-febbusinessquarterbegin quarterly dates anchored on first weekday day of each monthfor year ending in indicated month -jana-febyearend annual dates anchored on last calendar day of given monthjanfebmaraprmayjunjulaugsepoctnovor dec ba-janba-febbusinessyearend annual dates anchored on last weekday of given month as-janas-febyearbegin annual dates anchored on first day of given month bas-janbas-febbusinessyearbegin annual dates anchored on first weekday of given month time series www it-ebooks info |
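To make the alias table above concrete, here is a small sketch passing a few of the listed frequency strings to date_range; the start date and number of periods are arbitrary choices for illustration, using the same pandas-era API as the rest of the chapter.

import pandas as pd

# a few aliases from the table above; dates and period counts are arbitrary
pd.date_range('1/1/2000', periods=4, freq='BMS')    # first weekday of each month
pd.date_range('1/1/2000', periods=4, freq='Q-JAN')  # quarter ends for a fiscal year ending in January
pd.date_range('1/1/2000', periods=4, freq='W-MON')  # weekly, anchored on Mondays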
12,062 | one useful frequency class is "week of month"starting with wom this enables you to get dates like the third friday of each monthin [ ]rng pd date_range(''''freq='wom- fri'in [ ]list(rngout[ ][traders of us equity options will recognize these dates as the standard dates of monthly expiry shifting (leading and laggingdata "shiftingrefers to moving data backward and forward through time both series and dataframe have shift method for doing naive shifts forward or backwardleaving the index unmodifiedin [ ]ts series(np random randn( )index=pd date_range(''periods= freq=' ')in [ ]ts out[ ]freqm in [ ]ts shift( out[ ]nan nan freqm in [ ]ts shift(- out[ ] nan nan freqm common use of shift is computing percent changes in time series or multiple time series as dataframe columns this is expressed as ts ts shift( because naive shifts leave the index unmodifiedsome data is discarded thus if the frequency is knownit can be passed to shift to advance the timestamps instead of simply the datain [ ]ts shift( freq=' 'out[ ] freqm date rangesfrequenciesand shifting www it-ebooks info |
12,063 | lag the datain [ ]ts shift( freq=' 'out[ ] in [ ]ts shift( freq=' 'out[ ] in [ ]ts shift( freq=' 'out[ ] : : : : : : : : shifting dates with offsets the pandas date offsets can also be used with datetime or timestamp objectsin [ ]from pandas tseries offsets import daymonthend in [ ]now datetime( in [ ]now day(out[ ]datetime datetime( if you add an anchored offset like monthendthe first increment will roll forward date to the next date according to the frequency rulein [ ]now monthend(out[ ]datetime datetime( in [ ]now monthend( out[ ]datetime datetime( anchored offsets can explicitly "rolldates forward or backward using their rollfor ward and rollback methodsrespectivelyin [ ]offset monthend(in [ ]offset rollforward(nowout[ ]datetime datetime( in [ ]offset rollback(nowout[ ]datetime datetime( clever use of date offsets is to use these methods with groupbyin [ ]ts series(np random randn( )index=pd date_range(''periods= freq=' ')in [ ]ts groupby(offset rollforwardmean(out[ ]- time series www it-ebooks info |
12,064 | - of coursean easier and faster way to do this is using resample (much more on this later)in [ ]ts resample(' 'how='mean'out[ ]- - freqm time zone handling working with time zones is generally considered one of the most unpleasant parts of time series manipulation in particulardaylight savings time (dsttransitions are common source of complication as suchmany time series users choose to work with time series in coordinated universal time or utcwhich is the successor to greenwich mean time and is the current international standard time zones are expressed as offsets from utcfor examplenew york is four hours behind utc during daylight savings time and hours the rest of the year in pythontime zone information comes from the rd party pytz librarywhich exposes the olson databasea compilation of world time zone information this is especially important for historical data because the dst transition dates (and even utc offsetshave been changed numerous times depending on the whims of local governments in the united states,the dst transition times have been changed many times since for detailed information about pytz libraryyou'll need to look at that library' documentation as far as this book is concernedpandas wraps pytz' functionality so you can ignore its api outside of the time zone names time zone names can be found interactively and in the docsin [ ]import pytz in [ ]pytz common_timezones[- :out[ ]['us/eastern''us/hawaii''us/mountain''us/pacific''utc'to get time zone object from pytzuse pytz timezonein [ ]tz pytz timezone('us/eastern'in [ ]tz out[ ]methods in pandas will accept either time zone names or these objects recommend just using the names time zone handling www it-ebooks info |
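If you do end up calling into pytz directly, its timezone objects can also localize naive datetime values themselves; a minimal sketch (the particular datetime is arbitrary and only for illustration):

from datetime import datetime
import pytz

tz = pytz.timezone('US/Eastern')
dt = tz.localize(datetime(2012, 3, 9, 9, 30))  # attach the zone to a naive datetime
dt_utc = dt.astimezone(pytz.utc)               # convert the aware datetime to UTC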
12,065 | by defaulttime series in pandas are time zone naive consider the following time seriesrng pd date_range( : 'periods= freq=' 'ts series(np random randn(len(rng))index=rngthe index' tz field is nonein [ ]print(ts index tznone date ranges can be generated with time zone setin [ ]pd date_range( : 'periods= freq=' 'tz='utc'out[ ] : : : : length freqdtimezoneutc conversion from naive to localized is handled by the tz_localize methodin [ ]ts_utc ts tz_localize('utc'in [ ]ts_utc out[ ] : : + : : : + : : : + : : : + : : : + : : : + : freqd - in [ ]ts_utc index out[ ] : : : : length freqdtimezoneutc once time series has been localized to particular time zoneit can be converted to another time zone using tz_convertin [ ]ts_utc tz_convert('us/eastern'out[ ] : : - : : : - : : : - : : : - : - : : - : : : - : freqd in the case of the above time serieswhich straddles dst transition in the us/eastern time zonewe could localize to est and convert tosayutc or berlin timein [ ]ts_eastern ts tz_localize('us/eastern' time series www it-ebooks info |
12,066 | out[ ] : : + : : : + : : : + : : : + : - : : + : : : + : freqd in [ ]ts_eastern tz_convert('europe/berlin'out[ ] : : + : : : + : : : + : : : + : - : : + : : : + : freqd tz_localize and tz_convert are also instance methods on datetimeindexin [ ]ts index tz_localize('asia/shanghai'out[ ] : : : : length freqdtimezoneasia/shanghai localizing naive timestamps also checks for ambiguous or non-existent times around daylight savings time transitions operations with time zone-aware timestamp objects similar to time series and date rangesindividual timestamp objects similarly can be localized from naive to time zone-aware and converted from one time zone to anotherin [ ]stamp pd timestamp( : 'in [ ]stamp_utc stamp tz_localize('utc'in [ ]stamp_utc tz_convert('us/eastern'out[ ]you can also pass time zone when creating the timestampin [ ]stamp_moscow pd timestamp( : 'tz='europe/moscow'in [ ]stamp_moscow out[ ]time zone-aware timestamp objects internally store utc timestamp value as nanoseconds since the unix epoch (january )this utc value is invariant between time zone conversionstime zone handling www it-ebooks info |
12,067 | out[ ] in [ ]stamp_utc tz_convert('us/eastern'value out[ ] when performing time arithmetic using pandas' dateoffset objectsdaylight savings time transitions are respected where possible minutes before dst transition in [ ]from pandas tseries offsets import hour in [ ]stamp pd timestamp( : 'tz='us/eastern'in [ ]stamp out[ ]in [ ]stamp hour(out[ ] minutes before dst transition in [ ]stamp pd timestamp( : 'tz='us/eastern'in [ ]stamp out[ ]in [ ]stamp hour(out[ ]operations between different time zones if two time series with different time zones are combinedthe result will be utc since the timestamps are stored under the hood in utcthis is straightforward operation and requires no conversion to happenin [ ]rng pd date_range( : 'periods= freq=' 'in [ ]ts series(np random randn(len(rng))index=rngin [ ]ts out[ ] : : : : : : : : : : : : : : : : : : : : freqb - - - - - - - - in [ ]ts ts[: tz_localize('europe/london' time series www it-ebooks info |
12,068 | in [ ]result ts ts in [ ]result index out[ ] : : : : length freqbtimezoneutc periods and period arithmetic periods represent time spanslike daysmonthsquartersor years the period class represents this data typerequiring string or integer and frequency from the above tablein [ ] pd period( freq=' -dec'in [ ] out[ ]period(' '' -dec'in this casethe period object represents the full timespan from january to december inclusive convenientlyadding and subtracting integers from periods has the effect of shifting by their frequencyin [ ] out[ ]period(' '' -dec'in [ ] out[ ]period(' '' -dec'if two periods have the same frequencytheir difference is the number of units between themin [ ]pd period(' 'freq=' -dec' out[ ] regular ranges of periods can be constructed using the period_range functionin [ ]rng pd period_range(''''freq=' 'in [ ]rng out[ ]freqm [ - - length the periodindex class stores sequence of periods and can serve as an axis index in any pandas data structurein [ ]series(np random randn( )index=rngout[ ] - - - - - - - - periods and period arithmetic www it-ebooks info |
12,069 | freqm if you have an array of stringsyou can also appeal to the periodindex class itselfin [ ]values [' '' '' 'in [ ]index pd periodindex(valuesfreq=' -dec'in [ ]index out[ ]freqq-dec [ length period frequency conversion periods and periodindex objects can be converted to another frequency using their asfreq method as an examplesuppose we had an annual period and wanted to convert it into monthly period either at the start or end of the year this is fairly straightforwardin [ ] pd period(' 'freq=' -dec'in [ ] asfreq(' 'how='start'out[ ]period(' - '' 'in [ ] asfreq(' 'how='end'out[ ]period(' - '' 'you can think of period(' '' -dec'as being cursor pointing to span of timesubdivided by monthly periods see figure - for an illustration of this for fiscal year ending on month other than decemberthe monthly subperiods belonging are differentin [ ] pd period(' 'freq=' -jun'in [ ] asfreq(' ''start'out[ ]period(' - '' 'in [ ] asfreq(' ''end'out[ ]period(' - '' 'when converting from high to low frequencythe superperiod will be determined depending on where the subperiod "belongsfor examplein -jun frequencythe month aug- is actually part of the periodin [ ] pd period(' - '' 'in [ ] asfreq(' -jun'out[ ]period(' '' -jun'whole periodindex objects or timeseries can be similarly converted with the same semanticsin [ ]rng pd period_range(' '' 'freq=' -dec'in [ ]ts series(np random randn(len(rng))index=rngin [ ]ts time series www it-ebooks info |
12,070 | - - freqa-dec in [ ]ts asfreq(' 'how='start'out[ ] - - - - - - freqm in [ ]ts asfreq(' 'how='end'out[ ]- - freqb figure - period frequency conversion illustration quarterly period frequencies quarterly data is standard in accountingfinanceand other fields much quarterly data is reported relative to fiscal year endtypically the last calendar or business day of one of the months of the year as suchthe period has different meaning depending on fiscal year end pandas supports all possible quarterly frequencies as qjan through -decin [ ] pd period(' 'freq=' -jan'in [ ] out[ ]period(' '' -jan'in the case of fiscal year ending in january runs from november through januarywhich you can check by converting to daily frequency see figure - for an illustrationin [ ] asfreq(' ''start'out[ ]period(''' 'in [ ] asfreq(' ''end'out[ ]period(''' 'periods and period arithmetic www it-ebooks info |
12,071 | at pm on the nd to last business day of the quarteryou could doin [ ] pm ( asfreq(' '' ' asfreq(' '' ' in [ ] pm out[ ]period( : '' 'in [ ] pm to_timestamp(out[ ]figure - different quarterly frequency conventions generating quarterly ranges works as you would expect using period_range arithmetic is identicaltooin [ ]rng pd period_range(' '' 'freq=' -jan'in [ ]ts series(np arange(len(rng))index=rngin [ ]ts out[ ] freqq-jan in [ ]new_rng (rng asfreq(' '' ' asfreq(' '' ' in [ ]ts index new_rng to_timestamp(in [ ]ts out[ ] : : : : : : : : : : : : time series www it-ebooks info |
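To make the 4 PM expression used above a little more concrete, here it is broken into named steps; this is only a sketch re-deriving the same result, and the particular quarter is an arbitrary choice:

import pandas as pd

p = pd.Period('2012Q4', freq='Q-JAN')            # a quarter in a fiscal year ending in January
day = p.asfreq('B', how='end') - 1               # 2nd-to-last business day of that quarter
minute = day.asfreq('T', how='start') + 16 * 60  # 960 minutes after midnight, i.e. 4 PM
stamp = minute.to_timestamp()                    # convert the minutely period to a Timestamp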
12,072 | series and dataframe objects indexed by timestamps can be converted to periods using the to_period methodin [ ]rng pd date_range(''periods= freq=' 'in [ ]ts series(randn( )index=rngin [ ]pts ts to_period(in [ ]ts out[ ]- - freqm in [ ]pts out[ ] - - - - - freqm since periods always refer to non-overlapping timespansa timestamp can only belong to single period for given frequency while the frequency of the new periodindex is inferred from the timestamps by defaultyou can specify any frequency you want there is also no problem with having duplicate periods in the resultin [ ]rng pd date_range(''periods= freq=' 'in [ ]ts series(randn( )index=rngin [ ]ts to_period(' 'out[ ] - - - - - - - - - freqm to convert back to timestampsuse to_timestampin [ ]pts ts to_period(in [ ]pts out[ ] - - - - - freqm in [ ]pts to_timestamp(how='end'out[ ]- - freqm periods and period arithmetic www it-ebooks info |
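If converting to a coarser frequency does produce duplicate periods, they can be aggregated with the same groupby(level=0) trick shown earlier in the chapter; a short sketch reusing the daily ts from the duplicate-period example above:

pts = ts.to_period('M')                        # daily timestamps collapsed to monthly periods
monthly_means = pts.groupby(level=0).mean()    # one value per distinct month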
12,073 | fixed frequency data sets are sometimes stored with timespan information spread across multiple columns for examplein this macroeconomic data setthe year and quarter are in different columnsin [ ]data pd read_csv('ch /macrodata csv'in [ ]data year out[ ] nameyearlength in [ ]data quarter out[ ] namequarterlength by passing these arrays to periodindex with frequencythey can be combined to form an index for the dataframein [ ]index pd periodindex(year=data yearquarter=data quarterfreq=' -dec'in [ ]index out[ ]freqq-dec [ length in [ ]data index index in [ ]data infl out[ ] - freqq-decnameinfllength resampling and frequency conversion resampling refers to the process of converting time series from one frequency to another aggregating higher frequency data to lower frequency is called downsamplingwhile converting lower frequency to higher frequency is called upsampling not time series www it-ebooks info |
12,074 | on wednesdayto -fri is neither upsampling nor downstampling pandas objects are equipped with resample methodwhich is the workhorse function for all frequency conversionin [ ]rng pd date_range(''periods= freq=' 'in [ ]ts series(randn(len(rng))index=rngin [ ]ts resample(' 'how='mean'out[ ] freqm in [ ]ts resample(' 'how='mean'kind='period'out[ ] - - - - freqm resample is flexible and high-performance method that can be used to process very large time series 'll illustrate its semantics and use through series of examples table - resample method arguments argument description freq string or dateoffset indicating desired resampled frequencye ' '' min'or sec ond( how='meanfunction name or array function producing aggregated valuefor example 'mean''ohlc'np max defaults to 'meanother common values'first''last''median''ohlc''max''minaxis= axis to resample ondefault axis= fill_method=none how to interpolate when upsamplingas in 'ffillor 'bfillby default does no interpolation closed='rightin downsamplingwhich end of each interval is closed (inclusive)'rightor 'leftdefaults to 'rightlabel='rightin downsamplinghow to label the aggregated resultwith the 'rightor 'leftbin edge for examplethe : to : -minute interval could be labeled : or : defaults to 'right(or : in this exampleloffset=none time adjustment to the bin labelssuch as '- ssecond(- to shift the aggregate labels one second earlier limit=none when forward or backward fillingthe maximum number of periods to fill resampling and frequency conversion www it-ebooks info |
12,075 | description kind=none aggregate to periods ('period'or timestamps ('timestamp')defaults to kind of index the time series has convention=none when resampling periodsthe convention ('startor 'end'for converting the low frequency period to high frequency defaults to 'enddownsampling aggregating data to regularlower frequency is pretty normal time series task the data you're aggregating doesn' need to be fixed frequentlythe desired frequency defines bin edges that are used to slice the time series into pieces to aggregate for exampleto convert to monthly'mor 'bm'the data need to be chopped up into one month intervals each interval is said to be half-opena data point can only belong to one intervaland the union of the intervals must make up the whole time frame there are couple things to think about when using resample to downsample datawhich side of each interval is closed how to label each aggregated bineither with the start of the interval or the end to illustratelet' look at some one-minute datain [ ]rng pd date_range(''periods= freq=' 'in [ ]ts series(np arange( )index=rngin [ ]ts out[ ] : : : : : : : : : : : : : : : : : : : : : : : : freqt suppose you wanted to aggregate this data into five-minute chunks or bars by taking the sum of each groupin [ ]ts resample(' min'how='sum'out[ ] : : : : : : : : freq time series www it-ebooks info |
12,076 | right bin edge is inclusiveso the : value is included in the : to : interval passing closed='leftchanges the interval to be closed on the leftin [ ]ts resample(' min'how='sum'closed='left'out[ ] : : : : : : freq as you can seethe resulting time series is labeled by the timestamps from the right side of each bin by passing label='leftyou can label them with the left bin edgein [ ]ts resample(' min'how='sum'closed='left'label='left'out[ ] : : : : : : freq see figure - for an illustration of minutely data being resampled to five-minute figure - -minute resampling illustration of closedlabel conventions lastlyyou might want to shift the result index by some amountsay subtracting one second from the right edge to make it more clear which interval the timestamp refers to to do thispass string or date offset to loffsetin [ ]ts resample(' min'how='sum'loffset='- 'out[ ] : : : : : : : : freq the choice of closed='right'label='rightas the default might seem bit odd to some users in practice the choice is somewhat arbitraryfor some target frequenciesclosed='leftis preferablewhile for others closed='rightmakes more sense the important thing is that you keep in mind exactly how you are segmenting the data resampling and frequency conversion www it-ebooks info |
12,077 | without the loffset open-high-low-close (ohlcresampling in financean ubiquitous way to aggregate time series is to compute four values for each bucketthe first (open)last (close)maximum (high)and minimal (lowvalues by passing how='ohlcyou will obtain dataframe having columns containing these four aggregateswhich are efficiently computed in single sweep of the datain [ ]ts resample(' min'how='ohlc'out[ ]open high low close : : : : : : : : resampling with groupby an alternate way to downsample is to use pandas' groupby functionality for exampleyou can group by month or weekday by passing function that accesses those fields on the time series' indexin [ ]rng pd date_range(''periods= freq=' 'in [ ]ts series(np arange( )index=rngin [ ]ts groupby(lambda xx monthmean(out[ ] in [ ]ts groupby(lambda xx weekdaymean(out[ ] upsampling and interpolation when converting from low frequency to higher frequencyno aggregation is needed let' consider dataframe with some weekly datain [ ]frame dataframe(np random randn( )index=pd date_range(''periods= freq=' -wed')columns=['colorado''texas''new york''ohio'] time series www it-ebooks info |
12,078 | out[ ]colorado texas new york ohio - - - - - when resampling this to daily frequencyby default missing values are introducedin [ ]df_daily frame resample(' 'in [ ]df_daily out[ ]colorado texas new york ohio - - nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan - - - suppose you wanted to fill forward each weekly value on the non-wednesdays the same filling or interpolation methods available in the fillna and reindex methods are available for resamplingin [ ]frame resample(' 'fill_method='ffill'out[ ]colorado texas new york ohio - - - - - - - - - - - - - - - - - you can similarly choose to only fill certain number of periods forward to limit how far to continue using an observed valuein [ ]frame resample(' 'fill_method='ffill'limit= out[ ]colorado texas new york ohio - - - - - - nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan - - - notablythe new date index need not overlap with the old one at allresampling and frequency conversion www it-ebooks info |
12,079 | out[ ]colorado texas new york ohio - - - - - resampling with periods resampling data indexed by periods is reasonably straightforward and works as you would hopein [ ]frame dataframe(np random randn( )index=pd period_range(' - '' - 'freq=' ')columns=['colorado''texas''new york''ohio']in [ ]frame[: out[ ]colorado texas new york ohio - - - - - - - - - - - - - - in [ ]annual_frame frame resample(' -dec'how='mean'in [ ]annual_frame out[ ]colorado texas new york ohio - - - upsampling is more nuanced as you must make decision about which end of the timespan in the new frequency to place the values before resamplingjust like the asfreq method the convention argument defaults to 'endbut can also be 'start' -decquarterlyyear ending in december in [ ]annual_frame resample(' -dec'fill_method='ffill'out[ ]colorado texas new york ohio - - - - - - - - - in [ ]annual_frame resample(' -dec'fill_method='ffill'convention='start'out[ ]colorado texas new york ohio - - - - - - - - - time series www it-ebooks info |
12,080 | more rigidin downsamplingthe target frequency must be subperiod of the source frequency in upsamplingthe target frequency must be superperiod of the source frequency if these rules are not satisfiedan exception will be raised this mainly affects the quarterlyannualand weekly frequenciesfor examplethe timespans defined by -mar only line up with -mara-juna-sepand -decin [ ]annual_frame resample(' -mar'fill_method='ffill'out[ ]colorado texas new york ohio - - - - - - - - - time series plotting plots with pandas time series have improved date formatting compared with matplotlib out of the box as an examplei downloaded some stock price data on few common us stock from yahoofinancein [ ]close_px_all pd read_csv('ch /stock_px csv'parse_dates=trueindex_col= in [ ]close_px close_px_all[['aapl''msft''xom']in [ ]close_px close_px resample(' 'fill_method='ffill'in [ ]close_px out[ ]datetimeindex entries : : to : : freqb data columnsaapl non-null values msft non-null values xom non-null values dtypesfloat ( calling plot on one of the columns grenerates simple plotseen in figure - in [ ]close_px['aapl'plot(when called on dataframeas you would expectall of the time series are drawn on single subplot with legend indicating which is which 'll plot only the year data so you can see how both months and years are formatted on the axissee figure - in [ ]close_px ix[' 'plot(time series plotting www it-ebooks info |
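If you would rather see each column on its own axes instead of one plot with a legend, DataFrame.plot also accepts a subplots argument; a small sketch (the figure size is an arbitrary choice):

close_px.ix['2009'].plot(subplots=True, figsize=(8, 10))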
12,081 | figure - stock prices in in [ ]close_px['aapl'ix[' - ':' - 'plot(quarterly frequency data is also more nicely formatted with quarterly markerssomething that would be quite bit more work to do by hand see figure - in [ ]appl_q close_px['aapl'resample(' -dec'fill_method='ffill'in [ ]appl_q ix[' ':plot( last feature of time series plotting in pandas is that by right-clicking and dragging to zoom in and outthe dates will be dynamically expanded or contracted and reformatting depending on the timespan contained in the plot view this is of course only true when using matplotlib in interactive mode moving window functions common class of array transformations intended for time series operations are statistics and other functions evaluated over sliding window or with exponentially de time series www it-ebooks info |
12,082 | figure - apple quarterly price - caying weights call these moving window functionseven though it includes functions without fixed-length window like exponentially-weighted moving average like other statistical functionsthese also automatically exclude missing data rolling_mean is one of the simplest such functions it takes timeseries or dataframe along with window (expressed as number of periods)in [ ]close_px aapl plot(out[ ]in [ ]pd rolling_mean(close_px aapl plot(see figure - for the plot by default functions like rolling_mean require the indicated number of non-na observations this behavior can be changed to account for missing data andin particularthe fact that you will have fewer than window periods of data at the beginning of the time series (see figure - )moving window functions www it-ebooks info |
12,083 | in [ ]appl_std [ : out[ ]nan nan nan nan freqb in [ ]appl_std plot(figure - apple price with -day ma figure - apple -day daily return standard deviation to compute an expanding window meanyou can see that an expanding window is just special case where the window is the length of the time seriesbut only one or more periods is required to compute value time series www it-ebooks info |
12,084 | in [ ]expanding_mean lambda xrolling_mean(xlen( )min_periods= calling rolling_mean and friends on dataframe applies the transformation to each column (see figure - )in [ ]pd rolling_mean(close_px plot(logy=truefigure - stocks prices -day ma (log -axissee table - for listing of related functions in pandas table - moving window and exponentially-weighted functions function description rolling_count returns number of non-na observations in each trailing window rolling_sum moving window sum rolling_mean moving window mean rolling_median moving window median rolling_varrolling_std moving window variance and standard deviationrespectively uses denominator rolling_skewrolling_kurt moving window skewness ( rd momentand kurtosis ( th moment)respectively rolling_minrolling_max moving window minimum and maximum rolling_quantile moving window score at percentile/sample quantile rolling_corrrolling_cov moving window correlation and covariance rolling_apply apply generic array function over moving window ewma exponentially-weighted moving average ewmvarewmstd exponentially-weighted moving variance and standard deviation ewmcorrewmcov exponentially-weighted moving correlation and covariance moving window functions www it-ebooks info |
12,085 | implementation of nan-friendly moving window functions and may be worth looking at depending on your application exponentially-weighted functions an alternative to using static window size with equally-weighted observations is to specify constant decay factor to give more weight to more recent observations in mathematical termsif mat is the moving average result at time and is the time series in questioneach value in the result is computed as mat mat ( x_twhere is the decay factor there are couple of ways to specify the decay factora popular one is using spanwhich makes the result comparable to simple moving window function with window size equal to the span since an exponentially-weighted statistic places more weight on more recent observationsit "adaptsfaster to changes compared with the equal-weighted version here' an example comparing -day moving average of apple' stock price with an ew moving average with span= (see figure - )figaxes plt subplots(nrows= ncols= sharex=truesharey=truefigsize=( )aapl_px close_px aapl[' ':' 'ma pd rolling_mean(aapl_px min_periods= ewma pd ewma(aapl_pxspan= aapl_px plot(style=' -'ax=axes[ ]ma plot(style=' --'ax=axes[ ]aapl_px plot(style=' -'ax=axes[ ]ewma plot(style=' --'ax=axes[ ]axes[ set_title('simple ma'axes[ set_title('exponentially-weighted ma'binary moving window functions some statistical operatorslike correlation and covarianceneed to operate on two time series as an examplefinancial analysts are often interested in stock' correlation to benchmark index like the & we can compute that by computing the percent changes and using rolling_corr (see figure - )in [ ]spx_rets spx_px spx_px shift( in [ ]returns close_px pct_change(in [ ]corr pd rolling_corr(returns aaplspx_rets min_periods= in [ ]corr plot( time series www it-ebooks info |
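Note that spx_rets is computed from an S&P 500 price series; assuming the close_px_all frame loaded at the start of this section includes an 'SPX' column, its construction would look like this (a sketch):

spx_px = close_px_all['SPX']               # assumed column name; S&P 500 index level
spx_rets = spx_px / spx_px.shift(1) - 1    # daily percent changes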
12,086 | figure - six-month aapl return correlation to & suppose you wanted to compute the correlation of the & index with many stocks at once writing loop and creating new dataframe would be easy but maybe get repetitiveso if you pass timeseries and dataframea function like rolling_corr will compute the correlation of the timeseries (spx_rets in this casewith each column in the dataframe see figure - for the plot of the resultin [ ]corr pd rolling_corr(returnsspx_rets min_periods= in [ ]corr plot(moving window functions www it-ebooks info |
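For contrast, the "write a loop" approach mentioned above might look roughly like the following sketch; the window and minimum-period values are placeholders, and returns and spx_rets are the objects defined above:

import pandas as pd

window, min_obs = 125, 100   # placeholder values; match them to the call above as needed
corr_by_hand = pd.DataFrame({col: pd.rolling_corr(returns[col], spx_rets, window,
                                                  min_periods=min_obs)
                             for col in returns.columns})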
12,087 | figure - percentile rank of aapl return over year window user-defined moving window functions the rolling_apply function provides means to apply an array function of your own devising over moving window the only requirement is that the function produce single value ( reductionfrom each piece of the array for examplewhile we can compute sample quantiles using rolling_quantilewe might be interested in the percentile rank of particular value over the sample the scipy stats percentileof score function does just thisin [ ]from scipy stats import percentileofscore in [ ]score_at_ percent lambda xpercentileofscore( in [ ]result pd rolling_apply(returns aapl score_at_ percentin [ ]result plot( time series www it-ebooks info |
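Any function that reduces an array to a single number can be plugged into rolling_apply in the same way; for instance, a rolling high-low spread of daily returns (a sketch reusing the returns frame from above; the window length is an arbitrary choice):

import pandas as pd

spread = lambda x: x.max() - x.min()              # range of values within each window
result = pd.rolling_apply(returns.AAPL, 250, spread)
result.plot()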
12,088 | timestamps and periods are represented as -bit integers using numpy' date time dtype this means that for each data pointthere is an associated bytes of memory per timestamp thusa time series with million float data points has memory footprint of approximately megabytes since pandas makes every effort to share indexes among time seriescreating views on existing time series do not cause any more memory to be used additionallyindexes for lower frequencies (daily and upare stored in central cacheso that any fixed-frequency index is view on the date cache thusif you have large collection of low-frequency time seriesthe memory footprint of the indexes will not be as significant performance-wisepandas has been highly optimized for data alignment operations (the behind-the-scenes work of differently indexed ts ts and resampling here is an example of aggregating mm data points to ohlcin [ ]rng pd date_range(''periods= freq=' ms'in [ ]ts series(np random randn(len(rng))index=rngin [ ]ts out[ ] : : - : : : : - : : - : : : : : : : : freq llength in [ ]ts resample(' min'how='ohlc'out[ ]datetimeindex entries : : to : : freq data columnsopen non-null values high non-null values low non-null values close non-null values dtypesfloat ( in [ ]%timeit ts resample(' min'how='ohlc' loopsbest of ms per loop the runtime may depend slightly on the relative size of the aggregated resulthigher frequency aggregates unsurprisingly take longer to computein [ ]rng pd date_range(''periods= freq=' 'performance and memory usage notes www it-ebooks info |
12,089 | in [ ]%timeit ts resample(' 'how='ohlc' loopsbest of ms per loop it' possible that by the time you read thisthe performance of these algorithms may be even further improved as an examplethere are currently no optimizations for conversions between regular frequenciesbut that would be fairly straightforward to do time series www it-ebooks info |
12,090 | Financial and Economic Data Applications

The use of Python in the financial industry has been increasing rapidly, led largely by the maturation of libraries (like NumPy and pandas) and the availability of skilled Python programmers. Institutions have found that Python is well-suited both as an interactive analysis environment and for developing robust systems, often in a fraction of the time it would have taken in Java or C++. Python is also an ideal glue layer; it is easy to build Python interfaces to legacy libraries built in C or C++.

While the field of financial analysis is broad enough to fill an entire book, I hope to show you how the tools in this book can be applied to a number of specific problems in finance. As with other research and analysis domains, too much programming effort is often spent wrangling data rather than solving the core modeling and research problems. Personally, I got started building pandas while grappling with inadequate data tools.

In these examples, I'll use the term cross-section to refer to data at a fixed point in time. For example, the closing prices of all the stocks in the S&P 500 index on a particular date form a cross-section. Cross-sectional data at multiple points in time over multiple data items (for example, prices together with volume) form a panel. Panel data can either be represented as a hierarchically-indexed DataFrame or using the three-dimensional Panel pandas object.

Data Munging Topics

Many helpful data munging tools for financial applications are spread across the earlier chapters. Here I'll highlight a number of topics as they relate to this problem domain.
12,091 | one of the most time-consuming issues in working with financial data is the so-called data alignment problem two related time series may have indexes that don' line up perfectlyor two dataframe objects might have columns or row labels that don' match users of matlabrand other matrix-programming languages often invest significant effort in wrangling data into perfectly aligned forms in my experiencehaving to align data by hand (and worsehaving to verify that data is alignedis far too rigid and tedious way to work it is also rife with potential for bugs due to combining misaligned data pandas take an alternate approach by automatically aligning data in arithmetic operations in practicethis grants immense freedom and enhances your productivity as an examplelet' consider couple of dataframes containing time series of stock prices and volumein [ ]prices out[ ]aapl in [ ]volume out[ ]jnj aapl spx jnj xom xom suppose you wanted to compute volume-weighted average price using all available data (and making the simplifying assumption that the volume data is subset of the price datasince pandas aligns the data automatically in arithmetic and excludes missing data in functions like sumwe can express this concisely asin [ ]prices volume out[ ]aapl jnj nan nan nan nan spx nan nan nan nan nan nan nan xom nan nan in [ ]vwap (prices volumesum(volume sum( financial and economic data applications www it-ebooks info |
12,092 | out[ ]aapl jnj spx nan xom in [ ]vwap dropna(out[ ]aapl jnj xom since spx wasn' found in volumeyou can choose to explicitly discard that at any point should you wish to align by handyou can use dataframe' align methodwhich returns tuple of reindexed versions of the two objectsin [ ]prices align(volumejoin='inner'out[ ]aapl jnj xom aapl jnj xom another indispensable feature is constructing dataframe from collection of potentially differently indexed seriesin [ ] series(range( )index=[' '' '' ']in [ ] series(range( )index=[' '' '' '' ']in [ ] series(range( )index=[' '' '' ']in [ ]dataframe({'one' 'two' 'three' }out[ ]one three two nan nan nan nan nan nan nan nan as you have seen earlieryou can of course specify explicitly the index of the resultdiscarding the rest of the datain [ ]dataframe({'one' 'two' 'three' }index=list('face')out[ ]one three two nan nan nan nan nan data munging topics www it-ebooks info |
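The same outer-joined result can also be built with concat along the columns, which is sometimes more convenient than passing a dict; a short sketch reusing the s1, s2, and s3 Series defined above:

import pandas as pd

pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])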
12,093 | economic time series are often of annualquarterlymonthlydailyor some other more specialized frequency some are completely irregularfor exampleearnings revisions for stock may arrive at any time the two main tools for frequency conversion and realignment are the resample and reindex methods resample converts data to fixed frequency while reindex conforms data to new index both support optional interpolation (such as forward fillinglogic let' consider small weekly time seriesin [ ]ts series(np random randn( )index=pd date_range(''periods= freq=' -wed')in [ ]ts out[ ]- - freqw-wed if you resample this to business daily (monday-fridayfrequencyyou get holes on the days where there is no datain [ ]ts resample(' 'out[ ]- nan nan nan nan nan nan nan nan - freqb of courseusing 'ffillas the fill_method forward fills values in those gaps this is common practice with lower frequency data as you compute time series of values on each timestamp having the latest valid or "as ofvaluein [ ]ts resample(' 'fill_method='ffill'out[ ]- - - - - financial and economic data applications www it-ebooks info |
12,094 | freqb - in practiceupsampling lower frequency data to higherregular frequency is fine solutionbut in the more general irregular time series case it may be poor fit consider an irregularly sampled time series from the same general time periodin [ ]dates pd datetimeindex(['''''''''''']in [ ]ts series(np random randn( )index=datesin [ ]ts out[ ]- - - if you wanted to add the "as ofvalues in ts (forward fillingto ts one option would be to resample both to regular frequency then addbut if you want to maintain the date index in ts using reindex is more precise solutionin [ ]ts reindex(ts indexmethod='ffill'out[ ]nan - - - in [ ]ts ts reindex(ts indexmethod='ffill'out[ ]nan - - - - using periods instead of timestamps periods (representing time spansprovide an alternate means of working with different frequency time seriesespecially financial or economic series with annual or quarterly frequency having particular reporting convention for examplea company might announce its quarterly earnings with fiscal year ending in junethus having -jun frequency consider pair of macroeconomic time series related to gdp and inflationin [ ]gdp series([ ]index=pd period_range(' 'periods= freq=' -sep')data munging topics www it-ebooks info |
12,095 | index=pd period_range(' 'periods= freq=' -dec')in [ ]gdp out[ ] freqq-sep in [ ]infl out[ ] freqa-dec unlike time series with timestampsoperations between different-frequency time series indexed by periods are not possible without explicit conversions in this caseif we know that infl values were observed at the end of each yearwe can then convert to -sep to get the right periods in that frequencyin [ ]infl_q infl asfreq(' -sep'how='end'in [ ]infl_q out[ ] freqq-sep that time series can then be reindexed with forward-filling to match gdpin [ ]infl_q reindex(gdp indexmethod='ffill'out[ ] freqq-sep time of day and "as ofdata selection suppose you have long time series containing intraday market data and you want to extract the prices at particular time of day on each day of the data what if the data are irregular such that observations do not fall exactly on the desired timein practice this task can make for error-prone data munging if you are not careful here is an example for illustration purposesmake an intraday date range and time series in [ ]rng pd date_range( : ' : 'freq=' 'make -day series of : - : values financial and economic data applications www it-ebooks info |
12,096 | in [ ]ts series(np arange(len(rng)dtype=float)index=rngin [ ]ts out[ ] : : : : : : : : : : : : : : : : length indexing with python datetime time object will extract values at those timesin [ ]from datetime import time in [ ]ts[time( )out[ ] : : : : : : : : under the hoodthis uses an instance method at_time (available on individual time series and dataframe objects alike)in [ ]ts at_time(time( )out[ ] : : : : : : : : you can select values between two times using the related between_time methodin [ ]ts between_time(time( )time( )out[ ] : : : : : : : : : : : : : : : : as mentioned aboveit might be the case that no data actually fall exactly at time like ambut you might want to know the last known value at amset most of the time series randomly to na in [ ]indexer np sort(np random permutation(len(ts))[ :]data munging topics www it-ebooks info |
12,097 | in [ ]irr_ts[indexernp nan in [ ]irr_ts[ : ': : 'out[ ] : : : : nan : : : : : : nan : : : : nan : : nan : : nan : : nan : : nan by passing an array of timestamps to the asof methodyou will obtain an array of the last valid (non-navalues at or before each timestamp so we construct date range at am for each day and pass that to asofin [ ]selection pd date_range( : 'periods= freq=' 'in [ ]irr_ts asof(selectionout[ ] : : : : : : : : freqb splicing together data sources in described number of strategies for merging together two related data sets in financial or economic contextthere are few widely occurring use casesswitching from one data source ( time series or collection of time seriesto another at specific point in time "patchingmissing values in time series at the beginningmiddleor end using another time series completely replacing the data for subset of symbols (countriesasset tickersand so onin the first caseswitching from one set of time series to another at specific instantit is matter of splicing together two timeseries or dataframe objects using pandas con catin [ ]data dataframe(np ones(( )dtype=float)columns=[' '' '' ']index=pd date_range(''periods= ) financial and economic data applications www it-ebooks info |
12,098 | columns=[' '' '' ']index=pd date_range(''periods= )in [ ]spliced pd concat([data ix[:'']data ix['':]]in [ ]spliced out[ ] suppose in similar example that data was missing time series present in data in [ ]data dataframe(np ones(( )dtype=float columns=[' '' '' '' ']index=pd date_range(''periods= )in [ ]spliced pd concat([data ix[:'']data ix['':]]in [ ]spliced out[ ] nan nan nan using combine_firstyou can bring in data from before the splice point to extend the history for 'ditemin [ ]spliced_filled spliced combine_first(data in [ ]spliced_filled out[ ] nan since data does not have any values for no values are filled on that day dataframe has related method update for performing in-place updates you have to pass overwrite=false to make it only fill the holesdata munging topics www it-ebooks info |
12,099 | in [ ]spliced out[ ] nan to replace the data for subset of symbolsyou can use any of the above techniquesbut sometimes it' simpler to just set the columns directly with dataframe indexingin [ ]cp_spliced spliced copy(in [ ]cp_spliced[[' '' ']data [[' '' ']in [ ]cp_spliced out[ ] nan nan nan return indexes and cumulative returns in financial contextreturns usually refer to percent changes in the price of an asset let' consider price data for apple in and in [ ]import pandas io data as web in [ ]price web get_data_yahoo('aapl''')['adj close'in [ ]price[- :out[ ]date nameadj close for applewhich has no dividendscomputing the cumulative percent return between two points in time requires computing only the percent change in the pricein [ ]price[''price['' out[ ] financial and economic data applications www it-ebooks info |
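A common next step is a return index, that is, the cumulative growth of one unit of currency invested; a minimal sketch using the price series loaded above and the same era of pandas as the rest of the chapter:

returns = price.pct_change()           # daily percent changes
ret_index = (1 + returns).cumprod()    # cumulative growth factor
ret_index[0] = 1                       # set the starting value to 1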