Holding a stock can be more complicated; the adjusted close values used here have been adjusted for splits and dividends. However, in all cases, it's quite common to derive a return index, which is a time series indicating the value of a unit investment (one dollar, say). Many assumptions can underlie the return index; for example, some will choose to reinvest profit and others not. In the case of Apple, we can compute a simple return index using cumprod:

In [ ]: returns = price.pct_change()

In [ ]: ret_index = (1 + returns).cumprod()

In [ ]: ret_index[0] = 1  # set first value to 1

With a return index in hand, computing cumulative returns at a particular resolution is simple:

In [ ]: m_returns = ret_index.resample('BM', how='last').pct_change()

Of course, in this simple case (no dividends or other adjustments to take into account) these could have been computed from the daily percent changes by resampling with aggregation (here, to periods):

In [ ]: m_rets = (1 + returns).resample('M', how='prod', kind='period') - 1

Data munging topics
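The return-index recipe above can be sketched end to end on a small synthetic price series (the prices and dates below are made up for illustration, not real market data):

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices (synthetic, for illustration only)
price = pd.Series([100.0, 102.0, 101.0, 103.0, 106.0],
                  index=pd.date_range('2012-01-02', periods=5, freq='B'))

returns = price.pct_change()            # daily percent changes
ret_index = (1 + returns).cumprod()     # compounded growth of $1
ret_index.iloc[0] = 1                   # set first value to 1

# The final value is total growth of the unit investment: 106/100
total_growth = ret_index.iloc[-1]
```

The index is path-independent: whatever the intermediate moves, the last value equals the ratio of final to initial price.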
If you had dividend dates and percentages, including them in the total return per day would look like:

returns[dividend_dates] += dividend_pcts

Group Transforms and Analysis

You have already learned the basics of computing group statistics and applying your own transformations to groups in a dataset. Let's consider a collection of hypothetical stock portfolios. First I randomly generate a broad universe of tickers:

import random; random.seed(0)
import string

N = 1000
def rands(n):
    choices = string.ascii_uppercase
    return ''.join([random.choice(choices) for _ in xrange(n)])
tickers = np.array([rands(5) for _ in xrange(N)])

Then I create a DataFrame containing three columns representing hypothetical, but random, portfolios for a subset of tickers:

M = 500
df = DataFrame({'Momentum': np.random.randn(M),
                'Value': np.random.randn(M),
                'ShortInterest': np.random.randn(M)},
               index=tickers[:M])

Next, let's create a random industry classification for the tickers. To keep things simple, I'll just keep it to two industries, storing the mapping in a Series:

ind_names = np.array(['FINANCIAL', 'TECH'])
sampler = np.random.randint(0, len(ind_names), N)
industries = Series(ind_names[sampler], index=tickers, name='industry')

Now we can group by industries and carry out group aggregation and transformations:

In [ ]: by_industry = df.groupby(industries)

In [ ]: by_industry.mean()

Financial and Economic Data Applications
In [ ]: by_industry.describe()

By defining transformation functions, it's easy to transform these portfolios by industry. For example, standardizing within industry is widely used in equity portfolio construction:

# Within-industry standardize
def zscore(group):
    return (group - group.mean()) / group.std()

df_stand = by_industry.apply(zscore)

You can verify that each industry has mean 0 and standard deviation 1:

In [ ]: df_stand.groupby(industries).agg(['mean', 'std'])

Other, built-in kinds of transformations, like rank, can be used more concisely:

# Within-industry rank descending
In [ ]: ind_rank = by_industry.rank(ascending=False)

In [ ]: ind_rank.groupby(industries).agg(['min', 'max'])
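The within-industry standardization above can be verified on a tiny synthetic example (tickers, industries, and values below are all invented; `transform` is used here because it is the version-stable way to apply a group-wise, same-shape function in modern pandas):

```python
import numpy as np
import pandas as pd

def zscore(group):
    # standardize within each group: mean 0, std 1
    return (group - group.mean()) / group.std()

# Hypothetical metric for six made-up tickers in two made-up industries
df = pd.DataFrame({'Momentum': [0.1, 0.3, 0.2, -0.1, 0.0, 0.4]},
                  index=['A', 'B', 'C', 'D', 'E', 'F'])
industries = pd.Series(['TECH', 'TECH', 'TECH', 'FIN', 'FIN', 'FIN'],
                       index=df.index)

df_stand = df.groupby(industries).transform(zscore)

group_means = df_stand['Momentum'].groupby(industries).mean()
group_stds = df_stand['Momentum'].groupby(industries).std()
```

After the transform, each industry's mean is 0 and standard deviation is 1 by construction.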
You could do this by chaining together rank and zscore like so:

# Industry rank and standardize
In [ ]: by_industry.apply(lambda x: zscore(x.rank()))

Group Factor Exposures

Factor analysis is a technique in quantitative portfolio management. Portfolio holdings and performance (profit and loss) are decomposed using one or more factors (risk factors are one example) represented as a portfolio of weights. For example, a stock price's co-movement with a benchmark (like the S&P 500 index) is known as its beta, a common risk factor. Let's consider a contrived example of a portfolio constructed from three randomly-generated factors (usually called the factor loadings) and some weights:

from numpy.random import rand
fac1, fac2, fac3 = np.random.rand(3, 1000)

ticker_subset = tickers.take(np.random.permutation(N)[:1000])

# Weighted sum of factors plus noise
port = Series(0.7 * fac1 - 1.2 * fac2 + 0.3 * fac3 + rand(1000),
              index=ticker_subset)
factors = DataFrame({'f1': fac1, 'f2': fac2, 'f3': fac3},
                    index=ticker_subset)

Vector correlations between each factor and the portfolio may not indicate too much:

In [ ]: factors.corrwith(port)

The standard way to compute the factor exposures is by least squares regression; using pandas.ols with factors as the explanatory variables we can compute exposures over the entire set of tickers:

In [ ]: pd.ols(y=port, x=factors).beta
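Note that pd.ols has since been removed from pandas. Under the same setup (factor weights 0.7, -1.2, 0.3 plus uniform noise, as in the text), a minimal replacement sketch uses np.linalg.lstsq with an explicit intercept column; the seed and data here are synthetic:

```python
import numpy as np

rng = np.random.RandomState(0)
fac1, fac2, fac3 = rng.rand(3, 1000)

# Weighted sum of factors plus noise, mirroring the contrived portfolio
port = 0.7 * fac1 - 1.2 * fac2 + 0.3 * fac3 + rng.rand(1000)

# Least-squares factor exposures; the ones column plays the role of
# the intercept that pd.ols added automatically
X = np.column_stack([fac1, fac2, fac3, np.ones(1000)])
beta, _, _, _ = np.linalg.lstsq(X, port, rcond=None)
```

With 1000 observations the estimated exposures land close to the true weights, which is the point of the example: regression recovers what raw correlations obscure.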
As you can see, the original factor weights can nearly be recovered since there was not too much additional random noise added to the portfolio. Using groupby you can compute exposures industry by industry. To do so, write a function like so:

def beta_exposure(chunk, factors=None):
    return pd.ols(y=chunk, x=factors).beta

Then, group by industries and apply that function, passing the DataFrame of factor loadings:

In [ ]: by_ind = port.groupby(industries)

In [ ]: exposures = by_ind.apply(beta_exposure, factors=factors)

In [ ]: exposures.unstack()

Decile and Quartile Analysis

Analyzing data based on sample quantiles is another important tool for financial analysts. For example, the performance of a stock portfolio could be broken down into quartiles (four equal-sized chunks) based on each stock's price-to-earnings. Using pandas.qcut combined with groupby makes quantile analysis reasonably straightforward.

As an example, let's consider a simple trend following or momentum strategy trading the S&P 500 index via the SPY exchange-traded fund. You can download the price history from Yahoo! Finance:

In [ ]: import pandas.io.data as web

In [ ]: data = web.get_data_yahoo('SPY', '2006-01-01')

In [ ]: data

Now, we'll compute daily returns and a function for transforming the returns into a trend signal formed from a lagged moving sum:

px = data['Adj Close']
returns = px.pct_change()
def to_index(rets):
    index = (1 + rets).cumprod()
    first_loc = max(index.notnull().argmax() - 1, 0)
    index.values[first_loc] = 1
    return index

def trend_signal(rets, lookback, lag):
    signal = pd.rolling_sum(rets, lookback, min_periods=lookback)
    return signal.shift(lag)

Using this function, we can (naively) create and test a trading strategy that trades this momentum signal every Friday:

In [ ]: signal = trend_signal(returns, 100, 3)

In [ ]: trade_friday = signal.resample('W-FRI').resample('B', fill_method='ffill')

In [ ]: trade_rets = trade_friday.shift(1) * returns

We can then convert the strategy returns to a return index and plot them (see the figure below):

In [ ]: to_index(trade_rets).plot()

Figure: SPY momentum strategy return index

Suppose you wanted to decompose the strategy performance into more and less volatile periods of trading. Trailing one-year annualized standard deviation is a simple measure of volatility, and we can compute Sharpe ratios to assess the reward-to-risk ratio in various volatility regimes:

vol = pd.rolling_std(returns, 250, min_periods=200) * np.sqrt(250)
def sharpe(rets, ann=250):
    return rets.mean() / rets.std() * np.sqrt(ann)

Now, dividing vol into quartiles with qcut and aggregating with sharpe we obtain:

In [ ]: trade_rets.groupby(pd.qcut(vol, 4)).agg(sharpe)

These results show that the strategy performed the best during the period when the volatility was the highest.

More Example Applications

Here is a small set of additional examples.

Signal Frontier Analysis

In this section, I'll describe a simplified cross-sectional momentum portfolio and show how you might explore a grid of model parameterizations. First, I'll load historical prices for a portfolio of financial and technology stocks:

names = ['AAPL', 'GOOG', 'MSFT', 'DELL', 'GS', 'MS', 'BAC', 'C']

def get_px(stock, start, end):
    return web.get_data_yahoo(stock, start, end)['Adj Close']

px = DataFrame({n: get_px(n, start, end) for n in names})

We can easily plot the cumulative returns of each stock (see the figure below):

In [ ]: px = px.asfreq('B').fillna(method='pad')

In [ ]: rets = px.pct_change()

In [ ]: ((1 + rets).cumprod() - 1).plot()

For the portfolio construction, we'll compute momentum over a certain lookback, then rank in descending order and standardize:

def calc_mom(price, lookback, lag):
    mom_ret = price.shift(lag).pct_change(lookback)
    ranks = mom_ret.rank(axis=1, ascending=False)
    demeaned = ranks - ranks.mean(axis=1)
    return demeaned / demeaned.std(axis=1)

With this transform function in hand, we can set up a strategy backtesting function that computes a portfolio for a particular lookback and holding period (days between trading), returning the overall Sharpe ratio:

compound = lambda x: (1 + x).prod() - 1
daily_sr = lambda x: x.mean() / x.std()
def strat_sr(prices, lb, hold):
    # Compute portfolio weights
    freq = '%dB' % hold
    port = calc_mom(prices, lb, lag=1)

    daily_rets = prices.pct_change()

    # Compute portfolio returns
    port = port.shift(1).resample(freq, how='first')
    returns = daily_rets.resample(freq, how=compound)
    port_rets = (port * returns).sum(axis=1)

    return daily_sr(port_rets) * np.sqrt(252 / hold)

Figure: cumulative returns for each of the stocks

When called with the prices and a parameter combination, this function returns a scalar value:

In [ ]: strat_sr(px, 70, 30)

From there, you can evaluate the strat_sr function over a grid of parameters, storing them as you go in a defaultdict and finally putting the results in a DataFrame:

from collections import defaultdict

lookbacks = range(20, 90, 5)
holdings = range(20, 90, 5)
dd = defaultdict(dict)
for lb in lookbacks:
    for hold in holdings:
        dd[lb][hold] = strat_sr(px, lb, hold)
ddf = DataFrame(dd)
ddf.index.name = 'Holding Period'
ddf.columns.name = 'Lookback Period'

To visualize the results and get an idea of what's going on, here is a function that uses matplotlib to produce a heatmap with some adornments:

import matplotlib.pyplot as plt

def heatmap(df, cmap=plt.cm.gray_r):
    fig = plt.figure()
    ax = fig.add_subplot(111)
    axim = ax.imshow(df.values, cmap=cmap, interpolation='nearest')
    ax.set_xlabel(df.columns.name)
    ax.set_xticks(np.arange(len(df.columns)))
    ax.set_xticklabels(list(df.columns))
    ax.set_ylabel(df.index.name)
    ax.set_yticks(np.arange(len(df.index)))
    ax.set_yticklabels(list(df.index))
    plt.colorbar(axim)

Calling this function on the backtest results, we get:

In [ ]: heatmap(ddf)

Figure: heatmap of momentum strategy Sharpe ratio (higher is better) over various lookbacks and holding periods

Future Contract Rolling

A future is a ubiquitous form of derivative contract; it is an agreement to take delivery of a certain asset (such as oil, gold, or shares of the FTSE 100 index) on a particular date. In practice, modeling and trading futures contracts on equities, currencies,
commodities, bonds, and other asset classes is complicated by the time-limited nature of each contract. For example, at any given time for a type of future (say silver or copper futures), multiple contracts with different expiration dates may be traded. In many cases, the future contract expiring next (the near contract) will be the most liquid (highest volume and lowest bid-ask spread).

For the purposes of modeling and forecasting, it can be much easier to work with a continuous return index indicating the profit and loss associated with always holding the near contract. Transitioning from an expiring contract to the next (or far) contract is referred to as rolling. Computing a continuous future series from the individual contract data is not necessarily a straightforward exercise and typically requires a deeper understanding of the market and how the instruments are traded. For example, in practice when and how quickly would you trade out of an expiring contract and into the next contract? Here I describe one such process.

First, I'll use scaled prices for the SPY exchange-traded fund as a proxy for the S&P 500 index:

In [ ]: import pandas.io.data as web

# Approximate price of S&P 500 index
In [ ]: px = web.get_data_yahoo('SPY')['Adj Close'] * 10

In [ ]: px

Now, a little bit of setup. I put a couple of S&P 500 future contracts and expiry dates in a Series:

from datetime import datetime
expiry = {'ESU2': datetime(2012, 9, 21),
          'ESZ2': datetime(2012, 12, 21)}
expiry = Series(expiry).order()

expiry then looks like:

In [ ]: expiry
Then, I append a random walk and some noise to the SPY prices to simulate the two contracts into the future:

np.random.seed(12347)
N = 200
walk = (np.random.randint(0, 200, size=N) - 100) * 0.25
perturb = (np.random.randint(0, 20, size=N) - 10) * 0.25
walk = walk.cumsum()

rng = pd.date_range(px.index[0], periods=len(px) + N, freq='B')
near = np.concatenate([px.values, px.values[-1] + walk])
far = np.concatenate([px.values, px.values[-1] + walk + perturb])
prices = DataFrame({'ESU2': near, 'ESZ2': far}, index=rng)

prices then has two time series for the contracts that differ from each other by a random amount:

In [ ]: prices.tail()

One way to splice time series together into a single continuous series is to construct a weighting matrix. Active contracts would have a weight of 1 until the expiry date approaches; at that point you have to decide on a roll convention. Here is a function that computes a weighting matrix with linear decay over a number of periods leading up to expiry:

def get_roll_weights(start, expiry, items, roll_periods=5):
    # start : first date to compute weighting DataFrame
    # expiry : Series of ticker -> expiration dates
    # items : sequence of contract names
    dates = pd.date_range(start, expiry[-1], freq='B')
    weights = DataFrame(np.zeros((len(dates), len(items))),
                        index=dates, columns=items)

    prev_date = weights.index[0]
    for i, (item, ex_date) in enumerate(expiry.iteritems()):
        if i < len(expiry) - 1:
            weights.ix[prev_date:ex_date - pd.offsets.BDay(), item] = 1
            roll_rng = pd.date_range(end=ex_date - pd.offsets.BDay(),
                                     periods=roll_periods + 1, freq='B')
            decay_weights = np.linspace(0, 1, roll_periods + 1)
            weights.ix[roll_rng, item] = 1 - decay_weights
            weights.ix[roll_rng, expiry.index[i + 1]] = decay_weights
        else:
            weights.ix[prev_date:, item] = 1
        prev_date = ex_date

    return weights
The weights look like this around the ESU2 expiry:

In [ ]: weights = get_roll_weights('6/1/2012', expiry, prices.columns)

In [ ]: weights.ix['2012-09-12':'2012-09-21']

Finally, the rolled future returns are just a weighted sum of the contract returns:

In [ ]: rolled_returns = (prices.pct_change() * weights).sum(1)

Rolling Correlation and Linear Regression

Dynamic models play an important role in financial modeling as they can be used to simulate trading decisions over a historical period. Moving window and exponentially-weighted time series functions are an example of tools that are used for dynamic models.

Correlation is one way to look at the co-movement between the changes in two asset time series. pandas's rolling_corr function can be called with two return series to compute the moving window correlation. First, I load some price series from Yahoo! Finance and compute daily returns:

aapl = web.get_data_yahoo('AAPL', '2000-01-01')['Adj Close']
msft = web.get_data_yahoo('MSFT', '2000-01-01')['Adj Close']

aapl_rets = aapl.pct_change()
msft_rets = msft.pct_change()

Then, I compute and plot the one-year moving correlation (see the figure below):

In [ ]: pd.rolling_corr(aapl_rets, msft_rets, 250).plot()

One issue with correlation between two assets is that it does not capture differences in volatility. Least-squares regression provides another means for modeling the dynamic relationship between a variable and one or more other predictor variables:

In [ ]: model = pd.ols(y=aapl_rets, x={'MSFT': msft_rets}, window=250)

In [ ]: model.beta
In [ ]: model.beta['MSFT'].plot()

Figure: one-year correlation of Apple with Microsoft

Figure: one-year beta (OLS regression coefficient) of Apple to Microsoft

pandas's ols function implements static and dynamic (expanding or rolling window) least squares regressions. For more sophisticated statistical and econometrics models, see the statsmodels project.
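Both pd.rolling_corr and pd.ols have since been removed from pandas. A version-stable sketch of the moving-window correlation uses the .rolling() accessor; the return series below are synthetic stand-ins for the Apple and Microsoft returns used in the text:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
idx = pd.date_range('2012-01-02', periods=300, freq='B')

# Hypothetical daily returns: y is partially driven by x, plus noise
x = pd.Series(rng.normal(0, 0.01, 300), index=idx)
y = 0.5 * x + pd.Series(rng.normal(0, 0.01, 300), index=idx)

# Modern spelling of pd.rolling_corr(x, y, 60):
moving_corr = x.rolling(60).corr(y)
```

The first window - 1 values are NaN (not enough observations yet), and every defined value is a valid correlation in [-1, 1].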
Advanced NumPy

ndarray Object Internals

The NumPy ndarray provides a means to interpret a block of homogeneous data (either contiguous or strided, more on this later) as a multidimensional array object. As you've seen, the data type, or dtype, determines how the data is interpreted as being floating point, integer, boolean, or any of the other types we've been looking at.

Part of what makes ndarray powerful is that every array object is a strided view on a block of data. You might wonder, for example, how the array view arr[::2, ::-1] does not copy any data. Simply put, the ndarray is more than just a chunk of memory and a dtype; it also has striding information which enables the array to move through memory with varying step sizes. More precisely, the ndarray internally consists of the following:

A pointer to data, that is, a block of system memory

The data type, or dtype

A tuple indicating the array's shape; for example, a 10 by 5 array would have shape (10, 5)

In [ ]: np.ones((10, 5)).shape
Out[ ]: (10, 5)

A tuple of strides; integers indicating the number of bytes to "step" in order to advance one element along a dimension; for example, a typical (C order, more on this later) 3 x 4 x 5 array of float64 (8-byte) values has strides (160, 40, 8)

In [ ]: np.ones((3, 4, 5), dtype=np.float64).strides
Out[ ]: (160, 40, 8)

While it is rare that a typical NumPy user would be interested in the array strides, they are the critical ingredient in constructing copyless array views. Strides can even be negative, which enables an array to move backward through memory, which would be the case in a slice like obj[::-1] or obj[:, ::-1].
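The strides arithmetic above can be checked directly; this sketch also shows how the copyless view arr[::2, ::-1] mentioned earlier is realized purely by changing strides (one of them to a negative value):

```python
import numpy as np

# float64 is 8 bytes; in C order the strides of a 3 x 4 x 5 array are
# (4*5*8, 5*8, 8) = (160, 40, 8)
arr = np.ones((3, 4, 5), dtype=np.float64)
strides = arr.strides

# Slicing produces a strided view over the same memory -- no copy.
# Stepping by 2 on axis 0 doubles that stride; reversing axis 1
# simply negates its stride.
view = arr[::2, ::-1]
view_strides = view.strides          # (320, -40, 8)
shares = np.shares_memory(arr, view)
```

Because the view shares memory, writing through it would modify arr as well.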
Figure: the NumPy ndarray object

NumPy dtype Hierarchy

You may occasionally have code which needs to check whether an array contains integers, floating point numbers, strings, or Python objects. Because there are many types of floating point numbers (float16 through float128), checking that the dtype is among a list of types would be very verbose. Fortunately, the dtypes have superclasses such as np.integer and np.floating which can be used in conjunction with the np.issubdtype function:

In [ ]: ints = np.ones(10, dtype=np.uint16)

In [ ]: floats = np.ones(10, dtype=np.float32)

In [ ]: np.issubdtype(ints.dtype, np.integer)
Out[ ]: True

In [ ]: np.issubdtype(floats.dtype, np.floating)
Out[ ]: True

You can see all of the parent classes of a specific dtype by calling the type's mro method:

In [ ]: np.float64.mro()
Out[ ]:
[numpy.float64,
 numpy.floating,
 numpy.inexact,
 numpy.number,
 numpy.generic,
 float,
 object]

Most NumPy users will never have to know about this, but it occasionally comes in handy. See the figure of the dtype hierarchy for a graph of the parent-subclass relationships.

Some of the dtypes have trailing underscores in their names. These are there to avoid variable name conflicts between the NumPy-specific types and the Python built-in ones.
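The practical payoff of the superclass check is dtype-based dispatch without enumerating every concrete type. A minimal sketch (the function name describe_kind is invented for illustration):

```python
import numpy as np

def describe_kind(arr):
    # Dispatch on dtype superclass rather than listing every concrete
    # dtype (uint8, int16, ..., float16, float32, ...)
    if np.issubdtype(arr.dtype, np.integer):
        return 'integer'
    elif np.issubdtype(arr.dtype, np.floating):
        return 'floating'
    return 'other'

kinds = [describe_kind(np.ones(3, dtype=dt))
         for dt in (np.uint8, np.int64, np.float32, np.complex128)]
```

Note that complex128 is an np.inexact subclass but not np.floating, so it falls through to 'other'.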
Advanced Array Manipulation

There are many ways to work with arrays beyond fancy indexing, slicing, and boolean subsetting. While much of the heavy lifting for data analysis applications is handled by higher level functions in pandas, you may at some point need to write a data algorithm that is not found in one of the existing libraries.

Reshaping Arrays

Given what we know about NumPy arrays, it should come as little surprise that you can convert an array from one shape to another without copying any data. To do this, pass a tuple indicating the new shape to the reshape array instance method. For example, suppose we had a one-dimensional array of values that we wished to rearrange into a matrix:

In [ ]: arr = np.arange(8)

In [ ]: arr
Out[ ]: array([0, 1, 2, 3, 4, 5, 6, 7])

In [ ]: arr.reshape((4, 2))
Out[ ]:
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7]])

A multidimensional array can also be reshaped:

In [ ]: arr.reshape((4, 2)).reshape((2, 4))
Out[ ]:
array([[0, 1, 2, 3],
       [4, 5, 6, 7]])
One of the passed shape dimensions can be -1, in which case the value used for that dimension will be inferred from the data:

In [ ]: arr = np.arange(15)

In [ ]: arr.reshape((5, -1))
Out[ ]:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11],
       [12, 13, 14]])

Since an array's shape attribute is a tuple, it can be passed to reshape, too:

In [ ]: other_arr = np.ones((3, 5))

In [ ]: other_arr.shape
Out[ ]: (3, 5)

In [ ]: arr.reshape(other_arr.shape)
Out[ ]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])

The opposite operation of reshape from one-dimensional to a higher dimension is typically known as flattening or raveling:

In [ ]: arr = np.arange(15).reshape((5, 3))

In [ ]: arr.ravel()
Out[ ]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14])

ravel does not produce a copy of the underlying data if it does not have to (more on this below). The flatten method behaves like ravel except it always returns a copy of the data:

In [ ]: arr.flatten()
Out[ ]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14])

The data can be reshaped or raveled in different orders. This is a slightly nuanced topic for new NumPy users and is therefore the next subtopic.

C versus Fortran Order

Contrary to some other scientific computing environments like R and MATLAB, NumPy gives you much more control and flexibility over the layout of your data in
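The view-versus-copy distinction between ravel and flatten can be demonstrated by mutating each result and checking whether the original array changes:

```python
import numpy as np

arr = np.arange(15)
mat = arr.reshape((5, -1))   # shape inferred as (5, 3); a view on arr

# ravel of a C-contiguous array is a view; flatten always copies
rv = mat.ravel()
fl = mat.flatten()

rv[0] = 99                   # write through the view...
ravel_is_view = (arr[0] == 99)   # ...visible in the original

fl[1] = 77                   # write into the copy...
flatten_is_copy = (arr[1] != 77) # ...original is untouched
```

If mat were not C-contiguous (say, a transposed array), ravel would be forced to copy too; the no-copy behavior is an optimization, not a guarantee.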
memory. By default, NumPy arrays are created in row major order. Spatially this means that if you have a two-dimensional array of data, the items in each row of the array are stored in adjacent memory locations. The alternative to row major ordering is column major order, which means that (you guessed it) values within each column of data are stored in adjacent memory locations.

For historical reasons, row and column major order are also known as C and Fortran order, respectively. In Fortran, the language of our forebears, matrices were all column major.

Functions like reshape and ravel accept an order argument indicating the order to use the data in the array. This can be 'C' or 'F' in most cases (there are also less commonly-used options 'A' and 'K'; see the NumPy documentation). These are illustrated in the figure below:

In [ ]: arr = np.arange(12).reshape((3, 4))

In [ ]: arr
Out[ ]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

In [ ]: arr.ravel()
Out[ ]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

In [ ]: arr.ravel('F')
Out[ ]: array([ 0,  4,  8,  1,  5,  9,  2,  6, 10,  3,  7, 11])

Reshaping arrays with more than two dimensions can be a bit mind-bending. The key difference between C and Fortran order is the order in which the dimensions are walked:

C / row major order: traverse higher dimensions first (e.g. axis 1 before advancing on axis 0)

Fortran / column major order: traverse higher dimensions last (e.g. axis 0 before advancing on axis 1)

Concatenating and Splitting Arrays

numpy.concatenate takes a sequence (tuple, list, etc.) of arrays and joins them together in order along the input axis:

In [ ]: arr1 = np.array([[1, 2, 3], [4, 5, 6]])

In [ ]: arr2 = np.array([[7, 8, 9], [10, 11, 12]])

In [ ]: np.concatenate([arr1, arr2], axis=0)
Out[ ]:
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [10, 11, 12]])

In [ ]: np.concatenate([arr1, arr2], axis=1)
Out[ ]:
array([[ 1,  2,  3,  7,  8,  9],
       [ 4,  5,  6, 10, 11, 12]])

Figure: reshaping in C (row major) or Fortran (column major) order

There are some convenience functions, like vstack and hstack, for common kinds of concatenation. The above operations could have been expressed as:

In [ ]: np.vstack((arr1, arr2))
Out[ ]:
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [10, 11, 12]])

In [ ]: np.hstack((arr1, arr2))
Out[ ]:
array([[ 1,  2,  3,  7,  8,  9],
       [ 4,  5,  6, 10, 11, 12]])

split, on the other hand, slices apart an array into multiple arrays along an axis:

In [ ]: from numpy.random import randn

In [ ]: arr = randn(5, 2)

In [ ]: arr

In [ ]: first, second, third = np.split(arr, [1, 3])

In [ ]: first
In [ ]: second

In [ ]: third

See the table below for a list of all relevant concatenation and splitting functions, some of which are provided only as a convenience of the very general purpose concatenate.

Table: array concatenation functions

concatenate -- most general function; concatenates a collection of arrays along one axis
vstack, row_stack -- stack arrays row-wise (along axis 0)
hstack -- stack arrays column-wise (along axis 1)
column_stack -- like hstack, but converts 1D arrays to 2D column vectors first
dstack -- stack arrays "depth"-wise (along axis 2)
split -- split array at passed locations along a particular axis
hsplit / vsplit / dsplit -- convenience functions for splitting on axis 0, 1, and 2, respectively

Stacking helpers: r_ and c_

There are two special objects in the NumPy namespace, r_ and c_, that make stacking arrays more concise:

In [ ]: arr = np.arange(6)

In [ ]: arr1 = arr.reshape((3, 2))

In [ ]: arr2 = randn(3, 2)

In [ ]: np.r_[arr1, arr2]

In [ ]: np.c_[np.r_[arr1, arr2], arr]

These additionally can translate slices to arrays:

In [ ]: np.c_[1:6, -10:-5]
Out[ ]:
array([[  1, -10],
       [  2,  -9],
       [  3,  -8],
       [  4,  -7],
       [  5,  -6]])

See the docstring for more on what you can do with c_ and r_.
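A deterministic sketch of the r_/c_ idioms above, using small integer arrays instead of random data so the shapes are easy to follow:

```python
import numpy as np

a = np.arange(6).reshape((3, 2))
b = np.arange(6, 12).reshape((3, 2))

stacked = np.r_[a, b]                      # row-wise, like vstack -> (6, 2)
with_col = np.c_[stacked, np.arange(6)]    # append a column -> (6, 3)
```

r_ and c_ are index objects, not functions, which is why they take square brackets; that is also what lets them accept slice notation like np.c_[1:6, -10:-5].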
Repeating Elements: tile and repeat

The need to replicate or repeat arrays is less common with NumPy than it is with other popular array programming languages like MATLAB. The main reason for this is that broadcasting fulfills this need better, which is the subject of the next section. The two main tools for repeating or replicating arrays to produce larger arrays are the repeat and tile functions. repeat replicates each element in an array some number of times, producing a larger array:

In [ ]: arr = np.arange(3)

In [ ]: arr.repeat(3)
Out[ ]: array([0, 0, 0, 1, 1, 1, 2, 2, 2])

By default, if you pass an integer, each element will be repeated that number of times. If you pass an array of integers, each element can be repeated a different number of times:

In [ ]: arr.repeat([2, 3, 4])
Out[ ]: array([0, 0, 1, 1, 1, 2, 2, 2, 2])

Multidimensional arrays can have their elements repeated along a particular axis:

In [ ]: arr = randn(2, 2)

In [ ]: arr

In [ ]: arr.repeat(2, axis=0)

Note that if no axis is passed, the array will be flattened first, which is likely not what you want. Similarly, you can pass an array of integers when repeating a multidimensional array to repeat a given slice a different number of times:

In [ ]: arr.repeat([2, 3], axis=0)

In [ ]: arr.repeat([2, 3], axis=1)
tile, on the other hand, is a shortcut for stacking copies of an array along an axis. You can visually think about it as like "laying down tiles":

In [ ]: arr

In [ ]: np.tile(arr, 2)

The second argument is the number of tiles; with a scalar, the tiling is made row-by-row, rather than column by column. The second argument to tile can be a tuple indicating the layout of the "tiling":

In [ ]: np.tile(arr, (2, 1))

In [ ]: np.tile(arr, (3, 2))

Fancy Indexing Equivalents: take and put

As you may recall, one way to get and set subsets of arrays is by fancy indexing using integer arrays:

In [ ]: arr = np.arange(10) * 100

In [ ]: inds = [7, 1, 2, 6]

In [ ]: arr[inds]
Out[ ]: array([700, 100, 200, 600])

There are alternate ndarray methods that are useful in the special case of only making a selection on a single axis:

In [ ]: arr.take(inds)
Out[ ]: array([700, 100, 200, 600])

In [ ]: arr.put(inds, 42)

In [ ]: arr
Out[ ]: array([  0,  42,  42, 300, 400, 500,  42,  42, 800, 900])

In [ ]: arr.put(inds, [40, 41, 42, 43])
In [ ]: arr

To use take along other axes, you can pass the axis keyword:

In [ ]: inds = [2, 0, 2, 1]

In [ ]: arr = randn(2, 4)

In [ ]: arr

In [ ]: arr.take(inds, axis=1)

put does not accept an axis argument but rather indexes into the flattened (one-dimensional, C order) version of the array (this could be changed in principle). Thus, when you need to set elements using an index array on other axes, you will want to use fancy indexing.

As of this writing, the take and put functions in general have better performance than their fancy indexing equivalents by a significant margin. I regard this as a "bug" and something to be fixed in NumPy, but it's something worth keeping in mind if you're selecting subsets of large arrays using integer arrays:

In [ ]: arr = randn(1000, 50)

# Random sample of 500 rows
In [ ]: inds = np.random.permutation(1000)[:500]

In [ ]: %timeit arr[inds]

In [ ]: %timeit arr.take(inds, axis=0)

Broadcasting

Broadcasting describes how arithmetic works between arrays of different shapes. It is a very powerful feature, but one that can be easily misunderstood, even by experienced users. The simplest example of broadcasting occurs when combining a scalar value with an array:

In [ ]: arr = np.arange(5)

In [ ]: arr
Out[ ]: array([0, 1, 2, 3, 4])

In [ ]: arr * 4
Out[ ]: array([ 0,  4,  8, 12, 16])
Here we say that the scalar value 4 has been broadcast to all of the other elements in the multiplication operation. For example, we can demean each column of an array by subtracting the column means. In this case, it is very simple:

In [ ]: arr = randn(4, 3)

In [ ]: arr.mean(0)

In [ ]: demeaned = arr - arr.mean(0)

In [ ]: demeaned

In [ ]: demeaned.mean(0)

See the figure below for an illustration of this operation. Demeaning the rows as a broadcast operation requires a bit more care. Fortunately, broadcasting potentially lower dimensional values across any dimension of an array (like subtracting the row means from each column of a two-dimensional array) is possible as long as you follow the rules. This brings us to:

Figure: broadcasting over axis 0 with a 1D array

The Broadcasting Rule

Two arrays are compatible for broadcasting if for each trailing dimension (that is, starting from the end), the axis lengths match or if either of the lengths is 1. Broadcasting is then performed over the missing and / or length 1 dimensions.

Even as an experienced NumPy user, I often must stop to draw pictures and think about the broadcasting rule. Consider the last example and suppose we wished instead to subtract the mean value from each row. Since arr.mean(0) has length 3, it is compatible
for broadcasting across axis 0 because the trailing dimension in arr is 3 and therefore matches. According to the rules, to subtract over axis 1 (that is, subtract the row mean from each row), the smaller array must have shape (4, 1):

In [ ]: arr

In [ ]: row_means = arr.mean(1)

In [ ]: row_means.reshape((4, 1))

In [ ]: demeaned = arr - row_means.reshape((4, 1))

In [ ]: demeaned.mean(1)

Has your head exploded yet? See the figure below for an illustration of this operation.

Figure: broadcasting over axis 1 of a 2D array

See the later figure for another illustration, this time subtracting a two-dimensional array from a three-dimensional one across axis 0.

Broadcasting Over Other Axes

Broadcasting with higher dimensional arrays can seem even more mind-bending, but it is really a matter of following the rules. If you don't, you'll get an error like this:

In [ ]: arr - arr.mean(1)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
ValueError: operands could not be broadcast together with shapes (4,3) (4,)

Figure: broadcasting over axis 0 of a 3D array

It's quite common to want to perform an arithmetic operation with a lower dimensional array across axes other than axis 0. According to the broadcasting rule, the "broadcast dimensions" must be 1 in the smaller array. In the example of row demeaning above, this meant reshaping the row means to be shape (4, 1) instead of (4,):

In [ ]: arr - arr.mean(1).reshape((4, 1))

In the three-dimensional case, broadcasting over any of the three dimensions is only a matter of reshaping the data to be shape-compatible. See the figure below for a nice visualization of the shapes required to broadcast over each axis of a three-dimensional array.

A very common problem, therefore, is needing to add a new axis with length 1 specifically for broadcasting purposes, especially in generic algorithms. Using reshape is one option, but inserting an axis requires constructing a tuple indicating the new shape. This can often be a tedious exercise. Thus, NumPy arrays offer a special syntax for inserting new axes by indexing. We use the special np.newaxis attribute along with "full" slices to insert the new axis:

In [ ]: arr = np.zeros((4, 4))

In [ ]: arr_3d = arr[:, np.newaxis, :]

In [ ]: arr_3d.shape
Out[ ]: (4, 1, 4)

In [ ]: arr_1d = np.random.normal(size=3)

In [ ]: arr_1d[:, np.newaxis]

In [ ]: arr_1d[np.newaxis, :]
Figure: compatible 2D array shapes for broadcasting over a 3D array

Thus, if we had a three-dimensional array and wanted to demean axis 2, say, we would only need to write:

In [ ]: arr = randn(3, 4, 5)

In [ ]: depth_means = arr.mean(2)

In [ ]: depth_means

In [ ]: demeaned = arr - depth_means[:, :, np.newaxis]

In [ ]: demeaned.mean(2)

If you're completely confused by this, don't worry. With practice you will get the hang of it!
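The axis-2 demeaning pattern is easy to verify numerically; after broadcasting the (3, 4, 1)-shaped means against the (3, 4, 5) array, the residual means along axis 2 are zero up to floating point error (the data below is a seeded random array, chosen only so the result is reproducible):

```python
import numpy as np

rng = np.random.RandomState(1)
arr = rng.standard_normal((3, 4, 5))

# Demean along axis 2: the mean has shape (3, 4); inserting a trailing
# length-1 axis makes it (3, 4, 1), which broadcasts against (3, 4, 5)
depth_means = arr.mean(2)
demeaned = arr - depth_means[:, :, np.newaxis]

# Residual means along axis 2 should vanish to machine precision
max_resid = np.abs(demeaned.mean(2)).max()
```

The same pattern works for any axis: compute the mean over that axis, then re-insert a length-1 axis in its place before subtracting.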
You might wonder if there's a way to generalize demeaning over an axis without sacrificing performance. There is, in fact, but it requires some indexing gymnastics:

def demean_axis(arr, axis=0):
    means = arr.mean(axis)

    # This generalizes things like [:, :, np.newaxis] to N dimensions
    indexer = [slice(None)] * arr.ndim
    indexer[axis] = np.newaxis
    return arr - means[indexer]

Setting Array Values by Broadcasting

The same broadcasting rule governing arithmetic operations also applies to setting values via array indexing. In the simplest case, we can do things like:

In [ ]: arr = np.zeros((4, 3))

In [ ]: arr[:] = 5

In [ ]: arr
Out[ ]:
array([[ 5.,  5.,  5.],
       [ 5.,  5.,  5.],
       [ 5.,  5.,  5.],
       [ 5.,  5.,  5.]])

However, if we had a one-dimensional array of values we wanted to set into the columns of the array, we can do that as long as the shape is compatible:

In [ ]: col = np.array([1.28, -0.42, 0.44, 1.6])

In [ ]: arr[:] = col[:, np.newaxis]

In [ ]: arr

In [ ]: arr[:2] = [[-1.37], [0.509]]

In [ ]: arr

Advanced ufunc Usage

While many NumPy users will only make use of the fast element-wise operations provided by the universal functions, there are a number of additional features that occasionally can help you write more concise code without loops.
12,129 | each of numpy' binary ufuncs has special methods for performing certain kinds of special vectorized operations these are summarized in table - but 'll give few concrete examples to illustrate how they work reduce takes single array and aggregates its valuesoptionally along an axisby performing sequence of binary operations for examplean alternate way to sum elements in an array is to use np add reducein [ ]arr np arange( in [ ]np add reduce(arrout[ ] in [ ]arr sum(out[ ] the starting value ( for adddepends on the ufunc if an axis is passedthe reduction is performed along that axis this allows you to answer certain kinds of questions in concise way as less trivial examplewe can use np logical_and to check whether the values in each row of an array are sortedin [ ]arr randn( in [ ]arr[:: sort( sort few rows in [ ]arr[::- arr[: :out[ ]array([truetruetruetrue][falsetruefalsefalse]truetruetruetrue]truefalsetruetrue]truetruetruetrue]]dtype=boolin [ ]np logical_and reduce(arr[::- arr[: :]axis= out[ ]array(truefalsetruefalsetrue]dtype=boolof courselogical_and reduce is equivalent to the all method accumulate is related to reduce like cumsum is related to sum it produces an array of the same size with the intermediate "accumulatedvaluesin [ ]arr np arange( reshape(( )in [ ]np add accumulate(arraxis= out[ ]array([ ] ][ ]]outer performs pairwise cross-product between two arraysin [ ]arr np arange( repeat([ ] advanced numpy www it-ebooks info |
12,130 | out[ ]array([ ]in [ ]np multiply outer(arrnp arange( )out[ ]array([[ ][ ][ ][ ][ ]]the output of outer will have dimension that is the sum of the dimensions of the inputsin [ ]result np subtract outer(randn( )randn( )in [ ]result shape out[ ]( the last methodreduceatperforms "local reduce"in essence an array groupby operation in which slices of the array are aggregated together while it' less flexible than the groupby capabilities in pandasit can be very fast and powerful in the right circumstances it accepts sequence of "bin edgeswhich indicate how to split and aggregate the valuesin [ ]arr np arange( in [ ]np add reduceat(arr[ ]out[ ]array([ ]the results are the reductions (heresumsperformed over arr[ : ]arr[ : ]and arr[ :like the other methodsyou can pass an axis argumentin [ ]arr np multiply outer(np arange( )np arange( )in [ ]arr out[ ]array([ in [ ]np add reduceat(arr[ ]axis= out[ ]array([ ] ] ] ]] ] ] ] ]]table - ufunc methods method description reduce(xaggregate values by successive applications of the operation accumulate(xaggregate valuespreserving all partial aggregates reduceat(xbins"localreduce or "group byreduce contiguous slices of data to produce aggregated array outer(xyapply operation to all pairs of elements in and result array has shape shape shape advanced ufunc usage www it-ebooks info |
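The four ufunc methods summarized in the table above can be exercised together in one short sketch:

```python
import numpy as np

arr = np.arange(10)

# reduce: repeated binary application; same as arr.sum() for np.add
total = np.add.reduce(arr)

# accumulate: keeps the intermediate results, like cumsum
partials = np.add.accumulate(np.arange(5))

# outer: result has one axis per input; shape (3, 4) here
table = np.multiply.outer(np.arange(3), np.arange(4))

# reduceat: "local" reductions over the slices [0:3], [3:5], [5:8]
local_sums = np.add.reduceat(np.arange(8), [0, 3, 5])
```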
12,131 | there are couple facilities for creating your own functions with ufunc-like semantics numpy frompyfunc accepts python function along with specification for the number of inputs and outputs for examplea simple function that adds element-wise would be specified asin [ ]def add_elements(xy)return in [ ]add_them np frompyfunc(add_elements in [ ]add_them(np arange( )np arange( )out[ ]array([ ]dtype=objectfunctions created using frompyfunc always return arrays of python objects which isn' very convenient fortunatelythere is an alternatebut slightly less featureful function numpy vectorize that is bit more intelligent about type inferencein [ ]add_them np vectorize(add_elementsotypes=[np float ]in [ ]add_them(np arange( )np arange( )out[ ]array( ]these functions provide way to create ufunc-like functionsbut they are very slow because they require python function call to compute each elementwhich is lot slower than numpy' -based ufunc loopsin [ ]arr randn( in [ ]%timeit add_them(arrarr loopsbest of ms per loop in [ ]%timeit np add(arrarr loopsbest of us per loop there are number of projects under way in the scientific python community to make it easier to define new ufuncs whose performance is closer to that of the built-in ones structured and record arrays you may have noticed up until now that ndarray is homogeneous data containerthat isit represents block of memory in which each element takes up the same number of bytesdetermined by the dtype on the surfacethis would appear to not allow you to represent heterogeneous or tabular-like data structured array is an ndarray in which each element can be thought of as representing struct in (hence the "structurednameor row in sql table with multiple named fieldsin [ ]dtype [(' 'np float )(' 'np int )in [ ]sarr np array([( )(np pi- )]dtype=dtype advanced numpy www it-ebooks info |
12,132 | out[ ]array([( )( - )]dtype=[(' ''< ')(' ''< ')]there are several ways to specify structured dtype (see the online numpy documentationone typical way is as list of tuples with (field_namefield_data_typenowthe elements of the array are tuple-like objects whose elements can be accessed like dictionaryin [ ]sarr[ out[ ]( in [ ]sarr[ ][' 'out[ ] the field names are stored in the dtype names attribute on accessing field on the structured arraya strided view on the data is returned thus copying nothingin [ ]sarr[' 'out[ ]array( ]nested dtypes and multidimensional fields when specifying structured dtypeyou can additionally pass shape (as an int or tuple)in [ ]dtype [(' 'np int )(' 'np int )in [ ]arr np zeros( dtype=dtypein [ ]arr out[ ]array([([ ] )([ ] )([ ] )([ ] )]dtype=[(' ''< '( ,))(' ''< ')]in this casethe field now refers to an array of length three for each recordin [ ]arr[ ][' 'out[ ]array([ ]convenientlyaccessing arr[' 'then returns two-dimensional array instead of one-dimensional array as in prior examplesin [ ]arr[' 'out[ ]array([[ ][ ][ ][ ]]this enables you to express more complicatednested structures as single block of memory in an array thoughsince dtypes can be arbitrarily complexwhy not nested dtypeshere is simple examplestructured and record arrays www it-ebooks info |
12,133 | in [ ]data np array([(( ) )(( ) )]dtype=dtypein [ ]data[' 'out[ ]array([( )( )]dtype=[(' ''< ')(' ''< ')]in [ ]data[' 'out[ ]array([ ]dtype=int in [ ]data[' '][' 'out[ ]array( ]as you can seevariable-shape fields and nested records is very rich feature that can be the right tool in certain circumstances dataframe from pandasby contrastdoes not support this feature directlythough it is similar to hierarchical indexing why use structured arrayscompared withsaya dataframe from pandasnumpy structured arrays are comparatively low-level tool they provide means to interpreting block of memory as tabular structure with arbitrarily complex nested columns since each element in the array is represented in memory as fixed number of bytesstructured arrays provide very fast and efficient way of writing data to and from disk (including memory mapsmore on this later)transporting it over the networkand other such use as another common use for structured arrayswriting data files as fixed length record byte streams is common way to serialize data in and +codewhich is commonly found in legacy systems in industry as long as the format of the file is known (the size of each record and the orderbyte sizeand data type of each element)the data can be read into memory using np fromfile specialized uses like this are beyond the scope of this bookbut it' worth knowing that such things are possible structured array manipulationsnumpy lib recfunctions while there is not as much functionality available for structured arrays as for dataframesthe numpy module numpy lib recfunctions has some helpful tools for adding and dropping fields or doing basic join-like operations the thing to remember with these tools is that it is typically necessary to create new array to make any modifications to the dtype (like adding or dropping columnthese functions are left to the interested reader to explore as do not use them anywhere in this book advanced numpy www it-ebooks info |
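A compact, self-contained version of the structured and nested dtype examples above (field names here are illustrative):

```python
import numpy as np

# Plain structured dtype: named fields, one record per element.
sarr = np.array([(1.5, 6), (3.14, -2)],
                dtype=[('x', np.float64), ('y', np.int32)])
xs = sarr['x']               # view on one field, no copy

# Nested dtype: field 'a' is itself a two-field struct.
nested = np.array([((1.0, 2.0), 5), ((3.0, 4.0), 6)],
                  dtype=[('a', [('p', np.float64), ('q', np.float64)]),
                         ('b', np.int32)])
inner = nested['a']['p']     # drill into the nested field
```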
12,134 | like python' built-in listthe ndarray sort instance method is an in-place sortmeaning that the array contents are rearranged without producing new arrayin [ ]arr randn( in [ ]arr sort(in [ ]arr out[ ]array([- ]when sorting arrays in-placeremember that if the array is view on different ndarraythe original array will be modifiedin [ ]arr randn( in [ ]arr out[ ]array([[- - [- - - - ] - ] - ]]in [ ]arr[: sort(sort first column values in-place in [ ]arr out[ ]array([[- - [- - - - ] - ] - ]]on the other handnumpy sort creates newsorted copy of an array otherwise it accepts the same arguments (such as kindmore on this belowas ndarray sortin [ ]arr randn( in [ ]arr out[ ]array([- - - - ]in [ ]np sort(arrout[ ]array([- - - - ]in [ ]arr out[ ]array([- - - - ]all of these sort methods take an axis argument for sorting the sections of data along the passed axis independentlyin [ ]arr randn( in [ ]arr out[ ]array([ - - ][- - - ] - - - ]]more about sorting www it-ebooks info |
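The distinction between the in-place ndarray.sort method and the copying np.sort function, plus the axis argument, can be sketched as:

```python
import numpy as np

arr = np.array([3.0, 1.0, 2.0])
sorted_copy = np.sort(arr)     # new array; arr is untouched
before = arr.tolist()
arr.sort()                     # in-place: rearranges arr itself

# The axis argument sorts each section independently.
rows = np.array([[3, 1, 2], [6, 5, 4]])
rows.sort(axis=1)              # sort within each row
```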
12,135 | in [ ]arr out[ ]array([[- - [- - - [- - - ] ] ]]you may notice that none of the sort methods have an option to sort in descending order this is not actually big deal because array slicing produces viewsthus not producing copy or requiring any computational work many python users are familiar with the "trickthat for list valuesvalues[::- returns list in reverse order the same is true for ndarraysin [ ]arr[:::- out[ ]array([ - - ] - - - ] - - - ]]indirect sortsargsort and lexsort in data analysis it' very common to need to reorder data sets by one or more keys for examplea table of data about some students might need to be sorted by last name then by first name this is an example of an indirect sortand if you've read the pandasrelated you have already seen many higher-level examples given key or keys (an array or values or multiple arrays of values)you wish to obtain an array of integer indices ( refer to them colloquially as indexersthat tells you how to reorder the data to be in sorted order the two main methods for this are argsort and numpy lexsort as trivial examplein [ ]values np array([ ]in [ ]indexer values argsort(in [ ]indexer out[ ]array([ ]in [ ]values[indexerout[ ]array([ ]as less trivial examplethis code reorders array by its first rowin [ ]arr randn( in [ ]arr[ values in [ ]arr out[ ]array([ [- - [- ] - ] - ]] advanced numpy www it-ebooks info |
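The indirect-sort and reversed-view tricks above, condensed into a tested sketch:

```python
import numpy as np

values = np.array([5, 0, 1, 3, 2])

# argsort returns an "indexer" that puts values in sorted order.
indexer = values.argsort()

# Descending order: slice a sorted copy in reverse (a view, no copy).
descending = np.sort(values)[::-1]

# Reorder the columns of a 2-D array by its first row.
arr = np.array([[3, 1, 2],
                [30, 10, 20]])
by_first_row = arr[:, arr[0].argsort()]
```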
12,136 | out[ ]array([ ][- - - ] - - ]]lexsort is similar to argsortbut it performs an indirect lexicographical sort on multiple key arrays suppose we wanted to sort some data identified by first and last namesin [ ]first_name np array(['bob''jane''steve''bill''barbara']in [ ]last_name np array(['jones''arnold''arnold''jones''walters']in [ ]sorter np lexsort((first_namelast_name)in [ ]zip(last_name[sorter]first_name[sorter]out[ ][('arnold''jane')('arnold''steve')('jones''bill')('jones''bob')('walters''barbara')lexsort can be bit confusing the first time you use it because the order in which the keys are used to order the data starts with the last array passed as you can seelast_name was used before first_name pandas methods like series' and dataframe' sort_index methods and the series order method are implemented with variants of these functions (which also must take into account missing valuesalternate sort algorithms stable sorting algorithm preserves the relative position of equal elements this can be especially important in indirect sorts where the relative ordering is meaningfulin [ ]values np array([' :first'' :second'' :first'' :second'' :third']in [ ]key np array([ ]in [ ]indexer key argsort(kind='mergesort'in [ ]indexer out[ ]array([ ]in [ ]values take(indexerout[ ]array([' :first'' :second'' :third'' :first'' :second']dtype='| 'the only stable sort available is mergesort which has guaranteed ( log nperformance (for complexity buffs)but its performance is on average worse than the default more about sorting www it-ebooks info |
12,137 | performance (and performance guaranteesthis is not something that most users will ever have to think about but useful to know that it' there table - array sorting methods kind speed stable work space worst-case 'quicksort no ( 'mergesort yes / ( log 'heapsort no ( log nat the time of this writingsort algorithms other than quicksort are not available on arrays of python objects (dtype=objectthis means occasionally that algorithms requiring stable sorting will require workarounds when dealing with python objects numpy searchsortedfinding elements in sorted array searchsorted is an array method that performs binary search on sorted arrayreturning the location in the array where the value would need to be inserted to maintain sortednessin [ ]arr np array([ ]in [ ]arr searchsorted( out[ ] as you might expectyou can also pass an array of values to get an array of indices backin [ ]arr searchsorted([ ]out[ ]array([ ]you might have noticed that searchsorted returned for the element this is because the default behavior is to return the index at the left side of group of equal valuesin [ ]arr np array([ ]in [ ]arr searchsorted([ ]out[ ]array([ ]in [ ]arr searchsorted([ ]side='right'out[ ]array([ ]as another application of searchsortedsuppose we had an array of values between and , and separate array of "bucket edgesthat we wanted to use to bin the datain [ ]data np floor(np random uniform( size= )in [ ]bins np array([ ]in [ ]data advanced numpy www it-ebooks info |
12,138 | array( ] to then get labeling of which interval each data point belongs to (where would mean the bucket [ ))we can simply use searchsortedin [ ]labels bins searchsorted(datain [ ]labels out[ ]array([ ]thiscombined with pandas' groupbycan be used to easily bin datain [ ]series(datagroupby(labelsmean(out[ ] note that numpy actually has function digitize that computes this bin labelingin [ ]np digitize(databinsout[ ]array([ ]numpy matrix class compared with other languages for matrix operations and linear algebralike matlabjuliaand gaussnumpy' linear algebra syntax can often be quite verbose one reason is that matrix multiplication requires using numpy dot also numpy' indexing semantics are differentwhich makes porting code to python less straightforward at times selecting single row ( [ :]or column ( [: ]from array yields array compared with array as insaymatlab in [ ] np array([ [- - ] ] ] ]]in [ ] [: one-dimensional out[ ]array( - ]in [ ] [:: two-dimensional by slicing numpy matrix class www it-ebooks info |
12,139 | out[ ]array([ [- - ] ] ] ]]in [ ] out[ ]array([ ] ][- ] ]]in this casethe product yt would be expressed like soin [ ]np dot( tnp dot(xy)out[ ]array([ ]]to aid in writing code with lot of matrix operationsnumpy has matrix class which has modified indexing behavior to make it more matlab-likesingle rows and columns come back two-dimensional and multiplication with is matrix multiplication the above operation with numpy matrix would look likein [ ]xm np matrix(xin [ ]ym xm[: in [ ]xm out[ ]matrix([ [- - ] ] ] ]]in [ ]ym out[ ]matrix([ ] ][- ] ]]in [ ]ym xm ym out[ ]matrix([ ]]matrix also has special attribute which returns the matrix inversein [ ]xm out[ ]matrix([ - - - ] ] ] ]] advanced numpy www it-ebooks info |
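Note that np.matrix is deprecated in modern NumPy; the @ operator (matrix multiplication on ordinary ndarrays) now covers this use case, and slicing with :1 keeps a column two-dimensional without any special class:

```python
import numpy as np

X = np.arange(16, dtype=float).reshape((4, 4))
y = X[:, :1]        # :1 keeps the column two-dimensional: shape (4, 1)

# y' X y as a chain of matrix products; @ works on plain ndarrays,
# so np.matrix is not needed.
result = y.T @ X @ y
```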
12,140 | they are generally more seldom used in individual functions with lots of linear algebrait may be helpful to convert the function argument to matrix typethen cast back to regular arrays with np asarray (which does not copy any databefore returning them advanced array input and output in introduced you to np save and np load for storing arrays in binary format on disk there are number of additional options to consider for more sophisticated use in particularmemory maps have the additional benefit of enabling you to work with data sets that do not fit into ram memory-mapped files memory-mapped file is method for treating potentially very large binary data on disk as an in-memory array numpy implements memmap object that is ndarray-likeenabling small segments of large file to be read and written without reading the whole array into memory additionallya memmap has the same methods as an in-memory array and thus can be substituted into many algorithms where an ndarray would be expected to create new memmapuse the function np memmap and pass file pathdtypeshapeand file modein [ ]mmap np memmap('mymmap'dtype='float 'mode=' +'shape=( )in [ ]mmap out[ ]memmap([ ] ] ] ] ] ]]slicing memmap returns views on the data on diskin [ ]section mmap[: if you assign data to theseit will be buffered in memory (like python file object)but can be written to disk by calling flushin [ ]section[:np random randn( in [ ]mmap flush(in [ ]mmap out[ ]memmap([[- - - - - ] - - - - ][- - - - ]advanced array input and output www it-ebooks info |
12,141 | ]]]]in [ ]del mmap whenever memory map falls out of scope and is garbage-collectedany changes will be flushed to disk also when opening an existing memory mapyou still have to specify the dtype and shape as the file is just block of binary data with no metadata on diskin [ ]mmap np memmap('mymmap'dtype='float 'shape=( )in [ ]mmap out[ ]memmap([[- - - - - ] - - - - ][- - - - ] ] ] ]]since memory map is just an on-disk ndarraythere are no issues using structured dtype as described above hdf and other array storage options pytables and py are two python projects providing numpy-friendly interfaces for storing array data in the efficient and compressible hdf format (hdf stands for hierarchical data formatyou can safely store hundreds of gigabytes or even terabytes of data in hdf format the use of these libraries is unfortunately outside the scope of the book pytables provides rich facility for working with structured arrays with advanced querying features and the ability to add column indexes to accelerate queries this is very similar to the table indexing capabilities provided by relational databases performance tips getting good performance out of code utilizing numpy is often straightforwardas array operations typically replace otherwise comparatively extremely slow pure python loops here is brief list of some of the things to keep in mindconvert python loops and conditional logic to array operations and boolean array operations use broadcasting whenever possible avoid copying data using array views (slicingutilize ufuncs and ufunc methods advanced numpy www it-ebooks info |
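The first tip in the list above — replacing loops and conditional logic with array operations — can be illustrated with a small before/after sketch:

```python
import numpy as np

x = np.arange(1000, dtype=float)

# Loop + conditional logic, element by element (slow).
loop_result = np.empty_like(x)
for i in range(len(x)):
    loop_result[i] = x[i] * 2 if x[i] % 2 == 0 else x[i]

# The same logic as one vectorized boolean operation (fast).
vec_result = np.where(x % 2 == 0, x * 2, x)
```

The two results are identical; the vectorized form pushes the loop into NumPy's C code.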
12,142 | by numpy alonewriting code in cfortranor especially cython (see bit more on this belowmay be in order personally use cython (own work as an easy way to get -like performance with minimal development the importance of contiguous memory while the full extent of this topic is bit outside the scope of this bookin some applications the memory layout of an array can significantly affect the speed of computations this is based partly on performance differences having to do with the cache hierarchy of the cpuoperations accessing contiguous blocks of memory (for examplesumming the rows of order arraywill generally be the fastest because the memory subsystem will buffer the appropriate blocks of memory into the ultrafast or cpu cache alsocertain code paths inside numpy' codebase have been optimized for the contiguous case in which generic strided memory access can be avoided to say that an array' memory layout is contiguous means that the elements are stored in memory in the order that they appear in the array with respect to fortran (column majoror (row majorordering by defaultnumpy arrays are created as -contiguous or just simply contiguous column major arraysuch as the transpose of ccontiguous arrayis thus said to be fortran-contiguous these properties can be explicitly checked via the flags attribute on the ndarrayin [ ]arr_c np ones(( )order=' 'in [ ]arr_f np ones(( )order=' 'in [ ]arr_c flags out[ ]c_contiguous true f_contiguous false owndata true writeable true aligned true updateifcopy false in [ ]arr_f flags out[ ]c_contiguous false f_contiguous true owndata true writeable true aligned true updateifcopy false in [ ]arr_f flags f_contiguous out[ ]true in this examplesumming the rows of these arrays shouldin theorybe faster for arr_c than arr_f since the rows are contiguous in memory here check for sure using %timeit in ipythonin [ ]%timeit arr_c sum( loopsbest of ms per loop in [ ]%timeit arr_f sum( loopsbest of ms per loop performance tips www it-ebooks 
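The contiguity properties discussed above can be checked directly through the flags attribute:

```python
import numpy as np

arr_c = np.ones((100, 100), order='C')   # row-major (the default)
arr_f = np.ones((100, 100), order='F')   # column-major

c_ok = arr_c.flags['C_CONTIGUOUS']
f_ok = arr_f.flags['F_CONTIGUOUS']

# Strided views generally lose contiguity.
view = arr_c[::2]

# Copying with an explicit order restores the desired layout.
relaid = arr_f.copy('C')
```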
12,143 | invest some effort if you have an array that does not have the desired memory orderyou can use copy and pass either 'cor ' 'in [ ]arr_f copy(' 'flags out[ ]c_contiguous true f_contiguous false owndata true writeable true aligned true updateifcopy false when constructing view on an arraykeep in mind that the result is not guaranteed to be contiguousin [ ]arr_c[: flags contiguous out[ ]true in [ ]arr_c[:: flags out[ ]c_contiguous false f_contiguous false owndata false writeable true aligned true updateifcopy false other speed optionscythonf pyc in recent yearsthe cython project ((for many scientific python programmers for implementing fast code that may need to interact with or +librariesbut without having to write pure code you can think of cython as python with static types and the ability to interleave functions implemented in into python-like code for examplea simple cython function to sum the elements of one-dimensional array might look likefrom numpy cimport ndarrayfloat _t def sum_elements(ndarray[float _tarr)cdef py_ssize_t in len(arrcdef float _t result for in range( )result +arr[ireturn result cython takes this codetranslates it to cthen compiles the generated code to create python extension cython is an attractive option for performance computing because the code is only slightly more time-consuming to write than pure python code and it integrates closely with numpy common workflow is to get an algorithm working in pythonthen translate it to cython by adding type declarations and handful of other tweaks for moresee the project documentation advanced numpy www it-ebooks info |
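The Cython sum_elements above compiles a statically typed loop to C; for comparison, here is a pure-Python sketch of the same loop (correct but slow, since each iteration pays interpreter overhead):

```python
import numpy as np

def sum_elements(arr):
    # Same loop as the Cython version, minus the static types.
    result = 0.0
    for x in arr:
        result += x
    return result

arr = np.array([1.0, 2.5, 3.5])
total = sum_elements(arr)
```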
12,144 | other options include f2py (a wrapper generator for fortran and c code) and writing pure c extensions performance tips www it-ebooks info |
12,146 | python language essentials knowledge is treasurebut practice is the key to it --thomas fuller people often ask me about good resources for learning python for data-centric applications while there are many excellent python language booksi am usually hesitant to recommend some of them as they are intended for general audience rather than tailored for someone who wants to load in some data setsdo some computationsand plot some of the results there are actually couple of books on "scientific programming in python"but they are geared toward numerical computing and engineering applicationssolving differential equationscomputing integralsdoing monte carlo simulationsand various topics that are more mathematically-oriented rather than being about data analysis and statistics as this is book about becoming proficient at working with data in pythoni think it is valuable to spend some time highlighting the most important features of python' built-in data structures and libraries from the perspective of processing and manipulating structured and unstructured data as suchi will only present roughly enough information to enable you to follow along with the rest of the book this is not intended to be an exhaustive introduction to the python language but rather biasedno-frills overview of features which are used repeatedly throughout this book for new python programmersi recommend that you supplement this with the official python tutorial (many excellent (and much longerbooks on general purpose python programming in my opinionit is not necessary to become proficient at building good software in python to be able to productively do data analysis encourage you to use ipython to experiment with the code examples and to explore the documentation for the various typesfunctionsand methods note that some of the code used in the examples may not necessarily be fully-introduced at this point much of this book focuses on high performance array-based computing tools for working 
with large data sets in order to use those tools you must often first do some munging to corral messy data into more nicely structured form fortunatelypython is one of www it-ebooks info |
12,147 | facility with pythonthe languagethe easier it will be for you to prepare new data sets for analysis the python interpreter python is an interpreted language the python interpreter runs program by executing one statement at time the standard interactive python interpreter can be invoked on the command line with the python commandpython python (defaultoct : : [gcc on linux type "help""copyright""creditsor "licensefor more information print the you see is the prompt where you'll type expressions to exit the python interpreter and return to the command promptyou can either type exit(or press ctrl- running python programs is as simple as calling python with py file as its first argument suppose we had created hello_world py with these contentsprint 'hello worldthis can be run from the terminal simply aspython hello_world py hello world while many python programmers execute all of their python code in this waymany scientific python programmers make use of ipythonan enhanced interactive python interpreter is dedicated to the ipython system by using the %run commandipython executes the code in the specified file in the same processenabling you to explore the results interactively when it' done ipython python |epd ( -bit)(defaultjul : : type "copyright""creditsor "licensefor more information ipython -an enhanced interactive python -introduction and overview of ipython' features %quickref -quick reference help -python' own help system object-details about 'object'use 'object??for extra details in [ ]%run hello_world py hello world in [ ] appendixpython language essentials www it-ebooks info |
12,148 | standard prompt the basics language semantics the python language design is distinguished by its emphasis on readabilitysimplicityand explicitness some people go so far as to liken it to "executable pseudocodeindentationnot braces python uses whitespace (tabs or spacesto structure code instead of using braces as in many other languages like rc++javaand perl take the for loop in the above quicksort algorithmfor in arrayif pivotless append(xelsegreater append(xa colon denotes the start of an indented code block after which all of the code must be indented by the same amount until the end of the block in another languageyou might instead have something likefor in array if pivot less append(xelse greater append(xone major reason that whitespace matters is that it results in most python code looking cosmetically similarwhich means less cognitive dissonance when you read piece of code that you didn' write yourself (or wrote in hurry year ago!in language without significant whitespaceyou might stumble on some differently formatted code likefor in array if pivot less append(xelse greater append(xthe basics www it-ebooks info |
12,149 | love it or hate itsignificant whitespace is fact of life for python programmersand in my experience it helps make python code lot more readable than other languages 've used while it may seem foreign at firsti suspect that it will grow on you after while strongly recommend that you use spaces to as your default indentation and that your editor replace tabs with spaces many text editors have setting that will replace tab stops with spaces automatically (do this!some people use tabs or different number of spaceswith spaces not being terribly uncommon spaces is by and large the standard adopted by the vast majority of python programmersso recommend doing that in the absence of compelling reason otherwise as you can see by nowpython statements also do not need to be terminated by semicolons semicolons can be usedhoweverto separate multiple statements on single linea putting multiple statements on one line is generally discouraged in python as it often makes code less readable everything is an object an important characteristic of the python language is the consistency of its object model every numberstringdata structurefunctionclassmoduleand so on exists in the python interpreter in its own "boxwhich is referred to as python object each object has an associated type (for examplestring or functionand internal data in practice this makes the language very flexibleas even functions can be treated just like any other object comments any text preceded by the hash mark (pound signis ignored by the python interpreter this is often used to add comments to code at times you may also want to exclude certain blocks of code without deleting them an easy solution is to comment out the coderesults [for line in file_handlekeep the empty lines for now if len(line= continue results append(line replace('foo''bar') appendixpython language essentials www it-ebooks info |
12,150 | functions are called using parentheses and passing zero or more argumentsoptionally assigning the returned value to variableresult (xyzg(almost every object in python has attached functionsknown as methodsthat have access to the object' internal contents they can be called using the syntaxobj some_method(xyzfunctions can take both positional and keyword argumentsresult (abcd= ='foo'more on this later variables and pass-by-reference when assigning variable (or namein pythonyou are creating reference to the object on the right hand side of the equals sign in practical termsconsider list of integersin [ ] [ suppose we assign to new variable bin [ ] in some languagesthis assignment would cause the data [ to be copied in pythona and actually now refer to the same objectthe original list [ (see figure - for mockupyou can prove this to yourself by appending an element to and then examining bin [ ] append( in [ ] out[ ][ figure - two references for the same object understanding the semantics of references in python and whenhowand why data is copied is especially critical when working with larger data sets in python the basics www it-ebooks info |
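The pass-by-reference behavior described above can be demonstrated end to end:

```python
def append_element(some_list, element):
    # The function receives a reference, not a copy,
    # so it mutates the caller's list.
    some_list.append(element)

data = [1, 2, 3]
b = data              # b and data refer to the SAME list object
append_element(data, 4)

copied = list(data)   # list() always creates a new list
copied.append(5)      # does not affect data
```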
12,151 | an object variables names that have been assigned may occasionally be referred to as bound variables when you pass objects as arguments to functionyou are only passing referencesno copying occurs thuspython is said to pass by referencewhereas some other languages support both pass by value (creating copiesand pass by reference this means that function can mutate the internals of its arguments suppose we had the following functiondef append_element(some_listelement)some_list append(elementthen given what' been saidthis should not come as surprisein [ ]data [ in [ ]append_element(data in [ ]data out[ ][ dynamic referencesstrong types in contrast with many compiled languagessuch as java and ++object references in python have no type associated with them there is no problem with the followingin [ ] in [ ]type(aout[ ]int in [ ] 'fooin [ ]type(aout[ ]str variables are names for objects within particular namespacethe type information is stored in the object itself some observers might hastily conclude that python is not "typed languagethis is not trueconsider this examplein [ ]' typeerror traceback (most recent call lastin (---- ' typeerrorcannot concatenate 'strand 'intobjects in some languagessuch as visual basicthe string ' might get implicitly converted (or castedto an integerthus yielding yet in other languagessuch as javascriptthe integer might be casted to stringyielding the concatenated string ' in this regard python is considered strongly-typed languagewhich means that every object has specific type (or class)and implicit conversions will occur only in certain obvious circumstancessuch as the following appendixpython language essentials www it-ebooks info |
12,152 | in [ ] string formattingto be visited later in [ ]print ' is %sb is % (type( )type( ) is is in [ ] out[ ] knowing the type of an object is importantand it' useful to be able to write functions that can handle many different kinds of input you can check that an object is an instance of particular type using the isinstance functionin [ ] in [ ]isinstance(aintout[ ]true isinstance can accept tuple of types if you want to check that an object' type is among those present in the tuplein [ ] in [ ]isinstance( (intfloat)out[ ]true in [ ]isinstance( (intfloat)out[ ]true attributes and methods objects in python typically have both attributesother python objects stored "insidethe objectand methodsfunctions associated with an object which can have access to the object' internal data both of them are accessed via the syntax obj attribute_namein [ ] 'fooin [ ] capitalize format center index count isalnum decode isalpha encode isdigit endswith islower expandtabs isspace find istitle isupper join ljust lower lstrip partition replace rfind rindex rjust rpartition rsplit rstrip split splitlines startswith strip swapcase title translate upper zfill attributes and methods can also be accessed by name using the getattr functiongetattr( 'split'while we will not extensively use the functions getattr and related functions hasattr and setattr in this bookthey can be used very effectively to write genericreusable code the basics www it-ebooks info |
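String-based attribute access with getattr and hasattr, mentioned above, can be sketched with a small hypothetical class (Greeter is invented for illustration only):

```python
class Greeter:
    # Hypothetical class used only to illustrate getattr/hasattr.
    def greet(self):
        return 'hello'

obj = Greeter()

method = getattr(obj, 'greet')    # look up a method by string name
result = method()

has_other = hasattr(obj, 'missing')   # False: no such attribute
```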
12,153 | often you may not care about the type of an object but rather only whether it has certain methods or behavior for exampleyou can verify that an object is iterable if it implemented the iterator protocol for many objectsthis means it has __iter__ "magic method"though an alternative and better way to check is to try using the iter functiondef isiterable(obj)tryiter(objreturn true except typeerrornot iterable return false this function would return true for strings as well as most python collection typesin [ ]isiterable(' string'out[ ]true in [ ]isiterable([ ]out[ ]true in [ ]isiterable( out[ ]false place where use this functionality all the time is to write functions that can accept multiple kinds of input common case is writing function that can accept any kind of sequence (listtuplendarrayor even an iterator you can first check if the object is list (or numpy arrayandif it is notconvert it to be oneif not isinstance(xlistand isiterable( ) list(ximports in python module is simply py file containing function and variable definitions along with such things imported from other py files suppose that we had the following modulesome_module py pi def ( )return def (ab)return if we wanted to access the variables and functions defined in some_module pyfrom another file in the same directory we could doimport some_module result some_module ( pi some_module pi or equivalentlyfrom some_module import fgpi result ( pi appendixpython language essentials www it-ebooks info |
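The isinstance-plus-isiterable pattern above can be wrapped in a small helper. This is a sketch with a hypothetical name, ensure_list, and it makes one labeled assumption: strings, though iterable, are treated as scalars rather than exploded into characters.

```python
def ensure_list(x):
    # assumption for this sketch: a string counts as a single value
    if isinstance(x, list):
        return x
    if isinstance(x, str):
        return [x]
    try:
        # any iterable (tuple, set, generator, ...) becomes a list
        return list(iter(x))
    except TypeError:
        # not iterable at all: wrap the scalar
        return [x]
```

A function accepting "any kind of sequence" can call this once at the top and then work with a plain list.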
12,154 | import some_module as sm from some_module import pi as pig as gf sm (pir gf( pibinary operators and comparisons most of the binary math operations and comparisons are as you might expectin [ ] out[ ]- in [ ] out[ ] in [ ] < out[ ]false see table - for all of the available binary operators to check if two references refer to the same objectuse the is keyword is not is also perfectly valid if you want to check that two objects are not the samein [ ] [ in [ ] notethe list function always creates new list in [ ] list(ain [ ] is out[ ]true in [ ] is not out[ ]true note this is not the same thing is comparing with ==because in this case we havein [ ] = out[ ]true very common use of is and is not is to check if variable is nonesince there is only one instance of nonein [ ] none in [ ] is none out[ ]true table - binary operators operation description add and subtract from multiply by divide by / floor-divide by bdropping any fractional remainder the basics www it-ebooks info |
12,155 | description * raise to the power true if both and are true for integerstake the bitwise and true if either or is true for integerstake the bitwise or for booleanstrue if or is truebut not both for integerstake the bitwise exclusive-or = true if equals ! true if is not equal to <ba true if is less than (less than or equalto ba > true if is greater than (greater than or equalto is true if and reference same python object is not true if and reference different python objects strictness versus laziness when using any programming languageit' important to understand when expressions are evaluated consider the simple expressiona in pythononce these statements are evaluatedthe calculation is immediately (or strictlycarried outsetting the value of to in another programming paradigmsuch as in pure functional programming language like haskellthe value of might not be evaluated until it is actually used elsewhere the idea of deferring computations in this way is commonly known as lazy evaluation pythonon the other handis very strict (or eagerlanguage nearly all of the timecomputations and expressions are evaluated immediately even in the above simple expressionthe result of is computed as separate step before adding it to there are python techniquesespecially using iterators and generatorswhich can be used to achieve laziness when performing very expensive computations which are only necessary some of the timethis can be an important technique in data-intensive applications mutable and immutable objects most objects in python are mutablesuch as listsdictsnumpy arraysor most userdefined types (classesthis means that the object or values that they contain can be modified in [ ]a_list ['foo' [ ]in [ ]a_list[ ( in [ ]a_list out[ ]['foo' ( ) appendixpython language essentials www it-ebooks info |
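The side effects described above follow from aliasing: binding a second name to a mutable object does not copy it, so changes made through either name are visible through both. A small sketch (values are illustrative):

```python
a = ['foo', 2, [4, 5]]
b = a                   # alias: b and a name the same list object
b.append('bar')         # mutates the one underlying object

a_sees_change = a[-1]   # 'bar' is visible through a as well

# a tuple is an immutable container, but a mutable element
# held inside it can still be modified in place
t = (3, 5, [1, 2])
t[2].append(3)
```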
12,156 | in [ ]a_tuple ( ( )in [ ]a_tuple[ 'fourtypeerror traceback (most recent call lastin (---- a_tuple[ 'fourtypeerror'tupleobject does not support item assignment remember that just because you can mutate an object does not mean that you always should such actions are known in programming as side effects for examplewhen writing functionany side effects should be explicitly communicated to the user in the function' documentation or comments if possiblei recommend trying to avoid side effects and favor immutabilityeven though there may be mutable objects involved scalar types python has small set of built-in types for handling numerical datastringsboolean (true or falsevaluesand dates and time see table - for list of the main scalar types date and time handling will be discussed separately as these are provided by the datetime module in the standard library table - standard python scalar types type description none the python "nullvalue (only one instance of the none object existsstr string type ascii-valued only in python and unicode in python unicode unicode string type float double-precision ( -bitfloating point number note there is no separate double type bool true or false value int signed integer with maximum value determined by the platform long arbitrary precision signed integer large int values are automatically converted to long numeric types the primary python types for numbers are int and float the size of the integer which can be stored as an int is dependent on your platform (whether or -bit)but python will transparently convert very large integer to longwhich can store arbitrarily large integers in [ ]ival in [ ]ival * out[ ] the basics www it-ebooks info |
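Following the int-to-long promotion described above (in Python 3 the two are unified into a single arbitrary-precision int type), here is a quick sketch showing that integer arithmetic never overflows or loses precision:

```python
big = 2 ** 100               # far beyond any fixed 64-bit range
digits = len(str(big))       # count its decimal digits
still_exact = big + 1 - big  # arithmetic on huge ints stays exact
```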
12,157 | each one is double-precision ( bitsvalue they can also be expressed using scientific notationin [ ]fval in [ ]fval - in python integer division not resulting in whole number will always yield floating point numberin [ ] out[ ] in python and below (which some readers will likely be using)you can enable this behavior by default by putting the following cryptic-looking statement at the top of your modulefrom __future__ import division without this in placeyou can always explicitly convert the denominator into floating point numberin [ ] float( out[ ] to get -style integer division (which drops the fractional part if the result is not whole number)use the floor division operator //in [ ] / out[ ] complex numbers are written using for the imaginary partin [ ]cval in [ ]cval ( jout[ ]( + jstrings many people use python for its powerful and flexible built-in string processing capabilities you can write string literal using either single quotes or double quotes " 'one way of writing stringb "another wayfor multiline strings with line breaksyou can use triple quoteseither ''or """ ""this is longer string that spans multiple lines ""python strings are immutableyou cannot modify string without creating new string appendixpython language essentials www it-ebooks info |
12,158 | in [ ] [ 'ftypeerror traceback (most recent call lastin (---- [ 'ftypeerror'strobject does not support item assignment in [ ] replace('string''longer string'in [ ] out[ ]'this is longer stringmany python objects can be converted to string using the str functionin [ ] in [ ] str(ain [ ] out[ ]' strings are sequence of characters and therefore can be treated like other sequencessuch as lists and tuplesin [ ] 'pythonin [ ]list(sout[ ][' '' '' '' '' '' 'in [ ] [: out[ ]'pytthe backslash character is an escape charactermeaning that it is used to specify special characters like newline \ or unicode characters to write string literal with backslashesyou need to escape themin [ ] ' \\ in [ ]print \ if you have string with lot of backslashes and no special charactersyou might find this bit annoying fortunately you can preface the leading quote of the string with which means that the characters should be interpreted as isin [ ] 'this\has\no\special\charactersin [ ] out[ ]'this\\has\\no\\special\\charactersadding two strings together concatenates them and produces new stringin [ ] 'this is the first half in [ ] 'and this is the second halfin [ ] out[ ]'this is the first half and this is the second halfthe basics www it-ebooks info |
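Because + produces a brand-new string each time, building a long string from many pieces is usually done with str.join instead. This is a general Python idiom rather than something from this page; a sketch:

```python
pieces = ['this', 'is', 'the', 'first', 'half']

# join concatenates the elements with the separator in one pass
sentence = ' '.join(pieces)

# the + version works too, but re-creates the string on every step
built = ''
for i, word in enumerate(pieces):
    built += (' ' if i else '') + word
```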
12,159 | String templating or formatting is another important topic; the number of ways to do so has expanded with the advent of Python 3. Here I will briefly describe the mechanics of one of the main interfaces. Strings with a % followed by one or more format characters are a target for inserting a value into that string (this is quite similar to the printf function in C). As an example, consider this string:

In [ ]: template = '%.2f %s are worth $%d'

In this string, %s means to format an argument as a string, %.2f a number with 2 decimal places, and %d an integer. To substitute arguments for these format parameters, use the binary % operator with a tuple of values:

In [ ]: template % (4.5560, 'Argentine Pesos', 1)
Out[ ]: '4.56 Argentine Pesos are worth $1'

String formatting is a broad topic; there are multiple methods and numerous options and tweaks available to control how values are formatted in the resulting string. To learn more, I recommend you seek out more information on the web. I discuss general string processing as it relates to data analysis in more detail in a later chapter.

Booleans

The two boolean values in Python are written as True and False. Comparisons and other conditional expressions evaluate to either True or False. Boolean values are combined with the and and or keywords:

In [ ]: True and True
Out[ ]: True
In [ ]: False or True
Out[ ]: True

Almost all built-in Python types, and any class defining the __nonzero__ magic method, have a True or False interpretation in an if statement:

In [ ]: a = [1, 2, 3]
   ...: if a:
   ...:     print 'I found something!'
I found something!

In [ ]: b = []
   ...: if not b:
   ...:     print 'Empty!'
Empty!

Most objects in Python have a notion of true- or falseness. For example, empty sequences (lists, dicts, tuples, etc.) are treated as False if used in control flow (as above with the empty list b). You can see exactly what boolean value an object coerces to by invoking bool on it:

In [ ]: bool([]), bool([1, 2, 3])
12,160 | out[ ](falsetruein [ ]bool('hello world!')bool(''out[ ](truefalsein [ ]bool( )bool( out[ ](falsetruetype casting the strboolint and float types are also functions which can be used to cast values to those typesin [ ] ' in [ ]fval float(sin [ ]int(fvalout[ ] in [ ]type(fvalout[ ]float in [ ]bool(fvalout[ ]true in [ ]bool( out[ ]false none none is the python null value type if function does not explicitly return valueit implicitly returns none in [ ] none in [ ] is none out[ ]true in [ ] in [ ] is not none out[ ]true none is also common default value for optional function argumentsdef add_and_maybe_multiply(abc=none)result if is not noneresult result return result while technical pointit' worth bearing in mind that none is not reserved keyword but rather unique instance of nonetype dates and times the built-in python datetime module provides datetimedateand time types the datetime type as you may imagine combines the information stored in date and time and is the most commonly usedin [ ]from datetime import datetimedatetime in [ ]dt datetime( the basics www it-ebooks info |
12,161 | out[ ] in [ ]dt minute out[ ] given datetime instanceyou can extract the equivalent date and time objects by calling methods on the datetime of the same namein [ ]dt date(out[ ]datetime date( in [ ]dt time(out[ ]datetime time( the strftime method formats datetime as stringin [ ]dt strftime('% /% /% % :% 'out[ ] : strings can be converted (parsedinto datetime objects using the strptime functionin [ ]datetime strptime(' ''% % % 'out[ ]datetime datetime( see table - for full list of format specifications when aggregating of otherwise grouping time series datait will occasionally be useful to replace fields of series of datetimesfor example replacing the minute and second fields with zeroproducing new objectin [ ]dt replace(minute= second= out[ ]datetime datetime( the difference of two datetime objects produces datetime timedelta typein [ ]dt datetime( in [ ]delta dt dt in [ ]delta out[ ]datetime timedelta( in [ ]type(deltaout[ ]datetime timedelta adding timedelta to datetime produces new shifted datetimein [ ]dt out[ ]datetime datetime( in [ ]dt delta out[ ]datetime datetime( control flow ifelifand else the if statement is one of the most well-known control flow statement types it checks condition whichif trueevaluates the code in the block that followsif print 'it' negative appendixpython language essentials www it-ebooks info |
12,162 | It can be optionally followed by one or more elif blocks and a catch-all else block if all of the conditions are False:

if x < 0:
    print "It's negative"
elif x == 0:
    print 'Equal to zero'
elif 0 < x < 5:
    print 'Positive but smaller than 5'
else:
    print 'Positive and larger than or equal to 5'

If any of the conditions is True, no further elif or else blocks will be reached. With a compound condition using and or or, conditions are evaluated left-to-right and will short-circuit:

In [ ]: a = 5; b = 7
In [ ]: c = 8; d = 4
In [ ]: if a < b or c > d:
   ...:     print 'Made it'
Made it

In this example, the comparison c > d never gets evaluated because the first comparison was True.

for loops

for loops are for iterating over a collection (like a list or tuple) or an iterator. The standard syntax for a for loop is:

for value in collection:
    # do something with value

A for loop can be advanced to the next iteration, skipping the remainder of the block, using the continue keyword. Consider this code, which sums up integers in a list and skips None values:

sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
    if value is None:
        continue
    total += value

A for loop can be exited altogether using the break keyword. This code sums elements of the list until a 5 is reached:

sequence = [1, 2, 0, 4, 6, 5, 2, 1]
total_until_5 = 0
for value in sequence:
    if value == 5:
        break
    total_until_5 += value
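When the items of a sequence are themselves tuples, a for loop can unpack each one directly into named variables. A small sketch with illustrative values:

```python
pairs = [(1, 2), (3, 4), (5, 6)]
total_first = 0
total_second = 0
for a, b in pairs:
    # each 2-tuple is unpacked into a and b on every iteration
    total_first += a
    total_second += b
```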
12,163 | (tuples or listssay)they can be conveniently unpacked into variables in the for loop statementfor abc in iteratordo something while loops while loop specifies condition and block of code that is to be executed until the condition evaluates to false or the loop is explicitly ended with breakx total while if total break total + / pass pass is the "no-opstatement in python it can be used in blocks where no action is to be takenit is only required because python uses whitespace to delimit blocksif print 'negative!elif = todoput something smart here pass elseprint 'positive!it' common to use pass as place-holder in code while working on new piece of functionalitydef (xyz)todoimplement this functionpass exception handling handling python errors or exceptions gracefully is an important part of building robust programs in data analysis applicationsmany functions only work on certain kinds of input as an examplepython' float function is capable of casting string to floating point numberbut fails with valueerror on improper inputsin [ ]float(' 'out[ ] in [ ]float('something'valueerror traceback (most recent call lastin ( appendixpython language essentials www it-ebooks info |
12,164 | valueerrorcould not convert string to floatsomething suppose we wanted version of float that fails gracefullyreturning the input argument we can do this by writing function that encloses the call to float in tryexcept blockdef attempt_float( )tryreturn float(xexceptreturn the code in the except part of the block will only be executed if float(xraises an exceptionin [ ]attempt_float(' 'out[ ] in [ ]attempt_float('something'out[ ]'somethingyou might notice that float can raise exceptions other than valueerrorin [ ]float(( )typeerror traceback (most recent call lastin (---- float(( )typeerrorfloat(argument must be string or number you might want to only suppress valueerrorsince typeerror (the input was not string or numeric valuemight indicate legitimate bug in your program to do thatwrite the exception type after exceptdef attempt_float( )tryreturn float(xexcept valueerrorreturn we have thenin [ ]attempt_float(( )typeerror traceback (most recent call lastin (---- attempt_float(( )in attempt_float( def attempt_float( ) try---- return float( except valueerror return typeerrorfloat(argument must be string or number the basics www it-ebooks info |
12,165 | (the parentheses are required)def attempt_float( )tryreturn float(xexcept (typeerrorvalueerror)return in some casesyou may not want to suppress an exceptionbut you want some code to be executed regardless of whether the code in the try block succeeds or not to do thisuse finallyf open(path' 'trywrite_to_file(ffinallyf close(herethe file handle will always get closed similarlyyou can have code that executes only if the tryblock succeeds using elsef open(path' 'trywrite_to_file(fexceptprint 'failedelseprint 'succeededfinallyf close(range and xrange the range function produces list of evenly-spaced integersin [ ]range( out[ ][ both startendand step can be givenin [ ]range( out[ ][ as you can seerange produces integers up to but not including the endpoint common use of range is for iterating through sequences by indexseq [ for in range(len(seq))val seq[ifor very long rangesit' recommended to use xrangewhich takes the same arguments as range but returns an iterator that generates integers one by one rather than generating appendixpython language essentials www it-ebooks info |
12,166 | all of the numbers up front and storing them in a (potentially very large) list. For example, this snippet sums all numbers from 0 to 9,999 that are multiples of 3 or 5:

sum = 0
for i in xrange(10000):
    # % is the modulo operator
    if i % 3 == 0 or i % 5 == 0:
        sum += i

In Python 3, range always returns an iterator, and thus it is not necessary to use the xrange function.

Ternary Expressions

A ternary expression in Python allows you to combine an if-else block which produces a value into a single line or expression. The syntax for this in Python is:

value = true-expr if condition else false-expr

Here, true-expr and false-expr can be any Python expressions. It has the identical effect as the more verbose:

if condition:
    value = true-expr
else:
    value = false-expr

This is a more concrete example:

In [ ]: x = 5
In [ ]: 'Non-negative' if x >= 0 else 'Negative'
Out[ ]: 'Non-negative'

As with if-else blocks, only one of the expressions will be evaluated. While it may be tempting to always use ternary expressions to condense your code, realize that you may sacrifice readability if the condition as well as the true and false expressions are very complex.

Data Structures and Sequences

Python's data structures are simple, but powerful. Mastering their use is a critical part of becoming a proficient Python programmer.
12,167 | tuple is one-dimensionalfixed-lengthimmutable sequence of python objects the easiest way to create one is with comma-separated sequence of valuesin [ ]tup in [ ]tup out[ ]( when defining tuples in more complicated expressionsit' often necessary to enclose the values in parenthesesas in this example of creating tuple of tuplesin [ ]nested_tup ( )( in [ ]nested_tup out[ ](( )( )any sequence or iterator can be converted to tuple by invoking tuplein [ ]tuple([ ]out[ ]( in [ ]tup tuple('string'in [ ]tup out[ ](' '' '' '' '' '' 'elements can be accessed with square brackets [as with most other sequence types like cc++javaand many other languagessequences are -indexed in pythonin [ ]tup[ out[ ]'swhile the objects stored in tuple may be mutable themselvesonce created it' not possible to modify which object is stored in each slotin [ ]tup tuple(['foo'[ ]true]in [ ]tup[ false typeerror traceback (most recent call lastin (---- tup[ false typeerror'tupleobject does not support item assignment however in [ ]tup[ append( in [ ]tup out[ ]('foo'[ ]truetuples can be concatenated using the operator to produce longer tuplesin [ ]( none'foo'( ('bar',out[ ]( none'foo' 'bar' appendixpython language essentials www it-ebooks info |
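A point worth sketching here (illustrative values, not from the book): repeating a tuple with * repeats references to its elements, not copies of them, so a mutable element is shared across all the repeated slots.

```python
row = [1, 2]
grid = (row,) * 2    # two slots, but both reference the same list
row.append(3)        # mutating row is therefore visible in both

first, second = grid
```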
12,168 | that many copies of the tuple in [ ]('foo''bar' out[ ]('foo''bar''foo''bar''foo''bar''foo''bar'note that the objects themselves are not copiedonly the references to them unpacking tuples if you try to assign to tuple-like expression of variablespython will attempt to unpack the value on the right-hand side of the equals signin [ ]tup ( in [ ]abc tup in [ ] out[ ] even sequences with nested tuples can be unpackedin [ ]tup ( in [ ]ab(cdtup in [ ] out[ ] using this functionality it' easy to swap variable namesa task which in many languages might look liketmp tmp ba ab one of the most common uses of variable unpacking when iterating over sequences of tuples or listsseq [( )( )( )for abc in seqpass another common use is for returning multiple values from function more on this later tuple methods since the size and contents of tuple cannot be modifiedit is very light on instance methods one particularly useful one (also available on listsis countwhich counts the number of occurrences of valuein [ ] ( data structures and sequences www it-ebooks info |
12,169 | Out[ ]: 4

List

In contrast with tuples, lists are variable-length and their contents can be modified. They can be defined using square brackets [] or using the list type function:

In [ ]: a_list = [2, 3, 7, None]
In [ ]: tup = ('foo', 'bar', 'baz')
In [ ]: b_list = list(tup)
In [ ]: b_list
Out[ ]: ['foo', 'bar', 'baz']
In [ ]: b_list[1] = 'peekaboo'
In [ ]: b_list
Out[ ]: ['foo', 'peekaboo', 'baz']

Lists and tuples are semantically similar as one-dimensional sequences of objects and thus can be used interchangeably in many functions.

Adding and removing elements

Elements can be appended to the end of the list with the append method:

In [ ]: b_list.append('dwarf')
In [ ]: b_list
Out[ ]: ['foo', 'peekaboo', 'baz', 'dwarf']

Using insert you can insert an element at a specific location in the list:

In [ ]: b_list.insert(1, 'red')
In [ ]: b_list
Out[ ]: ['foo', 'red', 'peekaboo', 'baz', 'dwarf']

insert is computationally expensive compared with append, as references to subsequent elements have to be shifted internally to make room for the new element. The inverse operation to insert is pop, which removes and returns an element at a particular index:

In [ ]: b_list.pop(2)
Out[ ]: 'peekaboo'
In [ ]: b_list
Out[ ]: ['foo', 'red', 'baz', 'dwarf']

Elements can be removed by value using remove, which locates the first such value and removes it from the list:
12,170 | In [ ]: b_list.append('foo')
In [ ]: b_list.remove('foo')
In [ ]: b_list
Out[ ]: ['red', 'baz', 'dwarf', 'foo']

If performance is not a concern, by using append and remove a Python list can be used as a perfectly suitable "multi-set" data structure. You can check if a list contains a value using the in keyword:

In [ ]: 'dwarf' in b_list
Out[ ]: True

Note that checking whether a list contains a value is a lot slower than doing so with dicts and sets, as Python makes a linear scan across the values of the list, whereas the others (based on hash tables) can make the check in constant time.

Concatenating and combining lists

Similar to tuples, adding two lists together with + concatenates them:

In [ ]: [4, None, 'foo'] + [7, 8, (2, 3)]
Out[ ]: [4, None, 'foo', 7, 8, (2, 3)]

If you have a list already defined, you can append multiple elements to it using the extend method:

In [ ]: x = [4, None, 'foo']
In [ ]: x.extend([7, 8, (2, 3)])
In [ ]: x
Out[ ]: [4, None, 'foo', 7, 8, (2, 3)]

Note that list concatenation is a comparatively expensive operation, since a new list must be created and the objects copied over. Using extend to append elements to an existing list, especially if you are building up a large list, is usually preferable. Thus,

everything = []
for chunk in list_of_lists:
    everything.extend(chunk)

is faster than the concatenative alternative:

everything = []
for chunk in list_of_lists:
    everything = everything + chunk

Sorting

A list can be sorted in-place (without creating a new object) by calling its sort function:

In [ ]: a = [7, 2, 5, 1, 3]
12,171 | in [ ] out[ ][ sort has few options that will occasionally come in handy one is the ability to pass secondary sort keyi function that produces value to use to sort the objects for examplewe could sort collection of strings by their lengthsin [ ] ['saw''small''he''foxes''six'in [ ] sort(key=lenin [ ] out[ ]['he''saw''six''small''foxes'binary search and maintaining sorted list the built-in bisect module implements binary-search and insertion into sorted list bisect bisect finds the location where an element should be inserted to keep it sortedwhile bisect insort actually inserts the element into that locationin [ ]import bisect in [ ] [ in [ ]bisect bisect( out[ ] in [ ]bisect bisect( out[ ] in [ ]bisect insort( in [ ] out[ ][ the bisect module functions do not check whether the list is sorted as doing so would be computationally expensive thususing them with an unsorted list will succeed without error but may lead to incorrect results slicing you can select sections of list-like types (arraystuplesnumpy arraysby using slice notationwhich in its basic form consists of start:stop passed to the indexing operator []in [ ]seq [ in [ ]seq[ : out[ ][ slices can also be assigned to with sequencein [ ]seq[ : [ appendixpython language essentials www it-ebooks info |
12,172 | Out[ ]: [7, 2, 3, 6, 3, 5, 6, 0, 1]

While the element at the start index is included, the stop index is not included, so that the number of elements in the result is stop - start. Either the start or stop can be omitted, in which case they default to the start of the sequence and the end of the sequence, respectively:

In [ ]: seq[:5]
Out[ ]: [7, 2, 3, 6, 3]
In [ ]: seq[3:]
Out[ ]: [6, 3, 5, 6, 0, 1]

Negative indices slice the sequence relative to the end:

In [ ]: seq[-4:]
Out[ ]: [5, 6, 0, 1]
In [ ]: seq[-6:-2]
Out[ ]: [6, 3, 5, 6]

Slicing semantics takes a bit of getting used to, especially if you're coming from R or MATLAB. See Figure A-2 for a helpful illustration of slicing with positive and negative integers. A step can also be used after a second colon to, say, take every other element:

In [ ]: seq[::2]
Out[ ]: [7, 3, 3, 6, 0]

A clever use of this is to pass -1, which has the useful effect of reversing a list or tuple:

In [ ]: seq[::-1]
Out[ ]: [1, 0, 6, 5, 3, 6, 3, 2, 7]

Figure A-2. Illustration of Python slicing conventions

Built-in Sequence Functions

Python has a handful of useful sequence functions that you should familiarize yourself with and use at any opportunity.
12,173 | enumerate

It's common when iterating over a sequence to want to keep track of the index of the current item. A do-it-yourself approach would look like:

i = 0
for value in collection:
    # do something with value
    i += 1

Since this is so common, Python has a built-in function, enumerate, which returns a sequence of (i, value) tuples:

for i, value in enumerate(collection):
    # do something with value

When indexing data, a useful pattern that uses enumerate is computing a dict mapping the values of a sequence (which are assumed to be unique) to their locations in the sequence:

In [ ]: some_list = ['foo', 'bar', 'baz']
In [ ]: mapping = dict((v, i) for i, v in enumerate(some_list))
In [ ]: mapping
Out[ ]: {'bar': 1, 'baz': 2, 'foo': 0}

sorted

The sorted function returns a new sorted list from the elements of any sequence:

In [ ]: sorted([7, 1, 2, 6, 0, 3, 2])
Out[ ]: [0, 1, 2, 2, 3, 6, 7]
In [ ]: sorted('horse race')
Out[ ]: [' ', 'a', 'c', 'e', 'e', 'h', 'o', 'r', 'r', 's']

A common pattern for getting a sorted list of the unique elements in a sequence is to combine sorted with set:

In [ ]: sorted(set('this is just some string'))
Out[ ]: [' ', 'e', 'g', 'h', 'i', 'j', 'm', 'n', 'o', 'r', 's', 't', 'u']

zip

zip "pairs" up the elements of a number of lists, tuples, or other sequences to create a list of tuples:

In [ ]: seq1 = ['foo', 'bar', 'baz']
In [ ]: seq2 = ['one', 'two', 'three']
In [ ]: zip(seq1, seq2)
Out[ ]: [('foo', 'one'), ('bar', 'two'), ('baz', 'three')]

zip can take an arbitrary number of sequences, and the number of elements it produces
12,174 | is determined by the shortest sequencein [ ]seq [falsetruein [ ]zip(seq seq seq out[ ][('foo''one'false)('bar''two'true) very common use of zip is for simultaneously iterating over multiple sequencespossibly also combined with enumeratein [ ]for (abin enumerate(zip(seq seq ))print('% % % (iab) fooone bartwo bazthree given "zippedsequencezip can be applied in clever way to "unzipthe sequence another way to think about this is converting list of rows into list of columns the syntaxwhich looks bit magicalisin [ ]pitchers [('nolan''ryan')('roger''clemens')('schilling''curt')in [ ]first_nameslast_names zip(*pitchersin [ ]first_names out[ ]('nolan''roger''schilling'in [ ]last_names out[ ]('ryan''clemens''curt'we'll look in more detail at the use of in function call it is equivalent to the followingzip(seq[ ]seq[ ]seq[len(seq ]reversed reversed iterates over the elements of sequence in reverse orderin [ ]list(reversed(range( ))out[ ][ dict dict is likely the most important built-in python data structure more common name for it is hash map or associative array it is flexibly-sized collection of key-value pairswhere key and value are python objects one way to create one is by using curly braces {and using colons to separate keys and valuesin [ ]empty_dict {in [ ] {' 'some value'' [ ]data structures and sequences www it-ebooks info |
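One practical note to sketch here: reversed yields an iterator, so it must be materialized (with list, or by looping) if you need a sequence, and like any iterator it can only be consumed once.

```python
r = reversed(range(5))
as_list = list(r)    # materialize: [4, 3, 2, 1, 0]

# the iterator is now exhausted; a second pass yields nothing
empty = list(r)
```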
12,175 | out[ ]{' ''some value'' '[ ]elements can be accessed and inserted or set using the same syntax as accessing elements of list or tuplein [ ] [ 'an integerin [ ] out[ ]{ 'an integer'' ''some value'' '[ ]in [ ] [' 'out[ ][ you can check if dict contains key using the same syntax as with checking whether list or tuple contains valuein [ ]'bin out[ ]true values can be deleted either using the del keyword or the pop method (which simultaneously returns the value and deletes the key)in [ ] [ 'some valuein [ ] ['dummy''another valuein [ ]del [ in [ ]ret pop('dummy'in [ ]ret out[ ]'another valuethe keys and values method give you lists of the keys and valuesrespectively while the key-value pairs are not in any particular orderthese functions output the keys and values in the same orderin [ ] keys(out[ ][' '' ' in [ ] values(out[ ]['some value'[ ]'an integer'if you're using python dict keys(and dict values(are iterators instead of lists one dict can be merged into another using the update methodin [ ] update({' 'foo'' }in [ ] out[ ]{ 'an integer'' ''some value'' ''foo'' ' appendixpython language essentials www it-ebooks info |
12,176 | it' common to occasionally end up with two sequences that you want to pair up element-wise in dict as first cutyou might write code like thismapping {for keyvalue in zip(key_listvalue_list)mapping[keyvalue since dict is essentially collection of -tuplesit should be no shock that the dict type function accepts list of -tuplesin [ ]mapping dict(zip(range( )reversed(range( )))in [ ]mapping out[ ]{ in later section we'll talk about dict comprehensionsanother elegant way to construct dicts default values it' very common to have logic likeif key in some_dictvalue some_dict[keyelsevalue default_value thusthe dict methods get and pop can take default value to be returnedso that the above if-else block can be written simply asvalue some_dict get(keydefault_valueget by default will return none if the key is not presentwhile pop will raise an exception with setting valuesa common case is for the values in dict to be other collectionslike lists for exampleyou could imagine categorizing list of words by their first letters as dict of listsin [ ]words ['apple''bat''bar''atom''book'in [ ]by_letter {in [ ]for word in wordsletter word[ if letter not in by_letterby_letter[letter[wordelseby_letter[letterappend(wordin [ ]by_letter out[ ]{' '['apple''atom']' '['bat''bar''book']the setdefault dict method is for precisely this purpose the if-else block above can be rewritten asdata structures and sequences www it-ebooks info |
12,177 | the built-in collections module has useful classdefaultdictwhich makes this even easier one is created by passing type or function for generating the default value for each slot in the dictfrom collections import defaultdict by_letter defaultdict(listfor word in wordsby_letter[word[ ]append(wordthe initializer to defaultdict only needs to be callable object ( any function)not necessarily type thusif you wanted the default value to be you could pass function returning counts defaultdict(lambda valid dict key types while the values of dict can be any python objectthe keys have to be immutable objects like scalar types (intfloatstringor tuples (all the objects in the tuple need to be immutabletoothe technical term here is hashability you can check whether an object is hashable (can be used as key in dictwith the hash functionin [ ]hash('string'out[ ]- in [ ]hash(( ( ))out[ ] in [ ]hash(( [ ])fails because lists are mutable typeerror traceback (most recent call lastin (---- hash(( [ ])fails because lists are mutable typeerrorunhashable type'listto use list as keyan easy fix is to convert it to tuplein [ ] {in [ ] [tuple([ ]) in [ ] out[ ]{( ) set set is an unordered collection of unique elements you can think of them like dictsbut keys onlyno values set can be created in two waysvia the set function or using set literal with curly bracesin [ ]set([ ]out[ ]set([ ] appendixpython language essentials www it-ebooks info |
12,178 | out[ ]set([ ]sets support mathematical set operations like unionintersectiondifferenceand symmetric difference see table - for list of commonly used set methods in [ ] { in [ ] { in [ ] union (orout[ ]set([ ]in [ ] intersection (andout[ ]set([ ]in [ ] difference out[ ]set([ ]in [ ] symmetric difference (xorout[ ]set([ ]you can also check if set is subset of (is contained inor superset of (contains all elements ofanother setin [ ]a_set { in [ ]{ issubset(a_setout[ ]true in [ ]a_set issuperset({ }out[ ]true as you might guesssets are equal if their contents are equalin [ ]{ ={ out[ ]true table - python set operations function alternate syntax description add(xn/ add element to the set remove(xn/ remove element from the set union(ba all of the unique elements in and intersection(ba all of the elements in both and difference(ba the elements in that are not in symmetric_difference(ba all of the elements in or but not both issubset(bn/ true if the elements of are all contained in issuperset(bn/ true if the elements of are all contained in isdisjoint(bn/ true if and have no elements in common data structures and sequences www it-ebooks info |
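The set operations in the table above also have operator spellings, shown here side by side with the method forms:

```python
a = {1, 2, 3, 4, 5}
b = {3, 4, 5, 6, 7, 8}

union = a | b             # same as a.union(b)
intersection = a & b      # same as a.intersection(b)
difference = a - b        # elements of a that are not in b
sym_diff = a ^ b          # in a or b, but not both

subset_ok = {1, 2, 3}.issubset(a)
superset_ok = a.issuperset({2, 3})
```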
12,179 | list comprehensions are one of the most-loved python language features they allow you to concisely form new list by filtering the elements of collection and transforming the elements passing the filter in one conscise expression they take the basic form[expr for val in collection if conditionthis is equivalent to the following for loopresult [for val in collectionif conditionresult append(exprthe filter condition can be omittedleaving only the expression for examplegiven list of stringswe could filter out strings with length or less and also convert them to uppercase like thisin [ ]strings [' ''as''bat''car''dove''python'in [ ][ upper(for in strings if len( out[ ]['bat''car''dove''python'set and dict comprehensions are natural extensionproducing sets and dicts in idiomatically similar way instead of lists dict comprehension looks like thisdict_comp {key-expr value-expr for value in collection if conditiona set comprehension looks like the equivalent list comprehension except with curly braces instead of square bracketsset_comp {expr for value in collection if conditionlike list comprehensionsset and dict comprehensions are just syntactic sugarbut they similarly can make code both easier to write and read consider the list of strings above suppose we wanted set containing just the lengths of the strings contained in the collectionthis could be easily computed using set comprehensionin [ ]unique_lengths {len(xfor in stringsin [ ]unique_lengths out[ ]set([ ]as simple dict comprehension examplewe could create lookup map of these strings to their locations in the listin [ ]loc_mapping {val index for indexval in enumerate(strings)in [ ]loc_mapping out[ ]{' ' 'as' 'bat' 'car' 'dove' 'python' note that this dict could be equivalently constructed byloc_mapping dict((validxfor idxval in enumerate(strings) appendixpython language essentials www it-ebooks info |
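The three comprehension forms discussed above, applied to the same strings list, look like this when run end-to-end:

```python
strings = ['a', 'as', 'bat', 'car', 'dove', 'python']

# list comprehension: filter short strings, then transform the survivors
upper_long = [x.upper() for x in strings if len(x) > 2]

# set comprehension: the unique string lengths
unique_lengths = {len(x) for x in strings}

# dict comprehension: a value -> position lookup map
loc_mapping = {val: index for index, val in enumerate(strings)}
```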
12,180 | dict and set comprehensions were added to python fairly recently in python and python nested list comprehensions suppose we have list of lists containing some boy and girl namesin [ ]all_data [['tom''billy''jefferson''andrew''wesley''steven''joe']['susie''casey''jill''ana''eva''jennifer''stephanie']you might have gotten these names from couple of files and decided to keep the boy and girl names separate nowsuppose we wanted to get single list containing all names with two or more ' in them we could certainly do this with simple for loopnames_of_interest [for names in all_dataenough_es [name for name in names if name count(' ' names_of_interest extend(enough_esyou can actually wrap this whole operation up in single nested list comprehensionwhich will look likein [ ]result [name for names in all_data for name in names if name count(' '> in [ ]result out[ ]['jefferson''wesley''steven''jennifer''stephanie'at firstnested list comprehensions are bit hard to wrap your head around the for parts of the list comprehension are arranged according to the order of nestingand any filter condition is put at the end as before here is another example where we "flattena list of tuples of integers into simple list of integersin [ ]some_tuples [( )( )( )in [ ]flattened [ for tup in some_tuples for in tupin [ ]flattened out[ ][ keep in mind that the order of the for expressions would be the same if you wrote nested for loop instead of list comprehensionflattened [for tup in some_tuplesfor in tupflattened append(xdata structures and sequences www it-ebooks info |
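Both nested-comprehension examples above can be run as one self-contained sketch:

```python
all_data = [['Tom', 'Billy', 'Jefferson', 'Andrew', 'Wesley', 'Steven', 'Joe'],
            ['Susie', 'Casey', 'Jill', 'Ana', 'Eva', 'Jennifer', 'Stephanie']]

# names containing two or more 'e' characters, in one nested comprehension
result = [name for names in all_data for name in names
          if name.count('e') >= 2]

# flattening a list of tuples: for clauses appear in nesting order
some_tuples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
flattened = [x for tup in some_tuples for x in tup]
```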
12,181 | three levels of nesting you should probably start to question your data structure design it' important to distinguish the above syntax from list comprehension inside list comprehensionwhich is also perfectly validin [ ][[ for in tupfor tup in some_tuplesfunctions functions are the primary and most important method of code organization and reuse in python there may not be such thing as having too many functions in facti would argue that most programmers doing data analysis don' write enough functionsas you have likely inferred from prior examplesfunctions are declared using the def keyword and returned from using the return keyworddef my_function(xyz= )if return ( yelsereturn ( ythere is no issue with having multiple return statements if the end of function is reached without encountering return statementnone is returned each function can have some number of positional arguments and some number of keyword arguments keyword arguments are most commonly used to specify default values or optional arguments in the above functionx and are positional arguments while is keyword argument this means that it can be called in either of these equivalent waysmy_function( = my_function( the main restriction on function arguments it that the keyword arguments must follow the positional arguments (if anyyou can specify keyword arguments in any orderthis frees you from having to remember which order the function arguments were specified in and only what their names are namespacesscopeand local functions functions can access variables in two different scopesglobal and local an alternate and more descriptive name describing variable scope in python is namespace any variables that are assigned within function by default are assigned to the local namespace the local namespace is created when the function is called and immediately populated by the function' arguments after the function is finishedthe local namespace is destroyed (with some exceptionssee section on closures 
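The positional/keyword-argument rules above can be exercised with the my_function example from the text:

```python
def my_function(x, y, z=1.5):
    # x and y are positional; z is a keyword argument with a default
    if z > 1:
        return z * (x + y)
    else:
        return z / (x + y)

a = my_function(5, 6, z=0.7)      # keyword form
b = my_function(3.14, 7, 3.5)     # z passed positionally
c = my_function(10, 20)           # z falls back to its default, 1.5
```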
below). Consider the following function:
12,182 | [for in range( ) append(iupon calling func()the empty list is created elements are appendedthen is destroyed when the function exits suppose instead we had declared [def func()for in range( ) append(iassigning global variables within function is possiblebut those variables must be declared as global using the global keywordin [ ] none in [ ]def bind_a_variable()global [bind_a_variable(in [ ]print [ generally discourage people from using the global keyword frequently typically global variables are used to store some kind of state in system if you find yourself using lot of themit' probably sign that some object-oriented programming (using classesis in order functions can be declared anywhereand there is no problem with having local functions that are dynamically created when function is calleddef outer_function(xyz)def inner_function(abc)pass pass in the above codethe inner_function will not exist until outer_function is called as soon as outer_function is done executingthe inner_function is destroyed nested inner functions can access the local namespace of the enclosing functionbut they cannot bind new variables in it 'll talk bit more about this in the section on closures in strict senseall functions are local to some scopethat scope may just be the module level scope functions www it-ebooks info |
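The local-namespace and global-rebinding behavior described above, in a runnable form:

```python
def func():
    a = []                 # local to func; discarded when the function exits
    for i in range(5):
        a.append(i)
    return a

local_result = func()

a = None

def bind_a_variable():
    global a               # rebinding a module-level name requires `global`
    a = []

bind_a_variable()
```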
12,183 | when first programmed in python after having programmed in java and ++one of my favorite features was the ability to return multiple values from function here' simple exampledef () return abc abc (in data analysis and other scientific applicationsyou will likely find yourself doing this very often as many functions may have multiple outputswhether those are data structures or other auxiliary data computed inside the function if you think about tuple packing and unpacking from earlier in this you may realize that what' happening here is that the function is actually just returning one objectnamely tuplewhich is then being unpacked into the result variables in the above examplewe could have done insteadreturn_value (in this casereturn_value would beas you may guessa -tuple with the three returned variables potentially attractive alternative to returning multiple values like above might be to return dict insteaddef () return {'aa'bb'ccfunctions are objects since python functions are objectsmany constructs can be easily expressed that are difficult to do in other languages suppose we were doing some data cleaning and needed to apply bunch of transformations to the following list of stringsstates [alabama ''georgia!''georgia''georgia''florida''south carolina##''west virginia?'anyone who has ever worked with user-submitted survey data can expect messy results like these lots of things need to happen to make this list of strings uniform and ready for analysiswhitespace strippingremoving punctuation symbolsand proper capitalization as first passwe might write some code likeimport re regular expression module def clean_strings(strings)result [ appendixpython language essentials www it-ebooks info |
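The multiple-return-value mechanics above are really tuple packing and unpacking, as this sketch shows:

```python
def f():
    a = 5
    b = 6
    c = 7
    return a, b, c             # actually returns a single 3-tuple

x, y, z = f()                  # unpack into three variables
return_value = f()             # or keep the tuple intact

def f_dict():
    return {'a': 5, 'b': 6, 'c': 7}   # the dict-returning alternative
```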
12,184 | value value strip(value re sub('[!#?]'''valueremove punctuation value value title(result append(valuereturn result the result looks like thisin [ ]clean_strings(statesout[ ]['alabama''georgia''georgia''georgia''florida''south carolina''west virginia'an alternate approach that you may find useful is to make list of the operations you want to apply to particular set of stringsdef remove_punctuation(value)return re sub('[!#?]'''valueclean_ops [str stripremove_punctuationstr titledef clean_strings(stringsops)result [for value in stringsfor function in opsvalue function(valueresult append(valuereturn result then we have in [ ]clean_strings(statesclean_opsout[ ]['alabama''georgia''georgia''georgia''florida''south carolina''west virginia' more functional pattern like this enables you to easily modify how the strings are transformed at very high level the clean_strings function is also now more reusableyou can naturally use functions as arguments to other functions like the built-in map functionwhich applies function to collection of some kindin [ ]map(remove_punctuationstatesout[ ][alabama ''georgia'functions www it-ebooks info |
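The operations-as-a-list pattern above, assembled into one runnable block (the messy states list mirrors the text):

```python
import re

states = ['   Alabama ', 'Georgia!', 'Georgia', 'georgia', 'FlOrIda',
          'south carolina##', 'West virginia?']

def remove_punctuation(value):
    return re.sub('[!#?]', '', value)

# a pipeline of functions applied in order to each string
clean_ops = [str.strip, remove_punctuation, str.title]

def clean_strings(strings, ops):
    result = []
    for value in strings:
        for function in ops:
            value = function(value)
        result.append(value)
    return result

cleaned = clean_strings(states, clean_ops)
```

Swapping, adding, or removing steps only requires editing clean_ops, which is the point of the functional style.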
12,185 | 'georgia', 'FlOrIda', 'south   carolina', 'West virginia']

Anonymous (lambda) Functions

Python has support for so-called anonymous or lambda functions, which are really just simple functions consisting of a single statement, the result of which is the return value. They are defined using the lambda keyword, which has no meaning other than "we are declaring an anonymous function":

    def short_function(x):
        return x * 2

    equiv_anon = lambda x: x * 2

I usually refer to these as lambda functions in the rest of the book. They are especially convenient in data analysis because, as you'll see, there are many cases where data transformation functions will take functions as arguments. It's often less typing (and clearer) to pass a lambda function as opposed to writing a full-out function declaration or even assigning the lambda function to a local variable. For example, consider this silly example:

    def apply_to_list(some_list, f):
        return [f(x) for x in some_list]

    ints = [4, 0, 1, 5, 6]
    apply_to_list(ints, lambda x: x * 2)

You could also have written [x * 2 for x in ints], but here we were able to succinctly pass a custom operator to the apply_to_list function. As another example, suppose you wanted to sort a collection of strings by the number of distinct letters in each string:

    In [ ]: strings = ['foo', 'card', 'bar', 'aaaa', 'abab']

Here we could pass a lambda function to the list's sort method:

    In [ ]: strings.sort(key=lambda x: len(set(list(x))))

    In [ ]: strings
    Out[ ]: ['aaaa', 'foo', 'abab', 'bar', 'card']

One reason lambda functions are called anonymous functions is that the function object itself is never given a name attribute.
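The two lambda examples above, run end-to-end as a single sketch:

```python
def apply_to_list(some_list, f):
    return [f(x) for x in some_list]

ints = [4, 0, 1, 5, 6]
doubled = apply_to_list(ints, lambda x: x * 2)

# sort strings by the number of distinct letters (sort is stable)
strings = ['foo', 'card', 'bar', 'aaaa', 'abab']
strings.sort(key=lambda x: len(set(x)))
```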
12,186 | closures are nothing to fear they can actually be very useful and powerful tool in the right circumstancein nutshella closure is any dynamically-generated function returned by another function the key property is that the returned function has access to the variables in the local namespace where it was created here is very simple exampledef make_closure( )def closure()print(' know the secret%dareturn closure closure make_closure( the difference between closure and regular python function is that the closure continues to have access to the namespace (the functionwhere it was createdeven though that function is done executing so in the above casethe returned closure will always print know the secret whenever you call it while it' common to create closures whose internal state (in this exampleonly the value of ais staticyou can just as easily have mutable object like dictsetor list that can be modified for examplehere' function that returns function that keeps track of arguments it has been called withdef make_watcher()have_seen {def has_been_seen( )if in have_seenreturn true elsehave_seen[xtrue return false return has_been_seen using this on sequence of integers obtainin [ ]watcher make_watcher(in [ ]vals [ in [ ][watcher(xfor in valsout[ ][falsefalsefalsetruetruetruefalsetruehoweverone technical limitation to keep in mind is that while you can mutate any internal state objects (like adding key-value pairs to dict)you cannot bind variables in the enclosing function scope one way to work around this is to modify dict or list rather than binding variablesdef make_counter()count [ def counter()functions www it-ebooks info |
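The watcher closure above, together with the mutate-don't-rebind workaround, in one runnable block:

```python
def make_watcher():
    have_seen = {}                 # captured state, mutated across calls
    def has_been_seen(x):
        if x in have_seen:
            return True
        else:
            have_seen[x] = True
            return False
    return has_been_seen

watcher = make_watcher()
vals = [5, 6, 1, 5, 1, 6, 3, 5]
seen_flags = [watcher(x) for x in vals]

def make_counter():
    count = [0]                    # a one-element list as a mutable cell
    def counter():
        count[0] += 1
        return count[0]
    return counter

counter = make_counter()
first_two = [counter(), counter()]
```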
12,187 | count[ + return count[ return counter counter make_counter(you might be wondering why this is useful in practiceyou can write very general functions with lots of optionsthen fabricate simplermore specialized functions here' an example of creating string formatting functiondef format_and_pad(templatespace)def formatter( )return (template xrjust(spacereturn formatter you could then create floating point formatter that always returns length- string like soin [ ]fmt format_and_pad(' ' in [ ]fmt( out[ ] if you learn more about object-oriented programming in pythonyou might observe that these patterns also could be implemented (albeit more verboselyusing classes extended call syntax with *args**kwargs the way that function arguments work under the hood in python is actually very simple when you write func(abcd=somee=value)the positional and keyword arguments are actually packed up into tuple and dictrespectively so the internal function receives tuple args and dict kwargs and internally does the equivalent ofabc args kwargs get(' 'd_default_valuee kwargs get(' 'e_default_valuethis all happens nicely behind the scenes of courseit also does some error checking and allows you to specify some of the positional arguments as keywords also (even if they aren' keyword in the function declaration!def say_hello_then_call_f( *args**kwargs)print 'args is'args print 'kwargs is'kwargs print("hellonow ' going to call %sfreturn (*args**kwargsdef (xyz= )return ( yz then if we call with say_hello_then_call_f we get appendixpython language essentials www it-ebooks info |
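A runnable version of the *args/**kwargs example above (the print statements are dropped here so the packing itself can be inspected):

```python
def f(x, y, z=1):
    return (x + y) / z

def say_hello_then_call_f(f, *args, **kwargs):
    # positional arguments arrive packed in a tuple, keywords in a dict
    packed = (args, kwargs)
    return f(*args, **kwargs), packed

result, packed = say_hello_then_call_f(f, 1, 2, z=5.0)
```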
12,188 | args is ( kwargs is {' ' hellonow ' going to call out[ ] curryingpartial argument application currying is fun computer science term which means deriving new functions from existing ones by partial argument application for examplesuppose we had trivial function that adds two numbers togetherdef add_numbers(xy)return using this functionwe could derive new function of one variableadd_fivethat adds to its argumentadd_five lambda yadd_numbers( ythe second argument to add_numbers is said to be curried there' nothing very fancy here as we really only have defined new function that calls an existing function the built-in functools module can simplify this process using the partial functionfrom functools import partial add_five partial(add_numbers when discussing pandas and time series datawe'll use this technique to create specialized functions for transforming data series compute -day moving average of time series ma lambda xpandas rolling_mean( take the -day moving average of of all time series in data data apply(ma generators having consistent way to iterate over sequenceslike objects in list or lines in fileis an important python feature this is accomplished by means of the iterator protocola generic way to make objects iterable for exampleiterating over dict yields the dict keysin [ ]some_dict {' ' ' ' ' ' in [ ]for key in some_dictprint keya when you write for key in some_dictthe python interpreter first attempts to create an iterator out of some_dictin [ ]dict_iterator iter(some_dictfunctions www it-ebooks info |
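The two currying spellings above, manual lambda versus functools.partial, behave identically:

```python
from functools import partial

def add_numbers(x, y):
    return x + y

add_five_lambda = lambda y: add_numbers(5, y)   # currying by hand
add_five = partial(add_numbers, 5)              # same thing via functools
```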
12,189 | out[ ]any iterator is any object that will yield objects to the python interpreter when used in context like for loop most methods expecting list or list-like object will also accept any iterable object this includes built-in methods such as minmaxand sumand type constructors like list and tuplein [ ]list(dict_iteratorout[ ][' '' '' ' generator is simple way to construct new iterable object whereas normal functions execute and return single valuegenerators return sequence of values lazilypausing after each one until the next one is requested to create generatoruse the yield keyword instead of return in functiondef squares( = )for in xrange( )print 'generating squares from to % ( * yield * when you actually call the generatorno code is immediately executedin [ ]gen squares(in [ ]gen out[ ]it is not until you request elements from the generator that it begins executing its codein [ ]for in genprint xgenerating squares from to as less trivial examplesuppose we wished to find all unique ways to make change for $ ( centsusing an arbitrary set of coins you can probably think of various ways to implement this and how to store the unique combinations as you come up with them one way is to write generator that yields lists of coins (represented as integers)def make_change(amountcoins=[ ]hand=none)hand [if hand is none else hand if amount = yield hand for coin in coinsensures we don' give too much changeand combinations are unique if coin amount or (len(hand and hand[- coin)continue for result in make_change(amount coincoins=coinshand=hand [coin])yield result the details of the algorithm are not that important (can you think of shorter way?then we can write appendixpython language essentials www it-ebooks info |
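The lazy-evaluation behavior of the squares generator above can be observed directly (range is used here in place of Python 2's xrange):

```python
def squares(n=10):
    # no code in the body runs until a value is requested
    for i in range(1, n + 1):
        yield i ** 2

gen = squares()          # creating the generator executes nothing
first = next(gen)        # execution starts on the first request
rest = list(gen)         # list() exhausts the remainder
```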
12,190 | print way [ [ [ [ [ [ in [ ]len(list(make_change( ))out[ ] generator expresssions simple way to make generator is by using generator expression this is generator analogue to listdict and set comprehensionsto create oneenclose what would otherwise be list comprehension with parenthesis instead of bracketsin [ ]gen ( * for in xrange( )in [ ]gen out[ ]at this is completely equivalent to the following more verbose generatordef _make_gen()for in xrange( )yield * gen _make_gen(generator expressions can be used inside any python function that will accept generatorin [ ]sum( * for in xrange( )out[ ] in [ ]dict((ii ** for in xrange( )out[ ]{ itertools module the standard library itertools module has collection of generators for many common data algorithms for examplegroupby takes any sequence and functionthis groups consecutive elements in the sequence by return value of the function here' an examplein [ ]import itertools in [ ]first_letter lambda xx[ in [ ]names ['alan''adam''wes''will''albert''steven'in [ ]for letternames in itertools groupby(namesfirst_letter)print letterlist(namesnames is generator ['alan''adam'functions www it-ebooks info |
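The generator-expression and groupby examples above, combined into one runnable sketch (note that groupby only groups *consecutive* equal keys):

```python
import itertools

# generator expressions passed straight to functions expecting iterables
total = sum(x ** 2 for x in range(100))
squares_map = dict((i, i ** 2) for i in range(5))

first_letter = lambda x: x[0]
names = ['Alan', 'Adam', 'Wes', 'Will', 'Albert', 'Steven']
groups = [(letter, list(group))
          for letter, group in itertools.groupby(names, first_letter)]
```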
12,191 | ['albert' ['steven'see table - for list of few other itertools functions 've frequently found useful table - some useful itertools functions function description imap(func*iterablesgenerator version of the built-in mapapplies func to each zipped tuple of the passed sequences generator version of the built-in filteryields elements for which ifilter(funciterablefunc(xis true combinations(iterablekgenerates sequence of all possible -tuples of elements in the iterableignoring order permutations(iterablekgenerates sequence of all possible -tuples of elements in the iterablerespecting order groupby(iterable[keyfunc]generates (keysub-iteratorfor each unique key in python several built-in functions (zipmapfilterproducing lists have been replaced by their generator versions found in itertools in python files and the operating system most of this book uses high-level tools like pandas read_csv to read data files from disk into python data structures howeverit' important to understand the basics of how to work with files in python fortunatelyit' very simplewhich is part of why python is so popular for text and file munging to open file for reading or writinguse the built-in open function with either relative or absolute file pathin [ ]path 'ch /segismundo txtin [ ] open(pathby defaultthe file is opened in read-only mode 'rwe can then treat the file handle like list and iterate over the lines like so for line in fpass the lines come out of the file with the end-of-line (eolmarkers intactso you'll often see code to get an eol-free list of lines in file like in [ ]lines [ rstrip(for in open(path)in [ ]lines appendixpython language essentials www it-ebooks info |
12,192 | ['sue\xc3\xb1a el rico en su riqueza,',
 'que m\xc3\xa1s cuidados le ofrece;',
 '',
 'sue\xc3\xb1a el pobre que padece',
 'su miseria y su pobreza;',
 '',
 'sue\xc3\xb1a el que a medrar empieza,',
 'sue\xc3\xb1a el que afana y pretende,',
 'sue\xc3\xb1a el que agravia y ofende,',
 '',
 'y en el mundo, en conclusi\xc3\xb3n,',
 'todos sue\xc3\xb1an lo que son,',
 'aunque ninguno lo entiende.',
 '']

If we had typed f = open(path, 'w'), a new file at that path would have been created, overwriting any one in its place. See below for a list of all valid file read/write modes.

Table: Python file modes

    Mode  Description
    r     Read-only mode
    w     Write-only mode; creates a new file (deleting any file with the same name)
    a     Append to existing file (create it if it does not exist)
    r+    Read and write
    b     Add to mode for binary files, that is 'rb' or 'wb'
    U     Use universal newline mode. Pass by itself 'U' or appended to one of the read modes like 'rU'

To write text to a file, you can use either the file's write or writelines methods. For example, we could create a version of prof_mod.py with no blank lines like so:

    In [ ]: with open('tmp.txt', 'w') as handle:
                handle.writelines(x for x in open(path) if len(x) > 1)

    In [ ]: open('tmp.txt').readlines()
    Out[ ]: ['sue\xc3\xb1a el rico en su riqueza,\n',
             'que m\xc3\xa1s cuidados le ofrece;\n',
             'sue\xc3\xb1a el pobre que padece\n',
             'su miseria y su pobreza;\n',
             'sue\xc3\xb1a el que a medrar empieza,\n',
             'sue\xc3\xb1a el que afana y pretende,\n',
             'sue\xc3\xb1a el que agravia y ofende,\n',
             'y en el mundo, en conclusi\xc3\xb3n,\n',
             'todos sue\xc3\xb1an lo que son,\n',
             'aunque ninguno lo entiende.\n']

See the table below for many of the most commonly-used file methods.
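The writelines-plus-filter pattern above can be demonstrated with a small scratch file (the filename tmp_demo.txt is an arbitrary choice for this sketch):

```python
import os

path = 'tmp_demo.txt'
with open(path, 'w') as handle:                # with closes the file automatically
    handle.writelines(line + '\n' for line in ['first', '', 'second'])

# re-read, dropping blank lines and stripping end-of-line markers
lines = [x.rstrip() for x in open(path) if len(x) > 1]
os.remove(path)                                # clean up the scratch file
```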
12,193 | Table: Important Python file methods or attributes

    Method              Description
    read([size])        Return data from file as a string, with optional size argument indicating the number of bytes to read
    readlines([size])   Return list of lines (as strings) in the file, with optional size argument
    write(str)          Write passed string to file
    writelines(strings) Write passed sequence of strings to the file
    close()             Close the handle
    flush()             Flush the internal I/O buffer to disk
    seek(pos)           Move to indicated file position (integer)
    tell()              Return current file position as integer
    closed              True if the file is closed
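The seek, tell, and closed entries in the table above can be exercised on a scratch file (opened in binary mode so tell() is a plain byte offset; the filename is an arbitrary choice for this sketch):

```python
import os

path = 'tmp_seek_demo.txt'
with open(path, 'w') as f:
    f.write('abcdef')

f = open(path, 'rb')
chunk = f.read(3)        # read three bytes
pos = f.tell()           # current byte offset
f.seek(0)                # rewind to the start
again = f.read(3)        # the same three bytes
f.close()
was_closed = f.closed
os.remove(path)          # clean up the scratch file
```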
12,194 | symbols character !operator !cmd command "two-languageproblem - (hash mark) $path variable character % datetime format % datetime format %alias magic function %automagic magic function % datetime format % datetime format %bookmark magic function % datetime format %cd magic function %cpaste magic function - % datetime format % datetime format % format character %debug magic function - %dhist magic function %dirs magic function %env magic function % datetime format %gui magic function % datetime format %hist magic function % datetime format %logstart magic function %logstop magic function %lprun magic function % datetime format % datetime format %magic magic function % datetime format %page magic function %paste magic function %pdb magic function %popd magic function %prun magic function %pushd magic function %pwd magic function %quickref magic function %reset magic function %run magic function - % datetime format % format character %time magic function %timeit magic function % datetime format % datetime format % datetime format %who magic function %whos magic function %who_ls magic function % datetime format % datetime format %xdel magic function %xmode magic function % datetime format % datetime format % datetime format operator operator operator federal election commission database example - we' like to hear your suggestions for improving our indexes send email to index@oreilly com www it-ebooks info |
12,195 | donation statistics by occupation and employer - donation statistics by state - =operator prompt (question mark) [(brackets) (backslash) (underscore) __ (two underscores) {(braces) operator file mode abs function accumulate method add method add_patch method add_subplot method aggfunc option aggregate method aggregations algorithms for sorting - alignment of data - all method alpha argument and keyword annotating in matplotlib - anonymous functions any method append method apply method - apt package management tool arange function arccos function arccosh function arcsin function arcsinh function arctan function arctanh function argmax method argmin method argsort method arithmetic - operations between dataframe and series - with fill values - arrays boolean arrays boolean indexing for - conditional logic as operation - creating - creating periodindex from data types for - fancy indexing - file input and output with - saving and loading text files - storing on disk in binary format finding elements in sorted array - in numpy - concatenating - c_ object layout of in memory - replicating - reshaping - r_ object saving to file - splitting - subsets for - indexes for - operations between - setting values by broadcasting slicing - sorting - statistical methods for structured arrays - benefits of mainpulating nested data types - swapping axes in - transposing - unique function - where function - arrow function as keyword asarray function asfreq method asof method - astype method attributes in python starting with underscore average method ax argument axes index www it-ebooks info |
12,196 | concatenating along - labels for - renaming indexes for - swapping in arrays - axessubplot object axis argument axis method broadcasting - defined over other axes - setting array values by bucketing - file mode backslash (\) bar plots - basemap object bashrc file bash_profile file bbox_inches option benefits of python - glue for code solving "two-languageproblem with of structured arrays beta function defined between_time method bfill method bin edges binary data formats - hdf - microsoft excel files storing arrays in - binary moving window functions - binary search of lists binary universal functions binding defined variables binomial function bisect module bookmarking directories in ipython boolean arrays data type indexing for arrays - bottleneck library braces ({}) brackets ([]) break keyword calendar module casting cat method categorical object ceil function center method chaco chisquare function chunksize argument clearing screen shortcut clipboardexecuting code from - clock function close method closures - cmd exe collections module colons cols option columnsgrouping on - column_stack function combinations function combine_first method combining data sources - data sourceswith overlap - lists commands (see also magic commandsdebugger history in ipython - input and output variables - logging of - reusing command history searching for comment argument comments in python compile method complex data type complex data type complex data type concat function index www it-ebooks info |
12,197 | along axis - arrays - conditional logic as array operation - conferences configuring matplotlib - conforming contains method contiguous memory - continue keyword continuous return convention argument converting between string and datetime - timestamps to periods coordinated universal time (utc) copy argument copy method copysign function corr method correlation - corrwith method cos function cosh function count method counter class cov method covariance - cpython cross-section crosstab function - crowdsourcing csv files - ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl- keyboard shortcut ctrl-shift- keyboard shortcut ctrl- keyboard shortcut cummax method cummin method cumprod method cumsum method cumulative returns - currying cursormoving with keyboard custom universal functions cut function cython project - c_ object data aggregation - returning data in unindexed form using multiple functions - data alignment - arithmetic methods with fill values operations between dataframe and series - data munging - asof method - combining data - for data alignment - for specialized frequencies - data structures for pandas - dataframe - index objects - panel - series - data types for arrays - for ndarray - for numpy - hierarchy of for python - boolean data type dates and times - none data type numeric data types - str data type - type casting in for time series data - converting between string and datetime - nested - data wrangling manipulating strings - methods for - vectorized string methods - with regular expressions - merging data - index www it-ebooks info |
12,198 | concatenating along axis - dataframe merges - on index - pivoting - reshaping - transforming data - discretization - dummy variables - filtering outliers - mapping - permutation removing duplicates - renaming axis indexes - replacing values - usda food database example - databases reading and writing to - dataframe data structure arithmetic operations between series and - hierarchical indexing using - merging data with - dates and times (see also time series datadata types for - date ranges datetime type - datetimeindex index object dateutil package date_parser argument date_range function dayfirst argument debug function debuggeripython in ipython - def keyword defaults profiles values for dicts - del keyword delete method delimited formats - density plots - describe method design tips - flat is better than nested keeping relevant objects and data alive overcoming fear of longer files - det function development tools in ipython - debugger - profiling code - profiling function line-by-line - timing code - diag function dicts - creating default values for - dict comprehensions - grouping on - keys for returning system environment variables as diff method difference method digitize function directories bookmarking in ipython changingcommands for discretization - div method divide function dmg file donation statistics by occupation and employer - by state - dot function doublequote option downsampling dpi (dots-per-inchoption dreload function drop method dropna method drop_duplicates method dsplit function dstack function dtype object (see data types"ducktyping in python dummy variables - dumps function duplicated method duplicates indices - removing from data - dynamically-generated functions index www it-ebooks info |
12,199 | edgecolo option edit-compile-run workflow eig function elif blocks (see if statementselse block (see if statementsempty function empty namespace encoding argument endswith method enumerate function environment variables epd (enthought python distribution) - equal function escapechar option ewma function ewmcorr function ewmcov function ewmstd function ewmvar function excelfile class except block exceptions automatically entering debugger after defined handling in python - exec keyword execute-explore workflow execution time of code of single statement exit command exp function expanding window mean exponentially-weighted functions extend method extensible markup language (xmlfiles eye function fabs function facecolor option factor analysis - factor object factors fancy indexing defined for arrays - ffill method figsize argument figure object file input/output binary data formats for - hdf - microsoft excel files for arrays - hdf memory-mapped files - saving and loading text files - storing on disk in binary format in python - saving plot to file text files - delimited formats - html files - json data - lxml library - reading in pieces - writing to - xml files - with databases - with web apis - filling in missing data - - fillna method fill_method argument fill_value option filtering in pandas - missing data - outliers - financial applications cumulative returns - data munging - asof method - combining data - for data alignment - for specialized frequencies - future contract rolling - grouping for - factor analysis with - quartile analysis - linear regression - return indexes - rolling correlation - index www it-ebooks info |