In [ ]: df = boys[boys.year == 2010]

In [ ]: prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum()

In [ ]: prop_cumsum.searchsorted(0.5)
Out[ ]: ...

It should now be fairly straightforward to apply this operation to each year/sex combination; groupby those fields and apply a function returning the count for each group:

def get_quantile_count(group, q=0.5):
    group = group.sort_index(by='prop', ascending=False)
    return group.prop.cumsum().searchsorted(q) + 1

diversity = top.groupby(['year', 'sex']).apply(get_quantile_count)
diversity = diversity.unstack('sex')

This resulting DataFrame diversity now has two time series, one for each sex, indexed by year. This can be inspected in IPython and plotted as before:

In [ ]: diversity.head()
Out[ ]:
sex    F  M
year
...

In [ ]: diversity.plot(title="Number of popular names in top 50%")

Figure: Plot of diversity metric by year
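The core of get_quantile_count is pairing a cumulative sum with searchsorted. A minimal sketch with invented proportions (the values below are hypothetical, not from the names dataset):

```python
import numpy as np

# Hypothetical proportions for one year/sex group, already sorted in
# descending order of popularity (these values are invented)
props = np.array([0.3, 0.2, 0.15, 0.1, 0.08, 0.07, 0.05, 0.05])

# Running share of births covered by the top k names
cum = props.cumsum()

# searchsorted finds the first position where the running total reaches
# 50%; adding 1 converts the zero-based index into a count of names
count = cum.searchsorted(0.5) + 1
```

Here the top two names already cover half of all births, so count is 2.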
have only become more so over time. Further analysis of what exactly is driving the diversity, like the increase of alternate spellings, is left to the reader.

The "Last Letter" Revolution

Baby name researcher Laura Wattenberg pointed out on her website (http://www.babynamewizard.com) that the distribution of boy names by final letter has changed significantly over the last 100 years. To see this, I first aggregate all of the births in the full data set by year, sex, and final letter:

# extract last letter from name column
get_last_letter = lambda x: x[-1]
last_letters = names.name.map(get_last_letter)
last_letters.name = 'last_letter'

table = names.pivot_table('births', rows=last_letters, cols=['sex', 'year'], aggfunc=sum)

Then, I select out three representative years spanning the history and print the first few rows:

In [ ]: subtable = table.reindex(columns=[1910, 1960, 2010], level='year')

In [ ]: subtable.head()
Out[ ]:
sex                F                M
year            1910  1960  2010  1910  1960  2010
last_letter
...

Next, normalize the table by total births to compute a new table containing the proportion of total births for each sex ending in each letter:

In [ ]: subtable.sum()
Out[ ]: ...

In [ ]: letter_prop = subtable / subtable.sum().astype(float)

With the letter proportions now in hand, I can make bar plots for each sex broken down by year:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(10, 8))
letter_prop['M'].plot(kind='bar', rot=0, ax=axes[0], title='Male')
letter_prop['F'].plot(kind='bar', rot=0, ax=axes[1], title='Female', legend=False)

Figure: Proportion of boy and girl names ending in each letter

As you can see, boy names ending in "n" have experienced significant growth since the 1960s. Going back to the full table created above, I again normalize by year and sex and select a subset of letters for the boy names, finally transposing to make each column a time series:

In [ ]: letter_prop = table / table.sum().astype(float)

In [ ]: dny_ts = letter_prop.ix[['d', 'n', 'y'], 'M'].T

In [ ]: dny_ts.head()
Out[ ]:
        d       n       y
year
...

With this DataFrame of time series in hand, I can make a plot of the trends over time again with its plot method:

In [ ]: dny_ts.plot()
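The pivot-then-normalize pattern used above can be sketched end to end on a tiny invented frame (the names and counts below are made up; note also that index=/columns= are the modern pandas spellings of the older rows=/cols= keywords used in the text):

```python
import pandas as pd

# A tiny, invented stand-in for the full names DataFrame
names = pd.DataFrame({
    'name': ['John', 'Mary', 'Dan', 'Anna'],
    'sex': ['M', 'F', 'M', 'F'],
    'births': [100, 80, 50, 40],
})

# Last letter of each name, used as the row grouping
last_letters = names['name'].map(lambda x: x[-1])
last_letters.name = 'last_letter'

table = names.pivot_table('births', index=last_letters,
                          columns='sex', aggfunc='sum')

# Normalize within each sex so letter proportions sum to 1
letter_prop = table / table.sum().astype(float)
```

John and Dan both end in 'n', so the 'n' row of the 'M' column aggregates both of their birth counts before normalization.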
Boy Names That Became Girl Names (and Vice Versa)

Another fun trend is looking at boy names that were more popular with one sex earlier in the sample but have "changed sexes" in the present. One example is the name Lesley or Leslie. Going back to the top dataset, I compute a list of names occurring in the dataset starting with 'lesl':

In [ ]: all_names = top.name.unique()

In [ ]: mask = np.array(['lesl' in x.lower() for x in all_names])

In [ ]: lesley_like = all_names[mask]

In [ ]: lesley_like
Out[ ]: array([Leslie, Lesley, Leslee, Lesli, Lesly], dtype=object)

From there, we can filter down to just those names and sum births grouped by name to see the relative frequencies:

In [ ]: filtered = top[top.name.isin(lesley_like)]

In [ ]: filtered.groupby('name').births.sum()
Out[ ]:
name
Leslee    ...
Lesley    ...
Lesli     ...
Leslie    ...
Lesly     ...
Name: births

Next, let's aggregate by sex and year and normalize within year:
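The isin-based filtering step above works on any Series; a self-contained sketch with invented data:

```python
import pandas as pd

# Invented stand-in for the top dataset
top = pd.DataFrame({
    'name': ['Leslie', 'John', 'Lesley', 'Mary', 'Lesly'],
    'births': [120, 500, 80, 400, 30],
})

lesley_like = ['Leslie', 'Lesley', 'Lesly']

# Boolean mask keeping only the Lesley-like rows
filtered = top[top['name'].isin(lesley_like)]

# Total births per surviving name
totals = filtered.groupby('name')['births'].sum()
```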
table = filtered.pivot_table('births', rows='year', cols='sex', aggfunc='sum')

In [ ]: table = table.div(table.sum(1), axis=0)

In [ ]: table.tail()
Out[ ]:
sex    F  M
year
...

Lastly, it's now easy to make a plot of the breakdown by sex over time:

In [ ]: table.plot(style={'M': 'k-', 'F': 'k--'})

Figure: Proportion of male/female Lesley-like names over time

Conclusions and the Path Ahead

The examples in this chapter are rather simple, but they're here to give you a bit of the flavor of what sorts of things you can expect in the upcoming chapters. The focus of this book is on tools as opposed to presenting more sophisticated analytical methods. Mastering the techniques in this book will enable you to implement your own analyses (assuming you know what you want to do!) in short order.
IPython: An Interactive Computing and Development Environment

    Act without doing; work without effort. Think of the small as large and the few as many. Confront the difficult while it is still easy; accomplish the great task by a series of small acts.
    —Laozi

People often ask me, "What is your Python development environment?" My answer is almost always the same: "IPython and a text editor". You may choose to substitute an Integrated Development Environment (IDE) for a text editor in order to take advantage of more advanced graphical tools and code completion capabilities. Even if so, I strongly recommend making IPython an important part of your workflow. Some IDEs even provide IPython integration, so it's possible to get the best of both worlds.

The IPython project began in 2001 as Fernando Perez's side project to make a better interactive Python interpreter. In the subsequent years it has grown into what's widely considered one of the most important tools in the modern scientific Python computing stack. While it does not provide any computational or data analytical tools by itself, IPython is designed from the ground up to maximize your productivity in both interactive computing and software development. It encourages an execute-explore workflow instead of the typical edit-compile-run workflow of many other programming languages. It also provides very tight integration with the operating system's shell and file system. Since much of data analysis coding involves exploration, trial and error, and iteration, IPython will, in almost all cases, help you get the job done faster.

Of course, the IPython project now encompasses a great deal more than just an enhanced, interactive Python shell. It also includes a rich GUI console with inline plotting, a web-based interactive notebook format, and a lightweight, fast parallel computing engine. And, as with so many other tools designed for and by programmers, it is highly customizable. I'll discuss some of these features later in the chapter.
As you work through this introduction to IPython, I recommend that you follow along with the examples to get a feel for how things work. As with any keyboard-driven console-like environment, developing muscle memory for the common commands is part of the learning curve. Many parts of this chapter (for example, profiling and debugging) can be safely omitted on a first reading as they are not necessary for understanding the rest of the book. This chapter is intended to provide a standalone, rich overview of the functionality provided by IPython.

IPython Basics

You can launch IPython on the command line just like launching the regular Python interpreter, except with the ipython command:

$ ipython
Python (default, ...)
Type "copyright", "credits" or "license" for more information.

IPython -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [ ]: a = 5

In [ ]: a
Out[ ]: 5

You can execute arbitrary Python statements by typing them in and pressing <Enter>. When typing just a variable into IPython, it renders a string representation of the object:

In [ ]: data = {i: randn() for i in range(7)}

In [ ]: data
Out[ ]: {0: ..., 1: ..., ..., 6: ...}
which is distinct from normal printing with print. If you printed the same dict in the standard Python interpreter, it would be much less readable:

>>> from numpy.random import randn
>>> data = {i: randn() for i in range(7)}
>>> print data
{0: ..., 1: ..., 2: ..., 3: ..., 4: ..., 5: ..., 6: ...}

IPython also provides facilities to make it easy to execute arbitrary blocks of code (via somewhat glorified copy-and-pasting) and whole Python scripts. These will be discussed shortly.

Tab Completion

On the surface, the IPython shell looks like a cosmetically slightly different interactive Python interpreter. Users of Mathematica may find the enumerated input and output prompts familiar. One of the major improvements over the standard Python shell is tab completion, a feature common to most interactive data analysis environments. While entering expressions in the shell, pressing <Tab> will search the namespace for any variables (objects, functions, etc.) matching the characters you have typed so far:

In [ ]: an_apple = 27

In [ ]: an_example = 42

In [ ]: an<Tab>
an_apple   and   an_example   any

In this example, note that IPython displayed both the two variables I defined as well as the Python keyword and and built-in function any. Naturally, you can also complete methods and attributes on any object after typing a period:

In [ ]: b = [1, 2, 3]

In [ ]: b.<Tab>
b.append   b.count   b.extend   b.index   b.insert   b.pop   b.remove   b.reverse   b.sort

The same goes for modules:

In [ ]: import datetime

In [ ]: datetime.<Tab>
datetime.date            datetime.MAXYEAR    datetime.time
datetime.datetime        datetime.MINYEAR    datetime.timedelta
datetime.datetime_CAPI   datetime.tzinfo
Note that IPython by default hides methods and attributes starting with underscores, such as magic methods and internal "private" methods and attributes, in order to avoid cluttering the display (and confusing new Python users!). These, too, can be tab-completed, but you must first type an underscore to see them. If you prefer to always see such methods in tab completion, you can change this setting in the IPython configuration.

Tab completion works in many contexts outside of searching the interactive namespace and completing object or module attributes. When typing anything that looks like a file path (even in a Python string), pressing <Tab> will complete anything on your computer's file system matching what you've typed:

In [ ]: book_scripts/<Tab>
book_scripts/cprof_example.py         book_scripts/ipython_script_test.py
book_scripts/ipython_bug.py           book_scripts/prof_mod.py

In [ ]: path = 'book_scripts/<Tab>
book_scripts/cprof_example.py         book_scripts/ipython_script_test.py
book_scripts/ipython_bug.py           book_scripts/prof_mod.py

Combined with the %run command (see the later section), this functionality will undoubtedly save you many keystrokes. Another area where tab completion saves time is in the completion of function keyword arguments (including the = sign!).

Introspection

Using a question mark (?) before or after a variable will display some general information about the object:

In [ ]: b?
Type:        list
String Form: [1, 2, 3]
Length:      3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items

This is referred to as object introspection. If the object is a function or instance method, the docstring, if defined, will also be shown. Suppose we'd written the following function:

def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b

Then using ? shows us the docstring:

In [ ]: add_numbers?
Type:        function
String Form: <function add_numbers at ...>
File:        book_scripts/...
Definition:  add_numbers(a, b)
Docstring:
Add two numbers together

Returns
-------
the_sum : type of arguments

Using ?? will also show the function's source code if possible:

In [ ]: add_numbers??
Type:        function
String Form: <function add_numbers at ...>
File:        book_scripts/...
Definition:  add_numbers(a, b)
Source:
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b

? has a final usage, which is for searching the IPython namespace in a manner similar to the standard UNIX or Windows command line. A number of characters combined with the wildcard (*) will show all names matching the wildcard expression. For example, we could get a list of all functions in the top-level NumPy namespace containing load:

In [ ]: np.*load*?
np.load
np.loads
np.loadtxt
np.pkgload

The %run Command

Any file can be run as a Python program inside the environment of your IPython session using the %run command. Suppose you had the following simple script stored in ipython_script_test.py:

def f(x, y, z):
    return (x + y) / z

a = 5
b = 6
c = 7.5
result = f(a, b, c)

This can be executed by passing the file name to %run:

In [ ]: %run ipython_script_test.py

The script is run in an empty namespace (with no imports or other variables defined) so that the behavior should be identical to running the program on the command line using python script.py. All of the variables (imports, functions, and globals) defined in the file (up until an exception, if any, is raised) will then be accessible in the IPython shell:

In [ ]: c
Out[ ]: 7.5

In [ ]: result
Out[ ]: 1.4666666666666666

If a Python script expects command line arguments (to be found in sys.argv), these can be passed after the file path as though run on the command line.

Should you wish to give a script access to variables already defined in the interactive IPython namespace, use %run -i instead of plain %run.

Interrupting Running Code

Pressing <Ctrl-C> while any code is running, whether a script through %run or a long-running command, will cause a KeyboardInterrupt to be raised. This will cause nearly all Python programs to stop immediately except in very exceptional cases.

When a piece of Python code has called into some compiled extension modules, pressing <Ctrl-C> will not cause the program execution to stop immediately in all cases. In such cases, you will have to either wait until control is returned to the Python interpreter, or, in more dire circumstances, forcibly terminate the Python process via the OS task manager.

Executing Code from the Clipboard

A quick-and-dirty way to execute code in IPython is via pasting from the clipboard. This might seem fairly crude, but in practice it is very useful. For example, while developing a complex or time-consuming application, you may wish to execute a script piece by piece, pausing at each stage to examine the currently loaded data and results. Or, you might find a code snippet on the internet that you want to run and play around with, but you'd rather not create a new .py file for it.
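The "runs in an empty namespace" semantics of %run can be mimicked in plain Python with the standard runpy module. This is a sketch of the behavior (not how IPython is implemented), recreating the example script in a temporary directory:

```python
import os
import runpy
import tempfile

# Recreate ipython_script_test.py on disk
script = (
    "def f(x, y, z):\n"
    "    return (x + y) / z\n"
    "\n"
    "a = 5\n"
    "b = 6\n"
    "c = 7.5\n"
    "result = f(a, b, c)\n"
)
path = os.path.join(tempfile.mkdtemp(), 'ipython_script_test.py')
with open(path, 'w') as fh:
    fh.write(script)

# Like %run, run_path executes the file in a fresh namespace and
# returns the resulting module globals
ns = runpy.run_path(path)
```

Afterward, ns['c'] is 7.5 and ns['result'] holds f(5, 6, 7.5), just as c and result become visible in the IPython shell after %run.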
Pasting code in the usual way sends the text one line at a time into IPython, and line breaks are treated as <Return>. This means that if you paste code with an indented block and there is a blank line, IPython will think that the indented block is over. Once the next line in the block is executed, an IndentationError will be raised. For example, the following code:

x = 5
y = 7
if x > 5:
    x += 1

    y = 8

will not work if simply pasted:

In [ ]: x = 5

In [ ]: y = 7

In [ ]: if x > 5:
   ...:     x += 1
   ...:

In [ ]:     y = 8
IndentationError: unexpected indent

If you want to paste code into IPython, try the %paste and %cpaste magic functions.

As the error message suggests, we should instead use the %paste and %cpaste magic functions. %paste takes whatever text is in the clipboard and executes it as a single block in the shell:

In [ ]: %paste
x = 5
y = 7
if x > 5:
    x += 1

    y = 8
## -- End pasted text --

Depending on your platform and how you installed Python, there's a small chance that %paste will not work. Packaged distributions like EPDFree (as described in the intro) should not be a problem.

%cpaste is similar, except that it gives you a special prompt for pasting code into:

In [ ]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:x = 5
:y = 7
:if x > 5:
:    x += 1
:
:    y = 8
:--

With the %cpaste block, you have the freedom to paste as much code as you like before executing it. You might decide to use %cpaste in order to look at the pasted code before executing it. If you accidentally paste the wrong code, you can break out of the %cpaste prompt by pressing <Ctrl-C>.

Later, I'll introduce the IPython HTML Notebook, which brings a new level of sophistication for developing analyses block by block in a browser-based notebook format with executable code cells.

IPython Interaction with Editors and IDEs

Some text editors, such as Emacs and vim, have 3rd-party extensions enabling blocks of code to be sent directly from the editor to a running IPython shell. Refer to the IPython website or do an internet search to find out more.

Some IDEs, such as the PyDev plugin for Eclipse and Python Tools for Visual Studio from Microsoft (and possibly others), have integration with the IPython terminal application. If you want to work in an IDE but don't want to give up the IPython console features, this may be a good option for you.

Keyboard Shortcuts

IPython has many keyboard shortcuts for navigating the prompt (which will be familiar to users of the Emacs text editor or the UNIX bash shell) and interacting with the shell's command history (see the later section). The following table summarizes some of the most commonly used shortcuts. See the figure for an illustration of a few of these, such as cursor movement.

Figure: Illustration of some of IPython's keyboard shortcuts
Command               Description
Ctrl-P or up-arrow    Search backward in command history for commands starting with currently-entered text
Ctrl-N or down-arrow  Search forward in command history for commands starting with currently-entered text
Ctrl-R                Readline-style reverse history search (partial matching)
Ctrl-Shift-V          Paste text from clipboard
Ctrl-C                Interrupt currently-executing code
Ctrl-A                Move cursor to beginning of line
Ctrl-E                Move cursor to end of line
Ctrl-K                Delete text from cursor until end of line
Ctrl-U                Discard all text on current line
Ctrl-F                Move cursor forward one character
Ctrl-B                Move cursor back one character
Ctrl-L                Clear screen

Exceptions and Tracebacks

If an exception is raised while %run-ing a script or executing any statement, IPython will by default print a full call stack trace (traceback) with a few lines of context around the position at each point in the stack:

In [ ]: %run ch03/ipython_bug.py
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/home/wesm/code/ipython/IPython/utils/py3compat.pyc in execfile(fname, *where)
        else:
            filename = fname
-->     __builtin__.execfile(filename, *where)

book_scripts/ch03/ipython_bug.py in <module>()
        throws_an_exception()
--->    calling_things()

book_scripts/ch03/ipython_bug.py in calling_things()
    def calling_things():
        works_fine()
--->    throws_an_exception()

book_scripts/ch03/ipython_bug.py in throws_an_exception()
---->   assert(a + b == 10)

AssertionError:
The amount of context shown can be controlled using the %xmode magic command, from minimal (the same as the standard Python interpreter) to verbose (which inlines function argument values and more). As you will see later in the chapter, you can step into the stack (using the %debug or %pdb magics) after an error has occurred for interactive post-mortem debugging.

Magic Commands

IPython has many special commands, known as "magic" commands, which are designed to facilitate common tasks and enable you to easily control the behavior of the IPython system. A magic command is any command prefixed by the percent symbol %. For example, you can check the execution time of any Python statement, such as a matrix multiplication, using the %timeit magic function (which will be discussed in more detail later):

In [ ]: a = np.random.randn(100, 100)

In [ ]: %timeit np.dot(a, a)
... loops, best of 3: ... us per loop

Magic commands can be viewed as command line programs to be run within the IPython system. Many of them have additional "command line" options, which can all be viewed (as you might expect) using ?:

In [ ]: %reset?
Resets the namespace by removing all names defined by the user.

Parameters
----------
-f : force reset without asking for confirmation.
-s : 'Soft' reset: Only clears your namespace, leaving history intact.
     References to objects may be kept. By default (without this option),
     we do a 'hard' reset, giving you a new session and removing all
     references to objects from the current session.

Examples
--------
In [ ]: a = 1

In [ ]: 'a' in _ip.user_ns
Out[ ]: True

In [ ]: %reset -f

In [ ]: 'a' in _ip.user_ns
Out[ ]: False
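Outside IPython, the measurement %timeit performs can be approximated with the standard timeit module (the statement timed below is an arbitrary stand-in):

```python
import timeit

# Run the statement 1000 times and report total elapsed seconds;
# %timeit does essentially this, plus automatic loop-count selection
# and per-loop formatting
elapsed = timeit.timeit('sum(range(1000))', number=1000)
per_loop = elapsed / 1000
```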
Magic functions can be used by default without the percent sign, as long as no variable is defined with the same name as the magic function in question. This feature is called automagic and can be enabled or disabled using %automagic.

Since IPython's documentation is easily accessible from within the system, I encourage you to explore all of the special commands available by typing %quickref or %magic. I will highlight a few more of the most critical ones for being productive in interactive computing and Python development in IPython.

Frequently used IPython magic commands:

Command                Description
%quickref              Display the IPython Quick Reference Card
%magic                 Display detailed documentation for all of the available magic commands
%debug                 Enter the interactive debugger at the bottom of the last exception traceback
%hist                  Print command input (and optionally output) history
%pdb                   Automatically enter debugger after any exception
%paste                 Execute pre-formatted Python code from clipboard
%cpaste                Open a special prompt for manually pasting Python code to be executed
%reset                 Delete all variables / names defined in interactive namespace
%page OBJECT           Pretty print the object and display it through a pager
%run script.py         Run a Python script inside IPython
%prun statement        Execute statement with cProfile and report the profiler output
%time statement        Report the execution time of a single statement
%timeit statement      Run a statement multiple times to compute an ensemble average execution time; useful for timing code with very short execution time
%who, %who_ls, %whos   Display variables defined in interactive namespace, with varying levels of information / verbosity
%xdel variable         Delete a variable and attempt to clear any references to the object in the IPython internals

Qt-based Rich GUI Console

The IPython team has developed a Qt framework-based GUI console, designed to wed the features of the terminal-only application with the features provided by a rich text widget, like embedded images, multiline editing, and syntax highlighting. If you have either PyQt or PySide installed, the application can be launched with inline plotting by running this on the command line:

ipython qtconsole --pylab=inline

The Qt console can launch multiple IPython processes in tabs, enabling you to switch between tasks. It can also share a process with the IPython HTML Notebook application, which I'll highlight later.
Matplotlib Integration and Pylab Mode

Part of why IPython is so widely used in scientific computing is that it is designed as a companion to libraries like matplotlib and other GUI toolkits. Don't worry if you have never used matplotlib before; it will be discussed in much more detail later in this book. If you create a matplotlib plot window in the regular Python shell, you'll be sad to find that the GUI event loop "takes control" of the Python session until the plot window is closed. That won't work for interactive data analysis and visualization, so IPython has
implemented special handling for each GUI framework so that it will work seamlessly with the shell.

The typical way to launch IPython with matplotlib integration is by adding the --pylab flag (two dashes):

$ ipython --pylab

This will cause several things to happen. First, IPython will launch with the default GUI backend integration enabled so that matplotlib plot windows can be created with no issues. Secondly, most of NumPy and matplotlib will be imported into the top-level interactive namespace to produce an interactive computing environment reminiscent of MATLAB and other domain-specific scientific computing environments. It's possible to do this setup by hand by using %gui, too (try running %gui? to find out how).

Figure: Pylab mode: IPython with matplotlib windows
Using the Command History

IPython maintains a small on-disk database containing the text of each command that you execute. This serves various purposes:

- Searching, completing, and executing previously executed commands with minimal typing
- Persisting the command history between sessions
- Logging the input/output history to a file

Searching and Reusing the Command History

Being able to search and execute previous commands is, for many people, the most useful feature. Since IPython encourages an iterative, interactive code development workflow, you may often find yourself repeating the same commands, such as a %run command or some other code snippet. Suppose you had run:

In [ ]: %run first/second/third/data_script.py

and then explored the results of the script (assuming it ran successfully), only to find that you made an incorrect calculation. After figuring out the problem and modifying data_script.py, you can start typing a few letters of the %run command, then press either the <Ctrl-P> key combination or the <up arrow> key. This will search the command history for the first prior command matching the letters you typed. Pressing either <Ctrl-P> or <up arrow> multiple times will continue to search through the history. If you pass over the command you wish to execute, fear not. You can move forward through the command history by pressing either <Ctrl-N> or <down arrow>. After doing this a few times you may start pressing these keys without thinking!

Using <Ctrl-R> gives you the same partial incremental searching capability provided by the readline used in UNIX-style shells, such as the bash shell. On Windows, readline functionality is emulated by IPython. To use this, press <Ctrl-R> and then type a few characters contained in the input line you want to search for:

In [ ]: a_command = foo(x, y, z)

(reverse-i-search)`com': a_command = foo(x, y, z)

Pressing <Ctrl-R> will cycle through the history for each line matching the characters you've typed.

Input and Output Variables

Forgetting to assign the result of a function call to a variable can be very annoying. Fortunately, IPython stores references to both the input (the text that you type) and output (the object that is returned) in special variables. The previous two outputs are stored in the _ (one underscore) and __ (two underscores) variables, respectively:
In [ ]: 2 ** 27
Out[ ]: 134217728

In [ ]: _
Out[ ]: 134217728

Input variables are stored in variables named like _iX, where X is the input line number. For each such input variable there is a corresponding output variable _X. So after input line 27, say, there will be two new variables: _27 (for the output) and _i27 (for the input).

In [26]: foo = 'bar'

In [27]: foo
Out[27]: 'bar'

In [28]: _i27
Out[28]: u'foo'

In [29]: _27
Out[29]: 'bar'

Since the input variables are strings, they can be executed again using the Python exec keyword:

In [30]: exec _i27

Several magic functions allow you to work with the input and output history. %hist is capable of printing all or part of the input history, with or without line numbers. %reset is for clearing the interactive namespace and optionally the input and output caches. The %xdel magic function is intended for removing all references to a particular object from the IPython machinery. See the documentation for both of these magics for more details.

When working with very large data sets, keep in mind that IPython's input and output history causes any object referenced there to not be garbage collected (freeing up the memory), even if you delete the variables from the interactive namespace using the del keyword. In such cases, careful usage of %xdel and %reset can help you avoid running into memory problems.

Logging the Input and Output

IPython is capable of logging the entire console session, including input and output. Logging is turned on by typing %logstart:

In [ ]: %logstart
Activating auto-logging. Current session state plus future input saved.
Filename       : ipython_log.py
Mode           : rotate
Output logging : False
Raw input log  : False
Timestamps     : False
State          : active

IPython logging can be enabled at any time and it will record your entire session (including previous commands). Thus, if you are working on something and you decide you want to save everything you did, you can simply enable logging. See the docstring of %logstart for more options (including changing the output file path), as well as the companion functions %logoff, %logon, %logstate, and %logstop.

Interacting with the Operating System

Another important feature of IPython is that it provides very strong integration with the operating system shell. This means, among other things, that you can perform most standard command line actions as you would in the Windows or UNIX (Linux, OS X) shell without having to exit IPython. This includes executing shell commands, changing directories, and storing the results of a command in a Python object (list or string). There are also simple shell command aliasing and directory bookmarking features.

See the table below for a summary of magic functions and syntax for calling shell commands. I'll briefly visit these features in the next few sections.

IPython system-related commands:

Command                  Description
!cmd                     Execute cmd in the system shell
output = !cmd args       Run cmd and store the stdout in output
%alias alias_name cmd    Define an alias for a system (shell) command
%bookmark                Utilize IPython's directory bookmarking system
%cd directory            Change system working directory to passed directory
%pwd                     Return the current system working directory
%pushd directory         Place current directory on stack and change to target directory
%popd                    Change to directory popped off the top of the stack
%dirs                    Return a list containing the current directory stack
%dhist                   Print the history of visited directories
%env                     Return the system environment variables as a dict

Shell Commands and Aliases

Starting a line in IPython with an exclamation point !, or bang, tells IPython to execute everything after the bang in the system shell. This means that you can delete files (using rm or del, depending on your OS), change directories, or execute any other process. It's even possible to start processes that take control away from IPython, even another Python interpreter:
In [ ]: !python
Python |EPD| (default, Jul ...)
[GCC (Red Hat ...)] on linux
Type "packages", "demo" or "enthought" for more information.

The console output of a shell command can be stored in a variable by assigning the !-escaped expression to a variable. For example, on my Linux-based machine connected to the internet via ethernet, I can get my IP address as a Python variable:

In [ ]: ip_info = !ifconfig eth0 | grep "inet "

In [ ]: ip_info[0].strip()
Out[ ]: 'inet addr:... Bcast:... Mask:...'

The returned Python object ip_info is actually a custom list type containing various versions of the console output.

IPython can also substitute in Python values defined in the current environment when using !. To do this, preface the variable name by the dollar sign $:

In [ ]: foo = 'test*'

In [ ]: !ls $foo
test4.py  test.py  test.xml

The %alias magic function can define custom shortcuts for shell commands. As a simple example:

In [ ]: %alias ll ls -l

In [ ]: ll /usr
total ...
drwxr-xr-x ... bin/
drwxr-xr-x ... games/
drwxr-xr-x ... include/
drwxr-xr-x ... lib/
lrwxrwxrwx ... lib64 -> lib/
drwxr-xr-x ... local/
drwxr-xr-x ... sbin/
drwxr-xr-x ... share/
drwxrwsr-x ... src/

Multiple commands can be executed just as on the command line by separating them with semicolons:

In [ ]: %alias test_alias (cd ch08; ls; cd ..)

In [ ]: test_alias
macrodata.csv  spx.csv  tips.csv

You'll notice that IPython "forgets" any aliases you define interactively as soon as the session is closed. To create permanent aliases, you will need to use the configuration system. See later in the chapter.
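The out = !cmd pattern has a rough plain-Python analogue in the standard subprocess module; IPython's version returns a richer list type, but the idea is the same (the command below is an arbitrary example):

```python
import subprocess

# Capture a command's stdout as text and split it into lines,
# roughly what `out = !cmd` does inside IPython
result = subprocess.run(['echo', 'inet addr: example'],
                        capture_output=True, text=True, check=True)
lines = result.stdout.splitlines()
```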
Directory Bookmark System

IPython has a simple directory bookmarking system to enable you to save aliases for common directories so that you can jump around very easily. For example, I'm an avid user of Dropbox, so I can define a bookmark to make it easy to change directories to my Dropbox:

In [ ]: %bookmark db /home/wesm/Dropbox/

Once I've done this, when I use the %cd magic, I can use any bookmarks I've defined:

In [ ]: cd db
(bookmark:db) -> /home/wesm/Dropbox/
/home/wesm/Dropbox

If a bookmark name conflicts with a directory name in your current working directory, you can use the -b flag to override and use the bookmark location. Using the -l option with %bookmark lists all of your bookmarks:

In [ ]: %bookmark -l
Current bookmarks:
db -> /home/wesm/Dropbox/

Bookmarks, unlike aliases, are automatically persisted between IPython sessions.

Software Development Tools

In addition to being a comfortable environment for interactive computing and data exploration, IPython is well suited as a software development environment. In data analysis applications, it's important first to have correct code. Fortunately, IPython has closely integrated and enhanced the built-in Python pdb debugger. Secondly, you want your code to be fast. For this, IPython has easy-to-use code timing and profiling tools. I will give an overview of these tools in detail here.

Interactive Debugger

IPython's debugger enhances pdb with tab completion, syntax highlighting, and context for each line in exception tracebacks. One of the best times to debug code is right after an error has occurred. The %debug command, when entered immediately after an exception, invokes the "post-mortem" debugger and drops you into the stack frame where the exception was raised:

In [ ]: run ch03/ipython_bug.py
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/home/wesm/book_scripts/ch03/ipython_bug.py in <module>()
        throws_an_exception()
--->    calling_things()

/home/wesm/book_scripts/ch03/ipython_bug.py in calling_things()
     12     works_fine()
---> 13     throws_an_exception()
     14

/home/wesm/book_scripts/ch03/ipython_bug.py in throws_an_exception()
      8     b = 6
----> 9     assert(a + b == 10)
     10

AssertionError:

In [3]: %debug
> /home/wesm/book_scripts/ch03/ipython_bug.py(9)throws_an_exception()
      8     b = 6
----> 9     assert(a + b == 10)

ipdb>

Once inside the debugger, you can execute arbitrary Python code and explore all of the objects and data (which have been "kept alive" by the interpreter) inside each stack frame. By default you start in the lowest level, where the error occurred. By pressing u (up) and d (down), you can switch between the levels of the stack trace:

ipdb> u
> /home/wesm/book_scripts/ch03/ipython_bug.py(13)calling_things()
     12     works_fine()
---> 13     throws_an_exception()

Executing the %pdb command makes it so that IPython automatically invokes the debugger after any exception, a mode that many users will find especially useful.

It's also easy to use the debugger to help develop code, especially when you wish to set breakpoints or step through the execution of a function or script to examine the state at each stage. There are several ways to accomplish this. The first is by using %run with the -d flag, which invokes the debugger before executing any code in the passed script. You must immediately press s (step) to enter the script:

In [5]: run -d ch03/ipython_bug.py
Breakpoint 1 at /home/wesm/book_scripts/ch03/ipython_bug.py:1
NOTE: Enter 'c' at the ipdb> prompt to start your script.
> <string>(1)<module>()

ipdb> s
--Call--
> /home/wesm/book_scripts/ch03/ipython_bug.py(1)<module>()
1---> 1 def works_fine():
For example, in the above exception, we could set a breakpoint right before calling the works_fine method and run the script until we reach the breakpoint by pressing c (continue):

ipdb> b 12
ipdb> c
> /home/wesm/book_scripts/ch03/ipython_bug.py(12)calling_things()
     11 def calling_things():
2--> 12     works_fine()
     13     throws_an_exception()

At this point, you can step into works_fine() or execute works_fine(), that is, advance to the next line, by pressing n (next):

ipdb> n
> /home/wesm/book_scripts/ch03/ipython_bug.py(13)calling_things()
2    12     works_fine()
---> 13     throws_an_exception()

Then, we could step into throws_an_exception and advance to the line where the error occurs and look at the variables in the scope. Note that debugger commands take precedence over variable names; in such cases, preface the variables with ! to examine their contents:

ipdb> s
--Call--
> /home/wesm/book_scripts/ch03/ipython_bug.py(6)throws_an_exception()
      5
----> 6 def throws_an_exception():

ipdb> n
> /home/wesm/book_scripts/ch03/ipython_bug.py(7)throws_an_exception()
      6 def throws_an_exception():
----> 7     a = 5

ipdb> n
> /home/wesm/book_scripts/ch03/ipython_bug.py(9)throws_an_exception()
      8     b = 6
----> 9     assert(a + b == 10)

ipdb> !a
5
ipdb> !b
6
If you are accustomed to an IDE, you might find the terminal-driven debugger to be a bit bewildering at first, but that will improve in time. Most of the Python IDEs have excellent GUI debuggers, but it is usually a significant productivity gain to remain in IPython for your debugging.

Table 3-3. (I)Python debugger commands

Command                     Action
h(elp)                      Display command list
help command                Show documentation for command
c(ontinue)                  Resume program execution
q(uit)                      Exit debugger without executing any more code
b(reak) number              Set breakpoint at number in current file
b path/to/file.py:number    Set breakpoint at line number in specified file
s(tep)                      Step into function call
n(ext)                      Execute current line and advance to next line at current level
u(p) / d(own)               Move up/down in function call stack
a(rgs)                      Show arguments for current function
debug statement             Invoke statement statement in new (recursive) debugger
l(ist) statement            Show current position and context at current level of stack
w(here)                     Print full stack trace with context at current position

Other ways to make use of the debugger

There are a couple of other useful ways to invoke the debugger. The first is by using a special set_trace function (named after pdb.set_trace), which is basically a "poor man's breakpoint". Here are two small recipes you might want to put somewhere for your general use (potentially adding them to your IPython profile as I do):

import sys

def set_trace():
    from IPython.core.debugger import Pdb
    Pdb(color_scheme='Linux').set_trace(sys._getframe().f_back)

def debug(f, *args, **kwargs):
    from IPython.core.debugger import Pdb
    pdb = Pdb(color_scheme='Linux')
    return pdb.runcall(f, *args, **kwargs)

The first function, set_trace, is very simple. Put set_trace() anywhere in your code that you want to stop and take a look around (for example, right before an exception occurs):

In [7]: run ch03/ipython_bug.py
> /home/wesm/book_scripts/ch03/ipython_bug.py(16)calling_things()
     15     set_trace()
---> 16     throws_an_exception()

Pressing c (continue) will cause the code to resume normally with no harm done.

The debug function above enables you to invoke the interactive debugger easily on an arbitrary function call. Suppose we had written a function like

def f(x, y, z=1):
    tmp = x + y
    return tmp / z

and we wished to step through its logic. Ordinarily using f would look like f(1, 2, z=3). To instead step into f, pass f as the first argument to debug followed by the positional and keyword arguments to be passed to f:

In [6]: debug(f, 1, 2, z=3)
> <ipython-input>(2)f()
      1 def f(x, y, z=1):
----> 2     tmp = x + y
      3     return tmp / z

ipdb>

I find that these two simple recipes save me a lot of time on a day-to-day basis.

Lastly, the debugger can be used in conjunction with %run. By running a script with %run -d, you will be dropped directly into the debugger, ready to set any breakpoints and start the script:

In [1]: %run -d ch03/ipython_bug.py
Breakpoint 1 at /home/wesm/book_scripts/ch03/ipython_bug.py:1
NOTE: Enter 'c' at the ipdb> prompt to start your script.
> <string>(1)<module>()

ipdb>

Adding -b with a line number starts the debugger with a breakpoint set already:

In [2]: %run -d -b2 ch03/ipython_bug.py
Breakpoint 1 at /home/wesm/book_scripts/ch03/ipython_bug.py:2
NOTE: Enter 'c' at the ipdb> prompt to start your script.
> <string>(1)<module>()

ipdb> c
> /home/wesm/book_scripts/ch03/ipython_bug.py(2)works_fine()
      1 def works_fine():
1--> 2     a = 5

ipdb>
Timing Code: %time and %timeit

For larger-scale or longer-running data analysis applications, you may wish to measure the execution time of various components or of individual statements or function calls. You may want a report of which functions are taking up the most time in a complex process. Fortunately, IPython enables you to get this information very easily while you are developing and testing your code.

Timing code by hand using the built-in time module and its functions time.clock and time.time is often tedious and repetitive, as you must write the same uninteresting boilerplate code:

import time
start = time.time()
for i in range(iterations):
    # some code to run here
elapsed_per = (time.time() - start) / iterations

Since this is such a common operation, IPython has two magic functions, %time and %timeit, to automate this process for you. %time runs a statement once, reporting the total execution time. Suppose we had a large list of strings and we wanted to compare different methods of selecting all strings starting with a particular prefix. Here is a simple list of 600,000 strings and two identical methods of selecting only the ones that start with 'foo':

# a very large list of strings
strings = ['foo', 'foobar', 'baz', 'qux',
           'python', 'Guido Van Rossum'] * 100000

method1 = [x for x in strings if x.startswith('foo')]
method2 = [x for x in strings if x[:3] == 'foo']

It looks like they should be about the same performance-wise, right? We can check for sure using %time:

In [1]: %time method1 = [x for x in strings if x.startswith('foo')]
CPU times: user ... s, sys: ... s, total: ... s
Wall time: ... s

In [2]: %time method2 = [x for x in strings if x[:3] == 'foo']
CPU times: user ... s, sys: ... s, total: ... s
Wall time: ... s

The Wall time is the main number of interest. So, it looks like the first method takes more than twice as long, but it's not a very precise measurement. If you try %time-ing those statements multiple times yourself, you'll find that the results are somewhat variable. To get a more precise measurement, use the %timeit magic function. Given an arbitrary statement, it has a heuristic to run a statement multiple times to produce a fairly accurate average runtime:

In [3]: %timeit [x for x in strings if x.startswith('foo')]
10 loops, best of 3: ... ms per loop

In [4]: %timeit [x for x in strings if x[:3] == 'foo']
100 loops, best of 3: ... ms per loop

This seemingly innocuous example illustrates that it is worth understanding the performance characteristics of the Python standard library, NumPy, pandas, and other libraries used in this book. In larger-scale data analysis applications, those milliseconds will start to add up!

%timeit is especially useful for analyzing statements and functions with very short execution times, even at the level of microseconds (1e-6 seconds) or nanoseconds (1e-9 seconds). These may seem like insignificant amounts of time, but of course a 20 microsecond function invoked 1 million times takes 15 seconds longer than a 5 microsecond function. In the above example, we could very directly compare the two string operations to understand their performance characteristics:

In [5]: x = 'foobar'

In [6]: y = 'foo'

In [7]: %timeit x.startswith(y)
1000000 loops, best of 3: ... ns per loop

In [8]: %timeit x[:3] == y
10000000 loops, best of 3: ... ns per loop

Basic Profiling: %prun and %run -p

Profiling code is closely related to timing code, except it is concerned with determining where time is spent. The main Python profiling tool is the cProfile module, which is not specific to IPython at all. cProfile executes a program or any arbitrary block of code while keeping track of how much time is spent in each function.

A common way to use cProfile is on the command line, running an entire program and outputting the aggregated time per function. Suppose we had a simple script which does some linear algebra in a loop (computing the maximum absolute eigenvalues of a series of matrices):

import numpy as np
from numpy.linalg import eigvals

def run_experiment(niter=100):
    K = 100
    results = []
    for _ in xrange(niter):
        mat = np.random.randn(K, K)
        max_eigenvalue = np.abs(eigvals(mat)).max()
        results.append(max_eigenvalue)
    return results

some_results = run_experiment()
print 'Largest one we saw: %s' % np.max(some_results)
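The %time and %timeit measurements above can also be reproduced outside IPython with the standard library's timeit module. This is a minimal sketch, not the book's method; the absolute timings printed will vary by machine:

```python
import timeit

# same test data as in the %time example above
setup = """
strings = ['foo', 'foobar', 'baz', 'qux',
           'python', 'Guido Van Rossum'] * 100000
"""

t1 = timeit.timeit("[x for x in strings if x.startswith('foo')]",
                   setup=setup, number=10)
t2 = timeit.timeit("[x for x in strings if x[:3] == 'foo']",
                   setup=setup, number=10)
print('startswith: %.4f s   slice: %.4f s' % (t1, t2))
```

timeit.timeit runs the setup once and the statement `number` times, returning total elapsed seconds, which is essentially what the %timeit heuristic automates for you.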
You can run this script through cProfile by running the following on the command line:

python -m cProfile cprof_example.py

If you try that, you'll find that the results are outputted sorted by function name. This makes it a bit hard to get an idea of where the most time is spent, so it's very common to specify a sort order using the -s flag:

$ python -m cProfile -s cumulative cprof_example.py
Largest one we saw: ...
    ... function calls (... primitive calls) in ... seconds

Ordered by: cumulative time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   ...      ...      ...      ...      ...  cprof_example.py:1(<module>)
   ...      ...      ...      ...      ...  linalg.py:...(eigvals)
   ...      ...      ...      ...      ...  {numpy.linalg.lapack_lite.dgeev}
   ...      ...      ...      ...      ...  {method 'randn'}
   ...      ...      ...      ...      ...  linalg.py:...(_assertfinite)

Only the first rows of the output are shown. It's easiest to read by scanning down the cumtime column to see how much total time was spent inside each function. Note that if a function calls some other function, the clock does not stop running. cProfile records the start and end time of each function call and uses that to produce the timing.

In addition to the above command-line usage, cProfile can also be used programmatically to profile arbitrary blocks of code without having to run a new process. IPython has a convenient interface to this capability using the %prun command and the -p option to %run. %prun takes the same "command line options" as cProfile but will profile an arbitrary Python statement instead of a whole .py file:

In [4]: %prun -l 7 -s cumulative run_experiment()
    ... function calls in ... seconds

Ordered by: cumulative time
List reduced from ... to 7 due to restriction <7>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   ...      ...      ...      ...      ...  <string>:1(<module>)
   ...      ...      ...      ...      ...  cprof_example.py:4(run_experiment)
   ...      ...      ...      ...      ...  linalg.py:...(eigvals)
   ...      ...      ...      ...      ...  {numpy.linalg.lapack_lite.dgeev}
   ...      ...      ...      ...      ...  {method 'randn'}
   ...      ...      ...      ...      ...  linalg.py:...(_assertfinite)
   ...      ...      ...      ...      ...  {method 'all' of 'numpy.ndarray' objects}

Similarly, calling %run -p -s cumulative cprof_example.py has the same effect as the command-line approach above, except you never have to leave IPython.

Profiling a Function Line-by-Line

In some cases the information you obtain from %prun (or another cProfile-based profile method) may not tell the whole story about a function's execution time, or it may be so complex that the results, aggregated by function name, are hard to interpret. For this case, there is a small library called line_profiler (obtainable via PyPI or one of the package management tools). It contains an IPython extension enabling a new magic function %lprun that computes a line-by-line profiling of one or more functions. You can enable this extension by modifying your IPython configuration (see the IPython documentation or the section on configuration later in this chapter) to include the following line:

# A list of dotted module names of IPython extensions to load.
c.TerminalIPythonApp.extensions = ['line_profiler']

line_profiler can be used programmatically (see the full documentation), but it is perhaps most powerful when used interactively in IPython. Suppose you had a module prof_mod with the following code doing some NumPy array operations:

from numpy.random import randn

def add_and_sum(x, y):
    added = x + y
    summed = added.sum(axis=1)
    return summed

def call_function():
    x = randn(1000, 1000)
    y = randn(1000, 1000)
    return add_and_sum(x, y)

If we wanted to understand the performance of the add_and_sum function, %prun gives us the following:

In [1]: %run prof_mod

In [2]: x = randn(3000, 3000)

In [3]: y = randn(3000, 3000)

In [4]: %prun add_and_sum(x, y)
    ... function calls in ... seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1      ...      ...      ...      ...  prof_mod.py:3(add_and_sum)
     1      ...      ...      ...      ...  {method 'sum' of 'numpy.ndarray' objects}
     1      ...      ...      ...      ...  <string>:1(<module>)
     1      ...      ...      ...      ...  {method 'disable' of '_lsprof.Profiler' objects}

This is not especially enlightening. With the line_profiler IPython extension activated, a new command %lprun is available. The only difference in usage is that we must instruct %lprun which function or functions we wish to profile. The general syntax is:

%lprun -f func1 -f func2 statement_to_profile

In this case, we want to profile add_and_sum, so we run:

In [5]: %lprun -f add_and_sum add_and_sum(x, y)
Timer unit: 1e-06 s

File: book_scripts/prof_mod.py
Function: add_and_sum at line 3
Total time: ... s

Line #      Hits       Time  Per Hit  % Time  Line Contents
===========================================================
     3                                        def add_and_sum(x, y):
     4         1        ...      ...     ...      added = x + y
     5         1        ...      ...     ...      summed = added.sum(axis=1)
     6         1        ...      ...     ...      return summed

You'll probably agree this is much easier to interpret. In this case we profiled the same function we used in the statement. Looking at the module code above, we could call call_function and profile that as well as add_and_sum, thus getting a full picture of the performance of the code:

In [6]: %lprun -f add_and_sum -f call_function call_function()
Timer unit: 1e-06 s

File: book_scripts/prof_mod.py
Function: add_and_sum at line 3
Total time: ... s

Line #      Hits       Time  Per Hit  % Time  Line Contents
===========================================================
     3                                        def add_and_sum(x, y):
     4         1        ...      ...     ...      added = x + y
     5         1        ...      ...     ...      summed = added.sum(axis=1)
     6         1        ...      ...     ...      return summed

File: book_scripts/prof_mod.py
Function: call_function at line 8
Total time: ... s

Line #      Hits       Time  Per Hit  % Time  Line Contents
===========================================================
     8                                        def call_function():
     9         1        ...      ...     ...      x = randn(1000, 1000)
    10         1        ...      ...     ...      y = randn(1000, 1000)
    11         1        ...      ...     ...      return add_and_sum(x, y)

As a general rule of thumb, I tend to prefer %prun (cProfile) for "macro" profiling and %lprun (line_profiler) for "micro" profiling. It's worthwhile to have a good understanding of both tools.
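The programmatic use of cProfile mentioned above can be sketched with the standard library alone. The toy function slow_sum below is made up for illustration; the pstats calls mirror what `-s cumulative` does on the command line:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # a deliberately loopy function so the profiler has something to measure
    total = 0
    for i in range(n):
        total += i * i
    return total

prof = cProfile.Profile()
prof.enable()
result = slow_sum(200000)
prof.disable()

# aggregate and sort the recorded timings by cumulative time, top 5 rows
buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```

Because Profile.enable/disable bracket an arbitrary block of code, no separate process is needed, which is exactly the capability %prun wraps interactively.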
The reason that you must explicitly specify the names of the functions you want to profile with %lprun is that the overhead of "tracing" the execution time of each line is significant. Tracing functions that are not of interest would potentially significantly alter the profile results.

IPython HTML Notebook

Starting in 2011, the IPython team, led by Brian Granger, built a web technology-based interactive computational document format that is commonly known as the IPython Notebook. It has grown into a wonderful tool for interactive computing and an ideal medium for reproducible research and teaching. I've used it while writing most of the examples in the book; I encourage you to make use of it, too.

It has a JSON-based .ipynb document format that enables easy sharing of code, output, and figures. Recently in Python conferences, a popular approach for demonstrations has been to use the notebook and post the .ipynb files online afterward for everyone to play with.

The notebook application runs as a lightweight server process on the command line. It can be started by running:

$ ipython notebook --pylab=inline
[NotebookApp] Using existing profile dir: u'/home/wesm/.config/ipython/profile_default'
[NotebookApp] Serving notebooks from /home/wesm/book_scripts
[NotebookApp] The IPython Notebook is running at: http://127.0.0.1:8888/
[NotebookApp] Use Control-C to stop this server and shut down all kernels.

On most platforms, your primary web browser will automatically open up to the notebook dashboard. In some cases you may have to navigate to the listed URL. From there, you can create a new notebook and start exploring.

Since you use the notebook inside a web browser, the server process can run anywhere. You can even securely connect to notebooks running on cloud service providers like Amazon EC2. As of this writing, a new project NotebookCloud (http://notebookcloud.appspot.com) makes it easy to launch notebooks on EC2.

Tips for Productive Code Development Using IPython

Writing code in a way that makes it easy to develop, debug, and ultimately use interactively may be a paradigm shift for many users. There are procedural details like code reloading that may require some adjustment, as well as coding style concerns. As such, most of this section is more of an art than a science and will require some experimentation on your part to determine a way to write your Python code that is effective and productive for you. Ultimately you want to structure your code in a way that makes it easy to use iteratively and to be able to explore the results of running a program or function as effortlessly as possible. I have found software designed with
IPython in mind to be easier to work with than code intended only to be run as a standalone command-line application. This becomes especially important when something goes wrong and you have to diagnose an error in code that you or someone else might have written months or years beforehand.
Reloading Module Dependencies

In Python, when you type import some_lib, the code in some_lib is executed and all the variables, functions, and imports defined within are stored in the newly created some_lib module namespace. The next time you type import some_lib, you will get a reference to the existing module namespace. The potential difficulty in interactive code development in IPython comes when you, say, %run a script that depends on some other module where you may have made changes. Suppose I had the following code in test_script.py:

import some_lib

x = 5
y = [1, 2, 3, 4]
result = some_lib.get_answer(x, y)

If you were to execute %run test_script.py then modify some_lib.py, the next time you execute %run test_script.py you will still get the old version of some_lib because of Python's "load-once" module system. This behavior differs from some other data analysis environments, like MATLAB, which automatically propagate code changes. To cope with this, you have a couple of options. The first way is to use Python's built-in reload function, altering test_script.py to look like the following:

import some_lib
reload(some_lib)

x = 5
y = [1, 2, 3, 4]
result = some_lib.get_answer(x, y)

This guarantees that you will get a fresh copy of some_lib every time you run test_script.py. Obviously, if the dependencies go deeper, it might be a bit tricky to be inserting usages of reload all over the place. For this problem, IPython has a special dreload function (not a magic function) for "deep" (recursive) reloading of modules. If I were to run import some_lib then type dreload(some_lib), it will attempt to reload some_lib as well as all of its dependencies. This will not work in all cases, unfortunately, but when it does it beats having to restart IPython.

Since a module or package may be imported in many different places in a particular program, Python caches a module's code the first time it is imported rather than executing the code in the module every time. Otherwise, modularity and good code organization could potentially cause inefficiency in an application.

Code Design Tips

There's no simple recipe for this, but here are some high-level principles I have found effective in my own work.
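In Python 3, the built-in reload function described above moved to importlib.reload. The "load-once" behavior and the effect of a reload can be demonstrated with a throwaway module written to a temp directory; the module name mod_demo is made up for this sketch:

```python
import importlib
import os
import sys
import tempfile

# write a throwaway module to disk (mod_demo is a hypothetical name)
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'mod_demo.py'), 'w') as f:
    f.write('ANSWER = 1\n')
sys.path.insert(0, tmpdir)

import mod_demo
first = mod_demo.ANSWER            # 1

# change the source on disk; a repeated import is a no-op ("load-once")
with open(os.path.join(tmpdir, 'mod_demo.py'), 'w') as f:
    f.write('ANSWER = 2  # edited\n')
import mod_demo
cached = mod_demo.ANSWER           # still 1

importlib.reload(mod_demo)         # force re-execution of the module's code
reloaded = mod_demo.ANSWER         # now 2
print(first, cached, reloaded)
```

As in the text, importlib.reload only refreshes the one module you pass it; transitively imported dependencies keep their cached versions, which is the gap dreload tries to fill.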
Keep relevant objects and data alive

It's not unusual to see a program written for the command line with a structure somewhat like the following trivial example:

from my_functions import g

def f(x, y):
    return g(x + y)

def main():
    x = 6
    y = 7.5
    result = x + y

if __name__ == '__main__':
    main()

Do you see what might be wrong with this program? If we were to run it in IPython, after it's done, none of the results or objects defined in the main function will be accessible in the IPython shell. A better way is to have whatever code is in main execute directly in the module's global namespace (or in the if __name__ == '__main__': block, if you want the module to also be importable). That way, when you %run the code, you'll be able to look at all of the variables defined in main. It's less meaningful in this simple example, but in this book we'll be looking at some complex data analysis problems involving large data sets that you will want to be able to play with in IPython.

Flat is better than nested

Deeply nested code makes me think about the many layers of an onion. When testing or debugging a function, how many layers of the onion must you peel back in order to reach the code of interest? The idea that "flat is better than nested" is part of the Zen of Python, and it applies generally to developing code for interactive use as well. Making functions and classes as decoupled and modular as possible makes them easier to test (if you are writing unit tests), debug, and use interactively.

Overcome a fear of longer files

If you come from a Java (or another such language) background, you may have been told to keep files short. In many languages, this is sound advice; long length is usually a bad "code smell", indicating refactoring or reorganization may be necessary. However, while developing code using IPython, working with 10 small but interconnected files (under, say, 100 lines each) is likely to cause you more headache in general than a single large file or two or three longer files. Fewer files means fewer modules to reload and less jumping between files while editing, too. I have found maintaining larger modules, each with high internal cohesion, to be much more useful and Pythonic. After iterating toward a solution, it sometimes will make sense to refactor larger files into smaller ones.
Obviously, I don't support taking this argument to the extreme of putting all of your code in a single monstrous file. Finding a sensible and intuitive module and package structure for a large codebase often takes a bit of work, but it is especially important to get right in teams. Each module should be internally cohesive, and it should be as obvious as possible where to find functions and classes responsible for each area of functionality.

Advanced IPython Features

Making Your Own Classes IPython-friendly

IPython makes every effort to display a console-friendly string representation of any object that you inspect. For many objects, like dicts, lists, and tuples, the built-in pprint module is used to do the nice formatting. In user-defined classes, however, you have to generate the desired string output yourself. Suppose we had the following simple class:

class Message:
    def __init__(self, msg):
        self.msg = msg

If you wrote this, you would be disappointed to discover that the default output for your class isn't very nice:

In [1]: x = Message('I have a secret')

In [2]: x
Out[2]: <__main__.Message instance at 0x...>

IPython takes the string returned by the __repr__ magic method (by doing output = repr(obj)) and prints that to the console. Thus, we can add a simple __repr__ method to the above class to get a more helpful output:

class Message:
    def __init__(self, msg):
        self.msg = msg

    def __repr__(self):
        return 'Message: %s' % self.msg

In [3]: x = Message('I have a secret')

In [4]: x
Out[4]: Message: I have a secret
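Since IPython simply prints repr(obj), the effect of the __repr__ method can be checked in plain Python as well; the class is repeated here so the snippet stands alone:

```python
class Message:
    # same class as in the text, with the hand-written __repr__
    def __init__(self, msg):
        self.msg = msg

    def __repr__(self):
        return 'Message: %s' % self.msg

x = Message('I have a secret')
print(repr(x))   # -> Message: I have a secret
```

The same string is what IPython echoes as the Out[] value, so anything you return from __repr__ is exactly what you will see at the prompt.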
Profiles and Configuration

Most aspects of the appearance (colors, prompts, spacing between lines, etc.) and behavior of the IPython shell are configurable through an extensive configuration system. Here are some of the things you can do via configuration:

- Change the color scheme
- Change how the input and output prompts look, or remove the blank line after Out and before the next In prompt
- Execute an arbitrary list of Python statements. These could be imports that you use all the time or anything else you want to happen each time you launch IPython
- Enable IPython extensions, like the %lprun magic in line_profiler
- Define your own magics or system aliases

All of these configuration options are specified in a special ipython_config.py file which will be found in the ~/.config/ipython/ directory on UNIX-like systems and %HOME%/.ipython/ directory on Windows. Where your home directory is depends on your system. Configuration is performed based on a particular profile. When you start IPython normally, you load up, by default, the default profile, stored in the profile_default directory. Thus, on my Linux OS the full path to my default IPython configuration file is:

/home/wesm/.config/ipython/profile_default/ipython_config.py

I'll spare you the gory details of what's in this file. Fortunately it has comments describing what each configuration option is for, so I will leave it to the reader to tinker and customize. One additional useful feature is that it's possible to have multiple profiles. Suppose you wanted to have an alternate IPython configuration tailored for a particular application or project. Creating a new profile is as simple as typing something like

ipython profile create secret_project

Once you've done this, edit the config files in the newly created profile_secret_project directory, then launch IPython like so:

$ ipython --profile=secret_project
Python ... |EPD ... (64-bit)| (default, Jul ...)
Type "copyright", "credits" or "license" for more information.

IPython ... -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: secret_project
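As an illustration, the kinds of settings described above would appear in ipython_config.py roughly as follows. This is a sketch only: the exact option names are version-dependent, so check the comments in your own generated config file rather than copying this verbatim.

```python
# illustrative fragment of an ipython_config.py (option names vary by version)
c = get_config()

# run these statements every time this profile launches
c.InteractiveShellApp.exec_lines = ['import numpy as np']

# load extensions, e.g. to get the %lprun magic from line_profiler
c.TerminalIPythonApp.extensions = ['line_profiler']
```

Each profile gets its own copy of this file, which is what makes the secret_project example above independent of your default settings.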
As always, the online IPython documentation is an excellent resource for more on profiles and configuration.

Credits

Parts of this chapter were derived from the wonderful documentation put together by the IPython development team. I can't thank them enough for all of their work building this amazing set of tools.
CHAPTER 4
NumPy Basics: Arrays and Vectorized Computation

NumPy, short for Numerical Python, is the fundamental package required for high performance scientific computing and data analysis. It is the foundation on which nearly all of the higher-level tools in this book are built. Here are some of the things it provides:

- ndarray, a fast and space-efficient multidimensional array providing vectorized arithmetic operations and sophisticated broadcasting capabilities
- Standard mathematical functions for fast operations on entire arrays of data without having to write loops
- Tools for reading / writing array data to disk and working with memory-mapped files
- Linear algebra, random number generation, and Fourier transform capabilities
- Tools for integrating code written in C, C++, and Fortran

The last bullet point is also one of the most important ones from an ecosystem point of view. Because NumPy provides an easy-to-use C API, it is very easy to pass data to external libraries written in a low-level language and also for external libraries to return data to Python as NumPy arrays. This feature has made Python a language of choice for wrapping legacy C/C++/Fortran codebases and giving them a dynamic and easy-to-use interface.

While NumPy by itself does not provide very much high-level data analytical functionality, having an understanding of NumPy arrays and array-oriented computing will help you use tools like pandas much more effectively. If you're new to Python and just looking to get your hands dirty working with data using pandas, feel free to give this chapter a skim. For more on advanced NumPy features like broadcasting, see Chapter 12.
For most data analysis applications, the main areas of functionality I'll focus on are:

- Fast vectorized array operations for data munging and cleaning, subsetting and filtering, transformation, and any other kinds of computations
- Common array algorithms like sorting, unique, and set operations
- Efficient descriptive statistics and aggregating/summarizing data
- Data alignment and relational data manipulations for merging and joining together heterogeneous data sets
- Expressing conditional logic as array expressions instead of loops with if-elif-else branches
- Group-wise data manipulations (aggregation, transformation, function application). Much more on this in Chapter 5

While NumPy provides the computational foundation for these operations, you will likely want to use pandas as your basis for most kinds of data analysis (especially for structured or tabular data) as it provides a rich, high-level interface making most common data tasks very concise and simple. pandas also provides some more domain-specific functionality like time series manipulation, which is not present in NumPy.

In this chapter and throughout the book, I use the standard NumPy convention of always using import numpy as np. You are, of course, welcome to put from numpy import * in your code to avoid having to write np., but I would caution you against making a habit of this.

The NumPy ndarray: A Multidimensional Array Object

One of the key features of NumPy is its N-dimensional array object, or ndarray, which is a fast, flexible container for large data sets in Python. Arrays enable you to perform mathematical operations on whole blocks of data using similar syntax to the equivalent operations between scalar elements:

In [1]: data
Out[1]:
array([[ ..., ..., ...],
       [ ..., ..., ...]])

In [2]: data * 10
Out[2]:
array([[ ..., ..., ...],
       [ ..., ..., ...]])

In [3]: data + data
Out[3]:
array([[ ..., ..., ...],
       [ ..., ..., ...]])

An ndarray is a generic multidimensional container for homogeneous data; that is, all of the elements must be the same type. Every array has a shape, a tuple indicating the size of each dimension, and a dtype, an object describing the data type of the array:

In [4]: data.shape
Out[4]: (2, 3)
In [5]: data.dtype
Out[5]: dtype('float64')

This chapter will introduce you to the basics of using NumPy arrays, and should be sufficient for following along with the rest of the book. While it's not necessary to have a deep understanding of NumPy for many data analytical applications, becoming proficient in array-oriented programming and thinking is a key step along the way to becoming a scientific Python guru.

Whenever you see "array", "NumPy array", or "ndarray" in the text, with few exceptions they all refer to the same thing: the ndarray object.

Creating ndarrays

The easiest way to create an array is to use the array function. This accepts any sequence-like object (including other arrays) and produces a new NumPy array containing the passed data. For example, a list is a good candidate for conversion:

In [6]: data1 = [6, 7.5, 8, 0, 1]

In [7]: arr1 = np.array(data1)

In [8]: arr1
Out[8]: array([ 6. ,  7.5,  8. ,  0. ,  1. ])

Nested sequences, like a list of equal-length lists, will be converted into a multidimensional array:

In [9]: data2 = [[1, 2, 3, 4], [5, 6, 7, 8]]

In [10]: arr2 = np.array(data2)

In [11]: arr2
Out[11]:
array([[1, 2, 3, 4],
       [5, 6, 7, 8]])

In [12]: arr2.ndim
Out[12]: 2

In [13]: arr2.shape
Out[13]: (2, 4)

Unless explicitly specified (more on this later), np.array tries to infer a good data type for the array that it creates. The data type is stored in a special dtype object; for example, in the above two examples we have:

In [14]: arr1.dtype
Out[14]: dtype('float64')
In [15]: arr2.dtype
Out[15]: dtype('int64')

In addition to np.array, there are a number of other functions for creating new arrays. As examples, zeros and ones create arrays of 0's or 1's, respectively, with a given length or shape. empty creates an array without initializing its values to any particular value. To create a higher dimensional array with these methods, pass a tuple for the shape:

In [16]: np.zeros(10)
Out[16]: array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])

In [17]: np.zeros((3, 6))
Out[17]:
array([[ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.]])

In [18]: np.empty((2, 3, 2))
Out[18]:
array([[[ ..., ...],
        [ ..., ...],
        [ ..., ...]],

       [[ ..., ...],
        [ ..., ...],
        [ ..., ...]]])

It's not safe to assume that np.empty will return an array of all zeros. In many cases, as previously shown, it will return uninitialized garbage values.

arange is an array-valued version of the built-in Python range function:

In [19]: np.arange(15)
Out[19]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14])

See Table 4-1 for a short list of standard array creation functions. Since NumPy is focused on numerical computing, the data type, if not specified, will in many cases be float64 (floating point).

Table 4-1. Array creation functions

Function             Description
array                Convert input data (list, tuple, array, or other sequence type) to an ndarray either by
                     inferring a dtype or explicitly specifying a dtype. Copies the input data by default.
asarray              Convert input to ndarray, but do not copy if the input is already an ndarray.
arange               Like the built-in range but returns an ndarray instead of a list.
ones, ones_like      Produce an array of all 1's with the given shape and dtype. ones_like takes another
                     array and produces a ones array of the same shape and dtype.
zeros, zeros_like    Like ones and ones_like but producing arrays of 0's instead.
description emptyempty_like create new arrays by allocating new memorybut do not populate with any values like ones and zeros eyeidentity create square identity matrix ( ' on the diagonal and ' elsewheredata types for ndarrays the data type or dtype is special object containing the information the ndarray needs to interpret chunk of memory as particular type of datain [ ]arr np array([ ]dtype=np float in [ ]arr np array([ ]dtype=np int in [ ]arr dtype out[ ]dtype('float 'in [ ]arr dtype out[ ]dtype('int 'dtypes are part of what make numpy so powerful and flexible in most cases they map directly onto an underlying machine representationwhich makes it easy to read and write binary streams of data to disk and also to connect to code written in low-level language like or fortran the numerical dtypes are named the same waya type namelike float or intfollowed by number indicating the number of bits per element standard double-precision floating point value (what' used under the hood in python' float objecttakes up bytes or bits thusthis type is known in numpy as float see table - for full listing of numpy' supported data types don' worry about memorizing the numpy dtypesespecially if you're new user it' often only necessary to care about the general kind of data you're dealing withwhether floating pointcomplexintegerbooleanstringor general python object when you need more control over how data are stored in memory and on diskespecially large data setsit is good to know that you have control over the storage type table - numpy data types type type code description int uint signed and unsigned -bit ( byteinteger types int uint signed and unsigned -bit integer types int uint signed and unsigned -bit integer types int uint signed and unsigned -bit integer types float half-precision floating point float or standard single-precision floating point compatible with float float float or standard double-precision floating point compatible with double and python float object the 
type code description float or extended-precision floating point complex complex complex complex numbers represented by two or floatsrespectively bool boolean type storing true and false values object python object type string_ fixed-length string type ( byte per characterfor exampleto create string dtype with length use ' unicode_ fixed-length unicode type (number of bytes platform specificsame specification semantics as string_ ( ' 'you can explicitly convert or cast an array from one dtype to another using ndarray' astype methodin [ ]arr np array([ ]in [ ]arr dtype out[ ]dtype('int 'in [ ]float_arr arr astype(np float in [ ]float_arr dtype out[ ]dtype('float 'in this exampleintegers were cast to floating point if cast some floating point numbers to be of integer dtypethe decimal part will be truncatedin [ ]arr np array([ - - ]in [ ]arr out[ ]array( - - ]in [ ]arr astype(np int out[ ]array( - - ]dtype=int should you have an array of strings representing numbersyou can use astype to convert them to numeric formin [ ]numeric_strings np array([' ''- '' ']dtype=np string_in [ ]numeric_strings astype(floatout[ ]array( - ]if casting were to fail for some reason (like string that cannot be converted to float ) typeerror will be raised see that was bit lazy and wrote float instead of np float numpy is smart enough to alias the python types to the equivalent dtypes you can also use another array' dtype attributein [ ]int_array np arange( numpy basicsarrays and vectorized computation www it-ebooks info
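The astype behaviors described above — truncation when casting float to int, parsing numeric strings, and reusing another array's dtype — can be checked with a small sketch (values are illustrative, not the book's originals):

```python
import numpy as np

arr = np.array([3.7, -1.2, -2.6, 0.5, 12.9, 10.1])
# casting float -> int truncates the decimal part (toward zero)
ints = arr.astype(np.int32)

# strings representing numbers can be cast to numeric form
numeric_strings = np.array(['1.25', '-9.6', '42'])
floats = numeric_strings.astype(float)

# another array's dtype attribute can be passed directly
int_array = np.arange(10)
as_float = int_array.astype(arr.dtype)
```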
in [ ]int_array astype(calibers dtypeout[ ]array( ]there are shorthand type code strings you can also use to refer to dtypein [ ]empty_uint np empty( dtype=' 'in [ ]empty_uint out[ ]array( ]dtype=uint calling astype always creates new array ( copy of the data)even if the new dtype is the same as the old dtype it' worth keeping in mind that floating point numberssuch as those in float and float arraysare only capable of approximating fractional quantities in complex computationsyou may accrue some floating point errormaking comparisons only valid up to certain number of decimal places operations between arrays and scalars arrays are important because they enable you to express batch operations on data without writing any for loops this is usually called vectorization any arithmetic operations between equal-size arrays applies the operation elementwisein [ ]arr np array([[ ][ ]]in [ ]arr out[ ]array([ ] ]]in [ ]arr arr out[ ]array([ ] ]]in [ ]arr arr out[ ]array([ ] ]]arithmetic operations with scalars are as you would expectpropagating the value to each elementin [ ] arr out[ ]array([ ] ]]in [ ]arr * out[ ]array([ ] ]]the numpy ndarraya multidimensional array object www it-ebooks info
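The elementwise (vectorized) arithmetic just described, in a self-contained sketch — equal-size arrays operate elementwise, and scalars propagate to every element (values illustrative):

```python
import numpy as np

arr = np.array([[1., 2., 3.], [4., 5., 6.]])

# operations between equal-size arrays are applied elementwise
prod = arr * arr
diff = arr - arr

# scalar operations propagate the value to each element
recip = 1 / arr
halved = arr * 0.5
```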
in more detail in having deep understanding of broadcasting is not necessary for most of this book basic indexing and slicing numpy array indexing is rich topicas there are many ways you may want to select subset of your data or individual elements one-dimensional arrays are simpleon the surface they act similarly to python listsin [ ]arr np arange( in [ ]arr out[ ]array([ ]in [ ]arr[ out[ ] in [ ]arr[ : out[ ]array([ ]in [ ]arr[ : in [ ]arr out[ ]array( ]as you can seeif you assign scalar value to sliceas in arr[ : the value is propagated (or broadcasted henceforthto the entire selection an important first distinction from lists is that array slices are views on the original array this means that the data is not copiedand any modifications to the view will be reflected in the source arrayin [ ]arr_slice arr[ : in [ ]arr_slice[ in [ ]arr out[ ]array( ] ]in [ ]arr_slice[: in [ ]arr out[ ]array( if you are new to numpyyou might be surprised by thisespecially if they have used other array programming languages which copy data more zealously as numpy has been designed with large data use cases in mindyou could imagine performance and memory problems if numpy insisted on copying data left and right numpy basicsarrays and vectorized computation www it-ebooks info
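The view semantics described above — slices share memory with the source array unless explicitly copied — can be demonstrated directly (illustrative values):

```python
import numpy as np

arr = np.arange(10)
arr_slice = arr[5:8]      # a view, not a copy

arr_slice[1] = 12345      # modifying the view...
assert arr[6] == 12345    # ...is reflected in the source array

arr_slice[:] = 64         # a "bare" slice assigns to the whole view
assert arr[5] == arr[7] == 64

copied = arr[5:8].copy()  # explicit copy when a view is not wanted
copied[:] = 0
assert arr[5] == 64       # the source is untouched
```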
need to explicitly copy the arrayfor example arr[ : copy(with higher dimensional arraysyou have many more options in two-dimensional arraythe elements at each index are no longer scalars but rather one-dimensional arraysin [ ]arr np array([[ ][ ][ ]]in [ ]arr [ out[ ]array([ ]thusindividual elements can be accessed recursively but that is bit too much workso you can pass comma-separated list of indices to select individual elements so these are equivalentin [ ]arr [ ][ out[ ] in [ ]arr [ out[ ] see figure - for an illustration of indexing on array figure - indexing elements in numpy array in multidimensional arraysif you omit later indicesthe returned object will be lowerdimensional ndarray consisting of all the data along the higher dimensions so in the array arr in [ ]arr np array([[[ ][ ]][[ ][ ]]]in [ ]arr out[ ]array([[ ]the numpy ndarraya multidimensional array object www it-ebooks info
[ ][ ]]]arr [ is arrayin [ ]arr [ out[ ]array([[ ][ ]]both scalar values and arrays can be assigned to arr [ ]in [ ]old_values arr [ copy(in [ ]arr [ in [ ]arr out[ ]array([[[ ][ ]][ ][ ]]]in [ ]arr [ old_values in [ ]arr out[ ]array([[ ] ]][ ][ ]]]similarlyarr [ gives you all of the values whose indices start with ( )forming -dimensional arrayin [ ]arr [ out[ ]array([ ]note that in all of these cases where subsections of the array have been selectedthe returned arrays are views indexing with slices like one-dimensional objects such as python listsndarrays can be sliced using the familiar syntaxin [ ]arr[ : out[ ]array( ]higher dimensional objects give you more options as you can slice one or more axes and also mix integers consider the array abovearr slicing this array is bit differentin [ ]arr out[ ]in [ ]arr [: out[ ] numpy basicsarrays and vectorized computation www it-ebooks info
[ ][ ]]array([[ ][ ]]as you can seeit has sliced along axis the first axis slicethereforeselects range of elements along an axis you can pass multiple slices just like you can pass multiple indexesin [ ]arr [: :out[ ]array([[ ][ ]]when slicing like thisyou always obtain array views of the same number of dimensions by mixing integer indexes and slicesyou get lower dimensional slicesin [ ]arr [ : out[ ]array([ ]in [ ]arr [ : out[ ]array([ ]see figure - for an illustration note that colon by itself means to take the entire axisso you can slice only higher dimensional axes by doingin [ ]arr [:: out[ ]array([[ ][ ][ ]]of courseassigning to slice expression assigns to the whole selectionin [ ]arr [: : boolean indexing let' consider an example where we have some data in an array and an array of names with duplicates ' going to use here the randn function in numpy random to generate some random normally distributed datain [ ]names np array(['bob''joe''will''bob''will''joe''joe']in [ ]data randn( in [ ]names out[ ]array(['bob''joe''will''bob''will''joe''joe']dtype='| 'in [ ]data out[ ]array([[- - ][- - ][- - ] - ][- - ]the numpy ndarraya multidimensional array object www it-ebooks info
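The two-dimensional slicing rules just discussed can be summarized in one sketch (a 3 x 3 array of illustrative values): pure slices preserve dimensionality, mixing an integer with a slice drops a dimension, and a bare colon takes an entire axis:

```python
import numpy as np

arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

first_two = arr2d[:2]       # slicing selects a range along axis 0
block = arr2d[:2, 1:]       # multiple slices, just like multiple indexes
row_part = arr2d[1, :2]     # integer + slice yields a lower-dimensional slice
first_col = arr2d[:, :1]    # a colon by itself means "the entire axis"
```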
] - - ]]figure - two-dimensional array slicing suppose each name corresponds to row in the data array if we wanted to select all the rows with corresponding name 'boblike arithmetic operationscomparisons (such as ==with arrays are also vectorized thuscomparing names with the string 'bobyields boolean arrayin [ ]names ='bobout[ ]array(truefalsefalsetruefalsefalsefalse]dtype=boolthis boolean array can be passed when indexing the arrayin [ ]data[names ='bob'out[ ]array([[- - - ] ]]the boolean array must be of the same length as the axis it' indexing you can even mix and match boolean arrays with slices or integers (or sequences of integersmore on this later)in [ ]data[names ='bob' :out[ ]array([[- ] numpy basicsarrays and vectorized computation www it-ebooks info
]]in [ ]data[names ='bob' out[ ]array( ]to select everything but 'bob'you can either use !or negate the condition using -in [ ]names !'bobout[ ]array([falsetruetruefalsetruetruetrue]dtype=boolin [ ]data[-(names ='bob')out[ ]array([[- - ][- - ][- - ] ] - - ]]selecting two of the three names to combine multiple boolean conditionsuse boolean arithmetic operators like (andand (or)in [ ]mask (names ='bob'(names ='will'in [ ]mask out[ ]array([truefalsetruetruetruefalsefalse]dtype=boolin [ ]data[maskout[ ]array([[- - [- - - [- - ] ] ] ]]selecting data from an array by boolean indexing always creates copy of the dataeven if the returned array is unchanged the python keywords and and or do not work with boolean arrays setting values with boolean arrays works in common-sense way to set all of the negative values in data to we need only doin [ ]data[data in [ ]data out[ ]array([ ] ] ] ] ] ] ]]the numpy ndarraya multidimensional array object www it-ebooks info
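A compact sketch of the boolean-indexing workflow above — vectorized comparison, combining masks with & and | (the Python keywords and/or do not work on arrays), and boolean assignment. The data here is freshly drawn random noise, so only shapes and signs are checked:

```python
import numpy as np

names = np.array(['bob', 'joe', 'will', 'bob', 'will', 'joe', 'joe'])
data = np.random.randn(7, 4)

# vectorized comparison yields a boolean mask
is_bob = names == 'bob'
rows = data[is_bob]                     # selects matching rows (always a copy)

# combine conditions with & and |, never `and`/`or`
mask = (names == 'bob') | (names == 'will')

# boolean assignment sets values elementwise
data[data < 0] = 0
```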
in [ ]data[names !'joe' in [ ]data out[ ]array([ ] ] ] ] ] ] ]]fancy indexing fancy indexing is term adopted by numpy to describe indexing using integer arrays suppose we had arrayin [ ]arr np empty(( )in [ ]for in range( )arr[ii in [ ]arr out[ ]array([ ] ] ] ] ] ] ] ]]to select out subset of the rows in particular orderyou can simply pass list or ndarray of integers specifying the desired orderin [ ]arr[[ ]out[ ]array([ ] ] ] ]]hopefully this code did what you expectedusing negative indices select rows from the endin [ ]arr[[- - - ]out[ ]array([ ] ] ]] numpy basicsarrays and vectorized computation www it-ebooks info
elements corresponding to each tuple of indicesmore on reshape in in [ ]arr np arange( reshape(( )in [ ]arr out[ ]array([ ] ] ][ ][ ][ ][ ][ ]]in [ ]arr[[ ][ ]out[ ]array( ]take moment to understand what just happenedthe elements ( )( )( )and ( were selected the behavior of fancy indexing in this case is bit different from what some users might have expected (myself included)which is the rectangular region formed by selecting subset of the matrix' rows and columns here is one way to get thatin [ ]arr[[ ]][:[ ]out[ ]array([ ][ ][ ] ]]another way is to use the np ix_ functionwhich converts two integer arrays to an indexer that selects the square regionin [ ]arr[np ix_([ ][ ])out[ ]array([ ][ ][ ] ]]keep in mind that fancy indexingunlike slicingalways copies the data into new array transposing arrays and swapping axes transposing is special form of reshaping which similarly returns view on the underlying data without copying anything arrays have the transpose method and also the special attributein [ ]arr np arange( reshape(( )in [ ]arr in [ ]arr the numpy ndarraya multidimensional array object www it-ebooks info
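The fancy-indexing distinctions above — row selection in a given order, paired coordinates, and the rectangular region via np.ix_ — in one runnable sketch:

```python
import numpy as np

arr = np.arange(32).reshape((8, 4))

# a list of integers selects whole rows in that order (a copy, not a view)
rows = arr[[4, 3, 0, 6]]

# two index arrays select the elements at paired coordinates
pairs = arr[[1, 5, 7, 2], [0, 3, 1, 2]]

# np.ix_ selects the rectangular region formed by the row/column subsets
region = arr[np.ix_([1, 5, 7, 2], [0, 3, 1, 2])]
```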
array([ ] ][ ]]out[ ]array([ ] ] ] ] ]]when doing matrix computationsyou will do this very oftenlike for example computing the inner matrix product xtx using np dotin [ ]arr np random randn( in [ ]np dot(arr tarrout[ ]array([ ] ] ]]for higher dimensional arraystranspose will accept tuple of axis numbers to permute the axes (for extra mind bending)in [ ]arr np arange( reshape(( )in [ ]arr out[ ]array([[ ] ]][ ][ ]]]in [ ]arr transpose(( )out[ ]array([[ ] ]][ ][ ]]]simple transposing with is just special case of swapping axes ndarray has the method swapaxes which takes pair of axis numbersin [ ]arr out[ ]array([[ ] ]][ ][ ]]]in [ ]arr swapaxes( out[ ]array([[ ] ] ] ]][ ] ][ ][ ]]]swapaxes similarly returns view on the data without making copy numpy basicsarrays and vectorized computation www it-ebooks info
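A sketch of transposing, the inner matrix product, and swapaxes. The symmetry check on the Gram matrix X^T X is an extra illustration not in the original text:

```python
import numpy as np

arr = np.arange(15).reshape((3, 5))
# .T is a transposed *view*; no data is copied
assert arr.T.shape == (5, 3)

# the inner matrix product X^T X via np.dot
x = np.random.randn(6, 3)
gram = np.dot(x.T, x)
assert gram.shape == (3, 3)
assert np.allclose(gram, gram.T)   # X^T X is always symmetric

# swapaxes takes a pair of axis numbers and also returns a view
arr3d = np.arange(16).reshape((2, 2, 4))
swapped = arr3d.swapaxes(1, 2)
assert swapped.shape == (2, 4, 2)
```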
universal functionor ufuncis function that performs elementwise operations on data in ndarrays you can think of them as fast vectorized wrappers for simple functions that take one or more scalar values and produce one or more scalar results many ufuncs are simple elementwise transformationslike sqrt or expin [ ]arr np arange( in [ ]np sqrt(arrout[ ]array( ]in [ ]np exp(arrout[ ]array( ]these are referred to as unary ufuncs otherssuch as add or maximumtake arrays (thusbinary ufuncsand return single array as the resultin [ ] randn( in [ ] randn( in [ ] out[ ]array( - ] - in [ ] out[ ]array( - - - ] - in [ ]np maximum(xyelement-wise maximum out[ ]array( - ] while not commona ufunc can return multiple arrays modf is one examplea vectorized version of the built-in python divmodit returns the fractional and integral parts of floating point arrayin [ ]arr randn( in [ ]np modf(arrout[ ](array([- - - array([- - - - ]) - ])universal functionsfast element-wise array functions www it-ebooks info
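The ufunc categories above — unary, binary, and the multi-output modf — in one runnable sketch (values illustrative):

```python
import numpy as np

arr = np.arange(10)
roots = np.sqrt(arr)           # unary ufunc: elementwise square root
exps = np.exp(arr)             # elementwise e**x

x = np.random.randn(8)
y = np.random.randn(8)
maxes = np.maximum(x, y)       # binary ufunc: elementwise maximum

# modf returns multiple arrays: the fractional and integral parts
frac, whole = np.modf(np.array([3.5, -2.25, 7.0]))
```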
table - unary ufuncs function description absfabs compute the absolute value element-wise for integerfloating pointor complex values use fabs as faster alternative for non-complex-valued data sqrt compute the square root of each element equivalent to arr * square compute the square of each element equivalent to arr * exp compute the exponent ex of each element loglog log log natural logarithm (base )log base log base and log( )respectively sign compute the sign of each element (positive) (zero)or - (negativeceil compute the ceiling of each elementi the smallest integer greater than or equal to each element floor compute the floor of each elementi the largest integer less than or equal to each element rint round elements to the nearest integerpreserving the dtype modf return fractional and integral parts of array as separate array isnan return boolean array indicating whether each value is nan (not numberisfiniteisinf return boolean array indicating whether each element is finite (non-infnon-nanor infiniterespectively coscoshsinsinhtantanh regular and hyperbolic trigonometric functions arccosarccosharcsinarcsinharctanarctanh inverse trigonometric functions logical_not compute truth value of not element-wise equivalent to -arr table - binary universal functions function description add add corresponding elements in arrays subtract subtract elements in second array from first array multiply multiply array elements dividefloor_divide divide or floor divide (truncating the remainderpower raise elements in first array to powers indicated in second array maximumfmax element-wise maximum fmax ignores nan minimumfmin element-wise minimum fmin ignores nan mod element-wise modulus (remainder of divisioncopysign copy sign of values in second argument to values in first argument numpy basicsarrays and vectorized computation www it-ebooks info
description greatergreater_equallessless_equalequalnot_equal perform element-wise comparisonyielding boolean array equivalent to infix operators logical_andlogical_orlogical_xor compute element-wise truth value of logical operation equivalent to infix operators >>=<<===!|data processing using arrays using numpy arrays enables you to express many kinds of data processing tasks as concise array expressions that might otherwise require writing loops this practice of replacing explicit loops with array expressions is commonly referred to as vectorization in generalvectorized array operations will often be one or two (or moreorders of magnitude faster than their pure python equivalentswith the biggest impact in any kind of numerical computations laterin will explain broadcastinga powerful method for vectorizing computations as simple examplesuppose we wished to evaluate the function sqrt( ^ ^ across regular grid of values the np meshgrid function takes two arrays and produces two matrices corresponding to all pairs of (xyin the two arraysin [ ]points np arange(- equally spaced points in [ ]xsys np meshgrid(pointspointsin [ ]ys out[ ]array([[- - - - - - ][- - - - - - ][- - - - - - ] ] ] ]]nowevaluating the function is simple matter of writing the same expression you would write with two pointsin [ ]import matplotlib pyplot as plt in [ ] np sqrt(xs * ys * in [ ] out[ ]array([ ] ] ] ] ] ]]data processing using arrays www it-ebooks info
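The meshgrid evaluation of sqrt(x^2 + y^2) can be reproduced without the plotting step. The grid spacing (1,000 points from -5 to 5 in steps of 0.01) is an assumption, since the original counts were lost in extraction:

```python
import numpy as np

# 1,000 equally spaced points in [-5, 5)
points = np.arange(-5, 5, 0.01)
xs, ys = np.meshgrid(points, points)

# evaluate the function exactly as you would for two scalar values
z = np.sqrt(xs ** 2 + ys ** 2)
```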
out[ ]in [ ]plt title("image plot of $\sqrt{ ^ ^ }for grid of values"out[ ]see figure - here used the matplotlib function imshow to create an image plot from array of function values figure - plot of function evaluated on grid expressing conditional logic as array operations the numpy where function is vectorized version of the ternary expression if condi tion else suppose we had boolean array and two arrays of valuesin [ ]xarr np array([ ]in [ ]yarr np array([ ]in [ ]cond np array([truefalsetruetruefalse]suppose we wanted to take value from xarr whenever the corresponding value in cond is true otherwise take the value from yarr list comprehension doing this might look likein [ ]result [( if else yfor xyc in zip(xarryarrcond)in [ ]result out[ ][ numpy basicsarrays and vectorized computation www it-ebooks info
the work is being done in pure pythonsecondlyit will not work with multidimensional arrays with np where you can write this very conciselyin [ ]result np where(condxarryarrin [ ]result out[ ]array( ]the second and third arguments to np where don' need to be arraysone or both of them can be scalars typical use of where in data analysis is to produce new array of values based on another array suppose you had matrix of randomly generated data and you wanted to replace all positive values with and all negative values with - this is very easy to do with np wherein [ ]arr randn( in [ ]arr out[ ]array([ [- - [- ] ] - ] - ]]in [ ]np where(arr - out[ ]array([ ][- - ][- - ] - ]]in [ ]np where(arr arrset only positive values to out[ ]array([ ][- - ][- - ] - ]]the arrays passed to where can be more than just equal sizes array or scalers with some cleverness you can use where to express more complicated logicconsider this example where have two boolean arrayscond and cond and wish to assign different value for each of the possible pairs of boolean valuesresult [for in range( )if cond [iand cond [ ]result append( elif cond [ ]result append( elif cond [ ]result append( elseresult append( data processing using arrays www it-ebooks info
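The np.where patterns above — array/array, scalar/scalar, and mixed scalar/array arguments — in a short sketch (values illustrative):

```python
import numpy as np

xarr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
yarr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
cond = np.array([True, False, True, True, False])

# vectorized ternary: take from xarr where cond is True, else from yarr
result = np.where(cond, xarr, yarr)

# the second and third arguments can be scalars
arr = np.random.randn(4, 4)
signs = np.where(arr > 0, 2, -2)
mixed = np.where(arr > 0, 2, arr)   # replace only the positive values
```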
where expressionnp where(cond cond np where(cond np where(cond ))in this particular examplewe can also take advantage of the fact that boolean values are treated as or in calculationsso this could alternatively be expressed (though bit more crypticallyas an arithmetic operationresult cond cond -(cond cond mathematical and statistical methods set of mathematical functions which compute statistics about an entire array or about the data along an axis are accessible as array methods aggregations (often called reductionslike summeanand standard deviation std can either be used by calling the array instance method or using the top level numpy functionin [ ]arr np random randn( normally-distributed data in [ ]arr mean(out[ ] in [ ]np mean(arrout[ ] in [ ]arr sum(out[ ] functions like mean and sum take an optional axis argument which computes the statistic over the given axisresulting in an array with one fewer dimensionin [ ]arr mean(axis= out[ ]array([- - ]in [ ]arr sum( out[ ]array([- - ]other methods like cumsum and cumprod do not aggregateinstead producing an array of the intermediate resultsin [ ]arr np array([[ ][ ][ ]]in [ ]arr cumsum( out[ ]array([ ] ] ]]in [ ]arr cumprod( out[ ]array([ ] ] ]]see table - for full listing we'll see many examples of these methods in action in later numpy basicsarrays and vectorized computation www it-ebooks info
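The aggregation behavior above — whole-array reductions, axis reductions producing one fewer dimension, and the non-aggregating cumsum — can be checked on a small 3 x 3 array:

```python
import numpy as np

arr = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

total = arr.sum()              # aggregate over the entire array
col_means = arr.mean(axis=0)   # statistic along axis 0: one fewer dimension
row_sums = arr.sum(1)          # the axis can also be passed positionally

running = arr.cumsum(0)        # cumsum keeps the intermediate results
```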
method description sum sum of all the elements in the array or along an axis zero-length arrays have sum mean arithmetic mean zero-length arrays have nan mean stdvar standard deviation and variancerespectivelywith optional degrees of freedom adjustment (default denominator nminmax minimum and maximum argminargmax indices of minimum and maximum elementsrespectively cumsum cumulative sum of elements starting from cumprod cumulative product of elements starting from methods for boolean arrays boolean values are coerced to (trueand (falsein the above methods thussum is often used as means of counting true values in boolean arrayin [ ]arr randn( in [ ](arr sum(number of positive values out[ ] there are two additional methodsany and alluseful especially for boolean arrays any tests whether one or more values in an array is truewhile all checks if every value is truein [ ]bools np array([falsefalsetruefalse]in [ ]bools any(out[ ]true in [ ]bools all(out[ ]false these methods also work with non-boolean arrayswhere non-zero elements evaluate to true sorting like python' built-in list typenumpy arrays can be sorted in-place using the sort methodin [ ]arr randn( in [ ]arr out[ ]array( - ] - - in [ ]arr sort(data processing using arrays www it-ebooks info
out[ ]array([- - - ] multidimensional arrays can have each section of values sorted in-place along an axis by passing the axis number to sortin [ ]arr randn( in [ ]arr out[ ]array([[- - - ] - - ][- - ][- - ] - ]]in [ ]arr sort( in [ ]arr out[ ]array([[- - - ][- - ][- - ][- - ][- ]]the top level method np sort returns sorted copy of an array instead of modifying the array in place quick-and-dirty way to compute the quantiles of an array is to sort it and select the value at particular rankin [ ]large_arr randn( in [ ]large_arr sort(in [ ]large_arr[int( len(large_arr)) quantile out[ ]- for more details on using numpy' sorting methodsand more advanced techniques like indirect sortssee several other kinds of data manipulations related to sorting (for examplesorting table of data by one or more columnsare also to be found in pandas unique and other set logic numpy has some basic set operations for one-dimensional ndarrays probably the most commonly used one is np uniquewhich returns the sorted unique values in an arrayin [ ]names np array(['bob''joe''will''bob''will''joe''joe']in [ ]np unique(namesout[ ] numpy basicsarrays and vectorized computation www it-ebooks info
dtype='| 'in [ ]ints np array([ ]in [ ]np unique(intsout[ ]array([ ]contrast np unique with the pure python alternativein [ ]sorted(set(names)out[ ]['bob''joe''will'another functionnp in dtests membership of the values in one array in anotherreturning boolean arrayin [ ]values np array([ ]in [ ]np in (values[ ]out[ ]array(truefalsefalsetruetruefalsetrue]dtype=boolsee table - for listing of set functions in numpy table - array set operations method description unique(xcompute the sortedunique elements in intersect (xycompute the sortedcommon elements in and union (xycompute the sorted union of elements in (xycompute boolean array indicating whether each element of is contained in setdiff (xyset differenceelements in that are not in setxor (xyset symmetric differenceselements that are in either of the arraysbut not both file input and output with arrays numpy is able to save and load data to and from disk either in text or binary format in later you will learn about tools in pandas for reading tabular data into memory storing arrays on disk in binary format np save and np load are the two workhorse functions for efficiently saving and loading array data on disk arrays are saved by default in an uncompressed raw binary format with file extension npy in [ ]arr np arange( in [ ]np save('some_array'arrfile input and output with arrays www it-ebooks info
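The set operations above in a runnable sketch. One hedge: modern NumPy spells the membership test np.isin; the in1d function used in the text is its older name with the same semantics:

```python
import numpy as np

names = np.array(['bob', 'joe', 'will', 'bob', 'will', 'joe', 'joe'])
uniq = np.unique(names)                  # sorted unique values

values = np.array([6, 0, 0, 3, 2, 5, 6])
# boolean membership of each value of `values` in {2, 3, 6}
member = np.isin(values, [2, 3, 6])
```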
on disk can then be loaded using np loadin [ ]np load('some_array npy'out[ ]array([ ]you save multiple arrays in zip archive using np savez and passing the arrays as keyword argumentsin [ ]np savez('array_archive npz' =arrb=arrwhen loading an npz fileyou get back dict-like object which loads the individual arrays lazilyin [ ]arch np load('array_archive npz'in [ ]arch[' 'out[ ]array([ ]saving and loading text files loading text from files is fairly standard task the landscape of file reading and writing functions in python can be bit confusing for newcomerso will focus mainly on the read_csv and read_table functions in pandas it will at times be useful to load data into vanilla numpy arrays using np loadtxt or the more specialized np genfromtxt these functions have many options allowing you to specify different delimitersconverter functions for certain columnsskipping rowsand other things take simple case of comma-separated file (csvlike thisin [ ]!cat array_ex txt , , , ,- ,- , - , ,- , - , ,- , , , ,- - , , , this can be loaded into array like soin [ ]arr np loadtxt('array_ex txt'delimiter=','in [ ]arr out[ ]array([ ] - - ][- - ][- - ] - ][- ]]np savetxt performs the inverse operationwriting an array to delimited text file genfromtxt is similar to loadtxt but is geared for structured arrays and missing data handlingsee for more on structured arrays numpy basicsarrays and vectorized computation www it-ebooks info
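The binary save/load round trip above, sketched using a temporary directory rather than the book's working directory so the example is self-contained:

```python
import os
import tempfile

import numpy as np

arr = np.arange(10)
tmpdir = tempfile.mkdtemp()

# save/load a single array in raw binary .npy format
path = os.path.join(tmpdir, 'some_array.npy')
np.save(path, arr)
loaded = np.load(path)

# save multiple arrays as keyword arguments in a .npz archive
zpath = os.path.join(tmpdir, 'array_archive.npz')
np.savez(zpath, a=arr, b=arr * 2)
arch = np.load(zpath)
b = arch['b']          # individual arrays are loaded lazily on access
```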
linear algebra linear algebralike matrix multiplicationdecompositionsdeterminantsand other square matrix mathis an important part of any array library unlike some languages like matlabmultiplying two two-dimensional arrays with is an element-wise product instead of matrix dot product as suchthere is function dotboth an array methodand function in the numpy namespacefor matrix multiplicationin [ ] np array([[ ][ ]]in [ ] np array([[ ][- ][ ]]in [ ] out[ ]array([ ] ]]in [ ] out[ ]array([ ]- ] ]]in [ ] dot(yequivalently np dot(xyout[ ]array([ ] ]] matrix product between array and suitably sized array results in arrayin [ ]np dot(xnp ones( )out[ ]array( ]numpy linalg has standard set of matrix decompositions and things like inverse and determinant these are implemented under the hood using the same industry-standard fortran libraries used in other languages like matlab and rsuch as like blaslapackor possibly (depending on your numpy buildthe intel mklin [ ]from numpy linalg import invqr in [ ] randn( in [ ]mat dot(xin [ ]inv(matout[ ]array([ - - - - ][- ][- ][- ][- ]]in [ ]mat dot(inv(mat)linear algebra www it-ebooks info
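A sketch of the dot/inv workflow above. The shift by 5 * I is an addition not in the text, used only to guarantee the random matrix is well-conditioned and invertible:

```python
import numpy as np
from numpy.linalg import inv

x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = np.array([[6., 23.], [-1., 7.], [8., 9.]])

# matrix product: (2, 3) dot (3, 2) -> (2, 2)
prod = x.dot(y)

# invert a symmetric positive definite matrix built from random data
mat = np.random.randn(5, 5)
mat = mat.T.dot(mat) + 5 * np.eye(5)
ident = mat.dot(inv(mat))       # numerically the identity matrix
```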
array([ - - - - - ] ] ] - ] ]]in [ ]qr qr(matin [ ] out[ ]array([- - - - - - ] - ] ]- ] ]]see table - for list of some of the most commonly-used linear algebra functions the scientific python community is hopeful that there may be matrix multiplication infix operator implemented somedayproviding syntactically nicer alternative to using np dot but for now this is the way table - commonly-used numpy linalg functions function description diag return the diagonal (or off-diagonalelements of square matrix as arrayor convert array into square matrix with zeros on the off-diagonal dot matrix multiplication trace compute the sum of the diagonal elements det compute the matrix determinant eig compute the eigenvalues and eigenvectors of square matrix inv compute the inverse of square matrix pinv compute the moore-penrose pseudo-inverse inverse of square matrix qr compute the qr decomposition svd compute the singular value decomposition (svdsolve solve the linear system ax for xwhere is square matrix lstsq compute the least-squares solution to xb random number generation the numpy random module supplements the built-in python random with functions for efficiently generating whole arrays of sample values from many kinds of probability numpy basicsarrays and vectorized computation www it-ebooks info
normal distribution using normalin [ ]samples np random normal(size=( )in [ ]samples out[ ]array([ ] - - - ][- - - ] - - ]]python' built-in random moduleby contrastonly samples one value at time as you can see from this benchmarknumpy random is well over an order of magnitude faster for generating very large samplesin [ ]from random import normalvariate in [ ] in [ ]%timeit samples [normalvariate( for in xrange( ) loopsbest of per loop in [ ]%timeit np random normal(size= loopsbest of ms per loop see table table - for partial list of functions available in numpy random 'll give some examples of leveraging these functionsability to generate large arrays of samples all at once in the next section table - partial list of numpy random functions function description seed seed the random number generator permutation return random permutation of sequenceor return permuted range shuffle randomly permute sequence in place rand draw samples from uniform distribution randint draw random integers from given low-to-high range randn draw samples from normal distribution with mean and standard deviation (matlab-like interfacebinomial draw samples binomial distribution normal draw samples from normal (gaussiandistribution beta draw samples from beta distribution chisquare draw samples from chi-square distribution gamma draw samples from gamma distribution uniform draw samples from uniform [ distribution random number generation www it-ebooks info
An illustrative application of utilizing array operations is in the simulation of random walks. Let's first consider a simple random walk starting at 0, with steps of 1 and -1 occurring with equal probability. A pure Python way to implement a single random walk with 1,000 steps using the built-in random module:

import random
position = 0
walk = [position]
steps = 1000
for i in xrange(steps):
    step = 1 if random.randint(0, 1) else -1
    position += step
    walk.append(position)

See Figure 4-4 ("A simple random walk") for an example plot of the first values on one of these random walks.

You might make the observation that walk is simply the cumulative sum of the random steps and could be evaluated as an array expression. Thus, I use the np.random module to draw 1,000 coin flips at once, set these to 1 and -1, and compute the cumulative sum:

In [ ]: nsteps = 1000

In [ ]: draws = np.random.randint(0, 2, size=nsteps)

In [ ]: steps = np.where(draws > 0, 1, -1)

In [ ]: walk = steps.cumsum()
the walk' trajectoryin [ ]walk min(out[ ]- in [ ]walk max(out[ ] more complicated statistic is the first crossing timethe step at which the random walk reaches particular value here we might want to know how long it took the random walk to get at least steps away from the origin in either direction np abs(walk> gives us boolean array indicating where the walk has reached or exceeded but we want the index of the first or - turns out this can be computed using argmaxwhich returns the first index of the maximum value in the boolean array (true is the maximum value)in [ ](np abs(walk> argmax(out[ ] note that using argmax here is not always efficient because it always makes full scan of the array in this special case once true is observed we know it to be the maximum value simulating many random walks at once if your goal was to simulate many random walkssay , of themyou can generate all of the random walks with minor modifications to the above code the numpy ran dom functions if passed -tuple will generate array of drawsand we can compute the cumulative sum across the rows to compute all , random walks in one shotin [ ]nwalks in [ ]nsteps in [ ]draws np random randint( size=(nwalksnsteps) or in [ ]steps np where(draws - in [ ]walks steps cumsum( in [ ]walks out[ ]array([ - - - - ] ] ] ] ]- - - - ]]nowwe can compute the maximum and minimum values obtained over all of the walksin [ ]walks max(out[ ] in [ ]walks min(out[ ]- examplerandom walks www it-ebooks info
slightly tricky because not all , of them reach we can check this using the any methodin [ ]hits (np abs(walks> any( in [ ]hits out[ ]array([falsetruefalsefalsetruefalse]dtype=boolin [ ]hits sum(number that hit or - out[ ] we can use this boolean array to select out the rows of walks that actually cross the absolute level and call argmax across axis to get the crossing timesin [ ]crossing_times (np abs(walks[hits ]> argmax( in [ ]crossing_times mean(out[ ] feel free to experiment with other distributions for the steps other than equal sized coin flips you need only use different random number generation functionlike normal to generate normally distributed steps with some mean and standard deviationin [ ]steps np random normal(loc= scale= size=(nwalksnsteps) numpy basicsarrays and vectorized computation www it-ebooks info
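The many-walks version above can be sketched as follows. The digits are partly elided in this copy; 5,000 walks of 1,000 steps and a crossing level of 30 are assumed from context, and the seed is my addition.

```python
import numpy as np

np.random.seed(12345)  # assumption: seeded only for reproducibility
nwalks, nsteps = 5000, 1000
draws = np.random.randint(0, 2, size=(nwalks, nsteps))
steps = np.where(draws > 0, 1, -1)
walks = steps.cumsum(axis=1)          # each row is one walk

# Which walks ever reach +/-30, and at what step they first do so.
hits30 = (np.abs(walks) >= 30).any(axis=1)
crossing_times = (np.abs(walks[hits30]) >= 30).argmax(axis=1)
```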
Getting Started with pandas

pandas will be the primary library of interest throughout much of the rest of the book. It contains high-level data structures and manipulation tools designed to make data analysis fast and easy in Python. pandas is built on top of NumPy and makes it easy to use in NumPy-centric applications.

As a bit of background, I started building pandas in early 2008 during my tenure at AQR, a quantitative investment management firm. At the time, I had a distinct set of requirements that were not well-addressed by any single tool at my disposal:

- Data structures with labeled axes supporting automatic or explicit data alignment. This prevents common errors resulting from misaligned data and working with differently-indexed data coming from different sources.
- Integrated time series functionality.
- The same data structures handle both time series data and non-time series data.
- Arithmetic operations and reductions (like summing across an axis) would pass on the metadata (axis labels).
- Flexible handling of missing data.
- Merge and other relational operations found in popular databases (SQL-based, for example).

I wanted to be able to do all of these things in one place, preferably in a language well-suited to general purpose software development. Python was a good candidate language for this, but at that time there was not an integrated set of data structures and tools providing this functionality.

Over the last four years, pandas has matured into a quite large library capable of solving a much broader set of data handling problems than I ever anticipated, but it has expanded in its scope without compromising the simplicity and ease-of-use that I desired from the very beginning. I hope that after reading this book, you will find it to be just as much of an indispensable tool as I do. Throughout the rest of the book, I use the following import conventions for pandas:
in [ ]import pandas as pd thuswhenever you see pd in codeit' referring to pandas series and dataframe are used so much that find it easier to import them into the local namespace introduction to pandas data structures to get started with pandasyou will need to get comfortable with its two workhorse data structuresseries and dataframe while they are not universal solution for every problemthey provide solideasy-to-use basis for most applications series series is one-dimensional array-like object containing an array of data (of any numpy data typeand an associated array of data labelscalled its index the simplest series is formed from only an array of datain [ ]obj series([ - ]in [ ]obj out[ ] - the string representation of series displayed interactively shows the index on the left and the values on the right since we did not specify an index for the dataa default one consisting of the integers through (where is the length of the datais created you can get the array representation and index object of the series via its values and index attributesrespectivelyin [ ]obj values out[ ]array( - ]in [ ]obj index out[ ]int index([ ]often it will be desirable to create series with an index identifying each data pointin [ ]obj series([ - ]index=[' '' '' '' ']in [ ]obj out[ ] - getting started with pandas www it-ebooks info
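The Series examples above can be condensed into a short runnable sketch (using the `pd.Series` spelling rather than a bare `Series` import):

```python
import pandas as pd

# A Series with the default integer index, and one with explicit labels.
obj = pd.Series([4, 7, -5, 3])
obj2 = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])
```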
out[ ]index([dbac]dtype=objectcompared with regular numpy arrayyou can use values in the index when selecting single values or set of valuesin [ ]obj [' 'out[ ]- in [ ]obj [' ' in [ ]obj [[' '' '' ']out[ ] - numpy array operationssuch as filtering with boolean arrayscalar multiplicationor applying math functionswill preserve the index-value linkin [ ]obj out[ ] - in [ ]obj [obj out[ ] in [ ]obj out[ ] - in [ ]np exp(obj out[ ] another way to think about series is as fixed-lengthordered dictas it is mapping of index values to data values it can be substituted into many functions that expect dictin [ ]'bin obj out[ ]true in [ ]'ein obj out[ ]false should you have data contained in python dictyou can create series from it by passing the dictin [ ]sdata {'ohio' 'texas' 'oregon' 'utah' in [ ]obj series(sdatain [ ]obj out[ ]ohio oregon introduction to pandas data structures www it-ebooks info
utah when only passing dictthe index in the resulting series will have the dict' keys in sorted order in [ ]states ['california''ohio''oregon''texas'in [ ]obj series(sdataindex=statesin [ ]obj out[ ]california nan ohio oregon texas in this case values found in sdata were placed in the appropriate locationsbut since no value for 'californiawas foundit appears as nan (not numberwhich is considered in pandas to mark missing or na values will use the terms "missingor "nato refer to missing data the isnull and notnull functions in pandas should be used to detect missing datain [ ]pd isnull(obj out[ ]california true ohio false oregon false texas false in [ ]pd notnull(obj out[ ]california false ohio true oregon true texas true series also has these as instance methodsin [ ]obj isnull(out[ ]california true ohio false oregon false texas false discuss working with missing data in more detail later in this critical series feature for many applications is that it automatically aligns differentlyindexed data in arithmetic operationsin [ ]obj out[ ]ohio oregon texas utah in [ ]obj out[ ]california nan ohio oregon texas in [ ]obj obj out[ ]california nan ohio oregon getting started with pandas www it-ebooks info
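The dict-construction, missing-data, and alignment examples above fit together as below. One caveat: in the pandas version the book describes, `Series(dict)` sorted the keys; in modern pandas the dict's insertion order is preserved, so the sketch avoids asserting on order.

```python
import pandas as pd

sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
states = ['California', 'Ohio', 'Oregon', 'Texas']

obj3 = pd.Series(sdata)
obj4 = pd.Series(sdata, index=states)  # 'California' has no value -> NaN
aligned = obj3 + obj4                  # arithmetic aligns on index labels
```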
utah nan data alignment features are addressed as separate topic both the series object itself and its index have name attributewhich integrates with other key areas of pandas functionalityin [ ]obj name 'populationin [ ]obj index name 'statein [ ]obj out[ ]state california nan ohio oregon texas namepopulation series' index can be altered in place by assignmentin [ ]obj index ['bob''steve''jeff''ryan'in [ ]obj out[ ]bob steve jeff - ryan dataframe dataframe represents tabularspreadsheet-like data structure containing an ordered collection of columnseach of which can be different value type (numericstringbooleanetc the dataframe has both row and column indexit can be thought of as dict of series (one for all sharing the same indexcompared with other such dataframe-like structures you may have used before (like ' data frame)roworiented and column-oriented operations in dataframe are treated roughly symmetrically under the hoodthe data is stored as one or more two-dimensional blocks rather than listdictor some other collection of one-dimensional arrays the exact details of dataframe' internals are far outside the scope of this book while dataframe stores the data internally in two-dimensional formatyou can easily represent much higher-dimensional data in tabular format using hierarchical indexinga subject of later section and key ingredient in many of the more advanced data-handling features in pandas introduction to pandas data structures www it-ebooks info
is from dict of equal-length lists or numpy arrays data {'state'['ohio''ohio''ohio''nevada''nevada']'year'[ ]'pop'[ ]frame dataframe(datathe resulting dataframe will have its index assigned automatically as with seriesand the columns are placed in sorted orderin [ ]frame out[ ]pop state ohio ohio ohio nevada nevada year if you specify sequence of columnsthe dataframe' columns will be exactly what you passin [ ]dataframe(datacolumns=['year''state''pop']out[ ]year state pop ohio ohio ohio nevada nevada as with seriesif you pass column that isn' contained in datait will appear with na values in the resultin [ ]frame dataframe(datacolumns=['year''state''pop''debt']index=['one''two''three''four''five']in [ ]frame out[ ]year state one ohio two ohio three ohio four nevada five nevada pop debt nan nan nan nan nan in [ ]frame columns out[ ]index([yearstatepopdebt]dtype=objecta column in dataframe can be retrieved as series either by dict-like notation or by attributein [ ]frame ['state'out[ ]one ohio in [ ]frame year out[ ]one getting started with pandas www it-ebooks info
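The dict-of-lists construction above, including the case of a requested column missing from the data, can be sketched as:

```python
import pandas as pd

data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year': [2000, 2001, 2002, 2001, 2002],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}

# Column order follows the columns argument exactly.
frame = pd.DataFrame(data, columns=['year', 'state', 'pop'])

# 'debt' is not in data, so it appears as an all-NaN column.
frame2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'],
                      index=['one', 'two', 'three', 'four', 'five'])
```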
ohio three ohio four nevada five nevada namestate two three four five nameyear note that the returned series have the same index as the dataframeand their name attribute has been appropriately set rows can also be retrieved by position or name by couple of methodssuch as the ix indexing field (much more on this later)in [ ]frame ix['three'out[ ]year state ohio pop debt nan namethree columns can be modified by assignment for examplethe empty 'debtcolumn could be assigned scalar value or an array of valuesin [ ]frame ['debt' in [ ]frame out[ ]year state one ohio two ohio three ohio four nevada five nevada pop debt in [ ]frame ['debt'np arange( in [ ]frame out[ ]year state one ohio two ohio three ohio four nevada five nevada pop debt when assigning lists or arrays to columnthe value' length must match the length of the dataframe if you assign seriesit will be instead conformed exactly to the dataframe' indexinserting missing values in any holesin [ ]val series([- - - ]index=['two''four''five']in [ ]frame ['debt'val in [ ]frame out[ ]year state pop debt introduction to pandas data structures www it-ebooks info
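The column-assignment behavior described above (scalars broadcast; a Series is conformed to the DataFrame's index, leaving NaN in the holes) can be sketched as below. Note that the book's `.ix` indexer was removed in later pandas releases; `.loc` is the modern equivalent used here.

```python
import pandas as pd
import numpy as np

frame2 = pd.DataFrame({'year': [2000, 2001, 2002, 2001, 2002],
                       'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
                       'pop': [1.5, 1.7, 3.6, 2.4, 2.9]},
                      index=['one', 'two', 'three', 'four', 'five'])

frame2['debt'] = 16.5                  # scalar broadcasts to every row
val = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])
frame2['debt'] = val                   # conformed to frame2's index; holes -> NaN

row = frame2.loc['three']              # row selection (the book's frame2.ix['three'])
```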
two three four five ohio ohio ohio nevada nevada nan - nan - - assigning column that doesn' exist will create new column the del keyword will delete columns as with dictin [ ]frame ['eastern'frame state ='ohioin [ ]frame out[ ]year state one ohio two ohio three ohio four nevada five nevada pop debt eastern nan true - true nan true - false - false in [ ]del frame ['eastern'in [ ]frame columns out[ ]index([yearstatepopdebt]dtype=objectthe column returned when indexing dataframe is view on the underlying datanot copy thusany in-place modifications to the series will be reflected in the dataframe the column can be explicitly copied using the series' copy method another common form of data is nested dict of dicts formatin [ ]pop {'nevada'{ }'ohio'{ }if passed to dataframeit will interpret the outer dict keys as the columns and the inner keys as the row indicesin [ ]frame dataframe(popin [ ]frame out[ ]nevada ohio nan of course you can always transpose the resultin [ ]frame out[ ] nevada nan ohio getting started with pandas www it-ebooks info
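The nested-dict construction and transpose described above can be sketched as:

```python
import pandas as pd
import numpy as np

pop = {'Nevada': {2001: 2.4, 2002: 2.9},
       'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}

# Outer dict keys become the columns, inner keys the row index;
# Nevada has no entry for 2000, so that cell is NaN.
frame3 = pd.DataFrame(pop)
transposed = frame3.T
```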
isn' true if an explicit index is specifiedin [ ]dataframe(popindex=[ ]out[ ]nevada ohio nan nan dicts of series are treated much in the same wayin [ ]pdata {'ohio'frame ['ohio'][:- ]'nevada'frame ['nevada'][: ]in [ ]dataframe(pdataout[ ]nevada ohio nan for complete list of things you can pass the dataframe constructorsee table - if dataframe' index and columns have their name attributes setthese will also be displayedin [ ]frame index name 'year'frame columns name 'statein [ ]frame out[ ]state nevada ohio year nan like seriesthe values attribute returns the data contained in the dataframe as ndarrayin [ ]frame values out[ ]array([nan ] ] ]]if the dataframe' columns are different dtypesthe dtype of the values array will be chosen to accomodate all of the columnsin [ ]frame values out[ ]array([[ ohio nan][ ohio - ][ ohio nan][ nevada - ][ nevada - ]]dtype=objectintroduction to pandas data structures www it-ebooks info
type notes ndarray matrix of datapassing optional row and column labels dict of arrayslistsor tuples each sequence becomes column in the dataframe all sequences must be the same length numpy structured/record array treated as the "dict of arrayscase dict of series each value becomes column indexes from each series are unioned together to form the result' row index if no explicit index is passed dict of dicts each inner dict becomes column keys are unioned to form the row index as in the "dict of seriescase list of dicts or series each item becomes row in the dataframe union of dict keys or series indexes become the dataframe' column labels list of lists or tuples treated as the " ndarraycase another dataframe the dataframe' indexes are used unless different ones are passed numpy maskedarray like the " ndarraycase except masked values become na/missing in the dataframe result index objects pandas' index objects are responsible for holding the axis labels and other metadata (like the axis name or namesany array or other sequence of labels used when constructing series or dataframe is internally converted to an indexin [ ]obj series(range( )index=[' '' '' ']in [ ]index obj index in [ ]index out[ ]index([abc]dtype=objectin [ ]index[ :out[ ]index([bc]dtype=objectindex objects are immutable and thus can' be modified by the userin [ ]index[ 'dexception traceback (most recent call lastin (---- index[ ' /users/wesm/code/pandas/pandas/core/index pyc in __setitem__(selfkeyvalue def __setitem__(selfkeyvalue) """disable the setting of values ""-- raise exception(str(self __class__object is immutable' def __getitem__(selfkey)exceptionobject is immutable getting started with pandas www it-ebooks info
structuresin [ ]index pd index(np arange( )in [ ]obj series([ - ]index=indexin [ ]obj index is index out[ ]true table - has list of built-in index classes in the library with some development effortindex can even be subclassed to implement specialized axis indexing functionality many users will not need to know much about index objectsbut they're nonetheless an important part of pandas' data model table - main index objects in pandas class description index the most general index objectrepresenting axis labels in numpy array of python objects int index specialized index for integer values multiindex "hierarchicalindex object representing multiple levels of indexing on single axis can be thought of as similar to an array of tuples datetimeindex stores nanosecond timestamps (represented using numpy' datetime dtypeperiodindex specialized index for period data (timespansin addition to being array-likean index also functions as fixed-size setin [ ]frame out[ ]state nevada ohio year nan in [ ]'ohioin frame columns out[ ]true in [ ] in frame index out[ ]false each index has number of methods and properties for set logic and answering other common questions about the data it contains these are summarized in table - introduction to pandas data structures www it-ebooks info
method description append concatenate with additional index objectsproducing new index diff compute set difference as an index intersection compute set intersection union compute set union isin compute boolean array indicating whether each value is contained in the passed collection delete compute new index with element at index deleted drop compute new index by deleting passed values insert compute new index by inserting element at index is_monotonic returns true if each element is greater than or equal to the previous element is_unique returns true if the index has no duplicate values unique compute the array of unique values in the index essential functionality in this sectioni'll walk you through the fundamental mechanics of interacting with the data contained in series or dataframe upcoming will delve more deeply into data analysis and manipulation topics using pandas this book is not intended to serve as exhaustive documentation for the pandas libraryi instead focus on the most important featuresleaving the less common (that ismore esotericthings for you to explore on your own reindexing critical method on pandas objects is reindexwhich means to create new object with the data conformed to new index consider simple example from abovein [ ]obj series([ - ]index=[' '' '' '' ']in [ ]obj out[ ] - calling reindex on this series rearranges the data according to the new indexintroducing missing values if any index values were not already presentin [ ]obj obj reindex([' '' '' '' '' ']in [ ]obj out[ ] - getting started with pandas www it-ebooks info
nan in [ ]obj reindex([' '' '' '' '' ']fill_value= out[ ] - for ordered data like time seriesit may be desirable to do some interpolation or filling of values when reindexing the method option allows us to do thisusing method such as ffill which forward fills the valuesin [ ]obj series(['blue''purple''yellow']index=[ ]in [ ]obj reindex(range( )method='ffill'out[ ] blue blue purple purple yellow yellow table - lists available method options at this timeinterpolation more sophisticated than forwardand backfilling would need to be applied after the fact table - reindex method (interpolationoptions argument description ffill or pad fill (or carryvalues forward bfill or backfill fill (or carryvalues backward with dataframereindex can alter either the (rowindexcolumnsor both when passed just sequencethe rows are reindexed in the resultin [ ]frame dataframe(np arange( reshape(( ))index=[' '' '' ']columns=['ohio''texas''california']in [ ]frame out[ ]ohio texas california in [ ]frame frame reindex([' '' '' '' ']in [ ]frame out[ ]essential functionality www it-ebooks info
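The reindexing variants above (plain, with `fill_value`, and with forward-filling) can be sketched as:

```python
import pandas as pd

obj = pd.Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c'])
obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'])                 # 'e' -> NaN
obj3 = obj.reindex(['a', 'b', 'c', 'd', 'e'], fill_value=0)   # 'e' -> 0

# method='ffill' carries the last valid observation forward.
ts = pd.Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
filled = ts.reindex(range(6), method='ffill')
```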
ohio nan texas nan california nan the columns can be reindexed using the columns keywordin [ ]states ['texas''utah''california'in [ ]frame reindex(columns=statesout[ ]texas utah california nan nan nan both can be reindexed in one shotthough interpolation will only apply row-wise (axis )in [ ]frame reindex(index=[' '' '' '' ']method='ffill'columns=statesout[ ]texas utah california nan nan nan nan as you'll see soonreindexing can be done more succinctly by label-indexing with ixin [ ]frame ix[[' '' '' '' ']statesout[ ]texas utah california nan nan nan nan nan nan table - reindex function arguments argument description index new sequence to use as index can be index instance or any other sequence-like python data structure an index will be used exactly as is without any copying method interpolation (fillmethodsee table - for options fill_value substitute value to use when introducing missing data by reindexing limit when forwardor backfillingmaximum size gap to fill level match simple index on level of multiindexotherwise select subset of copy do not copy underlying data if new index is equivalent to old index true by default ( always copy data getting started with pandas www it-ebooks info
dropping one or more entries from an axis is easy if you have an index array or list without those entries as that can require bit of munging and set logicthe drop method will return new object with the indicated value or values deleted from an axisin [ ]obj series(np arange( )index=[' '' '' '' '' ']in [ ]new_obj obj drop(' 'in [ ]new_obj out[ ] in [ ]obj drop([' '' ']out[ ] with dataframeindex values can be deleted from either axisin [ ]data dataframe(np arange( reshape(( ))index=['ohio''colorado''utah''new york']columns=['one''two''three''four']in [ ]data drop(['colorado''ohio']out[ ]one two three four utah new york in [ ]data drop('two'axis= out[ ]one three four ohio colorado utah new york in [ ]data drop(['two''four']axis= out[ ]one three ohio colorado utah new york indexingselectionand filtering series indexing (obj]works analogously to numpy array indexingexcept you can use the series' index values instead of only integers here are some examples thisin [ ]obj series(np arange( )index=[' '' '' '' ']in [ ]obj[' 'out[ ] in [ ]obj[ out[ ] in [ ]obj[ : out[ ]in [ ]obj[[' '' '' ']out[ ]essential functionality www it-ebooks info
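The drop examples above can be sketched as below; note that `drop` returns a new object and leaves the original untouched.

```python
import pandas as pd
import numpy as np

data = pd.DataFrame(np.arange(16).reshape((4, 4)),
                    index=['Ohio', 'Colorado', 'Utah', 'New York'],
                    columns=['one', 'two', 'three', 'four'])

rows_dropped = data.drop(['Colorado', 'Ohio'])   # drop from the row index
col_dropped = data.drop('two', axis=1)           # drop from the columns
```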
in [ ]obj[[ ]out[ ] in [ ]obj[obj out[ ] slicing with labels behaves differently than normal python slicing in that the endpoint is inclusivein [ ]obj[' ':' 'out[ ] setting using these methods works just as you would expectin [ ]obj[' ':' ' in [ ]obj out[ ] as you've seen aboveindexing into dataframe is for retrieving one or more columns either with single value or sequencein [ ]data dataframe(np arange( reshape(( ))index=['ohio''colorado''utah''new york']columns=['one''two''three''four']in [ ]data out[ ]one two ohio colorado utah new york three in [ ]data['two'out[ ]ohio colorado utah new york nametwo four in [ ]data[['three''one']out[ ]three one ohio colorado utah new york indexing like this has few special cases first selecting rows by slicing or boolean arrayin [ ]data[: out[ ]one two three four in [ ]data[data['three' out[ ]one two three four getting started with pandas www it-ebooks info
colorado colorado utah new york this might seem inconsistent to some readersbut this syntax arose out of practicality and nothing more another use case is in indexing with boolean dataframesuch as one produced by scalar comparisonin [ ]data out[ ]one two ohio true true colorado true false utah false false new york false false three true false false false four true false false false in [ ]data[data in [ ]data out[ ]one two ohio colorado utah new york three four this is intended to make dataframe syntactically more like an ndarray in this case for dataframe label-indexing on the rowsi introduce the special indexing field ix it enables you to select subset of the rows and columns from dataframe with numpylike notation plus axis labels as mentioned earlierthis is also less verbose way to do reindexingin [ ]data ix['colorado'['two''three']out[ ]two three namecolorado in [ ]data ix[['colorado''utah'][ ]out[ ]four one two colorado utah in [ ]data ix[ out[ ]one two three four nameutah in [ ]data ix[:'utah''two'out[ ]ohio colorado utah nametwo in [ ]data ix[data three : out[ ]essential functionality www it-ebooks info
colorado utah new york two three so there are many ways to select and rearrange the data contained in pandas object for dataframethere is short summary of many of them in table - you have number of additional options when working with hierarchical indexes as you'll later see when designing pandasi felt that having to type frame[:colto select column was too verbose (and error-prone)since column selection is one of the most common operations thus made the design trade-off to push all of the rich label-indexing into ix table - indexing options with dataframe type notes obj[valselect single column or sequence of columns from the dataframe special case conveniencesboolean array (filter rows)slice (slice rows)or boolean dataframe (set values based on some criterionobj ix[valselects single row of subset of rows from the dataframe obj ix[:valselects single column of subset of columns obj ix[val val select both rows and columns reindex method conform one or more axes to new indexes xs method select single row or column as series by label icolirow methods select single column or rowrespectivelyas series by integer location get_valueset_value methods select single value by row and column label arithmetic and data alignment one of the most important pandas features is the behavior of arithmetic between objects with different indexes when adding together objectsif any index pairs are not the samethe respective index in the result will be the union of the index pairs let' look at simple examplein [ ] series([ - ]index=[' '' '' '' ']in [ ] series([- - ]index=[' '' '' '' '' ']in [ ] out[ ] - in [ ] out[ ] - - getting started with pandas www it-ebooks info
adding these together yieldsin [ ] out[ ] nan nan nan the internal data alignment introduces na values in the indices that don' overlap missing values propagate in arithmetic computations in the case of dataframealignment is performed on both the rows and the columnsin [ ]df dataframe(np arange( reshape(( ))columns=list('bcd')index=['ohio''texas''colorado']in [ ]df dataframe(np arange( reshape(( ))columns=list('bde')index=['utah''ohio''texas''oregon']in [ ]df out[ ] ohio texas colorado in [ ]df out[ ] utah ohio texas oregon adding these together returns dataframe whose index and columns are the unions of the ones in each dataframein [ ]df df out[ ] colorado nan nan nan nan ohio nan nan oregon nan nan nan nan texas nan nan utah nan nan nan nan arithmetic methods with fill values in arithmetic operations between differently-indexed objectsyou might want to fill with special valuelike when an axis label is found in one object but not the otherin [ ]df dataframe(np arange( reshape(( ))columns=list('abcd')in [ ]df dataframe(np arange( reshape(( ))columns=list('abcde')in [ ]df out[ ] in [ ]df out[ ] essential functionality www it-ebooks info
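The Series alignment behavior just described can be sketched as:

```python
import pandas as pd

s1 = pd.Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd', 'e'])
s2 = pd.Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])

# The result's index is the union of the two indexes;
# labels present in only one operand produce NaN.
total = s1 + s2
```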
adding these together results in na values in the locations that don' overlapin [ ]df df out[ ] nan nan nan nan nan nan nan nan using the add method on df pass df and an argument to fill_valuein [ ]df add(df fill_value= out[ ] relatedlywhen reindexing series or dataframeyou can also specify different fill valuein [ ]df reindex(columns=df columnsfill_value= out[ ] table - flexible arithmetic methods method description add method for addition (+sub method for subtraction (-div method for division (/mul method for multiplication (*operations between dataframe and series as with numpy arraysarithmetic between dataframe and series is well-defined firstas motivating exampleconsider the difference between array and one of its rowsin [ ]arr np arange( reshape(( )in [ ]arr out[ ]array([ ] ] getting started with pandas www it-ebooks info
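The `add`-with-`fill_value` pattern above can be sketched as:

```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame(np.arange(12.).reshape((3, 4)), columns=list('abcd'))
df2 = pd.DataFrame(np.arange(20.).reshape((4, 5)), columns=list('abcde'))

naive = df1 + df2                    # NaN wherever labels don't overlap
filled = df1.add(df2, fill_value=0)  # treat missing cells as 0 instead
```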
]]in [ ]arr[ out[ ]array( in [ ]arr arr[ out[ ]array([ ] ] ]] ]this is referred to as broadcasting and is explained in more detail in operations between dataframe and series are similarin [ ]frame dataframe(np arange( reshape(( ))columns=list('bde')index=['utah''ohio''texas''oregon']in [ ]series frame ix[ in [ ]frame out[ ] utah ohio texas oregon in [ ]series out[ ] nameutah by defaultarithmetic between dataframe and series matches the index of the series on the dataframe' columnsbroadcasting down the rowsin [ ]frame series out[ ] utah ohio texas oregon if an index value is not found in either the dataframe' columns or the series' indexthe objects will be reindexed to form the unionin [ ]series series(range( )index=[' '' '' ']in [ ]frame series out[ ] utah nan nan ohio nan nan texas nan nan oregon nan nan if you want to instead broadcast over the columnsmatching on the rowsyou have to use one of the arithmetic methods for examplein [ ]series frame[' 'in [ ]frame in [ ]series essential functionality www it-ebooks info
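The DataFrame/Series broadcasting described above can be sketched as below; `.iloc[0]` replaces the book's `.ix[0]`, and `sub(..., axis=0)` is used to match on the row index instead of the columns.

```python
import pandas as pd
import numpy as np

frame = pd.DataFrame(np.arange(12.).reshape((4, 3)),
                     columns=list('bde'),
                     index=['Utah', 'Ohio', 'Texas', 'Oregon'])

series = frame.iloc[0]               # first row
down = frame - series                # match on columns, broadcast down the rows

series3 = frame['d']
across = frame.sub(series3, axis=0)  # match on rows, broadcast across the columns
```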
utah ohio texas oregon out[ ]utah ohio texas oregon named in [ ]frame sub(series axis= out[ ] utah - ohio - texas - oregon - the axis number that you pass is the axis to match on in this case we mean to match on the dataframe' row index and broadcast across function application and mapping numpy ufuncs (element-wise array methodswork fine with pandas objectsin [ ]frame dataframe(np random randn( )columns=list('bde')index=['utah''ohio''texas''oregon']in [ ]frame out[ ] utah - ohio - texas oregon - - in [ ]np abs(frameout[ ] utah ohio texas oregon another frequent operation is applying function on arrays to each column or row dataframe' apply method does exactly thisin [ ] lambda xx max( min(in [ ]frame apply(fout[ ] in [ ]frame apply(faxis= out[ ]utah ohio texas oregon many of the most common array statistics (like sum and meanare dataframe methodsso using apply is not necessary the function passed to apply need not return scalar valueit can also return series with multiple valuesin [ ]def ( )return series([ min() max()]index=['min''max']in [ ]frame apply( getting started with pandas www it-ebooks info
min - max - element-wise python functions can be usedtoo suppose you wanted to compute formatted string from each floating point value in frame you can do this with applymapin [ ]format lambda ' fx in [ ]frame applymap(formatout[ ] utah - - ohio - texas oregon - the reason for the name applymap is that series has map method for applying an element-wise functionin [ ]frame[' 'map(formatout[ ]utah - ohio texas oregon - namee sorting and ranking sorting data set by some criterion is another important built-in operation to sort lexicographically by row or column indexuse the sort_index methodwhich returns newsorted objectin [ ]obj series(range( )index=[' '' '' '' ']in [ ]obj sort_index(out[ ] with dataframeyou can sort by index on either axisin [ ]frame dataframe(np arange( reshape(( ))index=['three''one']columns=[' '' '' '' ']in [ ]frame sort_index(out[ ] one three in [ ]frame sort_index(axis= out[ ] three one essential functionality www it-ebooks info
tooin [ ]frame sort_index(axis= ascending=falseout[ ] three one to sort series by its valuesuse its order methodin [ ]obj series([ - ]in [ ]obj order(out[ ] - any missing values are sorted to the end of the series by defaultin [ ]obj series([ np nan np nan- ]in [ ]obj order(out[ ] - nan nan on dataframeyou may want to sort by the values in one or more columns to do sopass one or more column names to the by optionin [ ]frame dataframe({' '[ - ]' '[ ]}in [ ]frame out[ ] - in [ ]frame sort_index(by=' 'out[ ] - to sort by multiple columnspass list of namesin [ ]frame sort_index(by=[' '' ']out[ ] - getting started with pandas www it-ebooks info
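The value-sorting examples above use API names from the pandas version the book covers; `Series.order()` and `DataFrame.sort_index(by=...)` were later renamed, and `sort_values` is the modern equivalent used in this sketch.

```python
import pandas as pd

obj = pd.Series([4, 7, -3, 2])
by_value = obj.sort_values()             # the book's obj.order()

frame = pd.DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
by_b = frame.sort_values(by='b')         # the book's sort_index(by='b')
by_ab = frame.sort_values(by=['a', 'b']) # sort by multiple columns
```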
valid data points in an array it is similar to the indirect sort indices produced by numpy argsortexcept that ties are broken according to rule the rank methods for series and dataframe are the place to lookby default rank breaks ties by assigning each group the mean rankin [ ]obj series([ - ]in [ ]obj rank(out[ ] ranks can also be assigned according to the order they're observed in the datain [ ]obj rank(method='first'out[ ] naturallyyou can rank in descending ordertooin [ ]obj rank(ascending=falsemethod='max'out[ ] see table - for list of tie-breaking methods available dataframe can compute ranks over the rows or the columnsin [ ]frame dataframe({' '[ - ]' '[ ]' '[- - ]}in [ ]frame out[ ] - - - in [ ]frame rank(axis= out[ ] essential functionality www it-ebooks info
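The tie-breaking behavior of `rank` described above can be sketched as:

```python
import pandas as pd

obj = pd.Series([7, -5, 7, 4, 2, 0, 4])

avg_rank = obj.rank()                  # default: ties share the mean rank
first_rank = obj.rank(method='first')  # ties broken by order of appearance
desc_max = obj.rank(ascending=False, method='max')
```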
method description 'averagedefaultassign the average rank to each entry in the equal group 'minuse the minimum rank for the whole group 'maxuse the maximum rank for the whole group 'firstassign ranks in the order the values appear in the data axis indexes with duplicate values up until now all of the examples 've showed you have had unique axis labels (index valueswhile many pandas functions (like reindexrequire that the labels be uniqueit' not mandatory let' consider small series with duplicate indicesin [ ]obj series(range( )index=[' '' '' '' '' ']in [ ]obj out[ ] the index' is_unique property can tell you whether its values are unique or notin [ ]obj index is_unique out[ ]false data selection is one of the main things that behaves differently with duplicates indexing value with multiple entries returns series while single entries return scalar valuein [ ]obj[' 'out[ ] in [ ]obj[' 'out[ ] the same logic extends to indexing rows in dataframein [ ]df dataframe(np random randn( )index=[' '' '' '' ']in [ ]df out[ ] - - - - - in [ ]df ix[' 'out[ ] getting started with pandas www it-ebooks info
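The duplicate-label behavior described above can be sketched as follows: selecting a repeated label returns a Series, while a unique label returns a scalar.

```python
import pandas as pd

obj = pd.Series(range(5), index=['a', 'a', 'b', 'b', 'c'])

a_values = obj['a']   # 'a' appears twice -> a Series of both values
c_value = obj['c']    # 'c' is unique -> a scalar
```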
- summarizing and computing descriptive statistics pandas objects are equipped with set of common mathematical and statistical methods most of these fall into the category of reductions or summary statisticsmethods that extract single value (like the sum or meanfrom series or series of values from the rows or columns of dataframe compared with the equivalent methods of vanilla numpy arraysthey are all built from the ground up to exclude missing data consider small dataframein [ ]df dataframe([[ np nan][ - ][np nannp nan][ - ]]index=[' '' '' '' ']columns=['one''two']in [ ]df out[ ]one two nan - nan nan - calling dataframe' sum method returns series containing column sumsin [ ]df sum(out[ ]one two - passing axis= sums over the rows insteadin [ ]df sum(axis= out[ ] nan - na values are excluded unless the entire slice (row or column in this caseis na this can be disabled using the skipna optionin [ ]df mean(axis= skipna=falseout[ ] nan nan - see table - for list of common options for each reduction method options summarizing and computing descriptive statistics www it-ebooks info
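The NA-aware reductions above can be sketched as below. One caveat: in the pandas version the book covers, summing an all-NA row yielded NaN; modern pandas returns 0.0 for that case, so the sketch only asserts behavior that holds in both.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5],
                   [np.nan, np.nan], [0.75, -1.3]],
                  index=['a', 'b', 'c', 'd'], columns=['one', 'two'])

col_sums = df.sum()                        # NA values are skipped by default
row_means = df.mean(axis=1, skipna=False)  # any NA poisons the result
```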
method description axis axis to reduce over for dataframe' rows and for columns skipna exclude missing valuestrue by default level reduce grouped by level if the axis is hierarchically-indexed (multiindexsome methodslike idxmin and idxmaxreturn indirect statistics like the index value where the minimum or maximum values are attainedin [ ]df idxmax(out[ ]one two other methods are accumulationsin [ ]df cumsum(out[ ]one two nan - nan nan - another type of method is neither reduction nor an accumulation describe is one such exampleproducing multiple summary statistics in one shotin [ ]df describe(out[ ]one two count mean - std min - - - - max - on non-numeric datadescribe produces alternate summary statisticsin [ ]obj series([' '' '' '' ' in [ ]obj describe(out[ ]count unique top freq see table - for full list of summary statistics and related methods getting started with pandas www it-ebooks info