Example: PP4E\System\Streams\redirect.py

    import sys                                      # get built-in modules

    class Output:                                   # simulated output file
        def __init__(self):
            self.text = ''                          # empty string when created
        def write(self, string):                    # add a string of bytes
            self.text += string
        def writelines(self, lines):                # add each line in a list
            for line in lines: self.write(line)

    class Input:                                    # simulated input file
        def __init__(self, input=''):               # default argument
            self.text = input                       # save string when created
        def read(self, size=None):                  # optional argument
            if size == None:                        # read N bytes, or all
                res, self.text = self.text, ''
            else:
                res, self.text = self.text[:size], self.text[size:]
            return res
        def readline(self):
            eoln = self.text.find('\n')             # find offset of next eoln
            if eoln == -1:                          # slice off through eoln
                res, self.text = self.text, ''
            else:
                res, self.text = self.text[:eoln+1], self.text[eoln+1:]
            return res

    def redirect(function, pargs, kargs, input):    # redirect stdin/out
        savestreams = sys.stdin, sys.stdout         # run function object
        sys.stdin   = Input(input)                  # return stdout text
        sys.stdout  = Output()
        try:
            result = function(*pargs, **kargs)      # run function with args
            output = sys.stdout.text
        finally:
            sys.stdin, sys.stdout = savestreams     # restore if exc or not
        return (result, output)                     # return result if no exc

This module defines two classes that masquerade as real files: Output provides the write method interface (a.k.a. protocol) expected of output files but saves all output in an in-memory string as it is written; Input provides the interface expected of input files, but provides input on demand from an in-memory string passed in at object construction time.

The redirect function at the bottom of this file combines these two objects to run a single function with input and output redirected entirely to Python class objects. The passed-in function to run need not know or care that its print and input function calls and stdin and stdout method calls are talking to a class rather than to a real file, pipe, or user.

To demonstrate, import and run the interact function at the heart of the teststreams script that we've been running from the shell earlier in this chapter (to use the function this time, we import its module rather than running it as a program).
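For reference, the interact function at issue looks roughly like the following sketch (a reconstruction based on the behavior described here; the exact original appears in the teststreams example earlier in the book):

    def interact():
        print('Hello stream world')                # print goes to sys.stdout
        while True:
            try:
                reply = input('Enter a number>')   # input reads sys.stdin
            except EOFError:
                break                              # input raises an exception at eof
            else:
                num = int(reply)                   # input is given as a string
                print('%d squared is %d' % (num, num ** 2))
        print('Bye')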
Run directly, the function reads from the keyboard and writes to the screen, just as if it were run as a program without redirection:

    C:\...\PP4E\System\Streams> python
    >>> from teststreams import interact
    >>> interact()
    Hello stream world
    Enter a number>2
    2 squared is 4
    Enter a number>3
    3 squared is 9
    Enter a number^Z
    Bye

Now, let's run this function under the control of the redirection function in redirect.py and pass in some canned input text. In this mode, the interact function takes its input from the string we pass in ('2\n3\n4\n'--three lines with explicit end-of-line characters), and the result of running the function is a tuple with its return value plus a string containing all the text written to the standard output stream:

    >>> from redirect import redirect
    >>> (result, output) = redirect(interact, (), {}, '2\n3\n4\n')
    >>> print(result)
    None
    >>> output
    'Hello stream world\nEnter a number>2 squared is 4\nEnter a number>3 squared
    is 9\nEnter a number>4 squared is 16\nEnter a number>Bye\n'

The output is a single, long string containing the concatenation of all text written to standard output. To make this look better, we can pass it to print or split it up with the string object's splitlines method:

    >>> for line in output.splitlines(): print(line)
    ...
    Hello stream world
    Enter a number>2 squared is 4
    Enter a number>3 squared is 9
    Enter a number>4 squared is 16
    Enter a number>Bye

Better still, we can reuse the more.py module we wrote in the preceding chapter; it's less to type and remember, and it's already known to work well (the following, like all cross-directory imports in this book's examples, assumes that the directory containing the PP4E root is on your module search path--change your PYTHONPATH setting as needed):

    >>> from PP4E.System.more import more
    >>> more(output)
    Hello stream world
    Enter a number>2 squared is 4
    Enter a number>3 squared is 9
    Enter a number>4 squared is 16
    Enter a number>Bye

This is an artificial example, of course, but the techniques illustrated are widely applicable. For instance, it's straightforward to add a GUI interface to a program written to interact with a command-line user: simply intercept standard output with an object such as the Output class instance shown earlier and throw the text string up in a window. Similarly, standard input can be reset to an object that fetches text from a graphical interface (e.g., a popped-up dialog box). Because classes are plug-and-play compatible with real files, we can use them in any tool that expects a file. Watch for a GUI stream-redirection module named guiStreams later in this book that provides a concrete implementation of some of these ideas.

The io.StringIO and io.BytesIO Utility Classes

The prior section's technique of redirecting streams to objects proved so handy that now a standard library module automates the task for many use cases (though some use cases, such as GUIs, may still require more custom code). The standard library tool provides an object that maps a file object interface to and from in-memory strings. For example:

    >>> from io import StringIO
    >>> buff = StringIO()                  # save written text to a string
    >>> buff.write('spam\n')
    5
    >>> buff.write('eggs\n')
    5
    >>> buff.getvalue()
    'spam\neggs\n'

    >>> buff = StringIO('ham\nspam\n')     # provide input from a string
    >>> buff.readline()
    'ham\n'
    >>> buff.readline()
    'spam\n'
    >>> buff.readline()
    ''

As in the prior section, instances of StringIO objects can be assigned to sys.stdin and sys.stdout to redirect streams for input and print calls and can be passed to any code that was written to expect a real file object. Again, in Python, the object interface, not the concrete datatype, is the name of the game:

    >>> from io import StringIO
    >>> import sys
    >>> buff = StringIO()

    >>> temp = sys.stdout
    >>> sys.stdout = buff
    >>> print(42, 'spam', 3.141)           # or: print(..., file=buff)
    >>> sys.stdout = temp                  # restore original stream
    >>> buff.getvalue()
    '42 spam 3.141\n'

Note that there is also an io.BytesIO class with similar behavior, but which maps file operations to an in-memory bytes buffer, instead of a str string:

    >>> from io import BytesIO
    >>> stream = BytesIO()
    >>> stream.write(b'spam')
    4
    >>> stream.getvalue()
    b'spam'

    >>> stream = BytesIO(b'dpam')
    >>> stream.read()
    b'dpam'

Due to the sharp distinction that Python 3.X draws between text and binary data, this alternative may be better suited for scripts that deal with binary data. We'll learn more about the text-versus-binary issue in the next chapter, when we explore files.

Capturing the stderr Stream

We've been focusing on stdin and stdout redirection, but stderr can be similarly reset to files, pipes, and objects. Although some shells support this, it's also straightforward within a Python script. For instance, assigning sys.stderr to another instance of a class such as Output, or a StringIO object as in the preceding section's example, allows your script to intercept text written to standard error, too.

Python itself uses standard error for error message text (and the IDLE GUI interface intercepts it and colors it red by default). However, no higher-level tools for standard error do what print and input do for the output and input streams. If you wish to print to the error stream, you'll want to call sys.stderr.write() explicitly, or read the next section for a print call trick that makes this easier.

Redirecting standard errors from a shell command line is a bit more complex and less portable. On most Unix-like systems, we can usually capture stderr output by using shell-redirection syntax of the form command > output 2>&1. This may not work on some platforms, though, and can even vary per Unix shell; see your shell's manpages for more details.
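Within a script, the interception scheme is the same one we used for stdout; here's a minimal sketch using the StringIO object just shown (the error text is illustrative):

    import sys
    from io import StringIO

    saveit = sys.stderr                    # save the real stderr object
    sys.stderr = StringIO()                # reset stderr to a file-like object
    sys.stderr.write('Error! Error!\n')    # text your code or tools write here
    errors = sys.stderr.getvalue()         # fetch the captured text
    sys.stderr = saveit                    # restore the original stream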
Redirection Syntax in Print Calls

Because resetting the stream attributes to new objects was so popular, the Python print built-in is also extended to include an explicit file to which output is to be sent. A statement of this form:

    print(stuff, file=afile)       # afile is an object, not a string name

prints stuff to afile instead of to sys.stdout, for this single print only--there is no need to save and later return to the original output stream (as shown in the section on redirecting streams to objects). For example:

    import sys
    print('spam', file=sys.stderr)

will send text to the standard error stream object rather than sys.stdout for the duration of this single print call only. The next normal print statement (without file) prints to standard output as usual. Similarly, we can use either our custom class or the standard library's class as the output file with this hook:

    >>> from io import StringIO
    >>> buff = StringIO()
    >>> print(42, file=buff)
    >>> print('spam', file=buff)
    >>> print(buff.getvalue())
    42
    spam

    >>> from redirect import Output
    >>> buff = Output()
    >>> print(43, file=buff)
    >>> print('eggs', file=buff)
    >>> print(buff.text)
    43
    eggs

Other Redirection Options: os.popen and subprocess Revisited

Near the end of the preceding chapter, we took a first look at the built-in os.popen function and its subprocess.Popen relative, which provide a way to redirect another command's streams from within a Python program. As we saw, these tools can be used to run a shell command line (a string we would normally type at a DOS or csh prompt), but they also provide a Python file-like object connected to the command's output stream--reading the file object allows a script to read another program's output. I suggested that these tools may be used to tap into input streams as well.

Because of that, the os.popen and subprocess tools are another way to redirect streams of spawned programs and are close cousins to some of the techniques we just met. Their effect is much like the shell | command-line pipe syntax for redirecting streams to programs (in fact, their names mean "pipe open"), but they are run within a script and provide a file-like interface to piped streams. They are similar in spirit to the redirect function, but are based on running programs (not calling functions), and the command's streams are processed in the spawning script as files (not tied to class objects). These tools redirect the streams of a program that a script starts, instead of redirecting the streams of the script itself.
In fact, by passing in the desired mode flag, we can redirect either a spawned program's output or input streams to a file object in the calling script, and we can obtain the spawned program's exit status code from the close method (None means "no error" here). To illustrate, consider the following two scripts:

    C:\...\PP4E\System\Streams> type hello-out.py
    print('Hello shell world')

    C:\...\PP4E\System\Streams> type hello-in.py
    inp = input()
    open('hello-in.txt', 'w').write('Hello ' + inp + '\n')

These scripts can be run from a system shell window as usual:

    C:\...\PP4E\System\Streams> python hello-out.py
    Hello shell world

    C:\...\PP4E\System\Streams> python hello-in.py
    Brian

    C:\...\PP4E\System\Streams> type hello-in.txt
    Hello Brian

As we saw in the prior chapter, Python scripts can read output from other programs and scripts like these, too, using code like the following:

    C:\...\PP4E\System\Streams> python
    >>> import os
    >>> pipe = os.popen('python hello-out.py')    # 'r' is default--read stdout
    >>> pipe.read()
    'Hello shell world\n'
    >>> print(pipe.close())                       # exit status: None is good
    None

But Python scripts can also provide input to spawned programs' standard input streams--passing a "w" mode argument, instead of the default "r", connects the returned object to the spawned program's input stream. What we write on the spawning end shows up as input in the program started:

    >>> pipe = os.popen('python hello-in.py', 'w')    # 'w'--write to program stdin
    >>> pipe.write('Gumby\n')                         # '\n' at end is optional
    6
    >>> pipe.close()
    >>> open('hello-in.txt').read()                   # output sent to a file
    'Hello Gumby\n'

The popen call is also smart enough to run the command string as an independent process on platforms that support such a notion. It accepts an optional third argument that can be used to control buffering of written text, which we'll finesse here.
For even more control over the streams of spawned programs, we can employ the subprocess module we introduced in the preceding chapter. As we learned earlier, this module can emulate os.popen functionality, but it can also achieve feats such as bidirectional stream communication (accessing both a program's input and output) and tying the output of one program to the input of another.

For instance, this module provides multiple ways to spawn a program and get both its standard output text and exit status. Here are three common ways to leverage this module to start a program and redirect its output stream (recall from the preceding chapter that you may need to pass a shell=True argument to Popen and call to make this section's examples work on Unix-like platforms as they are coded here):

    C:\...\PP4E\System\Streams> python
    >>> from subprocess import Popen, PIPE, call
    >>> X = call('python hello-out.py')                   # convenience
    Hello shell world
    >>> X
    0

    >>> pipe = Popen('python hello-out.py', stdout=PIPE)
    >>> pipe.communicate()[0]                             # (stdout, stderr)
    b'Hello shell world\r\n'
    >>> pipe.returncode                                   # exit status
    0

    >>> pipe = Popen('python hello-out.py', stdout=PIPE)
    >>> pipe.stdout.read()
    b'Hello shell world\r\n'
    >>> pipe.wait()                                       # exit status
    0

The call in the first of these three techniques is just a convenience function (there are more of these, which you can look up in the Python library manual), and the communicate in the second is roughly a convenience for the third (it sends data to stdin, reads data from stdout until end-of-file, and waits for the process to end).

Redirecting and connecting to the spawned program's input stream is just as simple, though a bit more complex than the os.popen approach with 'w' file mode shown in the preceding section (as mentioned in the last chapter, os.popen is implemented with subprocess, and is thus itself just something of a convenience function today):

    >>> pipe = Popen('python hello-in.py', stdin=PIPE)
    >>> pipe.stdin.write(b'Pokey\n')
    6
    >>> pipe.stdin.close()
    >>> pipe.wait()
    0
    >>> open('hello-in.txt').read()                       # output sent to a file
    'Hello Pokey\n'
To demonstrate bidirectional communication, this module lets us reuse the simple writer and reader scripts we wrote earlier:

    C:\...\PP4E\System\Streams> type writer.py
    print("Help! Help! I'm being repressed!")
    print(42)

    C:\...\PP4E\System\Streams> type reader.py
    print('Got this: "%s"' % input())
    import sys
    data = sys.stdin.readline()[:-1]
    print('The meaning of life is', data, int(data) * 2)

Code like the following can both read from and write to the reader script--the pipe object has two file-like objects available as attached attributes, one connecting to the input stream, and one to the output (Python 2.X users might recognize these as equivalent to the tuple returned by the now-defunct os.popen2):

    >>> pipe = Popen('python reader.py', stdin=PIPE, stdout=PIPE)
    >>> pipe.stdin.write(b'Lumberjack\n')
    11
    >>> pipe.stdin.write(b'12\n')
    3
    >>> pipe.stdin.close()
    >>> output = pipe.stdout.read()
    >>> pipe.wait()
    0
    >>> output
    b'Got this: "Lumberjack"\r\nThe meaning of life is 12 24\r\n'

As we'll learn in a later chapter, we have to be cautious when talking back and forth to a program like this; buffered output streams can lead to deadlock if writes and reads are interleaved, and we may eventually need to consider tools like the Pexpect utility as a workaround (more on this later).

Finally, even more exotic stream control is possible--the following connects two programs, by piping the output of one Python script into another, first with shell syntax, and then with the subprocess module:

    C:\...\PP4E\System\Streams> python writer.py | python reader.py
    Got this: "Help! Help! I'm being repressed!"
    The meaning of life is 42 84

    C:\...\PP4E\System\Streams> python
    >>> from subprocess import Popen, PIPE
    >>> p1 = Popen('python writer.py', stdout=PIPE)
    >>> p2 = Popen('python reader.py', stdin=p1.stdout, stdout=PIPE)
    >>> output = p2.communicate()[0]
    >>> output
    b'Got this: "Help! Help! I\'m being repressed!"\r\nThe meaning of life is 42 84\r\n'
    >>> p2.returncode
    0
The same hookups can be made with os.popen, but the fact that its pipes are read or write (and not both) prevents us from catching the second script's output in our code:

    >>> import os
    >>> p1 = os.popen('python writer.py', 'r')
    >>> p2 = os.popen('python reader.py', 'w')
    >>> p2.write(p1.read())
    36
    >>> X = p2.close()
    Got this: "Help! Help! I'm being repressed!"
    The meaning of life is 42 84
    >>> print(X)
    None

From the broader perspective, the os.popen call and subprocess module are Python's portable equivalents of Unix-like shell syntax for redirecting the streams of spawned programs. The Python versions also work on Windows, though, and are the most platform-neutral way to launch another program from a Python script. The command-line strings you pass to them may vary per platform (e.g., a directory listing requires an ls on Unix but a dir on Windows), but the call itself works on all major Python platforms.

On Unix-like platforms, the combination of the calls os.fork, os.pipe, os.dup, and some os.exec variants can also be used to start a new independent program with streams connected to the parent program's streams. As such, it's yet another way to redirect streams and a low-level equivalent to tools such as os.popen (os.fork is available in Cygwin's Python on Windows). See the sketch at the end of this section for a preview of how these calls combine.

Since these are all more advanced parallel processing tools, though, we'll defer further details on this front until a later chapter's coverage of parallel system tools, especially its treatment of pipes and exit status codes. And we'll resurrect subprocess again later in the book, to code a regression tester that intercepts all three standard streams of spawned test scripts--inputs, outputs, and errors.

But first, the next chapter continues our survey of Python system interfaces by exploring the tools available for processing files and directories. Although we'll be shifting focus somewhat, we'll find that some of what we've learned here will already begin to come in handy as general system-related tools. Spawning shell commands, for instance, provides ways to inspect directories, and the file interface we will expand on in the next chapter is at the heart of the stream processing techniques we have studied here.
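Though full coverage awaits that later chapter, the following is a brief Unix-only sketch of the fork/pipe/dup combination just mentioned--an illustration under assumed conditions, not a complete tool (the command list is arbitrary):

    # Unix only: connect a spawned program's stdout to a pipe read by the parent
    import os, sys

    def spawn_reader(cmdargs):                    # e.g., ['python', 'hello-out.py']
        readfd, writefd = os.pipe()               # make a pipe: (read, write) fds
        pid = os.fork()                           # copy this process
        if pid == 0:                              # in child: route stdout to pipe
            os.close(readfd)
            os.dup2(writefd, sys.stdout.fileno())
            os.execvp(cmdargs[0], cmdargs)        # overlay with the new program
        else:                                     # in parent: read child's output
            os.close(writefd)
            return os.fdopen(readfd)

    pipe = spawn_reader(['python', 'hello-out.py'])
    print(pipe.read())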
Python Versus csh

If you are familiar with other common shell script languages, it might be useful to see how Python compares. Here is a simple script in a Unix shell language called csh that mails all the files in the current working directory with a suffix of .py (i.e., all Python source files) to a hopefully fictitious address:

    #!/bin/csh
    foreach x (*.py)
        echo $x
        mail eric@halfabee.com -s $x < $x
    end

An equivalent Python script looks similar:

    #!/usr/bin/python
    import os, glob
    for x in glob.glob('*.py'):
        print(x)
        os.system('mail eric@halfabee.com -s %s < %s' % (x, x))

but is slightly more verbose. Since Python, unlike csh, isn't meant just for shell scripts, system interfaces must be imported and called explicitly. And since Python isn't just a string-processing language, character strings must be enclosed in quotes, as in C.

Although this can add a few extra keystrokes in simple scripts like this, being a general-purpose language makes Python a better tool once we leave the realm of trivial programs. We could, for example, extend the preceding script to do things like transfer files by FTP, pop up a GUI message selector and status bar, fetch messages from an SQL database, and employ COM objects on Windows, all using standard Python tools.

Python scripts also tend to be more portable to other platforms than csh. For instance, if we used the Python SMTP interface module to send mail instead of relying on a Unix command-line mail tool, the script would run on any machine with Python and an Internet link (as we'll see later in the book, SMTP requires only sockets). And, like C, we don't need $ to evaluate variables; what else would you expect in a free language?
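For instance, the mail step might be coded portably with the standard library's smtplib and email modules, along the following lines (a hedged sketch: the server name and sender address are hypothetical placeholders you would replace):

    import glob, smtplib
    from email.message import Message

    SERVER = 'smtp.example.com'                      # hypothetical SMTP server
    FROM, TO = 'you@example.com', 'eric@halfabee.com'

    for name in glob.glob('*.py'):
        print(name)
        msg = Message()
        msg['Subject'], msg['From'], msg['To'] = name, FROM, TO
        msg.set_payload(open(name).read())           # file's text as message body
        server = smtplib.SMTP(SERVER)
        server.sendmail(FROM, [TO], msg.as_string())
        server.quit()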
File and Directory Tools

"Erase Your Hard Drive in Five Easy Steps!"

This chapter continues our look at system interfaces in Python by focusing on file and directory-related tools. As you'll see, it's easy to process files and directory trees with Python's built-in and standard library support. Because files are part of the core Python language, some of this material is a review of file basics covered in books like Learning Python, Fourth Edition, and we'll defer to such resources for more background details on some file-related concepts. For example, iteration, context managers, and the file object's support for Unicode encodings are demonstrated along the way, but these topics are not repeated in full here. This chapter's goal is to tell enough of the file story to get you started writing useful scripts.

File Tools

External files are at the heart of much of what we do with system utilities. For instance, a testing system may read its inputs from one file, store program results in another file, and check expected results by loading yet another file. Even user interface and Internet-oriented programs may load binary images and audio clips from files on the underlying computer. It's a core programming concept.

In Python, the built-in open function is the primary tool scripts use to access the files on the underlying computer system. Since this function is an inherent part of the Python language, you may already be familiar with its basic workings. When called, the open function returns a new file object that is connected to the external file; the file object has methods that transfer data to and from the file and perform a variety of file-related operations. The open function also provides a portable interface to the underlying filesystem--it works the same way on every platform on which Python runs.

Other file-related modules built into Python allow us to do things such as manipulate lower-level descriptor-based files (os); copy, remove, and move files and collections of files (os and shutil); store data and objects in files by key (dbm and shelve); and access
SQL databases (sqlite3 and third-party add-ons). The last two of these categories are related to database topics, addressed later in this book.

In this chapter, we'll take a brief tutorial look at the built-in file object and explore a handful of more advanced file-related topics. As usual, you should consult either Python's library manual or reference books such as Python Pocket Reference for further details and methods we don't have space to cover here. Remember, for quick interactive help, you can also run dir(file) on an open file object to see an attributes list that includes methods; help(file) for general help; and help(file.read) for help on a specific method such as read, though the file object implementation currently provides less information for help than the library manual and other resources.

The File Object Model in Python 3.X

Just like the string types we noted earlier, file support in Python 3.X is a bit richer than it was in the past. As we noted earlier, in Python 3.X str strings always represent Unicode text (ASCII or wider), and bytes and bytearray strings represent raw binary data. Python 3.X draws a similar and related distinction between files containing text and binary data:

- Text files contain Unicode text. In your script, text file content is always a str string--a sequence of characters (technically, Unicode "code points"). Text files perform the automatic line-end translations described in this chapter by default and automatically apply Unicode encodings to file content: they encode to and decode from raw binary bytes on transfers to and from the file, according to a provided or default encoding name. Encoding is trivial for ASCII text, but may be sophisticated in other cases.

- Binary files contain raw 8-bit bytes. In your script, binary file content is always a byte string, usually a bytes object--a sequence of small integers, which supports most str operations and displays as ASCII characters whenever possible. Binary files perform no translations of data when it is transferred to and from files: no line-end translations or Unicode encodings are performed.

In practice, text files are used for all truly text-related data, and binary files store items like packed binary data, images, audio files, executables, and so on. As a programmer, you distinguish between the two file types in the mode string argument you pass to open: adding a "b" (e.g., 'rb', 'wb') means the file contains binary data. For coding new file content, use normal strings for text (e.g., 'spam' or bytes.decode()) and byte strings for binary (e.g., b'spam' or str.encode()).

Unless your file scope is limited to ASCII text, the text/binary distinction can sometimes impact your code. Text files create and require str strings, and binary files use byte strings; because you cannot freely mix the two string types in expressions, you must choose file mode carefully. Many built-in tools we'll use in this book make the choice for us: the struct and pickle modules, for instance, deal in byte strings, and you'll meet this distinction when using system tools like pipe descriptors and sockets, because they transfer data as byte strings today (though their content can be decoded and encoded as Unicode text if needed).

Moreover, because text-mode files require that content be decodable per a Unicode encoding scheme, you must read undecodable file content in binary mode, as byte strings (or catch Unicode exceptions in try statements and skip the file altogether). This may include both truly binary files as well as text files that use encodings that are nondefault and unknown. As we'll see later in this chapter, because str strings are always Unicode in 3.X, it's sometimes also necessary to select byte string mode for the names of files given to directory tools such as os.listdir, glob.glob, and os.walk if they cannot be decoded (passing in byte strings essentially suppresses decoding).
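For instance, passing a byte string for the directory name suppresses decoding of the filenames returned (a quick illustration; the directory contents shown are hypothetical):

    >>> import os
    >>> os.listdir('.')                   # str argument: names decoded to str
    ['data.txt', 'spam.py']
    >>> os.listdir(b'.')                  # bytes argument: names left as raw bytes
    [b'data.txt', b'spam.py']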
In fact, we'll see examples where the Python 3.X distinction between str text and bytes binary pops up in tools beyond basic files throughout this book--in later chapters when we explore sockets; in file and directory searches, where we'll need to ignore Unicode errors; in client-side Internet protocol modules such as FTP and email, which run atop sockets and imply file modes and encoding requirements; and more.

But just as for string types, although we will see some of these concepts in action in this chapter, we're going to take much of this story as given here. File and string objects are core language material and are prerequisite to this text. As mentioned earlier, because they are addressed by a full chapter in the book Learning Python, Fourth Edition, I won't repeat their coverage in full in this book. If you find yourself confused by the Unicode and binary file and string concepts in the following sections, I encourage you to refer to that text or other resources for more background information in this domain.

Using Built-in File Objects

Despite the text/binary dichotomy in Python 3.X, files are still very straightforward to use. For most purposes, in fact, the open built-in function and its file objects are all you need to remember to process files in your scripts. The file object returned by open has methods for reading data (read, readline, readlines); writing data (write, writelines); freeing system resources (close); moving to arbitrary positions in the file (seek); forcing data in output buffers to be transferred to disk (flush); fetching the underlying file handle (fileno); and more. Since the built-in file object is so easy to use, though, let's jump right into a few interactive examples.

Output files

To make a new file, call open with two arguments: the external name of the file to be created and a mode string w (short for write). To store data on the file, call the file object's write method with a string containing the data to store, and then call the close method to close the file. File write calls return the number of characters or bytes written (which
we'll sometimes omit in this book to save space), and as we'll see, close calls are often optional, unless you need to open and read the file again during the same program or session:

    C:\temp> python
    >>> file = open('data.txt', 'w')          # open output file object: creates
    >>> file.write('Hello file world!\n')     # writes strings verbatim
    18
    >>> file.write('Bye file world.\n')       # returns number chars/bytes written
    16
    >>> file.close()                          # closed on gc and exit too

And that's it--you've just generated a brand-new text file on your computer, regardless of the computer on which you type this code:

    C:\temp> dir data.txt /B
    data.txt

    C:\temp> type data.txt
    Hello file world!
    Bye file world.

There is nothing unusual about the new file; here, I use the DOS dir and type commands to list and display the new file, but it shows up in a file explorer GUI, too.

Opening

In the open function call shown in the preceding example, the first argument can optionally specify a complete directory path as part of the filename string. If we pass just a simple filename without a path, the file will appear in Python's current working directory. That is, it shows up in the place where the code is run. Here, the directory C:\temp on my machine is implied by the bare filename data.txt, so this actually creates a file at C:\temp\data.txt. More accurately, the filename is relative to the current working directory if it does not include a complete absolute directory path. See "Current Working Directory" in the preceding chapter for a refresher on this topic.

Also note that when opening in w mode, Python either creates the external file if it does not yet exist or erases the file's current contents if it is already present on your machine (so be careful out there--you'll delete whatever was in the file before).

Writing

Notice that we added an explicit \n end-of-line character to lines written to the file; unlike the print built-in function, file object write methods write exactly what they are passed without adding any extra formatting. The string passed to write shows up character for character on the external file. In text files, data written may undergo line-end or Unicode translations which we'll describe ahead, but these are undone when the data is later read back.

Output files also sport a writelines method, which simply writes all of the strings in a list one at a time without adding any extra formatting. For example, here is a writelines equivalent to the two write calls shown earlier:

    file.writelines(['Hello file world!\n', 'Bye file world.\n'])

This call isn't as commonly used (and can be emulated with a simple for loop or other
iteration tool), but it is convenient in scripts that save output in a list to be written later.

Closing

The file close method used earlier finalizes file contents and frees up system resources. For instance, closing forces buffered output data to be flushed out to disk. Normally, files are automatically closed when the file object is garbage collected by the interpreter (that is, when it is no longer referenced). This includes all remaining open files when the Python session or program exits. Because of that, close calls are often optional. In fact, it's common to see file-processing code in Python in this idiom:

    open('somefile.txt', 'w').write("G'day Bruce\n")     # write to temporary object
    open('somefile.txt', 'r').read()                     # read from temporary object

Since both these expressions make a temporary file object, use it immediately, and do not save a reference to it, the file object is reclaimed right after data is transferred, and is automatically closed in the process. There is usually no need for such code to call the close method explicitly.

In some contexts, though, you may wish to explicitly close anyhow. For one, because the Jython implementation relies on Java's garbage collector, you can't always be as sure about when files will be reclaimed as you can in standard Python. If you run your Python code with Jython, you may need to close manually if many files are created in a short amount of time (e.g., in a loop), in order to avoid running out of file resources on operating systems where this matters.

For another, some IDEs, such as Python's standard IDLE GUI, may hold on to your file objects longer than you expect (in stack tracebacks of prior errors, for instance), and thus prevent them from being garbage collected as soon as you might expect. If you write to an output file in IDLE, be sure to explicitly close (or flush) your file if you need to reliably read it back during the same IDLE session. Otherwise, output buffers might not be flushed to disk and your file may be incomplete when read.

And while it seems very unlikely today, it's not impossible that this auto-close on reclaim feature could change in future Pythons. This is technically a feature of the file object's implementation, which may or may not be considered part of the language definition over time. For these reasons, manual close calls are not a bad idea in nontrivial programs, even if they are technically not required. Closing is a generally harmless but robust habit to form.

Ensuring file closure: exception handlers and context managers

Manual file close method calls are easy in straight-line code, but how do you ensure file closure when exceptions might kick your program beyond the point where the close call is coded? First of all, make sure you must--files close themselves when they are collected, and this will happen eventually, even when exceptions occur.

If closure is required, though, there are two basic options at your disposal: the try statement's
finally clause is the most general, since it allows you to provide general exit actions for any type of exceptions:

    myfile = open(filename, 'w')
    try:
        ...process myfile...
    finally:
        myfile.close()

In recent Python releases, though, the with statement provides a more concise alternative for some specific objects and exit actions, including closing files:

    with open(filename, 'w') as myfile:
        ...process myfile, auto-closed on statement exit...

This statement relies on the file object's context manager: code automatically run both on statement entry and on statement exit regardless of exception behavior. Because the file object's exit code closes the file automatically, this guarantees file closure whether an exception occurs during the statement or not.

The with statement is notably shorter (three lines) than the try/finally alternative, but it's also less general--with applies only to objects that support the context manager protocol, whereas try/finally allows arbitrary exit actions for arbitrary exception contexts. While some other object types have context managers, too (e.g., thread locks), with is limited in scope. In fact, if you want to remember just one exit actions option, try/finally is the most inclusive. Still, with yields less code for files that must be closed and can serve well in such specific roles. It can even save a line of code when no exceptions are expected (albeit at the expense of further nesting and indenting file processing logic):

    myfile = open(filename, 'w')          # traditional form
    ...process myfile...
    myfile.close()

    with open(filename, 'w') as myfile:   # context manager form
        ...process myfile...

In Python 3.1 and later, this statement can also specify multiple (a.k.a. nested) context managers--any number of context manager items may be separated by commas, and multiple items work the same as nested with statements. In general terms, this 3.1-and-later code:

    with A() as a, B() as b:
        ...statements...

runs the same as the following, which works in 3.0 and 3.1:

    with A() as a:
        with B() as b:
            ...statements...

For example, in the following, both files' exit
actions are automatically run to close the files, regardless of exception outcomes:

    with open('data') as fin, open('results', 'w') as fout:
        for line in fin:
            fout.write(transform(line))

Context manager-dependent code like this seems to have become more common in recent years, but this is likely at least in part because newcomers are accustomed to languages that require manual close calls in all cases. In most contexts there is no need to wrap all your Python file-processing code in with statements--the file object's auto-close-on-collection behavior often suffices, and manual close calls are enough for many other scripts. You should use the with or try options outlined here only if you must close, and only in the presence of potential exceptions. Since standard Python automatically closes files on collection, though, neither option is required in many (and perhaps most) scripts.

Input files

Reading data from external files is just as easy as writing, but there are more methods that let us load data in a variety of modes. Input text files are opened with either a mode flag of r (for "read") or no mode flag at all--it defaults to r if omitted, and it commonly is. Once opened, we can read the lines of a text file with the readlines method:

    C:\temp> python
    >>> file = open('data.txt')          # open input file object: 'r' default
    >>> lines = file.readlines()         # read into line string list
    >>> for line in lines:               # but use file line iterator! (ahead)
    ...     print(line, end='')          # lines have a '\n' at end
    ...
    Hello file world!
    Bye file world.

The readlines method loads the entire contents of the file into memory and gives it to our scripts as a list of line strings that we can step through in a loop. In fact, there are many ways to read an input file:

file.read()
    Returns a string containing all the characters (or bytes) stored in the file.

file.read(N)
    Returns a string containing the next N characters (or bytes) from the file.

file.readline()
    Reads through the next \n and returns a line string.

file.readlines()
    Reads the entire file and returns a list of line strings.
Let's run these method calls to read files, lines, and characters from a text file--the seek(0) call is used here before each test to rewind the file to its beginning (more on this call in a moment):

    >>> file.seek(0)                     # go back to the front of file
    >>> file.read()                      # read entire file into string
    'Hello file world!\nBye file world.\n'

    >>> file.seek(0)                     # read entire file into lines list
    >>> file.readlines()
    ['Hello file world!\n', 'Bye file world.\n']

    >>> file.seek(0)
    >>> file.readline()                  # read one line at a time
    'Hello file world!\n'
    >>> file.readline()
    'Bye file world.\n'
    >>> file.readline()                  # empty string at end-of-file
    ''

    >>> file.seek(0)                     # read N (or remaining) chars/bytes
    >>> file.read(1), file.read(8)       # empty string at end-of-file
    ('H', 'ello fil')

All of these input methods let us be specific about how much to fetch. Here are a few rules of thumb about which to choose:

- read() and readlines() load the entire file into memory all at once. That makes them handy for grabbing a file's contents with as little code as possible. It also makes them generally fast, but costly in terms of memory for huge files--loading a multigigabyte file into memory is not generally a good thing to do (and might not be possible at all on a given computer).

- On the other hand, because the readline() and read(N) calls fetch just part of the file (the next line or N-character-or-byte block), they are safer for potentially big files but a bit less convenient and sometimes slower. Both return an empty string when they reach end-of-file. If speed matters and your files aren't huge, read or readlines may be a generally better choice (see the block-by-block sketch at the end of this section).

- See also the discussion of the newer file iterators in the next section. As we'll see, iterators combine the convenience of readlines() with the space efficiency of readline() and are the preferred way to read text files by lines today.

The seek(0) call used repeatedly here means "go back to the start of the file." In our example, it is an alternative to reopening the file each time. In files, all read and write operations take place at the current position; files normally start at offset 0 when opened and advance as data is transferred. The seek call simply lets us move to a new position for the next transfer operation. More on this method later when we explore random access files.
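As a concrete instance of the read(N) advice in the rules of thumb above, here is the common pattern for processing a potentially large binary file in fixed-size blocks without loading it all into memory at once (a sketch; the filename and block size are arbitrary):

    file = open('data.bin', 'rb')
    while True:
        block = file.read(4096)          # fetch up to 4,096 bytes per request
        if not block:                    # empty at end-of-file
            break
        ...process block...
    file.close()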
Reading lines with file iterators

In older versions of Python, the traditional way to read a file line by line in a for loop was to read the file into a list that could be stepped through as usual:

    >>> file = open('data.txt')
    >>> for line in file.readlines():    # don't do this anymore!
    ...     print(line, end='')

If you've already studied the core language using a first book like Learning Python, you may already know that this coding pattern is actually more work than is needed today--both for you and your computer's memory. In recent Pythons, the file object includes an iterator which is smart enough to grab just one line per request in all iteration contexts, including for loops and list comprehensions. The practical benefit of this extension is that you no longer need to call readlines in a for loop to scan line by line--the iterator reads lines on request automatically:

    >>> file = open('data.txt')
    >>> for line in file:                # no need to call readlines
    ...     print(line, end='')          # iterator reads next line each time
    ...
    Hello file world!
    Bye file world.

Better still, you can open the file in the loop statement itself, as a temporary which will be automatically closed on garbage collection when the loop ends (that's normally the file's sole reference):

    >>> for line in open('data.txt'):    # even shorter: temporary file object
    ...     print(line, end='')          # auto-closed when garbage collected
    ...
    Hello file world!
    Bye file world.

Moreover, this file line-iterator form does not load the entire file into a lines list all at once, so it will be more space efficient for large text files. Because of that, this is the prescribed way to read line by line today. If you want to see what really happens inside the for loop, you can use the iterator manually; it's just a __next__ method (run by the next built-in function), which is similar to calling the readline method each time through, except that read methods return an empty string at end-of-file (EOF) and the iterator raises an exception to end the iteration:

    >>> file = open('data.txt')          # read methods: empty at EOF
    >>> file.readline()
    'Hello file world!\n'
    >>> file.readline()
    'Bye file world.\n'
    >>> file.readline()
    ''

    >>> file = open('data.txt')          # iterators: exception at EOF
    >>> file.__next__()                  # no need to call iter(file) first,
    'Hello file world!\n'                # since files are their own iterator
    >>> file.__next__()
    'Bye file world.\n'
    >>> file.__next__()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    StopIteration

Interestingly, iterators are automatically used in all iteration contexts, including the list constructor call, list comprehension expressions, map calls, and membership checks:

    >>> open('data.txt').readlines()                 # always read lines
    ['Hello file world!\n', 'Bye file world.\n']

    >>> list(open('data.txt'))                       # force line iteration
    ['Hello file world!\n', 'Bye file world.\n']

    >>> lines = [line.rstrip() for line in open('data.txt')]     # comprehension
    >>> lines
    ['Hello file world!', 'Bye file world.']

    >>> lines = [line.upper() for line in open('data.txt')]      # arbitrary actions
    >>> lines
    ['HELLO FILE WORLD!\n', 'BYE FILE WORLD.\n']

    >>> list(map(str.split, open('data.txt')))       # apply a function
    [['Hello', 'file', 'world!'], ['Bye', 'file', 'world.']]

    >>> line = 'Hello file world!\n'
    >>> line in open('data.txt')                     # line membership
    True

Iterators may seem somewhat implicit at first glance, but they're representative of the many ways that Python makes developers' lives easier over time.

Other open options

Besides the w and r (default) file open modes, most platforms support an a mode string, meaning "append." In this output mode, write methods add data to the end of the file, and the open call will not erase the current contents of the file:

    >>> file = open('data.txt', 'a')          # open in append mode: doesn't erase
    >>> file.write('The Life of Brian')       # added at end of existing data
    >>> file.close()
    >>> open('data.txt').read()               # open and read entire file
    'Hello file world!\nBye file world.\nThe Life of Brian'

In fact, although most files are opened using the sorts of calls we just ran, open actually supports additional arguments for more specific processing needs, the first three of which are the most commonly used--the filename, the open mode, and a buffering specification. All but the first of these are optional: if omitted, the open mode argument defaults to r, and the buffering policy is to enable full buffering. For special
needs, here are a few things you should know about these three open arguments:

Filename
    As mentioned earlier, filenames can include an explicit directory path to refer to files in arbitrary places on your computer; if they do not, they are taken to be names relative to the current working directory (described in the prior chapter). In general, most filename forms you can type in your system shell will work in an open call. For instance, a relative filename argument r'..\temp\spam.txt' on Windows means spam.txt in the temp subdirectory of the current working directory's parent--up one, and down to directory temp.

Open mode
    The open function accepts other modes, too, some of which we'll see at work later in this chapter: r+, w+, and a+ to open for reads and writes, and any mode string with a b to designate binary mode. For instance, mode r+ means both reads and writes are allowed on an existing file; w+ allows reads and writes but creates the file anew, erasing any prior content; rb and wb read and write data in binary mode without any translations; and wb+ and r+b both combine binary mode and input plus output. In general, the mode string defaults to r for read but can be w for write and a for append, and you may add a + for update, as well as a b or t for binary or text mode; order is largely irrelevant.

    As we'll see later in this chapter, the + modes are often used in conjunction with the file object's seek method to achieve random read/write access. Regardless of mode, file contents are always strings in Python programs--read methods return a string, and we pass a string to write methods. As also described later, though, the mode string implies which type of string is used: str for text mode, or bytes and other byte string types for binary mode.

Buffering policy
    The open call also takes an optional third buffering policy argument which lets you control buffering for the file--the way that data is queued up before being transferred, to boost performance. If passed, 0 means file operations are unbuffered (data is transferred immediately, but this is allowed in binary modes only), 1 means they are line buffered, and any other positive value means to use full buffering (which is the default, if no buffering argument is passed).

As usual, Python's library manual and reference texts have the full story on additional open arguments beyond these three. For instance, the open call supports additional arguments related to the end-of-line mapping behavior and the automatic Unicode encoding of content performed for text-mode files. Since we'll discuss both of these concepts in the next section, let's move ahead.
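Before we do, here is how the second and third open arguments look in code (a quick sketch; the filenames are arbitrary):

    file = open('log.txt', 'w', 1)       # line-buffered text output
    file = open('data.bin', 'wb', 0)     # unbuffered: binary mode only
    file = open('data.txt', 'r+')        # reads and writes on an existing file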
Binary and text files

All of the preceding examples process simple text files, but Python scripts can also open and process files containing binary data--JPEG images, audio clips, packed binary data produced by FORTRAN and C programs, encoded text, and anything else that can be stored in files as bytes. The primary difference in terms of your code is the mode argument passed to the built-in open function:

    >>> file = open('data.txt', 'wb')     # open binary output file
    >>> file = open('data.txt', 'rb')     # open binary input file

Once you've opened binary files in this way, you may read and write their contents using the same methods just illustrated: read, write, and so on. The readline and readlines methods as well as the file's line iterator still work here for text files opened in binary mode, but they don't make sense for truly binary data that isn't line oriented (end-of-line bytes are meaningless, if they appear at all).

In all cases, data transferred between files and your programs is represented as Python strings within scripts, even if it is binary data. For binary mode files, though, file content is represented as byte strings. Continuing with our text file from preceding examples:

    >>> open('data.txt').read()           # text mode: str
    'Hello file world!\nBye file world.\nThe Life of Brian'

    >>> open('data.txt', 'rb').read()     # binary mode: bytes
    b'Hello file world!\r\nBye file world.\r\nThe Life of Brian'

    >>> file = open('data.txt', 'rb')
    >>> for line in file:
    ...     print(line)
    ...
    b'Hello file world!\r\n'
    b'Bye file world.\r\n'
    b'The Life of Brian'

This occurs because Python 3.X treats text-mode files as Unicode, and automatically decodes content on input and encodes it on output. Binary mode files instead give us access to file content as raw byte strings, with no translation of content--they reflect exactly what is stored on the file. Because str strings are always Unicode text in 3.X, the special bytes string is required to represent binary data as a sequence of byte-size integers which may contain any 8-bit value. Because normal and byte strings have almost identical operation sets, many programs can largely take this on faith; but keep in mind that you really must open truly binary data in binary mode for input, because it will not generally be decodable as Unicode text.

Similarly, you must also supply byte strings for binary mode output--normal strings are not raw binary data, but are decoded Unicode characters (a.k.a. code points) which are encoded to binary on text-mode output:

    >>> open('data.bin', 'wb').write(b'Spam\n')
    5
    >>> open('data.bin', 'rb').read()
    b'Spam\n'
    >>> open('data.bin', 'wb').write('Spam\n')
    TypeError: must be bytes or buffer, not str

But notice that this file's line ends with just \n, instead of the Windows \r\n that showed up in the preceding example for the text file read in binary mode. Strictly speaking, binary mode disables Unicode encoding translation, but it also prevents the automatic end-of-line character translation performed by text-mode files by default. Before we can understand this fully, though, we need to study the two main ways in which text files differ from binary.

Unicode encodings for text files

As mentioned earlier, text-mode file objects always translate data according to a default or provided Unicode encoding type, when the data is transferred to and from the external file. Their content is encoded on files, but decoded in memory. Binary mode files don't perform any such translation, which is what we want for truly binary data. For instance, consider the following string, which embeds a Unicode character whose binary value is outside the normal 7-bit range of the ASCII encoding standard:

    >>> data = 'sp\xe4m'
    >>> data
    'späm'
    >>> 0xe4, bin(0xe4), chr(0xe4)
    (228, '0b11100100', 'ä')

It's possible to manually encode this string according to a variety of Unicode encoding types--its raw binary byte string form is different under some encodings:

    >>> data.encode('latin1')             # 8-bit characters: ascii plus extras
    b'sp\xe4m'

    >>> data.encode('utf8')               # 2 bytes for special characters only
    b'sp\xc3\xa4m'

    >>> data.encode('ascii')              # does not encode per ascii
    UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in
    position 2: ordinal not in range(128)

Python displays printable characters in these strings normally, but nonprintable bytes show as \xNN hexadecimal escapes, which become more prevalent under more sophisticated encoding schemes (cp500 in the following is an EBCDIC encoding):

    >>> data.encode('utf16')              # 2 bytes per character plus preamble
    b'\xff\xfes\x00p\x00\xe4\x00m\x00'

    >>> data.encode('cp500')              # an ebcdic encoding: very different
    b'\xa2\x97C\x94'

The encoded results here reflect the string's raw binary form when stored in files. Manual encoding is usually unnecessary, though, because text files handle encodings automatically on data transfers--reads decode and writes encode, according to the encoding name you pass in or the platform default (see
sys.getdefaultencoding). Continuing our interactive session:

    >>> open('data.txt', 'w', encoding='latin1').write(data)
    4
    >>> open('data.txt', 'r', encoding='latin1').read()
    'späm'
    >>> open('data.txt', 'rb').read()
    b'sp\xe4m'

If we open in binary mode, though, no encoding translation occurs--the last command in the preceding example shows us what's actually stored on the file. To see how file content differs for other encodings, let's save the same string again:

    >>> open('data.txt', 'w', encoding='utf8').write(data)      # encode per utf8
    4
    >>> open('data.txt', 'r', encoding='utf8').read()           # decode: undo encoding
    'späm'
    >>> open('data.txt', 'rb').read()                           # no data translations
    b'sp\xc3\xa4m'

This time, raw file content is different, but text mode's auto-decoding makes the string the same by the time it's read back by our script. Really, encodings pertain only to strings while they are in files; once they are loaded into memory, strings are simply sequences of Unicode characters ("code points"). This translation step is what we want for text files, but not for binary. Because binary modes skip the translation, you'll want to use them for truly binary data. In fact, you usually must--trying to write unencodable data and attempting to read undecodable data is an error:

    >>> open('data.txt', 'w', encoding='ascii').write(data)
    UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in
    position 2: ordinal not in range(128)

    >>> open(r'C:\Python31\python.exe', 'r').read()
    UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2:
    character maps to <undefined>

Binary mode is also a last resort for reading text files, if they cannot be decoded per the underlying platform's default, and the encoding type is unknown--the following recreates the original strings if the encoding type is known, but fails if it is not known, unless binary mode is used (such failure may occur either on inputting the data or printing it, but it fails nevertheless):

    >>> open('data.txt', 'w', encoding='cp500').writelines(['spam\n', 'ham\n'])
    >>> open('data.txt', 'r', encoding='cp500').readlines()
    ['spam\n', 'ham\n']

    >>> open('data.txt', 'r').readlines()
    UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2:
    character maps to <undefined>

    >>> open('data.txt', 'rb').readlines()
    [b'\xa2\x97\x81\x94%\x88\x81\x94%']
If all your text is ASCII, you generally can ignore encoding altogether; data in files maps directly to characters in strings, because ASCII is a subset of most platforms' default encodings. If you must process files created with other encodings, and possibly on different platforms (obtained from the Web, for instance), binary mode may be required if the encoding type is unknown. Keep in mind, however, that text in still-encoded binary form might not work as you expect: because it is encoded per a given encoding scheme, it might not accurately compare or combine with text encoded in other schemes.

Again, see other resources for more on the Unicode story. We'll revisit it at various points in this book--especially to see how it relates to the tkinter Text widget in the GUI part, and in the Internet part, to learn what it means for data shipped over networks by protocols such as FTP, email, and the Web at large. Text files have another feature, though, which is similarly a nonfeature for binary data: line-end translations, the topic of the next section.

End-of-line translations for text files

For historical reasons, the end of a line of text in a file is represented by different characters on different platforms. It's a single \n character on Unix-like platforms, but the two-character sequence \r\n on Windows. That's why files moved between Linux and Windows may look odd in your text editor after transfer--they may still be stored using the original platform's end-of-line convention.

For example, most Windows editors handle text in Unix format, but Notepad has been a notable exception--text files copied from Unix or Linux may look like one long line when viewed in Notepad, with strange characters inside (\n). Similarly, transferring a file from Windows to Unix in binary mode retains the \r characters (which often appear as ^M in text editors).

Python scripts that process text files don't normally have to care, because the file object automatically maps the DOS \r\n sequence to a single \n. It works like this by default, when scripts are run on Windows:

- For files opened in text mode, \r\n is translated to \n when input.
- For files opened in text mode, \n is translated to \r\n when output.
- For files opened in binary mode, no translation occurs on input or output.

On Unix-like platforms, no translations occur, because \n is used in files.

You should keep in mind two important consequences of these rules. First, the end-of-line character for text-mode files is almost always represented as a single \n within Python scripts, regardless of how it is stored in external files on the underlying platform. By mapping to and from \n on input and output, Python hides the platform-specific difference.

The second consequence of the mapping is subtler: when processing binary files, binary open modes (e.g., rb, wb) effectively turn off line-end translations. If they did not, the translations just listed could very well corrupt binary data as it is transferred--a
random \r in the data might be dropped on input, or added for \n characters in the data on output. The net effect is that your binary data would be trashed when read and written--probably not quite what you want for your audio files and images!

This issue has become almost secondary in Python 3.X, because we generally cannot use binary data with text-mode files anyhow--because text-mode files automatically apply Unicode encodings to content, transfers will generally fail when the data cannot be decoded on input or encoded on output. Using binary mode avoids Unicode errors, and automatically disables line-end translations as well (Unicode errors can be caught in try statements as well). Still, the fact that binary mode prevents end-of-line translations to protect file content is best noted as a separate feature, especially if you work in an ASCII-only world where Unicode encoding issues are irrelevant.

Here's the end-of-line translation at work in Python 3.X on Windows--text mode translates to and from the platform-specific line-end sequence so our scripts are portable:

    >>> open('temp.txt', 'w').write('shrubbery\n')     # text output mode: \n -> \r\n
    10
    >>> open('temp.txt', 'rb').read()                  # binary input: actual file bytes
    b'shrubbery\r\n'
    >>> open('temp.txt', 'r').read()                   # text input mode: \r\n -> \n
    'shrubbery\n'

By contrast, writing data in binary mode prevents all translations as expected, even if the data happens to contain bytes that are part of line-ends in text mode (byte strings print their characters as ASCII if printable, else as hexadecimal escapes):

    >>> data = b'a\0b\rc\r\nd'                         # 4 escape code bytes
    >>> len(data)
    8
    >>> open('temp.bin', 'wb').write(data)             # write binary data to file as is
    8
    >>> open('temp.bin', 'rb').read()                  # read as binary: no translation
    b'a\x00b\rc\r\nd'

But reading binary data in text mode, whether accidental or not, can corrupt the data when transferred because of line-end translations (assuming it passes as decodable at all; ASCII bytes like these do on this Windows platform):

    >>> open('temp.bin', 'r').read()                   # text mode read: botches \r!
    'a\x00b\nc\nd'

Similarly, writing binary data in text mode can have the same effect--line-end bytes may be changed or inserted (again, assuming the data is encodable per the platform's default):

    >>> open('temp.bin', 'w').write(data)              # must pass str for text mode
    TypeError: must be str, not bytes

    >>> data.decode()                                  # use bytes.decode() for to-str
    'a\x00b\rc\r\nd'
open('temp bin''rb'read( ' \ \rc\ \ \ndopen('temp bin'' 'read(' \ \nc\ \ndtext mode writeadded \ again dropsalters \ on input the short story to remember here is that you should generally use \ to refer to endline in all your text file contentand you should always open binary data in binary file modes to suppress both end-of-line translations and any unicode encodings file' content generally determines its open modeand file open modes usually process file content exactly as we want keep in mindthoughthat you might also need to use binary file modes for text in special contexts for instancein ' exampleswe'll sometimes open text files in binary mode to avoid possible unicode decoding errorsfor files generated on arbitrary platforms that may have been encoded in arbitrary ways doing so avoids encoding errorsbut also can mean that some text might not work as expected--searches might not always be accurate when applied to such raw textsince the search key must be in bytes string formatted and encoded according to specific and possibly incompatible encoding scheme in ' pyeditwe'll also need to catch unicode exceptions in "grepdirectory file search utilityand we'll go further to allow unicode encodings to be specified for file content across entire trees moreovera script that attempts to translate between different platformsend-of-line character conventions explicitly may need to read text in binary mode to retain the original line-end representation truly present in the filein text modethey would already be translated to \ by the time they reached the script it' also possible to disable or further tailor end-of-line translations in text mode with additional open arguments we will finesse here see the newline argument in open reference documentation for detailsin shortpassing an empty string to this argument also prevents line-end translation but retains other text-mode behavior for this let' turn next to two common use cases for binary data filespacked binary data and random access parsing packed binary data with the struct module by using the letter in the open callyou can open binary datafiles in platform-neutral way and read and write their content with normal file object methods but how do you process binary data once it has been readit will be returned to your script as simple string of bytesmost of which are probably not printable characters if you just need to pass binary data along to another file or programyour work is done--for instancesimply pass the byte string to another file opened in binary mode and if you just need to extract number of bytes from specific positionstring slicing will do the jobyou can even follow up with bitwise operations if you need to to get file tools
To interpret the bytes' contents in a more structured way, though, the standard library struct module is a more powerful alternative. The struct module provides calls to pack and unpack binary data, as though the data was laid out in a C-language struct declaration. It is also capable of composing and decomposing using any endian-ness you desire (endian-ness determines whether the most significant bits of binary numbers are on the left or right side).

Building a binary datafile, for instance, is straightforward--pack Python values into a byte string and write them to a file. The format string here in the pack call means big-endian (>), with an integer, four-character string, half integer, and floating-point number:

>>> import struct
>>> data = struct.pack('>i4shf', 2, b'spam', 3, 1.234)
>>> data
b'\x00\x00\x00\x02spam\x00\x03?\x9d\xf3\xb6'
>>> file = open('data.bin', 'wb')
>>> file.write(data)
14
>>> file.close()

Notice how the struct module returns a bytes string: we're in the realm of binary data here, not text, and must use binary mode files to store it. As usual, Python displays most of the packed binary data's bytes here with \xNN hexadecimal escape sequences, because the bytes are not printable characters.

To parse data like that which we just produced, read it off the file and pass it to the struct module with the same format string--you get back a tuple containing the values parsed out of the string and converted to Python objects:

>>> import struct
>>> file = open('data.bin', 'rb')
>>> data = file.read()
>>> values = struct.unpack('>i4shf', data)
>>> values
(2, b'spam', 3, 1.2339999675750732)

Parsed-out strings are byte strings again, and we can apply string and bitwise operations to probe deeper:

>>> bin(values[0])                                   # accessing bits and bytes
'0b10'
>>> values[1], list(values[1]), values[1][0]
(b'spam', [115, 112, 97, 109], 115)

Also note that slicing comes in handy in this domain; to grab just the four-character string in the middle of the packed binary data we just read, we can simply slice it out. Numeric values could similarly be sliced out and then passed to struct.unpack for conversion:

>>> bytes = b'\x00\x00\x00\x02spam\x00\x03?\x9d\xf3\xb6'
>>> bytes[4:8]
b'spam'
>>> number = b'\x00\x03'
>>> struct.unpack('>h', number)
(3,)

Packed binary data crops up in many contexts, including some networking tasks, and in data produced by other programming languages. Because it's not part of every programming job's description, though, we'll defer to the struct module's entry in the Python library manual for more details.

Random Access Files

Binary files also typically see action in random access processing. Earlier, we mentioned that adding a + to the open mode string allows a file to be both read and written. This mode is typically used in conjunction with the file object's seek method to support random read/write access.

Such flexible file processing modes allow us to read bytes from one location, write to another, and so on. When scripts combine this with binary file modes, they may fetch and update arbitrary bytes within a file.

We used seek earlier to rewind files instead of closing and reopening them. As mentioned, read and write operations always take place at the current position in the file; files normally start at offset 0 when opened and advance as data is transferred. The seek call lets us move to a new position for the next transfer operation by passing in a byte offset.

Python's seek method also accepts an optional second argument that has one of three values--0 for absolute file positioning (the default); 1 to seek relative to the current position; and 2 to seek relative to the file's end. That's why passing just an offset of 0 to seek is roughly a file rewind operation: it repositions the file to its absolute start. In general, seek supports random access on a byte-offset basis. Seeking to a multiple of a record's size in a binary file, for instance, allows us to fetch a record by its relative position.

Although you can use seek without + modes in open (e.g., to just read from random locations), it's most flexible when combined with input/output files. And while you can perform random access in text mode, too, the fact that text modes perform Unicode encodings and line-end translations makes them difficult to use when absolute byte offsets and lengths are required for seeks and reads--your data may look very different when stored in files. Text mode may also make your data nonportable to platforms with different default encodings, unless you're willing to always specify an explicit encoding for opens. Except for simple unencoded ASCII text without line-ends, seek tends to work best with binary mode files.
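As a quick sketch of the three whence settings in action, here they are applied to the data.bin file created in the preceding section (seek returns the new absolute position in 3.X):

>>> f = open('data.bin', 'rb')
>>> f.seek(0, 2)                 # whence=2: offset 0 from the file's end
14
>>> f.seek(-6, 2)                # 6 bytes back from the end
8
>>> f.read(4)
b'\x00\x03?\x9d'
>>> f.seek(2, 1)                 # whence=1: skip 2 bytes from current position
14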
To demonstrate, let's create a file in w+b mode (equivalent to wb+) and write some data to it; this mode allows us to both read and write, but initializes the file to be empty if it's already present (all w modes do). After writing some data, we seek back to file start to read its content (some integer return values are omitted in this example again for brevity):

C:\temp> python
>>> records = [b'ssssssss', b'pppppppp', b'aaaaaaaa', b'mmmmmmmm']
>>> file = open('random.bin', 'w+b')
>>> for rec in records:                          # write four records
...     size = file.write(rec)                   # bytes for binary mode
...
>>> file.flush()
>>> pos = file.seek(0)                           # read entire file
>>> print(file.read())
b'ssssssssppppppppaaaaaaaammmmmmmm'

Now, let's reopen our file in r+b mode; this mode allows both reads and writes again, but does not initialize the file to be empty. This time, we seek and read in multiples of the size of data items ("records") stored, to both fetch and update them at random:

C:\temp> python
>>> file = open('random.bin', 'r+b')
>>> print(file.read())                           # read entire file
b'ssssssssppppppppaaaaaaaammmmmmmm'

>>> record = b'X' * 8
>>> file.seek(0)                                 # update first record
>>> file.write(record)

>>> file.seek(len(record) * 2)                   # update third record
>>> file.write(b'Y' * 8)

>>> file.seek(8)
>>> file.read(len(record))                       # fetch second record
b'pppppppp'
>>> file.read(len(record))                       # fetch next (third) record
b'YYYYYYYY'

>>> file.seek(0)                                 # read entire file
>>> file.read()
b'XXXXXXXXppppppppYYYYYYYYmmmmmmmm'

C:\temp> type random.bin                         # the view outside Python
XXXXXXXXppppppppYYYYYYYYmmmmmmmm

Finally, keep in mind that seek can be used to achieve random access, even if it's just for input. The following seeks in multiples of a record's size to read (but not write) fixed-length records at random. Notice that it also uses r text mode: since this data is simple ASCII text bytes and has no line-ends, text and binary modes work the same on this platform:

C:\temp> python
>>> file = open('random.bin', 'r')               # text mode ok if no encoding/endlines
>>> reclen = 8
>>> file.seek(reclen * 3)                        # fetch record 4
>>> file.read(reclen)
'mmmmmmmm'
>>> file.seek(reclen * 1)                        # fetch record 2
>>> file.read(reclen)
'pppppppp'
>>> file = open('random.bin', 'rb')              # binary mode works the same here
>>> file.seek(reclen * 2)                        # fetch record 3
>>> file.read(reclen)                            # returns byte strings
b'YYYYYYYY'

But unless your file's content is always simple unencoded text form like ASCII and has no translated line-ends, text mode should not generally be used if you are going to seek--line-ends may be translated on Windows and Unicode encodings may make arbitrary transformations, both of which can make absolute seek offsets difficult to use. In the following, for example, the positions of characters after the first non-ASCII no longer match between the string in Python and its encoded representation on the file:

>>> data = 'sp\xe4m'                                     # data to your script
>>> data, len(data)                                      # 4 unicode chars, 1 nonascii
('späm', 4)
>>> data.encode('utf8'), len(data.encode('utf8'))        # bytes written to file
(b'sp\xc3\xa4m', 5)

>>> f = open('test', mode='w+', encoding='utf8')         # use text mode, encoded
>>> f.write(data)
>>> f.flush()
>>> f.seek(0); f.read(1)                                 # ascii bytes work
's'
>>> f.seek(2); f.read(1)                                 # as does 2-byte nonascii
'ä'
>>> data[3]                                              # but offset 3 is not 'm'!
'm'
>>> f.seek(3); f.read(1)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa4 in position 0: unexpected code byte

As you can see, Python's file modes provide flexible file processing for programs that require it. In fact, the os module offers even more file processing options, as the next section describes.

Lower-Level File Tools in the os Module

The os module contains an additional set of file-processing functions that are distinct from the built-in file object tools demonstrated in previous examples. For instance, here is a partial list of os file-related calls:

os.open(path, flags, mode)
    Opens a file and returns its descriptor
os.read(descriptor, N)
    Reads at most N bytes and returns a byte string
os.write(descriptor, string)
    Writes bytes in byte string string to the file
os.lseek(descriptor, position, how)
    Moves to position in the file

Technically, os calls process files by their descriptors, which are integer codes or "handles" that identify files in the operating system. Descriptor-based files deal in raw bytes, and have no notion of the line-end or Unicode translations for text that we studied in the prior section. In fact, apart from extras like buffering, descriptor-based files generally correspond to binary mode file objects, and we similarly read and write bytes strings, not str strings. However, because the descriptor-based file tools in os are lower level and more complex than the built-in file objects created with the built-in open function, you should generally use the latter for all but very special file-processing needs.

Using os.open files

To give you the general flavor of this tool set, though, let's run a few interactive experiments. Although built-in file objects and os module descriptor files are processed with distinct tool sets, they are in fact related--the file system used by file objects simply adds a layer of logic on top of descriptor-based files. (This layering can matter, for instance, when processing pipes: the Python os.pipe call returns two file descriptors, which can be processed with os module file tools or wrapped in a file object with os.fdopen. When used with descriptor-based file tools in os, pipes deal in byte strings, not text. Some device files may require lower-level control as well.)

In fact, the fileno file object method returns the integer descriptor associated with a built-in file object. For instance, the standard stream file objects have descriptors 0, 1, and 2; calling the os.write function to send data to stdout by descriptor has the same effect as calling the sys.stdout.write method:

>>> import sys
>>> for stream in (sys.stdin, sys.stdout, sys.stderr):
...     print(stream.fileno())
...
0
1
2

>>> sys.stdout.write('Hello stdio world\n')          # write via file method
Hello stdio world
18

>>> import os
>>> os.write(1, b'Hello descriptor world\n')         # write via os module
Hello descriptor world
23

Because file objects we open explicitly behave the same way, it's also possible to process a given real external file on the underlying computer through the built-in open function, tools in the os module, or both (some integer return values are omitted here for brevity):
>>> file = open(r'C:\temp\spam.txt', 'w')            # create external file, object
>>> file.write('Hello stdio file\n')                 # write via file object method
>>> file.flush()                                     # else os.write to disk first!
>>> fd = file.fileno()                               # get descriptor from object
>>> fd
3
>>> import os
>>> os.write(fd, b'Hello descriptor file\n')         # write via os module
>>> file.close()

C:\temp> type spam.txt                               # lines from both schemes
Hello stdio file
Hello descriptor file

os.open mode flags

So why the extra file tools in os? In short, they give more low-level control over file processing. The built-in open function is easy to use, but it may be limited by the underlying filesystem that it uses, and it adds extra behavior that we do not want. The os module lets scripts be more specific--for example, the following opens a descriptor-based file in read-write and binary modes by performing a binary "or" on two mode flags exported by os:

>>> fdfile = os.open(r'C:\temp\spam.txt', (os.O_RDWR | os.O_BINARY))
>>> os.read(fdfile, 20)
b'Hello stdio file\r\nHe'
>>> os.lseek(fdfile, 0, 0)                           # go back to start of file
>>> os.read(fdfile, 100)                             # binary mode retains "\r\n"
b'Hello stdio file\r\nHello descriptor file\n'

>>> os.lseek(fdfile, 0, 0)
>>> os.write(fdfile, b'HELLO')                       # overwrite first 5 bytes
5

C:\temp> type spam.txt
HELLO stdio file
Hello descriptor file

In this case, binary mode strings 'rb+' and 'r+b' in the basic open call are equivalent:

>>> file = open(r'C:\temp\spam.txt', 'rb+')          # same but with open/objects
>>> file.read(20)
b'HELLO stdio file\r\nHe'
>>> file.seek(0)
>>> file.read(100)
b'HELLO stdio file\r\nHello descriptor file\n'

>>> file.seek(0)
>>> file.write(b'Jello')
>>> file.seek(0)
>>> file.read()
b'Jello stdio file\r\nHello descriptor file\n'

Other os.open mode flags allow things such as exclusive access (O_EXCL) and nonblocking modes (O_NONBLOCK) when a file is opened. Some of these flags are not portable across platforms (another reason to use built-in file objects most of the time); see the library manual or run a dir(os) call on your machine for an exhaustive list of other open flags available. One final note here: using os.open with the O_EXCL flag is the most portable way to lock files for concurrent updates or other process synchronization in Python today. We'll see contexts where this can matter in the next chapter, when we begin to explore multiprocessing tools. Programs running in parallel on a server machine, for instance, may need to lock files before performing updates, if multiple threads or processes might attempt such updates at the same time.
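Here is a minimal sketch of that locking technique; the lock file name and the error handling here are assumptions for illustration, not a fixed recipe:

import os

def acquire_lock(lockname='app.lock'):
    # O_CREAT|O_EXCL fails atomically if the file already exists,
    # so only one process can create the lock file at a time
    try:
        fd = os.open(lockname, os.O_CREAT | os.O_EXCL | os.O_RDWR)
    except OSError:
        return None                          # someone else holds the lock
    os.write(fd, str(os.getpid()).encode())  # record owner's process id
    return fd

def release_lock(fd, lockname='app.lock'):
    os.close(fd)
    os.remove(lockname)                      # let the next process in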
Wrapping descriptors in file objects

We saw earlier how to go from a file object to a file descriptor with the fileno file object method; given a descriptor, we can use os module tools for lower-level file access to the underlying file. We can also go the other way--the os.fdopen call wraps a file descriptor in a file object. Because conversions work both ways, we can generally use either tool set--file object or os module:

>>> fdfile = os.open(r'C:\temp\spam.txt', (os.O_RDWR | os.O_BINARY))
>>> objfile = os.fdopen(fdfile, 'rb')
>>> objfile.read()
b'Jello stdio file\r\nHello descriptor file\n'

In fact, we can wrap a file descriptor in either a binary or text-mode file object: in text mode, reads and writes perform the Unicode encodings and line-end translations we studied earlier and deal in str strings instead of bytes:

C:\...\PP4E\System> python
>>> import os
>>> fdfile = os.open(r'C:\temp\spam.txt', (os.O_RDWR | os.O_BINARY))
>>> objfile = os.fdopen(fdfile, 'r')
>>> objfile.read()
'Jello stdio file\nHello descriptor file\n'

In Python 3.X, the built-in open call also accepts a file descriptor instead of a file name string; in this mode it works much like os.fdopen, but gives you greater control--for example, you can use additional arguments to specify a nondefault Unicode encoding for text and suppress the default descriptor close. Really, though, os.fdopen accepts the same extra-control arguments in 3.X, because it has been redefined to do little but call back to the built-in open (see os.py in the standard library):

C:\...\PP4E\System> python
>>> import os
>>> fdfile = os.open(r'C:\temp\spam.txt', (os.O_RDWR | os.O_BINARY))
>>> fdfile
3
>>> objfile = open(fdfile, 'r')
>>> objfile.read()
'Jello stdio file\nHello descriptor file\n'

>>> objfile = os.fdopen(fdfile, 'r', encoding='latin1', closefd=True)
>>> objfile.seek(0)
>>> objfile.read()
'Jello stdio file\nHello descriptor file\n'

We'll make use of this file object wrapper technique to simplify text-oriented pipes and other descriptor-like objects later in this book (e.g., sockets have a makefile method which achieves similar effects).

Other os module file tools

The os module also includes an assortment of file tools that accept a file pathname string and accomplish file-related tasks such as renaming (os.rename), deleting (os.remove), and changing the file's owner and permission settings (os.chown, os.chmod). Let's step through a few examples of these tools in action:

>>> os.chmod('spam.txt', 0o777)                      # enable all accesses

This os.chmod file permissions call passes a 9-bit string composed of three sets of three bits each. From left to right, the three sets represent the file's owning user, the file's group, and all others. Within each set, the three bits reflect read, write, and execute access permissions. When a bit is "1" in this string, it means that the corresponding operation is allowed for the accessor. For instance, octal 0o777 is a string of nine "1" bits in binary, so it enables all three kinds of accesses for all three user groups; octal 0o600 means that the file can be read and written only by the user that owns it (when written in binary, 0o600 octal is really bits 110 000 000). This scheme stems from Unix file permission settings, but the call works on Windows as well. If it's puzzling, see your system's documentation (e.g., a Unix manpage) for chmod. Moving on:

>>> os.rename(r'C:\temp\spam.txt', r'C:\temp\eggs.txt')      # from, to
>>> os.remove(r'C:\temp\spam.txt')                           # delete file?
WindowsError: [Error 2] The system cannot find the file specified: 'C:\\temp\\spam.txt'
>>> os.remove(r'C:\temp\eggs.txt')

The os.rename call used here changes a file's name; the os.remove file deletion call deletes a file from your system and is synonymous with os.unlink (the latter reflects the call's name on Unix but was obscure to users of other platforms). For related tools, see also the shutil module in Python's standard library; it has higher-level tools for copying and removing files and more. We'll also write directory compare, copy, and search tools of our own in a later chapter, after we've had a chance to study the directory tools presented later in this chapter.
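For a quick taste of shutil, here is a minimal sketch; the file names used are arbitrary choices for illustration:

>>> import shutil
>>> shutil.copyfile('spam.txt', 'spam-copy.txt')     # copy one file's content
>>> shutil.move('spam-copy.txt', 'backup.txt')       # move/rename, like os.rename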
The os module also exports the stat system call:

C:\temp> python
>>> import os
>>> info = os.stat(r'C:\temp\spam.txt')
>>> info
nt.stat_result(st_mode=33206, st_ino=0, st_dev=0, st_nlink=0, st_uid=0,
st_gid=0, st_size=40, st_atime=..., st_mtime=..., st_ctime=...)

>>> info.st_mode, info.st_size                   # via named-tuple item attr names
(33206, 40)

>>> import stat
>>> info[stat.ST_MODE], info[stat.ST_SIZE]       # via stat module presets
(33206, 40)
>>> stat.S_ISDIR(info.st_mode), stat.S_ISREG(info.st_mode)
(False, True)

The os.stat call returns a tuple of values (really, in 3.X, a special kind of tuple with named items) giving low-level information about the named file, and the stat module exports constants and functions for querying this information in a portable way. For instance, indexing an os.stat result on offset stat.ST_SIZE returns the file's size, and calling stat.S_ISDIR with the mode item from an os.stat result checks whether the file is a directory. As shown earlier, though, both of these operations are available in the os.path module, too, so it's rarely necessary to use os.stat except for low-level file queries:

>>> path = r'C:\temp\spam.txt'
>>> os.path.isdir(path), os.path.isfile(path), os.path.getsize(path)
(False, True, 40)

File Scanners

Before we leave our file tools survey, it's time for something that performs a more tangible task and illustrates some of what we've learned so far. Unlike some shell-tool languages, Python doesn't have an implicit file-scanning loop procedure, but it's simple to write a general one that we can reuse for all time. The module in the example below defines a general file-scanning routine, which simply applies a passed-in Python function to each line in an external file.

Example: PP4E\System\Filetools\scanfile.py

def scanner(name, function):
    file = open(name, 'r')              # create a file object
    while True:
        line = file.readline()          # call file methods
        if not line: break              # until end-of-file
        function(line)                  # call a function object
    file.close()

The scanner function doesn't care what line-processing function is passed in, and that accounts for most of its generality--it is happy to apply any single-argument function.
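For instance, nothing stops us from passing a built-in method or a lambda for throwaway uses; a quick sketch (the data file name here is arbitrary):

>>> import sys
>>> from scanfile import scanner
>>> scanner('data.txt', sys.stdout.write)                          # echo lines as is
>>> scanner('data.txt', lambda line: print(line.upper(), end=''))  # transform lines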
If we code this module and put it in a directory on the module search path, we can use it any time we need to step through a file line by line. The client script below does simple line translations.

Example: PP4E\System\Filetools\commands.py

#!/usr/local/bin/python
from sys import argv
from scanfile import scanner

class UnknownCommand(Exception): pass

def processLine(line):                      # define a function
    if line[0] == '*':                      # applied to each line
        print("Ms.", line[1:-1])
    elif line[0] == '+':
        print("Mr.", line[1:-1])            # strip first and last char: \n
    else:
        raise UnknownCommand(line)          # raise an exception

filename = 'data.txt'
if len(argv) == 2: filename = argv[1]       # allow filename cmd arg
scanner(filename, processLine)              # start the scanner

The text file hillbillies.txt contains the following lines:

*Granny
+Jethro
*Elly May
+"Uncle Jed"

and our commands script could be run as follows:

C:\...\PP4E\System\Filetools> python commands.py hillbillies.txt
Ms. Granny
Mr. Jethro
Ms. Elly May
Mr. "Uncle Jed"

This works, but there are a variety of coding alternatives for both files, some of which may be better than those listed above. For instance, we could also code the command processor in the following way; especially if the number of command options starts to become large, such a data-driven approach may be more concise and easier to maintain than a large if statement with essentially redundant actions (if you ever have to change the way output lines print, you'll have to change it in only one place with this form):

commands = {'*': 'Ms.', '+': 'Mr.'}         # data is easier to expand than code

def processLine(line):
    try:
        print(commands[line[0]], line[1:-1])
    except KeyError:
        raise UnknownCommand(line)
As a rule of thumb, we can also often speed things up by shifting processing from Python code to built-in tools. For instance, if we're concerned with speed, we can probably make our file scanner faster by using the file's line iterator to step through the file instead of the manual readline loop (though you'd have to time this with your Python to be sure):

def scanner(name, function):
    for line in open(name, 'r'):            # scan line by line
        function(line)                      # call function object

And we can work more magic with iteration tools like the map built-in function, the list comprehension expression, and the generator expression. Here are three minimalist's versions; the for loop is replaced by map or a comprehension, and we let Python close the file for us when it is garbage collected or the script exits (these all build a temporary list of results along the way to run through their iterations, but this overhead is likely trivial for all but the largest of files):

def scanner(name, function):
    list(map(function, open(name, 'r')))

def scanner(name, function):
    [function(line) for line in open(name, 'r')]

def scanner(name, function):
    list(function(line) for line in open(name, 'r'))

File filters

The preceding works as planned, but what if we also want to change a file while scanning it? The example below shows two approaches: one uses explicit files, and the other uses the standard input/output streams to allow for redirection on the command line.

Example: PP4E\System\Filetools\filters.py

import sys

def filter_files(name, function):           # filter file through function
    input = open(name, 'r')                 # create file objects
    output = open(name + '.out', 'w')       # explicit output file too
    for line in input:
        output.write(function(line))        # write the modified line
    input.close()
    output.close()                          # output has a '.out' suffix

def filter_stream(function):                # no explicit files
    while True:                             # use standard streams
        line = sys.stdin.readline()         # or: input()
        if not line: break
        print(function(line), end='')       # or: sys.stdout.write()

if __name__ == '__main__':
    filter_stream(lambda line: line)        # copy stdin to stdout if run
As in the prior section, the with statement could be used here to achieve the same effect as the explicit close lines in the file-based filter of the example above, and also guarantee immediate file closures if the processing function fails with an exception:

def filter_files(name, function):
    with open(name, 'r') as input, open(name + '.out', 'w') as output:
        for line in input:
            output.write(function(line))    # write the modified line

And again, file object line iterators could simplify the stream-based filter's code in this example as well:

def filter_stream(function):
    for line in sys.stdin:                  # read by lines automatically
        print(function(line), end='')

Since the standard streams are preopened for us, they're often easier to use. When run standalone, it simply parrots stdin to stdout:

C:\...\PP4E\System\Filetools> filters.py < hillbillies.txt
*Granny
+Jethro
*Elly May
+"Uncle Jed"

But this module is also useful when imported as a library (clients provide the line-processing function):

>>> from filters import filter_files
>>> filter_files('hillbillies.txt', str.upper)
>>> print(open('hillbillies.txt.out').read())
*GRANNY
+JETHRO
*ELLY MAY
+"UNCLE JED"

We'll see files in action often in the remainder of this book, especially in the more complete and functional system examples of later chapters. First, though, we turn to tools for processing our files' home.

Directory Tools

One of the more common tasks in the shell utilities domain is applying an operation to a set of files in a directory--a "folder" in Windows-speak. By running a script on a batch of files, we can automate (that is, script) tasks we might have to otherwise run repeatedly by hand.

For instance, suppose you need to search all of your Python files in a development directory for a global variable name (perhaps you've forgotten where it is used). There are many platform-specific ways to do this (e.g., the find and grep commands in Unix), but Python scripts that accomplish such tasks will work on every platform where Python works--Windows, Unix, Linux, Macintosh, and just about any other platform.
If you simply copy the script to any machine you wish to run it on, it will work regardless of which other tools are available there; all you need is Python. Moreover, coding such tasks in Python also allows you to perform arbitrary actions along the way--replacements, deletions, and whatever else you can code in the Python language.

Walking One Directory

The most common way to go about writing such tools is to first grab a list of the names of the files you wish to process, and then step through that list with a Python for loop or other iteration tool, processing each file in turn. The trick we need to learn here, then, is how to get such a directory list within our scripts. For scanning directories there are at least three options: running shell listing commands with os.popen, matching filename patterns with glob.glob, and getting directory listings with os.listdir. They vary in interface, result format, and portability.

Running shell listing commands with os.popen

How did you go about getting directory file listings before you heard of Python? If you're new to shell tools programming, the answer may be "Well, I started a Windows file explorer and clicked on things," but I'm thinking here in terms of less GUI-oriented command-line mechanisms.

On Unix, directory listings are usually obtained by typing ls in a shell; on Windows, they can be generated with a dir command typed in an MS-DOS console box. Because Python scripts may use os.popen to run any command line that we can type in a shell, they are the most general way to grab a directory listing inside a Python program. We met os.popen in the prior chapters; it runs a shell command string and gives us a file object from which we can read the command's output. To illustrate, let's first assume the following directory structures--I have both the usual dir and a Unix-like ls command from Cygwin on my Windows laptop:

C:\temp> dir /B
parts
PP3E
random.bin
spam.txt
temp.bin
temp.txt

C:\temp> c:\cygwin\bin\ls
PP3E  parts  random.bin  spam.txt  temp.bin  temp.txt

C:\temp> c:\cygwin\bin\ls parts
part0001  part0002  part0003  part0004

The parts and PP3E names are nested subdirectories in C:\temp here (the latter is a copy of the prior edition's examples tree, which I used occasionally in this text).
Now, as we've seen, we can grab these listings from within Python scripts by spawning the appropriate platform-specific command line and reading its output (the text normally thrown up on the console window):

C:\temp> python
>>> import os
>>> os.popen('dir /B').readlines()
['parts\n', 'PP3E\n', 'random.bin\n', 'spam.txt\n', 'temp.bin\n', 'temp.txt\n']

Lines read from a shell command come back with a trailing end-of-line character, but it's easy enough to slice it off; the os.popen result also gives us a line iterator just like normal files:

>>> for line in os.popen('dir /B'):
...     print(line[:-1])
...
parts
PP3E
random.bin
spam.txt
temp.bin
temp.txt

>>> lines = [line[:-1] for line in os.popen('dir /B')]
>>> lines
['parts', 'PP3E', 'random.bin', 'spam.txt', 'temp.bin', 'temp.txt']

For pipe objects, the effect of iterators may be even more useful than simply avoiding loading the entire result into memory all at once: readlines will always block the caller until the spawned program is completely finished, whereas the iterator might not.

The dir and ls commands let us be specific about filename patterns to be matched and directory names to be listed by using name patterns; again, we're just running shell commands here, so anything you can type at a shell prompt goes:

>>> os.popen('dir *.bin /B').readlines()
['random.bin\n', 'temp.bin\n']
>>> os.popen(r'c:\cygwin\bin\ls *.bin').readlines()
['random.bin\n', 'temp.bin\n']

>>> list(os.popen(r'dir parts /B'))
['part0001\n', 'part0002\n', 'part0003\n', 'part0004\n']
>>> [fname for fname in os.popen(r'c:\cygwin\bin\ls parts')]
['part0001\n', 'part0002\n', 'part0003\n', 'part0004\n']

These calls use general tools and work as advertised. As noted earlier, though, the downsides of os.popen are that it requires using a platform-specific shell command and it incurs a performance hit to start up an independent program. In fact, different listing tools may sometimes produce different results:

>>> list(os.popen(r'dir parts\part* /B'))
['part0001\n', 'part0002\n', 'part0003\n', 'part0004\n']
>>> list(os.popen(r'c:\cygwin\bin\ls parts/part*'))
['parts/part0001\n', 'parts/part0002\n', 'parts/part0003\n', 'parts/part0004\n']

The next two alternative techniques do better on both counts.

The glob module

The term globbing comes from the * wildcard character in filename patterns; per computing folklore, a * matches a "glob" of characters. In less poetic terms, globbing simply means collecting the names of all entries in a directory--files and subdirectories--whose names match a given filename pattern. In Unix shells, globbing expands filename patterns within a command line into all matching filenames before the command is ever run. In Python, we can do something similar by calling the glob.glob built-in--a tool that accepts a filename pattern to expand, and returns a list (not a generator) of matching file names:

>>> import glob
>>> glob.glob('*')
['parts', 'PP3E', 'random.bin', 'spam.txt', 'temp.bin', 'temp.txt']
>>> glob.glob('*.bin')
['random.bin', 'temp.bin']
>>> glob.glob('parts')
['parts']
>>> glob.glob('parts/*')
['parts\\part0001', 'parts\\part0002', 'parts\\part0003', 'parts\\part0004']
>>> glob.glob('parts\part*')
['parts\\part0001', 'parts\\part0002', 'parts\\part0003', 'parts\\part0004']

The glob call accepts the usual filename pattern syntax used in shells: ? means any one character, * means any number of characters, and [] is a character selection set. The pattern should include a directory path if you wish to glob in something other than the current working directory, and the module accepts either Unix or DOS-style directory separators (/ or \). This call is implemented without spawning a shell command (it uses os.listdir, described in the next section) and so is likely to be faster and more portable and uniform across all Python platforms than the os.popen schemes shown earlier.

Technically speaking, glob is a bit more powerful than described so far. In fact, using it to list files in one directory is just one use of its pattern-matching skills. For instance, it can also be used to collect matching names across multiple directories, simply because each level in a passed-in directory path can be a pattern too:

>>> for path in glob.glob(r'PP3E\Examples\PP3E\*\s*.py'): print(path)
...
PP3E\Examples\PP3E\Lang\summer.py
PP3E\Examples\PP3E\PyTools\search_all.py

Here, we get back filenames from two different directories that match the s*.py pattern; because the directory name preceding this is a * wildcard, Python collects all possible ways to reach the base filenames. Using os.popen to spawn shell commands achieves the same effect, but only if the underlying shell or listing command does, too, and with possibly different result formats across tools and platforms. (In fact, glob just uses the standard fnmatch module to match name patterns; see the fnmatch description in the find module example later in this book for more details.)

The os.listdir call

The os module's listdir call provides yet another way to collect filenames in a Python list. It takes a simple directory name string, not a filename pattern, and returns a list containing the names of all entries in that directory--both simple files and nested directories--for use in the calling script:

>>> import os
>>> os.listdir('.')
['parts', 'PP3E', 'random.bin', 'spam.txt', 'temp.bin', 'temp.txt']
>>> os.listdir(os.curdir)
['parts', 'PP3E', 'random.bin', 'spam.txt', 'temp.bin', 'temp.txt']
>>> os.listdir('parts')
['part0001', 'part0002', 'part0003', 'part0004']

This, too, is done without resorting to shell commands and so is both fast and portable to all major Python platforms. The result is not in any particular order across platforms (but can be sorted with the list sort method or sorted built-in function); it returns base filenames without their directory path prefixes; it does not include the names "." or ".." if present; and it includes names of both files and directories at the listed level.

To compare all three listing techniques, let's run them here side by side on an explicit directory. They differ in some ways but are mostly just variations on a theme for this task--os.popen returns end-of-lines and may sort filenames on some platforms, glob.glob accepts a pattern and returns filenames with directory prefixes, and os.listdir takes a simple directory name and returns names without directory prefixes:

>>> os.popen('dir /b parts').readlines()
['part0001\n', 'part0002\n', 'part0003\n', 'part0004\n']
>>> glob.glob(r'parts\*')
['parts\\part0001', 'parts\\part0002', 'parts\\part0003', 'parts\\part0004']
>>> os.listdir('parts')
['part0001', 'part0002', 'part0003', 'part0004']

Of these three, glob and listdir are generally better options if you care about script portability and result uniformity, and listdir seems fastest in recent Python releases (but gauge its performance yourself--implementations may change over time).
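Incidentally, since glob is built on the fnmatch module mentioned above, you can approximate a glob yourself with os.listdir plus fnmatch; a minimal sketch:

>>> import os, fnmatch
>>> [name for name in os.listdir('parts') if fnmatch.fnmatch(name, 'part*')]
['part0001', 'part0002', 'part0003', 'part0004']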
Splitting and joining listing results

In the last examples, I pointed out that glob returns names with directory paths, whereas listdir gives raw base filenames. For convenient processing, scripts often need to split glob results into base files or expand listdir results into full paths. Such translations are easy if we let the os.path module do all the work for us. For example, a script that intends to copy all files elsewhere will typically need to first split off the base filenames from glob results so that it can add different directory names on the front:

>>> dirname = r'C:\temp\parts'
>>> import glob, os
>>> for file in glob.glob(dirname + '/*'):
...     head, tail = os.path.split(file)
...     print(head, tail, '=>', ('C:\\Other\\' + tail))
...
C:\temp\parts part0001 => C:\Other\part0001
C:\temp\parts part0002 => C:\Other\part0002
C:\temp\parts part0003 => C:\Other\part0003
C:\temp\parts part0004 => C:\Other\part0004

Here, the names after the => represent names that files might be moved to. Conversely, a script that means to process all files in a different directory than the one it runs in will probably need to prepend listdir results with the target directory name before passing filenames on to other tools:

>>> import os
>>> for file in os.listdir(dirname):
...     print(dirname, file, '=>', os.path.join(dirname, file))
...
C:\temp\parts part0001 => C:\temp\parts\part0001
C:\temp\parts part0002 => C:\temp\parts\part0002
C:\temp\parts part0003 => C:\temp\parts\part0003
C:\temp\parts part0004 => C:\temp\parts\part0004

When you begin writing realistic directory processing tools of the sort we'll develop later, you'll find these calls to be almost habit.

Walking Directory Trees

You may have noticed that almost all of the techniques in this section so far return the names of files in only a single directory (globbing with more involved patterns is the only exception). That's fine for many tasks, but what if you want to apply an operation to every file in every directory and subdirectory in an entire directory tree?

For instance, suppose again that we need to find every occurrence of a global name in our Python scripts. This time, though, our scripts are arranged into a module package: a directory with nested subdirectories, which may have subdirectories of their own. We could rerun our hypothetical single-directory searcher manually in every directory in the tree, but that's tedious, error prone, and just plain not fun.
Luckily, in Python it's almost as easy to process a directory tree as it is to inspect a single directory. We can either write a recursive routine to traverse the tree, or use a tree-walker utility built into the os module. Such tools can be used to search, copy, compare, and otherwise process arbitrary directory trees on any platform that Python runs on (and that's just about everywhere).

The os.walk visitor

To make it easy to apply an operation to all files in a complete directory tree, Python comes with a utility that scans trees for us and runs code we provide at every directory along the way: the os.walk function is called with a directory root name and automatically walks the entire tree at root and below.

Operationally, os.walk is a generator function--at each directory in the tree, it yields a three-item tuple, containing the name of the current directory as well as lists of both all the files and all the subdirectories in the current directory. Because it's a generator, its walk is usually run by a for loop (or other iteration tool); on each iteration, the walker advances to the next subdirectory, and the loop runs its code for the next level of the tree (for instance, opening and searching all the files at that level).

That description might sound complex the first time you hear it, but os.walk is fairly straightforward once you get the hang of it. In the following, for example, the loop body's code is run for each directory in the tree rooted at the current working directory. Along the way, the loop simply prints the directory name and all the files at the current level after prepending the directory name. It's simpler in Python than in English (I removed the PP3E subdirectory for this test to keep the output short):

>>> import os
>>> for (dirname, subshere, fileshere) in os.walk('.'):
...     print('[' + dirname + ']')
...     for fname in fileshere:
...         print(os.path.join(dirname, fname))      # handle one file
...
[.]
.\random.bin
.\spam.txt
.\temp.bin
.\temp.txt
[.\parts]
.\parts\part0001
.\parts\part0002
.\parts\part0003
.\parts\part0004

In other words, we've coded our own custom and easily changed recursive directory listing tool in Python.
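Variations on this loop are endless. Here, for instance, is a quick sketch that tallies a tree's total file count and byte size instead of printing names (the root name passed in is arbitrary):

import os

numfiles = numbytes = 0
for (dirname, subshere, fileshere) in os.walk('.'):
    for fname in fileshere:
        path = os.path.join(dirname, fname)
        numfiles += 1
        numbytes += os.path.getsize(path)       # add size of one file
print(numfiles, numbytes)                       # totals for the whole tree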
"list file tree with os walkimport sysos def lister(root)for (thisdirsubsherefilesherein os walk(root)print('[thisdir ']'for fname in filesherepath os path join(thisdirfnameprint(pathif __name__ ='__main__'lister(sys argv[ ]for root dir generate dirs in tree print files in this dir add dir name prefix dir name in cmdline when packaged this waythe code can also be run from shell command line here it is being launched with the root directory to be listed passed in as command-line argumentc:\pp \system\filetoolspython lister_walk py :\temp\test [ :\temp\testc:\temp\test\random bin :\temp\test\spam txt :\temp\test\temp bin :\temp\test\temp txt [ :\temp\test\partsc:\temp\test\parts\part :\temp\test\parts\part :\temp\test\parts\part :\temp\test\parts\part here' more involved example of os walk in action suppose you have directory tree of files and you want to find all python source files within it that reference the mime types module we'll study in the following is one (albeit hardcoded and overly specificway to accomplish this taskimport os matches [for (dirnamedirsherefilesherein os walk( ' :\temp\pp \examples')for filename in fileshereif filename endswith(py')pathname os path join(dirnamefilenameif 'mimetypesin open(pathnameread()matches append(pathnamefor name in matchesprint(namec:\temp\pp \examples\pp \internet\email\mailtools\mailparser py :\temp\pp \examples\pp \internet\email\mailtools\mailsender py :\temp\pp \examples\pp \internet\ftp\mirror\downloadflat py :\temp\pp \examples\pp \internet\ftp\mirror\downloadflat_modular py :\temp\pp \examples\pp \internet\ftp\mirror\ftptools py :\temp\pp \examples\pp \internet\ftp\mirror\uploadflat py :\temp\pp \examples\pp \system\media\playfile py file and directory tools
4,746
This code loops through all the files at each level of the tree, looking for files with .py at the end of their names and which contain the search string. When a match is found, its full name is appended to the results list object; alternatively, we could also simply build a list of all .py files and search each in a for loop after the walk. Since we're going to code a much more general solution to this type of problem later, though, we'll let this stand for now.

If you want to see what's really going on in the os.walk generator, call its __next__ method (or equivalently, pass it to the next built-in function) manually a few times, just as the for loop does automatically; each time, you advance to the next subdirectory in the tree:

>>> gen = os.walk(r'C:\temp\test')
>>> gen.__next__()
('C:\\temp\\test', ['parts'], ['random.bin', 'spam.txt', 'temp.bin', 'temp.txt'])
>>> gen.__next__()
('C:\\temp\\test\\parts', [], ['part0001', 'part0002', 'part0003', 'part0004'])
>>> gen.__next__()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

The library manual documents os.walk further than we will here. For instance, it supports bottom-up instead of top-down walks with its optional topdown=False argument, and callers may prune tree branches by deleting names in the subdirectories lists of the yielded tuples.

Internally, the os.walk call generates filename lists at each level with the os.listdir call we met earlier, which collects both file and directory names in no particular order and returns them without their directory paths; os.walk segregates this list into subdirectories and files (technically, nondirectories) before yielding a result. Also note that walk uses the very same subdirectories list it yields to callers in order to later descend into subdirectories. Because lists are mutable objects that can be changed in place, if your code modifies the yielded subdirectory names list, it will impact what walk does next. For example, deleting directory names will prune traversal branches, and sorting the list will order the walk.
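For instance, here is a minimal sketch of that pruning trick; the set of directory names to skip is a made-up example, and the in-place slice assignment is what makes the change visible to the walker:

import os

skipdirs = {'.svn', 'CVS'}                            # hypothetical names to skip
for (dirname, subshere, fileshere) in os.walk('.'):
    subshere[:] = [d for d in subshere if d not in skipdirs]  # prune in place
    subshere.sort()                                   # and order the descent
    print(dirname, fileshere)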
Recursive os.listdir traversals

The os.walk tool does the work of tree traversals for us; we simply provide loop code with task-specific logic. However, it's sometimes more flexible and hardly any more work to do the walking ourselves. The following script recodes the directory listing script with a manual recursive traversal function (a function that calls itself to repeat its actions). The mylister function in this example is almost the same as lister in the os.walk version, but calls os.listdir to generate file paths manually and calls itself recursively to descend into subdirectories.

Example: PP4E\System\Filetools\lister_recur.py

# list files in dir tree by recursion

import sys, os

def mylister(currdir):
    print('[' + currdir + ']')
    for file in os.listdir(currdir):            # list files here
        path = os.path.join(currdir, file)      # add dir path back
        if not os.path.isdir(path):
            print(path)
        else:
            mylister(path)                      # recur into subdirs

if __name__ == '__main__':
    mylister(sys.argv[1])                       # dir name in cmdline

As usual, this file can be both imported and called or run as a script, though the fact that its result is printed text makes it less useful as an imported component unless its output stream is captured by another program.

When run as a script, this file's output is equivalent to that of the os.walk version, but not identical--unlike the os.walk version, our recursive walker here doesn't order the walk to visit files before stepping into subdirectories. It could by looping through the filenames list twice (selecting files first), but as coded, the order is dependent on os.listdir results. For most use cases, the walk order would be irrelevant:

C:\...\PP4E\System\Filetools> python lister_recur.py C:\temp\test
[C:\temp\test]
[C:\temp\test\parts]
C:\temp\test\parts\part0001
C:\temp\test\parts\part0002
C:\temp\test\parts\part0003
C:\temp\test\parts\part0004
C:\temp\test\random.bin
C:\temp\test\spam.txt
C:\temp\test\temp.bin
C:\temp\test\temp.txt

We'll make better use of most of this section's techniques in later examples and in this book at large. For example, scripts for copying and comparing directory trees use the tree-walker techniques introduced here; watch for these tools in action along the way. We'll also code a find utility that combines the tree traversal of os.walk with the filename pattern expansion of glob.glob.

Handling Unicode Filenames in 3.X: listdir, walk, glob

Because all normal strings are Unicode in Python 3.X, the directory and file names generated by os.listdir, os.walk, and glob.glob so far in this chapter are technically Unicode strings. This can have some ramifications if your directories contain unusual names that might not decode properly.
In short, to handle this, the os.listdir call works in two modes in 3.X: given a bytes argument, this function will return filenames as encoded byte strings; given a normal str string argument, it instead returns filenames as Unicode strings, decoded per the filesystem's encoding scheme:

C:\...\PP4E\System\Filetools> python
>>> import os
>>> os.listdir('.')[:4]
['bigext-tree.py', 'bigpy-dir.py', 'bigpy-path.py', 'bigpy-tree.py']
>>> os.listdir(b'.')[:4]
[b'bigext-tree.py', b'bigpy-dir.py', b'bigpy-path.py', b'bigpy-tree.py']

The byte string version can be used if undecodable file names may be present. Because os.walk and glob.glob both work by calling os.listdir internally, they inherit this behavior by proxy. The os.walk tree walker, for example, calls os.listdir at each directory level; passing byte string arguments suppresses decoding and returns byte string results:

>>> for (dir, subs, files) in os.walk('..'): print(dir)
...
..
..\Environment
..\Filetools
..\Processes

>>> for (dir, subs, files) in os.walk(b'..'): print(dir)
...
b'..'
b'..\\Environment'
b'..\\Filetools'
b'..\\Processes'

The glob.glob tool similarly calls os.listdir internally before applying name patterns, and so also returns undecoded byte string names for byte string arguments:

>>> import glob
>>> glob.glob('.\*')[:3]
['.\\bigext-out.txt', '.\\bigext-tree.py', '.\\bigpy-dir.py']
>>> glob.glob(b'.\*')[:3]
[b'.\\bigext-out.txt', b'.\\bigext-tree.py', b'.\\bigpy-dir.py']

Given a normal string name (as a command-line argument, for example), you can force the issue by converting to byte strings with manual encoding to suppress decoding:

>>> name = '.'
>>> os.listdir(name.encode())[:4]
[b'bigext-out.txt', b'bigext-tree.py', b'bigpy-dir.py', b'bigpy-path.py']

The upshot is that if your directories may contain names which cannot be decoded according to the underlying platform's Unicode encoding scheme, you may need to pass byte strings to these tools to avoid Unicode encoding errors. You'll get byte strings back, which may be less readable if printed, but you'll avoid errors while traversing directories and files.
This might be especially useful on a platform whose filesystem encoding is a simple scheme such as ASCII or Latin-1 but may contain files with arbitrarily encoded names from cross-machine copies, the Web, and so on. Depending upon context, exception handlers may be used to suppress some types of encoding errors as well. We'll see an example of how this can matter later, where an undecodable directory name generates an error if printed during a full disk scan (although that specific error seems more related to printing than to decoding in general).

In general, note that the basic open built-in function allows the name of the file being opened to be passed as either Unicode str or raw bytes, too, though this is used only to name the file initially; the additional mode argument determines whether the file's content is handled in text or binary modes. Passing a byte string filename allows you to name files with arbitrarily encoded names.

Unicode policies: file content versus file names

In fact, it's important to keep in mind that there are two different Unicode concepts related to files: the encoding of file content and the encoding of file name. Python provides your platform's defaults for these settings in two different attributes; on Windows:

>>> import sys
>>> sys.getdefaultencoding()      # file content encoding, platform default
'utf-8'
>>> sys.getfilesystemencoding()   # file name encoding, platform scheme
'mbcs'

These settings allow you to be explicit when needed--the content encoding is used when data is read and written to the file, and the name encoding is used when dealing with names prior to transferring data. In addition, using bytes for file name tools may work around incompatibilities with the underlying file system's scheme, and opening files in binary mode can suppress Unicode decoding errors for content.

As we've seen, though, opening text files in binary mode may also mean that the raw and still-encoded text will not match search strings as expected: search strings must also be byte strings encoded per a specific and possibly incompatible encoding scheme. In fact, this approach essentially mimics the behavior of text files in Python 2.X, and underscores why elevating Unicode in 3.X is generally desirable--such text files sometimes may appear to work even though they probably shouldn't. On the other hand, opening text in binary mode to suppress Unicode content decoding and avoid decoding errors might still be useful if you do not wish to skip undecodable files and content is largely irrelevant.
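To make the two dimensions concrete, here is a small sketch that exercises both at once; the file name and the encodings here are arbitrary choices for illustration:

>>> data = 'sp\xe4m'
>>> f = open('latin.txt', 'w', encoding='latin1')     # content: explicit encoding
>>> f.write(data)
4
>>> f.close()
>>> open(b'latin.txt', 'rb').read()                   # name as bytes, content raw
b'sp\xe4m'
>>> open('latin.txt', 'r', encoding='latin1').read()  # decoded back to str
'späm'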
As a rule of thumb, you should try to always provide an encoding name for text content if it might be outside the platform default, and you should rely on the default Unicode API for file names in most cases. Again, see Python's manuals for more on the Unicode file name story than we have space to cover fully here, and see Learning Python, Fourth Edition, for more on Unicode in general.

In later chapters, we're going to put the tools we met in this chapter to realistic use. For example, we'll apply file and directory tools to implement file splitters, testing systems, directory copies and compares, and a variety of utilities based on tree walking. We'll find that Python's directory tools we met here have an enabling quality that allows us to automate a large set of real-world tasks. First, though, the next chapter concludes our basic tool survey, by exploring another system topic that tends to weave its way into a wide variety of application domains--parallel processing in Python.
Parallel System Tools

"Telling the Monkeys What to Do"

Most computers spend a lot of time doing nothing. If you start a system monitor tool and watch the CPU utilization, you'll see what I mean--it's rare to see one hit 100 percent, even when you are running multiple programs. There are just too many delays built into software: disk accesses, network traffic, database queries, waiting for users to click a button, and so on. In fact, the majority of a modern CPU's capacity is often spent in an idle state; faster chips help absorb peaks in performance demand, but much of their power can go largely unused.

(To watch on Windows, click the Start button, select All Programs, Accessories, System Tools, and then Resource Monitor, and monitor CPU/processor usage; Task Manager's Performance tab may give similar results. The graph rarely climbed above single-digit percentages on my laptop machine while I was writing this footnote--at least until I typed while True: pass in a Python interactive session window.)

Early on in computing, programmers realized that they could tap into such unused processing power by running more than one program at the same time. By dividing the CPU's attention among a set of tasks, its capacity need not go to waste while any given task is waiting for an external event to occur. The technique is usually called parallel processing (and sometimes "multiprocessing" or even "multitasking") because many tasks seem to be performed at once, overlapping and parallel in time. It's at the heart of modern operating systems, and it gave rise to the notion of multiple-active-window computer interfaces we've all come to take for granted. Even within a single program, dividing processing into tasks that run in parallel can make the overall system faster, at least as measured by the clock on your wall.

Just as important is that modern software systems are expected to be responsive to users regardless of the amount of work they must perform behind the scenes. It's usually unacceptable for a program to stall while busy carrying out a request. Consider an email-browser user interface, for example; when asked to fetch email from a server, the program must download text from a server over a network. If you have enough email or a slow enough Internet link, that step alone can take minutes to finish. But while the download task proceeds, the program as a whole shouldn't stall--it must still respond
to screen redraws, mouse clicks, and so on. Parallel processing comes to the rescue here, too. By performing such long-running tasks in parallel with the rest of the program, the system at large can remain responsive no matter how busy some of its parts may be. Moreover, the parallel processing model is a natural fit for structuring such programs and others; some tasks are more easily conceptualized and coded as components running as independent, parallel entities.

There are two fundamental ways to get tasks running at the same time in Python--process forks and spawned threads. Functionally, both rely on underlying operating system services to run bits of Python code in parallel. Procedurally, they are very different in terms of interface, portability, and communication. For instance, at this writing direct process forks are not supported on Windows under standard Python (though they are under Cygwin Python on Windows).

By contrast, Python's thread support works on all major platforms. Moreover, the os.spawn family of calls provides additional ways to launch programs in a platform-neutral way that is similar to forks, and the os.popen and os.system calls and subprocess module we studied earlier can be used to portably spawn programs with shell commands. The newer multiprocessing module offers additional ways to run processes portably in many contexts.

In this chapter, which is a continuation of our look at system interfaces available to Python programmers, we explore Python's built-in tools for starting tasks in parallel, as well as communicating with those tasks. In some sense, we've already begun doing so--os.system, os.popen, and subprocess, which we learned and applied over the last three chapters, are a fairly portable way to spawn and speak with command-line programs, too. We won't repeat full coverage of those tools here. Instead, our emphasis in this chapter is on introducing more direct techniques--forks, threads, pipes, signals, sockets, and other launching techniques--and on using Python's built-in tools that support them, such as the os.fork call and the threading, queue, and multiprocessing modules. In the next chapter (and in the remainder of this book), we use these techniques in more realistic programs, so be sure you understand the basics here before flipping ahead.

One note up front: although the process, thread, and IPC mechanisms we will explore in this chapter are the primary parallel processing tools in Python scripts, the third-party domain offers additional options which may serve more advanced or specialized roles. As just one example, the MPI for Python system allows Python scripts to also employ the Message Passing Interface (MPI) standard, allowing Python programs to exploit multiple processors in various ways (see the Web for details). While such specific extensions are beyond our scope in this book, the fundamentals of multiprocessing that we will explore here should apply to more advanced techniques you may encounter in your parallel futures.
Forking Processes

Forked processes are a traditional way to structure parallel tasks, and they are a fundamental part of the Unix tool set. Forking is a straightforward way to start an independent program, whether it is different from the calling program or not. Forking is based on the notion of copying programs: when a program calls the fork routine, the operating system makes a new copy of that program and its process in memory and starts running that copy in parallel with the original. Some systems don't really copy the original program (it's an expensive operation), but the new copy works as if it were a literal copy.

After a fork operation, the original copy of the program is called the parent process, and the copy created by os.fork is called the child process. In general, parents can make any number of children, and children can create child processes of their own; all forked processes run independently and in parallel under the operating system's control, and children may continue to run after their parent exits.

This is probably simpler in practice than in theory, though. The Python script in the example below forks new child processes until you type the letter q at the console.

Example: PP4E\System\Processes\fork1.py

"forks child processes until you type 'q'"

import os

def child():
    print('Hello from child', os.getpid())
    os._exit(0)                                     # else goes back to parent loop

def parent():
    while True:
        newpid = os.fork()
        if newpid == 0:
            child()
        else:
            print('Hello from parent', os.getpid(), newpid)
        if input() == 'q': break

parent()

Python's process forking tools, available in the os module, are simply thin wrappers over standard forking calls in the system library also used by C language programs. To start a new, parallel process, call the os.fork built-in function. Because this function generates a copy of the calling program, it returns a different value in each copy: zero in the child process and the process ID of the new child in the parent.
scriptfor instanceruns the child function in child processes only because forking is ingrained in the unix programming modelthis script works well on unixlinuxand modern macs unfortunatelythis script won' work on the standard version of python for windows todaybecause fork is too much at odds with the windows model python scripts can always spawn threads on windowsand the multi processing module described later in this provides an alternative for running processes portablywhich can obviate the need for process forks on windows in contexts that conform to its constraints (albeit at some potential cost in low-level controlthe script in example - does work on windowshoweverif you use the python shipped with the cygwin system (or build one of your own from source-code with cygwin' librariescygwin is freeopen source system that provides full unix-like functionality for windows (and is described further in "more on cygwin python for windowson page you can fork with python on windows under cygwineven though its behavior is not exactly the same as true unix forks because it' close enough for this book' examplesthoughlet' use it to run our script live[ :\pp \system\processes]python fork py hello from parent hello from child hello from parent hello from child hello from parent hello from child these messages represent three forked child processesthe unique identifiers of all the processes involved are fetched and displayed with the os getpid call subtle pointthe child process function is also careful to exit explicitly with an os _exit call we'll discuss this call in more detail later in this but if it' not madethe child process would live on after the child function returns (rememberit' just copy of the original processthe net effect is that the child would go back to the loop in parent and start forking children of its own ( the parent would have grandchildrenif you delete the exit call and rerunyou'll likely have to type more than one to stopbecause multiple processes are running in the parent function in example - each process exits very soon after it startsso there' little overlap in time let' do something slightly more sophisticated to better illustrate multiple forked processes running in parallel example - starts up copies of itselfeach copy counting up to with one-second delay between iterations the time sleep standard library at least in the current python implementationcalling os fork in python script actually copies the python interpreter process (if you look at your process listyou'll see two python entries after forkbut since the python interpreter records everything about your running scriptit' ok to think of fork as copying your program directly it really will if python scripts are ever compiled to binary machine code parallel system tools
Example 5-2. PP4E\System\Processes\fork-count.py

"""
fork basics: start 5 copies of this program running in parallel with
the original; each copy counts up to 5 on the same stdout stream--forks
copy process memory, including file descriptors; fork doesn't currently
work on Windows without Cygwin: use os.spawnv or multiprocessing on
Windows instead; spawnv is roughly like a fork+exec combination;
"""

import os, time

def counter(count):                                # run in new process
    for i in range(count):
        time.sleep(1)                              # simulate real work
        print('[%s] => %s' % (os.getpid(), i))

for i in range(5):
    pid = os.fork()
    if pid != 0:
        print('Process %d spawned' % pid)          # in parent: continue
    else:
        counter(5)                                 # else in child/new process
        os._exit(0)                                # run function and exit

print('Main process exiting.')                     # parent need not wait

When run, this script starts 5 processes immediately and exits. All 5 forked processes check in with their first count display one second later and every second thereafter. Notice that child processes continue to run, even if the parent process that created them terminates (process IDs again appear as placeholders here):

[C:\...\PP4E\System\Processes]$ python fork-count.py
Process <pid1> spawned
Process <pid2> spawned
Process <pid3> spawned
Process <pid4> spawned
Process <pid5> spawned
Main process exiting.
[<pid1>] => 0
[<pid2>] => 0
[<pid3>] => 0
[<pid4>] => 0
[<pid5>] => 0
[<pid1>] => 1
[<pid2>] => 1
[<pid3>] => 1
...more output omitted...

The output of all of these processes shows up on the same screen, because all of them share the standard output stream (and a system prompt may show up along the way, too). Technically, a forked process gets a copy of the original process's global memory, including open file descriptors. Because of that, global objects like files start out with the same values in a child process, so all the processes here are tied to the same single stream. But it's important to remember that global memory is copied, not shared; if a child process changes a global object, it changes only its own copy. (As we'll see, this works differently in threads, the topic of the next section.)

The fork/exec combination

In Examples 5-1 and 5-2, child processes simply ran a function within the Python program and then exited. On Unix-like platforms, forks are often the basis of starting independently running programs that are completely different from the program that performed the fork call. For instance, Example 5-3 forks new processes until we type q again, but child processes run a brand-new program instead of calling a function in the same file.

Example 5-3. PP4E\System\Processes\fork-exec.py

"starts programs until you type 'q'"

import os
parm = 0

while True:
    parm += 1
    pid = os.fork()
    if pid == 0:                                               # copy process
        os.execlp('python', 'python', 'child.py', str(parm))   # overlay program
        assert False, 'error starting program'                 # shouldn't return
    else:
        print('Child is', pid)
        if input() == 'q': break

If you've done much Unix development, the fork/exec combination will probably look familiar. The main thing to notice is the os.execlp call in this code. In a nutshell, this call replaces (overlays) the program running in the current process with a brand new program. Because of that, the combination of os.fork and os.execlp means start a new process and run a new program in that process--in other words, launch a new program in parallel with the original program.
The arguments to os.execlp specify the program to be run by giving the command-line arguments used to start the program (i.e., what Python scripts know as sys.argv). If successful, the new program begins running and the call to os.execlp itself never returns (since the original program has been replaced, there's really nothing to return to). If the call does return, an error has occurred, so we code an assert after it that will always raise an exception if reached.

There are a handful of os.exec variants in the Python standard library; some allow us to configure environment variables for the new program, pass command-line arguments in different forms, and so on. All are available on both Unix and Windows, and they replace the calling program (i.e., the Python interpreter). exec comes in eight flavors, which can be a bit confusing unless you generalize:

os.execv(program, commandlinesequence)
    The basic "v" exec form is passed an executable program's name, along with a list or tuple of command-line argument strings used to run the executable (that is, the words you would normally type in a shell to start a program).

os.execl(program, cmdarg1, cmdarg2,... cmdargN)
    The basic "l" exec form is passed an executable's name, followed by one or more command-line arguments passed as individual function arguments. This is the same as os.execv(program, (cmdarg1, cmdarg2,...)).

os.execlp
os.execvp
    Adding the letter p to the execv and execl names means that Python will locate the executable's directory using your system search-path setting (i.e., PATH).

os.execle
os.execve
    Adding a letter e to the execv and execl names means an extra, last argument is a dictionary containing shell environment variables to send to the program.

os.execvpe
os.execlpe
    Adding the letters p and e to the basic exec names means to use the search path and to accept a shell environment settings dictionary.

So when the script in Example 5-3 calls os.execlp, individually passed parameters specify a command line for the program to be run, and the word python maps to an executable file according to the underlying system search-path setting environment variable (PATH). It's as if we were running a command of the form python child.py 1 in a shell, but with a different command-line argument on the end each time. A quick sketch of a few of the other variants appears below.
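To make the naming scheme more concrete, here is a minimal sketch of two of the other variants, coded under the same fork model; the script name and extra environment variable here are illustrative assumptions, not code from Example 5-3:

import os

args = ['python', 'child.py', '1']            # argv list for the new program
env  = {'PATH': os.environ['PATH'],           # keep PATH so lookup still works
        'MODE': 'test'}                       # a hypothetical extra setting

if os.fork() == 0:
    os.execvp('python', args)                 # "v": args as a sequence; "p": search PATH

if os.fork() == 0:
    os.execvpe('python', args, env)           # "e": pass an explicit environment too

As in Example 5-3, neither exec call returns in the child if it succeeds; the new program simply takes over the forked process.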
Just as when it is typed at a shell, the string of arguments passed to os.execlp by the fork-exec script in Example 5-3 starts another Python program file, as shown in Example 5-4.

Example 5-4. PP4E\System\Processes\child.py

import os, sys
print('Hello from child', os.getpid(), sys.argv[1])

Here is this code in action on Linux. It doesn't look much different from the original fork1.py, but it's really running a new program in each forked process. More observant readers may notice that the child process ID displayed is the same in the parent program and the launched child.py program: os.execlp simply overlays a program in the same process (IDs are again shown as placeholders):

[C:\...\PP4E\System\Processes]$ python fork-exec.py
Child is <pid1>
Hello from child <pid1> 1
Child is <pid2>
Hello from child <pid2> 2
Child is <pid3>
Hello from child <pid3> 3
q

There are other ways to start up programs in Python besides the fork/exec combination. For example, the os.system and os.popen calls and subprocess module, which we explored earlier in this book, allow us to spawn shell commands. And the os.spawnv call and multiprocessing module, which we'll meet later in this chapter, allow us to start independent programs and processes more portably. In fact, we'll see later that multiprocessing's process spawning model can be used as a sort of portable replacement for os.fork in some contexts (albeit a less efficient one) and used in conjunction with the os.exec* calls shown here to achieve a similar effect in standard Windows Python.

We'll see more process fork examples later in this chapter, especially in the program exits and process communication sections, so we'll forgo additional examples here. We'll also discuss additional process topics in later chapters of this book. For instance, forks are revisited when we meet network servers, to deal with their zombies--dead processes lurking in system tables after their demise. For now, let's move on to threads, a subject which at least some programmers find to be substantially less frightening.
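Before we do, here is a minimal preview sketch of the more portable os.spawnv alternative just mentioned; the child.py script reference is carried over from Example 5-4, and details of the returned value vary per platform:

import os, sys

# P_NOWAIT: return the new process's ID (a handle on Windows) immediately,
# rather than waiting for the spawned program to exit
pid = os.spawnv(os.P_NOWAIT, sys.executable,
                [sys.executable, 'child.py', '1'])
print('Spawned', pid)

Unlike fork/exec, this call works in standard Windows Python as well as on Unix; we'll return to the os.spawn family later in this chapter.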
More on Cygwin Python for Windows

As mentioned, the os.fork call is present in the Cygwin version of Python on Windows. Even though this call is missing in the standard version of Python for Windows, you can fork processes on Windows with Python if you install and use Cygwin. However, the Cygwin fork call is not as efficient and does not work exactly the same as a fork on true Unix systems.

Cygwin is a free, open source package which includes a library that attempts to provide a Unix-like API for use on Windows machines, along with a set of command-line tools that implement a Unix-like environment. It makes it easier to apply Unix skills and code on Windows computers.

According to its FAQ documentation, though, "Cygwin fork() essentially works like a noncopy-on-write version of fork() (like old Unix versions used to do)." Because of this it can be a little slow, and in most cases, you are better off using the spawn family of calls if possible. Since this book's fork examples don't need to care about performance, Cygwin's fork suffices.

In addition to the fork call, Cygwin provides other Unix tools that would otherwise not be available on all flavors of Windows, including os.mkfifo (discussed later in this chapter). It also comes with a gcc compiler environment for building C extensions for Python on Windows that will be familiar to Unix developers. As long as you're willing to use Cygwin libraries to build your application and power your Python, it's very close to Unix on Windows.

Like all third-party libraries, though, Cygwin adds an extra dependency to your systems. Perhaps more critically, Cygwin currently uses the GNU GPL license, which adds distribution requirements beyond those of standard Python. Unlike using Python itself, shipping a program that uses Cygwin libraries may require that your program's source code be made freely available (though RedHat offers a "buy-out" option which can relieve you of this requirement). Note that this is a complex legal issue, and you should study Cygwin's license on your own if it may impact your programs. Its license does, however, impose more constraints than Python's (Python uses a "BSD"-style license, not the GPL).

Despite the licensing issue, Cygwin still can be a great way to get Unix-like functionality on Windows without installing a completely different operating system such as Linux--a more complete but generally more complex option. For more details, see Cygwin's website.

See also the standard library's multiprocessing module and os.spawn family of calls, covered later in this chapter, for alternative ways to start parallel tasks and programs on Unix and Windows that do not require fork and exec calls. To run a simple function call in parallel on Windows (rather than an external program), also see the section on standard library threads later in this chapter. Threads, multiprocessing, and os.spawn calls work on Windows in standard Python.

At this writing, Cygwin's
official Python was still a 2.X release; to get Python 3.X under Cygwin, it had to be built from its source code. If this is still required when you read this, make sure you have gcc and make installed on your Cygwin, then fetch Python's source code package from python.org, unpack it, and build Python with the usual commands:

./configure
make
make test
sudo make install

This will install Python as python3. The same procedure works on all Unix-like platforms; on OS X and Cygwin, the executable is called python.exe, elsewhere it's named python. You can generally skip the last two of the above steps if you're willing to run Python out of your own build directory. Be sure to also check whether Python 3.X is a standard Cygwin package by the time you read this. When building from source you may have to tweak a few files (I had to comment out a #define in Modules/main.c), but these are too specific and temporal to get into here.

Threads

Threads are another way to start activities running at the same time. In short, they run a call to a function (or any other type of callable object) in parallel with the rest of the program. Threads are sometimes called "lightweight processes," because they run in parallel like forked processes, but all of them run within the same single process. While processes are commonly used to start independent programs, threads are commonly used for tasks such as nonblocking input calls and long-running tasks in a GUI. They also provide a natural model for algorithms that can be expressed as independently running tasks. For applications that can benefit from parallel processing, some developers consider threads to offer a number of advantages:

Performance
    Because all threads run within the same process, they don't generally incur a big startup cost to copy the process itself. The costs of both copying forked processes and running threads can vary per platform, but threads are usually considered less expensive in terms of performance overhead.

Simplicity
    To many observers, threads can be noticeably simpler to program, too, especially when some of the more complex aspects of processes enter the picture (e.g., process exits, communication schemes, and zombie processes, covered later in this book).

Shared global memory
    On a related note, because threads run in a single process, every thread shares the same global memory space of the process. This provides a natural and easy way for threads to communicate--by fetching and setting names or objects accessible to all the threads. To the Python programmer, this means that global
    components such as imported modules are shared among all threads in a program; if one thread assigns a global variable, for instance, its new value will be seen by other threads. Some care must be taken to control access to shared items, but to some this seems generally simpler to use than the process communication tools necessary for forked processes, which we'll meet later in this chapter and book (e.g., pipes, streams, signals, sockets, etc.). Like much in programming, this is not a universally shared view, however, so you'll have to weigh the difference for your programs and platforms yourself.

Portability
    Perhaps most important is the fact that threads are more portable than forked processes. At this writing, os.fork is not supported by the standard version of Python on Windows, but threads are. If you want to run parallel tasks portably in a Python script today and you are unwilling or unable to install a Unix-like library such as Cygwin on Windows, threads may be your best bet. Python's thread tools automatically account for any platform-specific thread differences, and they provide a consistent interface across all operating systems. Having said that, the relatively new multiprocessing module described later in this chapter offers another answer to the process portability issue in some use cases.

So what's the catch? There are three potential downsides you should be aware of before you start spinning your threads:

Function calls versus programs
    First of all, threads are not a way--at least, not a direct way--to start up another program. Rather, threads are designed to run a call to a function (technically, any callable, including bound and unbound methods) in parallel with the rest of the program. As we saw in the prior section, by contrast, forked processes can either call a function or start a new program. Naturally, the threaded function can run scripts with the exec built-in function and can start new programs with tools such as os.system, os.popen, and the subprocess module, especially if doing so is itself a long-running task. But fundamentally, threads run in-program functions.

    In practice, this is usually not a limitation: for many applications, parallel functions are sufficiently powerful. For instance, if you want to implement nonblocking input and output and avoid blocking a GUI or its users with long-running tasks, threads do the job; simply spawn a thread to run a function that performs the potentially long-running task. The rest of the program will continue independently.

Thread synchronization and queues
    Secondly, the fact that threads share objects and names in global process memory is both good news and bad news--it provides a communication mechanism, but we have to be careful to synchronize a variety of operations. As we'll see, even operations such as printing are a potential conflict since there is only one sys.stdout per process, which is shared by all threads.
    Luckily, realistic threaded programs are usually structured as one or more producer (a.k.a. worker) threads that add data to a queue, along with one or more consumer threads that take the data off the queue and process it. In a typical threaded GUI, for example, producers may download or compute data and place it on the queue; the consumer--the main GUI thread--checks the queue for data periodically with a timer event and displays it in the GUI when it arrives. Because the shared queue is thread-safe, programs structured this way automatically synchronize much cross-thread data communication.

The global interpreter lock (GIL)
    Finally, as we'll learn in more detail later in this section, Python's implementation of threads means that only one thread is ever really running its Python language code in the Python virtual machine at any point in time. Python threads are true operating system threads, but all threads must acquire a single shared lock when they are ready to run, and each thread may be swapped out after running for a short period of time (currently, after a set number of virtual machine instructions, though this implementation may change in Python 3.2).

    Because of this structure, the Python language parts of Python threads cannot today be distributed across multiple CPUs on a multi-CPU computer. To leverage more than one CPU, you'll simply need to use process forking, not threads (the amount and complexity of code required for both are roughly the same). Moreover, the parts of a thread that perform long-running tasks implemented as C extensions can run truly independently if they release the GIL to allow the Python code of other threads to run while their task is in progress. Python code, however, cannot truly overlap in time.

    The advantage of Python's implementation of threads is performance--when it was attempted, making the virtual machine truly thread-safe reportedly slowed all programs by a factor of two on Windows and by an even larger factor on Linux. Even nonthreaded programs ran at half speed.

Even though the GIL's multiplexing of Python language code makes Python threads less useful for leveraging capacity on multiple CPU machines, threads are still useful as programming tools to implement nonblocking operations, especially in GUIs. Moreover, the newer multiprocessing module we'll meet later offers another solution here, too--by providing a portable thread-like API that is implemented with processes, programs can both leverage the simplicity and programmability of threads and benefit from the scalability of independent processes across CPUs.

Despite what you may think after reading the preceding overview, threads are remarkably easy to use in Python. In fact, when a program is started it is already running a thread, usually called the "main thread" of the process. To start new, independent threads of execution within a process, Python code uses either the low-level _thread module to run a function call in a spawned thread, or the higher-level threading module to manage threads with class-based objects. Both modules also provide tools
for synchronizing access to shared objects with locks.

This book presents both the _thread and threading modules, and its examples use both interchangeably. Some Python users would recommend that you always use threading rather than _thread in general. In fact, the latter was renamed from thread to _thread in 3.X to suggest such a lesser status for it. Personally, I think that is too extreme (and this is one reason this book sometimes uses as thread in imports to retain the original module name). Unless you need the more powerful tools in threading, the choice is largely arbitrary, and the threading module's extra requirements may be unwarranted.

The basic _thread module does not impose OOP, and as you'll see in the examples of this section, is very straightforward to use. The threading module may be better for more complex tasks which require per-thread state retention or joins, but not all threaded programs require its extra tools, and many use threads in more limited scopes. In fact, this is roughly the same as comparing the os.walk call and visitor classes we'll meet later--both have valid audiences and use cases. The most general Python rule of thumb applies here as always: keep it simple, unless it has to be complex.

The _thread module

Since the basic _thread module is a bit simpler than the more advanced threading module covered later in this section, let's look at some of its interfaces first. This module provides a portable interface to whatever threading system is available in your platform: its interfaces work the same on Windows, Solaris, SGI, and any system with an installed pthreads POSIX threads implementation (including Linux and others). Python scripts that use the Python _thread module work on all of these platforms without changing their source code.

Basic usage

Let's start off by experimenting with a script that demonstrates the main thread interfaces. The script in Example 5-5 spawns threads until you reply with a q at the console; it's similar in spirit to (and a bit simpler than) the script in Example 5-1, but it goes parallel with threads instead of process forks.

Example 5-5. PP4E\System\Threads\thread1.py

"spawn threads until you type 'q'"

import _thread

def child(tid):
    print('Hello from thread', tid)
def parent():
    i = 0
    while True:
        i += 1
        _thread.start_new_thread(child, (i,))
        if input() == 'q': break

parent()

This script really contains only two thread-specific lines: the import of the _thread module and the thread creation call. To start a thread, we simply call the _thread.start_new_thread function, no matter what platform we're programming on.* This call takes a function (or other callable) object and an arguments tuple and starts a new thread to execute a call to the passed function with the passed arguments. It's almost like Python's function(*args) call syntax, and similarly accepts an optional keyword arguments dictionary, too, but in this case the function call begins running in parallel with the rest of the program.

Operationally speaking, the _thread.start_new_thread call itself returns immediately with no useful value, and the thread it spawns silently exits when the function being run returns (the return value of the threaded function call is simply ignored). Moreover, if a function run in a thread raises an uncaught exception, a stack trace is printed and the thread exits, but the rest of the program continues. With the _thread module, the entire program exits silently on most platforms when the main thread does (though as we'll see later, the threading module may require special handling if child threads are still running).

In practice, though, it's almost trivial to use threads in a Python script. Let's run this program to launch a few threads; we can run it on both Unix-like platforms and Windows this time, because threads are more portable than process forks--here it is spawning threads on Windows:

C:\...\PP4E\System\Threads> python thread1.py
Hello from thread 1

Hello from thread 2

Hello from thread 3

Hello from thread 4
q

*The _thread examples in this book now all use start_new_thread. This call is also available as thread.start_new for historical reasons, but this synonym may be removed in a future Python release. As of Python 3.1, both names are still available, but the help documentation for start_new claims that it is obsolete; in other words, you should probably prefer the other if you care about the future (and this book must!).
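As a quick sketch of the optional keyword-arguments dictionary just mentioned, the following passes both positional and keyword arguments to a spawned thread's function; the function and argument names here are illustrative only:

import _thread, time

def action(a, b, flag=False):
    print(a, b, flag)

_thread.start_new_thread(action, (1, 2), {'flag': True})   # kwargs in a dict
time.sleep(1)        # give the spawned thread time to run before main exits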
Other ways to code threads with _thread

Although the preceding script runs a simple function, any callable object may be run in the thread, because all threads live in the same process. For instance, a thread can also run a lambda function or a bound method of an object (the following code is part of file thread-alts.py in the book examples package):

import _thread                                        # all 3 print 4294967296

def action(i):                                        # function run in threads
    print(i ** 32)

class Power:
    def __init__(self, i):
        self.i = i
    def action(self):                                 # bound method run in threads
        print(self.i ** 32)

_thread.start_new_thread(action, (2,))                # simple function

_thread.start_new_thread((lambda: action(2)), ())     # lambda function to defer

obj = Power(2)
_thread.start_new_thread(obj.action, ())              # bound method object

As we'll see in larger examples later in this book, bound methods are especially useful in this role--because they remember both the method function and the instance object, they also give access to state information and class methods for use within and during the thread.

More fundamentally, because threads all run in the same process, bound methods run by threads reference the original in-process instance object, not a copy of it. Hence, any changes to its state made in a thread will be visible to all threads automatically. Moreover, since bound methods of a class instance pass for callables interchangeably with simple functions, using them in threads this way just works. And as we'll see later, the fact that they are normal objects also allows them to be stored freely on shared queues.

Running multiple threads

To really understand the power of threads running in parallel, though, we have to do something more long-lived in our threads, just as we did earlier for processes. Let's mutate the fork-count program of the prior section to use threads instead. The script in Example 5-6 starts 5 copies of its counter function running in parallel threads.

Example 5-6. PP4E\System\Threads\thread-count.py

"""
thread basics: start 5 copies of a function running in parallel;
thread outputs may be intermixed in this version arbitrarily
"""

import _thread as thread, time

def counter(myId, count):                       # function run in threads
    for i in range(count):
        time.sleep(1)                           # simulate real work
        print('[%s] => %s' % (myId, i))

for i in range(5):                              # spawn 5 threads
    thread.start_new_thread(counter, (i, 5))    # each thread loops 5 times

time.sleep(6)
print('Main thread exiting.')                   # don't exit too early

Each parallel copy of the counter function simply counts from zero up to four here and prints a message to standard output for each count.

Notice how this script sleeps for 6 seconds at the end. On the Windows and Linux machines this has been tested on, the main thread shouldn't exit while any spawned threads are running if it cares about their work; if it does exit, all spawned threads are immediately terminated. This differs from processes, where spawned children live on when parents exit. Without the sleep here, the spawned threads would die almost immediately after they are started.

This may seem ad hoc, but it isn't required on all platforms, and programs are usually structured such that the main thread naturally lives as long as the threads it starts. For instance, a user interface may start an FTP download running in a thread, but the download lives a much shorter life than the user interface itself. Later in this section, we'll also see different ways to avoid this sleep using global locks and flags that let threads signal their completion.

Moreover, we'll later find that the threading module both provides a join method that lets us wait for spawned threads to finish explicitly, and refuses to allow a program to exit at all if any of its normal threads are still running (which may be useful in this case, but can require extra work to shut down in others). The multiprocessing module we'll meet later in this chapter also allows spawned children to outlive their parents, though this is largely an artifact of its process-based model.

Now, when Example 5-6 is run on Windows under Python 3.1, here is the output I get (ordering can vary per run):

C:\...\PP4E\System\Threads> python thread-count.py
[1] => 0
[0] => 0
[2] => 0
[3] => 0
[4] => 0
[1] => 1
[0] => 1
[3] => 1
[2] => 1
[4] => 1
[1] => 2
[0] => 2
[3] => 2
[2] => 2
[4] => 2
[1] => 3
[0] => 3
...more output omitted...
Main thread exiting.

If this looks odd, it's because it should. In fact, this demonstrates probably the most unusual aspect of threads. What's happening here is that the output of the 5 threads run in parallel is intermixed--because all the threaded function calls run in the same process, they all share the same standard output stream (in Python terms, there is just one sys.stdout file between them, which is where printed text is sent). The net effect is that their output can be combined and confused arbitrarily. In fact, this script's output can differ on each run. This jumbling of output grew even more pronounced in Python 3.X, presumably due to its new file output implementation.

More fundamentally, when multiple threads can access a shared resource like this, their access must be synchronized to avoid overlap in time--as explained in the next section.

Synchronizing access to shared objects and names

One of the nice things about threads is that they automatically come with a cross-task communications mechanism: objects and namespaces in a process that span the life of threads are shared by all spawned threads. For instance, because every thread runs in the same process, if one Python thread changes a global scope variable, the change can be seen by every other thread in the process, main or child. Similarly, threads can share and change mutable objects in the process's memory as long as they hold a reference to them (e.g., passed-in arguments). This serves as a simple way for a program's threads to pass information--exit flags, result objects, event indicators, and so on--back and forth to each other.

The downside to this scheme is that our threads must sometimes be careful to avoid changing global objects and names at the same time. If two threads may change a shared object at once, it's not impossible that one of the two changes will be lost (or worse, will corrupt
the work done so far by another thread whose operations are still in progress). The extent to which this becomes an issue varies per application, and sometimes it isn't an issue at all.

But even things that aren't obviously at risk may be at risk. Files and streams, for example, are shared by all threads in a program; if multiple threads write to one stream at the same time, the stream might wind up with interleaved, garbled data. Example 5-6 of the prior section was a simple demonstration of this phenomenon in action, but it's indicative of the sorts of clashes in time that can occur when our programs go parallel. Even simple changes can go awry if they might happen concurrently. To be robust, threaded programs need to control access to shared global items like these so that only one thread uses them at once.

Luckily, Python's _thread module comes with its own easy-to-use tools for synchronizing access to objects shared among threads. These tools are based on the concept of a lock--to change a shared object, threads acquire a lock, make their changes, and then release the lock for other threads to grab. Python ensures that only one thread can hold a lock at any point in time; if others request it while it's held, they are blocked until the lock becomes available. Lock objects are allocated and processed with simple and portable calls in the _thread module that are automatically mapped to thread locking mechanisms on the underlying platform.

For instance, in Example 5-7, a lock object created by _thread.allocate_lock is acquired and released by each thread around the print call that writes to the shared standard output stream.

Example 5-7. PP4E\System\Threads\thread-count-mutex.py

"""
synchronize access to stdout: because it is shared global,
thread outputs may be intermixed if not synchronized
"""

import _thread as thread, time

def counter(myId, count):                  # function run in threads
    for i in range(count):
        time.sleep(1)                      # simulate real work
        mutex.acquire()
        print('[%s] => %s' % (myId, i))    # print isn't interrupted now
        mutex.release()

mutex = thread.allocate_lock()             # make a global lock object

for i in range(5):                         # spawn 5 threads
    thread.start_new_thread(counter, (i, 5))

time.sleep(6)
print('Main thread exiting.')              # don't exit too early

Really, this script simply augments Example 5-6 to synchronize prints with a thread lock. The net effect of the additional lock calls in this script is that no two threads will ever execute a print at the same point in time; the lock guarantees mutually exclusive
access to the stdout stream. Hence, the output of this script is similar to that of the original version, except that standard output text is never mangled by overlapping prints:

C:\...\PP4E\System\Threads> thread-count-mutex.py
[0] => 0
[1] => 0
[2] => 0
[3] => 0
[4] => 0
[0] => 1
[1] => 1
[2] => 1
[3] => 1
[4] => 1
[0] => 2
[1] => 2
[2] => 2
[3] => 2
[4] => 2
...more output omitted...
[0] => 4
[1] => 4
[2] => 4
[3] => 4
[4] => 4
Main thread exiting.

Though somewhat platform-specific, the order in which the threads check in with their prints may still be arbitrary from run to run because they execute in parallel (getting work done in parallel is the whole point of threads, after all); but they no longer collide in time while printing their text. We'll see other cases where the lock idiom comes into play later in this chapter--it's a core component of the multithreading model.

Waiting for spawned thread exits

Besides avoiding print collisions, thread module locks are surprisingly useful. They can form the basis of higher-level synchronization paradigms (e.g., semaphores) and can be used as general thread communication devices.* For instance, Example 5-8 uses a global list of locks to know when all child threads have finished.

*They cannot, however, be used to directly synchronize processes. Since processes are more independent, they usually require locking mechanisms that are more long-lived and external to programs. The os.open call with an open flag of O_EXCL, shown earlier in this book, allows scripts to lock and unlock files and so is ideal as a cross-process locking tool. See also the synchronization tools in the multiprocessing and threading modules and the IPC section later in this chapter for other general synchronization ideas.
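As a rough sketch of the file-based, cross-process locking technique the preceding footnote mentions--the lock filename here is an illustrative assumption, and OSError is used for compatibility with the Python version used in this book:

import os

try:
    fd = os.open('app.lock', os.O_CREAT | os.O_EXCL | os.O_RDWR)
except OSError:                     # exists: another process holds the lock
    print('lock already held')
else:
    try:
        print('lock acquired')      # critical section goes here
    finally:
        os.close(fd)
        os.unlink('app.lock')       # release: let other processes acquire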
""uses mutexes to know when threads are done in parent/main threadinstead of time sleeplock stdout to avoid comingled prints""import _thread as thread stdoutmutex thread allocate_lock(exitmutexes [thread allocate_lock(for in range( )def counter(myidcount)for in range(count)stdoutmutex acquire(print('[% =% (myidi)stdoutmutex release(exitmutexes[myidacquire(signal main thread for in range( )thread start_new_thread(counter( )for mutex in exitmutexeswhile not mutex locked()pass print('main thread exiting ' lock' locked method can be used to check its state to make this workthe main thread makes one lock per child and tacks them onto global exitmutexes list (rememberthe threaded function shares global scope with the main threadon exiteach thread acquires its lock on the listand the main thread simply watches for all locks to be acquired this is much more accurate than naively sleeping while child threads run in hopes that all will have exited after the sleep run this on your own to see its output--all spawned threads count up to (they run in arbitrarily interleaved order that can vary per run and platformbut their prints run atomically and do not comingle)before the main thread exits depending on how your threads runthis could be even simplersince threads share global memory anyhowwe can usually achieve the same effect with simple global list of integers instead of locks in example - the module' namespace (scopeis shared by top-level code and the threaded functionas before exitmutexes refers to the same list object in the main thread and all threads it spawns because of thatchanges made in thread are still noticed in the main thread without resorting to extra locks example - pp \system\threads\thread-count-wait py ""uses simple shared global data (not mutexesto know when threads are done in parent/main threadthreads share list but not its itemsassumes list won' move in memory once it has been created initially ""import _thread as thread stdoutmutex thread allocate_lock( parallel system tools
def counter(myId, count):
    for i in range(count):
        stdoutmutex.acquire()
        print('[%s] => %s' % (myId, i))
        stdoutmutex.release()
    exitmutexes[myId] = True           # signal main thread

for i in range(10):
    thread.start_new_thread(counter, (i, 100))

while False in exitmutexes: pass
print('Main thread exiting.')

The output of this script is similar to the prior--10 threads counting to 100 in parallel and synchronizing their prints along the way. In fact, both of the last two counting thread scripts produce roughly the same output as the original thread-count.py, albeit without stdout corruption and with larger counts and different random orderings of output lines. The main difference is that the main thread exits immediately after (and no sooner than!) the spawned child threads:

C:\...\PP4E\System\Threads> python thread-count-wait2.py
...more output deleted...
[2] => 98
[6] => 98
[5] => 98
[4] => 98
[8] => 98
[0] => 99
[7] => 99
[9] => 99
[1] => 99
[3] => 99
[2] => 99
Main thread exiting.

Coding alternatives: busy loops, arguments, and context managers

Notice how the main threads of both of the last two scripts fall into busy-wait loops at the end, which might become significant performance drains in tight applications. If so, simply add a time.sleep call in the wait loops to insert a pause between end tests and to free up the CPU for other tasks; this call pauses the calling thread only (in this case, the main one). You might also try experimenting with adding sleep calls to the thread function to simulate real work.

Passing the lock objects to the thread function as arguments instead of referencing them in the
global scope might be more coherent, too. When passed in, all threads reference the same object, because they are all part of the same process. Really, the process's object memory is shared memory for threads, regardless of how objects in that shared memory are referenced (whether through global scope variables, passed argument names, object attributes, or another way).

And while we're at it, the with statement can be used to ensure thread operations around a nested block of code, much like its use to ensure file closure in the prior chapter. The thread lock's context manager acquires the lock on with statement entry and releases it on statement exit regardless of exception outcomes. The net effect is to save one line of code, but also to guarantee lock release when exceptions are possible. Example 5-10 adds all these coding alternatives to our threaded counter script.

Example 5-10. PP4E\System\Threads\thread-count-wait3.py

"""
passed in mutex object shared by all threads instead of globals;
use with context manager statement for auto acquire/release;
sleep calls added to avoid busy loops and simulate real work
"""

import _thread as thread, time

stdoutmutex = thread.allocate_lock()
numthreads = 5
exitmutexes = [thread.allocate_lock() for i in range(numthreads)]

def counter(myId, count, mutex):              # shared object passed in
    for i in range(count):
        time.sleep(1 / (myId+1))              # diff fractions of second
        with mutex:                           # auto acquire/release: with
            print('[%s] => %s' % (myId, i))
    exitmutexes[myId].acquire()               # global: signal main thread

for i in range(numthreads):
    thread.start_new_thread(counter, (i, 5, stdoutmutex))

while not all(mutex.locked() for mutex in exitmutexes): time.sleep(0.25)
print('Main thread exiting.')

When run, the different sleep times per thread make them run more independently:

C:\...\PP4E\System\Threads> thread-count-wait3.py
[4] => 0
[3] => 0
[2] => 0
[4] => 1
[1] => 0
[3] => 1
[4] => 2
[2] => 1
[3] => 2
[4] => 3
[4] => 4
[0] => 0
[3] => 3
[2] => 2
[1] => 1
[3] => 4
[2] => 3
[1] => 2
[2] => 4
[0] => 1
[1] => 3
[0] => 2
[1] => 4
[0] => 3
[0] => 4
Main thread exiting.

Of course, threads are for much more than counting. We'll put shared global data to more practical use in a later section, "Adding a user interface," where it will serve as completion signals from child processing threads transferring data over a network to a main thread controlling a tkinter GUI display, and again in later chapters' ThreadTools and PyMailGUI examples to post results of email operations to a GUI (watch for "Preview: GUIs and threads" ahead for more pointers on this topic). Global data shared among threads also turns out to be the basis of queues, which are discussed later in this chapter; each thread gets or puts data using the same shared queue object.

The threading module

The Python standard library comes with two thread modules: _thread, the basic lower-level interface illustrated thus far, and threading, a higher-level interface based on objects and classes. The threading module internally uses the _thread module to implement objects that represent threads and common synchronization tools. It is loosely based on a subset of the Java language's threading model, but it differs in ways that only Java programmers would notice.* Example 5-11 morphs our counting threads example again to demonstrate this new module's interfaces.

*But in case this means you: Python's lock and condition variables are distinct objects, not something inherent in all objects, and Python's Thread class doesn't have all the features of Java's. See Python's library manual for further details.

Example 5-11. PP4E\System\Threads\thread-classes.py

"""
thread class instances with state and run() for thread's action;
uses higher-level Java-like threading module object join method (not
mutexes or shared global vars) to know when threads are done in main
parent thread; see library manual for more details on threading;
"""

import threading

class MyThread(threading.Thread):          # subclass Thread object
    def __init__(self, myId, count, mutex):
        self.myId = myId                   # per-thread state information
        self.count = count                 # shared objects, not globals
        self.mutex = mutex
        threading.Thread.__init__(self)

    def run(self):                         # run provides thread logic
        for i in range(self.count):        # still sync stdout access
            with self.mutex:
                print('[%s] => %s' % (self.myId, i))

stdoutmutex = threading.Lock()             # same as thread.allocate_lock()
threads = []
for i in range(10):
    thread = MyThread(i, 100, stdoutmutex) # make/start 10 threads
    thread.start()                         # starts run method in a thread
    threads.append(thread)

for thread in threads:
    thread.join()                          # wait for thread exits
print('Main thread exiting.')

The output of this script is the same as that shown for its ancestors earlier (again, threads may be randomly distributed in time, depending on your platform):

C:\...\PP4E\System\Threads> python thread-classes.py
...more output deleted...
[4] => 98
[8] => 98
[2] => 99
[6] => 99
[0] => 99
[7] => 99
[1] => 99
[9] => 99
[3] => 99
[5] => 99
Main thread exiting.

Using the threading module this way is largely a matter of specializing classes. Threads in this module are implemented with a Thread object, a Python class which we may customize per application by providing a run method that defines the thread's action. For example, this script subclasses Thread with its own MyThread class; the run method will be executed by the Thread framework in a new thread when we make a MyThread and call its start method.
The advantage of taking this more coding-intensive route is that we get both per-thread state information (the usual instance attribute namespace), and a set of additional thread-related tools from the framework "for free." The Thread.join method used near the end of this script, for instance, waits until the thread exits (by default); we can use this method to prevent the main thread from exiting before its children, rather than using the time.sleep calls and global locks and variables we relied on in earlier threading examples.

The example script also uses threading.Lock to synchronize stream access as before (though this name is really just a synonym for _thread.allocate_lock in the current implementation). The threading module may provide the extra structure of classes, but it doesn't remove the specter of concurrent updates in the multithreading model in general.

Other ways to code threads with threading

The Thread class can also be used to start a simple function, or any other type of callable object, without coding subclasses at all--if not redefined, the Thread class's default run method simply calls whatever you pass to its constructor's target argument, with any provided arguments passed to args (which defaults to () for none). This allows us to use Thread to run simple functions, too, though this call form is not noticeably simpler than the basic _thread module. For instance, the following code snippets sketch four different ways to spawn the same sort of thread (see four-threads.py in the examples tree; you can run all four in the same script, but would have to also synchronize prints to avoid overlap):

import threading, _thread

def action(i):
    print(i ** 32)

# subclass with state
class Mythread(threading.Thread):
    def __init__(self, i):
        self.i = i
        threading.Thread.__init__(self)
    def run(self):                                      # redefine run for action
        print(self.i ** 32)

Mythread(2).start()                                     # start invokes run()

# pass action in
thread = threading.Thread(target=(lambda: action(2)))  # run invokes target
thread.start()

# same but no lambda wrapper for state
threading.Thread(target=action, args=(2,)).start()      # callable plus its args

# basic thread module
_thread.start_new_thread(action, (2,))                  # all-function interface

Which form to use is partly a matter of style; subclassing may better retain per-thread
state, or can leverage any of OOP's many benefits, in general. Your thread classes don't necessarily have to subclass Thread, though. In fact, just as in the _thread module, the thread's target in threading may be any type of callable object. When combined with techniques such as bound methods and nested scope references, the choice between coding techniques becomes even less clear-cut:

# a non-thread class with state, OOP
class Power:
    def __init__(self, i):
        self.i = i
    def action(self):
        print(self.i ** 32)

obj = Power(2)
threading.Thread(target=obj.action).start()     # thread runs bound method

# nested scope to retain state
def action(i):
    def power():
        print(i ** 32)
    return power

threading.Thread(target=action(2)).start()      # thread runs returned function

# both with basic thread module
_thread.start_new_thread(obj.action, ())        # thread runs a callable object
_thread.start_new_thread(action(2), ())

As usual, the threading APIs are as flexible as the Python language itself.

Synchronizing access to shared objects and names revisited

Earlier, we saw how print operations in threads need to be synchronized with locks to avoid overlap, because the output stream is shared by all threads. More formally, threads need to synchronize their changes to any item that may be shared across threads in a process--both objects and namespaces. Depending on a given program's goals, this might include:

- Mutable objects in memory (passed or otherwise referenced objects whose lifetimes span threads)
- Names in global scopes (changeable variables outside thread functions and classes)
- The contents of modules (each has just one shared copy in the system's module table)

For instance, even simple global variables can require coordination if concurrent updates are possible, as in Example 5-12.
"prints different results on different runs on windows import threadingtime count def adder()global count count count time sleep( count count update shared name in global scope threads share object memory and global names threads [for in range( )thread threading thread(target=adderargs=()thread start(threads append(threadfor thread in threadsthread join(print(counthere threads are spawned to update the same global scope variable twice (with sleep between updates to better interleave their operationswhen run on windows with python different runs produce different resultsc:\pp \system\threadsthread-add-random py :\pp \system\threadsthread-add-random py :\pp \system\threadsthread-add-random py :\pp \system\threadsthread-add-random py this happens because threads overlap arbitrarily in timestatementseven the simple assignment statements like those hereare not guaranteed to run to completion by themselves (that isthey are not atomicas one thread updates the globalit may be using the partial result of another thread' work in progress the net effect is this seemingly random behavior to make this script work correctlywe need to again use thread locks to synchronize the updates--when example - is runit always prints as expected example - pp \system\threads\thread-add-synch py "prints each timebecause shared resource access synchronizedimport threadingtime count def adder(addlock)shared lock object passed in threads
    with addlock:                  # auto acquire/release around stmt
        count = count + 1
    time.sleep(0.5)
    with addlock:                  # only 1 thread updating at once
        count = count + 1

addlock = threading.Lock()
threads = []
for i in range(100):
    thread = threading.Thread(target=adder, args=(addlock,))
    thread.start()
    threads.append(thread)

for thread in threads: thread.join()
print(count)

Although some basic operations in the Python language are atomic and need not be synchronized, you're probably better off synchronizing every potential concurrent update. Not only might the set of atomic operations change over time, but the internal implementation of threads in general can as well (and in fact, it may in Python 3.2, as described ahead).

Of course, this is an artificial example (spawning 100 threads to add twice isn't exactly a real-world use case for threads!), but it illustrates the issues that threads must address for any sort of potentially concurrent updates to a shared object or name. Luckily, for many or most realistic applications, the queue module of the next section can make thread synchronization an automatic artifact of program structure.

Before we move ahead, I should point out that besides Thread and Lock, the threading module also includes higher-level objects for synchronizing access to shared items (e.g., Semaphore, Condition, Event)--many more, in fact, than we have space to cover here; see the library manual for details. For more examples of threads and forks in general, see the remainder of this chapter as well as the examples in the GUI and network scripting parts of this book. We will thread GUIs, for instance, to avoid blocking them, and we will thread and fork network servers to avoid denying service to clients. We'll also explore the threading module's approach to program exits in the absence of join calls in conjunction with queues--our next topic.

The queue module

You can synchronize your threads' access to shared resources with locks, but you often don't have to. As mentioned, realistically scaled threaded programs are often structured as a set of producer and consumer threads, which communicate by placing data on, and taking it off of, a shared queue. As long as the queue synchronizes access to itself, this automatically synchronizes the threads' interactions.

The Python queue module implements this storage device: it provides a standard queue
data structure--a first-in first-out (FIFO) list of Python objects, in which items are added on one end and removed from the other. Like normal lists, the queues provided by this module may contain any type of Python object, including both simple types (strings, lists, dictionaries, and so on) and more exotic types (class instances, arbitrary callables like functions and bound methods, and more).

Unlike normal lists, though, the queue object is automatically controlled with thread lock acquire and release operations, such that only one thread can modify the queue at any given point in time. Because of this, programs that use a queue for their cross-thread communication will be thread-safe and can usually avoid dealing with locks of their own for data passed between threads.

Like the other tools in Python's threading arsenal, queues are surprisingly simple to use. The script in Example 5-14, for instance, spawns two consumer threads that watch for data to appear on the shared queue and four producer threads that place data on the queue periodically after a sleep interval (each of their sleep durations differs to simulate a real, long-running task). In other words, this program runs 7 threads (including the main one), 6 of which access the shared queue in parallel.

Example 5-14. PP4E\System\Threads\queuetest.py

"producer and consumer threads communicating with a shared queue"

numconsumers = 2                   # how many consumers to start
numproducers = 4                   # how many producers to start
nummessages  = 4                   # messages per producer to put

import _thread as thread, queue, time
safeprint = thread.allocate_lock()     # else prints may overlap
dataqueue = queue.Queue()              # shared global, infinite size

def producer(idnum):
    for msgnum in range(nummessages):
        time.sleep(idnum)
        dataqueue.put('[producer id=%d, count=%d]' % (idnum, msgnum))

def consumer(idnum):
    while True:
        time.sleep(0.1)
        try:
            data = dataqueue.get(block=False)
        except queue.Empty:
            pass
        else:
            with safeprint:
                print('consumer', idnum, 'got =>', data)

if __name__ == '__main__':
    for i in range(numconsumers):
        thread.start_new_thread(consumer, (i,))
    for i in range(numproducers):
        thread.start_new_thread(producer, (i,))
    time.sleep(((numproducers-1) * nummessages) + 1)
    print('Main thread exit.')

Before I show you this script's output, I want to highlight a few points in its code.

Arguments versus globals

Notice how the queue is assigned to a global variable; because of that, it is shared by all of the spawned threads (all of them run in the same process and in the same global scope). Since these threads change an object instead of a variable name, it would work just as well to pass the queue object in to the threaded functions as an argument--the queue is a shared object in memory, regardless of how it is referenced (see queuetest2.py in the examples tree for a full version that does this):

dataqueue = queue.Queue()              # shared object, infinite size

def producer(idnum, dataqueue):
    for msgnum in range(nummessages):
        time.sleep(idnum)
        dataqueue.put('[producer id=%d, count=%d]' % (idnum, msgnum))

def consumer(idnum, dataqueue): ...

if __name__ == '__main__':
    for i in range(numconsumers):
        thread.start_new_thread(consumer, (i, dataqueue))
    for i in range(numproducers):
        thread.start_new_thread(producer, (i, dataqueue))

Program exit with child threads

Also notice how this script exits when the main thread does, even though consumer threads are still running in their infinite loops. This works fine on Windows (and most other platforms)--with the basic _thread module, the program ends silently when the main thread does. This is why we've had to sleep in some examples to give threads time to do their work, but it is also why we do not need to be concerned about exiting while consumer threads are still running here.

In the alternative threading module, though, the program will not exit if any spawned threads are running, unless they are set to be daemon threads. Specifically, the entire program exits when only daemon threads are left. Threads inherit a default initial daemonic value from the thread that creates them. The initial thread of a Python program is considered not daemonic, though alien threads created outside this module's control are considered daemonic (including some threads created in C code). To override inherited defaults, a thread object's daemon flag can be set manually. In other words, nondaemon threads prevent program exit, and programs by default do not exit until all threading-managed threads finish. This policy conveniently waits for
worker threads to finish their tasks in the absence of join calls or sleeps, but it can prevent programs like the one in Example 5-14 from shutting down when they wish. To make this example work with threading, use the following alternative code (see queuetest3.py in the examples tree for a complete version of this, as well as thread-count-threading.py, also in the tree, for a case where this refusal to exit can come in handy):

import threading, queue, time

def producer(idnum, dataqueue): ...
def consumer(idnum, dataqueue): ...

if __name__ == '__main__':
    for i in range(numconsumers):
        thread = threading.Thread(target=consumer, args=(i, dataqueue))
        thread.daemon = True                 # else cannot exit!
        thread.start()

    waitfor = []
    for i in range(numproducers):
        thread = threading.Thread(target=producer, args=(i, dataqueue))
        waitfor.append(thread)
        thread.start()

    for thread in waitfor: thread.join()     # or time.sleep() long enough here
    print('Main thread exit.')

We'll revisit the daemons and exits issue later in the book while studying GUIs; as we'll see, it's no different in that context, except that the main thread is usually the GUI itself.

Now, as coded in Example 5-14, the following is the output of this example when run on my Windows machine. Notice that even though the queue automatically coordinates the communication of data between the threads, this script still must use a lock to manually synchronize access to the standard output stream; queues synchronize data passing, but some programs may still need to use locks for other purposes. As in prior examples, if the safeprint lock is not used, the printed lines from one consumer may be intermixed with those of another: it is not impossible that a consumer may be paused in the middle of a print operation.

C:\...\PP4E\System\Threads> queuetest.py
consumer 1 got => [producer id=0, count=0]
consumer 0 got => [producer id=0, count=1]
consumer 1 got => [producer id=0, count=2]
consumer 0 got => [producer id=0, count=3]
consumer 1 got => [producer id=1, count=0]
consumer 0 got => [producer id=2, count=0]
consumer 1 got => [producer id=1, count=1]
consumer 0 got => [producer id=3, count=0]
consumer 1 got => [producer id=2, count=1]
consumer 0 got => [producer id=1, count=2]
consumer 1 got => [producer id=2, count=2]
consumer 0 got => [producer id=1, count=3]
consumer 1 got => [producer id=3, count=1]
consumer 0 got => [producer id=2, count=3]
consumer 1 got => [producer id=3, count=2]
consumer 0 got => [producer id=3, count=3]
Main thread exit.

Try adjusting the parameters at the top of this script to experiment with different scenarios. A single consumer, for instance, would simulate a GUI's main thread. Here is the output of a single-consumer run--producers still add to the queue in fairly random fashion, because threads run in parallel with each other and with the consumer:

C:\...\PP4E\System\Threads> queuetest.py
consumer 0 got => [producer id=0, count=0]
consumer 0 got => [producer id=0, count=1]
consumer 0 got => [producer id=0, count=2]
consumer 0 got => [producer id=0, count=3]
consumer 0 got => [producer id=1, count=0]
consumer 0 got => [producer id=2, count=0]
consumer 0 got => [producer id=1, count=1]
consumer 0 got => [producer id=3, count=0]
consumer 0 got => [producer id=2, count=1]
consumer 0 got => [producer id=1, count=2]
consumer 0 got => [producer id=3, count=1]
consumer 0 got => [producer id=2, count=2]
consumer 0 got => [producer id=1, count=3]
consumer 0 got => [producer id=2, count=3]
consumer 0 got => [producer id=3, count=2]
consumer 0 got => [producer id=3, count=3]
Main thread exit.

In addition to the basics used in our script, queues may be fixed or infinite in size, and get and put calls may or may not block; see the Python library manual for more details on queue interface options. Since we just simulated a typical GUI structure, though, let's explore the notion a bit further.

Preview: GUIs and threads

We will return to threads and queues and see additional thread and queue examples when we study GUIs later in this book. The PyMailGUI example we'll meet there, for instance, will make extensive use of the thread and queue tools introduced here, and we'll discuss threading in the context of the tkinter GUI toolkit once we've had a chance to study it. Although we can't get into much code at this point, threads are usually an integral part of most nontrivial GUIs. In fact, the activity model of many GUIs is a combination of threads, a queue, and a timer-based loop.

Here's why: in the context of a GUI, any operation that can block or take a long time to complete must be spawned off to run in parallel so that the GUI (the main thread) remains active and responsive. Although such tasks can be run
as processes, the efficiency and shared-state model of threads make them ideal for this role. Moreover, since most GUI toolkits do not allow multiple threads to update the GUI in parallel, updates are best restricted to the main thread.

Because only the main thread should generally update the display, GUI programs typically take the form of a main GUI thread and one or more long-running producer threads--one for each long-running task being performed. To synchronize their points of interface, all of the threads share data on a global queue: non-GUI threads post results, and the GUI thread consumes them.

More specifically, the main thread handles all GUI updates and runs a timer-based loop that wakes up periodically to check for new data on the queue to be displayed on-screen. In Python's tkinter toolkit, for instance, the widget after(msecs, func, *args) method can be used to schedule queue-check events. Because such events are dispatched by the GUI's event processor, all GUI updates occur only in this main thread (and often must, due to the lack of thread safety in GUI toolkits).

The child threads don't do anything GUI-related. They just produce data and put it on the queue to be picked up by the main thread. Alternatively, child threads can place a callback function on the queue, to be picked up and run by the main thread. It's not generally sufficient to simply pass in a GUI update callback function from the main thread to the child thread and run it from there; the function in shared memory will still be executed in the child thread, and potentially in parallel with other threads.

Since threads are much more responsive than a timer event loop in the GUI, this scheme both avoids blocking the GUI (producer threads run in parallel with the GUI), and avoids missing incoming events (producer threads run independent of the GUI event loop and as fast as they can). The main GUI thread will display the queued results as quickly as it can, in the context of a slower GUI event loop. A rough sketch of this structure appears just ahead.

Also keep in mind that regardless of the thread safety of a GUI toolkit, threaded GUI programs must still adhere to the principles of threaded programs in general--access to shared resources may still need to be synchronized if it falls outside the scope of the producer/consumer shared queue model. If spawned threads might also update another shared state that is used by the main GUI thread, thread locks may also be required to avoid operation overlap. For instance, spawned threads that download and cache email probably cannot overlap with others that use or update the same cache. That is, queues may not be enough: unless you can restrict threads' work to queuing their results, threaded GUIs still must address concurrent updates.

We'll see how the threaded GUI model can be realized in real code later in this book; for more on this subject, see especially the later discussions of threaded tkinter GUIs and the PyMailGUI example.
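To make the model just described more concrete, here is a minimal, hedged sketch of its structure--producer threads posting to a shared queue, and a tkinter main thread polling the queue with after timer events. The widget choices, poll interval, and task simulation here are illustrative assumptions, not code from this book's examples:

import queue, threading, time
from tkinter import Tk, Label

dataqueue = queue.Queue()                   # shared by all threads

def producer(idnum):                        # child threads: no GUI calls here
    time.sleep(idnum + 1)                   # simulate a long-running task
    dataqueue.put('result from producer %d' % idnum)

root = Tk()
label = Label(root, text='waiting for results...')
label.pack()

def checkqueue():                           # runs in the main GUI thread only
    try:
        data = dataqueue.get(block=False)
    except queue.Empty:
        pass
    else:
        label.config(text=data)             # GUI updated in main thread only
    root.after(250, checkqueue)             # reschedule: poll again in 250 msecs

for i in range(4):
    thread = threading.Thread(target=producer, args=(i,))
    thread.daemon = True                    # don't prevent program exit
    thread.start()

checkqueue()
root.mainloop()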
Later in this chapter, we'll also meet the multiprocessing module, whose process and queue support offers new options for implementing this GUI model using processes instead of threads. As such, these work around the limitations of the thread GIL, but they may incur extra performance overheads that can vary per platform, and they may not be directly usable at all in threading contexts (the direct shared and mutable object state of threads is not supported, though messaging is). For now, let's cover a few final thread fine points.

Thread Timers versus GUI Timers

Interestingly, the threading module exports a general timer function, which, like the tkinter widget after method, can be used to run another function after a timer has expired:

    Timer(N, somefunc).start()        # after N seconds, run somefunc

Timer objects have a start() method to set the timer, as well as a cancel() method to cancel the scheduled event, and they implement the wait state in a spawned thread. For example, the following prints a message after five seconds:

    >>> import sys
    >>> from threading import Timer
    >>> Timer(5.0, lambda: print('Spam!')).start()    # spawned thread
    >>> Spam!

This may be useful in a variety of contexts, but it doesn't quite apply to GUIs: because the time-delayed function call is run in a spawned thread, not in the main GUI thread, it should not generally perform GUI updates. Because the tkinter after method is run from the main thread's event processing loop instead, it runs in the main GUI thread and can freely update the GUI. As a preview, for instance, the following displays a pop-up message window after five seconds, in the main thread of a tkinter GUI (you might also have to run win.mainloop() in some interfaces):

    >>> from tkinter import Tk
    >>> from tkinter.messagebox import showinfo
    >>> win = Tk()
    >>> win.after(5000, lambda: showinfo('Popup', 'Spam!'))

The last call here schedules the function to be run once in the main GUI thread, but it does not pause the caller during the wait, and so does not block the GUI. It's equivalent to this simpler form:

    >>> win.after(5000, showinfo, 'Popup', 'Spam')

Stay tuned for much more on tkinter in the next part of this book, and watch for the full story on its after timer events and the roles of threads in GUIs there.
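One last timer note: the cancel() method mentioned above can revoke a scheduled call before it fires. A minimal sketch, with an arbitrary delay value:

    from threading import Timer

    t = Timer(10.0, lambda: print('you will never see this'))
    t.start()        # schedule the call in a spawned thread
    t.cancel()       # cancel before the timer expires: nothing is printed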
The Global Interpreter Lock

Although it's a lower-level topic than you generally need to know to do useful thread work in Python, the implementation of Python's threads can have impacts on both performance and coding. This section summarizes implementation details and some of their ramifications.

Threads implementation in the upcoming Python 3.2

This section describes the current implementation of threads up to and including Python 3.1. At this writing, Python 3.2 is still in development, but one of its likely enhancements is a new version of the GIL that provides better performance, especially on some multicore CPUs. The new GIL implementation will still synchronize access to the PVM (Python language code is still multiplexed as before), but it will use a context switching scheme that is more efficient than the current bytecode-instruction-count approach.

Among other things, the current sys.setcheckinterval call will likely be replaced with a timer duration call in the new scheme. Specifically, the concept of a check interval for thread switches will be abandoned and replaced by an absolute time duration expressed in seconds. It's anticipated that this duration will default to 5 milliseconds, but it will be tunable through sys.setswitchinterval.

Moreover, there have been a variety of plans made to remove the GIL altogether (including the goals of the Unladen Swallow project being conducted by Google employees), though none have managed to produce any fruit thus far. Since I cannot predict the future, please see Python release documents to follow this (well-threaded) story.

Strictly speaking, Python currently uses the global interpreter lock (GIL) mechanism introduced at the start of this section, which guarantees that one thread, at most, is running code within the Python interpreter at any given point in time. In addition, to make sure that each thread gets a chance to run, the interpreter automatically switches its attention between threads at regular intervals (in Python 3.1, by releasing and acquiring the lock after a number of bytecode instructions), as well as at the start of long-running operations (e.g., on some file input/output requests).

This scheme avoids problems that could arise if multiple threads were to update Python system data at the same time. For instance, if two threads were allowed to simultaneously change an object's reference count, the result might be unpredictable. This scheme can also have subtle consequences. In this chapter's threading examples, for instance, the stdout stream can be corrupted unless each thread's call to write text is synchronized with thread locks.

Moreover, even though the GIL prevents more than one Python thread from running at the same time, it is not enough to ensure thread safety in general, and it does not address higher-level synchronization issues at all. For example, in the case that more
than one thread might attempt to update the same variable at the same time, the threads should generally be given exclusive access to the object with locks. Otherwise, it's not impossible that thread switches will occur in the middle of an update statement's bytecode.

Locks are not strictly required for all shared object access, especially if a single thread updates an object inspected by other threads. As a rule of thumb, though, you should generally use locks to synchronize threads whenever update rendezvous are possible, instead of relying on artifacts of the current thread implementation.

The thread switch interval

Some concurrent updates might work without locks if the thread-switch interval is set high enough to allow each thread to finish without being swapped out. The sys.setcheckinterval(N) call sets the frequency with which the interpreter checks for things like thread switches and signal handlers. This interval defines the number of bytecode instructions executed before a switch. It does not need to be reset for most programs, but it can be used to tune thread performance. Setting higher values means switches happen less often: threads incur less overhead, but they are less responsive to events. Setting lower values makes threads more responsive to events, but increases thread switch overhead.

Atomic operations

Because of the way Python uses the GIL to synchronize threads' access to the virtual machine, whole statements are not generally thread-safe, but each bytecode instruction is. Because of this bytecode indivisibility, some Python language operations are thread-safe -- also called atomic, because they run without interruption -- and do not require the use of locks or queues to avoid concurrent update issues. For instance, as of this writing, list.append, fetches and some assignments for variables, list items, dictionary keys, and object attributes, and other operations were still atomic in standard C Python; others, such as x = x + 1 (and any operation in general that reads data, modifies it, and writes it back) were not.

As mentioned earlier, though, relying on these rules is a bit of a gamble, because they require a deep understanding of Python internals and may vary per release. Indeed, the set of atomic operations may be radically changed if a new free-threaded implementation ever appears. As a rule of thumb, it may be easier to use locks for all access to global and shared objects than to try to remember which types of access may or may not be safe across multiple threads.
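To make the rule of thumb concrete, here is a minimal sketch (not one of this book's examples) that guards a shared counter with a lock, so that the non-atomic read/modify/write sequence of += can never be interleaved across threads:

    import threading

    count = 0                              # shared, mutable state
    countlock = threading.Lock()           # one lock guards all its updates

    def adder():
        global count
        for i in range(100000):
            with countlock:                # acquire/release around each update
                count += 1                 # read/modify/write cannot interleave

    threads = [threading.Thread(target=adder) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(count)                           # always 400000 with the lock held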
C API thread considerations

Finally, if you plan to mix Python with C, also see the thread interfaces described in the Python/C API standard manual. In threaded programs, C extensions must release and later reacquire the global interpreter lock around long-running operations, to let other Python language threads run in the meantime: roughly, a long-running extension function should release the lock on entry and reacquire it on exit when resuming Python code.

Also note that even though Python code in Python threads cannot truly overlap in time due to the GIL synchronization, the C-coded portions of threads can. Any number may be running in parallel, as long as they do work outside the scope of the Python virtual machine. In fact, C threads may overlap both with other C threads and with Python language threads run in the virtual machine. Because of this, splitting code off to C libraries is one way that Python applications can still take advantage of multi-CPU machines.

Still, it may often be easier to leverage such machines by simply writing Python programs that fork processes instead of starting threads; the complexity of process and thread code is similar. For more on C extensions and their threading requirements, see the extending material later in this book. In short, Python includes C language tools (including a pair of GIL management macros) that can be used to wrap long-running operations in C-coded extensions, and that allow other Python language threads to run in parallel.

Process-based alternative: multiprocessing (ahead)

By now, you should have a basic grasp of parallel processes and threads, and Python's tools that support them. Later in this chapter, we'll revisit both ideas to study the multiprocessing module -- a standard library tool that seeks to combine the simplicity and portability of threads with the benefits of processes, by implementing a threading-like API that runs processes instead of threads. It seeks to address the portability issue of processes, as well as the multiple-CPU limitations imposed in threads by the GIL, but it cannot be used as a replacement for forking in some contexts, and it imposes some constraints that threads do not, which stem from its process-based model (for instance, mutable object state is not directly shared because objects are copied across process boundaries, and unpickleable objects such as bound methods cannot be as freely used).

Because the multiprocessing module also implements tools to simplify tasks such as inter-process communication and exit status, though, let's first get a handle on Python's support in those domains as well, and explore some more process and thread examples along the way.

Program Exits

As we've seen, unlike C, there is no "main" function in Python. When we run a program, we simply execute all of the code in the top-level file, from top to bottom (i.e., in the filename we listed in the command line, clicked in a file explorer, and so on). Scripts can also
exit explicitly with tools in the sys and os modules.

sys module exits

For example, the built-in sys.exit function ends a program when called, and earlier than normal:

    sys.exit(N)        # exit with status N, else exits on end of script

Interestingly, this call really just raises the built-in SystemExit exception. Because of this, we can catch it as usual to intercept early exits and perform cleanup activities; if uncaught, the interpreter exits as usual. For instance:

    C:\...\PP4E\System> python
    >>> import sys
    >>> try:
    ...     sys.exit()                    # see also: os._exit, Tk().quit()
    ... except SystemExit:
    ...     print('ignoring exit')
    ...
    ignoring exit
    >>>

Programming tools such as debuggers can make use of this hook to avoid shutting down. In fact, explicitly raising the built-in SystemExit exception with a Python raise statement is equivalent to calling sys.exit. More realistically, a try block would catch the exit exception raised elsewhere in a program; the script in the following example, for instance, exits from within a processing function:

Example: PP4E\System\Exits\testexit_sys.py

    def later():
        import sys
        print('Bye sys world')
        sys.exit(42)
        print('Never reached')

    if __name__ == '__main__': later()

Running this program as a script causes it to exit before the interpreter falls off the end of the file. But because sys.exit raises a Python exception, importers of its function can trap and override its exit exception, or specify a finally cleanup block to be run during program exit processing:

    C:\...\PP4E\System\Exits> python testexit_sys.py
    Bye sys world

    C:\...\PP4E\System\Exits> python
    >>> from testexit_sys import later
    >>> try:
    ...     later()
    ... except SystemExit:
    ...     print('Ignored...')
    ...
    Bye sys world
    Ignored...

    >>> try:
    ...     later()
    ... finally:
    ...     print('Cleanup')
    ...
    Bye sys world
    Cleanup

    C:\...\PP4E\System\Exits>         # interactive session process exits too

os module exits

It's possible to exit Python in other ways, too. For instance, within a forked child process on Unix, we typically call the os._exit function rather than sys.exit; threads may exit with a _thread.exit call; and tkinter GUI applications often end by calling something named Tk().quit(). We'll meet tkinter later in this book; let's take a look at os exits here.

On os._exit, the calling process exits immediately, instead of raising an exception that could be trapped and ignored. In fact, the process also exits without flushing output stream buffers or running cleanup handlers (defined by the atexit standard library module), so this generally should be used only by child processes after a fork, where overall program shutdown actions aren't desired. The following example illustrates the basics:

Example: PP4E\System\Exits\testexit_os.py

    def outahere():
        import os
        print('Bye os world')
        os._exit(99)
        print('Never reached')

    if __name__ == '__main__': outahere()

Unlike sys.exit, os._exit is immune to both try/except and try/finally interception:

    C:\...\PP4E\System\Exits> python testexit_os.py
    Bye os world

    C:\...\PP4E\System\Exits> python
    >>> from testexit_os import outahere
    >>> try:
    ...     outahere()
    ... except:
    ...     print('Ignored')
    ...
    Bye os world                      # exits interactive process

    >>> try:
    ...     outahere()
    ... finally:
    ...     print('Cleanup')
    ...
    Bye os world                      # ditto
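The atexit cleanup handlers mentioned above offer a simple way to see the difference between the two exit calls: handlers registered with atexit.register are run when a program exits normally or via sys.exit, but are skipped entirely by os._exit. A minimal sketch, not part of this book's example tree:

    import atexit, os, sys

    atexit.register(lambda: print('cleanup handler ran'))

    if len(sys.argv) > 1 and sys.argv[1] == 'hard':
        os._exit(1)     # immediate shutdown: the atexit handler never runs
    else:
        sys.exit(1)     # raises SystemExit: the handler runs on the way out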
Shell command exit status codes

Both the sys and os exit calls we just met accept an argument that denotes the exit status code of the process (it's optional in the sys call, but required by os). After exit, this code may be interrogated in shells and by programs that ran the script as a child process. On Linux, for example, we ask for the status shell variable's value in order to fetch the last program's exit status; by convention, a nonzero status generally indicates that some sort of problem occurred:

    [mark@linux]$ python testexit_sys.py
    Bye sys world
    [mark@linux]$ echo $status
    42
    [mark@linux]$ python testexit_os.py
    Bye os world
    [mark@linux]$ echo $status
    99

In a chain of command-line programs, exit statuses could be checked along the way as a simple form of cross-program communication. We can also grab hold of the exit status of a program run by another script. For instance, as introduced earlier in this book, when launching shell commands, exit status is provided as:

- The return value of an os.system call
- The return value of the close method of an os.popen object (for historical reasons, None is returned if the exit status was 0, which means no error occurred)
- A variety of interfaces in the subprocess module (e.g., the call function's return value, a Popen object's returncode attribute, and the wait method's result)

In addition, when running programs by forking processes, the exit status is available through the os.wait and os.waitpid calls in a parent process.

Exit status with os.system and os.popen

Let's look at the case of the shell commands first. The following, run on Linux, spawns the two test scripts shown above, reads their output streams through pipes, and fetches their exit status codes:

    [mark@linux]$ python
    >>> import os
    >>> pipe = os.popen('python testexit_sys.py')
    >>> pipe.read()
    'Bye sys world\n'
    >>> stat = pipe.close()            # returns exit code
    >>> stat
    10752
    >>> hex(stat)
    '0x2a00'
    >>> stat >> 8                      # extract status from bitmask on Unix-likes
    42

    >>> pipe = os.popen('python testexit_os.py')
    >>> stat = pipe.close()
    >>> stat, stat >> 8
    (25344, 99)

This code works the same under Cygwin Python on Windows. When using os.popen on such Unix-like platforms, for reasons we won't go into here, the exit status is actually packed into specific bit positions of the return value; it's really there, but we need to shift the result right by eight bits to see it. Commands run with os.system send their statuses back directly through the Python library call:

    >>> stat = os.system('python testexit_sys.py')
    Bye sys world
    >>> stat, stat >> 8
    (10752, 42)

    >>> stat = os.system('python testexit_os.py')
    Bye os world
    >>> stat, stat >> 8
    (25344, 99)

All of this code works under the standard version of Python for Windows, too, though exit status is not encoded in a bit mask (test sys.platform if your code must handle both formats):

    C:\...\PP4E\System\Exits> python
    >>> import os
    >>> os.system('python testexit_sys.py')
    Bye sys world
    42
    >>> os.system('python testexit_os.py')
    Bye os world
    99
    >>> pipe = os.popen('python testexit_sys.py')
    >>> pipe.read()
    'Bye sys world\n'
    >>> pipe.close()
    42
    >>> os.popen('python testexit_os.py').close()
    99
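Since the encoding differs per platform, one way to write status checks that run on both -- a sketch under the assumptions just shown, not one of this book's examples -- is to branch on sys.platform:

    import os, sys

    def exitcode(stat):
        # decode an os.system or os.popen-close result portably
        if stat is None:                    # os.popen close: None means status 0
            return 0
        if sys.platform.startswith('win'):  # Windows: the code comes back directly
            return stat
        return stat >> 8                    # Unix-likes: code is in the high byte

    print(exitcode(os.system('python testexit_sys.py')))    # 42 on both platforms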
Notice that the last test in the preceding code didn't attempt to read the command's output pipe. If we do, we may have to run the target script in unbuffered mode with the -u Python command-line flag, or change the script to flush its output manually with sys.stdout.flush. Otherwise, the text printed to the standard output stream might not be flushed from its buffer when os._exit is called in this case for immediate shutdown. By default, standard output is fully buffered when connected to a pipe like this; it's only line-buffered when connected to a terminal:

    >>> pipe = os.popen('python testexit_os.py')
    >>> pipe.read()                                   # streams not flushed on exit
    ''
    >>> pipe = os.popen('python -u testexit_os.py')   # force unbuffered streams
    >>> pipe.read()
    'Bye os world\n'

Confusingly, you can pass mode and buffering arguments to specify line buffering in both os.popen and subprocess.Popen, but this won't help here -- arguments passed to these tools pertain to the calling process's input end of the pipe, not to the spawned program's output stream:

    >>> pipe = os.popen('python testexit_os.py', 'r', 1)    # line buffered only
    >>> pipe.read()                                         # but my pipe, not program's!
    ''

    >>> from subprocess import Popen, PIPE
    >>> pipe = Popen('python testexit_os.py', bufsize=1, stdout=PIPE)   # for my pipe
    >>> pipe.stdout.read()                                              # doesn't help
    b''

Really, buffering mode arguments in these tools pertain to output the caller writes to a command's standard input stream, not to output read from that command. If required, the spawned script itself can also manually flush its output buffers periodically, or before forced exits, as in the sketch that follows.
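To make the manual-flush option concrete, here is a hypothetical variant of the os exit script above (it is not in the book's example tree) that pushes its buffered text out to the pipe before the immediate shutdown:

    # a hypothetical testexit_os variant: flush manually before os._exit
    import os, sys

    print('Bye os world')
    sys.stdout.flush()      # force buffered output out to the pipe now
    os._exit(99)            # immediate exit: no automatic flushing occurs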
We'll return to buffering when we discuss the potential for deadlocks later in this chapter, and again in the sockets material later in this book. Since we brought up subprocess, though, let's turn to its exit tools next.

Exit status with subprocess

The alternative subprocess module offers exit status in a variety of ways, as we saw earlier in this book (a None value in returncode indicates that the spawned program has not yet terminated):

    C:\...\PP4E\System\Exits> python
    >>> from subprocess import Popen, PIPE, call
    >>> pipe = Popen('python testexit_sys.py', stdout=PIPE)
    >>> pipe.stdout.read()
    b'Bye sys world\r\n'

    >>> call('python testexit_sys.py')
    Bye sys world
    42

    >>> pipe = Popen('python testexit_sys.py', stdout=PIPE)
    >>> pipe.communicate()
    (b'Bye sys world\r\n', None)
    >>> pipe.returncode
    42

The subprocess module works the same on Unix-like platforms such as Cygwin, but unlike os.popen, the exit status is not encoded, and so it matches the Windows result (note that shell=True is needed to run this as is on Cygwin and other Unix-like platforms, as we learned earlier; on Windows, this argument is required only to run commands built into the shell, like dir):

    [C:\...\PP4E\System\Exits]$ python
    >>> from subprocess import Popen, PIPE, call
    >>> pipe = Popen('python testexit_sys.py', stdout=PIPE, shell=True)
    >>> pipe.stdout.read()
    b'Bye sys world\n'
    >>> pipe.wait()
    42
    >>> call('python testexit_sys.py', shell=True)
    Bye sys world
    42
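As an aside, subprocess can also turn a nonzero exit status into an exception for you. This short sketch (not part of the book's examples) uses the module's check_call function, which raises a CalledProcessError whenever the spawned program's exit code is not zero:

    from subprocess import check_call, CalledProcessError

    try:
        check_call('python testexit_sys.py', shell=True)    # raises if status != 0
    except CalledProcessError as exc:
        print('failed with status:', exc.returncode)        # 42 for this script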
Process exit status and shared state

Now, to learn how to obtain the exit status from forked processes, let's write a simple forking program: the script in the following example forks child processes and prints child process exit statuses returned by os.wait calls in the parent, until a "q" is typed at the console.

Example: PP4E\System\Exits\testexit_fork.py

    """
    fork child processes to watch exit status with os.wait; fork works on
    Unix and Cygwin but not standard Windows Python; note: spawned threads
    share globals, but each forked process has its own copy of them (forks
    share file descriptors)--exitstat is always the same here, but will vary
    if we do the same for threads
    """

    import os
    exitstat = 0

    def child():                              # could os.exit a script here
        global exitstat                       # change this process's global
        exitstat += 1                         # exit status to parent's wait
        print('Hello from child', os.getpid(), exitstat)
        os._exit(exitstat)
        print('never reached')

    def parent():
        while True:
            newpid = os.fork()                # start new copy of this process
            if newpid == 0:                   # if in copy, run child logic
                child()                       # loop until 'q' console input
            else:
                pid, status = os.wait()
                print('parent got', pid, status, (status >> 8))
                if input() == 'q': break

    if __name__ == '__main__': parent()

Running this program on Linux, Unix, or Cygwin (remember, fork still doesn't work on standard Windows Python as I write the fourth edition of this book) produces the following sort of results (process ID values will vary per run):

    [C:\...\PP4E\System\Exits]$ python testexit_fork.py
    Hello from child 5828 1
    parent got 5828 256 1
    Hello from child 9540 1
    parent got 9540 256 1
    Hello from child 3152 1
    parent got 3152 256 1
    q

If you study this output closely, you'll notice that the exit status (the last number printed) is always the same -- the number 1. Because forked processes begin life as copies of the process that created them, they also have copies of global memory. Because of that, each forked child gets and changes its own exitstat global variable, without changing any other process's copy of this variable. At the same time, forked processes copy and thus share file descriptors, which is why prints go to the same place.
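As an aside, the os module also provides portable accessors for decoding os.wait status results, which can stand in for manual bit shifts like the status >> 8 used above. A minimal sketch on a Unix-like platform:

    import os

    if os.fork() == 0:
        os._exit(3)                    # child: exit with a known status code
    else:
        pid, status = os.wait()        # parent: collect the child's status
        if os.WIFEXITED(status):       # true if the child exited normally
            print('exit code:', os.WEXITSTATUS(status))    # prints: exit code: 3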
Thread exits and shared state

In contrast, threads run in parallel within the same process and share global memory. Each thread in the following example changes the single shared global variable, exitstat:

Example: PP4E\System\Exits\testexit_thread.py

    """
    spawn threads to watch shared global memory change; threads normally exit
    when the function they run returns, but _thread.exit() can be called to
    exit the calling thread; _thread.exit is the same as sys.exit and raising
    SystemExit; threads communicate with possibly locked global vars; caveat:
    may need to make print/input calls atomic on some platforms--shared stdout
    """

    import _thread as thread
    exitstat = 0

    def child():
        global exitstat                       # process global names
        exitstat += 1                         # shared by all threads
        threadid = thread.get_ident()
        print('Hello from child', threadid, exitstat)
        thread.exit()
        print('never reached')

    def parent():
        while True:
            thread.start_new_thread(child, ())
            if input() == 'q': break

    if __name__ == '__main__': parent()

The following shows this script in action on Windows; unlike forks, threads run in the standard version of Python on Windows, too. Thread identifiers created by Python differ each time -- they are arbitrary but unique among all currently active threads, and so may be used as dictionary keys to keep per-thread information (a thread's id may be reused after it exits on some platforms; the identifier values below will vary per run):

    C:\...\PP4E\System\Exits> python testexit_thread.py
    Hello from child 5060 1
    Hello from child 5576 2
    Hello from child 5460 3
    Hello from child 8376 4
    q

Notice how the value of this script's global exitstat is changed by each thread, because threads share global memory within the process. In fact, this is often how threads communicate in general. Rather than exit status codes, threads assign module-level globals or change shared mutable objects in-place to signal conditions, and they use thread module locks and queues to synchronize access to shared items if needed. This script might need to synchronize, too, if it ever does something more realistic -- for global counter changes, but even print and input may have to be synchronized if they overlap stream access badly on some platforms. For this simple demo, we forego locks by assuming threads won't mix their operations oddly.

As we've learned, a thread normally exits silently when the function it runs returns, and the function return value is ignored. Optionally, the _thread.exit function can be called to terminate the calling thread explicitly and silently. This call works almost exactly like sys.exit (but takes no return status argument), and it works by raising a SystemExit exception in the calling thread. Because of that, a thread can also prematurely end by calling sys.exit or by directly raising SystemExit. Be sure not to call os._exit within a thread function, though -- doing so can have odd results (the last time it was tried, it killed every thread in the entire
process on Windows!).

The alternative threading module for threads has no method equivalent to _thread.exit(), but since all that the latter does is raise a system-exit exception, doing the same in threading has the same effect -- the thread exits immediately and silently, as in the following sort of code (see testexit-threading.py in the example tree for this code):

    import threading, sys, time

    def action():
        sys.exit()                   # or raise SystemExit()
        print('not reached')

    threading.Thread(target=action).start()
    time.sleep(2)
    print('main exit')

On a related note, keep in mind that threads and processes have default lifespan models, which we explored earlier. By way of review: when child threads are still running, the two thread modules' behavior differs -- programs on most platforms exit when the parent thread does under _thread, but not normally under threading, unless the children are made daemons. When using processes, children normally outlive their parent. This different behavior makes sense if you remember that threads are in-process function calls, but processes are more independent and autonomous.

When used well, exit status can be used to implement error detection and simple communication protocols in systems composed of command-line scripts. But having said that, I should underscore that most scripts do simply fall off the end of the source to exit, and most thread functions simply return; explicit exit calls are generally employed for exceptional conditions and in limited contexts only. More typically, programs communicate with richer tools than integer exit codes; the next section shows how.
Interprocess Communication

As we saw earlier, when scripts spawn threads -- tasks that run in parallel within the program -- they can naturally communicate by changing and inspecting names and objects in shared global memory. This includes both accessible variables and attributes, as well as referenced mutable objects. As we also saw, some care must be taken to use locks to synchronize access to shared items that can be updated concurrently. Still, threads offer a fairly straightforward communication model, and the queue module can make this nearly automatic for many programs.

Things aren't quite as simple when scripts start child processes and independent programs that do not share memory in general. If we limit the kinds of communications that can happen between programs, many options are available, most of which we've already seen in this and the prior chapters:

- Simple files
- Command-line arguments
- Program exit status codes
- Shell environment variables
- Standard stream redirections
- Stream pipes managed by os.popen and subprocess

For instance, sending command-line options and writing to input streams lets us pass in program execution parameters; reading program output streams and exit codes gives us a way to grab a result. Because shell environment variable settings are inherited by spawned programs, they provide another way to pass context in. And pipes made by os.popen or subprocess allow even more dynamic communication: data can be sent between programs at arbitrary times, not only at program start and exit.

Beyond this set, there are other tools in the Python library for performing inter-process communication (IPC). This includes sockets, shared memory, signals, anonymous and named pipes, and more. Some vary in portability, and all vary in complexity and utility. For instance:

- Signals allow programs to send simple notification events to other programs.
- Anonymous pipes allow threads and related processes that share file descriptors to pass data, but generally rely on the Unix-like forking model for processes, which is not universally portable.
- Named pipes are mapped to the system's filesystem -- they allow completely unrelated programs to converse, but are not available in Python on all platforms.
- Sockets map to system-wide port numbers -- they similarly let us transfer data between arbitrary programs running on the same computer, but also between programs located on remote networked machines, and offer a more portable option.

While some of these can be used as communication devices by threads, too, their full power becomes more evident when leveraged by separate processes, which do not share memory at large. In this section, we explore directly managed pipes (both anonymous and named), as well as signals. We also take a first look at sockets here, but largely as a preview; sockets can be used for IPC on a single machine, but because the larger socket story also involves their role in networking, we'll save most of their details until the Internet part of this book.

Other IPC tools are available to Python programmers (e.g., shared memory as provided by the mmap module), but are not covered here for lack of space; search the Python library manual for more details.
After this section, we'll also study the multiprocessing module, which offers additional and portable IPC options as part of its general process-launching API, including shared memory, and pipes and queues of arbitrary pickled Python objects. For now, let's study traditional approaches first.

Anonymous Pipes

Pipes, a cross-program communication device, are implemented by your operating system and made available in the Python standard library. Pipes are unidirectional channels that work something like a shared memory buffer, but with an interface resembling a simple file on each of two ends. In typical use, one program writes data on one end of the pipe, and another reads that data on the other end. Each program sees only its end of the pipe, and processes it using normal Python file calls.

Pipes are much more than this within the operating system, though. For instance, calls to read a pipe will normally block the caller until data becomes available (i.e., is sent by the program on the other end), instead of returning an end-of-file indicator. Moreover, read calls on a pipe always return the oldest data written to the pipe, resulting in a first-in-first-out model -- the first data written is the first to be read. Because of such properties, pipes are also a way to synchronize the execution of independent programs.

Pipes come in two flavors -- anonymous and named. Named pipes (often called fifos) are represented by a file on your computer. Because named pipes are really external files, the communicating processes need not be related at all; in fact, they can be independently started programs. By contrast, anonymous pipes exist only within processes and are typically used in conjunction with process forks, as a way to link parent and spawned child processes within an application. Parent and child converse over shared pipe file descriptors, which are inherited by spawned processes. Because threads run in the same process and share all global memory in general, anonymous pipes apply to them as well.

Anonymous pipe basics

Since they are more traditional, let's start with a look at anonymous pipes. To illustrate, the script in the following example uses the os.fork call to make a copy of the calling process as usual (we met forks earlier in this chapter). After forking, the original parent process and its child copy speak through the two ends of a pipe created with os.pipe prior to the fork. The os.pipe call returns a tuple of two file descriptors -- the low-level file identifiers we met earlier -- representing the input and output sides of the pipe. Because forked child processes get copies of their parents' file descriptors, writing to the pipe's output descriptor in the child sends data back to the parent on the pipe created before the child was spawned:
Example: PP4E\System\Processes\pipe1.py

    import os, time

    def child(pipeout):
        zzz = 0
        while True:
            time.sleep(zzz)                        # make parent wait
            msg = ('Spam %03d' % zzz).encode()     # pipes are binary bytes
            os.write(pipeout, msg)                 # send to parent
            zzz = (zzz + 1) % 5                    # goto 0 after 4

    def parent():
        pipein, pipeout = os.pipe()                # make 2-ended pipe
        if os.fork() == 0:                         # copy this process
            child(pipeout)                         # in copy, run child
        else:                                      # in parent, listen to pipe
            while True:
                line = os.read(pipein, 32)         # blocks until data sent
                print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

    parent()

If you run this program on Linux, Cygwin, or another Unix-like platform (pipe is available on standard Windows Python, but fork is not), the parent process waits for the child to send data on the pipe each time it calls os.read. It's almost as if the child and parent act as client and server here -- the parent starts the child and waits for it to initiate communication.† To simulate differing task durations, the child keeps the parent waiting one second longer between messages with time.sleep calls, until the delay has reached four seconds, at which point the zzz delay counter rolls back down to zero and starts again (process IDs and timestamps will vary per run):

    [C:\...\PP4E\System\Processes]$ python pipe1.py
    Parent 6716 got [b'Spam 000'] at 1267996104.53
    Parent 6716 got [b'Spam 001'] at 1267996105.54
    Parent 6716 got [b'Spam 002'] at 1267996107.55
    Parent 6716 got [b'Spam 003'] at 1267996110.56
    Parent 6716 got [b'Spam 004'] at 1267996114.57
    Parent 6716 got [b'Spam 000'] at 1267996114.57
    Parent 6716 got [b'Spam 001'] at 1267996115.59
    ...and so on, until interrupted...

† We will clarify the notions of "client" and "server" in the Internet programming part of this book. There, we'll communicate with sockets (which, as we'll see later in this chapter, are roughly like bidirectional pipes for programs running both across networks and on the same machine), but the overall conversation model is similar. Named pipes (fifos), described ahead, are also a better match to the client/server model, because they can be accessed by arbitrary, unrelated processes (no forks are required). But as we'll see, the socket port model is generally used by most Internet scripting protocols -- email, for instance, is mostly just formatted strings shipped over sockets between programs on standard port numbers reserved for the email protocol.
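Incidentally, the os.pipe call itself is available on standard Windows Python even though fork is not. This minimal sketch (not one of the book's examples) exercises both descriptor ends within a single process:

    import os

    pipein, pipeout = os.pipe()      # two low-level descriptors: read and write ends
    os.write(pipeout, b'spam')       # bytes written on the output end...
    print(os.read(pipein, 4))        # ...come back on the input end: b'spam'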