...etc.: Ctrl-C to exit

Notice how the parent received a bytes string through the pipe. Raw pipes normally deal in binary byte strings when their descriptors are used directly this way with the descriptor-based file tools we met in the prior chapter (as we saw there, descriptor read and write tools in os always return and expect byte strings). That's why we also have to manually encode to bytes when writing in the child--the string formatting operation is not available on bytes. As the next section shows, it's also possible to wrap a pipe descriptor in a text-mode file object, much as we did in the file examples of the prior chapter, but that object simply performs encoding and decoding automatically on transfers; it's still bytes in the pipe.

Wrapping Pipe Descriptors in File Objects

If you look closely at the preceding output, you'll see that when the child's delay counter hits zero, the parent ends up reading two messages from the pipe at the same time; the child wrote two distinct messages, but on some platforms or configurations (other than that used here), they might be interleaved or processed close enough in time to be fetched as a single unit by the parent. Really, the parent blindly asks to read, at most, 32 bytes each time, but it gets back whatever text is available in the pipe, when it becomes available.

To distinguish messages better, we can mandate a separator character in the pipe. An end-of-line makes this easy, because we can wrap the pipe descriptor in a file object with os.fdopen and rely on the file object's readline method to scan up through the next \n separator in the pipe. This also lets us leverage the more powerful tools of the text-mode file object we met in the prior chapter. The next example implements this scheme for the parent's end of the pipe.

Example: PP4E\System\Processes\pipe2.py

    # same as pipe1.py, but wrap pipe input in stdio file object
    # to read by line, and close unused pipe fds in both processes

    import os, time

    def child(pipeout):
        zzz = 0
        while True:
            time.sleep(zzz)                          # make parent wait
            msg = ('Spam %03d\n' % zzz).encode()     # pipes are binary in 3.X
            os.write(pipeout, msg)                   # send to parent
            zzz = (zzz+1) % 5                        # roll to 0 at 5

    def parent():
        pipein, pipeout = os.pipe()                  # make 2-ended pipe
        if os.fork() == 0:                           # in child, write to pipe
            os.close(pipein)                         # close input side here
            child(pipeout)
        else:                                        # in parent, listen to pipe
            os.close(pipeout)                        # close output side here
            pipein = os.fdopen(pipein)               # make text mode input file object
            while True:
                line = pipein.readline()[:-1]        # blocks until data sent
                print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

    parent()

This version has also been augmented to close the unused end of the pipe in each process (e.g., after the fork, the parent process closes its copy of the output side of the pipe written by the child); programs should close unused pipe ends in general. Running with this new version reliably returns a single child message to the parent each time it reads from the pipe, because they are separated with markers when written:

    [C:\...\PP4E\System\Processes]$ python pipe2.py
    Parent 6716 got [Spam 000] at 1267996104.53
    Parent 6716 got [Spam 001] at 1267996105.54
    Parent 6716 got [Spam 002] at 1267996107.55
    Parent 6716 got [Spam 003] at 1267996110.56
    Parent 6716 got [Spam 004] at 1267996114.57
    Parent 6716 got [Spam 000] at 1267996114.58
    Parent 6716 got [Spam 001] at 1267996115.59
    Parent 6716 got [Spam 002] at 1267996117.60
    ...etc.: Ctrl-C to exit

Notice that this version's reads also return text data as str objects now, per the default text mode for os.fdopen. As mentioned, pipes normally deal in binary byte strings when their descriptors are used directly with os file tools, but wrapping in text-mode files allows us to use str strings to represent text data instead of bytes. In this example, bytes are decoded to str when read by the parent. Using os.fdopen and text mode in the child would allow us to avoid its manual encoding call, but the file object would encode the str data anyhow (though the encoding is trivial for ASCII bytes like those used here). As for simple files, the best mode for processing pipe data is determined by its nature.
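
To make that option concrete, here is a minimal sketch (not part of the book's examples package) of the child recoded to wrap its pipe descriptor in a text-mode file object as well; note the manual flush, which matters because file objects buffer output, a topic we'll return to ahead:

    # a minimal sketch: the child's end wrapped in a text-mode file
    # object too; str text is encoded to bytes automatically on writes,
    # but it's still bytes in the pipe

    import os, time

    def child(pipeout):
        pipeout = os.fdopen(pipeout, 'w')      # text-mode output file object
        zzz = 0
        while True:
            time.sleep(zzz)
            pipeout.write('Spam %03d\n' % zzz) # no manual encode() needed
            pipeout.flush()                    # file objects buffer output
            zzz = (zzz+1) % 5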

Anonymous Pipes and Threads

Although the os.fork call required by the prior section's examples isn't available on standard Windows Python, os.pipe is. Because threads all run in the same process and share file descriptors (and global memory in general), this makes anonymous pipes usable as a communication and synchronization device for threads, too. This is an arguably lower-level mechanism than queues or shared names and objects, but it provides an additional IPC option for threads. The next example, for instance, demonstrates the same type of pipe-based communication occurring between threads instead of processes.

Example: PP4E\System\Processes\pipe-thread.py

    # anonymous pipes and threads, not processes; this version works on Windows

    import os, time, threading

    def child(pipeout):
        zzz = 0
        while True:
            time.sleep(zzz)                          # make parent wait
            msg = ('Spam %03d' % zzz).encode()       # pipes are binary bytes
            os.write(pipeout, msg)                   # send to parent
            zzz = (zzz+1) % 5                        # goto 0 after 4

    def parent(pipein):
        while True:
            line = os.read(pipein, 32)               # blocks until data sent
            print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))

    pipein, pipeout = os.pipe()
    threading.Thread(target=child, args=(pipeout,)).start()
    parent(pipein)

Since threads work on standard Windows Python, this script does too. The output is similar here, but the speakers are in-process threads, not processes (note that because of its simple-minded infinite loops, at least one of its threads may not die on a Ctrl-C--on Windows, you may need to use Task Manager to kill the python.exe process running this script, or close its window to exit):

    C:\...\PP4E\System\Processes> pipe-thread.py
    Parent 8876 got [b'Spam 000'] at 1268002993.88
    Parent 8876 got [b'Spam 001'] at 1268002994.89
    Parent 8876 got [b'Spam 002'] at 1268002996.90
    Parent 8876 got [b'Spam 003'] at 1268002999.92
    Parent 8876 got [b'Spam 004'] at 1268003003.92
    Parent 8876 got [b'Spam 000'] at 1268003003.94
    Parent 8876 got [b'Spam 001'] at 1268003004.94
    Parent 8876 got [b'Spam 002'] at 1268003006.96
    ...etc.: Ctrl-C or Task Manager to exit

Bidirectional IPC with Anonymous Pipes

Pipes normally let data flow in only one direction--one side is input, one is output. What if you need your programs to talk back and forth, though? For example, one program might send another a request for information and then wait for that information to be sent back. A single pipe can't generally handle such bidirectional conversations, but two pipes can: one pipe can be used to pass requests to a program, and another can be used to ship replies back to the requestor.

This really does have real-world applications. For instance, I once added a GUI interface to a command-line debugger for a C-like programming language by connecting two processes with pipes this way. The GUI ran as a separate process that constructed and sent commands to the debugger's input stream and parsed the results
that showed up in the debugger's output stream pipe. In effect, the GUI acted like a programmer typing commands at a keyboard, and a client to the debugger server. More generally, by spawning command-line programs with streams attached by pipes, systems can add new interfaces to legacy programs. In fact, we'll see a simple example of this sort of GUI program structure later in this book.

The module in the next example demonstrates one way to apply this idea to link the input and output streams of two programs. Its spawn function forks a new child program and connects the input and output streams of the parent to the output and input streams of the child. That is, when the parent reads from its standard input, it is reading text sent to the child's standard output, and when the parent writes to its standard output, it is sending data to the child's standard input. The net effect is that the two independent programs communicate by speaking over their standard streams.

Example: PP4E\System\Processes\pipes.py

    """
    spawn a child process/program, connect my stdin/stdout to child process's
    stdout/stdin--my reads and writes map to output and input streams of the
    spawned program; much like tying together streams with subprocess module;
    """

    import os, sys

    def spawn(prog, *args):                        # pass progname, cmdline args
        stdinFd  = sys.stdin.fileno()              # get descriptors for streams
        stdoutFd = sys.stdout.fileno()             # normally stdin=0, stdout=1

        parentStdin, childStdout = os.pipe()       # make two IPC pipe channels
        childStdin,  parentStdout = os.pipe()      # pipe returns (inputfd, outputfd)
        pid = os.fork()                            # make a copy of this process
        if pid:
            os.close(childStdout)                  # in parent process after fork:
            os.close(childStdin)                   # close child ends in parent
            os.dup2(parentStdin,  stdinFd)         # my sys.stdin copy = pipe1[0]
            os.dup2(parentStdout, stdoutFd)        # my sys.stdout copy = pipe2[1]
        else:
            os.close(parentStdin)                  # in child process after fork:
            os.close(parentStdout)                 # close parent ends in child
            os.dup2(childStdin,  stdinFd)          # my sys.stdin copy = pipe2[0]
            os.dup2(childStdout, stdoutFd)         # my sys.stdout copy = pipe1[1]
            args = (prog,) + args
            os.execvp(prog, args)                  # new program in this process
            assert False, 'execvp failed!'         # os.exec call never returns here

    if __name__ == '__main__':
        mypid = os.getpid()
        spawn('python', 'pipes-testchild.py', 'spam')    # fork child program

        print('Hello 1 from parent', mypid)              # to child's stdin
        sys.stdout.flush()                               # subvert stdio buffering
        reply = input()                                  # from child's stdout
        sys.stderr.write('Parent got: "%s"\n' % reply)   # stderr not tied to pipe!

        print('Hello 2 from parent', mypid)
        sys.stdout.flush()
        reply = sys.stdin.readline()
        sys.stderr.write('Parent got: "%s"\n' % reply[:-1])

The spawn function in this module does not work on standard Windows Python (remember that fork isn't yet available there today). In fact, most of the calls in this module map straight to Unix system calls (and may be arbitrarily terrifying at first glance to non-Unix developers!). We've already met some of these (e.g., os.fork), but much of this code depends on Unix concepts we don't have time to address well in this text. But in simple terms, here is a brief summary of the system calls demonstrated in this code:

os.fork
    Copies the calling process as usual and returns the child's process ID in the parent process only.

os.execvp
    Overlays a new program in the calling process; it's just like the os.execlp used earlier, but takes a tuple or list of command-line argument strings (collected with the *args form in the function header).

os.pipe
    Returns a tuple of file descriptors representing the input and output ends of a pipe, as in earlier examples.

os.close(fd)
    Closes the descriptor-based file fd.

os.dup2(fd1, fd2)
    Copies all system information associated with the file named by the file descriptor fd1 to the file named by fd2.

In terms of connecting standard streams, os.dup2 is the real nitty-gritty here. For example, the call os.dup2(parentStdin, stdinFd) essentially assigns the parent process's stdin file to the input end of one of the two pipes created; all stdin reads will henceforth come from the pipe. By connecting the other end of this pipe to the child process's copy of the stdout stream file with os.dup2(childStdout, stdoutFd), text written by the child to its stdout winds up being routed through the pipe to the parent's stdin stream. The effect is reminiscent of the way we tied together streams with the subprocess module earlier, but this script is more low-level and less portable.
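
If os.dup2 is new to you, it may help to see it outside the pipes example; here is a minimal standalone sketch of the redirection idea, with a made-up filename (descriptor 1 is the process's stdout):

    # a minimal sketch of os.dup2 redirection, separate from pipes.py:
    # after dup2, descriptor 1 (stdout) refers to the opened file, so
    # anything written to it is routed there instead of to the display

    import os, sys

    fd = os.open('dupout.txt', os.O_WRONLY | os.O_CREAT)   # hypothetical file
    os.dup2(fd, sys.stdout.fileno())                       # stdout now = the file
    os.write(1, b'routed to the file, not the screen\n')   # descriptor 1 write
    os.close(fd)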

When run, the pipes.py script spawns the program shown in the next example in a child process, and reads and writes standard streams to converse with it over two pipes.

Example: PP4E\System\Processes\pipes-testchild.py

    import os, time, sys
    mypid     = os.getpid()
    parentpid = os.getppid()
    sys.stderr.write('Child %d of %d got arg: "%s"\n' % (mypid, parentpid, sys.argv[1]))

    for i in range(2):
        time.sleep(3)                 # make parent process wait by sleeping here
        recv = input()                # stdin tied to pipe: comes from parent's stdout
        time.sleep(3)
        send = 'Child %d got: [%s]' % (mypid, recv)
        print(send)                   # stdout tied to pipe: goes to parent's stdin
        sys.stdout.flush()            # make sure it's sent now or else process blocks

The following is our test in action on Cygwin (it's similar on other Unix-like platforms like Linux). Its output is not incredibly impressive to read, but it represents two programs running independently and shipping data back and forth through a pipe device managed by the operating system. This is even more like a client/server model (if you imagine the child as the server, responding to requests sent from the parent). The text in square brackets in this output went from the parent process to the child, and back to the parent again, all through pipes connected to standard streams:

    [C:\...\PP4E\System\Processes]$ python pipes.py
    Child 9228 of 9096 got arg: "spam"
    Parent got: "Child 9228 got: [Hello 1 from parent 9096]"
    Parent got: "Child 9228 got: [Hello 2 from parent 9096]"

Output Stream Buffering Revisited: Deadlocks and Flushes

The two processes of the prior section's example engage in a simple dialog, but it's already enough to illustrate some of the dangers lurking in cross-program communications. First of all, notice that both programs need to write to stderr to display a message; their stdout streams are tied to the other program's input stream. Because processes share file descriptors, stderr is the same in both parent and child, so status messages show up in the same place.

More subtly, note that both parent and child call sys.stdout.flush after they print text to the output stream. Input requests on pipes normally block the caller if no data is available, but it seems that this shouldn't be a problem in our example, because there are as many writes as there are reads on the other side of the pipe. By default, though, sys.stdout is buffered in this context, so the printed text may not actually be transmitted until some time in the future (when the output buffers fill up). In fact, if the flush calls are not made, both processes may get stuck on some platforms waiting for input from the other--input that is sitting in a buffer and is never flushed out over the pipe; they can wind up deadlocked, both blocked on input calls waiting for events that never
occur.

Technically, by default stdout is just line-buffered when connected to a terminal, but it is fully buffered when connected to other devices, such as files, sockets, and the pipes used here. This is why you see a script's printed text in a shell window immediately as it is produced, but not until the process exits or its buffer fills when its output stream is connected to something else.

This output buffering is really a function of the system libraries used to access pipes, not of the pipes themselves (pipes do queue up output data, but they never hide it from readers!). In fact, it appears to occur in this example only because we copy the pipe's information over to sys.stdout, a built-in file object that uses stream buffering by default. However, such anomalies can also occur when using other cross-process tools.

In general terms, if your programs engage in a two-way dialog like this, there are a variety of ways to avoid buffering-related deadlock problems (the sketch after this list demonstrates the first few in code):

Flushes
    As demonstrated in this section's examples, manually flushing output pipe streams by calling the file object flush method is an easy way to force buffers to be cleared. Use sys.stdout.flush for the output stream used by print.

Arguments
    As introduced earlier in this chapter, the -u Python command-line flag turns off full buffering for the sys.stdout stream in Python programs. Setting your PYTHONUNBUFFERED environment variable to a nonempty value is equivalent to passing this flag, but applies to every program run.

Open modes
    It's possible to use pipes themselves in unbuffered mode. Either use low-level os module calls to read and write pipe descriptors directly, or pass a buffer size argument of 0 (for unbuffered) or 1 (for line-buffered) to os.fdopen to disable buffering in the file object used to wrap the descriptor. You can use open arguments the same way to control buffering for output to fifo files (described in the next section). Note that in Python 3.X, fully unbuffered mode is allowed only for binary-mode files, not text.

Command pipes
    As mentioned earlier in this chapter, you can similarly specify buffering mode arguments for command-line pipes when they are created by os.popen and subprocess.Popen, but this pertains to the caller's end of the pipe, not those of the spawned program. Hence, it cannot prevent delayed outputs from the latter, but it can be used for text sent to another program's input pipe.

Sockets
    As we'll see later, the socket.makefile call accepts a similar buffering mode argument for sockets (described later in this chapter and book), but in Python 3.X this call requires buffering for text-mode access and appears to not support line-buffered mode (more on this later in the book).

Tools
    For more complex tasks, we can also use higher-level tools that essentially fool a program into believing it is connected to a terminal. These address programs whose buffering behavior we cannot otherwise control; see the sidebar "More on Stream Buffering: pty and pexpect" ahead.
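
Here is a minimal sketch of the first three options in code (not one of the book's examples); the pipe stands in for any output device, and the comments note the Python 3.X constraints mentioned above:

    # a minimal sketch of the buffering controls described in the
    # preceding list; the pipe here stands in for any output device

    import os, sys

    readfd, writefd = os.pipe()

    sys.stdout.flush()                     # flushes: force any buffered text out now

    pipeout = os.fdopen(writefd, 'wb', 0)  # open modes: unbuffered (binary only in 3.X)
    pipeout.write(b'sent at once\n')       # reaches the pipe without a flush call

    # arguments: running a script as "python -u script.py", or setting
    # PYTHONUNBUFFERED to a nonempty value, unbuffers sys.stdout instead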
see "more on stream bufferingpty and pexpecton page thread can avoid blocking main guitoobut really just delegate the problem (the spawned thread will still be deadlockedof the options listedthe first two--manual flushes and command-line arguments--are often the simplest solutions in factbecause it is so usefulthe second technique listed above merits few more words try thiscomment-out all the sys stdout flush calls in examples - and - (the files pipes py and pipes-testchild pyand change the parent' spawn call in pipes py to this ( add - command-line argument)spawn('python''- ''pipes-testchild py''spam'then start the program with command line like thispython - pipes py it will work as it did with the manual stdout flush callsbecause stdout will be operating in unbuffered mode in both parent and child we'll revisit the effects of unbuffered output streams in where we'll code simple gui that displays the output of non-gui program by reading it over both nonblocking socket and pipe in thread we'll explore the topic again in more depth in where we will redirect standard streams to sockets in more general ways deadlock in generalthoughis bigger problem than we have space to address fully here on the other handif you know enough that you want to do ipc in pythonyou're probably already veteran of the deadlock wars anonymous pipes allow related tasks to communicate but are not directly suited for independently launched programs to allow the latter group to conversewe need to move on to the next section and explore devices that have broader visibility more on stream bufferingpty and pexpect on unix-like platformsyou may also be able to use the python pty standard library module to force another program' standard output to be unbufferedespecially if it' not python program and you cannot change its code technicallydefault buffering for stdout in other programs is determined outside python by whether the underlying file descriptor refers to terminal this occurs in the stdio file system library and cannot be controlled by the spawning program in generaloutput to terminals is line bufferedand output to nonterminals (including filespipesand socketsis fully buffered this policy is used for efficiency files and streams created within python script follow the same defaultsbut you can specify buffering policies in python' file creation tools the pty module essentially fools the spawned program into thinking it is connected to terminal so that only one line is buffered for stdout the net effect is that each newline flushes the prior line--typical of interactive programsand what you need if you wish to grab each piece of the printed output as it is produced notehoweverthat the pty module is not required for this role when spawning python scripts with pipessimply use the - python command-line flagpass line-buffered mode interprocess communication
program. The pty module is also not available on all Python platforms today (most notably, it runs on Cygwin, but not the standard Windows Python).

The pexpect package, a pure-Python equivalent of the Unix expect program, uses pty to provide additional functionality and to handle interactions that bypass standard streams (e.g., password inputs). See the Python library manual for more on pty, and search the Web for pexpect.

Named Pipes (Fifos)

On some platforms, it is also possible to create a long-lived pipe that exists as a real named file in the filesystem. Such files are called named pipes (or, sometimes, fifos) because they behave just like the pipes created by the previous section's programs. Because fifos are associated with a real file on your computer, though, they are external to any particular program--they do not rely on memory shared between tasks, and so they can be used as an IPC mechanism for threads, processes, and independently launched programs.

Once a named pipe file is created, clients open it by name and read and write data using normal file operations. Fifos are unidirectional streams. In typical operation, a server program reads data from the fifo, and one or more client programs write data to it. In addition, a set of two fifos can be used to implement bidirectional communication, just as we did for anonymous pipes in the prior section.

Because fifos reside in the filesystem, they are longer-lived than in-process anonymous pipes and can be accessed by programs started independently. The unnamed, in-process pipe examples thus far depend on the fact that file descriptors (including pipes) are copied to child processes' memory; that makes it difficult to use anonymous pipes to connect programs started independently. With fifos, pipes are accessed instead by a filename visible to all programs running on the computer, regardless of any parent/child process relationships. In fact, like normal files, fifos typically outlive the programs that access them. Unlike normal files, though, the operating system synchronizes fifo access, making them ideal for IPC.

Because of their distinctions, fifo pipes are better suited as general IPC mechanisms for independent client and server programs. For instance, a perpetually running server program may create and listen for requests on a fifo that can be accessed later by arbitrary clients not forked by the server. In a sense, fifos are an alternative to the socket port interface we'll meet in the next section. Unlike sockets, though, fifos do not directly support remote network connections, are not available in standard Windows Python today, and are accessed using the standard file interface instead of the more unique socket port numbers and calls we'll study later.

In Python, named pipe files are created with the os.mkfifo call, which is available today on Unix-like platforms, including Cygwin's Python on Windows, but is not currently available in standard Windows Python. This call creates only the external file, though; to send and receive data through a fifo, it must be opened and processed as if it were a standard file.

To illustrate, the next example is a derivation of the pipe2.py script listed earlier, but rewritten here to use fifos rather than anonymous pipes. Much like pipe2.py, this script opens the fifo using os.open in the child, for low-level byte string access, but with the open built-in in the parent, to treat the pipe as text; in general, either end may use either technique to treat the pipe's data as bytes or text.

Example: PP4E\System\Processes\pipefifo.py

    """
    named pipes; os.mkfifo is not available on Windows (without Cygwin);
    there is no reason to fork here, since fifo file pipes are external
    to processes--shared fds in parent/child processes are irrelevant;
    """

    import os, time, sys
    fifoname = '/tmp/pipefifo'                       # must open same name

    def child():
        pipeout = os.open(fifoname, os.O_WRONLY)     # open fifo pipe file as fd
        zzz = 0
        while True:
            time.sleep(zzz)
            msg = ('Spam %03d\n' % zzz).encode()     # binary as opened here
            os.write(pipeout, msg)
            zzz = (zzz+1) % 5

    def parent():
        pipein = open(fifoname, 'r')                 # open fifo as text file object
        while True:
            line = pipein.readline()[:-1]            # blocks until data sent
            print('Parent %d got "%s" at %s' % (os.getpid(), line, time.time()))

    if __name__ == '__main__':
        if not os.path.exists(fifoname):
            os.mkfifo(fifoname)                      # create a named pipe file
        if len(sys.argv) == 1:
            parent()                                 # run as parent if no args
        else:                                        # else run as child process
            child()

Because the fifo exists independently of both parent and child, there's no reason to fork here. The child may be started independently of the parent, as long as it opens a fifo file by the same name. Here, for instance, on Cygwin, the parent is started in one shell
window, only after the child is started in another and begins writing messages onto the fifo file:

    [C:\...\PP4E\System\Processes]$ python pipefifo.py          # parent window
    Parent 8324 got "Spam 000" at 1268003696.07
    Parent 8324 got "Spam 001" at 1268003697.06
    Parent 8324 got "Spam 002" at 1268003699.07
    Parent 8324 got "Spam 003" at 1268003702.08
    Parent 8324 got "Spam 004" at 1268003706.09
    Parent 8324 got "Spam 000" at 1268003706.09
    Parent 8324 got "Spam 001" at 1268003707.11
    Parent 8324 got "Spam 002" at 1268003709.12
    Parent 8324 got "Spam 003" at 1268003712.13
    Parent 8324 got "Spam 004" at 1268003716.14
    Parent 8324 got "Spam 000" at 1268003716.15
    Parent 8324 got "Spam 001" at 1268003717.17
    ...etc.: Ctrl-C to exit

    [C:\...\PP4E\System\Processes]$ file /tmp/pipefifo          # child window
    /tmp/pipefifo: fifo (named pipe)

    [C:\...\PP4E\System\Processes]$ python pipefifo.py -child
    ...etc.: Ctrl-C to exit

Named pipe use cases

By mapping communication points to a filesystem entity accessible to all programs run on a machine, fifos can address a broad range of IPC goals, on platforms where they are supported. For instance, although this section's example runs independent programs, named pipes can also be used as an IPC device by both in-process threads and directly forked related processes, much as we saw for anonymous pipes earlier.

By also supporting unrelated programs, though, fifo files are more widely applicable to general client/server models. For example, named pipes can make the GUI and command-line debugger integration described earlier for anonymous pipes even more flexible--by using fifo files to connect the GUI to the non-GUI debugger's streams, the GUI could be started independently, when needed (the sketch that follows hints at the structure).
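
To sketch what that two-fifo structure might look like, the following illustration (not one of the book's examples; its filenames and message protocol are made up) runs a request/reply dialog between a server and an independently started client; each open blocks until the other side attaches to the same fifo:

    # a minimal sketch of bidirectional IPC over two fifos; the /tmp
    # filenames and echo protocol here are made up for illustration

    import os, sys

    req, rep = '/tmp/sketchreq', '/tmp/sketchrep'
    for name in (req, rep):
        if not os.path.exists(name):
            os.mkfifo(name)

    if len(sys.argv) == 1:                       # server: reply to requests
        requests = open(req, 'r')                # opens block until both sides
        replies  = open(rep, 'w')                # of each fifo are attached
        for line in requests:
            replies.write('echo: %s' % line)
            replies.flush()                      # fifo writes are buffered too
    else:                                        # client: one request/reply
        requests = open(req, 'w')
        replies  = open(rep, 'r')
        requests.write('ping\n')
        requests.flush()
        print('client got:', replies.readline().rstrip())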

Sockets provide similar functionality, but they also buy us both inherent network awareness and broader portability to Windows--as the next section explains.

Sockets: A First Look

Sockets, implemented by the Python socket module, are a more general IPC device than the pipes we've seen so far. Sockets let us transfer data between programs running on the same computer, as well as programs located on remote networked machines. When used as an IPC mechanism on the same machine, programs connect to sockets by a machine-global port number and transfer data. When used as a networking connection, programs provide both a machine name and a port number to transfer data to a remotely running program.

Although sockets are one of the most commonly used IPC tools, it's impossible to fully grasp their API without also seeing its role in networking. Because of that, we'll defer most of our socket coverage until we can explore their use in network scripting later in this book. This section provides a brief introduction and preview, so you can compare with the prior section's named pipes (i.e., fifos). In short:

- Like fifos, sockets are global across a machine; they do not require shared memory among threads or processes, and are thus applicable to independent programs.

- Unlike fifos, sockets are identified by port number, not filesystem path name; they employ a very different nonfile API, though they can be wrapped in a file-like object; and they are more portable: they work on nearly every Python platform, including standard Windows Python.

- In addition, sockets support networking roles that go beyond both IPC and this chapter's scope.

To illustrate the basics, though, the next example launches a server and five clients in threads running in parallel on the same machine, to communicate over a socket--because all threads connect to the same port, the server consumes the data added by each of the clients.

Example: PP4E\System\Processes\socket_preview.py

    """
    sockets for cross-task communication: start threads to communicate
    over sockets; independent programs can too, because sockets are
    system-wide, much like fifos; see the GUI and Internet parts of this
    book for more realistic socket use cases; some socket servers may
    also need to talk to clients in threads or processes; sockets pass
    byte strings, but can be pickled objects or encoded Unicode text;
    caveat: prints in threads may need to be synchronized if their
    output overlaps;
    """

    from socket import socket, AF_INET, SOCK_STREAM    # portable socket api

    port = 50008                 # port number identifies socket on machine
    host = 'localhost'           # server and client run on same local machine here

    def server():
        sock = socket(AF_INET, SOCK_STREAM)       # ip addresses tcp connection
        sock.bind(('', port))                     # bind to port on this machine
        sock.listen(5)                            # allow up to 5 pending clients
        while True:
            conn, addr = sock.accept()            # wait for client to connect
            data = conn.recv(1024)                # read bytes data from this client
            reply = 'server got: [%s]' % data     # conn is a new connected socket
            conn.send(reply.encode())             # send bytes reply back to client

    def client(name):
        sock = socket(AF_INET, SOCK_STREAM)
        sock.connect((host, port))                # connect to a socket port
        sock.send(name.encode())                  # send bytes data to listener
        reply = sock.recv(1024)                   # receive bytes data from listener
        print('client got: [%s]' % reply)         # up to 1024 bytes in message

    if __name__ == '__main__':
        from threading import Thread
        sthread = Thread(target=server)
        sthread.daemon = True                     # don't wait for server thread
        sthread.start()                           # do wait for children to exit
        for i in range(5):
            Thread(target=client, args=('client%s' % i,)).start()

Study this script's code and comments to see how the socket objects' methods are used to transfer data. In a nutshell, with this type of socket, the server accepts a client connection, which by default blocks until a client requests service, and returns a new socket connected to the client. Once connected, the client and server transfer byte strings by using send and receive calls instead of writes and reads, though as we'll see later in the book, sockets can be wrapped in file objects much as we did earlier for pipe descriptors. Also like pipe descriptors, unwrapped sockets deal in binary byte strings, not text str; that's why string formatting results are manually encoded again here.

Here is this script's output on Windows:

    C:\...\PP4E\System\Processes> socket_preview.py
    client got: [b"server got: [b'client1']"]
    client got: [b"server got: [b'client3']"]
    client got: [b"server got: [b'client4']"]
    client got: [b"server got: [b'client2']"]
    client got: [b"server got: [b'client0']"]

This output isn't much to look at, but each line reflects data sent from client to server and then back again: the server receives a bytes string from a connected client and echoes it back in a larger reply string. Because all threads run in parallel, the order in which the clients are served is random on this machine.

Sockets and Independent Programs

Although sockets work for threads, the shared memory model of threads often allows them to employ simpler communication devices, such as shared names and objects and queues. Sockets tend to shine brighter when used for IPC by separate processes and independently launched programs. The next example, for instance, reuses the server and client functions of the prior example, but runs them in both processes and threads of independently launched programs.

Example: PP4E\System\Processes\socket-preview-progs.py

    """
    same socket, but talk between independent programs too, not just
    threads; server here runs in a process and serves both process and
    thread clients; sockets are machine-global, much like fifos: don't
    require shared memory
    """

    from socket_preview import server, client     # both use same port number
    import sys, os
    from threading import Thread

    mode = int(sys.argv[1])
    if mode == 1:                                      # run server in this process
        server()
    elif mode == 2:                                    # run client in this process
        client('client:process=%s' % os.getpid())
    else:                                              # run 5 client threads in process
        for i in range(5):
            Thread(target=client, args=('client:thread=%s' % i,)).start()

Let's run this script on Windows, too (again, this portability is a major advantage of sockets). First, start the server in a process as an independently launched program in its own window; this process runs perpetually, waiting for clients to request connections (and as for our prior pipe example, you may need to use Task Manager or a window close to kill the server process eventually):

    C:\...\PP4E\System\Processes> socket-preview-progs.py 1

Now, in another window, run a few clients in both processes and threads, by launching them as independent programs--using 2 as the command-line argument runs a single client process, but 3 spawns five threads to converse with the server in parallel:

    C:\...\PP4E\System\Processes> socket-preview-progs.py 2
    client got: [b"server got: [b'client:process=7384']"]

    C:\...\PP4E\System\Processes> socket-preview-progs.py 2
    client got: [b"server got: [b'client:process=7604']"]

    C:\...\PP4E\System\Processes> socket-preview-progs.py 3
    client got: [b"server got: [b'client:thread=1']"]
    client got: [b"server got: [b'client:thread=2']"]
    client got: [b"server got: [b'client:thread=0']"]
    client got: [b"server got: [b'client:thread=3']"]
    client got: [b"server got: [b'client:thread=4']"]

    C:\...\PP4E\System\Processes> socket-preview-progs.py 3
    client got: [b"server got: [b'client:thread=3']"]
    client got: [b"server got: [b'client:thread=1']"]
    client got: [b"server got: [b'client:thread=2']"]
    client got: [b"server got: [b'client:thread=4']"]
    client got: [b"server got: [b'client:thread=0']"]

    C:\...\PP4E\System\Processes> socket-preview-progs.py 2
    client got: [b"server got: [b'client:process=6428']"]
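
Incidentally, we can preview here the earlier point that sockets may be wrapped in file-like objects; this minimal sketch (not one of the book's examples) uses the socket object's standard makefile method, with socket.socketpair, available on Unix-like platforms, supplying two connected sockets to demonstrate with:

    # a minimal sketch: sockets wrapped in file-like objects with their
    # standard makefile method, so text-mode write/readline can be used
    # instead of the binary send/recv calls

    from socket import socketpair

    sockA, sockB = socketpair()               # two connected sockets (Unix)
    wfile = sockA.makefile('w')               # text-mode wrappers: str data
    rfile = sockB.makefile('r')

    wfile.write('wrapped in a file\n')        # encoded and buffered on the way
    wfile.flush()                             # flush to actually transfer
    print(rfile.readline().rstrip())          # readline scans to \n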

Socket use cases

This section's examples illustrate the basic IPC role of sockets, but this only hints at their full utility. Despite their seemingly limited byte string nature, higher-order use cases for sockets are not difficult to imagine. With a little extra work, for instance:

- Arbitrary Python objects can be transferred over sockets, too, by shipping the serialized byte strings produced by Python's pickle module, introduced earlier and covered in full later in this book.

- As we'll see later, the printed output of a simple script can be redirected to a GUI window, by connecting the script's output stream to a socket on which a GUI is listening in nonblocking mode.

- Programs that fetch arbitrary text off the Web might read it as byte strings over sockets, but manually decode it using encoding names embedded in content-type headers or tags in the data itself.

- In fact, the entire Internet can be seen as a socket use case--as we'll see later in this book, at the bottom, email, FTP, and web pages are largely just formatted byte string messages shipped over sockets.

Plus any other context in which programs exchange data--sockets are a general, portable, and flexible tool. For instance, they would provide the same utility as fifos for the GUI/debugger example used earlier, but would also work in Python on Windows, and would even allow the GUI to connect to a debugger running on a different computer altogether. As such, they are seen by many as a more powerful IPC tool.

Again, you should consider this section just a preview; because the grander socket story also entails networking concepts, we'll defer a more in-depth look at the socket API until later in this book. We'll also see sockets again briefly in the GUI stream redirection use case listed above, and we'll explore a variety of additional socket use cases in the Internet part of this book. In Part IV, for instance, we'll use sockets to transfer entire files and write more robust socket servers that spawn threads or processes to converse with clients, to avoid denying connections. For the purposes of this chapter, let's move on to one last traditional IPC tool--the signal.

Signals

For lack of a better analogy, signals are a way to poke a stick at a process. Programs generate signals to trigger a handler for that signal in another process. The operating system pokes, too--some signals are generated on unusual system events and may kill the program if not handled. If this sounds a little like raising exceptions in Python, it should; signals are software-generated events, and the cross-process analog of exceptions. Unlike exceptions, though, signals are identified by number, are not stacked, and are really an asynchronous event mechanism outside the scope of the Python interpreter, controlled by the operating system.

In order to make signals available to scripts, Python provides a signal module that allows Python programs to register Python functions as handlers for signal events. This module is available on both Unix-like platforms and Windows (though the Windows version may define fewer kinds of signals to be caught). To illustrate the basic signal interface, the script in the next example installs a Python signal handler function for the signal
number passed in as a command-line argument.

Example: PP4E\System\Processes\signal1.py

    """
    catch signals in Python; pass a signal number as a command-line arg,
    and use a "kill -N pid" shell command to send this process a signal;
    most signal handlers are restored by Python after being caught (see
    the network scripting chapter for SIGCHLD details); on Windows, the
    signal module is available, but it defines only a few signal types
    there, and os.kill is missing;
    """

    import sys, signal, time

    def now():
        return time.ctime(time.time())             # current time string

    def onSignal(signum, stackframe):              # python signal handler
        print('Got signal', signum, 'at', now())   # most handlers stay in effect

    signum = int(sys.argv[1])
    signal.signal(signum, onSignal)                # install signal handler
    while True: signal.pause()                     # wait for signals (or: pass)

There are only two signal module calls at work here:

signal.signal
    Takes a signal number and function object and installs that function to handle that signal number when it is raised. Python automatically restores most signal handlers when signals occur, so there is no need to recall this function within the signal's handler itself in order to reregister the handler. That is, except for SIGCHLD, a signal handler remains installed until explicitly reset (e.g., by setting the handler to SIG_DFL to restore default behavior, or to SIG_IGN to ignore the signal). SIGCHLD behavior is platform specific.

signal.pause
    Makes the process sleep until the next signal is caught. A time.sleep call is similar but doesn't work with signals on my Linux box; it generates an interrupted system call error. A busy while True: pass loop here would pause the script, too, but may squander CPU resources.

Here is what this script looks like running on Cygwin on Windows (it works the same on other Unix-like platforms, like Linux). A signal number to watch for (12) is passed in on the command line, and the program is made to run in the background with an & shell operator (available in most Unix-like shells):

    [C:\...\PP4E\System\Processes]$ python signal1.py 12 &
    [1] 7196

    [C:\...\PP4E\System\Processes]$ ps
          PID    PPID    PGID     WINPID   TTY     UID    STIME COMMAND
         7120       1    7120       7120   con    1004 18:37:58 /usr/bin/bash
         7196    7120    7196       6740   con    1004 18:39:31 /usr/local/bin/python
         6712    7120    6712       6712   con    1004 18:41:43 /usr/bin/ps

    [C:\...\PP4E\System\Processes]$ kill -12 7196
    Got signal 12 at Sun Mar  7 18:42:27 2010

    [C:\...\PP4E\System\Processes]$ kill -12 7196
    Got signal 12 at Sun Mar  7 18:42:29 2010

    [C:\...\PP4E\System\Processes]$ kill -9 7196
    [1]+  Killed       python signal1.py 12

Inputs and outputs can be a bit jumbled here, because the process prints to the same screen used to type new shell commands. To send the program a signal, the kill shell command takes a signal number and a process ID to be signaled (7196 here); every time a new kill command sends a signal, the process replies with a message generated by a Python signal handler function. Signal 9 always kills the process altogether.

The signal module also exports a signal.alarm function for scheduling a SIGALRM signal to occur at some number of seconds in the future. To trigger and catch timeouts, set the alarm and install a SIGALRM handler, as shown in the next example.

Example: PP4E\System\Processes\signal2.py

    """
    set and catch alarm timeout signals in Python; time.sleep doesn't
    play well with alarm (or signal in general on my Linux PC), so we
    call signal.pause here to do nothing until a signal is received;
    """

    import sys, signal, time

    def now():
        return time.asctime()

    def onSignal(signum, stackframe):              # python signal handler
        print('Got alarm', signum, 'at', now())    # most handlers stay in effect

    while True:
        print('Setting at', now())
        signal.signal(signal.SIGALRM, onSignal)    # install signal handler
        signal.alarm(5)                            # do signal in 5 seconds
        signal.pause()                             # wait for signals

Running this script on Cygwin on Windows causes its onSignal handler function to be invoked every five seconds:

    [C:\...\PP4E\System\Processes]$ python signal2.py
    Setting at Sun Mar  7 18:43:12 2010
    Got alarm 14 at Sun Mar  7 18:43:17 2010
    Setting at Sun Mar  7 18:43:17 2010
    Got alarm 14 at Sun Mar  7 18:43:22 2010
    Setting at Sun Mar  7 18:43:22 2010
    Got alarm 14 at Sun Mar  7 18:43:27 2010
    Setting at Sun Mar  7 18:43:27 2010
    Got alarm 14 at Sun Mar  7 18:43:32 2010
    ...etc.: Ctrl-C to exit
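
One common application of this pairing is imposing a timeout on a blocking operation on Unix-like platforms; here is a minimal sketch (not one of the book's examples; the 5-second limit and exception class are arbitrary):

    # a minimal sketch: using signal.alarm to time out a blocking call;
    # raising an exception in the handler breaks out of the blocked call

    import signal

    class Timeout(Exception): pass

    def onAlarm(signum, stackframe):
        raise Timeout()                            # propagates to the caller

    signal.signal(signal.SIGALRM, onAlarm)
    signal.alarm(5)                                # fire SIGALRM in 5 seconds
    try:
        reply = input('type something: ')          # blocks until input or alarm
        signal.alarm(0)                            # cancel the pending alarm
        print('got:', reply)
    except Timeout:
        print('timed out: no input in time')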

Generally speaking, signals must be used with caution, in ways not made obvious by the examples we've just seen. For instance, some system calls don't react well to being interrupted by signals, and only the main thread can install signal handlers and respond to signals in a multithreaded program.

When used well, though, signals provide an event-based communication mechanism. They are less powerful than data streams such as pipes, but they are sufficient in situations in which you just need to tell a program that something important has occurred, and don't need to pass along any details about the event itself. Signals are sometimes also combined with other IPC tools. For example, an initial signal may inform a program that a client wishes to communicate over a named pipe--the equivalent of tapping someone's shoulder to get their attention before speaking. Most platforms reserve one or more SIGUSR signal numbers for user-defined events of this sort. Such an integration structure is sometimes an alternative to running a blocking input call in a spawned thread.

See also the os.kill(pid, sig) call for sending signals to known processes from within a Python script on Unix-like platforms, much like the kill shell command used earlier; the required process ID can be obtained from the os.fork call's child process ID return value, or from other interfaces. Like os.fork, this call is also available in Cygwin Python, but not in standard Windows Python. Also watch for the discussion later in this book about using signal handlers to clean up "zombie" processes.
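
To make the os.kill option concrete, here is a minimal sketch for Unix-like platforms (not one of the book's examples); the parent signals a forked child by its process ID, and reaps it at the end to avoid a zombie:

    # a minimal sketch of os.kill on Unix-like platforms: the parent
    # signals a forked child by process ID instead of a shell command

    import os, time, signal

    def onSignal(signum, stackframe):
        print('child got signal', signum)

    pid = os.fork()
    if pid == 0:                                   # in the child process
        signal.signal(signal.SIGUSR1, onSignal)    # install user-event handler
        while True: signal.pause()                 # wait for pokes from parent
    else:                                          # in the parent process
        time.sleep(1)                              # let child install handler
        for i in range(3):
            os.kill(pid, signal.SIGUSR1)           # poke a stick at the child
            time.sleep(1)
        os.kill(pid, signal.SIGTERM)               # then really kill it
        os.waitpid(pid, 0)                         # reap child: avoid a zombie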

The multiprocessing Module

Now that you know about IPC alternatives and have had a chance to explore processes, threads, and both process nonportability and thread GIL limitations, it turns out that there is another alternative, which aims to provide just the best of both worlds. As mentioned earlier, Python's standard library multiprocessing module package allows scripts to spawn processes using an API very similar to the threading module.

This relatively new package works on both Unix and Windows, unlike low-level process forks. It supports a process spawning model which is largely platform-neutral, and it provides tools for related goals, such as IPC, including locks, pipes, and queues. In addition, because it uses processes instead of threads to run code in parallel, it effectively works around the limitations of the thread GIL. Hence, multiprocessing allows the programmer to leverage the capacity of multiple processors for parallel tasks, while retaining much of the simplicity and portability of the threading model.

Why multiprocessing?

So why learn yet another parallel processing paradigm and toolkit, when we already have the threads, processes, and IPC tools like sockets, pipes, and thread queues that we've already met? Before we get to the code, here are a few words about why you may (or may not) care about this package.

In more specific terms, although this module's performance may not compete with that of pure threads or process forks for some applications, this module offers a compelling solution for many:

- Compared to raw process forks, you gain cross-platform portability and powerful IPC tools.

- Compared to threads, you essentially trade some potential and platform-dependent extra task start-up time for the ability to run tasks in truly parallel fashion on multi-core or multi-CPU machines.

On the other hand, this module imposes some constraints and tradeoffs that threads do not:

- Since objects are copied across process boundaries, shared mutable state does not work as it does for threads--changes made in one process are not generally noticed in the other. Really, freely shared state may be the most compelling reason to use threads; its absence in this module may prove limiting in some threading contexts.

- Because this module requires pickleability for its processes on Windows, as well as for some of its IPC tools in general, some coding paradigms are difficult or nonportable--especially if they use bound methods or pass unpickleable objects such as sockets to spawned processes.

For instance, common coding patterns with lambda that work for the threading module cannot be used as process target callables in this module on Windows, because they cannot be pickled. Similarly, because bound object methods are also not pickleable, a threaded program may require a more indirect design if it either runs bound methods in its threads or implements thread exit actions by posting arbitrary callables (possibly including bound methods) on shared queues. The in-process model of threads supports such direct lambda and bound method use, but the separate processes of multiprocessing do not.

In fact, we'll write a thread manager for GUIs later in this book that relies on queueing in-process callables this way to implement thread exit actions--the callables are queued by worker threads, and fetched and dispatched by the main thread. Because the threaded PyMailGUI program we'll code later both uses this manager to queue bound methods for thread exit actions and runs bound methods as the main action of a thread itself, it could not be directly translated to the separate process model implied by multiprocessing.

Without getting into too many details here, to use multiprocessing, PyMailGUI's actions might have to be coded as simple functions or complete process subclasses for pickleability. Worse, they may have to be implemented as simpler action identifiers dispatched in the main process, if they update either the GUI itself or object state in general--pickling results in an object copy in the receiving process, not a reference to the original, and forks on Unix essentially copy an entire process; updating state in a copy
has no effect on the original.

The pickleability constraints for process arguments on Windows can limit multiprocessing's scope in other contexts as well. For instance, later in this book we'll find that this module doesn't directly solve the lack of portability of the os.fork call for traditionally coded socket servers on Windows, because connected sockets are not pickled correctly when passed into a new process created by this module. In this context, threads provide a more portable and likely more efficient solution.

Applications that pass simpler types of messages, of course, may fare better. Message constraints are easier to accommodate when they are part of an initial process-based design. Moreover, other tools in this module, such as its managers and shared memory API, while narrowly focused and not as general as shared thread state, offer additional mutable state options for some programs.

Fundamentally, though, because multiprocessing is based on separate processes, it may be best geared for tasks which are relatively independent, do not share mutable object state freely, and can make do with the message passing and shared memory tools provided by this module. This includes many applications, but this module is not necessarily a direct replacement for every threaded program, and it is not an alternative to process forks in all contexts.

To truly understand both this module package's benefits, as well as its tradeoffs, let's turn to a first example and explore this package's implementation along the way.

The Basics: Processes and Locks

We don't have space to do full justice to this sophisticated module in this book; see its coverage in the Python library manual for the full story. But as a brief introduction, by design most of this module's interfaces mirror the threading and queue modules we've already met, so they should already seem familiar. For example, the multiprocessing module's Process class is intended to mimic the threading module's Thread class we met earlier--it allows us to launch a function call in parallel with the calling script; with this module, though, the function runs in a process instead of a thread. The next example illustrates these basics in action.

Example: PP4E\System\Processes\multi1.py

    """
    multiprocess basics: Process works like threading.Thread, but runs a
    function call in parallel in a process instead of a thread; locks can
    be used to synchronize, e.g. prints on some platforms; starts a new
    interpreter on Windows, forks a new process on Unix;
    """

    import os
    from multiprocessing import Process, Lock

    def whoami(label, lock):
        msg = '%s: name:%s, pid:%s'
        with lock:
            print(msg % (label, __name__, os.getpid()))

    if __name__ == '__main__':
        lock = Lock()
        whoami('function call', lock)

        p = Process(target=whoami, args=('spawned child', lock))
        p.start()
        p.join()

        for i in range(5):
            Process(target=whoami, args=(('run process %s' % i), lock)).start()

        with lock:
            print('Main process exit.')

When run, this script first calls a function directly and in-process; then launches a call to that function in a new process and waits for it to exit; and finally spawns five function call processes in parallel in a loop--all using an API identical to that of the threading.Thread model we studied earlier in this chapter. Here's this script's output on Windows; notice how the five child processes spawned at the end of this script outlive their parent, as is the usual case for processes:

    C:\...\PP4E\System\Processes> multi1.py
    function call: name:__main__, pid:8752
    spawned child: name:__main__, pid:9268
    Main process exit.
    run process 3: name:__main__, pid:9296
    run process 1: name:__main__, pid:8792
    run process 4: name:__main__, pid:2224
    run process 2: name:__main__, pid:8716
    run process 0: name:__main__, pid:6936

Just like the threading.Thread class we met earlier, the multiprocessing.Process object can either be passed a target with arguments (as done here) or subclassed to redefine its run action method. Its start method invokes its run method in a new process, and the default run simply calls the passed-in target. Also like threading, a join method waits for child process exit, and a Lock object is provided as one of a handful of process synchronization tools; it's used here to ensure that prints don't overlap among processes on platforms where this might matter (it may not on Windows).

Implementation and usage rules

Technically, to achieve its portability, this module currently works by selecting from platform-specific alternatives:

- On Unix, it forks a new child process and invokes the Process object's run method in the new child.

- On Windows, it spawns a new interpreter, passing the pickled Process object to the new process
and starting "python -ccommand line in the new processwhich runs special python-coded function in this package that reads and unpickles the process and invokes its run method we met pickling briefly in and we will study it further later in this book the implementation is bit more complex than thisand is prone to change over timeof coursebut it' really quite an amazing trick while the portable api generally hides these details from your codeits basic structure can still have subtle impacts on the way you're allowed to use it for instanceon windowsthe main process' logic should generally be nested under __name__ =__main__ test as done here when using this moduleso it can be imported freely by new interpreter without side effects as we'll learn in more detail in unpickling classes and functions requires an import of their enclosing moduleand this is the root of this requirement moreoverwhen globals are accessed in child processes on windowstheir values may not be the same as that in the parent at start timebecause their module will be imported into new process also on windowsall arguments to process must be pickleable because this includes targettargets should be simple functions so they can be pickledthey cannot be bound or unbound object methods and cannot be functions created with lambda see pickle in python' library manual for more on pickleability rulesnearly every object type worksbut callables like functions and classes must be importable--they are pickled by name onlyand later imported to recreate bytecode on windowsobjects with system statesuch as connected socketswon' generally work as arguments to process target eitherbecause they are not pickleable similarlyinstances of custom process subclasses must be pickleable on windows as well this includes all their attribute values objects available in this package ( lock in example - are pickleableand so may be used as both process constructor arguments and subclass attributes ipc objects in this package that appear in later examples like pipe and queue accept only pickleable objectsbecause of their implementation (more on this in the next sectionon unixalthough child process can make use of shared global item created in the parentit' better to pass the object as an argument to the child process' constructorboth for portability to windows and to avoid potential problems if such objects were garbage collected in the parent there are additional rules documented in the library manual in generalthoughif you stick to passing in shared objects to processes and using the synchronization and the multiprocessing module

Let's look next at a few of those tools in action.

IPC Tools: Pipes, Shared Memory, and Queues

While the processes created by this package can always communicate using general system-wide tools like the sockets and fifo files we met earlier, the multiprocessing module also provides portable message passing tools specifically geared to this purpose for the processes it spawns:

- Its Pipe object provides an anonymous pipe, which serves as a connection between two processes. When called, Pipe returns two Connection objects that represent the ends of the pipe. Pipes are bidirectional by default, and allow arbitrary pickleable Python objects to be sent and received. On Unix they are implemented internally today with either a connected socket pair or the os.pipe call we met earlier, and on Windows with named pipes specific to that platform. Much like the Process object described earlier, though, the Pipe object's portable API spares callers from such things.

- Its Value and Array objects implement shared process/thread-safe memory for communication between processes. These calls return scalar and array objects based in the ctypes module and created in shared memory, with access synchronized by default.

- Its Queue object serves as a FIFO list of Python objects, which allows multiple producers and consumers. A Queue is essentially a pipe with extra locking mechanisms to coordinate more arbitrary accesses, and it inherits the pickleability constraints of Pipe.

Because these devices are safe to use across multiple processes, they can often serve to synchronize points of communication and obviate lower-level tools like locks, much the same as the thread queues we met earlier. As usual, a pipe (or a pair of them) may be used to implement a request/reply model. Queues support more flexible models; in fact, a GUI that wishes to avoid the limitations of the GIL might use the multiprocessing module's Process and Queue to spawn long-running tasks that post results, rather than threads. As mentioned, although this may incur extra start-up overhead on some platforms, unlike threads today, tasks coded this way can be as truly parallel as the underlying platform allows.

One constraint worth noting here: this package's pipes (and, by proxy, queues) pickle the objects passed through them, so that they can be reconstructed in the receiving process (as we've seen, on Windows the receiver process may be a fully independent Python interpreter). Because of that, they do not support unpickleable objects; as suggested earlier, this includes some callables like bound methods and lambda functions (see the file multi-badq.py in the book examples package for a demonstration of code that violates this constraint). Objects with system state, such as sockets, may fail as well.

Also keep in mind that because they are pickled, objects transferred through pipes and queues are effectively copied in the receiving process; direct in-place changes to mutable objects' state won't be noticed in the sender. This makes sense if you remember that this package runs independent processes with their own memory spaces; state cannot be as freely shared as in threading, regardless of which IPC tools you use.
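
The following minimal sketch (not one of the book's examples) demonstrates that copy effect: a child's in-place change to a list received over a Pipe is not seen by the parent unless it is shipped back:

    # a minimal sketch: objects sent through multiprocessing pipes are
    # pickled copies; a child's in-place changes don't reach the sender

    from multiprocessing import Process, Pipe

    def changer(pipe):
        data = pipe.recv()            # a pickled copy of parent's list
        data.append('child change')   # in-place change to the copy only
        pipe.send(data)               # must ship it back to be seen

    if __name__ == '__main__':
        parentEnd, childEnd = Pipe()
        data = ['parent data']
        Process(target=changer, args=(childEnd,)).start()
        parentEnd.send(data)
        print(parentEnd.recv())       # ['parent data', 'child change']
        print(data)                   # ['parent data']: original unchanged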

Multiprocessing pipes

To demonstrate the IPC tools listed above, the next three examples implement three flavors of communication between parent and child processes. The first uses a simple shared pipe object to send and receive data between parent and child processes.

Example: PP4E\System\Processes\multi2.py

    """
    use multiprocess anonymous pipes to communicate. Pipe returns 2
    connection objects representing the ends of the pipe: objects are
    sent on one end and received on the other, though pipes are
    bidirectional by default
    """

    import os
    from multiprocessing import Process, Pipe

    def sender(pipe):
        """
        send object to parent on anonymous pipe
        """
        pipe.send(['spam'] + [42, 'eggs'])
        pipe.close()

    def talker(pipe):
        """
        send and receive objects on a pipe
        """
        pipe.send(dict(name='Bob', spam=42))
        reply = pipe.recv()
        print('talker got:', reply)

    if __name__ == '__main__':
        (parentEnd, childEnd) = Pipe()
        Process(target=sender, args=(childEnd,)).start()       # spawn child with pipe
        print('parent got:', parentEnd.recv())                 # receive from child
        parentEnd.close()                                      # or auto-closed on gc

        (parentEnd, childEnd) = Pipe()
        child = Process(target=talker, args=(childEnd,))
        child.start()
        print('parent got:', parentEnd.recv())                 # receive from child
        parentEnd.send({x * 2 for x in 'spam'})                # send to child
        child.join()                                           # wait for child exit
        print('parent exit')

When run on Windows, here's this script's output--one child passes an object to the parent, and the other both sends and receives on the same pipe:

    C:\...\PP4E\System\Processes> multi2.py
    parent got: ['spam', 42, 'eggs']
    parent got: {'name': 'Bob', 'spam': 42}
    talker got: {'ss', 'aa', 'pp', 'mm'}
    parent exit

This module's Pipe objects make communication between two processes portable (and nearly trivial).

Shared memory and globals

The next example uses shared memory to serve as both inputs and outputs of spawned processes. To make this work portably, we must create objects defined by the package and pass them to Process constructors. The last test in this demo ("loop4") probably represents the most common use case for shared memory--that of distributing computation work to multiple parallel processes.

Example: PP4E\System\Processes\multi3.py

    """
    use multiprocess shared memory objects to communicate.
    Passed objects are shared, but globals are not on Windows.
    The last test here reflects a common use case: distributing work.
    """

    import os
    from multiprocessing import Process, Value, Array

    procs = 3                        # number of processes to spawn
    count = 0                        # per-process globals, not shared

    def showdata(label, val, arr):
        """
        print data values in this process
        """
        msg = '%-12s: pid:%4s, global:%s, value:%s, array:%s'
        print(msg % (label, os.getpid(), count, val.value, list(arr)))

    def updater(val, arr):
        """
        communicate via shared memory
        """
        global count
        count += 1                   # global count is not shared
        val.value += 1               # but passed-in objects are
        for i in range(3):
            arr[i] += 1

    if __name__ == '__main__':
        scalar = Value('i', 0)       # shared memory: process/thread safe
        vector = Array('d', procs)   # type codes from ctypes: int, double

        # show start value in parent process
        showdata('parent start', scalar, vector)

        # spawn child, pass in shared memory
        p = Process(target=showdata, args=('child ', scalar, vector))
        p.start(); p.join()

        # pass in shared memory updated in parent, wait for each to finish;
        # each child sees the parent's updates so far for args (but not globals)

        print('\nloop1 (updates in parent, serial children)...')
        for i in range(procs):
            count += 1
            scalar.value += 1
            vector[i] += 1
            p = Process(target=showdata, args=(('process %s' % i), scalar, vector))
            p.start(); p.join()

        # same as prior, but allow children to run in parallel;
        # all see the last iteration's result because all share objects

        print('\nloop2 (updates in parent, parallel children)...')
        ps = []
        for i in range(procs):
            count += 1
            scalar.value += 1
            vector[i] += 1
            p = Process(target=showdata, args=(('process %s' % i), scalar, vector))
            p.start()
            ps.append(p)
        for p in ps: p.join()

        # shared memory updated in the spawned children, wait for each

        print('\nloop3 (updates in serial children)...')
        for i in range(procs):
            p = Process(target=updater, args=(scalar, vector))
            p.start(); p.join()
        showdata('parent temp', scalar, vector)

        # same, but allow children to update in parallel

        ps = []
        print('\nloop4 (updates in parallel children)...')
        for i in range(procs):
            p = Process(target=updater, args=(scalar, vector))
            p.start()
            ps.append(p)
        for p in ps: p.join()

        # global count = 6: in parent process only
scalar= + parent+ in children showdata('parent end'scalarvectorarray[ ]= + parent+ in children the following is this script' output on windows trace through this and the code to see how it runsnotice how the changed value of the global variable is not shared by the spawned processes on windowsbut passed-in value and array objects are the final output line reflects changes made to shared memory in both the parent and spawned children--the array' final values are all because they were incremented twice in the parentand once in each of six spawned childrenthe scalar value similarly reflects changes made by both parent and childbut unlike for threadsthe global is per-process data on windowsc:\pp \system\processesmulti py parent startpid: global: value: array:[ child pid: global: value: array:[ loop (updates in parentserial childrenprocess pid: global: value: array:[ process pid: global: value: array:[ process pid: global: value: array:[ loop (updates in parentparallel childrenprocess pid: global: value: array:[ process pid: global: value: array:[ process pid: global: value: array:[ loop (updates in serial childrenparent temp pid: global: value: array:[ loop (updates in parallel childrenparent end pid: global: value: array:[ if you imagine the last test here run with much larger array and many more parallel childrenyou might begin to sense some of the power of this package for distributing work queues and subclassing finallybesides basic spawning and ipc toolsthe multiprocessing module alsoallows its process class to be subclassed to provide structure and state retention (much like threading threadbut for processesimplements process-safe queue object which may be shared by any number of processes for more general communication needs (much like queue queuebut for processesqueues support more flexible multiple client/server model example - for instancespawns three producer threads to post to shared queue and repeatedly polls for results to appear--in much the same fashion that gui might collect results in parallel with the display itselfthough here the concurrency is achieved with processes instead of threads parallel system tools
""process class can also be subclassed just like threading threadqueue works like queue queue but for cross-processnot cross-thread ""import ostimequeue from multiprocessing import processqueue process-safe shared queue queue is pipe locks/semas class counter(process)label @def __init__(selfstartqueue)self state start self post queue process __init__(selfretain state for use in run def run(self)for in range( )time sleep( self state + print(self label ,self pidself stateself post put([self pidself state]print(self labelself pid'-'run in newprocess on start(self pid is this child' pid stdout file is shared by all if __name__ ='__main__'print('start'os getpid()expected post queue( counter( postq counter( postr counter( postp start() start() start(start processes sharing queue children are producers while expectedtime sleep( trydata post get(block=falseexcept queue emptyprint('no data 'elseprint('posted:'dataexpected - parent consumes data on queue this is essentially like guithough guis often use threads join() join() join(print('finish'os getpid() exitcodemust get before join putter exitcode is child exit status notice in this code howthe time sleep calls in this code' producer simulate long-running tasks all four processes share the same output streamprint calls go the same place and don' overlap badly on windows (as we saw earlierthe multiprocessing module also has shareable lock object to synchronize access if requiredthe multiprocessing module
attribute when runthe output of the main consumer process traces its queue fetchesand the (indentedoutput of spawned child producer processes gives process ids and state :\pp \system\processesmulti py start no data no data posted[ posted[ posted[ posted[ posted[ posted[ posted[ posted[ posted[ finish if you imagine the "@lines here as results of long-running operations and the others as main gui threadthe wide relevance of this package may become more apparent starting independent programs as we learned earlierindependent programs generally communicate with systemglobal tools such as sockets and the fifo files we studied earlier although processes spawned by multiprocessing can leverage these toolstootheir closer relationship affords them the host of additional ipc communication devices provided by this module like threadsmultiprocessing is designed to run function calls in parallelnot to start entirely separate programs directly spawned functions might use tools like os systemos popenand subprocess to start program if such an operation might block the callerbut there' otherwise often no point in starting process that just starts program (you might as well start the program and skip stepin facton windowsmultiprocessing today uses the same process creation call as subprocessso there' little point in starting two processes to run one parallel system tools
tools like the os execcalls we met earlier--by spawning process portably with multiprocessing and overlaying it with new program this waywe start new independent programand effectively work around the lack of the os fork call in standard windows python this generally assumes that the new program doesn' require any resources passed in by the process apiof course (once new program startsit erases that which was running)but it offers portable equivalent to the fork/exec combination on unix furthermoreprograms started this way can still make use of more traditional ipc toolssuch as sockets and fifoswe met earlier in this example - illustrates the technique example - pp \system\processes\multi py "use multiprocessing to start independent programsos fork or notimport os from multiprocessing import process def runprogram(arg)os execlp('python''python''child py'str(arg)if __name__ ='__main__'for in range( )process(target=runprogramargs=( ,)start(print('parent exit'this script starts instances of the child py script we wrote in example - as independent processeswithout waiting for them to finish here' this script at work on windowsafter deleting superfluous system prompt that shows up arbitrarily in the middle of its output (it runs the same on cygwinbut the output is not interleaved there) :\pp \system\processestype child py import ossys print('hello from child'os getpid()sys argv[ ] :\pp \system\processesmulti py parent exit hello from child hello from child hello from child hello from child hello from child this technique isn' possible with threadsbecause all threads run in the same processoverlaying it with new program would kill all its threads though this is unlikely to be as fast as fork/exec combination on unixit at least provides similar and portable functionality on windows when required the multiprocessing module
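To recap the pattern in a cleaner form, here is a minimal sketch of the launch-then-overlay technique just shown; it assumes a child.py script like the one listed above exists in the current directory, and that python is on your system path (sys.executable could be substituted for the full interpreter path):

# a minimal sketch of the launch-then-overlay technique described above:
# spawn a process portably with multiprocessing, then replace it with an
# independent program via os.execlp; assumes child.py exists and 'python'
# is on the system path
import os
from multiprocessing import Process

def runprogram(arg):
    os.execlp('python', 'python', 'child.py', str(arg))   # overlay new program

if __name__ == '__main__':
    for i in range(5):
        Process(target=runprogram, args=(i,)).start()     # don't wait for them
    print('parent exit')

Because each spawned process immediately overlays itself with the new program, the parent neither shares resources with nor waits on the children; they run as fully independent programs.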
finallymultiprocessing provides many more tools than these examples deployincluding conditioneventand semaphore synchronization toolsand local and remote managers that implement servers for shared object for instanceexample - demonstrates its support for pools--spawned children that work in concert on given task example - pp \system\processes\multi py "plus much moreprocess poolsmanagerslocksconditionimport os from multiprocessing import pool def powers( )#print(os getpid()return * enable to watch children if __name__ ='__main__'workers pool(processes= results workers map(powers[ ]* print(results[: ]print(results[- :]results workers map(powersrange( )print(results[: ]print(results[- :]when runpython arranges to delegate portions of the task to workers run in parallelc:\pp \system\processesmulti py [ [ [ [ and little less to be fairbesides such additional features and toolsmultiprocessing also comes with additional constraints beyond those we've already covered (pickleabilitymutable stateand so onfor exampleconsider the following sort of codedef action(arg arg )print(arg arg if __name__ ='__main__'process(target=actionargs=('spam''eggs')start(shell waits for child this works as expectedbut if we change the last line to the following it fails on windows because lambdas are not pickleable (reallynot importable)process(target=(lambdaaction('spam''eggs'))start( parallel system tools fails!-not pickleable
we'll use often for callbacks in the gui part of this book moreoverthis differs from the threading module that is the model for this package--calls like the following which work for threads must be translated to callable and argumentsthreading thread(target=(lambdaaction( ))start(but lambdas work here converselysome behavior of the threading module is mimicked by multiprocessingwhether you wish it did or not because programs using this package wait for child processes to end by defaultwe must mark processes as daemon if we don' want to block the shell where the following sort of code is run (technicallyparents attempt to terminate daemonic children on exitwhich means that the program can exit when only daemonic children remainmuch like threading)def action(arg arg )print(arg arg time sleep( normally prevents the parent from exiting if __name__ ='__main__' process(target=actionargs=('spam''eggs') daemon true start(don' wait for it there' more on some of these issues in the python library manualthey are not showstoppers by any stretchbut special cases and potential pitfalls to some we'll revisit the lambda and daemon issues in more realistic context in where we'll use multiprocessing to launch gui demos independently why multiprocessingthe conclusion as this section' examples suggestmultiprocessing provides powerful alternative which aims to combine the portability and much of the utility of threads with the fully parallel potential of processes and offers additional solutions to ipcexit statusand other parallel processing goals hopefullythis section has also given you better understanding of this module' tradeoffs discussed at its beginning in particularits separate process model precludes the freely shared mutable state of threadsand bound methods and lambdas are prohibited by both the pickleability requirements of its ipc pipes and queuesas well as its process action implementation on windows moreoverits requirement of pickleability for process arguments on windows also precludes it as an option for conversing with clients in socket servers portably while not replacement for threading in all applicationsthoughmultiprocessing offers compelling solutions for many especially for parallel-programming tasks which can be designed to avoid its limitationsthis module can offer both performance and portability that python' more direct multitasking tools cannot the multiprocessing module
treatment of this module in this book for more detailsrefer to the python library manual herewe turn next to handful of additional program launching tools and wrap up of this other ways to start programs we've seen variety of ways to launch programs in this book so far--from the os forkexec combination on unixto portable shell command-line launchers like os systemos popenand subprocessto the portable multiprocessing module options of the last section there are still other ways to start programs in the python standard librarysome of which are more platform neutral or obscure than others this section wraps up this with quick tour through this set the os spawn calls the os spawnv and os spawnve calls were originally introduced to launch programs on windowsmuch like fork/exec call combination on unix-like platforms todaythese calls work on both windows and unix-like systemsand additional variants have been added to parrot os exec in recent versions of pythonthe portable subprocess module has started to supersede these calls in factpython' library manual includes note stating that this module has more powerful and equivalent tools and should be preferred to os spawn calls moreoverthe newer multiprocessing module can achieve similarly portable results today when combined with os exec callsas we saw earlier stillthe os spawn calls continue to work as advertised and may appear in python code you encounter the os spawn family of calls execute program named by command line in new processon both windows and unix-like systems in basic operationthey are similar to the fork/exec call combination on unix and can be used as alternatives to the system and popen calls we've already learned in the following interactionfor instancewe start python program with command line in two traditional ways (the second also reads its output) :\pp \system\processespython print(open('makewords py'read()print('spam'print('eggs'print('ham'import os os system('python makewords py'spam eggs ham parallel system tools
print(resultspam eggs ham the equivalent os spawn calls achieve the same effectwith slightly more complex call signature that provides more control over the way the program is launchedos spawnv(os p_waitr' :\python \python'('python''makewords py')spam eggs ham os spawnl(os p_nowaitr' :\python \python''python''makewords py' spam eggs ham the spawn calls are also much like forking programs in unix they don' actually copy the calling process (so shared descriptor operations won' work)but they can be used to start program running completely independent of the calling programeven on windows the script in example - makes the similarity to unix programming patterns more obvious it launches program with fork/exec combination on unix-like platforms (including cygwin)or an os spawnv call on windows example - pp \system\processes\spawnv py ""start up copies of child py running in paralleluse spawnv to launch program on windows (like fork+exec)p_overlay replacesp_detach makes child stdout go nowhereor use portable subprocess or multiprocessing options today""import ossys for in range( )if sys platform[: ='win'pypath sys executable os spawnv(os p_nowaitpypath('python''child py'str( ))elsepid os fork(if pid ! print('process % spawnedpidelseos execlp('python''python''child py'str( )print('main process exiting 'to make sense of these examplesyou have to understand the arguments being passed to the spawn calls in this scriptwe call os spawnv with process mode flagthe full directory path to the python interpreterand tuple of strings representing the shell command line with which to start new program the path to the python interpreter other ways to start programs
process mode flag is taken from these predefined valuesos p_nowait and os p_nowaito the spawn functions will return as soon as the new process has been createdwith the process id as the return value available on unix and windows os p_wait the spawn functions will not return until the new process has run to completion and will return the exit code of the process if the run is successful or "-signalif signal kills the process available on unix and windows os p_detach and os p_overlay p_detach is similar to p_nowaitbut the new process is detached from the console of the calling process if p_overlay is usedthe current program will be replaced (much like os execavailable on windows in factthere are eight different calls in the spawn familywhich all start program but vary slightly in their call signatures in their namesan "lmeans you list arguments individually"pmeans the executable file is looked up on the system pathand "emeans dictionary is passed in to provide the shelled environment of the spawned programthe os spawnve callfor exampleworks the same way as os spawnv but accepts an extra fourth dictionary argument to specify different shell environment for the spawned program (whichby defaultinherits all of the parent' settings)os spawnl(modepathos spawnle(modepathenvos spawnlp(modefileos spawnlpe(modefileenvos spawnv(modepathargsos spawnve(modepathargsenvos spawnvp(modefileargsos spawnvpe(modefileargsenvunix only unix only unix only unix only because these calls mimic the names and call signatures of the os exec variantssee earlier in this for more details on the differences between these call forms unlike the os exec callsonly half of the os spawn forms--those without system path checking (and hence without "pin their names)--are currently implemented on windows all the process mode flags are supported on windowsbut detach and overlay modes are not available on unix because this sort of detail may be prone to changeto verify which are presentbe sure to see the library manual or run dir builtin function call on the os module after an import here is the script in example - at work on windowsspawning independent copies of the child py python program we met earlier in this :\pp \system\processestype child py import ossys print('hello from child'os getpid()sys argv[ ] :\pp \system\processespython spawnv py parallel system tools
hello from child - hello from child - hello from child - main process exiting hello from child - hello from child - hello from child - hello from child - hello from child - hello from child - notice that the copies print their output in random orderand the parent program exits before all children doall of these programs are really running in parallel on windows also observe that the child program' output shows up in the console box where spawnv py was runwhen using p_nowaitstandard output comes to the parent' consolebut it seems to go nowhere when using p_detach (which is most likely feature when spawning gui programsbut having shown you this calli need to again point out that both the subprocess and multiprocessing modules offer more portable alternatives for spawning programs with command lines today in factunless os spawn calls provide unique behavior you can' live without ( control of shell window pop ups on windows)the platform-specific alternatives code of example - can be replaced altogether with the portable multi processing code in example - the os startfile call on windows although os spawn calls may be largely superfluous todaythere are other tools that can still make strong case for themselves for instancethe os system call can be used on windows to launch dos start commandwhich opens ( runsa file independently based on its windows filename associationsas though it were clicked os startfile makes this even simpler in recent python releasesand it can avoid blocking its callerunlike some other tools using the dos start command to understand whyfirst you need to know how the dos start command works in general roughlya dos command line of the form start command works as if command were typed in the windows run dialog box available in the start button menu if command is filenameit is opened exactly as if its name was double-clicked in the windows explorer file selector gui for instancethe following three dos commands automatically start internet explorermy registered image viewer programand my sound media player program on the files named in the commands windows simply opens the file with whatever program is associated to handle filenames of that form moreoverall three of these programs run independently of the dos console box where the command is typedother ways to start programs
:\pp \system\mediastart ora-lp jpg :\pp \system\mediastart sousa au because the start command can run any file and command linethere is no reason it cannot also be used to start an independently running python programc:\pp \system\processesstart child py this works because python is registered to open names ending in py when it is installed the script child py is launched independently of the dos console window even though we didn' provide the name or path of the python interpreter program because child py simply prints message and exitsthoughthe result isn' exactly satisfyinga new dos window pops up to serve as the script' standard outputand it immediately goes away when the child exits to do betteradd an input call at the bottom of the program file to wait for key press before exitingc:\pp \system\processestype child-wait py import ossys print('hello from child'os getpid()sys argv[ ]input("press "don' flash on windows :\pp \system\processesstart child-wait py now the child' dos window pops up and stays up after the start command has returned pressing the enter key in the pop-up dos window makes it go away using start in python scripts since we know that python' os system and os popen can be called by script to run any command line that can be typed at dos shell promptwe can also start independently running programs from python script by simply running dos start command line for instancec:\pp \system\mediapython import os cmd 'start lp -preface-preview htmlos system(cmd start ie browser runs independent the python os system calls here start whatever web page browser is registered on your machine to open html files (unless these programs are already runningthe launched programs run completely independent of the python session--when running dos start commandos system does not wait for the spawned program to exit parallel system tools
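A small sketch makes the blocking difference concrete; this is Windows-only, and spam.txt is an illustrative file assumed to exist in the current directory:

# demonstrating the non-blocking behavior of DOS "start" noted above
# (Windows only; spam.txt is an illustrative file assumed to exist)
import os

os.system('start spam.txt')      # returns immediately: file opens independently
os.system('notepad spam.txt')    # blocks here until Notepad is closed
print('reached only after the second command exits')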
in factstart is so useful that recent python releases also include an os startfile callwhich is essentially the same as spawning dos start command with os system and works as though the named file were double-clicked the following callsfor instancehave similar effectos startfile('lp-code-readme txt'os system('start lp-code-readme txt'both pop up the text file in notepad on my windows computer unlike the second of these callsthoughos startfile provides no option to wait for the application to close (the dos start command' /wait option doesand no way to retrieve the application' exit status (returned from os systemon recent versions of windowsthe following has similar effecttoobecause the registry is used at the command line (though this form pauses until the file' viewer is closed--like using start /wait)os system('lp-code-readme txt''startis optional today this is convenient way to open arbitrary document and media filesbut keep in mind that the os startfile call works only on windowsbecause it uses the windows registry to know how to open file in factthere are even more obscure and nonportable ways to launch programsincluding windows-specific options in the pywin packagewhich we'll finesse here if you want to be more platform neutralconsider using one of the other many program launcher tools we've seensuch as os popen or os spawnv or better yetwrite module to hide the details--as the next and final section demonstrates portable program-launch framework with all of these different ways to start programs on different platformsit can be difficult to remember what tools to use in given situation moreoversome of these tools are called in ways that are complicated and thus easy to forget although modules like subprocess and multiprocessing offer fully portable options todayother tools sometimes provide more specific behavior that' better on given platformshell window pop ups on windowsfor exampleare often better suppressed write scripts that need to launch python programs often enough that eventually wrote module to try to hide most of the underlying details by encapsulating the details in this modulei' free to change them to use new tools in the future without breaking code that relies on them while was at iti made this module smart enough to automatically pick "bestlaunch scheme based on the underlying platform laziness is the mother of many useful module portable program-launch framework
it implements an abstract superclasslaunchmodewhich defines what it means to start python program named by shell command linebut it doesn' define how insteadits subclasses provide run method that actually starts python program according to given scheme and (optionallydefine an announce method to display program' name at startup time example - pp \launchmodes py ""##################################################################################launch python programs with command lines and reusable launcher scheme classesauto inserts "pythonand/or path to python executable at front of command linesome of this module may assume 'pythonis on your system path (see launcher py)subprocess module would work toobut os popen(uses it internallyand the goal is to start program running independently herenot to connect to its streamsmultiprocessing module also is an optionbut this is command-linesnot functionsdoesn' make sense to start process which would just do one of the options herenew in this editionruns script filename path through normpath(to change any to for windows tools where requiredfix is inherited by pyedit and otherson windowsis generally allowed for file opensbut not by all launcher tools##################################################################################""import sysos pyfile (sys platform[: ='winand 'python exe'or 'pythonpypath sys executable use sys in newer pys def fixwindowspath(cmdline)""change all to in script filename path at front of cmdlineused only by classes which run tools that require this on windowson other platformsthis does not hurt ( os system on unix)""splitline cmdline lstrip(split('split on spaces fixedpath os path normpath(splitline[ ]fix forward slashes return join([fixedpathsplitline[ :]put it back together class launchmode""on call to instanceannounce label and run commandsubclasses format command lines as required in run()command should begin with name of the python script file to runand not with "pythonor its full path""def __init__(selflabelcommand)self what label self where command def __call__(self)on callexbutton press callback self announce(self whatself run(self wheresubclasses must define run(def announce(selftext)subclasses may redefine announce( parallel system tools
methods instead of if/elif logic def run(selfcmdline)assert false'run must be definedclass system(launchmode)""run python script named in shell command line caveatmay block callerunless added on unix ""def run(selfcmdline)cmdline fixwindowspath(cmdlineos system('% % (pypathcmdline)class popen(launchmode)""run shell command line in new process caveatmay block callersince pipe closed too soon ""def run(selfcmdline)cmdline fixwindowspath(cmdlineos popen(pypath cmdlineassume nothing to be read class fork(launchmode)""run command in explicitly created new process for unix-like systems onlyincluding cygwin ""def run(selfcmdline)assert hasattr(os'fork'cmdline cmdline split(if os fork(= os execvp(pypath[pyfilecmdlineconvert string to list start new child process run new program in child class start(launchmode)""run command independent of caller for windows onlyuses filename associations ""def run(selfcmdline)assert sys platform[: ='wincmdline fixwindowspath(cmdlineos startfile(cmdlineclass startargs(launchmode)""for windows onlyargs may require real start forward slashes are okay here ""def run(selfcmdline)assert sys platform[: ='winos system('start cmdlinemay create pop-up window class spawn(launchmode)""run python in new process independent of caller portable program-launch framework
forward slashes are okay here ""def run(selfcmdline)os spawnv(os p_detachpypath(pyfilecmdline)class top_level(launchmode)""run in new windowsame process tbdrequires gui class info too ""def run(selfcmdline)assert false'sorry mode not yet implementedpick "bestlauncher for this platform may need to specialize the choice elsewhere if sys platform[: ='win'portablelauncher spawn elseportablelauncher fork class quietportablelauncher(portablelauncher)def announce(selftext)pass def selftest()file 'echo pyinput('default mode 'launcher portablelauncher(filefilelauncher(input('system mode 'system(filefile)(if sys platform[: ='win'input('dos start mode 'startargs(filefile)(no block blocks no block if __name__ ='__main__'selftest(near the end of the filethe module picks default class based on the sys platform attributeportablelauncher is set to class that uses spawnv on windows and one that uses the fork/exec combination elsewherein recent pythonswe could probably just use the spawnv scheme on most platformsbut the alternatives in this module are used in additional contexts if you import this module and always use its portable launcher attributeyou can forget many of the platform-specific details enumerated in this to run python programsimply import the portablelauncher classmake an instance by passing label and command line (without leading "pythonword)and then call parallel system tools
method--so that the classes in this module can also be used to generate callback handlers in tkinter-based guis as we'll see in the upcoming button-presses in tkinter invoke callable object with no argumentsby registering portablelauncher instance to handle the press eventwe can automatically start new program from another program' gui gui might associate launcher with gui' button press with code like thisbutton(roottext=namecommand=portablelauncher(namecommandline)when run standalonethis module' selftest function is invoked as usual as codedsystem blocks the caller until the program exitsbut portablelauncher (reallyspawn or forkand start do notc:\pp etype echo py print('spam'input('press enter' :\pp epython launchmodes py default mode echo py system mode echo py spam press enter dos start mode echo py as more practical applicationsthis file is also used in to launch gui dialog demos independentlyand again in number of ' examplesincluding pydemos and pygadgets--launcher scripts designed to run major examples in this book in portable fashionwhich live at the top of this book' examples distribution directory because these launcher scripts simply import portablelauncher and register instances to respond to gui eventsthey run on both windows and unix unchanged (tkinter' portability helpstooof coursethe pygadgets script even customizes portablelauncher to update gui label at start timeclass launcher(launchmodes portablelauncher)def announce(selftext)info config(text=textuse wrapped launcher class customize to set gui label we'll explore these two client scriptsand otherssuch as ' pyedit after we start coding guis in part iii partly because of its role in pyeditthis edition extends this module to automatically replace forward slashes with backward slashes in the script' file path name pyedit uses forward slashes in some filenames because they are allowed in file opens on windowsbut some windows launcher tools require the backslash form instead specificallysystempopenand startfile in os require backslashesbut spawnv does not pyedit and others inherit the new pathname fix of fixwindowspath here simply by importing and using this module' classespyedit portable program-launch framework
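For reference, a typical non-GUI client of this module needs only a few lines; the package path, label, and command line below are illustrative, not taken from the book's actual clients:

# a minimal client sketch for the launcher classes; the import path and
# command line are illustrative assumptions, not the book's client code
from PP4E.launchmodes import PortableLauncher

demo = PortableLauncher('MyDemo', 'mydemo.py arg1')   # label + command line
demo()                                                # announce label, run it

Because the instance is a callable object, the same line of code works both as a direct launch and as a GUI event handler registration, which is the point of the class-based design.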
also notice how some of the classes in this example use the sys executable path string to obtain the python executable' full path name this is partly due to their role in userfriendly demo launchers in prior versions that predated sys executablethese classes instead called two functions exported by module named launcher py to find suitable python executableregardless of whether the user had added its directory to the system path variable' setting this search is no longer required since 'll describe this module' other roles in the next and since this search has been largely precluded by python' perpetual pandering to programmersprofessional proclivitiesi'll postpone any pointless pedagogical presentation here (period other system tools coverage that concludes our tour of python system tools in this and the prior three we've met most of the commonly used system tools in the python library along the waywe've also learned how to use them to do useful things such as start programsprocess directoriesand so on the next wraps up this domain by using the tools we've just met to implement scripts that do useful and more realistic system-level work still other system-related tools in python appear later in this text for instancesocketsused to communicate with other programs and networks and introduced briefly hereshow up again in in common gui use case and are covered in full in select callsused to multiplex among tasksare also introduced in as way to implement servers file locking with os openintroduced in is discussed again in conjunction with later examples regular expressionsstring pattern matching used by many text processing tools in the system administration domaindon' appear until moreoverthings like forks and threads are used extensively in the internet scripting see the discussion of threaded guis in and the server implementations in the ftp client gui in and the pymailgui program in along the waywe'll also meet higher-level python modulessuch as socketserverwhich implement fork and thread-based socket server code for us in factmany of the last four tools will pop up constantly in later examples in this book--about what one would expect of general-purpose portable libraries parallel system tools
Most other tools in the Python library don't appear in this book at all. With hundreds of library modules, more appearing all the time, and even more in the third-party domain, Python book authors have to pick and choose their topics frugally. As always, be sure to browse the Python library manuals and the Web early and often in your Python career.
Complete System Programs

"The Greps of Wrath"

This chapter wraps up our look at the system interfaces domain in Python by presenting a collection of larger Python scripts that do real systems work: comparing and copying directory trees, splitting files, searching files and directories, testing other programs, configuring launched programs' shell environments, and so on. The examples here are Python system utility programs that illustrate typical tasks and techniques in this domain and focus on applying built-in tools, such as file and directory tree processing.

Although the main point of this case-study chapter is to give you a feel for realistic scripts in action, the size of these examples also gives us an opportunity to see Python's support for development paradigms like object-oriented programming (OOP) and reuse at work. It's really only in the context of nontrivial programs such as the ones we'll meet here that such tools begin to bear tangible fruit. This chapter also emphasizes the "why" of system tools, not just the "how"; along the way, I'll point out real-world needs met by the examples we'll study, to help you put the details in context.

One note up front: this chapter moves quickly, and a few of its examples are largely listed just for independent study. Because all the scripts here are heavily documented and use Python system tools described in the preceding chapters, I won't go through all the code in exhaustive detail. You should read the source code listings and experiment with these programs on your own computer to get a better feel for how to combine system interfaces to accomplish realistic tasks. All are available in source code form in the book's examples distribution, and most work on all major platforms.

I should also mention that most of these are programs I have really used, not examples written just for this book. They were coded over a period of years and perform widely differing tasks, so there is no obvious common thread to connect the dots here, other than need. On the other hand, they help explain why system tools are useful in the first place, demonstrate larger development concepts that simpler examples cannot, and bear collective witness to the simplicity and portability of automating system tasks with Python. Once you've mastered the basics, you'll wish you had done so sooner.
A Quick Game of "Find the Biggest Python File"

Quick: what's the biggest Python source file on your computer? This was the query innocently posed by a student in one of my Python classes. Because I didn't know either, it became an official exercise in subsequent classes, and it provides a good example of ways to apply Python system tools for a realistic purpose in this book.

Really, the query is a bit vague, because its scope is unclear. Do we mean the largest Python file in a directory, in a full directory tree, in the standard library, on the module import search path, or on your entire hard drive? Different scopes imply different solutions.

Scanning the Standard Library Directory

The example below is a first-cut solution that looks for the biggest Python file in a single directory: a limited scope, but enough to get started.

Example: PP4E\System\Filetools\bigpy-dir.py

"""
Find the largest Python source file in a single directory.
Search Windows Python source lib, unless a dir command-line arg is given.
"""

import os, glob, sys
dirname = r'C:\Python31\Lib' if len(sys.argv) == 1 else sys.argv[1]

allsizes = []
allpy = glob.glob(dirname + os.sep + '*.py')
for filename in allpy:
    filesize = os.path.getsize(filename)
    allsizes.append((filesize, filename))

allsizes.sort()
print(allsizes[:2])
print(allsizes[-2:])

This script uses the glob module to run through a directory's files and detects the largest by storing sizes and names on a list that is sorted at the end; because size appears first in the list's tuples, it will dominate the ascending value sort, and the largest percolates to the end of the list. We could instead keep track of the currently largest as we go, but the list scheme is more flexible.

When run, this script scans the Python standard library's source directory on Windows, unless you pass a different directory on the command line, and it prints both the two smallest and largest files it finds:

C:\...\PP4E\System\Filetools> bigpy-dir.py
[(…, 'C:\\Python31\\Lib\\build_class.py'), (…, 'C:\\Python31\\Lib\\struct.py')]
[(…, 'C:\\Python31\\Lib\\turtle.py'), (…, 'C:\\Python31\\Lib\\decimal.py')]

C:\...\PP4E\System\Filetools> bigpy-dir.py …
[(…, '…\\__init__.py'), (…, '…\\bigpy-dir.py')]

C:\...\PP4E\System\Filetools> bigpy-dir.py …
[(…, '…\\__init__.py'), (…, '…\\testargv.py')]
[(…, '…\\testargv.py'), (…, '…\\more.py')]
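As a side note on the design choice here: if we only ever need the extremes, the standard library's heapq module can pick them off without sorting the full list. A brief sketch, assuming the same allsizes list of (size, filename) tuples built above:

# an alternative to the full sort above, assuming the same allsizes list
# of (size, filename) tuples; heapq extracts extremes without sorting all
import heapq

print(heapq.nsmallest(2, allsizes))   # two smallest (size, filename) tuples
print(heapq.nlargest(2, allsizes))    # two largest, without a full sort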
Scanning the Standard Library Tree

The prior section's solution works, but it's obviously only a partial answer: Python files are usually located in more than one directory. Even within the standard library, there are many subdirectories for module packages, and they may be arbitrarily nested. We really need to traverse an entire directory tree. Moreover, the first output above is difficult to read; Python's pprint (for "pretty print") module can help here. The example below puts these extensions into code.

Example: PP4E\System\Filetools\bigpy-tree.py

"""
Find the largest Python source file in an entire directory tree.
Search the Python source lib; use pprint to display results nicely.
"""

import sys, os, pprint
trace = False

if sys.platform.startswith('win'):
    dirname = r'C:\Python31\Lib'                 # Windows
else:
    dirname = '/usr/lib/python'                  # Unix, Linux, Cygwin

allsizes = []
for (thisDir, subsHere, filesHere) in os.walk(dirname):
    if trace: print(thisDir)
    for filename in filesHere:
        if filename.endswith('.py'):
            if trace: print('...', filename)
            fullname = os.path.join(thisDir, filename)
            fullsize = os.path.getsize(fullname)
            allsizes.append((fullsize, fullname))

allsizes.sort()
pprint.pprint(allsizes[:2])
pprint.pprint(allsizes[-2:])

When run, this new version uses os.walk to search an entire tree of directories for the largest Python source file. Change this script's trace variable if you want to track its progress through the tree. As coded, it searches the Python standard library's source tree, tailored for Windows and Unix-like locations:

C:\...\PP4E\System\Filetools> bigpy-tree.py
[(…, 'C:\\Python31\\Lib\\build_class.py'),
 (…, 'C:\\Python31\\Lib\\email\\mime\\__init__.py')]
[(…, 'C:\\Python31\\Lib\\decimal.py'),
 (…, 'C:\\Python31\\Lib\\pydoc_data\\topics.py')]
sure enough--the prior section' script found smallest and largest files in subdirectories while searching python' entire standard library tree this way is more inclusiveit' still incompletethere may be additional modules installed elsewhere on your computerwhich are accessible from the module import search path but outside python' source tree to be more exhaustivewe could instead essentially perform the same tree searchbut for every directory on the module import search path example - adds this extension to include every importable python-coded module on your computer-located both on the path directly and nested in package directory trees example - pp \system\filetools\bigpy-path py ""find the largest python source file on the module import search path skip already-visited directoriesnormalize path and case so they will match properlyand include line counts in pprinted result it' not enough to use os environ['pythonpath']this is subset of sys path ""import sysospprint trace =dirs =+files visited {allsizes [for srcdir in sys pathfor (thisdirsubsherefilesherein os walk(srcdir)if trace print(thisdirthisdir os path normpath(thisdirfixcase os path normcase(thisdirif fixcase in visitedcontinue elsevisited[fixcasetrue for filename in fileshereif filename endswith(py')if trace print('filenamepypath os path join(thisdirfilenametrypysize os path getsize(pypathexcept os errorprint('skipping'pypathsys exc_info()[ ]elsepylines len(open(pypath'rb'readlines()allsizes append((pysizepylinespypath)print('by size 'allsizes sort(pprint pprint(allsizes[: ]pprint pprint(allsizes[- :]print('by lines 'allsizes sort(key=lambda xx[ ] complete system programs
pprint pprint(allsizes[- :]when runthis script marches down the module import path andfor each valid directory it containsattempts to search the entire tree rooted there in factit nests loops three deep--for items on the pathdirectories in the item' treeand files in the directory because the module path may contain directories named in arbitrary waysalong the way this script must take care tonormalize directory paths--fixing up slashes and dots to map directories to common form normalize directory name case--converting to lowercase on case-insensitive windowsso that same names match by string equalitybut leaving case unchanged on unixwhere it matters detect repeats to avoid visiting the same directory twice (the same directory might be reached from more than one entry on sys pathskip any file-like item in the tree for which os path getsize fails (by default os walk itself silently ignores things it cannot treat as directoriesboth at the top of and within the treeavoid potential unicode decoding errors in file content by opening files in binary mode in order to count their lines text mode requires decodable contentand some files in python ' library tree cannot be decoded properly on windows catching unicode exceptions with try statement would avoid program exitstoobut might skip candidate files this version also adds line countsthis might add significant run time to this script toobut it' useful metric to report in factthis version uses this value as sort key to report the three largest and smallest files by line counts too--this may differ from results based upon raw file size here' the script in action in python on my windows machinesince these results depend on platforminstalled extensionsand path settingsyour sys path and largest and smallest files may varyc:\pp \system\filetoolsbigpy-path py by size [( ' :\\python \\lib\\build_class py')( ' :\\python \\lib\\email\\mime\\__init__ py')( ' :\\python \\lib\\email\\test\\__init__ py')[( ' :\\python \\lib\\tkinter\\__init__ py')( ' :\\python \\lib\\decimal py')( ' :\\python \\lib\\pydoc_data\\topics py')by lines [( ' :\\python \\lib\\build_class py')( ' :\\python \\lib\\email\\mime\\__init__ py')( ' :\\python \\lib\\email\\test\\__init__ py')[( ' :\\python \\lib\\turtle py')( ' :\\python \\lib\\test\\test_descr py')( ' :\\python \\lib\\decimal py') quick game of "find the biggest python file
tree as you can seethe results for largest files differ when viewed by size and lines- disparity which we'll probably have to hash out in our next requirements meeting scanning the entire machine finallyalthough searching trees rooted in the module import path normally includes every python source file you can import on your computerit' still not complete technicallythis approach checks only modulespython source files which are toplevel scripts run directly do not need to be included in the module path moreoverthe module search path may be manually changed by some scripts dynamically at runtime (for exampleby direct sys path updates in scripts that run on web serversto include additional directories that example - won' catch ultimatelyfinding the largest source file on your computer requires searching your entire drive-- feat which our tree searcher in example - almost supportsif we generalize it to accept the root directory name as an argument and add some of the bells and whistles of the path searcher version (we really want to avoid visiting the same directory twice if we're scanning an entire machineand we might as well skip errors and check line-based sizes if we're investing the timeexample - implements such general tree scansoutfitted for the heavier lifting required for scanning drives example - pp \system\filetools\bigext-tree py ""find the largest file of given type in an arbitrary directory tree avoid repeat pathscatch errorsadd tracing and line count size also uses setsfile iterators and generator to avoid loading entire fileand attempts to work around undecodable dir/file name prints ""import ospprint from sys import argvexc_info trace dirnameextname os curdirpyif len(argv dirname argv[ if len(argv extname argv[ if len(argv trace int(argv[ ] =off =dirs =+files default is py files in cwd exc:\ :\python \lib expywtxt expy def tryprint(arg)tryprint(argexcept unicodeencodeerrorprint(arg encode()unprintable filenametry raw byte string visited set(allsizes [for (thisdirsubsherefilesherein os walk(dirname)if tracetryprint(thisdirthisdir os path normpath(thisdir complete system programs
if fixname in visitedif tracetryprint('skipping thisdirelsevisited add(fixnamefor filename in fileshereif filename endswith(extname)if trace tryprint('+++filenamefullname os path join(thisdirfilenametrybytesize os path getsize(fullnamelinesize sum(+ for line in open(fullname'rb')except exceptionprint('error'exc_info()[ ]elseallsizes append((bytesizelinesizefullname)for (titlekeyin [('bytes' )('lines' )]print('\nby % titleallsizes sort(key=lambda xx[key]pprint pprint(allsizes[: ]pprint pprint(allsizes[- :]unlike the prior tree versionthis one allows us to search in specific directoriesand for specific extensions the default is to simply search the current working directory for python filesc:\pp \system\filetoolsbigext-tree py by bytes [( \\__init__ py')( \\bigpy-dir py')( \\bigpy-tree py')[( \\join py')( \\bigext-tree py')( \\split py')by lines [( \\__init__ py')( \\bigpy-dir py')( \\bigpy-tree py')[( \\join py')( \\bigext-tree py')( \\split py')for more custom workwe can pass in directory nameextension typeand trace level on the command-line now (trace level disables tracingand the defaultshows directories visited along the way) :\pp \system\filetoolsbigext-tree py py by bytes [( \\__init__ py')( \\filetools\\__init__ py') quick game of "find the biggest python file
[( \\processes\\multi py')( \\filetools\\split py')( \\tester\\tester py')by lines [( \\__init__ py')( \\filetools\\__init__ py')( \\streams\\hello-out py')[( \\filetools\\split py')( \\processes\\multi py')( \\tester\\tester py')this script also lets us scan for different file typeshere it is picking out the smallest and largest text file from one level up (at the time ran this scriptat least) :\pp \system\filetoolsbigext-tree py txt \environment \filetools \processes \streams \tester \tester\args \tester\errors \tester\inputs \tester\outputs \tester\scripts \tester\xxold \threads by bytes [( \\streams\\input txt')( \\streams\\hello-in txt')( \\streams\\data txt')[( \\streams\\output txt')( \\tester\\xxold\\readme txt txt')( \\filetools\\temp txt')by lines [( \\streams\\hello-in txt')( \\spam txt')( \\streams\\input txt')[( \\streams\\data txt')( \\streams\\output txt')( \\filetools\\temp txt')and nowto search your entire systemsimply pass in your machine' root directory name (use instead of :on unix-like machines)along with an optional file extension type py is just the default nowthe winner is (pleaseno wagering) :\pp \dev\examples\pp \system\filetoolsbigext-tree py : : :\$recycle bin :\$recycle bin\ --- :\cygwin complete system programs
:\cygwin\cygdrive :\cygwin\dev :\cygwin\dev\mqueue :\cygwin\dev\shm :\cygwin\etc many more lines omitted by bytes [( ' :\\cygwin\\\python \\python-\\lib\\build_class py')( ' :\\cygwin\\\python \\python-\\lib\\email\\mime\\__init__ py')( ' :\\cygwin\\\python \\python-\\lib\\email\\test\\__init__ py')[( ' :\\python \\lib\\pydoc_data\\topics py')( ' :\\\install\\source\\python- \\lib\\pydoc_topics py')( ' :\\python \\lib\\pydoc_topics py')by lines [( ' :\\cygwin\\\python \\python-\\lib\\build_class py')( ' :\\cygwin\\\python \\python-\\lib\\email\\mime\\__init__ py')( ' :\\cygwin\\\python \\python-\\lib\\email\\test\\__init__ py')[( ' :\\install\\source\\python- \\lib\\decimal py')( ' :\\cygwin\\\python \\python-\\lib\\decimal py')( ' :\\python \\lib\\decimal py')the script' trace logic is preset to allow you to monitor its directory progress 've shortened some directory names to protect the innocent here (and to fit on this pagethis command may take long time to finish on your computer--on my sadly underpowered windows netbookit took minutes to scan solid state drive with some of data filesand directories when the system was lightly loaded ( minutes when not tracing directory namesbut half an hour when many other applications were runningneverthelessit provides the most exhaustive solution to the original query of all our attempts this is also as complete solution as we have space for in this book for more funconsider that you may need to scan more than one driveand some python source files may also appear in zip archivesboth on the module path or not (os walk silently ignores zip files in example - they might also be named in other ways--with pyw extensions to suppress shell pop ups on windowsand with arbitrary extensions for some top-level scripts in facttop-level scripts might have no filename extension at alleven though they are python source files and while they're generally not python filessome importable modules may also appear in frozen binaries or be statically linked into the python executable in the interest of spacewe'll leave such higher resolution (and potentially intractable!search extensions as suggested exercises printing unicode filenames one fine point before we move onnotice the seemingly superfluous exception handling in example - ' tryprint function when first tried to scan an entire drive as shown in the preceding sectionthis script died on unicode encoding error while trying to quick game of "find the biggest python file
error entirely this demonstrates subtle but pragmatically important issuepython ' unicode orientation extends to filenameseven if they are just printed as we learned in because filenames may contain arbitrary textos listdir returns filenames in two different ways--we get back decoded unicode strings when we pass in normal str argumentand still-encoded byte strings when we send bytesimport os os listdir(')[: ['bigext-tree py''bigpy-dir py''bigpy-path py''bigpy-tree py'os listdir( ')[: [ 'bigext-tree py' 'bigpy-dir py' 'bigpy-path py' 'bigpy-tree py'both os walk (used in the example - scriptand glob glob inherit this behavior for the directory and file names they returnbecause they work by calling os listdir internally at each directory level for all these callspassing in byte string argument suppresses unicode decoding of file and directory names passing normal string assumes that filenames are decodable per the file system' unicode scheme the reason this potentially mattered to this section' example is that running the tree search version over an entire hard drive eventually reached an undecodable filename (an old saved web page with an odd name)which generated an exception when the print function tried to display it here' simplified recreation of the errorrun in shell window (command prompton windowsroot ' :\py for (dirsubsfilesin os walk(root)print(dirc:\py :\py \futureproofpython pythoninfo wiki_files :\py \oakwinter_com code >porting setuptools to py k_files traceback (most recent call last)file ""line in file " :\python \lib\encodings\cp py"line in encode return codecs charmap_encode(input,self errors,encoding_map)[ unicodeencodeerror'charmapcodec can' encode character '\ in position character maps to one way out of this dilemma is to use bytes strings for the directory root name--this suppresses filename decoding in the os listdir calls run by os walkand effectively limits the scope of later printing to raw bytes since printing does not have to deal with encodingsit works without error manually encoding to bytes prior to printing works toobut the results are slightly differentroot encode( ' :\\py for (dirsubsfilesin os walk(root encode())print(dir complete system programs
' :\\py \\futureproofpython pythoninfo wiki_filesb' :\\py \\oakwinter_com code \xbb porting setuptools to py k_filesb' :\\py \\what\ new in python \ python documentationfor (dirsubsfilesin os walk(root)print(dir encode() ' :\\py ' :\\py \\futureproofpython pythoninfo wiki_filesb' :\\py \\oakwinter_com code \xc \xbb porting setuptools to py k_filesb' :\\py \\what\xe \ \ new in python \xe \ \ python documentationunfortunatelyeither approach means that all the directory names printed during the walk display as cryptic byte strings to maintain the better readability of normal stringsi instead opted for the exception handler approach used in the script' code this avoids the issues entirelyfor (dirsubsfilesin os walk(root)tryprint(direxcept unicodeencodeerrorprint(dir encode()or simply punt if enocde may fail too :\py :\py \futureproofpython pythoninfo wiki_files :\py \oakwinter_com code >porting setuptools to py k_files ' :\\py \\what\xe \ \ new in python \xe \ \ python documentationoddlythoughthe error seems more related to printing than to unicode encodings of filenames--because the filename did not fail until printedit must have been decodable when its string was created initially that' why wrapping up the print in try sufficesotherwisethe error would occur earlier moreoverthis error does not occur if the script' output is redirected to fileeither at the shell level (bigext-tree py :out)or by the print call itself (print(dirfile= )in the latter case the output file must later be read back in binary modeas text mode triggers the same error when printing the file' content to the shell window (but againnot until printedin factthe exact same code that fails when run in system shell command prompt on windows works without error when run in the idle gui on the same platform--the tkinter gui used by idle handles display of characters that printing to standard output connected to shell terminal window does notimport os run in idle ( tkinter gui)not system shell root ' :\py for (dirsubsfilesin os walk(root)print(dirc:\py :\py \futureproofpython pythoninfo wiki_files :\py \oakwinter_com code >porting setuptools to py k_files :\py \what' new in python -python documentation_files in other wordsthe exception occurs only when printing to shell windowand long after the file name string is created this reflects an artifact of extra translations quick game of "find the biggest python file
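Another workaround worth noting, though it is not what this chapter's script uses: encode with a codec error handler so that any filename prints without raising. A sketch; the 'backslashreplace' handler choice is my own assumption here:

# a sketch of an alternative to the try/except approach: escape
# unencodable characters instead of raising UnicodeEncodeError
# (assumption: not the book script's actual code)
import sys

def safeprint(text):
    enc = sys.stdout.encoding or 'ascii'         # None when output redirected
    print(text.encode(enc, 'backslashreplace').decode(enc))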
No room for further exploration here, though; we'll have to be satisfied with the fact that our exception handler sidesteps the printing problem altogether. You should still be aware of the implications of Unicode filename decoding, though: on some platforms you may need to pass byte strings to os.walk in this script to prevent decoding errors as filenames are created. Since Unicode is still relatively new in Python 3.X, be sure to test for such errors on your computer and your Python. Also see Python's manuals for more on the treatment of Unicode filenames, and the text Learning Python for more on Unicode in general. As noted earlier, our scripts also had to open text files in binary mode because some might contain undecodable content, too. It might seem surprising that Unicode issues can crop up in basic printing like this, but such is life in the brave new Unicode world. Many real-world scripts don't need to care much about Unicode, of course, including those we'll explore in the next section.

Splitting and Joining Files

Like most kids, mine spent a lot of time on the Internet when they were growing up. As far as I could tell, it was the thing to do among their generation; computer geeks and gurus seem to have been held in the same sort of esteem that my generation once held rock stars. When kids disappeared into their rooms, chances were good that they were hacking on computers, not mastering guitar riffs (well, real ones, at least). It may or may not be healthier than some of the diversions of my own misspent youth, but that's a topic for another kind of book.

Despite the rhetoric of techno-pundits about the Web's potential to empower an upcoming generation in ways unimaginable by their predecessors, my kids seemed to spend most of their time playing games. To fetch new ones in my house at the time, they had to download to a shared computer which had Internet access and transfer those games to their own computers to install (their own machines did not have Internet access until later, for reasons that most parents in the crowd could probably expand upon).

The problem with this scheme is that game files are not small. They were usually much too big to fit on a floppy or memory stick of the time, and burning a CD or DVD took away valuable game-playing time. If all the machines in my house ran Linux, this would have been a nonissue: there are standard command-line programs on Unix for chopping a file into pieces small enough to fit on a transfer device (split), and others for putting the pieces back together to re-create the original file.

(For related print issues, see this book's workaround for program aborts when printing stack tracebacks to standard output from spawned programs. Unlike the problem described here, that issue does not appear to be related to Unicode characters that may be unprintable in shell windows, but reflects another regression for standard output prints in general in Python 3.X, which may or may not be repaired by the time you read this text. See also the Python environment variable PYTHONIOENCODING, which can override the default encoding used for standard streams.)
4,856
Since we had all sorts of different machines in the house, though, we needed a more portable solution.

Splitting Files Portably

Since all the computers in my house ran Python, a simple portable Python script came to the rescue. The Python program in the example below distributes a single file's contents among a set of part files and stores those part files in a directory.

Example: PP4E\System\Filetools\split.py

#!/usr/bin/python
"""
################################################################################
split a file into a set of parts; join.py puts them back together;
this is a customizable version of the standard Unix split command-line
utility; because it is written in Python, it also works on Windows and
can be easily modified; because it exports a function, its logic can
also be imported and reused in other applications;
################################################################################
"""

import sys, os
kilobytes = 1024
megabytes = kilobytes * 1000
chunksize = int(1.4 * megabytes)                   # default: roughly a floppy

def split(fromfile, todir, chunksize=chunksize):
    if not os.path.exists(todir):                  # caller handles errors
        os.mkdir(todir)                            # make dir, read/write parts
    else:
        for fname in os.listdir(todir):            # delete any existing files
            os.remove(os.path.join(todir, fname))
    partnum = 0
    input = open(fromfile, 'rb')                   # binary: no decode, endline
    while True:                                    # eof = empty string from read
        chunk = input.read(chunksize)              # get next part <= chunksize
        if not chunk: break
        partnum += 1
        filename = os.path.join(todir, ('part%04d' % partnum))
        fileobj  = open(filename, 'wb')
        fileobj.write(chunk)
        fileobj.close()                            # or simply open().write()
    input.close()
    assert partnum <= 9999                         # join sort fails if 5 digits
    return partnum

Footnote: I should note that this background story stems from the second edition of this book. Some ten years later, floppies have largely gone the way of the parallel port and the dinosaur. Moreover, burning a CD or DVD is no longer as painful as it once was; there are new options today such as large flash memory cards, wireless home networks, and simple email; and naturally, my home computers' configuration isn't what it once was. For that matter, some of my kids are no longer kids (though they've retained some backward compatibility with their former selves).
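As an aside, the same logic can also be coded with the with statement discussed later in this section, which guarantees file closes on both normal exits and exceptions. The following is a minimal sketch of that alternative, not the book's script; the split_with name is invented here for illustration:

import os

def split_with(fromfile, todir, chunksize=int(1.4 * 1000 * 1024)):
    if not os.path.exists(todir):
        os.mkdir(todir)                        # same setup as split()
    partnum = 0
    with open(fromfile, 'rb') as infile:       # closed on exit or exception
        while True:
            chunk = infile.read(chunksize)
            if not chunk: break
            partnum += 1
            partname = os.path.join(todir, 'part%04d' % partnum)
            with open(partname, 'wb') as part:
                part.write(chunk)
    return partnum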
if __name__ == '__main__':
    if len(sys.argv) == 2 and sys.argv[1] == '-help':
        print('Use: split.py [file-to-split target-dir [chunksize]]')
    else:
        if len(sys.argv) < 3:
            interactive = True
            fromfile = input('File to be split? ')           # input if clicked
            todir    = input('Directory to store part files? ')
        else:
            interactive = False
            fromfile, todir = sys.argv[1:3]                  # args in cmdline
            if len(sys.argv) == 4: chunksize = int(sys.argv[3])
        absfrom, absto = map(os.path.abspath, [fromfile, todir])
        print('Splitting', absfrom, 'to', absto, 'by', chunksize)

        try:
            parts = split(fromfile, todir, chunksize)
        except:
            print('Error during split:')
            print(sys.exc_info()[0], sys.exc_info()[1])
        else:
            print('Split finished:', parts, 'parts are in', absto)
        if interactive: input('Press Enter key')             # pause if clicked

By default, this script splits the input file into chunks that are roughly the size of a floppy disk--perfect for moving big files between the electronically isolated machines of the time. Most importantly, because this is all portable Python code, this script will run on just about any machine, even ones without their own file splitter. All it requires is an installed Python. Here it is at work splitting a Python self-installer executable located in the current working directory on Windows (I've omitted a few dir output lines to save space here; use ls -l on Unix):

C:\...> cd C:\temp

C:\temp> dir python-3.1.msi
...more...
...date/time...      ...size... python-3.1.msi
               1 File(s)  ...size... bytes
               0 Dir(s)   ...space... bytes free

C:\temp> python C:\...\PP4E\System\Filetools\split.py -help
Use: split.py [file-to-split target-dir [chunksize]]

C:\temp> python C:\...\PP4E\System\Filetools\split.py python-3.1.msi pysplit
Splitting C:\temp\python-3.1.msi to C:\temp\pysplit by 1433600
Split finished: 10 parts are in C:\temp\pysplit

C:\temp> dir pysplit
...more...
...date/time...      1,433,600 part0001
...date/time...      1,433,600 part0002
...date/time...      1,433,600 part0003
...date/time...      1,433,600 part0004
...date/time...      1,433,600 part0005
...date/time...      1,433,600 part0006
...date/time...      1,433,600 part0007
...date/time...      1,433,600 part0008
...date/time...      1,433,600 part0009
...date/time...      ...size... part0010
              10 File(s)  ...size... bytes
               0 Dir(s)   ...space... bytes free
Each of these generated part files represents one binary chunk of the file python-3.1.msi--a chunk small enough to fit comfortably on a floppy disk of the time. In fact, if you add the sizes of the generated part files given by the ls command, you'll come up with exactly the same number of bytes as the original file's size; a quick way to automate that check appears in the sketch below.
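The following is a small verification sketch along those lines; it is not one of the book's scripts, and the verify_split name is invented here. It sums the part sizes in the output directory and compares the total to the original file's size:

import os

def verify_split(fromfile, todir):
    # total the generated parts and compare to the original's size
    sizes = [os.path.getsize(os.path.join(todir, fname))
             for fname in os.listdir(todir)]
    return sum(sizes) == os.path.getsize(fromfile)

print(verify_split('python-3.1.msi', 'pysplit'))     # True if nothing was lost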
Before we see how to put these files back together again, here are a few points to ponder as you study this script's code:

Operation modes
This script is designed to input its parameters in either interactive or command-line mode; it checks the number of command-line arguments to find out the mode in which it is being used. In command-line mode, you list the file to be split and the output directory on the command line, and you can optionally override the default part file size with a third command-line argument.

In interactive mode, the script asks for a filename and output directory at the console window with input, and pauses for a key press at the end before exiting. This mode is nice when the program file is started by clicking on its icon; on Windows, parameters are typed into a pop-up DOS box that doesn't automatically disappear. The script also shows the absolute paths of its parameters (by running them through os.path.abspath) because they may not be obvious in interactive mode.

Binary file mode
This code is careful to open both input and output files in binary mode ('rb', 'wb'), because it needs to portably handle things like executables and audio files, not just text. We learned earlier that on Windows, text-mode files automatically map \r\n end-of-line sequences to \n on input and map \n to \r\n on output. For true binary data, we really don't want any \r characters in the data to go away when read, and we don't want any superfluous \r characters to be added on output. Binary-mode files suppress this \r mapping when the script is run on Windows and so avoid data corruption.

In Python 3.X, binary mode also means that file data is bytes objects in our script, not encoded str text, though we don't need to do anything special--this script's file processing code runs the same on Python 3.X as it did on 2.X. In fact, binary mode is required in 3.X for this program, because the target file's data may not be encoded text at all; text mode requires that file content must be decodable in 3.X, and that might fail both for truly binary data and for text files obtained from other platforms.

Manually closing files
This script also goes out of its way to manually close its files. As we also saw earlier, we can often get by with a single line: open(partname, 'wb').write(chunk). This shorter form relies on the fact that the current Python implementation automatically closes files for you when file objects are reclaimed (i.e., when they are garbage collected because there are no more references to the file object). In this one-liner, the file object would be reclaimed immediately, because the open result is temporary in an expression and is never referenced by a longer-lived name. Similarly, the input file is reclaimed when the split function exits.

However, it's not impossible that this automatic-close behavior may go away in the future. Moreover, the Jython Java-based Python implementation does not reclaim unreferenced objects as immediately as the standard Python. You should close manually if you care about the Java port, if your script may potentially create many files in a short amount of time, or if it may run on a machine that has a limit on the number of open files per program. Because the split function in this module is intended to be a general-purpose tool, it accommodates such worst-case scenarios. Also see the earlier mention of the file context manager and the with statement; these provide an alternative way to guarantee file closes.

Joining Files Portably

Back to moving big files around the house: after downloading a big game program file, you can run the previous splitter script by clicking on its name in Windows Explorer and typing filenames. After a split, simply copy each part file onto its own floppy (or other more modern medium), walk the files to the destination machine, and re-create the split output directory on the target computer by copying the part files. Finally, the script in the next example is clicked or otherwise run to put the parts back together.
Example: PP4E\System\Filetools\join.py

#!/usr/bin/python
"""
################################################################################
join all part files in a dir created by split.py, to re-create file;
this is roughly like a 'cat fromdir/* > tofile' command on Unix, but is
more portable and configurable, and exports the join operation as a
reusable function; relies on sort order of filenames: must be same
length; could extend split/join to pop up Tkinter file selectors;
################################################################################
"""

import os, sys
readsize = 1024

def join(fromdir, tofile):
    output = open(tofile, 'wb')
    parts  = os.listdir(fromdir)
    parts.sort()
    for filename in parts:
        filepath = os.path.join(fromdir, filename)
        fileobj  = open(filepath, 'rb')
        while True:
            filebytes = fileobj.read(readsize)
            if not filebytes: break
            output.write(filebytes)
        fileobj.close()
    output.close()

if __name__ == '__main__':
    if len(sys.argv) == 2 and sys.argv[1] == '-help':
        print('Use: join.py [from-dir-name to-file-name]')
    else:
        if len(sys.argv) != 3:
            interactive = True
            fromdir = input('Directory containing part files? ')
            tofile  = input('Name of file to be recreated? ')
        else:
            interactive = False
            fromdir, tofile = sys.argv[1:]
        absfrom, absto = map(os.path.abspath, [fromdir, tofile])
        print('Joining', absfrom, 'to make', absto)

        try:
            join(fromdir, tofile)
        except:
            print('Error joining files:')
            print(sys.exc_info()[0], sys.exc_info()[1])
        else:
            print('Join complete: see', absto)
        if interactive: input('Press Enter key')         # pause if clicked

Footnote: It turns out that the zip, gzip, and tar commands can all be replaced with pure Python code today, too. The gzip module in the Python standard library provides tools for reading and writing compressed gzip files, usually named with a .gz filename extension. It can serve as an all-Python equivalent of the standard gzip and gunzip command-line utility programs. This built-in module uses another module called zlib that implements gzip-compatible data compressions. In recent Python releases, the zipfile module can be imported to make and use ZIP format archives (zip is an archive and compression format; gzip is a compression scheme), and the tarfile module allows scripts to read and write tar archives. See the Python library manual for details.
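To make the footnote's point concrete, here is a brief hedged sketch of the zipfile module in action; the archive name and the use of this section's installer file are illustrative only, and this is not one of the chapter's scripts:

import zipfile

archive = zipfile.ZipFile('backup.zip', 'w', zipfile.ZIP_DEFLATED)
archive.write('python-3.1.msi')            # add a file, compressed
archive.close()

archive = zipfile.ZipFile('backup.zip')    # reopen for reading
archive.extractall('unpacked')             # unpack into a directory
archive.close()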
Here is a join in progress on Windows, combining the split files we made a moment ago. After running the join script, you still may need to run something like zip, gzip, or tar to unpack an archive file, unless it's shipped as an executable, but at least the original downloaded file is set to go:

C:\temp> python C:\...\PP4E\System\Filetools\join.py -help
Use: join.py [from-dir-name to-file-name]

C:\temp> python C:\...\PP4E\System\Filetools\join.py pysplit mypy31.msi
Joining C:\temp\pysplit to make C:\temp\mypy31.msi
Join complete: see C:\temp\mypy31.msi

C:\temp> dir *.msi
...more...
...date/time...      ...size... mypy31.msi
...date/time...      ...size... python-3.1.msi
               2 File(s)  ...size... bytes
               0 Dir(s)   ...space... bytes free

C:\temp> fc /B mypy31.msi python-3.1.msi
Comparing files mypy31.msi and python-3.1.msi
FC: no differences encountered

The join script simply uses os.listdir to collect all the part files in a directory created by split, and sorts the filename list to put the parts back together in the correct order. We get back an exact byte-for-byte copy of the original file (proved by the DOS fc command in the listing; use cmp on Unix).

Some of this process is still manual, of course (I never did figure out how to script the "walk the floppies to your bedroom" step), but the split and join scripts make it both quick and simple to move big files around. Because this script is also portable Python code, it runs on any platform to which we cared to move split files. For instance, my home computers ran both Windows and Linux at the time; since this script runs on either platform, the gamers were covered. Before we move on, here are a couple of implementation details worth underscoring in the join script's code:

Reading by blocks or files
First of all, notice that this script deals with files in binary mode, but also reads each part file in blocks of 1 KB each. In fact, the readsize setting here (the size of each block read from an input part file) has no relation to chunksize in split.py (the total size of each output part file). This script could instead read each part file all at once: output.write(open(filepath, 'rb').read()). The downside to this scheme is that it really does load all of a file into memory at once. For example, reading a 1.4 MB part file into memory all at once with the file object read method generates a 1.4 MB string in memory to hold the file's bytes. Since split allows users to specify even larger chunk sizes, the join script plans for the worst and reads in terms of limited-size blocks. To be completely robust, the split script could read its input data in smaller chunks too, but this hasn't become a concern in practice (recall that as your program runs, Python automatically reclaims strings that are no longer referenced, so this isn't as wasteful as it might seem).

Sorting filenames
If you study this script's code closely, you may also notice that the join scheme it uses relies completely on the sort order of filenames in the parts directory. Because it simply calls the list sort method on the filenames list returned by os.listdir, it implicitly requires that filenames have the same length and format when created. To that end, the splitter builds filenames with the string formatting expression ('part%04d') to make sure that filenames all have the same number of digits at the end (four); when sorted, the leading zero characters in small numbers guarantee that part files are ordered for joining correctly. Alternatively, we could strip off the digits in filenames, convert them with int, and sort numerically by using the list sort method's key argument, but that would still imply that all filenames must start with some type of common substring, and so doesn't quite remove the file-naming dependency between the split and join scripts.
Because these scripts are designed to be two steps of the same process, though, some dependencies between them seem reasonable.

Usage Variations

Finally, let's run a few more experiments with these Python system utilities to demonstrate other usage modes. When run without full command-line arguments, both split and join are smart enough to input their parameters interactively. Here they are chopping and gluing the Python self-installer file on Windows again, with parameters typed in the DOS console window:

C:\temp> python C:\...\PP4E\System\Filetools\split.py
File to be split? python-3.1.msi
Directory to store part files? splitout
Splitting C:\temp\python-3.1.msi to C:\temp\splitout by 1433600
Split finished: 10 parts are in C:\temp\splitout
Press Enter key

C:\temp> python C:\...\PP4E\System\Filetools\join.py
Directory containing part files? splitout
Name of file to be recreated? newpy31.msi
Joining C:\temp\splitout to make C:\temp\newpy31.msi
Join complete: see C:\temp\newpy31.msi
Press Enter key

C:\temp> fc /B python-3.1.msi newpy31.msi
Comparing files python-3.1.msi and newpy31.msi
FC: no differences encountered

When these program files are double-clicked in a Windows file explorer GUI, they work the same way (there are usually no command-line arguments when they are launched this way). In this mode, absolute path displays help clarify where files really are. Remember, the current working directory is the script's home directory when clicked like this, so a simple name actually maps to a source code directory; type a full path to make the split files show up somewhere else:

[in a pop-up DOS console box when split.py is clicked]
File to be split? c:\temp\python-3.1.msi
Directory to store part files? c:\temp\parts
Splitting c:\temp\python-3.1.msi to c:\temp\parts by 1433600
Split finished: 10 parts are in c:\temp\parts
Press Enter key

[in a pop-up DOS console box when join.py is clicked]
Directory containing part files? c:\temp\parts
Name of file to be recreated? c:\temp\morepy31.msi
Joining c:\temp\parts to make c:\temp\morepy31.msi
Join complete: see c:\temp\morepy31.msi
Press Enter key
Because these scripts package their core logic in functions, though, it's just as easy to reuse their code by importing and calling from another Python component (make sure your module import search path includes the directory containing the PP4E root first; the first abbreviated line here is one way to do so):

C:\temp> set PYTHONPATH=C:\...\dev\Examples
C:\temp> python
>>> from PP4E.System.Filetools.split import split
>>> from PP4E.System.Filetools.join  import join
>>> numparts = split('python-3.1.msi', 'calldir')
>>> numparts
10
>>> join('calldir', 'callpy31.msi')
>>> import os
>>> os.system('fc /B python-3.1.msi callpy31.msi')
Comparing files python-3.1.msi and callpy31.msi
FC: no differences encountered
0

A word about performance: all the split and join tests shown so far process a multimegabyte file, but they take less than one second of real wall-clock time to finish on my Windows Atom-processor laptop computer--plenty fast for just about any use I could imagine. Both scripts run just as fast for other reasonable part file sizes, too. Here is the splitter chopping up the file first into a few large parts, and then into much smaller ones:

C:\temp> C:\...\PP4E\System\Filetools\split.py python-3.1.msi tempsplit ...
Splitting C:\temp\python-3.1.msi to C:\temp\tempsplit by ...
Split finished: 4 parts are in C:\temp\tempsplit

C:\temp> dir tempsplit
...more...
 Directory of C:\temp\tempsplit

...date/time...    <DIR>          .
...date/time...    <DIR>          ..
...date/time...      ...size... part0001
...date/time...      ...size... part0002
...date/time...      ...size... part0003
...date/time...      ...size... part0004
               4 File(s)  ...size... bytes
               2 Dir(s)   ...space... bytes free
C:\temp> C:\...\PP4E\System\Filetools\split.py python-3.1.msi tempsplit ...
Splitting C:\temp\python-3.1.msi to C:\temp\tempsplit by ...
Split finished: ... parts are in C:\temp\tempsplit

C:\temp> dir tempsplit
...more...
 Directory of C:\temp\tempsplit

...date/time...    <DIR>          .
...date/time...    <DIR>          ..
...date/time...      ...size... part0001
...date/time...      ...size... part0002
...date/time...      ...size... part0003
...date/time...      ...size... part0004
...date/time...      ...size... part0005
...more lines omitted...
             ... File(s)  ...size... bytes
               2 Dir(s)   ...space... bytes free

The split can take noticeably longer to finish, but only if the part file's size is set small enough to generate thousands of part files--splitting into thousands of parts works, but runs slower (though some machines today are quick enough that you might not notice):

C:\temp> C:\...\PP4E\System\Filetools\split.py python-3.1.msi tempsplit ...
Splitting C:\temp\python-3.1.msi to C:\temp\tempsplit by ...
Split finished: ... parts are in C:\temp\tempsplit

C:\temp> C:\...\PP4E\System\Filetools\join.py tempsplit manypy31.msi
Joining C:\temp\tempsplit to make C:\temp\manypy31.msi
Join complete: see C:\temp\manypy31.msi

C:\temp> fc /B python-3.1.msi manypy31.msi
Comparing files python-3.1.msi and manypy31.msi
FC: no differences encountered

C:\temp> dir tempsplit
...more...
 Directory of C:\temp\tempsplit

...date/time...    <DIR>          .
...date/time...    <DIR>          ..
...date/time...      ...size... part0001
...date/time...      ...size... part0002
...date/time...      ...size... part0003
...date/time...      ...size... part0004
...date/time...      ...size... part0005
...more lines omitted...
             ... File(s)  ...size... bytes
               2 Dir(s)   ...space... bytes free

Finally, the splitter is also smart enough to create the output directory if it doesn't yet exist and to clear out any old files there if it does exist--the following, for example, leaves only new files in the output directory. Because the joiner combines whatever files exist in the output directory, this is a nice ergonomic touch: if the output directory was not cleared before each split, it would be too easy to forget that a prior run's files are still there. Given the target audience for these scripts, they needed to be as forgiving as possible; your user base may vary (though you often shouldn't assume so):

C:\temp> C:\...\PP4E\System\Filetools\split.py python-3.1.msi tempsplit
Splitting C:\temp\python-3.1.msi to C:\temp\tempsplit by 1433600
Split finished: 10 parts are in C:\temp\tempsplit

C:\temp> dir tempsplit
...more...
 Directory of C:\temp\tempsplit

...date/time...    <DIR>          .
...date/time...    <DIR>          ..
...date/time...      1,433,600 part0001
...date/time...      1,433,600 part0002
...date/time...      1,433,600 part0003
...more lines omitted...
              10 File(s)  ...size... bytes
               2 Dir(s)   ...space... bytes free

Of course, the dilemma that these scripts address might today be more easily addressed by simply buying a bigger memory stick or giving kids their own Internet access. Still, once you catch the scripting bug, you'll find the ease and flexibility of Python to be powerful and enabling tools, especially for writing custom automation scripts like these. When used well, Python may well become your Swiss Army knife of computing.

Generating Redirection Web Pages

Moving is rarely painless, even in cyberspace. Changing your website's Internet address can lead to all sorts of confusion. You need to ask known contacts to use the new address and hope that others will eventually stumble onto it themselves. But if you rely on the Internet, moves are bound to generate at least as much confusion as an address change in the real world.

Unfortunately, such site relocations are often unavoidable. Both Internet Service Providers (ISPs) and server machines can come and go over the years. Moreover, some ISPs let their service level degrade over time.
If you are unlucky enough to wind up with such an ISP, there is not much recourse but to change providers, and that often implies a change of web addresses.

Footnote: It happens. In fact, most people who spend any substantial amount of time in cyberspace could probably tell a horror story or two. Mine goes like this: a number of years ago, I had an account with an ISP that went completely offline for a few weeks in response to a security breach by an ex-employee. Worse, not only was personal email disabled, but queued-up messages were permanently lost. If your livelihood depends on email and the Web as much as mine does, you'll appreciate the havoc such an outage can wreak.

Imagine, though, that you are an O'Reilly author and have published your website's address in multiple books sold widely all over the world. What do you do when your ISP's service level requires a site change? Notifying each of the hundreds of thousands of readers out there isn't exactly a practical solution.

Probably the best you can do is to leave forwarding instructions at the old site for some reasonably long period of time--the virtual equivalent of a "we've moved" sign in a storefront window. On the Web, such a sign can also send visitors to the new site automatically: simply leave a page at the old site containing a hyperlink to the page's address at the new site, along with timed auto-relocation specifications. With such forward-link files in place, visitors to the old addresses will be only one click or a few seconds away from reaching the new ones.

That sounds simple enough. But because visitors might try to directly access the address of any file at your old site, you generally need to leave one forward-link file for every old file--HTML pages, images, and so on. Unless your prior server supports auto-redirection (and mine did not), this represents a dilemma. If you happen to enjoy doing lots of mindless typing, you could create each forward-link file by hand. But given the number of HTML files at my home site at the time I wrote this paragraph, the prospect of running one editor session per file was more than enough motivation for an automated solution.

Page Template File

Here's what I came up with. First of all, I create a general page template text file, shown in the example below, to describe how all the forward-link files should look, with parts to be filled in later.

Example: PP4E\System\Filetools\template.html

<HTML><HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="10; URL=http://$server$/$home$/$file$">
<TITLE>Site Redirection Page: $file$</TITLE>
</HEAD>
<BODY>
<H1>This page has moved</H1>

<P>This page now lives at this address:
<P><A HREF="http://$server$/$home$/$file$">
http://$server$/$home$/$file$</A>

<P>Please click on the new address to jump to this page, and
update any links accordingly.  You will be redirected shortly.
</P>
</BODY></HTML>

To fully understand this template, you have to know something about HTML, a web page description language that we'll explore in Part IV. But for the purposes of this example, you can ignore most of this file and focus on just the parts surrounded by dollar signs: the strings $server$, $home$, and $file$ are targets to be replaced with real values by global text substitutions. They represent items that vary per site relocation and file.

Page Generator Script

Now, given a page template file, the Python script in the next example generates all the required forward-link files automatically.

Example: PP4E\System\Filetools\site-forward.py

"""
################################################################################
Create forward-link pages for relocating a web site.
Generates one page for every existing site html file; upload the generated
files to your old web site.  See ftplib later in the book for ways to run
uploads in scripts either after or during page file creation.
################################################################################
"""

import os
servername   = 'learning-python.com'      # where site is relocating to
homedir      = 'books'                    # where site will be rooted
sitefilesdir = r'C:\temp\public_html'     # where site files live locally
uploaddir    = r'C:\temp\isp-forward'     # where to store forward files
templatename = 'template.html'            # template for generated pages

try:
    os.mkdir(uploaddir)                   # make upload dir if needed
except OSError: pass

template  = open(templatename).read()     # load or import template text
sitefiles = os.listdir(sitefilesdir)      # filenames, no directory prefix

count = 0
for filename in sitefiles:
    if filename.endswith('.html') or filename.endswith('.htm'):
        fwdname = os.path.join(uploaddir, filename)
        print('creating', filename, 'as', fwdname)
        filetext = template.replace('$server$', servername)    # insert text
        filetext = filetext.replace('$home$', homedir)         # and write
        filetext = filetext.replace('$file$', filename)        # file varies
        open(fwdname, 'w').write(filetext)
        count += 1

print('Last file =>\n', filetext, sep='')
print('Done:', count, 'forward files created.')

Notice that the template's text is loaded by reading a file; it would work just as well to code it as an imported Python string variable (e.g., a triple-quoted string in a module file). Also observe that all configuration options are assignments at the top of the script, not command-line arguments; since they change so seldom, it's convenient to type them just once in the script itself.

But the main thing worth noticing here is that this script doesn't care what the template file looks like at all; it simply performs global substitutions blindly in its text, with a different filename value for each generated file. In fact, we can change the template file any way we like without having to touch the script. Though a fairly simple technique, such division of labor can be used in all sorts of contexts--generating "makefiles," form letters, HTML replies from CGI scripts on web servers, and so on. In terms of library tools, the generator script:

- Uses os.listdir to step through all the filenames in the site's directory (glob.glob would work too, but may require stripping directory prefixes from file names)

- Uses the string object's replace method to perform global search-and-replace operations that fill in the $-delimited targets in the template file's text, and endswith to skip non-HTML files (e.g., images--most browsers won't know what to do with HTML text in a ".jpg" file)

- Uses os.path.join and built-in file objects to write the resulting text out to a forward-link file of the same name in an output directory
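As an aside, the standard library's string.Template class performs the same kind of fill-in, and could stand in for the replace calls if the template's targets were recoded in its ${name} form (the template above uses $name$ delimiters instead, so this is a hedged alternative, not a drop-in). A minimal sketch:

from string import Template

# assumes a template recoded with ${server}, ${home}, ${file} targets
text = Template(open('template.html').read())
page = text.substitute(server='learning-python.com',
                       home='books',
                       file='about-lp.html')        # one generated page's text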
The end result is a mirror image of the original website directory, containing only forward-link files generated from the page template. As an added bonus, the generator script can be run on just about any Python platform--I can run it on my Windows laptop (where I'm writing this book), as well as on a Linux server (where my learning-python.com domain is hosted). Here it is in action on Windows:

C:\...\PP4E\System\Filetools> python site-forward.py
creating about-lp.html as C:\temp\isp-forward\about-lp.html
...many more lines deleted...
creating training.html as C:\temp\isp-forward\training.html
creating whatsnew.html as C:\temp\isp-forward\whatsnew.html
creating xlate-lp.html as C:\temp\isp-forward\xlate-lp.html
creating zopeoutline.htm as C:\temp\isp-forward\zopeoutline.htm
Last file =>
<HTML><HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="10; URL=http://learning-python.com/books/zopeoutline.htm">
<TITLE>Site Redirection Page: zopeoutline.htm</TITLE>
</HEAD>
<BODY>
<H1>This page has moved</H1>

<P>This page now lives at this address:

<P><A HREF="http://learning-python.com/books/zopeoutline.htm">
http://learning-python.com/books/zopeoutline.htm</A>

<P>Please click on the new address to jump to this page, and
update any links accordingly.  You will be redirected shortly.
</P>
</BODY></HTML>
Done: ... forward files created.

To verify this script's output, double-click on any of the output files to see what they look like in a web browser (or run a start command in a DOS console on Windows--e.g., start isp-forward\about-lp.html). The figure below shows what one generated page looks like on my machine.

[Figure: a site-forward output file page, viewed in a web browser]

To complete the process, you still need to install the forward links: upload all the generated files in the output directory to your old site's web directory. If that's too much to do by hand, too, be sure to see the FTP site upload scripts later in this book for a demonstration of how much manual labor Python can automate.
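As a tiny hedged preview of that later material, here is what such an upload pass can look like with the standard library's ftplib module; the server name, login values, and remote directory here are made up for illustration:

import os
from ftplib import FTP

uploaddir  = r'C:\temp\isp-forward'            # the generator's output dir
connection = FTP('ftp.example.com')            # hypothetical old-site server
connection.login('username', 'password')       # hypothetical account
connection.cwd('public_html')                  # assumed remote web directory
for name in os.listdir(uploaddir):
    localfile = open(os.path.join(uploaddir, name), 'rb')
    connection.storbinary('STOR ' + name, localfile)   # upload one page
    localfile.close()
connection.quit()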
The next section provides another prime example.

A Regression Test Script

Mistakes happen. As we've seen, Python provides interfaces to a variety of system services, along with tools for adding others. The example below shows some of the more commonly used system tools in action. It implements a simple regression test system for Python scripts--it runs each script in a directory of Python scripts with provided input and command-line arguments, and compares the output of each run to the prior run's results. As such, this script can be used as an automated testing system to catch errors introduced by changes in program source files; in a big system, you might not know when a fix is really a bug in disguise.

Example: PP4E\System\Tester\tester.py

"""
################################################################################
Test a directory of Python scripts, passing command-line arguments,
piping in stdin, and capturing stdout, stderr, and exit status to detect
failures and regressions from prior run outputs.  The subprocess module
spawns and controls streams (much like os.popen3 in Python 2.X), and is
cross-platform.  Streams are always binary bytes in subprocess.  Test
inputs, args, outputs, and errors map to files in subdirectories.

This is a command-line script, using command-line arguments for optional
test directory name, and force-generation flag.  While we could package
it as a callable function, the fact that its results are messages and
output files makes a call/return model less useful.

Suggested enhancement: could be extended to allow multiple sets of
command-line arguments and/or inputs per test script, to run a script
multiple times (glob for multiple ".in*" files in Inputs?).  Might also
seem simpler to store all test files in the same directory with different
extensions, but this could grow large over time.  Could also save both
stderr and stdout to Errors on failures, but I prefer to have expected/
actual output in Outputs on regressions.
################################################################################
"""

import os, sys, glob, time
from subprocess import Popen, PIPE

# configuration args
testdir  = sys.argv[1] if len(sys.argv) > 1 else os.curdir
forcegen = len(sys.argv) > 2
print('Start tester:', time.asctime())
print('in', os.path.abspath(testdir))
def verbose(*args):
    print('-' * 80)
    for arg in args: print(arg)
def quiet(*args): pass
trace = quiet                                  # change to verbose to watch

# glob scripts to be tested
testpatt  = os.path.join(testdir, 'Scripts', '*.py')
testfiles = glob.glob(testpatt)
testfiles.sort()
trace(os.getcwd(), *testfiles)

numfail = 0
for testpath in testfiles:                     # run all tests in dir
    testname = os.path.basename(testpath)      # strip directory path

    # get input and args
    infile = testname.replace('.py', '.in')
    inpath = os.path.join(testdir, 'Inputs', infile)
    indata = open(inpath, 'rb').read() if os.path.exists(inpath) else b''

    argfile = testname.replace('.py', '.args')
    argpath = os.path.join(testdir, 'Args', argfile)
    argdata = open(argpath).read() if os.path.exists(argpath) else ''

    # locate output and error, scrub prior results
    outfile = testname.replace('.py', '.out')
    outpath = os.path.join(testdir, 'Outputs', outfile)
    outpathbad = outpath + '.bad'
    if os.path.exists(outpathbad): os.remove(outpathbad)

    errfile = testname.replace('.py', '.err')
    errpath = os.path.join(testdir, 'Errors', errfile)
    if os.path.exists(errpath): os.remove(errpath)

    # run test with redirected streams
    pypath  = sys.executable
    command = '%s %s %s' % (pypath, testpath, argdata)
    trace(command, indata)

    process = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    process.stdin.write(indata)
    process.stdin.close()
    outdata = process.stdout.read()
    errdata = process.stderr.read()            # data are bytes: requires
    exitstatus = process.wait()                # binary mode files when saved
    trace(outdata, errdata, exitstatus)

    # analyze results
    if exitstatus != 0:
        print('ERROR status:', testname, exitstatus)   # status and/or stderr
    if errdata:
        print('ERROR stream:', testname, errpath)      # save error text
        open(errpath, 'wb').write(errdata)

    if exitstatus or errdata:                  # consider both a failure
        numfail += 1                           # can get status+stderr
        open(outpathbad, 'wb').write(outdata)  # save output to view

    elif not os.path.exists(outpath) or forcegen:
        print('generating:', outpath)          # create first output
        open(outpath, 'wb').write(outdata)

    else:
        priorout = open(outpath, 'rb').read()  # or compare to prior run
        if priorout == outdata:
            print('passed:', testname)
        else:
            numfail += 1
            print('FAILED output:', testname, outpathbad)
            open(outpathbad, 'wb').write(outdata)

print('Finished:', time.asctime())
print('%s tests were run, %s tests failed.' % (len(testfiles), numfail))
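To see the script's core spawn-and-capture step in isolation, here is a minimal sketch that runs just one of the test scripts with canned input and arguments, much as the tester's loop does; run it from the test directory (the input bytes here mirror the Inputs file shown later in this section):

import sys
from subprocess import Popen, PIPE

command = '%s %s %s' % (sys.executable,
                        r'Scripts\test-basic-args.py',
                        '-command -line --stuff')
process = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
process.stdin.write(b'Eggs\n10\n')       # canned stdin: bytes in subprocess
process.stdin.close()
print(process.stdout.read())             # captured stdout, a bytes object
print(process.stderr.read())             # captured stderr, empty on success
print(process.wait())                    # exit status: 0 means no error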
We've seen the tools used by this script earlier in this part of the book--subprocess, os.path, glob, files, and the like. This example largely just pulls these tools together to solve a useful purpose. Its core operation is comparing new outputs to old, in order to spot changes ("regressions"). Along the way, it also manages command-line arguments, error messages, status codes, and files.

This script is also larger than most we've seen so far, but it's a realistic and representative system administration tool (in fact, it's derived from a similar tool I actually used in the past to detect changes in a compiler). Probably the best way to understand how it works is to demonstrate what it does. The next section steps through a testing session, to be read in conjunction with studying the test script's code.

Running the Test Driver

Much of the magic behind the test driver script has to do with its directory structure. When you run it for the first time in a test directory (or force it to start from scratch there by passing a second command-line argument), it:

- Collects scripts to be run in the Scripts subdirectory

- Fetches any associated script input and command-line arguments from the Inputs and Args subdirectories

- Generates initial stdout output files for tests that exit normally in the Outputs subdirectory

- Reports tests that fail either by exit status code or by error messages appearing in stderr

On all failures, the script also saves any stderr error message text, as well as any stdout data generated up to the point of failure; standard error text is saved to a file in the Errors subdirectory, and standard output of failed tests is saved with a special ".bad" filename extension in Outputs (saving it as the normal expected-output file
would trigger a failure when the test is later fixed!). Here's a first run:

C:\...\PP4E\System\Tester> python tester.py
Start tester: Mon Feb ...
in C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
generating: .\Outputs\test-basic-args.out
generating: .\Outputs\test-basic-stdout.out
generating: .\Outputs\test-basic-streams.out
generating: .\Outputs\test-basic-this.out
ERROR status: test-errors-runtime.py 1
ERROR stream: test-errors-runtime.py .\Errors\test-errors-runtime.err
ERROR status: test-errors-syntax.py 1
ERROR stream: test-errors-syntax.py .\Errors\test-errors-syntax.err
ERROR status: test-status-bad.py ...
generating: .\Outputs\test-status-good.out
Finished: Mon Feb ...
8 tests were run, 3 tests failed.

To run each script, the tester configures any preset command-line arguments provided, pipes in fetched canned input (if any), and captures the script's standard output and error streams, along with its exit status code. When I ran this example, there were 8 test scripts, along with a variety of inputs and outputs. Since the directory and file naming structures are the key to this example, here is a listing of the test directory used--the Scripts directory is primary, because that's where tests to be run are collected:

C:\...\PP4E\System\Tester> dir /B
Args
Errors
Inputs
Outputs
Scripts
tester.py
xxold

C:\...\PP4E\System\Tester> dir /B Scripts
test-basic-args.py
test-basic-stdout.py
test-basic-streams.py
test-basic-this.py
test-errors-runtime.py
test-errors-syntax.py
test-status-bad.py
test-status-good.py

The other subdirectories contain any required inputs and any generated outputs associated with scripts to be tested:

C:\...\PP4E\System\Tester> dir /B Args
test-basic-args.args
test-status-good.args
C:\...\PP4E\System\Tester> dir /B Inputs
test-basic-args.in
test-basic-streams.in

C:\...\PP4E\System\Tester> dir /B Outputs
test-basic-args.out
test-basic-stdout.out
test-basic-streams.out
test-basic-this.out
test-errors-runtime.out.bad
test-errors-syntax.out.bad
test-status-bad.out.bad
test-status-good.out

C:\...\PP4E\System\Tester> dir /B Errors
test-errors-runtime.err
test-errors-syntax.err

I won't list all these files here (as you can see, there are many, and all are available in the book examples distribution package), but to give you the general flavor, here are the files associated with the test script test-basic-args.py:

C:\...\PP4E\System\Tester> type Scripts\test-basic-args.py
# test args, streams
import sys, os
print(os.getcwd())                       # to Outputs
print(sys.path[0])
print('[argv]')
for arg in sys.argv:                     # from Args, to Outputs
    print(arg)
print('[interaction]')                   # to Outputs
text = input('Enter text:')              # from Inputs
rept = sys.stdin.readline()              # from Inputs
sys.stdout.write(text * int(rept))       # to Outputs

C:\...\PP4E\System\Tester> type Args\test-basic-args.args
-command -line --stuff

C:\...\PP4E\System\Tester> type Inputs\test-basic-args.in
Eggs
10

C:\...\PP4E\System\Tester> type Outputs\test-basic-args.out
C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester\Scripts
[argv]
.\Scripts\test-basic-args.py
-command
-line
--stuff
[interaction]
Enter text:EggsEggsEggsEggsEggsEggsEggsEggsEggsEggs
Two files capture the details of each failed test: the first is the script's stderr, and the second is its stdout generated up to the point where the error occurred. These are for human (or other tools') inspection, and are automatically removed the next time the tester script runs:

C:\...\PP4E\System\Tester> type Errors\test-errors-runtime.err
Traceback (most recent call last):
  File ".\Scripts\test-errors-runtime.py", line ..., in <module>
    print(1 / 0)
ZeroDivisionError: int division or modulo by zero

C:\...\PP4E\System\Tester> type Outputs\test-errors-runtime.out.bad
starting

Now, when run again without making any changes to the tests, the test driver script compares saved prior outputs to new ones and detects no regressions. Failures designated by exit status and stderr messages are still reported as before, but there are no deviations from other tests' saved expected output:

C:\...\PP4E\System\Tester> python tester.py
Start tester: Mon Feb ...
in C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
passed: test-basic-args.py
passed: test-basic-stdout.py
passed: test-basic-streams.py
passed: test-basic-this.py
ERROR status: test-errors-runtime.py 1
ERROR stream: test-errors-runtime.py .\Errors\test-errors-runtime.err
ERROR status: test-errors-syntax.py 1
ERROR stream: test-errors-syntax.py .\Errors\test-errors-syntax.err
ERROR status: test-status-bad.py ...
passed: test-status-good.py
Finished: Mon Feb ...
8 tests were run, 3 tests failed.

But when I make a change in one of the test scripts that will produce different output (e.g., change a loop counter to print fewer lines), the regression is caught and reported. The new and different output of the script is reported as a failure, and saved in Outputs as a ".bad" for later viewing:

C:\...\PP4E\System\Tester> python tester.py
Start tester: Mon Feb ...
in C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
passed: test-basic-args.py
FAILED output: test-basic-stdout.py .\Outputs\test-basic-stdout.out.bad
passed: test-basic-streams.py
passed: test-basic-this.py
ERROR status: test-errors-runtime.py 1
ERROR stream: test-errors-runtime.py .\Errors\test-errors-runtime.err
ERROR status: test-errors-syntax.py 1
ERROR stream: test-errors-syntax.py .\Errors\test-errors-syntax.err
ERROR status: test-status-bad.py ...
passed: test-status-good.py
Finished: Mon Feb ...
8 tests were run, 4 tests failed.
C:\...\PP4E\System\Tester> type Outputs\test-basic-stdout.out.bad
begin
Spam
Spam!Spam
Spam!Spam!Spam
Spam!Spam!Spam!Spam
end

One last usage note: if you change the trace variable in this script to be verbose, you'll get much more output designed to help you trace the program's operation (but probably too much for real testing runs):

C:\...\PP4E\System\Tester> tester.py
Start tester: Mon Feb ...
in C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
--------------------------------------------------------------------------------
C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Tester
.\Scripts\test-basic-args.py
.\Scripts\test-basic-stdout.py
...more lines deleted...
--------------------------------------------------------------------------------
C:\Python31\python.exe .\Scripts\test-basic-args.py -command -line --stuff
b'Eggs\n10\n'
--------------------------------------------------------------------------------
b'C:\\Users\\mark\\Stuff\\Books\\4E\\PP4E\\dev\\Examples\\PP4E\\System\\Tester\r\n
C:\\Users\\mark\\Stuff\\Books\\4E\\PP4E\\dev\\Examples\\PP4E\\System\\Tester\\Scripts\r\n
[argv]\r\n.\\Scripts\\test-basic-args.py\r\n-command\r\n-line\r\n--stuff\r\n
[interaction]\r\nEnter text:EggsEggsEggsEggsEggsEggsEggsEggsEggsEggs'
b''
0
passed: test-basic-args.py
...more lines deleted...

Study the test driver's code for more details. Naturally, there is much more to the general testing story than we have space for here. For example, in-process tests don't need to spawn programs and can generally make do with importing modules and testing them in try/except handler statements. There is also ample room for expansion and customization in our testing script (see its docstring for starters). Moreover, Python comes with two testing frameworks, doctest and unittest (a.k.a. PyUnit), which provide techniques and structures for coding regression and unit tests:

unittest
An object-oriented framework that specifies test cases, expected results, and test suites. Subclasses provide test methods and use inherited assertion calls to specify expected results.

doctest
Parses out and reruns tests from an interactive session log that is pasted into a module's docstrings. The logs give test calls and expected results; doctest essentially reruns the interactive session.
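For a taste of the second of these, here is a tiny self-contained doctest sketch; the square function and its module are made up for illustration, but doctest.testmod is the standard library's API:

def square(x):
    """
    >>> square(3)
    9
    >>> square(-4)
    16
    """
    return x ** 2

if __name__ == '__main__':
    import doctest
    doctest.testmod()        # reruns the docstring's session; silent if all pass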
See the Python library manual, the PyPI website, and your favorite web search engine for additional testing toolkits in both Python itself and the third-party domain.

For automated testing of Python command-line scripts that run as independent programs and tap into standard script execution context, though, our tester does the job. Because the test driver is fully independent of the scripts it tests, we can drop in new test cases without having to update the driver's code. And because it is written in Python, it's quick and easy to change as our testing needs evolve. As we'll see again in the next section, this "scriptability" that Python provides can be a decided advantage for real tasks.

Testing Gone Bad?

Once we learn about sending email from Python scripts later in this book, you might also want to augment this script to automatically send out email when regularly run tests fail (e.g., when run from a cron job on Unix). That way, you don't even need to remember to check results. Of course, you could go further still.

One company I worked for added sound effects to compiler test scripts; you got an audible round of applause if no regressions were found, and an entirely different noise otherwise (see playfile.py at the end of this chapter for hints).

Another company in my development past ran a nightly test script that automatically isolated the source code file check-in that triggered a test regression and sent a nasty email to the guilty party (and his or her supervisor). Nobody expects the Spanish Inquisition!

Copying Directory Trees

My CD writer sometimes does weird things. In fact, copies of files with odd names can be totally botched on the CD, even though other files show up in one piece. That's not necessarily a showstopper; if just a few files are trashed in a big CD backup copy, I can always copy the offending files elsewhere one at a time. Unfortunately, drag-and-drop copies on some versions of Windows don't play nicely with such a CD: the copy operation stops and exits the moment the first bad file is encountered. You get only as many files as were copied up to the error, but no more.

In fact, this is not limited to CD copies. I've run into similar problems when trying to back up my laptop's hard drive to another drive--the drag-and-drop copy stops with an error as soon as it reaches a file with a name that is too long or odd to copy (common in saved web pages).
This is frustrating, to say the least. There may be some magical Windows setting to work around this feature, but I gave up hunting for one as soon as I realized that it would be easier to code a copier in Python. The cpall.py script in the next example is one way to do it. With this script, I control what happens when bad files are found--I can skip over them with Python exception handlers, for instance. Moreover, this tool works with the same interface and effect on other platforms. It seems to me, at least, that a few minutes spent writing a portable and reusable Python script to meet a need is a better investment than looking for solutions that work on only one platform (if at all).

Example: PP4E\System\Filetools\cpall.py

"""
################################################################################
Usage: "python cpall.py dirFrom dirTo".
Recursive copy of a directory tree.  Works like a "cp -r dirFrom/* dirTo"
Unix command, and assumes that dirFrom and dirTo are both directories.
Was written to get around fatal error messages under Windows drag-and-drop
copies (the first bad file ends the entire copy operation immediately),
but also allows for coding more customized copy operations in Python.
################################################################################
"""

import os, sys
maxfileload = 1000000
blksize = 1024 * 500

def copyfile(pathFrom, pathTo, maxfileload=maxfileload):
    """
    Copy one file pathFrom to pathTo, byte for byte;
    uses binary file modes to suppress Unicode decode and endline transform
    """
    if os.path.getsize(pathFrom) <= maxfileload:
        bytesFrom = open(pathFrom, 'rb').read()   # read small file all at once
        open(pathTo, 'wb').write(bytesFrom)
    else:                                         # read big files in chunks
        fileFrom = open(pathFrom, 'rb')
        fileTo   = open(pathTo,   'wb')           # need b mode for both
        while True:
            bytesFrom = fileFrom.read(blksize)    # get one block, less at end
            if not bytesFrom: break               # empty after last chunk
            fileTo.write(bytesFrom)

def copytree(dirFrom, dirTo, verbose=0):
    """
    Copy contents of dirFrom and below to dirTo, return (files, dirs) counts;
    may need to use bytes for dirnames if undecodable on other platforms;
    may need to do more file type checking on Unix: skip links, fifos, etc.
    """
    fcount = dcount = 0
    for filename in os.listdir(dirFrom):          # for files/dirs here
        pathFrom = os.path.join(dirFrom, filename)
        pathTo = os.path.join(dirTo, filename)         # extend both paths
        if not os.path.isdir(pathFrom):                # copy simple files
            try:
                if verbose > 1: print('copying', pathFrom, 'to', pathTo)
                copyfile(pathFrom, pathTo)
                fcount += 1
            except:
                print('Error copying', pathFrom, 'to', pathTo, '--skipped')
                print(sys.exc_info()[0], sys.exc_info()[1])
        else:
            if verbose: print('copying dir', pathFrom, 'to', pathTo)
            try:
                os.mkdir(pathTo)                       # make new subdir below
                below = copytree(pathFrom, pathTo)     # recur into subdirs
                fcount += below[0]                     # add subdir counts
                dcount += below[1]
                dcount += 1
            except:
                print('Error creating', pathTo, '--skipped')
                print(sys.exc_info()[0], sys.exc_info()[1])
    return (fcount, dcount)

def getargs():
    """
    Get and verify directory name arguments, returns default None on errors
    """
    try:
        dirFrom, dirTo = sys.argv[1:]
    except:
        print('Usage error: cpall.py dirFrom dirTo')
    else:
        if not os.path.isdir(dirFrom):
            print('Error: dirFrom is not a directory')
        elif not os.path.exists(dirTo):
            os.mkdir(dirTo)
            print('Note: dirTo was created')
            return (dirFrom, dirTo)
        else:
            print('Warning: dirTo already exists')
            if hasattr(os.path, 'samefile'):
                same = os.path.samefile(dirFrom, dirTo)
            else:
                same = os.path.abspath(dirFrom) == os.path.abspath(dirTo)
            if same:
                print('Error: dirFrom same as dirTo')
            else:
                return (dirFrom, dirTo)

if __name__ == '__main__':
    import time
    dirstuple = getargs()
    if dirstuple:
        print('Copying...')
        start = time.clock()
        fcount, dcount = copytree(*dirstuple)
        print('Copied', fcount, 'files,', dcount, 'directories', end=' ')
        print('in', time.clock() - start, 'seconds')

This script implements its own recursive tree traversal logic and keeps track of both the "from" and "to" directory paths as it goes. At every level, it copies over simple files, creates directories in the "to" path, and recurs into subdirectories with "from" and "to" paths extended by one level. There are other ways to code this task (e.g., we might change the working directory along the way with os.chdir calls, or there is probably an os.walk solution which replaces from and to path prefixes as it walks), but extending paths on recursive descent works well in this script.

Notice this script's reusable copyfile function--just in case there are multigigabyte files in the tree to be copied, it uses a file's size to decide whether it should be read all at once or in chunks (remember, the file read method without arguments actually loads the entire file into an in-memory string). We choose fairly large file and block sizes, because the more we read at once in Python, the faster our scripts will typically run. This is more efficient than it may sound: strings left behind by prior reads will be garbage collected and reused as we go. We're using binary file modes here again, too, to suppress the Unicode encodings and end-of-line translations of text files--trees may contain arbitrary kinds of files.

Also notice that this script creates the "to" directory if needed, but it assumes that the directory is empty when a copy starts up; for accuracy, be sure to remove the target directory before copying a new tree to its name, or old files may linger in the target tree (we could automatically remove the target first, but this may not always be desired). This script also tries to determine if the source and target are the same: on Unix-like platforms with oddities such as links, os.path.samefile does a more accurate job than comparing absolute file names (different file names may be the same file).
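As mentioned above, an os.walk-based coding of the copy is also possible. The following is a hedged sketch of that alternative, not the book's script; it leans on os.path.relpath and shutil.copyfile from the standard library, and skips bad files the same way:

import os, shutil

def copytree_walk(dirfrom, dirto):
    for (dirhere, subshere, fileshere) in os.walk(dirfrom):
        # map this level's "from" path to the corresponding "to" path
        tohere = os.path.normpath(
            os.path.join(dirto, os.path.relpath(dirhere, dirfrom)))
        try:
            os.makedirs(tohere)                # make target level if needed
        except OSError:
            pass                               # already present
        for filename in fileshere:
            try:
                shutil.copyfile(os.path.join(dirhere, filename),
                                os.path.join(tohere, filename))
            except OSError:
                print('Error copying', filename, '--skipped')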
Here is a copy of a big book examples tree (I use the tree from the prior edition throughout this chapter) in action on Windows. Pass in the names of the "from" and "to" directories to kick off the process, redirect the output to a file if there are too many error messages to read all at once (e.g., > output.txt), and run an rm -r or rmdir /S shell command (or similar platform-specific tool) to delete the target directory first if needed:

C:\...\PP4E\System\Filetools> rmdir /S copytemp
copytemp, Are you sure (Y/N)? y

C:\...\PP4E\System\Filetools> cpall.py C:\temp\PP3E\Examples copytemp
Note: dirTo was created
Copying...
Copied ... files, ... directories in ... seconds

C:\...\PP4E\System\Filetools> fc /B copytemp\PP3E\Launcher.py
                                    C:\temp\PP3E\Examples\PP3E\Launcher.py
Comparing files copytemp\PP3E\Launcher.py and C:\TEMP\PP3E\EXAMPLES\PP3E\LAUNCHER.PY
FC: no differences encountered

At the time I wrote this edition, this test run copied a tree of more than a thousand files and directories in a matter of seconds on my woefully underpowered netbook machine (the built-in time.clock call is used to query the system time in seconds); it may run arbitrarily faster or slower for you. Still, this is at least as fast as the best drag-and-drop I've timed on this machine.

So how does this script work around bad files on a CD backup? The secret is that it catches and ignores file exceptions, and it keeps walking. To copy all the files that are good on a CD, I simply run a command line such as this one:

C:\...\PP4E\System\Filetools> python cpall.py G:\Examples C:\PP3E\Examples

Because the CD is addressed by its own drive letter on my Windows machine ("G:" here), this is the command-line equivalent of drag-and-drop copying from an item in the CD's top-level folder, except that the Python script will recover from errors on the CD and get the rest. On copy errors, it prints a message to standard output and continues; for big copies, you'll probably want to redirect the script's output to a file for later inspection.

In general, cpall can be passed any absolute directory path on your machine, even those that indicate devices such as CDs. To make this go on Linux, try a root directory such as /dev/cdrom or something similar to address your CD drive. Once you've copied a tree this way, you still might want to verify; to see how, let's move on to the next example.

Comparing Directory Trees

Engineers can be a paranoid sort (but you didn't hear that from me). At least I am. It comes from decades of seeing things go terribly wrong, I suppose. When I create a CD backup of my hard drive, for instance, there's still something a bit too magical about the process to trust the CD writer program to do the right thing. Maybe I should, but it's tough to have a lot of faith in tools that occasionally trash files and seem to crash my Windows machine every third Tuesday of the month. When push comes to shove, it's nice to be able to verify that data copied to a backup CD is the same as the original--or at least to spot deviations from the original--as soon as possible. If a backup is ever needed, it will be really needed.

Because data CDs are accessible as simple directory trees in the file system, we are once again in the realm of tree walkers--to verify a backup CD, we simply need to walk its top-level directory. If our script is general enough, we will also be able to use it to verify other copy operations as well--e.g., downloaded tar files, hard-drive backups, and so on. We've already studied generic directory tree walkers, but they won't help us here directly: we need to walk two directories in parallel and inspect common files along the way. Moreover, walking either one of the two directories won't allow us to spot files and directories that exist only in the other. Something more custom and recursive seems in order here.
Finding Directory Differences

Before we start coding, the first thing we need to clarify is what it means to compare two directory trees. If both trees have exactly the same branch structure and depth, this problem reduces to comparing corresponding files in each tree. In general, though, the trees can have arbitrarily different shapes, depths, and so on.

More generally, the contents of a directory in one tree may have more or fewer entries than the corresponding directory in the other tree. If those differing contents are filenames, there is no corresponding file to compare with; if they are directory names, there is no corresponding branch to descend through. In fact, the only way to detect files and directories that appear in one tree but not the other is to detect differences in each level's directory. In other words, a tree comparison algorithm will also have to perform directory comparisons along the way. Because this is a nested and simpler operation, let's start by coding and debugging a single-directory comparison of filenames in the example below.

Example: PP4E\System\Filetools\dirdiff.py

"""
################################################################################
Usage: python dirdiff.py dir1-path dir2-path
Compare two directories to find files that exist in one but not the other.
This version uses the os.listdir function and list difference.  Note that
this script checks only filenames, not file contents--see diffall.py for an
extension that does the latter by comparing .read() results.
################################################################################
"""

import os, sys

def reportdiffs(unique1, unique2, dir1, dir2):
    """
    Generate diffs report for one dir: part of comparedirs output
    """
    if not (unique1 or unique2):
        print('Directory lists are identical')
    else:
        if unique1:
            print('Files unique to', dir1)
            for file in unique1:
                print('...', file)
        if unique2:
            print('Files unique to', dir2)
            for file in unique2:
                print('...', file)
def difference(seq1, seq2):
    """
    Return all items in seq1 only;
    a set(seq1) - set(seq2) would work too, but sets are randomly
    ordered, so any platform-dependent directory order would be lost
    """
    return [item for item in seq1 if item not in seq2]

def comparedirs(dir1, dir2, files1=None, files2=None):
    """
    Compare directory contents, but not actual files;
    may need bytes listdir arg for undecodable filenames on some platforms
    """
    print('Comparing', dir1, 'to', dir2)
    files1  = os.listdir(dir1) if files1 is None else files1
    files2  = os.listdir(dir2) if files2 is None else files2
    unique1 = difference(files1, files2)
    unique2 = difference(files2, files1)
    reportdiffs(unique1, unique2, dir1, dir2)
    return not (unique1 or unique2)               # true if no diffs

def getargs():
    "Args for command-line mode"
    try:
        dir1, dir2 = sys.argv[1:]                 # 2 command-line args
    except:
        print('Usage: dirdiff.py dir1 dir2')
        sys.exit(1)
    else:
        return (dir1, dir2)

if __name__ == '__main__':
    dir1, dir2 = getargs()
    comparedirs(dir1, dir2)

Given listings of names in two directories, this script simply picks out unique names in the first and unique names in the second, and reports any unique names found as differences (that is, files in one directory but not the other). Its comparedirs function returns a True result if no differences were found, which is useful for detecting differences in callers.

Let's run this script on a few directories; differences are detected and reported as names unique in either passed-in directory pathname. Notice that this is only a structural comparison that just checks names in listings, not file contents (we'll add the latter in a moment):

C:\...\PP4E\System\Filetools> dirdiff.py C:\temp\PP3E\Examples copytemp
Comparing C:\temp\PP3E\Examples to copytemp
Directory lists are identical

C:\...\PP4E\System\Filetools> dirdiff.py C:\temp\PP3E\Examples\PP3E\System ..
Comparing C:\temp\PP3E\Examples\PP3E\System to ..
Files unique to C:\temp\PP3E\Examples\PP3E\System
... App
... Media
... moreplus.py
Files unique to ..
... more.pyc
... spam.txt
... Tester
... __init__.pyc
media moreplus py files unique to more pyc spam txt tester __init__ pyc the unique function is the heart of this scriptit performs simple list difference operation when applied to directoriesunique items represent tree differencesand common items are names of files or subdirectories that merit further comparisons or traversals in factin python and laterwe could also use the built-in set object type if we don' care about the order in the results--because sets are not sequencesthey would not maintain any original and possibly platform-specific left-to-right order of the directory listings provided by os listdir for that reason (and to avoid requiring users to upgrade)we'll keep using our own comprehension-based function instead of sets finding tree differences we've just coded directory comparison tool that picks out unique files and directories now all we need is tree walker that applies dirdiff at each level to report unique itemsexplicitly compares the contents of files in commonand descends through directories in common example - fits the bill example - pp \system\filetools\diffall py ""###############################################################################usage"python diffall py dir dir recursive directory tree comparisonreport unique files that exist in only dir or dir report files of the same name in dir and dir with differing contentsreport instances of same name but different type in dir and dir and do the same for all subdirectories of the same names in and below dir and dir summary of diffs appears at end of outputbut search redirected output for "diffand "uniquestrings for further details new( elimit reads to for large files( ecatch same name=file/dir( eavoid extra os listdir(calls in dirdiff comparedirs(by passing results here along ###############################################################################""import osdirdiff blocksize up to per read def intersect(seq seq )""return all items in both seq and seq set(seq set(seq woud work toobut sets are randomly orderedso any platform-dependent directory order would be lost ""return [item for item in seq if item in seq comparing directory trees
""compare all subdirectories and files in two directory treesuses binary files to prevent unicode decoding and endline transformsas trees might contain arbitrary binary files as well as arbitrary textmay need bytes listdir arg for undecodable filenames on some platforms ""compare file name lists print('- names os listdir(dir names os listdir(dir if not dirdiff comparedirs(dir dir names names )diffs append('unique files at % % (dir dir )print('comparing contents'common intersect(names names missed common[:compare contents of files in common for name in commonpath os path join(dir namepath os path join(dir nameif os path isfile(path and os path isfile(path )missed remove(namefile open(path 'rb'file open(path 'rb'while truebytes file read(blocksizebytes file read(blocksizeif (not bytes and (not bytes )if verboseprint(name'matches'break if bytes !bytes diffs append('files differ at % % (path path )print(name'differs'break recur to compare directories in common for name in commonpath os path join(dir namepath os path join(dir nameif os path isdir(path and os path isdir(path )missed remove(namecomparetrees(path path diffsverbosesame name but not both files or dirsfor name in misseddiffs append('files missed at % % % (dir dir name)print(name'differs'if __name__ ='__main__'dir dir dirdiff getargs(diffs [ complete system programs
        comparetrees(dir1, dir2, diffs, True)        # changes diffs in-place
        print('=' * 40)                              # walk, report diffs list
        if not diffs:
            print('No diffs found.')
        else:
            print('Diffs found:', len(diffs))
            for diff in diffs: print('-', diff)

At each directory in the tree, this script simply runs the dirdiff tool to detect unique names, and then compares names in common by intersecting directory lists. It uses recursive function calls to traverse the tree and visits subdirectories only after comparing all the files at each level, so that the output is more coherent to read (the trace output for subdirectories appears after that for files; it is not intermixed).

Notice the missed list, added in the third edition of this book: it's very unlikely, but not impossible, that the same name might be a file in one directory and a subdirectory in the other. Also notice the blocksize variable; much like the tree copy script we saw earlier, instead of blindly reading entire files into memory all at once, we limit each read to grab up to 1 MB at a time, just in case any files in the directories are too big to be loaded into available memory. Without this limit, I ran into MemoryError exceptions on some machines with a prior version of this script that read both files all at once, like this:

    bytes1 = open(path1, 'rb').read()
    bytes2 = open(path2, 'rb').read()
    if bytes1 == bytes2: ...

This code was simpler, but is less practical for very large files that can't fit into your available memory space (consider CD and DVD image files, for example). In the new version's loop, the file reads return what is left when there is less than 1 MB present or remaining, and return empty strings at end-of-file. Files match if all blocks read are the same and they reach end-of-file at the same time.

We're also dealing in binary files and byte strings again, to suppress Unicode decoding and end-line translations for file content, because trees may contain arbitrary binary and text files. The usual note about changing this to pass byte strings to os.listdir on platforms where filenames may generate Unicode decoding errors applies here as well (e.g., pass dir1.encode()). On some platforms, you may also want to detect and skip certain kinds of special files in order to be fully general, but these were not in my trees, so they are not in my script.

One minor change for the fourth edition of this book: os.listdir results are now gathered just once per subdirectory and passed along, to avoid extra calls in dirdiff--not a huge win, but every cycle counts on the pitifully underpowered netbook used when writing this edition.
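To distill the chunked-read idea into a standalone form, here is a minimal sketch; the samecontent function and its size precheck are my own illustration, not part of the book's scripts. It reports whether two files' bytes match while never holding more than one block of each in memory, and it skips the read loop entirely when sizes differ--an optimization the text returns to at the end of this section:

    import os

    def samecontent(path1, path2, blocksize=1024 * 1024):
        """
        True if the two files have byte-for-byte identical content,
        reading at most one block of each file into memory at a time
        """
        if os.path.getsize(path1) != os.path.getsize(path2):
            return False                       # sizes differ: cannot match
        with open(path1, 'rb') as file1, open(path2, 'rb') as file2:
            while True:
                bytes1 = file1.read(blocksize)
                bytes2 = file2.read(blocksize)
                if not bytes1 and not bytes2:
                    return True                # both at EOF: all blocks matched
                if bytes1 != bytes2:
                    return False               # block mismatch: files differ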
Since we've already studied the tree-walking tools this script employs, let's jump right into a few example runs. When run on identical trees, status messages scroll during the traversal, and a "No diffs found." message appears at the end:

    C:\...\PP4E\System\Filetools> diffall.py C:\temp\PP3E\Examples copytemp > diffs.txt
    C:\...\PP4E\System\Filetools> type diffs.txt | more
    Comparing C:\temp\PP3E\Examples to copytemp
    Directory lists are identical
    Comparing contents
    README-root.txt matches
    Comparing C:\temp\PP3E\Examples\PP3E to copytemp\PP3E
    Directory lists are identical
    Comparing contents
    echoEnvironment.pyw matches
    LaunchBrowser.pyw matches
    Launcher.py matches
    Launcher.pyc matches
    ...many more lines omitted...
    Comparing C:\temp\PP3E\Examples\PP3E\TempParts to copytemp\PP3E\TempParts
    Directory lists are identical
    Comparing contents
    109_0237.JPG matches
    lawnlake1-jan-03.jpg matches
    part-001.txt matches
    part-002.html matches
    ========================================
    No diffs found.

I usually run this with the verbose flag passed in as True and redirect output to a file (for big trees, it produces too much output to scroll through comfortably); use False to watch fewer status messages fly by.

To show how differences are reported, we need to generate a few. For simplicity, I'll manually change a few files scattered about one of the trees, but you could also run a global search-and-replace script like the one we'll write later in this chapter. While we're at it, let's remove a few common files so that directory uniqueness differences show up on the scope, too; the last two removal commands in the following will generate one difference in the same directory in different trees:

    C:\...\PP4E\System\Filetools> notepad copytemp\PP3E\README-PP3E.txt
    C:\...\PP4E\System\Filetools> notepad copytemp\PP3E\System\Filetools\commands.py
    C:\...\PP4E\System\Filetools> notepad C:\temp\PP3E\Examples\PP3E\__init__.py
    C:\...\PP4E\System\Filetools> del copytemp\PP3E\System\Filetools\cpall_visitor.py
    C:\...\PP4E\System\Filetools> del copytemp\PP3E\Launcher.py
    C:\...\PP4E\System\Filetools> del C:\temp\PP3E\Examples\PP3E\PyGadgets.py

Now, rerun the comparison walker to pick out differences and redirect its output report to a file for easy inspection. The following lists just the parts of the output report that matter most--I inspect the summary at the end of the report
first, and then search for the strings "diff" and "unique" in the report's text if I need more information about the differences summarized. This interface could be much more user-friendly, of course, but it does the job for me:

    C:\...\PP4E\System\Filetools> diffall.py C:\temp\PP3E\Examples copytemp > diff.txt
    C:\...\PP4E\System\Filetools> notepad diff.txt
    Comparing C:\temp\PP3E\Examples to copytemp
    Directory lists are identical
    Comparing contents
    README-root.txt matches
    Comparing C:\temp\PP3E\Examples\PP3E to copytemp\PP3E
    Files unique to C:\temp\PP3E\Examples\PP3E
    ... Launcher.py
    Files unique to copytemp\PP3E
    ... PyGadgets.py
    Comparing contents
    echoEnvironment.pyw matches
    LaunchBrowser.pyw matches
    Launcher.pyc matches
    ...more omitted...
    PyGadgets_bar.pyw matches
    README-PP3E.txt DIFFERS
    todos.py matches
    tounix.py matches
    __init__.py DIFFERS
    __init__.pyc matches
    Comparing C:\temp\PP3E\Examples\PP3E\System\Filetools to copytemp\PP3E\System\Filetools
    Files unique to C:\temp\PP3E\Examples\PP3E\System\Filetools
    ... cpall_visitor.py
    Comparing contents
    commands.py DIFFERS
    cpall.py matches
    ...more omitted...
    Comparing C:\temp\PP3E\Examples\PP3E\TempParts to copytemp\PP3E\TempParts
    Directory lists are identical
    Comparing contents
    109_0237.JPG matches
    lawnlake1-jan-03.jpg matches
    part-001.txt matches
    part-002.html matches
    ========================================
    Diffs found: 5
    - unique files at C:\temp\PP3E\Examples\PP3E - copytemp\PP3E
    - files differ at C:\temp\PP3E\Examples\PP3E\README-PP3E.txt -
         copytemp\PP3E\README-PP3E.txt
    - files differ at C:\temp\PP3E\Examples\PP3E\__init__.py -
         copytemp\PP3E\__init__.py
    - unique files at C:\temp\PP3E\Examples\PP3E\System\Filetools -
         copytemp\PP3E\System\Filetools
    - files differ at C:\temp\PP3E\Examples\PP3E\System\Filetools\commands.py -
         copytemp\PP3E\System\Filetools\commands.py

I added line breaks and tabs in a few of these output lines to make them fit on this page, but the report is simple to understand. Out of this entire tree of files and directories, we found five differences--the three files we changed by edits, and the two directories we threw out of sync with the three removal commands.

Verifying Backups

So how does this script placate CD backup paranoia? To double-check my CD writer's work, I run a command such as the following. I can also use a command like this to find out what has been changed since the last backup. Again, since the CD is "G:" on my machine when plugged in, I provide a path rooted there; use a root such as /dev/cdrom or /mnt/cdrom on Linux:

    C:\...\PP4E\System\Filetools> python diffall.py Examples G:\PP4E\Examples > diff.txt
    C:\...\PP4E\System\Filetools> more diff.txt
    ...output omitted...

The CD spins, the script compares, and a summary of differences appears at the end of the report. For an example of a full difference report, see the diff.txt files in the book's examples distribution package. And to be really sure, I run the following global comparison command to verify the entire book development tree backed up to a memory stick (which works just like a CD in terms of the filesystem):

    C:\...\PP4E\System\Filetools> diffall.py F:\writing-backups\feb-26-10\dev
                                  C:\Users\mark\Stuff\Books\4E\PP4E\dev > diff.txt
    C:\...\PP4E\System\Filetools> more diff.txt
    Comparing F:\writing-backups\feb-26-10\dev to C:\Users\mark\Stuff\Books\4E\PP4E\dev
    Directory lists are identical
    Comparing contents
    ch00.doc DIFFERS
    ch01.doc matches
    ch02.doc DIFFERS
    ch03.doc matches
    ch04.doc DIFFERS
    ch05.doc matches
    ch06.doc DIFFERS
    ...more output omitted...
    Comparing F:\writing-backups\feb-26-10\dev\Examples\PP4E\System\Filetools to C:\...
    Files unique to C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Filetools
    ... copytemp
    ... cpall.py
    ... diff.txt
    ... diff2.txt
    ... diffall.py
    ... diffs.txt
    ... dirdiff.py
    ... dirdiff.pyc
    Comparing contents
    bigext-tree.py matches
    bigpy-dir.py matches
    ...more output omitted...
    ========================================
    Diffs found: 7
    - files differ at F:\writing-backups\feb-26-10\dev\ch00.doc -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\ch00.doc
    - files differ at F:\writing-backups\feb-26-10\dev\ch02.doc -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\ch02.doc
    - files differ at F:\writing-backups\feb-26-10\dev\ch04.doc -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\ch04.doc
    - files differ at F:\writing-backups\feb-26-10\dev\ch06.doc -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\ch06.doc
    - files differ at F:\writing-backups\feb-26-10\dev\TOC.txt -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\TOC.txt
    - unique files at F:\writing-backups\feb-26-10\dev\Examples\PP4E\System\Filetools -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\System\Filetools
    - files differ at F:\writing-backups\feb-26-10\dev\Examples\PP4E\Tools\visitor.py -
         C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\Tools\visitor.py

This particular run indicates that I've added a few examples and changed some files since the last backup; if run immediately after a backup, nothing should show up on the diffall radar except for any files that cannot be copied in general. This global comparison can take a few minutes--it performs byte-for-byte comparisons of all files and screenshots, the examples tree, and more--but it's an accurate and complete verification. Given that this book development tree contained many files, a more manual verification procedure without Python's help would be utterly impossible.

After writing this script, I also started using it to verify full automated backups of my laptops onto an external hard-drive device. To do so, I run the cpall copy script we wrote earlier in the preceding section of this chapter, and then the comparison script developed here, to check results and get a list of files that didn't copy correctly. The last time I did this, the procedure copied and compared many thousands of files and directories spanning gigabytes of space--not the sort of task that lends itself to manual labor! Here are the magic incantations on my Windows laptop; E: is a partition on my external hard drive, and you shouldn't be surprised if each of these commands runs for half an hour or more on currently common hardware. A drag-and-drop copy takes at least as long (assuming it works at all!):

    C:\...\PP4E\System\Filetools> cpall.py C:\ E:\ > E:\copy-log.txt
    C:\...\PP4E\System\Filetools> diffall.py C:\ E:\ > E:\diff-log.txt

Reporting Differences and Other Ideas

Finally, it's worth noting that this script still only detects differences in the tree but does not give any further details about individual file differences. In fact, it simply loads and compares the binary contents of corresponding files with string comparisons. It's a simple yes/no result: to see what actually differs in flagged files, you must
edit the files or run the file-comparison command on the host platform (e.g., fc on Windows/DOS; diff or cmp on Unix and Linux). That's not a portable solution for this last step, but for my purposes, just finding the differences in a large tree was much more critical than reporting which lines differ in the files flagged in the report.

Of course, since we can always run shell commands in Python, this last step could be automated by spawning a diff or fc command with os.popen as differences are encountered (or after the traversal, by scanning the report summary). The output of these system calls could be displayed verbatim, or parsed for relevant parts.
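As a rough sketch of that automation, the following hypothetical helper scans the diffs summary list built by comparetrees and spawns a shell diff for each flagged file pair, echoing its output verbatim. It is my own illustration, not part of the book's script: it assumes a Unix-like diff command is on your search path (substitute fc on Windows), and its naive parsing assumes pathnames don't themselves contain " - ":

    import os

    def showdiffs(diffs):
        """
        for each 'files differ at path1 - path2' entry collected by
        comparetrees, run a shell diff and display its output verbatim
        """
        for entry in diffs:
            if entry.startswith('files differ at'):
                pair = entry[len('files differ at '):]
                path1, path2 = pair.split(' - ')       # naive: paths lack ' - '
                print('-' * 40, entry)
                for line in os.popen('diff "%s" "%s"' % (path1, path2)):
                    print(line, end='')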
We also might try to do a bit better here by opening true text files in text mode to ignore line-terminator differences caused by transferring across platforms, but it's not clear that such differences should be ignored (what if the caller wants to know whether line-end markers have been changed?).

For example, after downloading a website with an FTP script of the sort we'll meet later in this book, the diffall script detected a discrepancy between the local copy of a file and the one at the remote server. To probe further, I simply ran some interactive Python code:

    >>> a = open('lp-updates.html', 'rb').read()
    >>> b = open(r'C:\Mark\WEBSITE\public_html\lp-updates.html', 'rb').read()
    >>> a == b
    False

This verifies that there really is a binary difference in the downloaded and local versions of the file. To see whether it's because a Unix or DOS line end snuck into the file, try again in text mode so that line ends are all mapped to the standard \n character:

    >>> a = open('lp-updates.html', 'r').read()
    >>> b = open(r'C:\Mark\WEBSITE\public_html\lp-updates.html', 'r').read()
    >>> a == b
    True

Sure enough. Now, to find where the difference is, the following code checks character by character until the first mismatch is found (in binary mode, so we retain the difference):

    >>> a = open('lp-updates.html', 'rb').read()
    >>> b = open(r'C:\Mark\WEBSITE\public_html\lp-updates.html', 'rb').read()
    >>> for (i, (ac, bc)) in enumerate(zip(a, b)):
    ...     if ac != bc:
    ...         print(i, repr(ac), repr(bc))
    ...         break
    ...
    ... '\r' '\n'

This means that at the reported byte offset, there is a \r in the downloaded file, but a \n in the local copy: this line has a DOS line end in one and a Unix line end in the other. To see more, print text around the mismatch:

    >>> for (i, (ac, bc)) in enumerate(zip(a, b)):
    ...     if ac != bc:
    ...         print(i, repr(ac), repr(bc))
    ...         print(repr(a[i-20:i+20]))
    ...         print(repr(b[i-20:i+20]))
    ...         break
    ...
    ... '\r' '\n'
    're>\r\ndef min(*args):\r\n    tmp = list(arg'
    're>\ndef min(*args):\n    tmp = list(args'

Apparently, I wound up with a Unix line end at one point in the local copy and a DOS line end in the version downloaded--the combined effect of the text mode used by the download script itself (which translated \n to \r\n) and years of edits on both Linux and Windows PDAs and laptops (I probably coded this change on Linux and copied it to my local Windows copy in binary mode).

Code such as this could be integrated into the diffall script to make it more intelligent about text files and difference reporting. Because Python excels at processing files and strings, it's even possible to go one step further and code a Python equivalent of the fc and diff commands. In fact, much of the work has already been done: the standard library module difflib could make this task simple. See the Python library manual for details and usage examples.
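For instance, a sketch along the following lines could report line-level differences for two text files. difflib and its unified_diff generator are real standard library tools, but the textdiffs wrapper itself is my own illustration; it assumes the files are text and small enough to read in full:

    import difflib

    def textdiffs(path1, path2):
        """
        print a unified-diff style report of line differences
        between two text files, using the standard difflib module
        """
        lines1 = open(path1).readlines()
        lines2 = open(path2).readlines()
        for line in difflib.unified_diff(lines1, lines2,
                                         fromfile=path1, tofile=path2):
            print(line, end='')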
We could also be smarter by avoiding the load and compare steps for files that differ in size, and we might use a smaller block size to reduce the script's memory requirements. For most trees, such optimizations are unnecessary; reading multimegabyte files into strings is very fast in Python, and garbage collection reclaims the space as you go. Since such extensions are beyond both this script's scope and this chapter's size limits, though, they will have to await the attention of a curious reader (this book doesn't have formal exercises, but that almost sounds like one, doesn't it?). For now, let's move on to explore ways to code one more common directory task: search.

Searching Directory Trees

Engineers love to change things. As I was writing this book, I found it almost irresistible to move and rename directories, variables, and shared modules in the book examples tree whenever I thought I'd stumbled onto a more coherent structure. That was fine early on, but as the tree became more intertwined, this became a maintenance nightmare. Things such as program directory paths and module names were hardcoded all over the place--in package import statements, program startup calls, text notes, configuration files, and more.

One way to repair these references, of course, is to edit every file in the directory by hand, searching each for information that has changed. That's so tedious as to be utterly impossible in this book's examples tree, though; the examples of the prior edition alone contained hundreds of directories and over a thousand files. Clearly, I needed a way to automate updates after such changes. There is a variety of ways to approach this task in Python--from shell operations, to custom tree walkers, to general-purpose frameworks. In this and the next section, we'll explore each option in turn, just as I did while refining solutions to this real-world dilemma.

Greps and Globs and Finds

If you work on Unix-like systems, you probably already know that there is a standard way to search files for strings on such platforms--the command-line program grep and its relatives list all lines in one or more files containing a string or string pattern.| Given that shells expand (i.e., "glob") filename patterns automatically, a command such as the following will search a single directory's Python files for a string named on the command line (this uses the grep command installed with the Cygwin Unix-like system for Windows that I described in the prior chapter):

| In fact, the act of searching files often goes by the colloquial name "grepping" among developers who have spent any substantial time in the Unix ghetto.

    C:\...\PP4E\System\Filetools> c:\cygwin\bin\grep.exe walk *.py
    bigext-tree.py:for (thisDir, subsHere, filesHere) in os.walk(dirname):
    bigpy-path.py:    for (thisDir, subsHere, filesHere) in os.walk(srcdir):
    bigpy-tree.py:for (thisDir, subsHere, filesHere) in os.walk(dirname):

As we've seen, we can often accomplish the same within a Python script by running such a shell command with os.system or os.popen. And if we search its results manually, we can also achieve similar results with the Python glob module we met earlier; it expands a filename pattern into a list of matching filename strings much like a shell:

    C:\...\PP4E\System\Filetools> python
    >>> import os
    >>> for line in os.popen(r'c:\cygwin\bin\grep.exe walk *.py'):
    ...     print(line, end='')
    ...
    bigext-tree.py:for (thisDir, subsHere, filesHere) in os.walk(dirname):
    bigpy-path.py:    for (thisDir, subsHere, filesHere) in os.walk(srcdir):
    bigpy-tree.py:for (thisDir, subsHere, filesHere) in os.walk(dirname):

    >>> from glob import glob
    >>> for filename in glob('*.py'):
    ...     if 'walk' in open(filename).read():
    ...         print(filename)
    ...
    bigext-tree.py
    bigpy-path.py
    bigpy-tree.py

Unfortunately, these tools are generally limited to a single directory. glob can visit multiple directories given the right sort of pattern string, but it's not a general directory walker of the sort I need to maintain a large examples tree.
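To illustrate that multiple-directory limitation, here is a quick sketch of the sort of pattern strings glob accepts; these pattern forms are standard glob usage, but note that each "*/" reaches exactly one level deeper, so covering an arbitrarily deep tree would require one pattern per level:

    from glob import glob

    here = glob('*.py')          # Python files in the current directory only
    one  = glob('*/*.py')        # files exactly one subdirectory level down
    two  = glob('*/*/*.py')      # two levels down, and so on per level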
On Unix-like systems, a find shell command can go the extra mile to traverse an entire directory tree. For instance, the following Unix command line pinpoints all Python files at and below the current directory that mention the string popen:

    find . -name "*.py" -print -exec fgrep popen {} \;

If you happen to have a Unix-like find command on every machine you will ever use, this is one way to process directories.

Rolling Your Own find Module

But if you don't happen to have a Unix find on all your computers, not to worry--it's easy to code a portable one in Python. Python itself used to have a find module in its standard library, which I used frequently in the past. Although that module was removed between the second and third editions of this book, the newer os.walk makes writing your own simple. Rather than lamenting the demise of a module, I decided to spend a few minutes coding a custom equivalent.

The following example implements a find utility in Python, which collects all matching filenames in a directory tree. Unlike glob.glob, its find.find automatically matches through an entire tree. And unlike the tree walk structure of os.walk, we can treat find.find results as a simple linear group.

Example: PP4E\Tools\find.py

    #!/usr/bin/python
    """
    ############################################################################
    Return all files matching a filename pattern at and below a root directory;

    custom version of the now deprecated find module in the standard library:
    import as "PP4E.Tools.find"; like original, but uses the os.walk loop,
    has no support for pruning subdirs, and is runnable as a top-level script;

    find() is a generator that uses the os.walk() generator to yield just
    matching filenames: use findlist() to force results list generation;
    ############################################################################
    """

    import fnmatch, os

    def find(pattern, startdir=os.curdir):
        for (thisDir, subsHere, filesHere) in os.walk(startdir):
            for name in subsHere + filesHere:
                if fnmatch.fnmatch(name, pattern):
                    fullpath = os.path.join(thisDir, name)
                    yield fullpath

    def findlist(pattern, startdir=os.curdir, dosort=False):
        matches = list(find(pattern, startdir))
        if dosort: matches.sort()
        return matches

    if __name__ == '__main__':
        import sys
        namepattern, startdir = sys.argv[1], sys.argv[2]
        for name in find(namepattern, startdir): print(name)

There's not much to this file--it's largely just a minor extension to os.walk--but calling its find function provides the same utility as both the deprecated find standard library module and the Unix utility of the same name. It's also much more portable, and noticeably easier than repeating all of this file's code every time you need to perform a find-type search. Because this file is instrumented to be both a script and a library, it can also be both run as a command-line tool or called from other programs.

For instance, to process every Python file in the directory tree rooted one level up from the current working directory, I simply run the following command line from a system console window. Run this yourself to watch its progress; the script's standard output is piped into the more command to page it here, but it can be piped into any processing program that reads its input from the standard input stream:

    C:\...\PP4E\Tools> python find.py *.py .. | more
    ..\LaunchBrowser.py
    ..\Launcher.py
    ..\__init__.py
    ..\Preview\attachgui.py
    ..\Preview\customizegui.py
    ...more lines omitted...

For more control, run the following sort of Python code from a script or interactive prompt. In this mode, you can apply any operation to the found files that the Python language provides:

    C:\...\PP4E\System\Filetools> python
    >>> from PP4E.Tools import find           # or just import find if in cwd
    >>> for filename in find.find('*.py', '..'):
    ...     if 'walk' in open(filename).read():
    ...         print(filename)
    ...
    ..\Launcher.py
    ..\System\Filetools\bigext-tree.py
    ..\System\Filetools\bigpy-path.py
    ..\System\Filetools\bigpy-tree.py
    ..\Tools\cleanpyc.py
    ..\Tools\find.py
    ..\Tools\visitor.py

Notice how this avoids having to recode the nested loop structure required for os.walk every time you want a list of matching file names; for many use cases, this seems conceptually simpler. Also note that because this finder is a generator function, your script doesn't have to wait until all matching files have been found and collected; os.walk yields results as it goes, and find.find yields matching files among that set.

Here's a more complex example of our find module at work: the following system command line lists all Python files in directory C:\temp\PP3E whose names begin with the letter q or x. Note that find accepts both a filename pattern and a
directory specification:

    C:\...\PP4E\Tools> find.py [qx]*.py C:\temp\PP3E
    C:\temp\PP3E\Examples\PP3E\Database\SQLscripts\querydb.py
    C:\temp\PP3E\Examples\PP3E\Gui\Tools\queuetest-gui-class.py
    C:\temp\PP3E\Examples\PP3E\Gui\Tools\queuetest-gui.py
    C:\temp\PP3E\Examples\PP3E\Gui\Tour\quitter.py
    C:\temp\PP3E\Examples\PP3E\Internet\Other\Grail\Question.py
    C:\temp\PP3E\Examples\PP3E\Internet\Other\XML\xmlrpc.py
    C:\temp\PP3E\Examples\PP3E\System\Threads\queuetest.py

And here's some Python code that does the same find but also extracts base names and file sizes for each file found:

    C:\...\PP4E\Tools> python
    >>> import os
    >>> from find import find
    >>> for name in find('[qx]*.py', r'C:\temp\PP3E'):
    ...     print(os.path.basename(name), os.path.getsize(name))
    ...
    querydb.py ...
    queuetest-gui-class.py ...
    queuetest-gui.py ...
    quitter.py ...
    Question.py ...
    xmlrpc.py ...
    queuetest.py ...

The fnmatch Module

To achieve such code economy, the find module calls os.walk to walk the tree and simply yields matching filenames along the way. New here, though, is the fnmatch module--yet another Python standard library module that performs Unix-like pattern matching against filenames. This module supports common operators in name pattern strings: * (to match any number of characters), ? (to match any single character), and [...] and [!...] (to match any character inside the bracket pairs, or not); other characters match themselves. Unlike the re module, fnmatch supports only common Unix shell matching operators, not full-blown regular expression patterns; we'll see why this distinction matters when we study text processing later in the book.

Interestingly, Python's glob.glob function also uses the fnmatch module to match names: it combines os.listdir and fnmatch to match in directories in much the same way our find.find combines os.walk and fnmatch to match in trees (though os.walk ultimately uses os.listdir as well). One ramification of all this is that you can pass byte strings for both pattern and start-directory to find.find if you need to suppress Unicode filename decoding, just as you can for os.walk and glob.glob; you'll receive byte strings for filenames in the result. See the earlier coverage of Unicode filenames for more details.
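To make the operator forms concrete, here is a quick interactive sketch; the filenames tested are made up for illustration, but fnmatch.fnmatch is the same standard library call our find module relies on:

    >>> from fnmatch import fnmatch
    >>> fnmatch('find.py', '*.py')            # * matches any number of characters
    True
    >>> fnmatch('find.py', 'f???.py')         # ? matches exactly one character
    True
    >>> fnmatch('question.py', '[qx]*.py')    # [...] matches one listed character
    True
    >>> fnmatch('question.py', '[!qx]*.py')   # [!...] matches one character not listed
    False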
Incidentally, a find.find with a pattern of "*" is also roughly equivalent to platform-specific directory tree listing shell commands such as dir /B /S on DOS and Windows. Since all files match "*", this just exhaustively generates all the file names in a tree with a single traversal. Because we can usually run such shell commands in a Python script with os.popen, the following do the same work, but the first is inherently nonportable and must start up a separate program along the way:

    >>> import os
    >>> for line in os.popen('dir /B /S'): print(line, end='')

    >>> from PP4E.Tools.find import find
    >>> for name in find(pattern='*', startdir='.'): print(name)

Watch for this utility to show up in action later in this chapter and book, including an arguably strong showing in the next section and a cameo appearance in the grep dialog of the PyEdit text editor GUI presented later in the book, where it will serve a central role in a threaded external files search tool. The standard library's find module may be gone, but it need not be forgotten.

| In fact, you must pass a bytes pattern string for a bytes filename to fnmatch (or pass both as str), because the re pattern matching module it uses does not allow the string types of subject and pattern to be mixed. This rule is inherited by our find.find for directory and pattern; see the re coverage later in this book for more background. Curiously, the fnmatch module in Python 3.X also converts a bytes pattern string to and from Unicode str in order to perform internal text processing, using the Latin-1 encoding. This suffices for many contexts, but may not be entirely sound for some encodings which do not map to Latin-1 cleanly. sys.getfilesystemencoding might be a better encoding choice in such contexts, as this reflects the underlying file system's constraints (as we learned earlier, sys.getdefaultencoding reflects file content, not names). In the absence of bytes, os.walk assumes filenames follow the platform's convention and does not ignore decoding errors triggered by os.listdir. In the "grep" utility of the PyEdit GUI, this picture is further clouded by the fact that a str pattern string from a GUI would have to be encoded to bytes using a potentially inappropriate encoding for some files present. See fnmatch.py and os.py in Python's library and the Python library manual for more details. Unicode can be a very subtle affair.

Cleaning Up Bytecode Files

The find module of the prior section isn't quite the general string searcher we're after, but it's an important first step--it collects files that we can then search in an automated script. In fact, the act of collecting matching files in a tree is enough by itself to support a wide variety of day-to-day system tasks. For example, one of the most common tasks I perform on a regular basis is deleting
all the bytecode files in a tree. Because these are not always portable across major Python releases, it's usually a good idea to ship programs without them and let Python create new ones on first imports. Now that we're expert os.walk users, we could cut out the middleman and use it directly. The following example codes a portable and general command-line tool, with support for arguments, exception processing, tracing, and list-only mode.

Example: PP4E\Tools\cleanpyc.py

    """
    delete all .pyc bytecode files in a directory tree: use the
    command line arg as root if given, else the current working dir
    """

    import os, sys

    findonly = False
    rootdir  = os.getcwd() if len(sys.argv) == 1 else sys.argv[1]

    found = removed = 0
    for (thisDirLevel, subsHere, filesHere) in os.walk(rootdir):
        for filename in filesHere:
            if filename.endswith('.pyc'):
                fullname = os.path.join(thisDirLevel, filename)
                print('=>', fullname)
                if not findonly:
                    try:
                        os.remove(fullname)
                        removed += 1
                    except:
                        type, inst = sys.exc_info()[:2]
                        print('*' * 4, 'Failed:', filename, type, inst)
                found += 1

    print('Found', found, 'files, removed', removed)

When run, this script walks a directory tree (the CWD by default, or else one passed in on the command line), deleting any and all bytecode files along the way:

    C:\...\Examples\PP4E\Tools> cleanpyc.py
    => C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\__init__.pyc
    => C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\Preview\initdata.pyc
    => C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\Preview\make_db_file.pyc
    => C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\Preview\manager.pyc
    => C:\Users\mark\Stuff\Books\4E\PP4E\dev\Examples\PP4E\Preview\person.pyc
    ...more lines here...
    Found ... files, removed ...

    C:\...\PP4E\Tools> cleanpyc.py .
    => .\find.pyc
    => .\visitor.pyc
    => .\__init__.pyc
    Found 3 files, removed 3
Now that we also know about find operations, writing scripts based upon them is almost trivial when we just need to match filenames. The following example, for instance, falls back on spawning shell find commands if you have them.

Example: PP4E\Tools\cleanpyc-find-shell.py

    """
    find and delete all "*.pyc" bytecode files at and below the directory
    named on the command-line; assumes a nonportable Unix-like find command
    """

    import os, sys

    rundir = sys.argv[1]
    if sys.platform[:3] == 'win':
        findcmd = r'c:\cygwin\bin\find %s -name "*.pyc" -print' % rundir
    else:
        findcmd = 'find %s -name "*.pyc" -print' % rundir
    print(findcmd)

    count = 0
    for fileline in os.popen(findcmd):        # for all result lines
        count += 1                            # have \n at the end
        print(fileline, end='')
        os.remove(fileline.rstrip())

    print('Removed %d .pyc files' % count)

When run, files returned by the shell command are removed:

    C:\...\PP4E\Tools> cleanpyc-find-shell.py .
    c:\cygwin\bin\find . -name "*.pyc" -print
    ./find.pyc
    ./visitor.pyc
    ./__init__.pyc
    Removed 3 .pyc files

This script uses os.popen to collect the output of a Cygwin find program installed on one of my Windows computers, or else the standard find tool on the Linux side. It's also completely nonportable to Windows machines that don't have the Unix-like find program installed, and that includes other computers of my own (not to mention those throughout most of the world at large). As we've seen, spawning shell commands also incurs performance penalties for starting a new program.

We can do much better on the portability and performance fronts and still retain code simplicity, by applying the find tool we wrote in Python in the prior section. The new script is shown in the next example.

Example: PP4E\Tools\cleanpyc-find-py.py

    """
    find and delete all "*.pyc" bytecode files at and below the directory
    named on the command-line; this uses a Python-coded find utility, and