Advanced Programming Techniques

    def __getattr__(self, name):
        if name == "colors":
            return set(self.__colors)
        classname = self.__class__.__name__
        if name in frozenset({"background", "width", "height"}):
            return self.__dict__["_{classname}__{name}".format(
                    **locals())]
        raise AttributeError("'{classname}' object has no "
                "attribute '{name}'".format(**locals()))

If we attempt to access an object's attribute and the attribute is not found, Python will call the __getattr__() method (providing it is implemented, and that we have not reimplemented __getattribute__()), with the name of the attribute as a parameter. Implementations of __getattr__() must raise an AttributeError exception if they do not handle the given attribute.

For example, if we have the statement image.colors, Python will look for a colors attribute and, having failed to find it, will then call Image.__getattr__(image, "colors"). In this case the __getattr__() method handles the "colors" attribute name and returns a copy of the set of colors that the image is using. The other attributes are immutable, so they are safe to return directly to the caller. We could have written separate elif statements for each one, like this:

    elif name == "background":
        return self.__background

But instead we have chosen a more compact approach. Since we know that under the hood all of an object's nonspecial attributes are held in self.__dict__, we have chosen to access them directly. For private attributes (those whose name begins with two leading underscores), the name is mangled to have the form _ClassName__attributeName, so we must account for this when retrieving the attribute's value from the object's private dictionary.

For the name mangling needed to look up private attributes, and to provide the standard AttributeError error text, we need to know the name of the class we are in. (It may not be Image, because the object might be an instance of an Image subclass.) Every object has a __class__ special attribute, so self.__class__ is always available inside methods and can safely be accessed by __getattr__() without risking unwanted recursion. Note that there is a subtle difference: using __getattr__() and self.__class__ provides access to the attribute in the instance's class (which may be a subclass), but accessing the attribute directly uses the class the attribute is defined in.

One special method that we have not covered is __getattribute__(). Whereas the __getattr__() method is called last when looking for (nonspecial) attributes…
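To see the delegation flavor of __getattr__() in isolation, here is a minimal, hypothetical Logged wrapper class (not one of the book's examples) that forwards any attribute it does not itself have to a wrapped object:

    class Logged:

        def __init__(self, target):
            # Stored as _Logged__target, so normal lookup finds it
            # and __getattr__() is never triggered for it
            self.__target = target

        def __getattr__(self, name):
            # Called only after normal attribute lookup has failed
            print("accessing", name)
            return getattr(self.__target, name)

    items = Logged([3, 1, 2])
    items.sort()        # prints: accessing sort
    print(items.pop())  # prints: accessing pop, then 3

Because self.__target is found in the instance's __dict__ by normal lookup, accessing it inside __getattr__() does not recurse.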
    def make_strip_function(characters):
        def strip_function(string):
            return string.strip(characters)
        return strip_function

    strip_punctuation = make_strip_function(",;!?")
    strip_punctuation("land ahoy!")     # returns: 'land ahoy'

The make_strip_function() function takes the characters to be stripped as its sole argument and returns a function, strip_function(), that takes a string argument and strips it of the characters that were given at the time the closure was created. So just as we can create as many instances of the Strip class as we want, each with its own characters to strip, we can create as many strip functions with their own characters as we like.

The classic use case for functors is to provide key functions for sort routines. Here is a generic SortKey functor class (from file SortKey.py):

    class SortKey:

        def __init__(self, *attribute_names):
            self.attribute_names = attribute_names

        def __call__(self, instance):
            values = []
            for attribute_name in self.attribute_names:
                values.append(getattr(instance, attribute_name))
            return values

When a SortKey object is created it keeps a tuple of the attribute names it was initialized with. When the object is called it creates a list of the attribute values for the instance it is passed, in the order they were specified when the SortKey was initialized. For example, imagine we have a Person class:

    class Person:

        def __init__(self, forename, surname, email):
            self.forename = forename
            self.surname = surname
            self.email = email

Suppose we have a list of Person objects in the people list. We can sort the list by surnames like this: people.sort(key=SortKey("surname")). If there are a lot of people there are bound to be some surname clashes, so we can sort by surname, and then by forename within surname, like this: people.sort(key=SortKey("surname", "forename")). And if we had people with the same surname and forename we could add the email attribute too. And of course, we could sort by forename and then surname, simply by changing the order of the attribute names we give to the SortKey functor.
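For example, given the Person class above, a short hypothetical session shows the functor at work:

    people = [Person("Adam", "Best", "adam@example.com"),
              Person("Zoe", "Arnold", "zoe@example.com"),
              Person("Mary", "Arnold", "mary@example.com")]
    people.sort(key=SortKey("surname", "forename"))
    print([(p.surname, p.forename) for p in people])
    # prints: [('Arnold', 'Mary'), ('Arnold', 'Zoe'), ('Best', 'Adam')]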
Another way of achieving the same thing, but without needing to create a functor at all, is to use the operator module's operator.attrgetter() function. For example, to sort by surname we could write: people.sort(key=operator.attrgetter("surname")). And similarly, to sort by surname and forename: people.sort(key=operator.attrgetter("surname", "forename")). The operator.attrgetter() function returns a function (a closure) that, when called on an object, returns those attributes of the object that were specified when the closure was created.

Functors are probably used rather less frequently in Python than in other languages that support them, because Python has other means of doing the same things, for example, using closures or item and attribute getters.

Context Managers

Context managers allow us to simplify code by ensuring that certain operations are performed before and after a particular block of code is executed. The behavior is achieved because context managers define two special methods, __enter__() and __exit__(), that Python treats specially in the scope of a with statement. When a context manager is created in a with statement its __enter__() method is automatically called, and when the context manager goes out of scope after its with statement its __exit__() method is automatically called.

We can create our own custom context managers or use predefined ones; as we will see later in this subsection, the file objects returned by the built-in open() function are context managers. The syntax for using context managers is this:

    with expression as variable:
        suite

The expression must be, or must produce, a context manager object; if the optional "as variable" part is specified, the variable is set to refer to the object returned by the context manager's __enter__() method (and this is often the context manager itself). Because a context manager is guaranteed to execute its "exit" code (even in the face of exceptions), context managers can be used to eliminate the need for finally blocks in many situations.

Some of Python's types are context managers, for example, all the file objects that open() can return, so we can eliminate finally blocks when doing file handling, as these equivalent code snippets illustrate (assuming that process() is a function defined elsewhere):
    fh = None
    try:
        fh = open(filename)
        for line in fh:
            process(line)
    except EnvironmentError as err:
        print(err)
    finally:
        if fh is not None:
            fh.close()

    try:
        with open(filename) as fh:
            for line in fh:
                process(line)
    except EnvironmentError as err:
        print(err)

A file object is a context manager whose exit code always closes the file if it was opened. The exit code is executed whether or not an exception occurs, but in the latter case the exception is propagated. This ensures that the file gets closed and we still get the chance to handle any errors, in this case by printing a message for the user.

In fact, context managers don't have to propagate exceptions, but not doing so effectively hides any exceptions, and this would almost certainly be a coding error. All the built-in and standard library context managers propagate exceptions.

Sometimes we need to use more than one context manager at the same time. For example:

    try:
        with open(source) as fin:
            with open(target, "w") as fout:
                for line in fin:
                    fout.write(process(line))
    except EnvironmentError as err:
        print(err)

Here we read lines from the source file and write processed versions of them to the target file. Using nested with statements can quickly lead to a lot of indentation. Fortunately, the standard library's contextlib module provides some additional support for context managers, including the contextlib.nested() function, which allows two or more context managers to be handled in the same with statement rather than having to nest with statements. Here is a replacement for the code just shown, but omitting most of the lines that are identical to before:

    try:
        with contextlib.nested(open(source),
                open(target, "w")) as (fin, fout):
            for line in fin:
                ...
It is only necessary to use contextlib.nested() for Python 3.0; from Python 3.1 this function is deprecated, because Python 3.1 can handle multiple context managers in a single with statement. Here is the same example, again omitting irrelevant lines, but this time for Python 3.1:

    try:
        with open(source) as fin, open(target, "w") as fout:
            for line in fin:
                ...

Using this syntax keeps context managers and the variables they are associated with together, making the with statement much more readable than if we were to nest them or to use contextlib.nested().

It isn't only file objects that are context managers. For example, several threading-related classes used for locking are context managers. Context managers can also be used with decimal.Decimal numbers; this is useful if we want to perform some calculations with certain settings (such as a particular precision) in effect.

If we want to create a custom context manager we must create a class that provides two methods, __enter__() and __exit__(). Whenever a with statement is used on an instance of such a class, the __enter__() method is called and the return value is used for the as variable (or thrown away if there isn't one). When control leaves the scope of the with statement the __exit__() method is called (with details of an exception, if one has occurred, passed as arguments).

Suppose we want to perform several operations on a list in an atomic manner, that is, we either want all the operations to be done or none of them, so that the resultant list is always in a known state. For example, if we have a list of integers and want to append an integer, delete an integer, and change a couple of integers, all as a single operation, we could write code like this:

    try:
        with AtomicList(items) as atomic:
            atomic.append(58289)
            del atomic[3]
            atomic[8] = 81738
            atomic[index] = 38172
    except (AttributeError, IndexError, ValueError) as err:
        print("no changes applied:", err)

If no exception occurs, all the operations are applied to the original list (items), but if an exception occurs, no changes are made at all. Here is the code for the AtomicList context manager:

    class AtomicList:

        def __init__(self, alist, shallow_copy=True):
            self.original = alist
            self.shallow_copy = shallow_copy

        def __enter__(self):
            self.modified = (self.original[:] if self.shallow_copy
                             else copy.deepcopy(self.original))
            return self.modified

        def __exit__(self, exc_type, exc_val, exc_tb):
            if exc_type is None:
                self.original[:] = self.modified

When the AtomicList object is created we keep a reference to the original list and note whether shallow copying is to be used. (Shallow copying is fine for lists of numbers or strings; but for lists that contain lists or other collections, shallow copying is not sufficient.) Then, when the AtomicList context manager object is used in a with statement, its __enter__() method is called. At this point we copy the original list and return the copy, so that all the changes can be made on the copy.

Once we reach the end of the with statement's scope, the __exit__() method is called. If no exception occurred, the exc_type ("exception type") will be None, and we know that we can safely replace the original list's items with the items from the modified list. (We cannot do self.original = self.modified, because that would just replace one object reference with another and would not affect the original list at all.) But if an exception occurred, we do nothing to the original list, and the modified list is discarded.

The return value of __exit__() is used to indicate whether any exception that occurred should be propagated. A True value means that we have handled any exception and so no propagation should occur. Normally we always return False, or something that evaluates to False in a Boolean context, to allow any exception that occurred to propagate. By not giving an explicit return value, our __exit__() returns None, which evaluates to False and correctly causes any exception to propagate.

Custom context managers are used later in the book to ensure that socket connections and gzipped files are closed, and some of the threading module's context managers are used to ensure that mutual exclusion locks are unlocked. You'll also get the chance to create a more generic atomic context manager in this chapter's exercises.
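Incidentally, much the same atomic behavior can be sketched with the standard library's contextlib.contextmanager decorator, which builds a context manager from a generator function. This is a minimal illustration (assuming shallow copying is acceptable), not the book's implementation:

    import contextlib

    @contextlib.contextmanager
    def atomic(alist):
        working = alist[:]      # operate on a shallow copy
        yield working           # becomes the with statement's "as" value
        alist[:] = working      # skipped if the body raised an exception

    items = [1, 2, 3]
    with atomic(items) as working:
        working.append(4)
    # items is now [1, 2, 3, 4]; had the body raised an exception,
    # items would have been left unchanged

An exception raised in the with block is rethrown at the yield point, so the final assignment never runs and the exception propagates, just as returning None from __exit__() does.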
Descriptors

Descriptors are classes that provide access control for the attributes of other classes. Any class that implements one or more of the descriptor special methods, __get__(), __set__(), and __delete__(), is called (and can be used as) a descriptor. The built-in property() and classmethod() functions are implemented using descriptors. The key to understanding descriptors is that although we create an instance of a descriptor in a class as a class attribute, Python accesses the descriptor through the class's instances.

To make things clear, let's imagine that we have a class whose instances hold some strings. We want to access the strings in the normal way, for example, as a property, but we also want to get an XML-escaped version of the strings whenever we want one. One simple solution would be that whenever a string is set we immediately create an XML-escaped copy. But if we had thousands of strings and only ever read the XML version of a few of them, we would be wasting a lot of processing and memory for nothing. So we will create a descriptor that provides XML-escaped strings on demand, without storing them. We will start with the beginning of the client (owner) class, that is, the class that uses the descriptor:

    class Product:

        __slots__ = ("__name", "__description", "__price")

        name_as_xml = XmlShadow("name")
        description_as_xml = XmlShadow("description")

        def __init__(self, name, description, price):
            self.__name = name
            self.description = description
            self.price = price

The only code we have not shown are the properties: name is a read-only property, and description and price are readable/writable properties, all set up in the usual way. (All the code is in the XmlShadow.py file.) We have used the __slots__ variable to ensure that the class has no __dict__ and can store only the three specified private attributes; this is not related to, or necessary for, our use of descriptors. The name_as_xml and description_as_xml class attributes are set to be instances of the XmlShadow descriptor. Although no Product object has a name_as_xml or description_as_xml attribute, thanks to the descriptor we can write code like this (here quoting from the module's doctests):

    >>> product = Product("Chisel <3cm>", "Chisel & cap", 45.25)
    >>> product.name, product.name_as_xml, product.description_as_xml
    ('Chisel <3cm>', 'Chisel &lt;3cm&gt;', 'Chisel &amp; cap')

This works because when we try to access, for example, the name_as_xml attribute, Python finds that the Product class has a descriptor with that name, and so uses the descriptor to get the attribute's value.
Here's the complete code for the XmlShadow descriptor class:

    class XmlShadow:

        def __init__(self, attribute_name):
            self.attribute_name = attribute_name

        def __get__(self, instance, owner=None):
            return xml.sax.saxutils.escape(
                    getattr(instance, self.attribute_name))

When the name_as_xml and description_as_xml objects are created, we pass the name of the Product class's corresponding attribute to the XmlShadow initializer, so that the descriptor knows which attribute to work on. Then, when the name_as_xml or description_as_xml attribute is looked up, Python calls the descriptor's __get__() method. The self argument is the instance of the descriptor, the instance argument is the Product instance (i.e., the product's self), and the owner argument is the owning class (Product in this case). We use the getattr() function to retrieve the relevant attribute from the product (in this case the relevant property), and return an XML-escaped version of it.

If the use case was that only a small proportion of the products were accessed for their XML strings, but the strings were often long and the same ones were frequently accessed, we could use a cache. For example:

    class CachedXmlShadow:

        def __init__(self, attribute_name):
            self.attribute_name = attribute_name
            self.cache = {}

        def __get__(self, instance, owner=None):
            xml_text = self.cache.get(id(instance))
            if xml_text is not None:
                return xml_text
            return self.cache.setdefault(id(instance),
                    xml.sax.saxutils.escape(
                            getattr(instance, self.attribute_name)))

We store the unique identity of the instance as the key, rather than the instance itself, because dictionary keys must be hashable (which IDs are), and we don't want to impose that as a requirement on classes that use the CachedXmlShadow descriptor. The key is necessary because descriptors are created per class rather than per instance. (The dict.setdefault() method conveniently returns the value for the given key, or, if no item with that key is present, creates a new item with the given key and value and returns the value.)
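A tiny hypothetical owner class shows the caching behavior; the second lookup is served from the descriptor's dictionary rather than escaping again:

    class CachedProduct:

        name_as_xml = CachedXmlShadow("name")

        def __init__(self, name):
            self.name = name

    product = CachedProduct("Chisel & cap")
    print(product.name_as_xml)  # escapes and caches: Chisel &amp; cap
    print(product.name_as_xml)  # returned straight from the cache

Note that because the cache is keyed by id() and never invalidated, this simple version assumes the underlying attribute does not change after it is first read.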
Having seen descriptors used to generate data without necessarily storing it, we will now look at a descriptor that can be used to store all of an object's attribute data, with the object not needing to store anything itself. In this example we will just use a dictionary, but in a more realistic context the data might be stored in a file or a database. Here's the start of a modified version of the Point class that makes use of the descriptor (from the ExternalStorage.py file):

    class Point:

        __slots__ = ()
        x = ExternalStorage("x")
        y = ExternalStorage("y")

        def __init__(self, x=0, y=0):
            self.x = x
            self.y = y

By setting __slots__ to an empty tuple we ensure that the class cannot store any data attributes at all. When self.x is assigned to, Python finds that there is a descriptor with the name "x", and so uses the descriptor's __set__() method. The rest of the class isn't shown, but is the same as the original Point class shown earlier in the book. Here is the complete ExternalStorage descriptor class:

    class ExternalStorage:

        __slots__ = ("attribute_name",)
        __storage = {}

        def __init__(self, attribute_name):
            self.attribute_name = attribute_name

        def __set__(self, instance, value):
            self.__storage[id(instance), self.attribute_name] = value

        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            return self.__storage[id(instance), self.attribute_name]

Each ExternalStorage object has a single data attribute, attribute_name, which holds the name of the owner class's data attribute. Whenever an attribute is set, we store its value in the private class dictionary __storage. Similarly, whenever an attribute is retrieved, we get it from the __storage dictionary. As with all descriptor methods, self is the instance of the descriptor object and instance is the self of the object that contains the descriptor, so here self is an ExternalStorage object and instance is a Point object.
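A quick session (anticipating the explanation that follows) shows both ways the descriptor's __get__() can be reached:

    p = Point(3, 4)
    print(p.x, p.y)     # prints: 3 4  (instance access)
    p.x = 5             # routed through ExternalStorage.__set__()
    print(p.x)          # prints: 5
    print(Point.x)      # class access: the descriptor object itself,
                        # because instance is None in __get__()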
Although __storage is a class attribute, we can access it as self.__storage (just as we can call methods using self.method()), because Python will look for it as an instance attribute, and not finding it, will then look for it as a class attribute. The one (theoretical) disadvantage of this approach is that if we had a class attribute and an instance attribute with the same name, one would hide the other. (If this were really a problem we could always refer to the class attribute using the class, that is, ExternalStorage.__storage. And although hard-coding the class does not play well with subclassing in general, it doesn't really matter for private attributes, since Python name-mangles the class name into them anyway.)

The implementation of the __get__() special method is slightly more sophisticated than before, because we provide a means by which the ExternalStorage instance itself can be accessed. For example, if we have p = Point(3, 4), we can access the x-coordinate with p.x, and we can access the ExternalStorage object that holds all the xs with Point.x.

To complete our coverage of descriptors we will create the Property descriptor, which mimics the behavior of the built-in property() function, at least for setters and getters. The code is in Property.py. Here is the complete NameAndExtension class that makes use of it:

    class NameAndExtension:

        def __init__(self, name, extension):
            self.__name = name
            self.extension = extension

        @Property               # Uses the custom Property descriptor
        def name(self):
            return self.__name

        @Property               # Uses the custom Property descriptor
        def extension(self):
            return self.__extension

        @extension.setter       # Uses the custom Property descriptor
        def extension(self, extension):
            self.__extension = extension

The usage is just the same as for the built-in @property decorator and for the @propertyName.setter decorator. Here is the start of the Property descriptor's implementation:

    class Property:

        def __init__(self, getter, setter=None):
            self.__getter = getter
            self.__setter = setter
            self.__name__ = getter.__name__

The class's initializer takes one or two functions as arguments. If it is used as a decorator it will get just the decorated function, and this becomes the getter, while the setter is set to None. We use the getter's name as the property's name. So for each property we have a getter, possibly a setter, and a name.

        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            return self.__getter(instance)

When a property is accessed we return the result of calling the getter function, passing the instance as its first parameter. At first sight, self.__getter() looks like a method call, but it is not. In fact, self.__getter is an attribute, one that happens to hold an object reference to a method that was passed in. So what happens is that first we retrieve the attribute (self.__getter), and then we call it as a function. And because it is called as a function rather than as a method, we must pass in the relevant self object explicitly ourselves; in the case of a descriptor, the self object (from the class that is using the descriptor) is called instance, since self is the descriptor object. The same applies to the __set__() method.

        def __set__(self, instance, value):
            if self.__setter is None:
                raise AttributeError("'{0}' is read-only".format(
                        self.__name__))
            return self.__setter(instance, value)

If no setter has been specified we raise an AttributeError; otherwise, we call the setter with the instance and the new value.

        def setter(self, setter):
            self.__setter = setter
            return self.__setter

This method is called when the interpreter reaches, for example, @extension.setter, with the function it decorates as its setter argument. It stores the setter method it has been given (which can now be used in the __set__() method), and returns the setter, since decorators should return the function or method they decorate.

We have now looked at three quite different uses of descriptors. Descriptors are a very powerful and flexible feature that can be used to do lots of under-the-hood work while appearing to be simple attributes in their client (owner) class.
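Here is a short hypothetical session with NameAndExtension that exercises all three of the custom Property descriptor's methods:

    obj = NameAndExtension("README", "txt")
    print(obj.name, obj.extension)  # prints: README txt
    obj.extension = "md"            # goes through Property.__set__()
    try:
        obj.name = "other"          # no setter was registered
    except AttributeError as err:
        print(err)                  # prints: 'name' is read-only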
Class Decorators

Just as we can create decorators for functions and methods, we can also create decorators for entire classes. Class decorators take a class object (the result of a class statement), and should return a class, normally a modified version of the class they decorate. In this subsection we will study two class decorators to see how they can be implemented.

SortedList

Earlier in the book we created the SortedList custom collection class, which aggregated a plain list as the private attribute self.__list. Eight of the SortedList methods simply passed on their work to the private attribute. For example, here is how the SortedList.clear() and SortedList.pop() methods were implemented:

    def clear(self):
        self.__list = []

    def pop(self, index=-1):
        return self.__list.pop(index)

There is nothing we can do about the clear() method, since there is no corresponding method for the list type; but for pop(), and the other six methods that SortedList delegates, we can simply call the list class's corresponding method. This can be done by using the @delegate class decorator from the book's Util module. Here is the start of a new version of the SortedList class:

    @Util.delegate("__list", ("pop", "__delitem__", "__getitem__",
            "__iter__", "__reversed__", "__len__", "__str__"))
    class SortedList:

The first argument is the name of the attribute to delegate to, and the second argument is a sequence of one or more methods that we want the delegate() decorator to implement for us, so that we don't have to do the work ourselves. The SortedList class in the SortedListDelegate.py file uses this approach and therefore does not have any code for the methods listed, even though it fully supports them. Here is the class decorator that implements the methods:

    def delegate(attribute_name, method_names):
        def decorator(cls):
            nonlocal attribute_name
            if attribute_name.startswith("__"):
                attribute_name = "_" + cls.__name__ + attribute_name
            for name in method_names:
                setattr(cls, name, eval("lambda self, *a, **kw: "
                        "self.{0}.{1}(*a, **kw)".format(
                        attribute_name, name)))
            return cls
        return decorator

We could not use a plain decorator because we want to pass arguments to the decorator, so we have instead created a function that takes our arguments and returns a class decorator. The decorator itself takes a single argument, a class (just as a function decorator takes a single function or method as its argument). We must use nonlocal so that the nested function uses the attribute_name from the outer scope rather than attempting to use one from its own scope, and we must correct the attribute name if necessary to take account of the name mangling of private attributes.

The decorator's behavior is quite simple: it iterates over all the method names that the delegate() function has been given, and for each one creates a new method, which it sets as an attribute on the class with the given method name. We have used eval() to create each of the delegated methods, since it can be used to execute a single statement, and a lambda statement produces a method or function. For example, the code executed to produce the pop() method is:

    lambda self, *a, **kw: self._SortedList__list.pop(*a, **kw)
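The eval() is not essential; a hypothetical variant of delegate() achieves the same effect with ordinary closures, using a helper function so that each generated method freezes its own method name:

    def delegate(attribute_name, method_names):
        def decorator(cls):
            name = attribute_name
            if name.startswith("__"):
                name = "_" + cls.__name__ + name
            def make_method(method_name):
                def method(self, *args, **kwargs):
                    # Fetch the delegate object, then call its method
                    return getattr(getattr(self, name),
                                   method_name)(*args, **kwargs)
                return method
            for method_name in method_names:
                setattr(cls, method_name, make_method(method_name))
            return cls
        return decorator

Without the make_method() helper (or a default argument), every generated method would share the loop's final method_name, a classic late-binding pitfall.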
We use the *a and **kw argument forms to allow for any arguments, even though the methods being delegated to have specific argument lists. For example, list.pop() accepts a single index position (or nothing, in which case it defaults to the last item). This is okay, because if the wrong number or kinds of arguments are passed, the list method that is called to do the work will raise an appropriate exception.

FuzzyBool

The second class decorator we will review was also used earlier in the book, when we implemented the FuzzyBool class. We mentioned that we had supplied only the __lt__() and __eq__() special methods (for < and ==), and had generated all the other comparison methods automatically. What we didn't show was the complete start of the class definition:

    @Util.complete_comparisons
    class FuzzyBool:

The other four comparison operators were provided by the complete_comparisons() class decorator. Given a class that defines only < (or < and ==), the decorator produces the missing comparison operators by using the following logical equivalences:

    x == y  <=>  not (x < y or y < x)
    x != y  <=>  not (x == y)
    x > y   <=>  y < x
    x <= y  <=>  not (y < x)
    x >= y  <=>  not (x < y)

If the class to be decorated has < and ==, the decorator will use them both, falling back to doing everything in terms of < if that is the only operator supplied.
(In fact, Python automatically produces > if < is supplied, != if == is supplied, and >= if <= is supplied, so it is sufficient to implement just the three operators <, <=, and ==, and to leave Python to infer the others. However, using the class decorator reduces the minimum that we must implement to just <. This is convenient, and also ensures that all the comparison operators use the same consistent logic.)

    def complete_comparisons(cls):
        assert cls.__lt__ is not object.__lt__, (
                "{0} must define < and ideally ==".format(
                cls.__name__))
        if cls.__eq__ is object.__eq__:
            cls.__eq__ = lambda self, other: (not
                    (cls.__lt__(self, other) or
                     cls.__lt__(other, self)))
        cls.__ne__ = lambda self, other: not cls.__eq__(self, other)
        cls.__gt__ = lambda self, other: cls.__lt__(other, self)
        cls.__le__ = lambda self, other: not cls.__lt__(other, self)
        cls.__ge__ = lambda self, other: not cls.__lt__(self, other)
        return cls

One problem that the decorator faces is that the object class, from which every other class is ultimately derived, defines all six comparison operators, all of which raise a TypeError exception if used. So we need to know whether < and == have been reimplemented (and are therefore usable). This can easily be done by comparing the relevant special methods in the class being decorated with those in object. If the decorated class does not have a custom <, the assertion fails, because that is the decorator's minimum requirement. And if there is a custom == we use it; otherwise, we create one. Then all the other methods are created, and the decorated class, now with all six comparison methods, is returned.

Using class decorators is probably the simplest and most direct way of changing classes. Another approach is to use metaclasses, a topic we will cover later in this chapter.

Abstract Base Classes

An abstract base class (ABC) is a class that cannot be used to create objects. Instead, the purpose of such classes is to define interfaces, that is, in effect to list the methods and properties that classes which inherit the abstract base class must provide. This is useful because we can use an abstract base class as a kind of promise: a promise that any derived class will provide the methods and properties that the abstract base class specifies.

Python's abstract base classes are described in PEP 3119 (www.python.org/dev/peps/pep-3119), which also includes a very useful rationale and is well worth reading.
Table: The numbers module's abstract base classes

- Number: inherits object; the root of the numeric hierarchy. Examples: complex, decimal.Decimal, float, fractions.Fraction, int.
- Complex: inherits Number; API: ==, !=, +, -, *, /, abs(), bool(), complex(), conjugate(), and the real and imag properties. Examples: complex, decimal.Decimal, float, fractions.Fraction, int.
- Real: inherits Complex; API: as Complex, plus <, <=, >=, >, //, %, divmod(), float(), math.ceil(), math.floor(), round(), and trunc(). Examples: decimal.Decimal, float, fractions.Fraction, int.
- Rational: inherits Real; API: as Real, plus the numerator and denominator properties. Examples: fractions.Fraction, int.
- Integral: inherits Rational; API: as Rational, plus <<, >>, ~, &, ^, |, int(), and pow(). Example: int.

Abstract base classes are classes that have at least one abstract method or property. Abstract methods can be defined with no implementation (i.e., their suite is pass, or, if we want to force reimplementation in a subclass, raise NotImplementedError()), or with an actual (concrete) implementation that can be invoked from subclasses, for example, when there is a common case. They can also have other concrete (i.e., nonabstract) methods and properties.

Classes that derive from an ABC can be used to create instances only if they reimplement all the abstract methods and abstract properties they have inherited. For those abstract methods that have concrete implementations (even if it is only pass), the derived class could simply use super() to use the ABC's version.
Any concrete methods or properties are available through inheritance as usual. All ABCs must have a metaclass of abc.ABCMeta (from the abc module), or of one of its subclasses. We cover metaclasses a bit further on.

Python provides two groups of abstract base classes, one in the collections module and the other in the numbers module. They allow us to ask questions about an object; for example, given a variable x, we can see whether it is a sequence using isinstance(x, collections.MutableSequence), or whether it is a whole number using isinstance(x, numbers.Integral). This is particularly useful in view of Python's dynamic typing, where we don't necessarily know (or care) what an object's type is, but want to know whether it supports the operations we want to apply to it. The numeric and collection ABCs are listed in the two tables in this section; the other major ABC is io.IOBase, from which all the file and stream-handling classes derive.

To fully integrate our own custom numeric and collection classes, we ought to make them fit in with the standard ABCs. For example, the SortedList class is a sequence, but as it stands, isinstance(L, collections.Sequence) returns False if L is a SortedList. One easy way to fix this is to inherit the relevant ABC:

    class SortedList(collections.Sequence):

By making collections.Sequence the base class, the isinstance() test will now return True. Furthermore, we will be required to implement __init__() (or __new__()), __getitem__(), and __len__() (which we do). The collections.Sequence ABC also provides concrete (i.e., nonabstract) implementations for __contains__(), __iter__(), __reversed__(), count(), and index(). In the case of SortedList we reimplement them all, but we could have used the ABC versions if we wanted to, simply by not reimplementing them. We cannot make SortedList a subclass of collections.MutableSequence, even though the list is mutable, because SortedList does not have all the methods that a collections.MutableSequence must provide, such as __setitem__() and append(). (The code for this SortedList is in SortedListAbc.py. We will see an alternative approach to making SortedList into a collections.Sequence in the metaclasses subsection.)

Now that we have seen how to make a custom class fit in with the standard ABCs, we will turn to another use of ABCs: to provide an interface promise for our own custom classes. We will look at three rather different examples, to cover different aspects of creating and using ABCs.

We will start with a very simple example that shows how to handle readable/writable properties. The class is used to represent domestic appliances. Every appliance that is created must have a read-only model string and a readable/writable price. We also want to ensure that the ABC's __init__() is reimplemented. Here's the ABC (from Appliance.py):
Table: The collections module's main abstract base classes

- Callable: inherits object; API: (). Examples: all functions, methods, and lambdas.
- Container: inherits object; API: in. Examples: bytearray, bytes, dict, frozenset, list, set, str, tuple.
- Hashable: inherits object; API: hash(). Examples: bytes, frozenset, str, tuple.
- Iterable: inherits object; API: iter(). Examples: bytearray, bytes, collections.deque, dict, frozenset, list, set, str, tuple.
- Iterator: inherits Iterable; API: iter(), next().
- Sized: inherits object; API: len(). Examples: bytearray, bytes, collections.deque, dict, frozenset, list, set, str, tuple.
- Mapping: inherits Container, Iterable, Sized; API: ==, !=, [], len(), iter(), in, get(), items(), keys(), values(). Example: dict.
- MutableMapping: inherits Mapping; API: as Mapping, plus del, clear(), pop(), popitem(), setdefault(), update(). Example: dict.
- Sequence: inherits Container, Iterable, Sized; API: [], len(), iter(), reversed(), in, count(), index(). Examples: bytearray, bytes, list, str, tuple.
- MutableSequence: inherits Sequence; API: as Sequence, plus +=, del, append(), extend(), insert(), pop(), remove(), reverse(). Examples: bytearray, list.
- Set: inherits Container, Iterable, Sized; API: <, <=, ==, !=, >=, >, &, |, ^, len(), iter(), in, isdisjoint(). Examples: frozenset, set.
- MutableSet: inherits Set; API: as Set, plus &=, |=, ^=, -=, add(), clear(), discard(), pop(), remove(). Example: set.
We have not shown the import abc statement, which is needed for the abc.abstractmethod() and abc.abstractproperty() functions, both of which can be used as decorators:

    class Appliance(metaclass=abc.ABCMeta):

        @abc.abstractmethod
        def __init__(self, model, price):
            self.__model = model
            self.price = price

        def get_price(self):
            return self.__price

        def set_price(self, price):
            self.__price = price

        price = abc.abstractproperty(get_price, set_price)

        @property
        def model(self):
            return self.__model

We have set the class's metaclass to be abc.ABCMeta, since this is a requirement for ABCs; any abc.ABCMeta subclass can be used instead, of course. We have made __init__() an abstract method to ensure that it is reimplemented, and we have also provided an implementation which we expect (but cannot force) inheritors to call. To make an abstract readable/writable property we cannot use decorator syntax; also, we have not used private names for the getter and setter, since doing so would be inconvenient for subclasses.

The price property is abstract (so we cannot use the @property decorator), and is readable/writable. Here we follow a common pattern for when we have private readable/writable data (e.g., __price) as a property: we initialize the property in the __init__() method, rather than setting the private data directly; this ensures that the setter is called (and may potentially do validation or other work, although it doesn't in this particular example).

The model property is not abstract, so subclasses don't need to reimplement it, and we can make it a property using the @property decorator. Here we follow a common pattern for when we have private read-only data (e.g., __model) as a property: we set the private __model data once, in the __init__() method, and provide read access via the read-only model property. Note that no Appliance objects can be created, because the class contains abstract attributes. Here is an example subclass:
    class Cooker(Appliance):

        def __init__(self, model, price, fuel):
            super().__init__(model, price)
            self.fuel = fuel

        price = property(lambda self: super().price,
                lambda self, price: super().set_price(price))

The Cooker class must reimplement the __init__() method and the price property. For the property we have just passed on all the work to the base class. The model read-only property is inherited. We could create many more classes based on Appliance, such as Fridge, Toaster, and so on.

The next ABC we will look at is even shorter: it is an ABC for text-filtering functors (in file TextFilter.py):

    class TextFilter(metaclass=abc.ABCMeta):

        @abc.abstractproperty
        def is_transformer(self):
            raise NotImplementedError()

        @abc.abstractmethod
        def __call__(self):
            raise NotImplementedError()

The TextFilter ABC provides no functionality at all; it exists purely to define an interface, in this case an is_transformer read-only property and a __call__() method, that all its subclasses must provide. Since the abstract property and method have no implementations, we don't want subclasses to call them, so instead of using an innocuous pass statement we raise an exception if they are used (e.g., via a super() call). Here is one simple subclass:

    class CharCounter(TextFilter):

        @property
        def is_transformer(self):
            return False

        def __call__(self, text, chars):
            count = 0
            for c in text:
                if c in chars:
                    count += 1
            return count
This text filter is not a transformer, because rather than transforming the text it is given, it simply returns a count of the specified characters that occur in the text. Here is an example of use:

    vowel_counter = CharCounter()
    vowel_counter("dog fish and cat fish", "aeiou")    # returns: 5

Two other text filters are provided, both of which are transformers: RunLengthEncode and RunLengthDecode. Here is how they are used:

    rle_encoder = RunLengthEncode()
    rle_text = rle_encoder(text)
    ...
    rle_decoder = RunLengthDecode()
    original_text = rle_decoder(rle_text)

The run length encoder converts a string into UTF-8 encoded bytes, and replaces 0x00 bytes with the sequence 0x00, 0x01, and any sequence of three to 255 repeated bytes with the sequence 0x00, count, byte. If the string has lots of runs of four or more identical consecutive characters, this can produce a shorter byte string than the raw UTF-8 encoded bytes. The run length decoder takes a run length encoded byte string and returns the original string. Here is the start of the RunLengthDecode class:

    class RunLengthDecode(TextFilter):

        @property
        def is_transformer(self):
            return True

        def __call__(self, rle_bytes):
            ...

We have omitted the body of the __call__() method, although it is in the source that accompanies this book. The RunLengthEncode class has exactly the same structure.
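For illustration, here is a minimal sketch of what an encoder following the scheme just described might look like; it is not the book's implementation, just one plausible reading of the 0x00-escape format:

    class RunLengthEncode(TextFilter):

        @property
        def is_transformer(self):
            return True

        def __call__(self, text):
            data = text.encode("utf-8")
            result = bytearray()
            i = 0
            while i < len(data):
                byte = data[i]
                run = 1
                while (i + run < len(data) and
                        data[i + run] == byte and run < 255):
                    run += 1
                if run >= 3:        # 3..255 repeats: 0x00, count, byte
                    result.extend((0, run, byte))
                elif byte == 0:     # literal 0x00 bytes: 0x00, 0x01
                    result.extend((0, 1) * run)
                else:
                    result.extend((byte,) * run)
                i += run
            return bytes(result)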
The last ABC we will look at provides an application programming interface (API) and a default implementation for an undo mechanism. Here is the complete ABC (from file Abstract.py):

    class Undo(metaclass=abc.ABCMeta):

        @abc.abstractmethod
        def __init__(self):
            self.__undos = []

        @abc.abstractproperty
        def can_undo(self):
            return bool(self.__undos)

        @abc.abstractmethod
        def undo(self):
            assert self.__undos, "nothing left to undo"
            self.__undos.pop()(self)

        def add_undo(self, undo):
            self.__undos.append(undo)

The __init__() and undo() methods must be reimplemented, since they are both abstract, and so must the read-only can_undo property. Subclasses don't have to reimplement the add_undo() method, although they are free to do so. The undo() method is slightly subtle. The self.__undos list is expected to hold object references to methods. Each method must cause the corresponding action to be undone if it is called; this will be clearer when we look at an Undo subclass in a moment. So to perform an undo we pop the last undo method off the self.__undos list, and then call the method as a function, passing self as an argument. (We must pass self because the method is being called as a function and not as a method.)

Here is the beginning of the Stack class; it inherits Undo, so any actions performed on it can be undone by calling Stack.undo() with no arguments:

    class Stack(Undo):

        def __init__(self):
            super().__init__()
            self.__stack = []

        @property
        def can_undo(self):
            return super().can_undo

        def undo(self):
            super().undo()

        def push(self, item):
            self.__stack.append(item)
            self.add_undo(lambda self: self.__stack.pop())

        def pop(self):
            item = self.__stack.pop()
            self.add_undo(lambda self: self.__stack.append(item))
            return item
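To see the undo machinery in action, here is a short hypothetical session with this Stack:

    stack = Stack()
    stack.push(1)
    stack.push(2)
    print(stack.pop())      # prints: 2
    stack.undo()            # undoes the pop: 2 is pushed back
    stack.undo()            # undoes the second push: 2 is removed
    print(stack.can_undo)   # prints: True (the first push remains)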
We have omitted Stack.top() and Stack.__str__(), since neither adds anything new and neither interacts with the Undo base class. For the can_undo property and the undo() method we simply pass on the work to the base class. If these two were not abstract we would not need to reimplement them at all, and the same effect would be achieved; but in this case we wanted to force subclasses to reimplement them, to encourage undo to be taken account of in the subclass. For push() and pop() we perform the operation, and also add a function to the undo list which will undo the operation that has just been performed.

Abstract base classes are most useful in large-scale programs, libraries, and application frameworks, where they can help ensure that, irrespective of implementation details or author, classes can work cooperatively together, because they provide the APIs that their ABCs specify.

Multiple Inheritance

Multiple inheritance is where one class inherits from two or more other classes. Although Python (and, for example, C++) fully supports multiple inheritance, some languages, most notably Java, don't allow it. One problem is that multiple inheritance can lead to the same class being inherited more than once (e.g., if two of the base classes inherit from the same class), and this means that the version of a method that is called, if it is not in the subclass but is in two or more of the base classes (or their base classes, etc.), depends on the method resolution order, which potentially makes classes that use multiple inheritance somewhat fragile.

Multiple inheritance can generally be avoided by using single inheritance (one base class) and setting a metaclass if we want to support an additional API, since, as we will see in the next subsection, a metaclass can be used to give the promise of an API without actually inheriting any methods or data attributes. An alternative is to use multiple inheritance with one concrete class and one or more abstract base classes for additional APIs. And another alternative is to use single inheritance and aggregate instances of other classes.

Nonetheless, in some cases multiple inheritance can provide a very convenient solution. For example, suppose we want to create a new version of the Stack class from the previous subsection, but want the class to support loading and saving using pickle. We might well want to add the loading and saving functionality to several classes, so we will implement it in a class of its own:

    class LoadSave:

        def __init__(self, filename, *attribute_names):
            self.filename = filename
            self.__attribute_names = []
            for name in attribute_names:
                if name.startswith("__"):
                    name = "_" + self.__class__.__name__ + name
                self.__attribute_names.append(name)

        def save(self):
            with open(self.filename, "wb") as fh:
                data = []
                for name in self.__attribute_names:
                    data.append(getattr(self, name))
                pickle.dump(data, fh, pickle.HIGHEST_PROTOCOL)

        def load(self):
            with open(self.filename, "rb") as fh:
                data = pickle.load(fh)
            for name, value in zip(self.__attribute_names, data):
                setattr(self, name, value)

The class has two attributes: filename, which is public and can be changed at any time, and __attribute_names, which is fixed and can be set only when the instance is created. The save() method iterates over all the attribute names and creates a list called data that holds the value of each attribute to be saved; it then saves the data into a pickle. The with statement ensures that the file is closed if it was successfully opened, and any file or pickle exceptions are passed up to the caller. The load() method iterates over the attribute names and the corresponding data items that have been loaded, and sets each attribute to its loaded value.

Here is the start of the FileStack class, which multiply-inherits the Undo class from the previous subsection and this subsection's LoadSave class:

    class FileStack(Undo, LoadSave):

        def __init__(self, filename):
            Undo.__init__(self)
            LoadSave.__init__(self, filename, "__stack")
            self.__stack = []

        def load(self):
            super().load()
            self.clear()

The rest of the class is just the same as the Stack class, so we have not reproduced it here. Instead of using super() in the __init__() method, we must specify the base classes that we initialize, since super() cannot guess our intentions. For the LoadSave initialization we pass the filename to use, and also the names of the attributes we want saved, in this case just one, the private __stack. (We don't want to save the __undos, and nor could we in this case, since it is a list of methods and is therefore unpicklable.)
The FileStack class has all the Undo methods, and also the LoadSave class's save() and load() methods. We have not reimplemented save(), since it works fine; but for load() we must clear the undo stack after loading. This is necessary because we might do a save, then make various changes, and then do a load; the load wipes out what went before, so any undos no longer make sense. The original Undo class did not have a clear() method, so we had to add one:

    def clear(self):        # in class Undo
        self.__undos = []

In the FileStack.load() method we have used super() to call LoadSave.load(), because there is no Undo.load() method to cause ambiguity. If both base classes had had a load() method, the one that would get called would depend on Python's method resolution order. We prefer to use super() only when there is no ambiguity, and to use the appropriate base name otherwise, so we never rely on the method resolution order. For the self.clear() call there is again no ambiguity, since only the Undo class has a clear() method; and we don't need to use super(), since (unlike load()) FileStack does not have a clear() method.

What would happen if, later on, a clear() method was added to the FileStack class? It would break the load() method. One solution would be to call super().clear() inside load() instead of plain self.clear(); this would result in the first superclass's clear() method that was found being used. To protect against such problems, we could make it a policy to use hard-coded base classes when using multiple inheritance (in this example, calling Undo.clear(self)), or we could avoid multiple inheritance altogether and use aggregation, for example, inheriting the Undo class and creating a LoadSave class designed for aggregation.

What multiple inheritance has given us here is a mixture of two rather different classes, without the need to implement any of the undo or the loading and saving ourselves, relying instead on the functionality provided by the base classes. This can be very convenient, and works especially well when the inherited classes have no overlapping APIs.

Metaclasses

A metaclass is to a class what a class is to an instance; that is, a metaclass is used to create classes, just as classes are used to create instances. And just as we can ask whether an instance belongs to a class by using isinstance(), we can ask whether a class object (such as dict, int, or SortedList) inherits another class using issubclass(). The simplest use of metaclasses is to make custom classes fit into Python's standard ABC hierarchy.
For example, to make SortedList a collections.Sequence, instead of inheriting the ABC (as we showed earlier), we can simply register the SortedList as a collections.Sequence:

    class SortedList:
        ...

    collections.Sequence.register(SortedList)

After the class is defined normally, we register it with the collections.Sequence ABC. Registering a class like this makes it a virtual subclass. (In Python terminology, "virtual" does not mean the same thing as it does in C++ terminology.) A virtual subclass reports that it is a subclass of the class or classes it is registered with (e.g., when isinstance() or issubclass() is used), but does not inherit any data or methods from any of the classes it is registered with.

Registering a class like this provides a promise that the class provides the API of the classes it is registered with, but does not provide any guarantee that it will honor its promise. One use of metaclasses is to provide both a promise and a guarantee about a class's API. Another use is to modify a class in some way (like a class decorator does). And of course, metaclasses can be used for both purposes at the same time.

Suppose we want to create a group of classes that all provide load() and save() methods. We can do this by creating a class that, when used as a metaclass, checks that these methods are present:

    class LoadableSaveable(type):

        def __init__(cls, classname, bases, dictionary):
            super().__init__(classname, bases, dictionary)
            assert (hasattr(cls, "load") and
                    isinstance(getattr(cls, "load"),
                               collections.Callable)), (
                    "class '" + classname +
                    "' must provide a load() method")
            assert (hasattr(cls, "save") and
                    isinstance(getattr(cls, "save"),
                               collections.Callable)), (
                    "class '" + classname +
                    "' must provide a save() method")

Classes that are to serve as metaclasses must inherit from the ultimate metaclass base class, type, or one of its subclasses. Note that this metaclass is invoked only when classes that use it are created, in all probability not very often, so the runtime cost is extremely low. Notice also that we must perform the checks after the class has been created (by the super() call), since only then will the class's attributes be available in the class itself. (The attributes are in the dictionary, but we prefer to work on the actual initialized class when doing checks.)
We could have checked that the load and save attributes are callable by using hasattr() to check that they have the __call__ attribute, but we prefer to check whether they are instances of collections.Callable instead. The collections.Callable abstract base class provides the promise (but no guarantee) that instances of its subclasses (or virtual subclasses) are callable.

Once the class has been created (using type.__new__() or a reimplementation of __new__()), the metaclass is initialized by calling its __init__() method. The arguments given to __init__() are cls, the class that's just been created; classname, the class's name (also available from cls.__name__); bases, a list of the class's base classes (excluding object, and therefore possibly empty); and dictionary, which holds the attributes that became class attributes when the cls class was created, unless we intervened in a reimplementation of the metaclass's __new__() method.

Here are a couple of interactive examples that show what happens when we create classes using the LoadableSaveable metaclass:

    >>> class Bad(metaclass=Meta.LoadableSaveable):
    ...     def some_method(self): pass
    Traceback (most recent call last):
    ...
    AssertionError: class 'Bad' must provide a load() method

The metaclass specifies that classes using it must provide certain methods, and when they don't, as in this case, an AssertionError exception is raised.

    >>> class Good(metaclass=Meta.LoadableSaveable):
    ...     def load(self): pass
    ...     def save(self): pass
    >>> g = Good()

The Good class honors the metaclass's API requirements, even if it doesn't meet our informal expectations of how it should behave.

We can also use metaclasses to change the classes that use them. If the change involves the name, base classes, or dictionary of the class being created (e.g., its slots), then we need to reimplement the metaclass's __new__() method; but for other changes, such as adding methods or data attributes, reimplementing __init__() is sufficient, although such changes can also be done in __new__(). We will now look at a metaclass that modifies the classes it is used with, purely through its __new__() method.

As an alternative to using the @property and @name.setter decorators, we could create classes where we use a simple naming convention to identify properties. For example, if a class has methods of the form get_name() and set_name(), we would expect the class to have a private __name property accessed using instance.name, for both getting and setting.
This can all be done using a metaclass. Here is an example of a class that uses this convention:

    class Product(metaclass=AutoSlotProperties):

        def __init__(self, barcode, description):
            self.__barcode = barcode
            self.description = description

        def get_barcode(self):
            return self.__barcode

        def get_description(self):
            return self.__description

        def set_description(self, description):
            if description is None or len(description) < 3:
                self.__description = "<Invalid Description>"
            else:
                self.__description = description

We must assign to the private __barcode property in the initializer, since there is no setter for it; one consequence of this is that barcode is a read-only property. On the other hand, description is a readable/writable property. Here are some examples of interactive use:

    >>> product = Product("101110110", "8mm Stapler")
    >>> product.barcode, product.description
    ('101110110', '8mm Stapler')
    >>> product.description = "8mm Stapler (long)"
    >>> product.barcode, product.description
    ('101110110', '8mm Stapler (long)')

If we attempt to assign to the bar code, an AttributeError exception is raised, with the error text "can't set attribute". If we look at the Product class's attributes (e.g., using dir()), the only public ones to be found are barcode and description. The get_name() and set_name() methods are no longer there; they have been replaced with the name property. And the variables holding the bar code and description are also private (__barcode and __description), and have been added as slots to minimize the class's memory use. This is all done by the AutoSlotProperties metaclass, which is implemented in a single method:

    class AutoSlotProperties(type):

        def __new__(mcl, classname, bases, dictionary):
            slots = list(dictionary.get("__slots__", []))
            for getter_name in [key for key in dictionary
                    if key.startswith("get_")]:
                if isinstance(dictionary[getter_name],
                        collections.Callable):
                    name = getter_name[4:]
                    slots.append("__" + name)
                    getter = dictionary.pop(getter_name)
                    setter_name = "set_" + name
                    setter = dictionary.get(setter_name, None)
                    if (setter is not None and
                            isinstance(setter, collections.Callable)):
                        del dictionary[setter_name]
                    dictionary[name] = property(getter, setter)
            dictionary["__slots__"] = tuple(slots)
            return super().__new__(mcl, classname, bases, dictionary)

A metaclass's __new__() class method is called with the metaclass, and the class name, base classes, and dictionary of the class that is to be created. We must use a reimplementation of __new__() rather than __init__(), because we want to change the dictionary before the class is created.

We begin by copying the __slots__ collection, creating an empty one if none is present, and making sure we have a list rather than a tuple, so that we can modify it. For every attribute in the dictionary we pick out those that begin with "get_" and that are callable, that is, those that are getter methods. For each getter we add a private name to the slots to store the corresponding data; for example, given the getter get_name(), we add __name to the slots. We then take a reference to the getter and delete it from the dictionary under its original name (this is done in one go using dict.pop()). We do the same for the setter if one is present, and then we create a new dictionary item with the desired property name as its key; for example, if the getter is get_name(), the property name is name. We set the item's value to be a property with the getter and setter (which might be None) that we have found and removed from the dictionary.

At the end we replace the original slots with the modified slots list, which has a private slot for each property that was added, and call on the base class to actually create the class, using our modified dictionary. Note that in this case we must pass the metaclass explicitly in the super() call; this is always the case for calls to __new__(), because it is a class method and not an instance method.

For this example we didn't need to write an __init__() method, because we have done all the work in __new__(); but it is perfectly possible to reimplement both __new__() and __init__(), doing different work in each.

If we consider hand-cranked drills to be analogous to aggregation and inheritance, and electric drills the analog of decorators and descriptors, then metaclasses are at the laser beam end of the scale when it comes to power and versatility.
Metaclasses are the last tool to reach for, rather than the first, except perhaps for application framework developers, who need to provide powerful facilities to their users without making the users go through hoops to realize the benefits on offer.

Functional-Style Programming

Functional-style programming is an approach to programming where computations are built up from combining functions that don't modify their arguments, that don't refer to or change the program's state, and that provide their results as return values. One strong appeal of this kind of programming is that (in theory) it is much easier to develop functions in isolation, and to debug functional programs. This is helped by the fact that functional programs don't have state changes, so it is possible to reason about their functions mathematically.

Three concepts that are strongly associated with functional programming are mapping, filtering, and reducing.

Mapping involves taking a function and an iterable and producing a new iterable (or a list), where each item is the result of calling the function on the corresponding item in the original iterable. This is supported by the built-in map() function, for example:

    list(map(lambda x: x ** 2, [1, 2, 3, 4]))
    # returns: [1, 4, 9, 16]

The map() function takes a function and an iterable as its arguments, and for efficiency it returns an iterator rather than a list. Here we forced a list to be created to make the result clearer.

    [x ** 2 for x in [1, 2, 3, 4]]
    # returns: [1, 4, 9, 16]

A generator expression can often be used in place of map(). Here we have used a list comprehension to avoid the need to use list(); to make it a generator we just have to change the outer brackets to parentheses.

Filtering involves taking a function and an iterable and producing a new iterable where each item is from the original iterable, providing the function returns True when called on the item. The built-in filter() function supports this:

    list(filter(lambda x: x > 0, [1, -2, 3, -4]))
    # returns: [1, 3]

The filter() function takes a function and an iterable as its arguments and returns an iterator.

    [x for x in [1, -2, 3, -4] if x > 0]
    # returns: [1, 3]
8,929 | The filter() function can always be replaced with a generator expression or with a list comprehension. Reducing involves taking a function and an iterable and producing a single result value. The way this works is that the function is called on the iterable's first two values, then on the computed result and the third value, then on the computed result and the fourth value, and so on, until all the values have been used. The functools module's functools.reduce() function supports this. Here are two lines of code that do the same computation:

    functools.reduce(lambda x, y: x * y, [1, 2, 3, 4])    # returns: 24
    functools.reduce(operator.mul, [1, 2, 3, 4])          # returns: 24

The operator module has functions for all of Python's operators, specifically to make functional-style programming easier. Here, in the second line, we have used the operator.mul() function rather than having to create a multiplication function using lambda as we did in the first line. Python also provides some built-in reducing functions: all(), which given an iterable, returns True if all the iterable's items return True when bool() is applied to them; any(), which returns True if any of the iterable's items is True; max(), which returns the largest item in the iterable; min(), which returns the smallest item in the iterable; and sum(), which returns the sum of the iterable's items. Now that we have covered the key concepts, let us look at a few more examples. We will start with a couple of ways to get the total size of all the files in list files:

    functools.reduce(operator.add, (os.path.getsize(x) for x in files))
    functools.reduce(operator.add, map(os.path.getsize, files))

Using map() is often shorter than the equivalent list comprehension or generator expression except where there is a condition. We've used operator.add() as the addition function instead of lambda x, y: x + y. If we only wanted to count the .py file sizes we can filter out non-Python files. Here are three ways to do this:

    functools.reduce(operator.add, map(os.path.getsize,
            filter(lambda x: x.endswith(".py"), files)))
    functools.reduce(operator.add, map(os.path.getsize,
            (x for x in files if x.endswith(".py"))))
    functools.reduce(operator.add, (os.path.getsize(x)
            for x in files if x.endswith(".py")))

Arguably, the second and third versions are better because they don't require us to create a lambda function, but the choice between using generator expressions (or list comprehensions) and map() and filter() is most often purely a matter of personal programming style.
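One practical caveat worth noting here: functools.reduce() raises a TypeError on an empty sequence unless it is given an initial value as its third argument. A small runnable illustration, with an assumed list of filenames:

    import functools
    import operator
    import os

    files = ["a.py", "b.py", "notes.txt"]    # assumed filenames

    # The third argument is the start value, so an empty (or fully
    # filtered-out) sequence safely reduces to 0 instead of raising.
    total = functools.reduce(operator.add,
                             (os.path.getsize(x) for x in files
                              if x.endswith(".py")), 0)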
8,930 | Using map(), filter(), and functools.reduce() often leads to the elimination of loops, as the examples we have seen illustrate. These functions are useful when converting code written in a functional language, but in Python we can usually replace map() with a list comprehension and filter() with a list comprehension with a condition, and many cases of functools.reduce() can be eliminated by using one of Python's built-in functional functions such as all(), any(), max(), min(), and sum(). For example:

    sum(os.path.getsize(x) for x in files if x.endswith(".py"))

This achieves the same thing as the previous three examples, but is much more compact. In addition to providing functions for Python's operators, the operator module also provides the operator.attrgetter() and operator.itemgetter() functions, the first of which we briefly met earlier in this chapter. Both of these return functions which can then be called to extract the specified attributes or items. Whereas slicing can be used to extract a sequence of part of a list, and slicing with striding can be used to extract a sequence of parts (say, every third item with L[::3]), operator.itemgetter() can be used to extract a sequence of arbitrary parts, for example, operator.itemgetter(1, 3, 4)(L). The function returned by operator.itemgetter() does not have to be called immediately and thrown away as we have done here; it could be kept and passed as the function argument to map(), filter(), or functools.reduce(), or used in a dictionary, list, or set comprehension. When we want to sort we can specify a key function. This function can be any function, for example, a lambda function, a built-in function or method (such as str.lower()), or a function returned by operator.attrgetter(). For example, assuming list L holds objects with a priority attribute, we can sort the list into priority order like this:

    L.sort(key=operator.attrgetter("priority"))

In addition to the functools and operator modules already mentioned, the itertools module can also be useful for functional-style programming. For example, although it is possible to iterate over two or more lists by concatenating them, an alternative is to use itertools.chain() like this:

    for value in itertools.chain(data_list1, data_list2, data_list3):
        total += value

The itertools.chain() function returns an iterator that gives successive values from the first sequence it is given, then successive values from the second sequence, and so on, until all the values from all the sequences are used. The itertools module has many other functions, and its documentation gives many small yet useful examples and is well worth reading. (Note also that a couple of new functions were added to the itertools module with Python 3.1.)
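A runnable distillation of these three facilities follows; the list contents, the Task class, and the sample values are made up for illustration:

    import itertools
    import operator

    L = ["zero", "one", "two", "three", "four"]
    getter = operator.itemgetter(0, 2, 4)       # picks arbitrary positions
    print(getter(L))                            # ('zero', 'two', 'four')

    class Task:
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority

    tasks = [Task("b", 2), Task("a", 1)]
    tasks.sort(key=operator.attrgetter("priority"))
    print([t.name for t in tasks])              # ['a', 'b']

    total = 0
    for value in itertools.chain([1, 2], [3, 4], [5, 6]):
        total += value                          # iterates all six values
    print(total)                                # 21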
8,931 | Partial Function Application |

Partial function application is the creation of a function from an existing function and some arguments to produce a new function that does what the original function did, but with some arguments fixed so that callers don't have to pass them. Here's a very simple example:

    enumerate1 = functools.partial(enumerate, start=1)
    for lino, line in enumerate1(lines):
        process_line(lino, line)

The first line creates a new function, enumerate1(), that wraps the given function (enumerate()) and a keyword argument (start=1) so that when enumerate1() is called it calls the original function with the fixed argument—and with any other arguments that are given at the time it is called, in this case lines. Here we have used the enumerate1() function to provide conventional line counting starting from line 1. Using partial function application can simplify our code, especially when we want to call the same functions with the same arguments again and again. For example, instead of specifying the mode and encoding arguments every time we call open() to process UTF-8 encoded text files, we could create a couple of functions with these arguments fixed:

    reader = functools.partial(open, mode="rt", encoding="utf8")
    writer = functools.partial(open, mode="wt", encoding="utf8")

Now we can open text files for reading by calling reader(filename) and for writing by calling writer(filename). One very common use case for partial function application is in GUI (graphical user interface) programming (covered in a later chapter), where it is often convenient to have one particular function called when any one of a set of buttons is pressed. For example:

    loadButton = tkinter.Button(frame, text="Load",
            command=functools.partial(doAction, "load"))
    saveButton = tkinter.Button(frame, text="Save",
            command=functools.partial(doAction, "save"))

This example uses the tkinter GUI library that comes as standard with Python. The tkinter.Button class is used for buttons—here we have created two, both contained inside the same frame, and each with text that indicates its purpose. Each button's command argument is set to the function that tkinter must call when the button is pressed, in this case the doAction() function. We have used partial function application to ensure that the first argument given to the doAction() function is a string that indicates which button called it, so that doAction() is able to decide what action to perform.
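A runnable distillation of these examples follows; the data is made up, and tkinter is replaced by plain function calls so the snippet runs anywhere:

    import functools

    enumerate1 = functools.partial(enumerate, start=1)
    for lino, line in enumerate1(["first", "second"]):
        print(lino, line)                  # 1 first / 2 second

    def do_action(which):                  # stand-in for a button callback
        print(which, "button pressed")

    load = functools.partial(do_action, "load")
    save = functools.partial(do_action, "save")
    load()                                 # load button pressed
    save()                                 # save button pressed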
8,932 | Coroutines |

Coroutines are functions whose processing can be suspended and resumed at specific points. So, typically, a coroutine will execute up to a certain statement, then suspend execution while waiting for some data. At this point other parts of the program can continue to execute (usually other coroutines that aren't suspended). Once the data is received the coroutine resumes from the point it was suspended, performs processing (presumably based on the data it got), and possibly sends its results to another coroutine. Coroutines are said to have multiple entry and exit points, since they can have more than one place where they suspend and resume. Coroutines are useful when we want to apply multiple functions to the same pieces of data, or when we want to create data processing pipelines, or when we want to have a master function with slave functions. Coroutines can also be used to provide simpler and lower-overhead alternatives to threading. A few coroutine-based packages that provide lightweight threading are available from the Python Package Index, pypi.python.org/pypi. In Python, a coroutine is a function that takes its input from a yield expression. It may also send results to a receiver function (which itself must be a coroutine). Whenever a coroutine reaches a yield expression it suspends waiting for data, and once it receives data, it resumes execution from that point. A coroutine can have more than one yield expression, although each of the coroutine examples we will review has only one.

Performing Independent Actions on Data

If we want to perform a set of independent operations on some data, the conventional approach is to apply each operation in turn. The disadvantage of this is that if one of the operations is slow, the program as a whole must wait for the operation to complete before going on to the next one. A solution to this is to use coroutines. We can implement each operation as a coroutine and then start them all off. If one is slow it won't affect the others—at least not until they run out of data to process—since they all operate independently. The figure below illustrates the use of coroutines for concurrent processing. In the figure, three coroutines (each presumably doing a different job) process the same two data items—and take different amounts of time to do their work. In the figure, coroutine1() works quite quickly, coroutine2() works slowly, and coroutine3() varies.
8,933 | [Figure: Sending two items of data to three coroutines to process. The step table shows the three coroutines being created, the items "a" and "b" being fed to each of them in turn with coroutineN.send(), the coroutines processing at different speeds—some waiting while others still work—and each finally being terminated with coroutineN.close().]

Once all three coroutines have been given their initial data, if one is ever waiting (because it finishes first), the others continue to work, which minimizes processor idle time. Once we are finished using the coroutines we call close() on each of them; this stops them from waiting for more data, which means they won't consume any more processor time. To create a coroutine in Python, we simply create a function that has at least one yield expression—normally inside an infinite loop. When the yield is reached, the coroutine's execution is suspended waiting for data. Once the data is received the coroutine resumes processing (from the yield expression onward), and when it has finished it loops back to the yield to wait for more data. While one or more coroutines are suspended waiting for data, another one can execute. This can produce greater throughput than simply executing functions one after the other linearly. We will show how performing independent operations works in practice by applying several regular expressions to the text in a set of HTML files. The purpose is to output each file's URLs and level 1 and level 2 headings. We'll start by looking at the regular expressions, then the creation of the coroutine "matchers", and then we will look at the coroutines and how they are used.

    URL_RE = re.compile(r"""href=(?P<quote>['"])(?P<url>[^\1]+?)"""
                        r"""(?P=quote)""", re.IGNORECASE)
    flags = re.MULTILINE | re.IGNORECASE | re.DOTALL
    H1_RE = re.compile(r"<h1>(?P<h1>.+?)</h1>", flags)
    H2_RE = re.compile(r"<h2>(?P<h2>.+?)</h2>", flags)
8,934 | These regular expressions ("regexes" from now on) match an HTML href's URL and the text contained in <h1> and <h2> header tags. (Regular expressions are covered in a later chapter; understanding them is not essential to understanding this example.)

    receiver = reporter()
    matchers = (regex_matcher(receiver, URL_RE),
                regex_matcher(receiver, H1_RE),
                regex_matcher(receiver, H2_RE))

Since coroutines always have a yield expression, they are generators. So although here we create a tuple of matcher coroutines, in effect we are creating a tuple of generators. Each regex_matcher() is a coroutine that takes a receiver function (itself a coroutine) and a regex to match. Whenever the matcher matches, it sends the match to the receiver.

    @coroutine
    def regex_matcher(receiver, regex):
        while True:
            text = (yield)
            for match in regex.finditer(text):
                receiver.send(match)

The matcher starts by entering an infinite loop and immediately suspends execution, waiting for the yield expression to return text to apply the regex to. Once the text is received, the matcher iterates over every match it makes, sending each one to the receiver. Once the matching has finished the coroutine loops back to the yield and again suspends, waiting for more text. There is one tiny problem with the (undecorated) matcher—when it is first created it should commence execution so that it advances to the yield, ready to receive its first text. We could do this by calling the built-in next() function on each coroutine we create before sending it any data. But for convenience we have created the @coroutine decorator to do this for us.

    def coroutine(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            generator = function(*args, **kwargs)
            next(generator)
            return generator
        return wrapper

The @coroutine decorator takes a coroutine function, and calls the built-in next() function on it—this causes the function to be executed up to the first yield expression, ready to receive data.
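To see the decorator in action on its own, here is a minimal runnable demo; the running_total() coroutine is invented for illustration and is not part of the matcher example:

    import functools

    def coroutine(function):                  # repeated from above so this
        @functools.wraps(function)            # snippet runs on its own
        def wrapper(*args, **kwargs):
            generator = function(*args, **kwargs)
            next(generator)
            return generator
        return wrapper

    @coroutine
    def running_total():
        total = 0
        while True:
            value = (yield)       # suspend until send() delivers a value
            total += value
            print("total so far:", total)

    totaler = running_total()     # the decorator has already primed it
    totaler.send(5)               # prints: total so far: 5
    totaler.send(3)               # prints: total so far: 8
    totaler.close()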
8,935 | Now that we have seen the matcher coroutine, we will look at how the matchers are used, and then we will look at the reporter() coroutine that receives the matchers' outputs.

    try:
        for file in sys.argv[1:]:
            print(file)
            html = open(file, encoding="utf8").read()
            for matcher in matchers:
                matcher.send(html)
    finally:
        for matcher in matchers:
            matcher.close()
        receiver.close()

The program reads the filenames listed on the command line, and for each one prints the filename and then reads the file's entire text into the html variable using the UTF-8 encoding. Then the program iterates over all the matchers (three in this case), and sends the text to each of them. Each matcher then proceeds independently, sending each match it makes to the reporter coroutine. At the end we call close() on each matcher and on the reporter—this terminates them, since otherwise they would continue (suspended) waiting for text (or matches in the case of the reporter), since they contain infinite loops.

    @coroutine
    def reporter():
        ignore = frozenset({"style.css", "favicon.png", "index.html"})
        while True:
            match = (yield)
            if match is not None:
                groups = match.groupdict()
                if "url" in groups and groups["url"] not in ignore:
                    print("    URL:", groups["url"])
                elif "h1" in groups:
                    print("    H1: ", groups["h1"])
                elif "h2" in groups:
                    print("    H2: ", groups["h2"])

The reporter() coroutine is used to output results. It was created by the statement receiver = reporter() which we saw earlier, and passed as the receiver argument to each of the matchers. The reporter() waits (is suspended) until a match is sent to it, then it prints the match's details, and then it waits again, in an endless loop—stopping only if close() is called on it. Using coroutines like this may produce performance benefits, but does require us to adopt a somewhat different way of thinking about processing.
8,936 | Composing Pipelines

Sometimes it is useful to create data processing pipelines. A pipeline is simply the composition of one or more functions where data items are sent to the first function, which then either discards the item (filters it out) or passes it on to the next function (either as is or transformed in some way). The second function receives the item from the first function and repeats the process, discarding or passing on the item (possibly transformed in a different way) to the next function, and so on. Items that reach the end are then output in some way. Pipelines typically have several components: one that acquires data, one or more that filter or transform data, and one that outputs results. This is exactly the functional-style approach to programming that we discussed earlier in the chapter, when we looked at composing some of Python's built-in functions, such as filter() and map(). One benefit of using pipelines is that we can read data items incrementally, often one at a time, and have to give the pipeline only enough data items to fill it (usually one or a few items per component). This can lead to significant memory savings compared with, say, reading an entire data set into memory and then processing it all in one go.

[Figure: A three-step coroutine pipeline processing six items of data. The step table shows the pipeline get_data(process(reporter())) being created, items "a" through "f" being fed in with pipeline.send(), item "d" being dropped by process(), the remaining items being output, and the coroutines finally being closed.]

The figure illustrates a simple three-component pipeline. The first component of the pipeline (get_data()) acquires each data item to be processed in turn. The second component (process()) processes the data—and may drop unwanted data items—there could be any number of other processing/filtering components, of course. The last component (reporter()) outputs results. In the figure,
8,937 | items "a", "b", "c", "e", and "f" are processed and produce output, while item "d" is dropped. The pipeline shown in the figure is a filter, since each data item is passed through unchanged and is either dropped or output in its original form. The end points of pipelines tend to perform the same roles: acquiring data items and outputting results. But between these we can have as many components as necessary, each filtering or transforming or both. And in some cases, composing the components in different orders can produce pipelines that do different things. We will start out by looking at a theoretical example to get a better idea of how coroutine-based pipelines work, and then we will look at a real example. Suppose we have a sequence of floating-point numbers and we want to process them in a multicomponent pipeline such that we transform each number into an integer (by rounding), but drop any numbers that are out of range. If we had the four coroutine components, acquire() (get a number), to_int() (transform a number by rounding and converting it to an integer), check() (pass on a number that is in range; drop a number that is out of range), and output() (output a number), we could create the pipeline like this:

    pipe = acquire(to_int(check(output())))

We would then send numbers into the pipeline by calling pipe.send(). Using a different visualization from the step-by-step figures used earlier, a number's progress through the pipeline looks like this:

    pipe.send(number) → acquire(number) → to_int(rounded) → check(rounded) → output(rounded)

Notice that if the rounded number is out of range, check() drops it and there is no output. Let's see what would happen if we created a different pipeline, but using the same components:

    pipe = acquire(check(to_int(output())))

This simply performs the filtering (check()) before the transforming (to_int()):

    pipe.send(number) → acquire(number) → check(number) → to_int(rounded) → output(rounded)

Here we can incorrectly output a number even though its rounded value is out of range. This is because we applied the check() component first, and since this received an in-range value, it simply passed it on. But the to_int() component rounds the numbers it gets—so a number near the top of the range can be rounded up to a value that is out of range.
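Here is a minimal runnable sketch of these four components; the valid range of 0–9 and the sample values 4.3 and 9.6 are assumptions chosen for illustration:

    import functools

    def coroutine(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            generator = function(*args, **kwargs)
            next(generator)
            return generator
        return wrapper

    @coroutine
    def acquire(receiver):
        while True:
            number = (yield)                # accept a raw (float) number
            receiver.send(number)

    @coroutine
    def to_int(receiver):
        while True:
            number = (yield)
            receiver.send(round(number))    # transform: round to an integer

    @coroutine
    def check(receiver, minimum=0, maximum=9):   # range is an assumption
        while True:
            number = (yield)
            if minimum <= number <= maximum:     # filter: drop out-of-range
                receiver.send(number)

    @coroutine
    def output():
        while True:
            number = (yield)
            print(number)

    pipe = acquire(to_int(check(output())))
    pipe.send(4.3)    # prints 4
    pipe.send(9.6)    # rounds to 10, dropped by check(): no output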
8,938 | We will now review a concrete example—a file matcher that reads all the filenames given on the command line (including those in the directories given on the command line, recursively), and that outputs the absolute paths of those files that meet certain criteria. We will start by looking at how pipelines are composed, and then we will look at the coroutines that provide the pipeline components. Here is the simplest pipeline:

    pipeline = get_files(receiver)

This pipeline prints every file it is given (or all the files in a directory it is given, recursively). The get_files() function is a coroutine that yields the filenames, and the receiver is a reporter() coroutine—created by receiver = reporter()—that simply prints each filename it receives. This pipeline does little more than the os.walk() function (and in fact uses that function), but we can use its components to compose more sophisticated pipelines.

    pipeline = get_files(suffix_matcher(receiver, (".htm", ".html")))

This pipeline is created by composing the get_files() coroutine together with the suffix_matcher() coroutine. It prints only HTML files. Coroutines composed like this can quickly become difficult to read, but there is nothing to stop us from composing a pipeline in stages—although for this approach we must create the components in last-to-first order.

    pipeline = size_matcher(receiver, minimum=1024 * 1024)
    pipeline = suffix_matcher(pipeline, (".png", ".jpg", ".jpeg"))
    pipeline = get_files(pipeline)

This pipeline only matches files that are at least one megabyte in size, and that have a suffix indicating that they are images. How are these pipelines used? We simply feed them filenames or paths and they take care of the rest themselves:

    for arg in sys.argv[1:]:
        pipeline.send(arg)

Notice that it doesn't matter which pipeline we are using—it could be the one that prints all the files, or the one that prints HTML files, or the images one—they all work in the same way. And in this case, all three of the pipelines are filters—any filename they get is either passed on as is to the next component (and in the case of the reporter(), printed), or dropped because it doesn't meet the criteria. Before looking at the get_files() and the matcher coroutines, we will look at the trivial reporter() coroutine (passed as receiver) that outputs the results.
8,939 | 
    @coroutine
    def reporter():
        while True:
            filename = (yield)
            print(filename)

We have used the same @coroutine decorator that we created in the previous subsubsection. The get_files() coroutine is essentially a wrapper around the os.walk() function that expects to be given paths or filenames to work on.

    @coroutine
    def get_files(receiver):
        while True:
            path = (yield)
            if os.path.isfile(path):
                receiver.send(os.path.abspath(path))
            else:
                for root, dirs, files in os.walk(path):
                    for filename in files:
                        receiver.send(os.path.abspath(
                                      os.path.join(root, filename)))

This coroutine has the now-familiar structure: an infinite loop in which we wait for the yield to return a value that we can process, and then we send the result to the receiver.

    @coroutine
    def suffix_matcher(receiver, suffixes):
        while True:
            filename = (yield)
            if filename.endswith(suffixes):
                receiver.send(filename)

This coroutine looks simple—and it is—but notice that it sends only filenames that match the suffixes, so any that don't match are filtered out of the pipeline.
8,940 | 
    @coroutine
    def size_matcher(receiver, minimum=None, maximum=None):
        while True:
            filename = (yield)
            size = os.path.getsize(filename)
            if ((minimum is None or size >= minimum) and
                (maximum is None or size <= maximum)):
                receiver.send(filename)

This coroutine is almost identical to suffix_matcher(), except that it filters out files whose size is not in the required range, rather than those which don't have a matching suffix. The pipeline we have created suffers from a couple of problems. One problem is that we never close any of the coroutines. In this case it doesn't matter, since the program terminates once the processing is finished, but it is probably better to get into the habit of closing coroutines when we are finished with them. Another problem is that potentially we could be asking the operating system (under the hood) for different pieces of information about the same file in several parts of the pipeline—and this could be slow. A solution is to modify the get_files() coroutine so that it returns (filename, os.stat()) 2-tuples for each file rather than just filenames, and then pass these 2-tuples through the pipeline. This would mean that we acquire all the relevant information just once per file. (The os.stat() function takes a filename and returns a named tuple with various items of information about the file, including its size, mode, and last modified date/time.) You'll get the chance to solve both of these problems, and to add additional functionality, in an exercise at the end of the chapter. Creating coroutines for use in pipelines requires a certain reorientation of thinking. However, it can pay off handsomely in terms of flexibility, and for large data sets can help minimize the amount of data held in memory, as well as potentially resulting in faster throughput.
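The fix suggested above might look roughly like this—a sketch, not the book's exercise solution—with get_files() sending (filename, os.stat()) 2-tuples so that later components never have to ask the operating system about a file again:

    import functools
    import os

    def coroutine(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            generator = function(*args, **kwargs)
            next(generator)
            return generator
        return wrapper

    @coroutine
    def get_files(receiver):
        while True:
            path = (yield)
            if os.path.isfile(path):
                path = os.path.abspath(path)
                receiver.send((path, os.stat(path)))
            else:
                for root, dirs, files in os.walk(path):
                    for name in files:
                        name = os.path.abspath(os.path.join(root, name))
                        receiver.send((name, os.stat(name)))

    @coroutine
    def size_matcher(receiver, minimum=None, maximum=None):
        while True:
            filename, stat = (yield)          # stat was gathered once
            if ((minimum is None or stat.st_size >= minimum) and
                (maximum is None or stat.st_size <= maximum)):
                receiver.send((filename, stat))   # pass the tuple on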
8,941 | Example: Valid.py ||

In this section we combine descriptors with class decorators to create a powerful mechanism for creating validated attributes. Up to now, if we wanted to ensure that an attribute was set to only a valid value, we have relied on properties (or used getter and setter methods). The disadvantage of such approaches is that we must add validating code for every attribute in every class that needs it. What would be much more convenient, and easier to maintain, is if we could add attributes to classes with the necessary validation built in. Here is an example of the syntax we would like to use:

    @valid_string("name", empty_allowed=False)
    @valid_string("productid", empty_allowed=False,
                  regex=re.compile(r"[A-Z]{3}\d{4}"))
    @valid_string("category", empty_allowed=False, acceptable=
                  frozenset(["Consumables", "Hardware", "Software", "Media"]))
    @valid_number("price", minimum=0, maximum=1e6)
    @valid_number("quantity", minimum=1, maximum=1000)
    class StockItem:

        def __init__(self, name, productid, category, price, quantity):
            self.name = name
            self.productid = productid
            self.category = category
            self.price = price
            self.quantity = quantity

The StockItem class's attributes are all validated. For example, the productid attribute can be set only to a nonempty string that starts with three uppercase letters and ends with four digits; the category attribute can be set only to a nonempty string that is one of the specified values; and the quantity attribute can be set only to a number between 1 and 1000 inclusive. If we try to set an invalid value an exception is raised. The validation is achieved by combining class decorators with descriptors. As we noted earlier, class decorators can take only a single argument—the class they are to decorate. So here we have used the technique shown when we first discussed class decorators, and have the valid_string() and valid_number() functions take whatever arguments we want, and then return a decorator, which in turn takes a class and returns a modified version of the class. Let's now look at the valid_string() function:

    def valid_string(attr_name, empty_allowed=True, regex=None,
                     acceptable=None):
        def decorator(cls):
            name = "__" + attr_name
            def getter(self):
                return getattr(self, name)
            def setter(self, value):
                assert isinstance(value, str), (attr_name +
                                                " must be a string")
                if not empty_allowed and not value:
                    raise ValueError("{0} may not be empty".format(
                                     attr_name))
                if ((acceptable is not None and value not in acceptable) or
                    (regex is not None and not regex.match(value))):
                    raise ValueError("{attr_name} cannot be set to "
                                     "{value}".format(**locals()))
                setattr(self, name, value)
            setattr(cls, attr_name, GenericDescriptor(getter, setter))
            return cls
        return decorator

The function starts by creating a class decorator function which takes a class as its sole argument. The decorator adds two attributes to the class it decorates: a private data attribute and a descriptor. For example, when the valid_string()
8,942 | function is called with the name "productid", the StockItem class gains the attribute __productid, which holds the product ID's value, and the descriptor productid attribute, which is used to access the value. For example, if we create an item using item = StockItem("TV", "TVA1234", "Electrical", 500, 1), we can get the product ID using item.productid and set it using, for example, item.productid = "TVB1234". The getter function created by the decorator simply uses the global getattr() function to return the value of the private data attribute. The setter function incorporates the validation, and at the end, uses setattr() to set the private data attribute to the new (and valid) value. (In fact, the private data attribute is only created the first time it is set.) Once the getter and setter functions have been created we use setattr() once again, this time to create a new class attribute with the given name (e.g., productid), and with its value set to be a descriptor of type GenericDescriptor. At the end, the decorator function returns the modified class, and the valid_string() function returns the decorator function. The valid_number() function is structurally identical to the valid_string() function, only differing in the arguments it accepts and in the validation code in the setter, so we won't show it here. (The complete source code is in the valid.py module.) The last thing we need to cover is the GenericDescriptor, and that turns out to be the easiest part:

    class GenericDescriptor:

        def __init__(self, getter, setter):
            self.getter = getter
            self.setter = setter

        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            return self.getter(instance)

        def __set__(self, instance, value):
            return self.setter(instance, value)

The descriptor is used to hold the getter and setter functions for each attribute, and simply passes on the work of getting and setting to those functions.
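Since valid_number() is not shown, here is a hedged sketch of how it might look, mirroring the structure of valid_string(); this is a guess at the shape of the code, not the contents of the book's valid.py, and it assumes the GenericDescriptor class shown above is in scope:

    import numbers

    def valid_number(attr_name, minimum=None, maximum=None):
        def decorator(cls):
            name = "__" + attr_name
            def getter(self):
                return getattr(self, name)
            def setter(self, value):
                # Accept any numeric type (int, float, Decimal, ...).
                assert isinstance(value, numbers.Number), (
                       attr_name + " must be a number")
                if minimum is not None and value < minimum:
                    raise ValueError("{0} {1} is too small".format(
                                     attr_name, value))
                if maximum is not None and value > maximum:
                    raise ValueError("{0} {1} is too big".format(
                                     attr_name, value))
                setattr(self, name, value)
            setattr(cls, attr_name, GenericDescriptor(getter, setter))
            return cls
        return decorator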
8,943 | summary advanced programming techniques ||in this we learned lot more about python' support for procedural and object-oriented programmingand got taste of python' support for functional-style programming in the first section we learned how to create generator expressionsand covered generator functions in more depth we also learned how to dynamically import modules and how to access functionality from such modulesas well as how to dynamically execute code in this section we saw examples of how to create and use recursive functions and nonlocal variables we also learned how to create custom function and method decoratorsand how to write and make use of function annotations in the second section we studied variety of different and more advanced aspects of object-oriented programming first we learned more about attribute accessfor exampleusing the __getattr__(special method then we learned about functors and saw how we could use them to provide functions with state--something that can also be achieved by adding properties to functions or using closuresboth covered in this we learned how to use the with statement with context managers and how to create custom context managers since python' file objects are also context managersfrom now on we will do our file handling using try with except structures that ensure that opened files are closed without the need for finally blocks the second section continued with coverage of more advanced object-oriented featuresstarting with descriptors these can be used in wide variety of ways and are the technology that underlies many of python' standard decorators such as @property and @classmethod we learned how to create custom descriptors and saw three very different examples of their use next we studied class decorators and saw how we could modify class in much the same way that function decorator can modify function in the last three subsections of the second section we learned about python' support for abcs (abstract base classes)multiple inheritanceand metaclasses we learned how to make our own classes fit in with python' standard abcs and how to create our own abcs we also saw how to use multiple inheritance to unify the features of different classes together in single class and from the coverage of metaclasses we learned how to intervene when class (as opposed to an instance of classis created and initialized the penultimate section introduced some of the functions and modules that python provides to support functional-style programming we learned how to use the common functional idioms of mappingfilteringand reducing we also learned how to create partial functions and how to create and use coroutines |
8,944 | and the last section showed how to combine class decorators with descriptors to provide powerful and flexible mechanism for creating validated attributes this completes our coverage of the python language itself not every feature of the language has been covered here and in the previous but those that have not are obscure and rarely used none of the subsequent introduces new language featuresalthough all of them make use of modules from the standard library that have not been covered beforeand some of them take techniques shown in this and earlier further than we have seen so far furthermorethe programs shown in the following have none of the constraints that have applied previously ( to only use aspects of the language that had been covered up to the point they were introduced)so they are the book' most idiomatic examples exercises ||none of the first three exercises described here requires writing lot of code-although the fourth one does--and none of them are easy copy the magic-numbers py program and delete its get_function(functionsand all but one of its load_modules(functions add getfunction functor class that has two cachesone to hold functions that have been found and one to hold functions that could not be found (to avoid repeatedly looking for function in module that does not have the functionthe only modifications to main(are to add get_function getfunction(before the loopand to use with statement to avoid the need for finally block alsocheck that the module functions are callable using collections callable rather than using hasattr(the class can be written in about twenty lines solution is in magic-numbers_ans py create new module file and in it define three functionsis_ascii(that returns true if all the characters in the given string have code points less than is_ascii_punctuation(that returns true if all the characters are in the string punctuation stringand is_ascii_printable(that returns true if all the characters are in the string printable string the last two are structurally the same each function should be created using lambda and can be done in one or two lines using functional-style code be sure to add docstring for each one with doctests and to make the module run the doctests the functions require only three to five lines for all three of themwith the whole module fewer than lines including doctests solution is given in ascii py create new module file and in it define the atomic context manager class this class should work like the atomiclist class shown in this except that instead of working only with lists it should work with any mutable collection type the __init__(method should check the suitability |
8,945 | advanced programming techniques of the containerand instead of storing shallow/deep copy flag it should assign suitable function to the self copy attribute depending on the flag and call the copy function in the __enter__(method the __exit__(method is slightly more involved because replacing the contents of lists is different than for sets and dictionaries--and we cannot use assignment because that would not affect the original container the class itself can be written in about thirty linesalthough you should also include doctests solution is given in atomic py which is about one hundred fifty lines including doctests create program that finds files based on specified criteria (rather like the unix find programthe usage should be find py options files_or_paths all the options are optionaland without them all the files listed on the command line and all the files in the directories listed on the command line (and in their directoriesrecursivelyshould be listed the options should restrict which files are output as follows- or --days integer discards any files older than the specified number of days- or --bigger integer discards any files smaller than the specified number of bytes- or --smaller integer discards any files bigger than the specified number of bytes- or --output what where what is "date""size"or "date,size(either way aroundspecifies what should be output--filenames should always be output- or --suffix discards any files that don' have matching suffix (multiple suffixes can be given if comma-separated for both the bigger and smaller optionsif the integer is followed by "kit should be treated as kilobytes and multipled by and similarly if followed by "mtreated as megabytes and multiplied by for examplefind py - - date,size will find all files modified today (strictlythe past hours)and output their namedateand size similarlyfind py - - png,jpg,jpeg - size will find all image files bigger than one megabyte and output their names and sizes implement the program' logic by creating pipeline using coroutines to provide matcherssimilar to what we saw in the coroutines subsectiononly this time pass (filenameos stat() -tuples for each file rather than just filenames alsotry to close all the pipeline components at the end in the solution providedthe biggest single function is the one that handles the command-line options the rest is fairly straightforwardbut not trivial the find py solution is around lines |
8,946 | debugging unit testing profiling debuggingtestingand profiling |||writing programs is mixture of artcraftand scienceand because it is done by humansmistakes are made fortunatelythere are techniques we can use to help avoid problems in the first placeand techniques for identifying and fixing mistakes when they become apparent mistakes fall into several categories the quickest to reveal themselves and the easiest to fix are syntax errorssince these are usually due to typos more challenging are logical errors--with thesethe program runsbut some aspect of its behavior is not what we intended or expected many errors of this kind can be prevented from happening by using tdd (test driven development)where when we want to add new featurewe begin by writing test for the feature--which will fail since we haven' added the feature yet--and then implement the feature itself another mistake is to create program that has needlessly poor performance this is almost always due to poor choice of algorithm or data structure or both howeverbefore attempting any optimization we should start by finding out exactly where the performance bottleneck lies--since it might not be where we expect--and then we should carefully decide what optimization we want to dorather than working at random in this first section we will look at python' tracebacks to see how to spot and fix syntax errors and how to deal with unhandled exceptions then we will see how to apply the scientific method to debugging to make finding errors as fast and painless as possible we will also look at python' debugging support in the second section we will look at python' support for writing unit testsand in particular the doctest module we saw earlier (in and )and the unittest module we will see how to use these modules to support tdd in the final section we will briefly look at profilingto identify performance hot spots so that we can properly target our optimization efforts |
8,947 | debuggingtestingand profiling debugging ||in this section we will begin by looking at what python does when there is syntax errorthen at the tracebacks that python produces when unhandled exceptions occurand then we will see how to apply the scientific method to debugging but before all that we will briefly discuss backups and version control when editing program to fix bug there is always the risk that we end up with program that has the original bug plus new bugsthat isit is even worse than it was when we startedand if we haven' got any backups (or we have but they are several changes out of date)and we don' use version controlit could be very hard to even get back to where we just had the original bug making regular backups is an essential part of programming--no matter how reliable our machine and operating system are and how rare failures are--since failures still occur but backups tend to be coarse-grainedwith files hours or even days old version control systems allow us to incrementally save changes at whatever level of granularity we want--every single changeor every set of related changesor simply every so many minutesworth of work version control systems allow us to apply changes ( to experiment with bugfixes)and if they don' work outwe can revert the changes back to the last "goodversion of the code so before starting to debugit is always best to check our code into the version control system so that we have known position that we can revert to if we get into mess there are many good cross-platform open source version control systems available--this book uses bazaar (bazaar-vcs org)but other popular ones include mercurial (mercurial selenic com)git (git-scm com)and subversion (subversion tigris orgincidentallyboth bazaar and mercurial are mostly written in python none of these systems is hard to use (at least for the basics)but using any one of them will help avoid lot of unnecessary pain dealing with syntax errors |if we try to run program that has syntax errorpython will stop execution and print the filenameline numberand offending linewith caret (^underneath indicating exactly where the error was detected here' an examplefile "blocks py"line if blockoutput save_blocks_as_svg(blockssvgsyntaxerrorinvalid syntax |
8,948 | Did you see the error? We've forgotten to put a colon at the end of the if statement's condition. Here is an example that comes up quite often, but where the problem isn't at all obvious:

      File "blocks.py", line ...
        except ValueError as err:
    SyntaxError: invalid syntax

There is no syntax error in the line indicated, so both the line number and the caret's position are wrong. In general, when we are faced with an error that we are convinced is not in the specified line, in almost every case the error will be in an earlier line. Here's the code from the try to the except where Python is reporting the error to be—see if you can spot the error before reading the explanation that follows the code:

    try:
        blocks = parse(blocks)
        svg = file.replace(".blk", ".svg")
        if not BlockOutput.save_blocks_as_svg(blocks, svg):
            print("Error: failed to save {0}".format(svg)
    except ValueError as err:

Did you spot the problem? It is certainly easy to miss, since it is on the line before the one that Python reports as having the error. We have closed the str.format() method's parentheses, but not the print() function's; that is, we are missing a closing parenthesis at the end of the line, but Python didn't realize this until it reached the except keyword on the following line. Missing the last parenthesis on a line is quite common, especially when using print() with str.format(), but the error is usually reported on the following line. Similarly, if a list's closing bracket, or a set or dictionary's closing brace, is missing, Python will normally report the problem as being on the next (nonblank) line. On the plus side, syntax errors like these are trivial to fix.
8,949 | debuggingtestingand profiling blocks parse(blocksfile "blocks py"line in recursive_descent_parse return data stack[ indexerrorlist index out of range tracebacks (also called backtraceslike this should be read from their last line back toward their first line the last line specifies the unhandled exception that occurred above this linethe filenameline numberand function namefollowed by the line that caused the exceptionare shown (spread over two linesif the function where the exception was raised was called by another functionthat function' filenameline numberfunction nameand calling line are shown above and if that function was called by another function the same appliesall the way up to the beginning of the call stack (note that the filenames in tracebacks are given with their pathbut in most cases we have omitted paths from the examples for the sake of clarity function references so in this examplean indexerror occurredmeaning that data stack is some kind of sequencebut has no item at position the error occurred at line in the blocks py program' recursive_descent_parse(functionand that function was called at line in the main(function (the reason that the function' name is different at line that isparse(instead of recursive_descent_parse()is that the parse variable is set to one of several different functions depending on the command-line arguments given to the programin the common case the names always match the call to main(was made at line and this is the statement at which program execution commenced although at first sight the traceback looks intimidatingnow that we understand its structure it is easy to see how useful it is in this case it tells us exactly where to look for the problemalthough of course we must work out for ourselves what the solution is here is another example tracebacktraceback (most recent call last)file "blocks py"line in main(file "blocks py"line in main if blockoutput save_blocks_as_svg(blockssvg)file "blockoutput py"line in save_blocks_as_svg widthsrows compute_widths_and_rows(cellsscale_byfile "blockoutput py"line in compute_widths_and_rows width len(cell text/cell columns zerodivisionerrorinteger division or modulo by zero herethe problem has occurred in module (blockoutput pythat is called by the blocks py program this traceback leads us to where the problem became apparentbut not to where it occurred the value of cell columns is |
8,950 | clearly in the blockoutput py module' compute_widths_and_rows(function on line --after allthat is what caused the zerodivisionerror exception to be raised--but we must look at the preceding lines to find where and why cell columns was given this incorrect value in some cases the traceback reveals an exception that occurred in python' standard library or in third-party library although this could mean bug in the libraryin almost every case it is due to bug in our own code here is an example of such tracebackusing python traceback (most recent call last)file "blocks py"line in main(file "blocks py"line in main blocks open(fileencoding="utf "read(file "/usr/lib/python /lib/python /io py"line in __new__ return open(*args**kwargsfile "/usr/lib/python /lib/python /io py"line in open closefdfile "/usr/lib/python /lib/python /io py"line in __init__ _fileio _fileio __init__(selfnamemodeclosefdioerror[errno no such file or directory'hierarchy blkthe ioerror exception at the end tells us clearly what the problem is but the exception was raised in the standard library' io module in such cases it is best to keep reading upward until we find the first file listed that is our program' file (or one of the modules we have created for itso in this case we find that the first reference to our program is to file blocks pyline in the main(function it looks like we have call to open(but have not put the call inside try except block or used with statement python is bit smarter than python and realizes that we want to find the mistake in our own codenot in the standard libraryso it produces much more compact and helpful traceback for exampletraceback (most recent call last)file "blocks py"line in main(file "blocks py"line in main blocks open(fileencoding="utf "read(ioerror[errno no such file or directory'hierarchy blkthis eliminates all the irrelevant detail and makes it easy to see what the problem is (on the bottom lineand where it occurred (the lines above itso no matter how big the traceback isthe last line always specifies the unhandled exceptionand we just have to work back until we find our program' file |
8,951 | debuggingtestingand profiling or one of our own modules listed the problem will almost certainly be on the line python specifiesor on an earlier line this particular example illustrates that we should modify the blocks py program to cope gracefully when given the names of nonexistent files this is usability errorand it should also be classified as logical errorsince terminating and printing traceback cannot be considered to be acceptable program behavior in factas matter of good policy and courtesy to our userswe should always catch all relevant exceptionsidentifying the specific ones that we consider to be possiblesuch as environmenterror in generalwe should not use the catchalls of exceptor except exception:although using the latter at the top level of our program to avoid crashes might be appropriate--but only if we always report any exceptions it catches so that they don' go silently unnoticed exceptions that we catch and cannot recover from should be reported in the form of error messagesrather than exposing our users to tracebacks which look scary to the uninitiated for gui programs the same appliesexcept that normally we would use message box to notify the user of problem and for server programs that normally run unattendedwe should write the error message to the server' log python' exception hierarchy was designed so that catching exception doesn' quite cover all the exceptions in particularit does not catch the keyboardinterrupt exceptionso for console applications if the user presses ctrl+cthe program will terminate if we choose to catch this exceptionthere is risk that we could lock the user into program that they cannot terminate this arises because bug in our exception handling code might prevent the program from terminating or the exception propagating (of courseeven an "uninterruptibleprogram can have its process killedbut not all users know how so if we do catch the keyboardinterrupt exception we must be extremely careful to do the minimum amount of saving and clean up that is necessary--and then terminate the program and for programs that don' need to save or clean upit is best not to catch keyboardinterrupt at alland just let the program terminate one of python ' great virtues is that it makes clear distinction between raw bytes and strings howeverthis can sometimes lead to unexpected exceptions occurring when we pass bytes object where str is expected or vice versa for exampletraceback (most recent call last)file "program py"line in print(datetime datetime strptime(dateformat)typeerrorstrptime(argument must be strnot bytes when we hit problem like this we can either perform the conversion--in this caseby passing date decode("utf ")--or carefully work back to find out where |
8,952 | and why the variable is bytes object rather than strand fix the problem at its source when we pass string where bytes are expected the error message is somewhat less obviousand differs between python and for examplein python traceback (most recent call last)file "program py"line in data write(infotypeerrorexpected an object with buffer interface in python the error message' text has been slightly improvedtraceback (most recent call last)file "program py"line in data write(infotypeerror'strdoes not have the buffer interface in both cases the problem is that we are passing string when bytesbytearrayor similar object is expected we can either perform the conversion--in this case by passing info encode("utf ")--or work back to find the source of the problem and fix it there python introduced support for exception chaining--this means that an exception that is raised in response to another exception can contain the details of the original exception when chained exception goes uncaught the traceback includes not just the uncaught exceptionbut also the exception that caused it (providing it was chainedthe approach to debugging chained exceptions is almost the same as beforewe start at the end and work backward until we find the problem in our own code howeverrather than doing this just for the last exceptionwe might then repeat the process for each chained exception above ituntil we get to the problem' true origin we can take advantage of exception chaining in our own code--for exampleif we want to use custom exception class but still want the underlying problem to be visible class invaliddataerror(exception)pass def process(data)tryi int(dataexcept valueerror as errraise invaliddataerror("invalid data received"from err hereif the int(conversion failsa valueerror is raised and caught we then raise our custom exceptionbut with from errwhich creates chained |
8,953 | debuggingtestingand profiling exceptionour ownplus the one in err if the invaliddataerror exception is raised and not caughtthe resulting traceback will look something like thistraceback (most recent call last)file "application py"line in process int(datavalueerrorinvalid literal for int(with base ' the above exception was the direct cause of the following exceptiontraceback (most recent call last)file "application py"line in print(process(line)file "application py"line in process raise invaliddataerror("invalid data received"from err __main__ invaliddataerrorinvalid data received at the bottom our custom exception and text explain what the problem iswith the lines above them showing where the exception was raised (line )and where it was caused (line but we can also go back furtherinto the chained exception which gives more details about the specific errorand which shows the line that triggered the exception ( for detailed rationale and further information about chained exceptionssee pep scientific debugging |if our program runs but does not have the expected or desired behavior then we have bug-- logical error--that we must eliminate the best way to eliminate such errors is to prevent them from occurring in the first place by using tdd (test driven developmenthoweversome bugs will always get throughso even with tdddebugging is still necessary skill to learn in this subsection we will outline an approach to debugging based on the scientific method the approach is explained in sufficient detail that it might appear to be too much work for tackling "simplebug howeverby consciously following the process we will avoid wasting time with "randomdebuggingand after awhile we will internalize the process so that we can do it unconsciouslyand therefore very quickly to be able to kill bug we must be able to do the following reproduce the bug locate the bug the ideas used in this subsection were inspired by the debugging in the book code complete by steve mcconnellisbn |
8,954 | fix the bug test the fix reproducing the bug is sometimes easy--it always occurs on every runand sometimes hard--it occurs intermittently in either case we should try to reduce the bug' dependenciesthat isfind the smallest input and the least amount of processing that can still produce the bug once we are able to reproduce the bugwe have the data--the input data and optionsand the incorrect results--that are needed so that we can apply the scientific method to finding and fixing it the method has three steps think up an explanation-- hypothesis--that reasonably accounts for the bug create an experiment to test the hypothesis run the experiment running the experiment should help to locate the bugand should also give us insight into its solution (we will return to how to create and run an experiment shortly once we have decided how to kill the bug--and have checked our code into our version control system so that we can revert the fix if necessary--we can write the fix once the fix is in place we must test it naturallywe must test to see if the bug it is intended to fix has gone away but this is not sufficientafter allour fix may have solved the bug we were concerned aboutbut the fix might also have introduced another bugone that affects some other aspect of the program so in addition to testing the bugfixwe must also run all of the program' tests to increase our confidence that the bugfix did not have any unwanted side effects some bugs have particular structureso whenever we fix bug it is always worth asking ourselves if there are other places in the program or its modules that might have similar bugs if there arewe can check to see if we already have tests that would reveal the bugs if they were presentand if notwe should add such testsand if that reveals bugsthen we must tackle them as described earlier now that we have good overview of the debugging processwe will focus in on just how we create and run experiments to test our hypotheses we begin with trying to isolate the bug depending on the nature of the program and of the bugwe might be able to write tests that exercise the programfor examplefeeding it data that is known to be processed correctly and gradually changing the data so that we can find exactly where processing fails once we have an idea of where the problem lies--either due to testing or based on reasoning--we can test our hypotheses |
What kind of hypothesis might we think up? Well, it could initially be as simple as the suspicion that a particular function or method is returning erroneous data when certain input data and options are used. Then, if this hypothesis proves correct, we can refine it to be more specific, for example, identifying a particular statement or suite in the function that we think is doing the wrong computation in certain cases.

To test our hypothesis we need to check the arguments that the function receives and the values of its local variables and the return value, immediately before it returns. We can then run the program with data that we know produces errors and check the suspect function. If the arguments coming into the function are not what we expect, then the problem is likely to be further up the call stack, so we would now begin the process again, this time suspecting the function that calls the one we have been looking at. But if all the incoming arguments are always valid, then we must look at the local variables and the return value. If these are always correct then we need to come up with a new hypothesis, since the suspect function is behaving correctly. But if the return value is wrong, then we know that we must investigate the function further.

In practice, how do we conduct an experiment, that is, how do we test the hypothesis that a particular function is misbehaving? One way to start is to "execute" the function mentally. This is possible for many small functions and for larger ones with practice, and has the additional benefit that it familiarizes us with the function's behavior. At best, this can lead to an improved or more specific hypothesis, for example, that a particular statement or suite is the site of the problem. But to conduct an experiment properly we must instrument the program so that we can see what is going on when the suspect function is called.

There are two ways to instrument a program: intrusively, by inserting print() statements, or (usually) non-intrusively, by using a debugger. Both approaches are used to achieve the same end and both are valid, but some programmers have a strong preference for one or the other. We'll briefly describe both approaches, starting with the use of print() statements.

When using print() statements, we can start by putting a print() statement right at the beginning of the function and have it print the function's arguments. Then, just before the (or each) return statement (or at the end of the function if there is no return statement), add print(locals(), "\n"). The built-in locals() function returns a dictionary whose keys are the names of the local variables and whose values are the variables' values. We can of course simply print the variables we are specifically interested in instead. Notice that we added an extra newline; we should also do this in the first print() statement so that a blank line appears between each set of variables to aid clarity. (An alternative to inserting print() statements directly is to use some kind of logging decorator such as the one we created earlier.)
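As an illustration, here is a sketch of a small function instrumented in exactly this way. The function itself (a median calculator, echoing the debugger example that follows) is our own illustration, not code from the book:

    def calculate_median(numbers):
        print("calculate_median:", numbers, "\n")   # print the arguments on entry
        numbers = sorted(numbers)
        middle = len(numbers) // 2
        median = numbers[middle]
        if len(numbers) % 2 == 0:
            median = (median + numbers[middle - 1]) / 2
        print(locals(), "\n")                       # print all locals just before returning
        return median

Running the program with known-bad input and comparing the printed arguments and locals against what we expect is precisely the experiment the hypothesis calls for.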
If, when we run the instrumented program, we find that the arguments are correct but that the return value is in error, we know that we have located the source of the bug and can further investigate the function. If looking carefully at the function doesn't suggest where the problem lies, we can simply insert a new print(locals(), "\n") statement right in the middle. After running the program again we should now know whether the problem arises in the first or second half of the function, and can put a print(locals(), "\n") statement in the middle of the relevant half, repeating the process until we find the statement where the error is caused. This will very quickly get us to the point where the problem occurs, and in most cases locating the problem is half of the work needed to solve it.

The alternative to adding print() statements is to use a debugger. Python has two standard debuggers. One is supplied as a module (pdb), and can be used interactively in the console, for example, python3 -m pdb my_program.py. (On Windows, of course, we would replace python3 with something like C:\Python31\python.exe.) However, the easiest way to use it is to add import pdb in the program itself, and add the statement pdb.set_trace() as the first statement of the function we want to examine. When the program is run, pdb stops it immediately after the pdb.set_trace() call, and allows us to step through the program, set breakpoints, and examine variables.

Here is an example run of a program that has been instrumented by having the import pdb statement added to its imports, and by having pdb.set_trace() added as the first statement inside its calculate_median() function. (What we have typed is shown in bold, although where we typed Enter is not indicated.)

    python3 statistics.py sum.dat
    > statistics.py(...)calculate_median()
    -> numbers = sorted(numbers)
    (Pdb) s
    > statistics.py(...)calculate_median()
    -> middle = len(numbers) // 2
    (Pdb)
    > statistics.py(...)calculate_median()
    -> median = numbers[middle]
    (Pdb)
    > statistics.py(...)calculate_median()
    -> if len(numbers) % 2 == 0:
    (Pdb)
    > statistics.py(...)calculate_median()
    -> return median
    (Pdb) p middle, median, numbers
    (..., ..., [...])
    (Pdb) c
Commands are given to pdb by entering their name and pressing Enter at the (Pdb) prompt. If we just press Enter on its own the last command is repeated. So here we typed s (which means step, that is, execute the statement shown), and then repeated this (simply by pressing Enter), to step through the statements in the calculate_median() function. Once we reached the return statement we printed out the values that interested us using the p (print) command, and finally we continued to the end using the c (continue) command.

This tiny example should give a flavor of pdb, but of course the module has a lot more functionality than we have shown here. It is much easier to use pdb on an instrumented program as we have done here than on an uninstrumented one. But since this requires us to add an import and a call to pdb.set_trace(), it would seem that using pdb is just as intrusive as using print() statements, although it does provide useful facilities such as breakpoints.

The other standard debugger is IDLE, and just like pdb, it supports single stepping, breakpoints, and the examination of variables. IDLE's debugger window and its code editing window (with breakpoints and the current line highlighted) are shown in the accompanying figures.

[Figure: IDLE's debugger window showing the call stack and the current local variables.]

One great advantage IDLE has over pdb is that there is no need to instrument our code: IDLE is smart enough to debug our code as it stands, so it isn't intrusive at all. Unfortunately, at the time of this writing, IDLE is rather weak when it comes to running programs that require command-line arguments. The only way to do this appears to be to run IDLE from a console with the required arguments.
[Figure: An IDLE code editing window during debugging.]

For example:

    idle -d -r statistics.py sum.dat

The -d argument tells IDLE to start debugging immediately and the -r argument tells it to run the following program with any arguments that follow it. However, for programs that don't require command-line arguments (or where we are willing to edit the code to put them in manually to make debugging easier), IDLE is quite powerful and convenient to use. (Incidentally, the code shown in the figure does have a bug: middle should be middle - 1.)

Debugging Python programs is no harder than debugging in any other language, and it is easier than for compiled languages since there is no build step to go through after making changes. And if we are careful to use the scientific method it is usually quite straightforward to locate bugs, although fixing them is another matter. Ideally, though, we want to avoid as many bugs as possible in the first place. And apart from thinking deeply about our design and writing our code with care, one of the best ways to prevent bugs is to use TDD, a topic we will introduce in the next section.

Unit Testing

Writing tests for our programs, if done well, can help reduce the incidence of bugs and can increase our confidence that our programs behave as expected. But in general, testing cannot guarantee correctness, since for most nontrivial programs the range of possible inputs and the range of possible computations is so vast that only the tiniest fraction of them could ever be realistically tested. Nonetheless, by carefully choosing what we test we can improve the quality of our code.

A variety of different kinds of testing can be done, such as usability testing, functional testing, and integration testing. But here we will concern ourselves purely with unit testing: testing individual functions, classes, and methods, to ensure that they behave according to our expectations.
A key point of TDD is that when we want to add a feature, for example, a new method to a class, we first write a test for it. And of course this test will fail since we haven't written the method. Now we write the method, and once it passes the test we can then rerun all the tests to make sure our addition hasn't had any unexpected side effects. Once all the tests run (including the one we added for the new feature), we can check in our code, reasonably confident that it does what we expect, providing of course that our test was adequate.

For example, if we want to write a function that inserts a string at a particular index position, we might start out using TDD like this:

    def insert_at(string, position, insert):
        """Returns a copy of string with insert inserted at the position

        >>> string = "ABCDEF"
        >>> result = []
        >>> for i in range(-2, len(string) + 1):
        ...     result.append(insert_at(string, i, "-"))
        >>> result[:5]
        ['ABCD-EF', 'ABCDE-F', '-ABCDEF', 'A-BCDEF', 'AB-CDEF']
        >>> result[5:]
        ['ABC-DEF', 'ABCD-EF', 'ABCDE-F', 'ABCDEF-']
        """
        return string

For functions or methods that don't return anything (they actually return None), we normally give them a suite consisting of pass, and for those whose return value is used we either return a constant or one of the arguments, unchanged, which is what we have done here. (In more complex situations it may be more useful to return fake objects; third-party modules that provide "mock" objects are available for such cases.)

When the doctest is run it will fail, listing each of the strings ('ABCD-EF', 'ABCDE-F', etc.) that it expected, and the strings it actually got (all of which are 'ABCDEF'). Once we are satisfied that the doctest is sufficient and correct, we can write the body of the function, which in this case is simply return string[:position] + insert + string[position:]. (And if we wrote return string[:position] + insert and then copied and pasted string[:position] at the end to save ourselves some typing, the doctest will immediately reveal the error.)
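Putting the one-line body just described into the stub gives the finished function; this sketch keeps only one of the doctests for brevity:

    def insert_at(string, position, insert):
        """Returns a copy of string with insert inserted at the position

        >>> insert_at("ABCDEF", 2, "-")
        'AB-CDEF'
        """
        return string[:position] + insert + string[position:]

Rerunning the doctests now succeeds, which is the TDD cycle in miniature: a failing test, the smallest implementation that passes it, then a rerun of every test.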
Python's standard library provides two unit testing modules: doctest, which we have already briefly seen, and unittest. In addition, there are third-party testing tools for Python. Two of the most notable are nose (code.google.com/p/python-nose), which aims to be more comprehensive and useful than the standard unittest module, while still being compatible with it, and py.test (codespeak.net/py/dist/test/test.html), which takes a somewhat different approach to unittest, and tries as much as possible to eliminate boilerplate test code. Both of these third-party tools support test discovery, so there is no need to write an overarching test program, since they will search for tests themselves. This makes it easy to test an entire tree of code or just a part of the tree (for example, just those modules that have been worked on). For those serious about testing it is worth investigating both of these third-party modules (and any others that appeal), before deciding which testing tools to use.

Creating doctests is straightforward: we write the tests in the module, function, class, and method docstrings, and for modules, we simply add three lines at the end of the module:

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

If we want to use doctests inside programs, that is also possible. For example, the blocks.py program whose modules are covered in a later chapter has doctests for its functions, but it ends with this code:

    if __name__ == "__main__":
        main()

This simply calls the program's main() function, and does not execute the program's doctests. To exercise the program's doctests there are two approaches we can take. One is to import the doctest module and then run the program, for example, at the console, python3 -m doctest blocks.py (on Windows, replacing python3 with something like C:\Python31\python.exe). If all the tests run fine there is no output, so we might prefer to execute python3 -m doctest blocks.py -v instead, since this will list every doctest that is executed, and provide a summary of results at the end.

Another way to execute doctests is to create a separate test program using the unittest module. The unittest module is conceptually modeled on Java's JUnit unit testing library and is used to create test suites that contain test cases. The unittest module can create test cases based on doctests, without having to know anything about what the program or module contains, apart from the fact that it has doctests. So to make a test suite for the blocks.py program, we can create the following simple program (which we have called test_blocks.py):

    import doctest
    import unittest
    import blocks
    suite = unittest.TestSuite()
    suite.addTest(doctest.DocTestSuite(blocks))
    runner = unittest.TextTestRunner()
    print(runner.run(suite))

Note that there is an implicit restriction on the names of our programs if we take this approach: they must have names that are valid module names, so a program called convert-incidents.py cannot have a test like this written for it, because import convert-incidents is not valid, since hyphens are not legal in Python identifiers. (It is possible to get around this, but the easiest solution is to use program filenames that are also valid module names, for example, replacing hyphens with underscores.)

The structure shown here (create a test suite, add one or more test cases or test suites, run the overarching test suite, and output the results) is typical of unittest-based tests. When run, this particular example produces the following output:

    ...
    ----------------------------------------------------------------------
    Ran 3 tests in ...s

    OK

Each time a test case is executed a period is output (hence the three periods at the beginning of the output), then a line of hyphens, and then the test summary. (Naturally, there is a lot more output if any tests fail.)

If we are making the effort to have separate tests (typically one for each program and module we want to test), then rather than using doctests we might prefer to directly use the unittest module's features, especially if we are used to the JUnit approach to testing. The unittest module keeps our tests separate from our code; this is particularly useful for larger projects where test writers and developers are not necessarily the same people. Also, unittest unit tests are written as stand-alone Python modules, so they are not limited by what we can comfortably and sensibly write inside a docstring.

The unittest module defines four key concepts. A test fixture is the term used to describe the code necessary to set up a test (and to tear it down, that is, clean up, afterward); typical examples are creating an input file for the test to use and at the end deleting the input file and the resultant output file. A test suite is a collection of test cases, and a test case is the basic unit of testing; test suites are collections of test cases or of other test suites. We'll see practical examples of these shortly. A test runner is an object that executes one or more test suites.
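To make the four concepts concrete before we look at a real example, here is a minimal, self-contained sketch (our own illustration, not from the book) that uses a fixture (setUp() and tearDown()), one test case, a suite, and a runner:

    import unittest

    class TestUpper(unittest.TestCase):      # test cases live in a TestCase subclass

        def setUp(self):                     # fixture: runs before every test method
            self.text = "abc"

        def tearDown(self):                  # fixture: runs after every test method
            del self.text

        def test_upper(self):                # a single test case
            self.assertEqual(self.text.upper(), "ABC")

    suite = unittest.TestLoader().loadTestsFromTestCase(TestUpper)  # a test suite
    unittest.TextTestRunner().run(suite)                            # a test runner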
Typically, a test suite is made by creating a subclass of unittest.TestCase, where each method that has a name beginning with "test" is a test case. If we need any setup to be done, we can do it in a method called setUp(); similarly, for any cleanup we can implement a method called tearDown(). Within the tests there are a number of unittest.TestCase methods that we can make use of, including assertTrue(), assertEqual(), assertAlmostEqual() (useful for testing floating-point numbers), assertRaises(), and many more, including many inverses such as assertFalse(), assertNotEqual(), failIfEqual(), failUnlessEqual(), and so on.

The unittest module is well documented and has a lot of functionality, but here we will just give a flavor of its use by reviewing a very simple test suite. The example we will use is the solution to one of the exercises given at the end of an earlier chapter. The exercise was to create an Atomic module which could be used as a context manager to ensure that either all of a set of changes is applied to a list, set, or dictionary, or none of them are. The atomic.py module provided as an example solution implements the Atomic class in a small amount of code, accompanied by a substantially larger body of module doctests. We will create the test_atomic.py module to replace the doctests with unittest tests, so that we can then delete the doctests and leave atomic.py free of any code except that needed to provide its functionality.

Before diving into writing the test module, we need to think about what tests are needed. We will need to test three different kinds of data type: lists, sets, and dictionaries. For lists we need to test appending and inserting an item, deleting an item, and changing an item's value. For sets we must test adding and discarding an item. And for dictionaries we must test inserting an item, changing an item's value, and deleting an item. Also, we must test that in the case of failure, none of the changes are applied.

Structurally, testing the different data types is essentially the same, so we will only write the test cases for testing lists and leave the others as an exercise. The test_atomic.py module must import both the unittest module and the atomic module that it is designed to test. When creating unittest files, we usually create modules rather than programs, and inside each module we define one or more unittest.TestCase subclasses. In the case of the test_atomic.py module, it defines a single unittest.TestCase subclass, TestAtomic (which we will review shortly), and ends with the following two lines:

    if __name__ == "__main__":
        unittest.main()

Thanks to these lines, the module can be run stand-alone. And of course, it could also be imported and run from another test program, something that makes sense if this is just one test suite among many.
If we want to run the test_atomic.py module from another test program, we can write a program that is similar to the one we used to execute doctests using the unittest module. For example:

    import unittest
    import test_atomic

    suite = unittest.TestLoader().loadTestsFromTestCase(
            test_atomic.TestAtomic)
    runner = unittest.TextTestRunner()
    print(runner.run(suite))

Here, we have created a single suite by telling the unittest module to read the test_atomic module and to use each of its test*() methods (test_list_succeed() and test_list_fail() in this example, as we will see in a moment) as test cases.

We will now review the implementation of the TestAtomic class. Unusually for subclasses generally, although not for unittest.TestCase subclasses, there is no need to implement the initializer. In this case we will need a setup method, but not a teardown method. And we will implement two test cases. (The literal values in the listings that follow are representative reconstructions; the operations are what matter.)

    def setUp(self):
        self.original_list = list(range(10))

We have used the unittest.TestCase.setUp() method to create a single piece of test data.

    def test_list_succeed(self):
        items = self.original_list[:]
        with Atomic.Atomic(items) as atomic:
            atomic.append(1999)
            atomic.insert(2, -915)
            del atomic[5]
            atomic[4] = -782
            atomic.insert(0, -9)
        self.assertEqual(items,
                [-9, 0, 1, -915, 2, -782, 5, 6, 7, 8, 9, 1999])

This test case is used to test that all of a set of changes to a list are correctly applied. The test performs an append, an insertion in the middle, an insertion at the beginning, a deletion, and a change of value. While by no means comprehensive, the test does at least cover the basics.

The test should not raise an exception, but if it does the unittest.TestCase base class will handle it by turning it into an appropriate error message. At the end we expect the items list to equal the literal list included in the test rather than the original list. The unittest.TestCase.assertEqual() method can
compare any two Python objects, but its generality means that it cannot give particularly informative error messages. From Python 3.1, the unittest.TestCase class has many more methods, including many data-type-specific assertion methods. Here is how we could write the assertion using Python 3.1:

        self.assertListEqual(items,
                [-9, 0, 1, -915, 2, -782, 5, 6, 7, 8, 9, 1999])

If the lists are not equal, since the data types are known, the unittest module is able to give more precise error information, including where the lists differ.

    def test_list_fail(self):
        def process():
            nonlocal items
            with Atomic.Atomic(items) as atomic:
                atomic.append(1999)
                atomic.insert(2, -915)
                del atomic[5]
                atomic[4] = -782
                atomic.poop()    # typo
        items = self.original_list[:]
        self.assertRaises(AttributeError, process)
        self.assertEqual(items, self.original_list)

To test the failure case, that is, where an exception is raised while doing atomic processing, we must test that the list has not been changed and also that an appropriate exception has been raised. To check for an exception we use the unittest.TestCase.assertRaises() method, and in the case of Python 3.0 we pass it the exception we expect to get and a callable object that should raise the exception. This forces us to encapsulate the code we want to test, which is why we had to create the process() inner function shown here.

In Python 3.1 the unittest.TestCase.assertRaises() method can be used as a context manager, so we are able to write our test in a much more natural way:

    def test_list_fail(self):
        items = self.original_list[:]
        with self.assertRaises(AttributeError):
            with Atomic.Atomic(items) as atomic:
                atomic.append(1999)
                atomic.insert(2, -915)
                del atomic[5]
                atomic[4] = -782
                atomic.poop()    # typo
        self.assertListEqual(items, self.original_list)
Here we have written the test code directly in the test method without the need for an inner function, instead using unittest.TestCase.assertRaises() as a context manager that expects the code to raise an AttributeError. We have also used Python 3.1's unittest.TestCase.assertListEqual() method at the end.

As we have seen, Python's test modules are easy to use and are extremely useful, especially if we use TDD. They also have a lot more functionality and features than have been shown here, for example, the ability to skip tests, which is useful to account for platform differences, and they are also well documented. One feature that is missing, and which nose and py.test provide, is test discovery, although this feature is expected to appear in a later Python version (perhaps as early as Python 3.2).

Profiling

If a program runs very slowly or consumes far more memory than we expect, the problem is most often due to our choice of algorithms or data structures, or due to our doing an inefficient implementation. Whatever the reason for the problem, it is best to find out precisely where the problem lies rather than just inspecting our code and trying to optimize it. Randomly optimizing can cause us to introduce bugs or to speed up parts of our program that actually have no effect on the program's overall performance, because the improvements are not in places where the interpreter spends most of its time.

Before going further into profiling, it is worth noting a few Python programming habits that are easy to learn and apply, and that are good for performance. None of the techniques is Python-version-specific, and all of them are perfectly sound Python programming style. First, prefer tuples to lists when a read-only sequence is needed. Second, use generators rather than creating large tuples or lists to iterate over. Third, use Python's built-in data structures (dicts, lists, and tuples) rather than custom data structures implemented in Python, since the built-in ones are all very highly optimized. Fourth, when creating large strings out of lots of small strings, instead of concatenating the small strings, accumulate them all in a list, and join the list of strings into a single string at the end; the sketch that follows illustrates this fourth habit. Fifth and finally, if an object (including a function or method) is accessed a large number of times using attribute access (for example, when accessing a function in a module), or from a data structure, it may be better to create and use a local variable that refers to the object to provide faster access.
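Here is a small sketch (our own illustration, not from the book) contrasting the two approaches of the fourth habit; the join() version builds the final string in one step instead of creating ever-longer intermediate strings:

    # Slow: each += may create a new, longer string
    text = ""
    for i in range(10000):
        text += str(i)

    # Better: accumulate the pieces in a list and join them once at the end
    pieces = []
    for i in range(10000):
        pieces.append(str(i))
    text = "".join(pieces)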
Python's standard library provides two modules that are particularly useful when we want to investigate the performance of our code. One of these is the timeit module; this is useful for timing small pieces of Python code, and can be used, for example, to compare the performance of two or more implementations of a particular function or method. The other is the cProfile module, which can be used to profile a program's performance; it provides a detailed breakdown of call counts and times and so can be used to find performance bottlenecks. (The cProfile module is usually available for CPython interpreters, but is not always available for others. All Python libraries should have the pure Python profile module, which provides the same API as the cProfile module and does the same job, only more slowly.)

To give a flavor of the timeit module, we will look at a small example. Suppose we have three functions, function_a(), function_b(), and function_c(), all of which perform the same computation, but each using a different algorithm. If we put all these functions into a module (or import them), we can run them using the timeit module to see how they compare. Here is the code that we would use at the end of the module:

    if __name__ == "__main__":
        repeats = 1000
        for function in ("function_a", "function_b", "function_c"):
            t = timeit.Timer("{0}(x, y)".format(function),
                    "from __main__ import {0}, x, y".format(function))
            sec = t.timeit(repeats) / repeats
            print("{function}() {sec:.6f} sec".format(**locals()))

The first argument given to the timeit.Timer() constructor is the code we want to execute and time, in the form of a string. Here, the first time around the loop, the string is "function_a(x, y)". The second argument is optional; again it is a string to be executed, this time before the code to be timed, so as to provide some setup. Here we have imported from the __main__ (i.e., this) module the function we want to test, plus two variables that are passed as input data (x and y), and that are available as global variables in the module. We could just as easily have imported the function and data from a different module.

When the timeit.Timer object's timeit() method is called, it will first execute the constructor's second argument, if there was one, to set things up, and then it will execute the constructor's first argument and time how long the execution takes. The timeit.Timer.timeit() method's return value is the time taken in seconds, as a float. By default, the timeit() method repeats one million times and returns the total seconds for all these executions, but in this particular case we needed only 1000 repeats to give us useful results, so we specified the repeat count explicitly. After timing each function we divide the total by the number of repeats to get its mean (average) execution time and print the function's name and execution time on the console:

    function_a() ... sec
    function_b() ... sec
    function_c() ... sec

In this example, function_a() is clearly the fastest, at least with the input data that was used. In some situations, for example, where performance can
vary considerably depending on the input data, we might have to test each function with multiple sets of input data to cover a representative set of cases and then compare the total or average execution times.

It isn't always convenient to instrument our code to get timings, and so the timeit module provides a way of timing code from the command line. For example, to time function_a() from the mymodule.py module, we would enter the following in the console:

    python3 -m timeit -n 1000 -s "from mymodule import function_a, x, y" "function_a(x, y)"

(As usual, for Windows, we must replace python3 with something like C:\Python31\python.exe.) The -m option is for the Python interpreter and tells it to load the specified module (in this case timeit); the other options are handled by the timeit module. The -n option specifies the repetition count, the -s option specifies the setup, and the last argument is the code to execute and time. After the command has finished it prints its results on the console, for example:

    1000 loops, best of 3: ... msec per loop

We can easily then repeat the timing for the other two functions so that we can compare them all.

The cProfile module (or the profile module; we will refer to them both as the cProfile module) can also be used to compare the performance of functions and methods. And unlike the timeit module, which just provides raw timings, the cProfile module shows precisely what is being called and how long each call takes. Here's the code we would use to compare the same three functions as before:

    if __name__ == "__main__":
        for function in ("function_a", "function_b", "function_c"):
            cProfile.run("for i in range(1000): {0}(x, y)".format(
                    function))

We must put the number of repeats inside the code we pass to the cProfile.run() function, but we don't need to do any setup, since the module function uses introspection to find the functions and variables we want to use. There is no explicit print() statement, since by default the cProfile.run() function prints its output on the console. Here are the results for all the functions (with some irrelevant lines omitted and slightly reformatted to fit the page):

       ... function calls in ... CPU seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       ...      ...      ...      ...      ...  <string>:1(<module>)
       ...      ...      ...      ...      ...  mymodule.py:...(function_a)
       ...      ...      ...      ...      ...  {built-in method exec}

       ... function calls in ... CPU seconds
    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       ...      ...      ...      ...      ...  <string>:1(<module>)
       ...      ...      ...      ...      ...  mymodule.py:...(function_b)
       ...      ...      ...      ...      ...  mymodule.py:...(<genexpr>)
       ...      ...      ...      ...      ...  {built-in method bisect_left}
       ...      ...      ...      ...      ...  {built-in method exec}
       ...      ...      ...      ...      ...  {built-in method len}
       ...      ...      ...      ...      ...  {built-in method sorted}

       ... function calls in ... CPU seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       ...      ...      ...      ...      ...  <string>:1(<module>)
       ...      ...      ...      ...      ...  mymodule.py:...(function_c)
       ...      ...      ...      ...      ...  mymodule.py:...(<genexpr>)
       ...      ...      ...      ...      ...  {built-in method exec}

The ncalls ("number of calls") column lists the number of calls to the specified function (listed in the filename:lineno(function) column). Recall that we repeated the calls 1000 times, so we must keep this in mind. The tottime ("total time") column lists the total time spent in the function, but excluding time spent inside functions called by the function. The first percall column lists the average time of each call to the function (tottime / ncalls). The cumtime ("cumulative time") column lists the time spent in the function and includes the time spent inside functions called by the function. The second percall column lists the average time of each call to the function, including functions called by it.

This output is far more enlightening than the timeit module's raw timings. We can immediately see that both function_b() and function_c() use generators that are called more than 1000 times, making them both at least ten times slower than function_a(). Furthermore, function_b() calls more functions generally, including a call to the built-in sorted() function, and this makes it almost twice as slow as function_c(). Of course, the timeit module gave us sufficient information to see these differences in timing, but the cProfile module allows us to see the details of why the differences are there in the first place.

Just as the timeit module allows us to time code without instrumenting it, so does the cProfile module. However, when using the cProfile module from the command line we cannot specify exactly what we want executed; it simply executes the given program or module and reports the timings of everything. The command line to use is python3 -m cProfile programOrModule.py, and the output produced is in the same format as we saw earlier. Here is an extract, slightly reformatted and with most lines omitted:
       ... function calls (... primitive calls) in ... CPU secs

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       ...      ...      ...      ...      ...  mymodule.py:...(function_a)
       ...      ...      ...      ...      ...  mymodule.py:...(function_b)
       ...      ...      ...      ...      ...  mymodule.py:...(<genexpr>)
       ...      ...      ...      ...      ...  mymodule.py:...(function_c)

In cProfile terminology, a primitive call is a nonrecursive function call.

Using the cProfile module in this way can be useful for identifying areas that are worth investigating further. Here, for example, we can clearly see that function_b() takes a long time. But how do we drill down into the details? We could instrument the program by replacing calls to function_b() with cProfile.run("function_b()"). Or we could save the complete profile data and analyze it using the pstats module. To save the profile we must modify our command line slightly:

    python3 -m cProfile -o profileDataFile programOrModule.py

We can then analyze the profile data, for example, by starting IDLE, importing the pstats module, and giving it the saved profileDataFile, or by using pstats interactively at the console. Here's a very short example console session that has been tidied up slightly to fit on the page, and with our input shown in bold:

    python3 -m cProfile -o profile.dat mymodule.py
    python3 -m pstats

    Welcome to the profile statistics browser.
    % read profile.dat
    profile.dat% callers function_b
       Random listing order was used
       List reduced from ... to ... due to restriction <'function_b'>

    Function                     was called by...
                                     ncalls  tottime  cumtime
    mymodule.py:...(function_b)  <-     ...      ...      ...  <string>:1(<module>)

    profile.dat% callees function_b
       Random listing order was used
       List reduced from ... to ... due to restriction <'function_b'>

    Function                     called...
                                     ncalls  tottime  cumtime
    mymodule.py:...(function_b)  ->     ...      ...      ...  {built-in method bisect_left}
                                        ...      ...      ...  {built-in method len}
                                        ...      ...      ...  {built-in method sorted}

    profile.dat% quit
Type help to get the list of commands, and help followed by a command name for more information on the command. For example, help stats will list what arguments can be given to the stats command. Other tools are available that can provide graphical visualizations of the profile data, for example, RunSnakeRun (www.vrplumber.com/programming/runsnakerun), which depends on the wxPython GUI library.

Using the timeit and cProfile modules we can identify areas of our code that might be taking more time than expected; and using the cProfile module, we can find out exactly where the time is being taken.

Summary

In general, Python's reporting of syntax errors is very accurate, with the line and position in the line being correctly identified. The only cases where this doesn't work well are when we forget a closing parenthesis, bracket, or brace, in which case the error is normally reported as being on the next nonblank line. Fortunately, syntax errors are almost always easy to see and to fix.

If an unhandled exception is raised, Python will terminate and output a traceback. Such tracebacks can be intimidating for end-users, but provide useful information to us as programmers. Ideally, we should always handle every type of exception that we believe our program can raise, and where necessary present the problem to the user in the form of an error message, message box, or log message, but not as a raw traceback. However, we should avoid using the catchall except: exception handler; if we want to handle all exceptions (for example, at the top level), then we can use except Exception as err and always report err, since silently handling exceptions can lead to programs failing in subtle and unnoticed ways (such as corrupting data) later on. And during development, it is probably best not to have a top-level exception handler at all and to simply have the program crash with a traceback.

Debugging need not be, and should not be, a hit and miss affair. By narrowing down the input necessary to reproduce the bug to the bare minimum, by carefully hypothesizing what the problem is, and then testing the hypothesis by experiment, using print() statements or a debugger, we can often locate the source of the bug quite quickly. And if our hypothesis has successfully led us to the bug, it is likely to also be helpful in devising a solution.

For testing, both the doctest and the unittest modules have their own particular virtues. Doctests tend to be particularly convenient and useful for small libraries and modules, since well-chosen tests can easily both illustrate and exercise boundary as well as common cases, and of course, writing doctests is convenient and easy. On the other hand, since unit tests are not constrained to be written inside docstrings and are written as separate stand-alone modules, they are usually a better choice when it comes to writing more complex and
sophisticated tests, especially tests that require setup and teardown (cleanup). For larger projects, using the unittest module (or a third-party unit testing module) keeps the tests and the tested programs and modules separate, and is generally more flexible and powerful than using doctests.

If we hit performance problems, the cause is most often our own code, and in particular our choice of algorithms and data structures, or some inefficiency in our implementation. When faced with such problems, it is always wise to find out exactly where the performance bottleneck is, rather than to guess and end up spending time optimizing something that doesn't actually improve performance. Python's timeit module can be used to get raw timings of functions or arbitrary code snippets, and so is particularly useful for comparing alternative function implementations. And for in-depth analysis, the cProfile module provides both timing and call count information, so that we can identify not only which functions take the most time, but also what functions they in turn call.

Overall, Python has excellent support for debugging, testing, and profiling, right out of the box. However, especially for large projects, it is worth considering some of the third-party testing tools, since they may offer more functionality and convenience than the standard library's testing modules provide.
Using the Multiprocessing Module
Using the Threading Module

Processes and Threading

With the advent of multicore processors as the norm rather than the exception, it is more tempting and more practical than ever before to want to spread the processing load so as to get the most out of all the available cores. There are two main approaches to spreading the workload. One is to use multiple processes and the other is to use multiple threads. This chapter shows how to use both approaches.

Using multiple processes, that is, running separate programs, has the advantage that each process runs independently. This leaves all the burden of handling concurrency to the underlying operating system. The disadvantage is that communication and data sharing between the invoking program and the separate processes it invokes can be inconvenient. On Unix systems this can be solved by using the exec and fork paradigm, but for cross-platform programs other solutions must be used. The simplest, and the one shown here, is for the invoking program to feed data to the processes it runs and leave them to produce their output independently. A more flexible approach that greatly simplifies two-way communication is to use networking. Of course, in many situations such communication isn't needed; we just need to run one or more other programs from one orchestrating program.

An alternative to handing off work to independent processes is to create a threaded program that distributes work to independent threads of execution. This has the advantage that we can communicate simply by sharing data (providing we ensure that shared data is accessed only by one thread at a time), but leaves the burden of managing concurrency squarely with the programmer. Python provides good support for creating threaded programs, minimizing the work that we must do. Nonetheless, multithreaded programs are inherently more complex than single-threaded programs and require much more care in their creation and maintenance.

In this first section we will create two small programs. The first program is invoked by the user, and the second program is invoked by the first
program, with the second program invoked once for each separate process that is required. In the second section we will begin by giving a bare-bones introduction to threaded programming. Then we will create a threaded program that has the same functionality as the two programs from the first section combined, so as to provide a contrast between the multiple processes and the multiple threads approaches. And then we will review another threaded program, more sophisticated than the first, that both hands off work and gathers together all the results.

Using the Multiprocessing Module

In some situations we already have programs that have the functionality we need, but we want to automate their use. We can do this by using Python's subprocess module, which provides facilities for running other programs, passing any command-line options we want, and if desired, communicating with them using pipes. We saw one very simple example of this earlier when we used the subprocess.call() function to clear the console in a platform-specific way. But we can also use these facilities to create pairs of "parent-child" programs, where the parent program is run by the user and this in turn runs as many instances of the child program as necessary, each with different work to do. It is this approach that we will cover in this section.

Earlier we showed a very simple program, grepword.py, that searches for a word specified on the command line in the files listed after the word. In this section we will develop a more sophisticated version that can recurse into subdirectories to find files to read, and that can delegate the work to as many separate child processes as we like. The output is just a list of filenames (with paths) for those files that contain the specified search word.

The parent program is grepword-p.py and the child program is grepword-p-child.py. The relationship between the two programs when they are being run is shown schematically in an accompanying figure. The heart of grepword-p.py is encapsulated by its main() function, which we will look at in three parts:

    def main():
        child = os.path.join(os.path.dirname(__file__),
                             "grepword-p-child.py")
        opts, word, args = parse_options()
        filelist = get_files(args, opts.recurse)
        files_per_process = len(filelist) // opts.count
        start, end = 0, (files_per_process +
                         (len(filelist) % opts.count))
        number = 1
(We could have executed the child program directly, leaving it to the operating system to find an interpreter for it, but we prefer this approach because it ensures that the child program uses the same Python interpreter as the parent program.) Once we have the command ready we create a subprocess.Popen object, specifying the command to execute (as a list of strings), and in this case requesting to write to the process's standard input. (It is also possible to read a process's standard output by setting a similar keyword argument.) We then write the search word followed by a newline, and then every file in the relevant slice of the file list. The subprocess module reads and writes bytes, not strings, but the processes it creates always assume that the bytes received from sys.stdin are strings in the local encoding, even if the bytes we have sent use a different encoding, such as UTF-8, which we have used here. We will see how to get around this annoying problem shortly. Once the word and the list of files have been written to the child process, we close its standard input and move on.

It is not strictly necessary to keep a reference to each process (the pipe variable gets rebound to a new subprocess.Popen object each time through the loop), since each process runs independently, but we add each one to a list so that we can make them interruptible. Also, we don't gather the results together; instead we let each process write its results to the console in its own time. This means that the output from different processes could be interleaved. (You will get the chance to avoid interleaving in the exercises.)

    while pipes:
        pipe = pipes.pop()
        pipe.wait()

Once all the processes have started, we wait for each child process to finish. This is not essential, but on Unix-like systems it ensures that we are returned to the console prompt when all the processes are done (otherwise, we must press Enter when they are all finished). Another benefit of waiting is that if we interrupt the program (for example, by pressing Ctrl+C), all the processes that are still running will be interrupted and will terminate with an uncaught KeyboardInterrupt exception. If we did not wait, the main program would finish (and therefore not be interruptible), and the child processes would continue (unless killed by a kill program or a task manager).
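Before moving on to the child program, here is a minimal sketch of the pattern the parent uses for each child process: creating a subprocess.Popen with a writable standard input and feeding it UTF-8 encoded lines. The variable names and details are assumptions for illustration, not the book's actual listing:

    import subprocess
    import sys

    pipes = []
    command = [sys.executable, child]         # run the child with the same interpreter
    pipe = subprocess.Popen(command, stdin=subprocess.PIPE)
    pipe.stdin.write(word.encode("utf8") + b"\n")       # first line: the search word
    for filename in filelist[start:end]:                # then one filename per line
        pipe.stdin.write(filename.encode("utf8") + b"\n")
    pipe.stdin.close()                        # signal end of input to the child
    pipes.append(pipe)

Because pipe.stdin is a binary file object, everything written to it must be bytes, which is why each string is explicitly encoded.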
Apart from the comments and imports, here is the complete grepword-p-child.py program. We will look at the program in two parts, with two versions of the first part: the first for any Python 3.x version and the second for Python 3.1 or later versions.

    BLOCK_SIZE = 8000
    number = ("{0}: ".format(sys.argv[1])
              if len(sys.argv) == 2 else "")
    stdin = sys.stdin.buffer.read()
    lines = stdin.decode("utf8", "ignore").splitlines()
    word = lines[0].rstrip()

The program begins by setting the number string to the given number, or to an empty string if we are not debugging. Since the program is running as a child process, and the subprocess module only reads and writes binary data and always uses the local encoding, we must read sys.stdin's underlying buffer of binary data and perform the decoding ourselves. Once we have read the binary data, we decode it into a Unicode string and split it into lines. The child process then reads the first line, since this contains the search word. Here are the lines that are different for Python 3.1:

    sys.stdin = sys.stdin.detach()
    stdin = sys.stdin.read()
    lines = stdin.decode("utf8", "ignore").splitlines()

Python 3.1 provides the sys.stdin.detach() method that returns a binary file object. We then read in all the data, decode it into Unicode using the encoding of our choice, and then split the Unicode string into lines.

    for filename in lines[1:]:
        filename = filename.rstrip()
        previous = ""
        try:
            with open(filename, "rb") as fh:
                while True:
                    current = fh.read(BLOCK_SIZE)
                    if not current:
                        break
                    current = current.decode("utf8", "ignore")
                    if (word in current or
                        word in previous[-len(word):] +
                                current[:len(word)]):
                        print("{0}{1}".format(number, filename))
                        break
                    if len(current) != BLOCK_SIZE:
                        break
                    previous = current
        except EnvironmentError as err:
            print("{0}{1}".format(number, err))

All the lines after the first are filenames (with paths). For each one we open the relevant file, read it, and print its name if it contains the search word. It is possible that some of the files might be very large, and this could be a problem, especially if there are several child processes running concurrently, all reading big files. (It is possible that a future version of Python will have a version of the subprocess module that allows encoding and errors arguments, so that we can use our preferred encoding without having to access sys.stdin in binary mode and do the decoding ourselves; see the relevant feature request on bugs.python.org.)
We handle this by reading each file in blocks, keeping the previous block read to ensure that we don't miss cases where the only occurrence of the search word happens to fall across two blocks. Another benefit of reading in blocks is that if the search word appears early in the file we can finish with the file without having read everything, since all we care about is whether the word is in the file, not where it appears within the file.

The files are read in binary mode, so we must convert each block to a string before we can search it, since the search word is a string. We have assumed that all the files use the UTF-8 encoding, but this is most likely wrong in some cases. A more sophisticated program would try to determine the actual encoding and then close and reopen the file using the correct encoding. As we noted earlier, at least two Python packages for automatically detecting a file's encoding are available from the Python Package Index, pypi.python.org/pypi. (It might be tempting to decode the search word into a bytes object and compare bytes with bytes, but that approach is not reliable, since some characters have more than one valid UTF-8 representation.)

The subprocess module offers a lot more functionality than we have needed to use here, including the ability to provide equivalents to shell backquotes and shell pipelines, and to the os.system() and spawn functions.

In the next section we will see a threaded version of the grepword-p.py program, so that we can compare it with the parent-child processes one. We will also look at a more sophisticated threaded program that delegates work and then gathers the results together, to have more control over how they are output.

Using the Threading Module

Setting up two or more separate threads of execution in Python is quite straightforward. The complexity arises when we want separate threads to share data. Imagine that we have two threads sharing a list. One thread might start iterating over the list using a for ... in loop, and then somewhere in the middle another thread might delete some items in the list. At best this will lead to obscure crashes, at worst to incorrect results.

One common solution is to use some kind of locking mechanism. For example, one thread might acquire a lock and then start iterating over the list; any other thread will then be blocked by the lock. In fact, things are not quite as clean as this. The relationship between a lock and the data it is locking exists purely in our imagination. If one thread acquires a lock and a second thread tries to acquire the same lock, the second thread will be blocked until the first releases the lock. By putting access to shared data within the scope of acquired locks we can ensure that the shared data is accessed by only one thread at a time, even though the protection is indirect.
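Here is a minimal sketch of the locking idiom (our own illustration, not the book's code), using a threading.Lock as a context manager so that the lock is always released, even if an exception occurs:

    import threading

    lock = threading.Lock()
    shared = []

    def add_item(item):
        with lock:              # only one thread at a time may enter this block
            shared.append(item)

    def snapshot():
        with lock:              # the same lock must guard every access to shared
            return list(shared)

The key point is that the protection is purely by convention: every piece of code that touches the shared list must acquire the same lock, or the guarantee evaporates.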
No other threading.Thread methods may be reimplemented, although adding additional methods is fine.

Example: A Threaded Find Word Program

In this subsection we will review the code for the grepword-t.py program. This program does the same job as grepword-p.py, only it delegates the work to multiple threads rather than to multiple processes. It is illustrated schematically in the accompanying figure. One particularly interesting feature of the program is that it does not appear to use any locks at all. This is possible because the only shared data is a list of files, and for these we use the queue.Queue class. What makes queue.Queue special is that it handles all the locking itself internally, so whenever we access it to add or remove items, we can rely on the queue itself to serialize accesses. In the context of threading, serializing access to data means ensuring that only one thread at a time has access to the data. Another benefit of using queue.Queue is that we don't have to share out the work ourselves; we simply add items of work to the queue and leave the worker threads to pick up work whenever they are ready.

The queue.Queue class works on a FIFO (first in, first out) basis. The queue module also provides queue.LifoQueue for LIFO (last in, first out) access, and queue.PriorityQueue, which is given tuples such as the 2-tuple (priority, item), with the items with the lowest priority numbers being processed first. All the queues can be created with a maximum size set; if the maximum size is reached, the queue will block further attempts to add items until items have been removed.

We will look at the grepword-t.py program in three parts, starting with the complete main() function:

    def main():
        opts, word, args = parse_options()
        filelist = get_files(args, opts.recurse)
        work_queue = queue.Queue()
        for i in range(opts.count):
            number = "{0}: ".format(i + 1) if opts.debug else ""
            worker = Worker(work_queue, word, number)
            worker.daemon = True
            worker.start()
        for filename in filelist:
            work_queue.put(filename)
        work_queue.join()
[Figure: A multithreaded program; grepword-t.py's main thread delegating work to Thread #1, Thread #2, and Thread #3.]

Getting the user's options and the file list are the same as before. Once we have the necessary information we create a queue.Queue, and then loop as many times as there are threads to be created; the default is seven. For each thread we prepare a number string for debugging (an empty string if we are not debugging), and then we create a Worker (a threading.Thread subclass) instance; we'll come back to setting the daemon property in a moment. Next we start off the thread, although at this point it has no work to do because the work queue is empty, so the thread will immediately be blocked trying to get some work.

With all the threads created and ready for work, we iterate over all the files, adding each one to the work queue. As soon as the first file is added one of the threads could get it and start on it, and so on, until all the threads have a file to work on. As soon as a thread finishes working on a file it can get another one, until all the files are processed. Notice that this differs from grepword-p.py, where we had to allocate slices of the file list to each child process, and the child processes were started and given their lists sequentially.

Using threads is potentially more efficient in cases like this. For example, if the first five files are very large and the rest are small, because each thread takes on one job at a time, each large file will be processed by a separate thread, nicely spreading the work. But with the multiple processes approach we took in the grepword-p.py program, all the large files would be given to the first process and the small files given to the others, so the first process would end up doing most of the work while the others might all finish quickly without having done much at all.

The program will not terminate while it has any threads running. This is a problem, because once the worker threads have done their work, although they have finished they are technically still running. The solution is to turn the threads into daemons. The effect of this is that the program will terminate as soon as the program has no nondaemon threads running. The main thread is not a daemon, so once the main thread finishes, the program will cleanly terminate each daemon thread and then terminate itself. Of course, this can now create the opposite problem: once the threads are up and running, we must ensure that the main thread does not finish until all the work is done. This is achieved by calling queue.Queue.join(), which blocks until the queue is empty.

Here is the start of the Worker class:
    class Worker(threading.Thread):

        def __init__(self, work_queue, word, number):
            super().__init__()
            self.work_queue = work_queue
            self.word = word
            self.number = number

        def run(self):
            while True:
                try:
                    filename = self.work_queue.get()
                    self.process(filename)
                finally:
                    self.work_queue.task_done()

The __init__() method must call the base class's __init__(). The work queue is the same queue.Queue shared by all the threads.

We have made the run() method an infinite loop. This is common for daemon threads, and makes sense here because we don't know how many files the thread must process. At each iteration we call queue.Queue.get() to get the next file to work on. This call will block if the queue is empty, and does not have to be protected by a lock because queue.Queue handles that automatically for us. Once we have a file we process it, and afterward we must tell the queue that we have done that particular job: calling queue.Queue.task_done() is essential to the correct working of queue.Queue.join().

We have not shown the process() function, because apart from the def line, the code is the same as the code used in grepword-p-child.py (from the previous = "" line to the end).

One final point to note is that included with the book's examples is grepword-m.py, a program that is almost identical to the grepword-t.py program reviewed here, but which uses the multiprocessing module rather than the threading module. The code has just three differences: first, we import multiprocessing instead of queue and threading; second, the Worker class inherits multiprocessing.Process instead of threading.Thread; and third, the work queue is a multiprocessing.JoinableQueue instead of a queue.Queue.

The multiprocessing module provides thread-like functionality using forking on systems that support it (Unix), and child processes on those that don't (Windows), so locking mechanisms are not always required, and the processes will run on whatever processor cores the operating system has available. The package provides several ways of passing data between processes, including using a queue that can be used to provide work for processes just like queue.Queue can be used to provide work for threads.
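To make those three differences concrete, here is a sketch (our own summary of the stated differences, not the book's grepword-m.py listing) of how the threaded worker's scaffolding changes:

    import multiprocessing                     # instead of queue and threading

    class Worker(multiprocessing.Process):     # instead of threading.Thread

        def __init__(self, work_queue, word, number):
            super().__init__()
            self.work_queue = work_queue
            self.word = word
            self.number = number

    work_queue = multiprocessing.JoinableQueue()   # instead of queue.Queue()

multiprocessing.JoinableQueue offers the same task_done() and join() methods as queue.Queue, which is what allows the rest of the program to remain unchanged.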
The chief benefit of the multiprocessing version is that it can potentially run faster on multicore machines than the threaded version, since it can run its processes on as many cores as are available. Compare this with the standard Python interpreter (written in C, sometimes called CPython), which has a GIL (Global Interpreter Lock) that means that only one thread can execute Python code at any one time. This restriction is an implementation detail and does not necessarily apply to other Python interpreters such as Jython. (For a brief explanation of why CPython uses a GIL, see www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock and docs.python.org/api/threads.html.)

Example: A Threaded Find Duplicate Files Program

The second threading example has a similar structure to the first, but is more sophisticated in several ways. It uses two queues, one for work and one for results, and has a separate results processing thread to output results as soon as they are available. It also shows both a threading.Thread subclass and calling threading.Thread() with a function, and also uses a lock to serialize access to shared data (a dict).

The findduplicates-t.py program is a more advanced version of the finddup.py program shown earlier. It iterates over all the files in the current directory (or the specified path), recursively going into subdirectories. It compares the lengths of all the files with the same name (just like finddup.py), and for those files that have the same name and the same size it then uses the MD5 (Message Digest) algorithm to check whether the files are the same, reporting any that are.

We will start by looking at the main() function, split into four parts.

    def main():
        opts, path = parse_options()
        data = collections.defaultdict(list)
        for root, dirs, files in os.walk(path):
            for filename in files:
                fullname = os.path.join(root, filename)
                try:
                    key = (os.path.getsize(fullname), filename)
                except EnvironmentError:
                    continue
                if key[0] == 0:
                    continue
                data[key].append(fullname)
8,981 | Each key of the data default dictionary is a 2-tuple of (size, filename), where the filename does not include the path, and each value is a list of filenames (which do include their paths). Any item whose value list has more than one filename potentially has duplicates. The dictionary is populated by iterating over all the files in the given path, but skipping any files we cannot get the size of (perhaps due to permissions problems, or because they are not normal files), and any that are of size 0 (since all zero length files are the same).

    work_queue = queue.PriorityQueue()
    results_queue = queue.Queue()
    md5_from_filename = {}
    for i in range(opts.count):
        number = "{0}: ".format(i + 1) if opts.debug else ""
        worker = Worker(work_queue, md5_from_filename, results_queue,
                        number)
        worker.daemon = True
        worker.start()

With all the data in place we are ready to create the worker threads. We begin by creating a work queue and a results queue. The work queue is a priority queue, so it will always return the lowest-priority items (in our case the smallest files) first. We also create a dictionary where each key is a filename (including its path) and where each value is the file's MD5 digest value. The purpose of the dictionary is to ensure that we never compute the MD5 of the same file more than once (since the computation is expensive). With the shared data collections in place we loop as many times as there are threads to create (by default, seven times); the Worker subclass is similar to the one we created before, only this time we pass both queues and the MD5 dictionary. As before, we start each worker straight away and each will be blocked until a work item becomes available.

    results_thread = threading.Thread(
            target=lambda: print_results(results_queue))
    results_thread.daemon = True
    results_thread.start()

Rather than creating a threading.Thread subclass to process the results we have created a function and we pass that to threading.Thread(). The return value is a custom thread that will call the given function once the thread is started. We pass the results queue (which is, of course, empty), so the thread will block immediately. At this point we have created all the worker threads and the results thread, and they are all blocked waiting for work.
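Because each work item is put as a (size, names) tuple, the priority queue orders items by their first element, so the smallest files come back first. A tiny illustration (made-up filenames, not from the book):

import queue

work_queue = queue.PriorityQueue()
work_queue.put((8192, ["big.dat", "big_copy.dat"]))
work_queue.put((64, ["small.txt", "small_copy.txt"]))
print(work_queue.get())   # (64, ['small.txt', 'small_copy.txt'])
print(work_queue.get())   # (8192, ['big.dat', 'big_copy.dat'])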
8,982 |     for size, filename in sorted(data):
        names = data[size, filename]
        if len(names) > 1:
            work_queue.put((size, names))
    work_queue.join()
    results_queue.join()

We now iterate over the data, and for each (size, filename) 2-tuple that has a list of two or more potentially duplicate files, we add the size and the filenames with paths as an item of work to the work queue. Since the queue is a class from the queue module we don't have to worry about locking. Finally we join the work queue and results queue to block until they are empty. This ensures that the program runs until all the work is done and all the results have been output, and then terminates cleanly.

def print_results(results_queue):
    while True:
        try:
            results = results_queue.get()
            if results:
                print(results)
        finally:
            results_queue.task_done()

This function is passed as an argument to threading.Thread() and is called when the thread it is given to is started. It has an infinite loop because it is to be used as a daemon thread. All it does is get results (a multiline string), and if the string is nonempty, it prints it, for as long as results are available. The beginning of the Worker class is similar to what we had before:

class Worker(threading.Thread):

    Md5_lock = threading.Lock()

    def __init__(self, work_queue, md5_from_filename, results_queue,
                 number):
        super().__init__()
        self.work_queue = work_queue
        self.md5_from_filename = md5_from_filename
        self.results_queue = results_queue
        self.number = number

    def run(self):
        while True:
            try:
                size, names = self.work_queue.get()
                self.process(size, names)
8,983 |             finally:
                self.work_queue.task_done()

The differences are that we have more shared data to keep track of and we call our custom process() function with different arguments. We don't have to worry about the queues since they ensure that accesses are serialized, but for other data items, in this case the md5_from_filename dictionary, we must handle the serialization ourselves by providing a lock. We have made the lock a class attribute because we want every Worker instance to use the same lock, so that if one instance holds the lock, all the other instances are blocked if they try to acquire it. We will review the process() function in two parts.

    def process(self, size, filenames):
        md5s = collections.defaultdict(set)
        for filename in filenames:
            with self.Md5_lock:
                md5 = self.md5_from_filename.get(filename, None)
            if md5 is not None:
                md5s[md5].add(filename)
            else:
                try:
                    md5 = hashlib.md5()
                    with open(filename, "rb") as fh:
                        md5.update(fh.read())
                    md5 = md5.digest()
                    md5s[md5].add(filename)
                    with self.Md5_lock:
                        self.md5_from_filename[filename] = md5
                except EnvironmentError:
                    continue

We start out with an empty default dictionary where each key is to be an MD5 digest value and where each value is to be a set of the filenames of the files that have the corresponding MD5 value. We then iterate over all the files, and for each one we retrieve its MD5 if we have already calculated it, and calculate it otherwise. Whether we access the md5_from_filename dictionary to read it or to write to it, we put the access in the context of a lock. Instances of the threading.Lock() class are context managers that acquire the lock on entry and release the lock on exit. The with statements will block if another thread has the Md5_lock, until the lock is released. For the first with statement, when we acquire the lock we get the MD5 from the dictionary (or None if it isn't there). If the MD5 is None we must compute it, in which case we store it in the md5_from_filename dictionary to avoid performing the computation more than once per file.
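The same acquire-read, compute-outside, acquire-write pattern can be seen in isolation in this sketch (names of our own; the expensive function stands in for the MD5 computation):

import threading

cache = {}
cache_lock = threading.Lock()

def cached_value(key, expensive_function):
    with cache_lock:                     # lock held only for the dictionary read
        value = cache.get(key)
    if value is None:
        value = expensive_function(key)  # expensive work done without the lock
        with cache_lock:                 # lock held only for the dictionary write
            cache[key] = value
    return value

As with the book's code, two threads could in principle both compute the same value before either stores it; the lock guarantees only that the dictionary itself is never read and written at the same time.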
8,984 | Notice that at all times we try to minimize the amount of work done within the scope of a lock to keep blocking to a minimum--in this case just one dictionary access each time. Strictly speaking, we do not need to use a lock at all if we are using CPython, since the GIL effectively synchronizes dictionary accesses for us. However, we have chosen to program without relying on the GIL implementation detail, and so we use an explicit lock.

        for filenames in md5s.values():
            if len(filenames) == 1:
                continue
            self.results_queue.put("{0}Duplicate files ({1:n} bytes):"
                    "\n\t{2}".format(self.number, size,
                    "\n\t".join(sorted(filenames))))

At the end we loop over the local md5s default dictionary, and for each set of names that contains more than one name, we add a multiline string to the results queue. The string contains the worker thread number (an empty string by default), the size of the file in bytes, and all the duplicate filenames. We don't need to use a lock to access the results queue since it is a queue.Queue, which will automatically handle the locking behind the scenes. The queue module's classes greatly simplify threaded applications, and when we need to use explicit locks the threading module offers many options. Here we used the simplest, threading.Lock, but others are available, including threading.RLock (a lock that can be acquired again by the thread that already holds it), threading.Semaphore (a lock that can be used to protect a specific number of resources), and threading.Condition, which provides a wait condition.
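Like threading.Lock, these classes are context managers. For instance, a semaphore initialized with a count could limit how many threads use some resource at once (a hypothetical sketch, not from the book):

import threading

downloads = threading.Semaphore(3)   # at most three concurrent holders

def fetch(url):
    with downloads:                  # blocks while three threads already hold it
        ...                          # perform the download here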
8,985 | Using multiple threads can often lead to cleaner solutions than using the subprocess module, but unfortunately, threaded Python programs do not necessarily achieve the best possible performance compared with using multiple processes. As noted earlier, the problem afflicts the standard implementation of Python, since the CPython interpreter can execute Python code on only one processor at a time, even when using multiple threads. One package that tries to solve this problem is the multiprocessing module, and as we noted earlier, the grepword-m.py program is a multiprocessing version of the grepword-t.py program, with only three lines that are different. A similar transformation could be applied to the findduplicates-t.py program reviewed here, but in practice this is not recommended. Although the multiprocessing module offers an API (Application Programming Interface) that closely matches the threading module's API to ease conversion, the two APIs are not the same and have different trade-offs. Also, performing a mechanistic conversion from threading to multiprocessing is likely to be successful only on small, simple programs like grepword-t.py; it is too crude an approach to use for the findduplicates-t.py program, and in general it is best to design programs from the ground up with multiprocessing in mind. (The program findduplicates-m.py is provided with the book's examples; it does the same job as findduplicates-t.py but works in a very different way and uses the multiprocessing module.) Another solution being developed is a threading-friendly version of the CPython interpreter; see code.google.com/p/python-threadsafe for the latest project status.

Summary

This chapter showed how to create programs that can execute other programs using the standard library's subprocess module. Programs that are run using subprocess can be given command-line data, can be fed data to their standard input, and can have their standard output (and standard error) read. Using child processes allows us to take maximum advantage of multicore processors and leaves concurrency issues to be handled by the operating system. The downside is that if we need to share data or synchronize processes we must devise some kind of communication mechanism, for example, shared memory (e.g., using the mmap module), shared files, or networking, and this can require care to get right.

The chapter also showed how to create multithreaded programs. Unfortunately, such programs cannot take full advantage of multiple cores (if run using the standard CPython interpreter), so for Python, using multiple processes is often a more practical solution where performance is concerned. Nonetheless, we saw that the queue module and Python's locking mechanisms, such as threading.Lock, make threaded programming as straightforward as possible--and that for simple programs that only need to use queue objects like queue.Queue and queue.PriorityQueue, we may be able to completely avoid using explicit locks. Although multithreaded programming is undoubtedly fashionable, it can be much more demanding to write, maintain, and debug multithreaded programs than single-threaded ones. However, multithreaded programs allow for straightforward communication, for example, using shared data (providing we use a queue class or use locking), and make it much easier to synchronize (e.g., to gather results) than using child processes. Threading can also be very useful in GUI (graphical user interface) programs that must carry out long-running tasks while maintaining responsiveness, including the ability to cancel the task being worked on. But if a good communication mechanism between processes is used, such as shared memory, or the process-transparent queue offered by the multiprocessing package, using multiple processes can often be a viable alternative to multiple threads.
8,986 | The following chapter shows another example of a threaded program: a server that handles each client request in a separate thread, and that uses locks to protect shared data.

Exercises

1. Copy and modify the grepword-p.py program so that instead of the child processes printing their output, the main program gathers the results, and after all the child processes have finished, sorts and prints the results. This only requires editing the main() function: changing three lines and adding three lines. The exercise does require some thought and care, and you will need to read the subprocess module's documentation. A solution is given in grepword-p_ans.py.

2. Write a multithreaded program that reads the files listed on the command line (and the files in any directories listed on the command line, recursively). For any file that is an XML file (i.e., it begins with the characters "<?xml"), parse the file using an XML parser and produce a list of the unique tags used by the file, or an error message if a parsing error occurs. Here is a sample of the program's output from one particular run:

/data/dvds.xml is an XML file that uses the following tags:
    dvd dvds
/data/bad.aix is an XML file that has the following error:
    mismatched tag: line …, column …
/data/incidents.aix is an XML file that uses the following tags:
    airport incident incidents narrative

The easiest way to write the program is to modify a copy of the findduplicates-t.py program, although you can of course write the program entirely from scratch. Small changes will need to be made to the Worker class's __init__() and run() methods, and the process() method will need to be rewritten entirely (but needs only around twenty lines). The program's main() function will need several simplifications and so will one line of the print_results() function. The usage message will also need to be modified to match the one shown here:

Usage: xmlsummary.py [options] [path]

outputs a summary of the XML files in path; path defaults to .

Options:
  -h, --help            show this help message and exit
8,987 |   -c COUNT, --threads=COUNT
                        the number of threads to use [default: 7]
  -v, --verbose
  -d, --debug

Make sure you try running the program with the debug flag set so that you can check that the threads are started up and that each one does its share of the work. A solution is provided in xmlsummary.py, which uses no explicit locks.
8,988 | Creating a TCP Client
Creating a TCP Server

Networking

Networking allows computer programs to communicate with each other, even if they are running on different machines. For programs such as web browsers, this is the essence of what they do, whereas for others networking adds additional dimensions to their functionality, for example, remote operation or logging, or the ability to retrieve or supply data to other machines. Most networking programs work either on a peer-to-peer basis (the same program runs on different machines), or more commonly, on a client/server basis (client programs send requests to a server). In this chapter we will create a basic client/server application. Such applications are normally implemented as two separate programs: a server that waits for and responds to requests, and one or more clients that send requests to the server and read back the server's response. For this to work, the clients must know where to connect to the server, that is, the server's IP (Internet Protocol) address and port number. Also, both clients and server must send and receive data using an agreed-upon protocol, using data formats that they both understand. Python's low-level socket module (on which all of Python's higher-level networking modules are based) supports both IPv4 and IPv6 addresses. It also supports the most commonly used networking protocols, including UDP (User Datagram Protocol), a lightweight but unreliable connectionless protocol where data is sent as discrete packets (datagrams) but with no guarantee that they will arrive, and TCP (Transmission Control Protocol), a reliable connection- and stream-oriented protocol. With TCP, any amount of data can be sent and received--the socket is responsible for breaking the data into chunks that are small enough to send, and for reconstructing the data at the other end.

Machines can also connect using service discovery, for example, using the Bonjour API; suitable modules are available from the Python Package Index, pypi.python.org/pypi.
8,989 | UDP is often used to monitor instruments that give continuous readings, and where the odd missed reading is not significant, and it is sometimes used for audio or video streaming in cases where the occasional missed frame is acceptable. Both the FTP and the HTTP protocols are built on top of TCP, and client/server applications normally use TCP because they need connection-oriented communication and the reliability that TCP provides. In this chapter we will develop a client/server program, so we use TCP. Another decision that must be made is whether to send and receive data as lines of text or as blocks of binary data, and if the latter, in what form. In this chapter we use blocks of binary data where the first four bytes are the length of the following data (encoded as an unsigned integer using the struct module), and where the following data is a binary pickle. The advantage of this approach is that we can use the same sending and receiving code for any application, since we can store almost any arbitrary data in a pickle. The disadvantage is that both client and server must understand pickles, so they must be written in Python or must be able to access Python, for example, using Jython in Java or Boost.Python in C++. And of course, the usual security considerations apply to the use of pickles. The example we will use is a car registration program. The server holds details of car registrations (license plate, seats, mileage, and owner). The client is used to retrieve car details, to change a car's mileage or owner, or to create a new car registration. Any number of clients can be used and they won't block each other, even if two access the server at the same time. This is because the server hands off each client's request to a separate thread (we will also see that it is just as easy to use separate processes). For the sake of the example, we will run the server and clients on the same machine; this means that we can use "localhost" as the IP address (although if the server is on another machine the client can be given its IP address on the command line, and this will work as long as there is no firewall in the way). We have also chosen an arbitrary port number; the port number should be greater than 1023, since port numbers up to 1023 are reserved for well-known services. The server can accept five kinds of requests--get_car_details, change_mileage, change_owner, new_registration, and shutdown--with a corresponding response for each. The response is the requested data or confirmation of the requested action, or an indication of an error.
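Before looking at the client, here is the "length plus pickle" wire format in miniature (a sketch with made-up data, using the struct format described above):

import pickle
import struct

SizeStruct = struct.Struct("!I")      # 4-byte unsigned int, network byte order

items = ("get_car_details", "ABC 123")          # action name, then parameters
data = pickle.dumps(items, 3)                   # pickle protocol 3
packet = SizeStruct.pack(len(data)) + data      # length prefix, then the pickle

# the receiving end reverses the process:
size = SizeStruct.unpack(packet[:SizeStruct.size])[0]
received = pickle.loads(packet[SizeStruct.size:SizeStruct.size + size])
assert received == items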
8,990 | Creating a TCP Client

The client program is car_registration.py. Here is an example of interaction (with the server already running, and with the menu edited slightly to fit on the page):

(C)ar  (M)ileage  (O)wner  (N)ew car
(S)top server  (Q)uit [c]:
License: … hyr
License:  … HYR
Seats:    …
Mileage:  …
Owner:    Jack Lemon
(C)ar  (M)ileage  (O)wner  (N)ew car
(S)top server  (Q)uit [c]: m
License [… HYR]:
Mileage [ … ]: …
Mileage successfully changed
(C)ar  (M)ileage  (O)wner  (N)ew car
(S)top server  (Q)uit [c]: q

The data entered by the user is shown in bold--where there is no visible input it means that the user pressed Enter to accept the default. Here the user has asked to see the details of a particular car and then updated its mileage. As many clients as we like can be running, and when a user quits their particular client the server is unaffected. But if the server is stopped, the client it was stopped in will quit and all the other clients will get a "connection refused" error and will terminate when they next attempt to access the server. In a more sophisticated application, the ability to stop the server would be available only to certain users, perhaps on only particular machines, but we have included it in the client to show how it is done. We will now review the code, starting with the main() function and the handling of the user interface, and finishing with the networking code itself.

def main():
    if len(sys.argv) > 1:
        Address[0] = sys.argv[1]
    call = dict(c=get_car_details, m=change_mileage, o=change_owner,
                n=new_registration, s=stop_server, q=quit)
    menu = ("(C)ar  Edit (M)ileage  Edit (O)wner  (N)ew car  "
            "(S)top server  (Q)uit")
    valid = frozenset("cmonsq")
    previous_license = None
    while True:
        action = Console.get_menu_choice(menu, valid, "c", True)
        previous_license = call[action](previous_license)

The Address list is a global that holds the IP address and port number as a two-item list, with the IP address overridden if one is specified on the command line. The call dictionary maps menu options to functions. The Console module is one supplied with this book and contains some useful functions for getting values from the user at the console, such as Console.get_string() and Console.get_integer(); these are similar to functions
8,991 | developed in earlier chapters and have been put in a module to make them easy to reuse in different programs. As a convenience for users, we keep track of the last license they entered so that it can be used as the default, since most commands start by asking for the license of the relevant car. Once the user makes a choice we call the corresponding function, passing in the previous license, and expecting each function to return the license it used. Since the loop is infinite, the program must be terminated by one of the functions; we will see this further on.

def get_car_details(previous_license):
    license, car = retrieve_car_details(previous_license)
    if car is not None:
        print("License: {0}\nSeats:   {seats}\nMileage: {mileage}\n"
              "Owner:   {owner}".format(license, **car._asdict()))
    return license

This function is used to get information about a particular car. Since most of the functions need to request a license from the user and often need some car-related data to work on, we have factored out this functionality into the retrieve_car_details() function--it returns a 2-tuple of the license entered by the user and a named tuple, CarTuple, that holds the car's seats, mileage, and owner (or the previous license and None if they entered an unrecognized license). Here we just print the information retrieved and return the license to be used as the default for the next function that is called and that needs the license.

def retrieve_car_details(previous_license):
    license = Console.get_string("License", "license",
                                 previous_license)
    if not license:
        return previous_license, None
    license = license.upper()
    ok, *data = handle_request("get_car_details", license)
    if not ok:
        print(data[0])
        return previous_license, None
    return license, CarTuple(*data)

This is the first function to make use of networking. It calls the handle_request() function that we review further on. The handle_request() function takes whatever data it is given as arguments and sends it to the server, and then returns whatever the server replies. The handle_request() function does not know or care what data it sends or returns; it purely provides the networking service.
8,992 | In the case of car registrations we have a protocol where we always send the name of the action we want the server to perform as the first argument, followed by any relevant parameters--in this case, just the license. The protocol for the reply is that the server always returns a tuple whose first item is a Boolean success/failure flag. If the flag is False, we have a 2-tuple and the second item is an error message. If the flag is True, the tuple is either a 2-tuple with the second item being a confirmation message, or an n-tuple with the second and subsequent items holding the data that was requested. So here, if the license is unrecognized, ok is False and we print the error message in data[0] and return the previous license unchanged. Otherwise, we return the license (which will now become the previous license), and a CarTuple made from the data list (seats, mileage, owner).

def change_mileage(previous_license):
    license, car = retrieve_car_details(previous_license)
    if car is None:
        return previous_license
    mileage = Console.get_integer("Mileage", "mileage", car.mileage)
    if mileage == 0:
        return license
    ok, *data = handle_request("change_mileage", license, mileage)
    if not ok:
        print(data[0])
    else:
        print("Mileage successfully changed")
    return license

This function follows a similar pattern to get_car_details(), except that once we have the details we update one aspect of them. There are in fact two networking calls, since retrieve_car_details() calls handle_request() to get the car's details--we need to do this both to confirm that the license is valid and to get the current mileage to use as the default. Here the reply is always a 2-tuple, with either an error message or None as the second item. We won't review the change_owner() function since it is structurally the same as change_mileage(), nor will we review new_registration(), since it differs only in not retrieving car details at the start (since it is a new car being entered), and in asking the user for all the details rather than changing just one detail, none of which is new to us or relevant to network programming.

def quit(*ignore):
    sys.exit()
8,993 | def stop_server(*ignore):
    handle_request("shutdown", wait_for_reply=False)
    sys.exit()

If the user chooses to quit the program we do a clean termination by calling sys.exit(). Every menu function is called with the previous license, but we don't care about the argument in this particular case. We cannot write def quit(), because that would create a function that expects no arguments, and so when the function was called with the previous license a TypeError exception would be raised saying that no arguments were expected but that one was given. So instead we specify a parameter of *ignore, which can take any number of positional arguments. The name ignore has no significance to Python and is used purely to indicate to maintainers that the arguments are ignored. If the user chooses to stop the server we use handle_request() to inform the server, and specify that we don't want a reply. Once the data is sent, handle_request() returns without waiting for a reply, and we do a clean termination using sys.exit().

def handle_request(*items, wait_for_reply=True):
    SizeStruct = struct.Struct("!I")
    data = pickle.dumps(items, 3)
    try:
        with SocketManager(tuple(Address)) as sock:
            sock.sendall(SizeStruct.pack(len(data)))
            sock.sendall(data)
            if not wait_for_reply:
                return
            size_data = sock.recv(SizeStruct.size)
            size = SizeStruct.unpack(size_data)[0]
            result = bytearray()
            while True:
                data = sock.recv(4000)
                if not data:
                    break
                result.extend(data)
                if len(result) >= size:
                    break
        return pickle.loads(result)
    except socket.error as err:
        print("{0}: is the server running?".format(err))
        sys.exit(1)

This function provides all the client program's network handling. It begins by creating a struct.Struct that holds one unsigned integer in network byte order, and then it creates a pickle of whatever items it is passed.
8,994 | The function does not know or care what the items are. Notice that we have explicitly set the pickle protocol version to 3--this is to ensure that both clients and server use the same pickle version, even if a client or the server is upgraded to run a different version of Python. If we wanted our protocol to be more future proof, we could version it (just as we do with binary disk formats). This can be done either at the network level or at the data level. At the network level we can version by passing two unsigned integers instead of one, that is, a length and a protocol version number. At the data level we could follow the convention that the pickle is always a list (or always a dictionary) whose first item (or "version" item) has a version number. (You will get the chance to version the protocol in the exercises.) The SocketManager is a custom context manager that gives us a socket to use--we will review it shortly. The socket.socket.sendall() method sends all the data it is given--making multiple socket.socket.send() calls behind the scenes if necessary. We always send two items of data: the length of the pickle and the pickle itself. If the wait_for_reply argument is False we don't wait for a reply and return immediately--the context manager will ensure that the socket is closed before the function actually returns. After sending the data (and when we want a reply), we call the socket.socket.recv() method to get the reply. This method blocks until it receives data. For the first call we request four bytes--the size of the integer that holds the size of the reply pickle to follow. We use the struct.Struct to unpack the bytes into the size integer. We then create an empty bytearray and try to retrieve the incoming pickle in blocks of up to 4000 bytes. Once we have read in size bytes (or if the data has run out before then), we break out of the loop and unpickle the data using the pickle.loads() function (which takes a bytes or bytearray object), and return it. In this case we know that the data will always be a tuple since that is the protocol we have established with the car registration server, but the handle_request() function does not know or care about what the data is. If something goes wrong with the network connection, for example, the server isn't running or the connection fails for some reason, a socket.error exception is raised. In such cases the exception is caught and the client program issues an error message and terminates.
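The data-level versioning mentioned above could follow a convention like this one (our own illustration; the book leaves versioning as an exercise):

ProtocolVersion = 1

def make_request(action, *args):
    # by convention, item 0 carries the protocol version
    return (ProtocolVersion, action) + args

def parse_request(request):
    if request[0] != ProtocolVersion:
        raise ValueError("unsupported protocol version "
                         "{0}".format(request[0]))
    return request[1], request[2:]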
8,995 | class SocketManager:

    def __init__(self, address):
        self.address = address

    def __enter__(self):
        self.sock = socket.socket(socket.AF_INET,
                                  socket.SOCK_STREAM)
        self.sock.connect(self.address)
        return self.sock

    def __exit__(self, *ignore):
        self.sock.close()

The address object is a 2-tuple (IP address, port number) and is set when the context manager is created. Once the context manager is used in a with statement it creates a socket and tries to make a connection--blocking until a connection is established or until a socket exception is raised. The first argument to the socket.socket() initializer is the address family; here we have used socket.AF_INET (IPv4), but others are available, for example, socket.AF_INET6 (IPv6), socket.AF_UNIX, and socket.AF_NETLINK. The second argument is normally either socket.SOCK_STREAM (TCP), as we have used here, or socket.SOCK_DGRAM (UDP). When the flow of control leaves the with statement's scope the context object's __exit__() method is called. We don't care whether an exception was raised or not (so we ignore the exception arguments), and just close the socket. Since the method returns None (in a Boolean context, False), any exceptions are propagated--this works well since we put a suitable except block in handle_request() to process any socket exceptions that occur.

Creating a TCP Server

Since the code for creating servers often follows the same design, rather than having to use the low-level socket module, we can use the high-level socketserver module, which takes care of all the housekeeping for us. All we have to do is provide a request handler class with a handle() method, which is used to read requests and write replies. The socketserver module handles the communications for us, servicing each connection request, either serially or by passing each request to its own separate thread or process--and it does all of this transparently so that we are insulated from the low-level details. For this application the server is car_registration_server.py. This program contains a very simple Car class that holds seats, mileage, and owner information as properties (the first one read-only). The class does not hold car licenses because the cars are stored in a dictionary and the licenses are used for the dictionary's keys. We will begin by looking at the main() function, then briefly review how the server's data is loaded, then the creation of the custom server class, and finally the implementation of the request handler class that handles the client requests.

The first time the server is run on Windows a firewall dialog might pop up saying that Python is blocked--click Unblock to allow the server to operate.
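To see how little socketserver demands of us before diving into the real server, here is a complete (if trivial) threaded line-echo server built from the same pieces; this is our own minimal example, not the book's server:

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):

    def handle(self):
        # self.rfile and self.wfile are opened for us by socketserver
        for line in self.rfile:
            self.wfile.write(line)      # send each received line straight back

class EchoServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

if __name__ == "__main__":
    EchoServer(("", 11000), EchoHandler).serve_forever()   # any free port above 1023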
8,996 | def main():
    filename = os.path.join(os.path.dirname(__file__),
                            "car_registrations.dat")
    cars = load(filename)
    print("Loaded {0} car registrations".format(len(cars)))
    RequestHandler.Cars = cars
    server = None
    try:
        server = CarRegistrationServer(("", 9653),  # an arbitrary port above 1023
                                       RequestHandler)
        server.serve_forever()
    except Exception as err:
        print("ERROR", err)
    finally:
        if server is not None:
            server.shutdown()
            save(filename, cars)
            print("Saved {0} car registrations".format(len(cars)))

We have stored the car registration data in the same directory as the program. The cars object is set to a dictionary whose keys are license strings and whose values are Car objects. Normally servers do not print anything, since they are typically started and stopped automatically and run in the background, so usually they report on their status by writing logs (e.g., using the logging module). Here we have chosen to print a message at start-up and shutdown to make testing and experimenting easier. Our request handler class needs to be able to access the cars dictionary, but we cannot pass the dictionary to an instance because the server creates the instances for us--one to handle each request. So we set the dictionary to the RequestHandler.Cars class variable where it is accessible to all instances. We create an instance of the server, passing it the address and port it should operate on and the RequestHandler class object--not an instance. An empty string as the address indicates any accessible IPv4 address (including the current machine, localhost). Then we tell the server to serve requests forever. When the server shuts down (we will see how this happens further on), we save the cars dictionary since the data may have been changed by clients.

def load(filename):
    try:
        with contextlib.closing(gzip.open(filename, "rb")) as fh:
            return pickle.load(fh)
    except (EnvironmentError, pickle.UnpicklingError) as err:
        print("server cannot load data: {0}".format(err))
        sys.exit(1)
8,997 | The code for loading is easy because we have used a context manager from the standard library's contextlib module to ensure that the file is closed irrespective of whether an exception occurs. Another way of achieving the same effect is to use a custom context manager. For example:

class GzipManager:

    def __init__(self, filename, mode):
        self.filename = filename
        self.mode = mode

    def __enter__(self):
        self.fh = gzip.open(self.filename, self.mode)
        return self.fh

    def __exit__(self, *ignore):
        self.fh.close()

Using the custom GzipManager, the with statement becomes:

    with GzipManager(filename, "rb") as fh:

This context manager will work with any Python 3 version. But if we only care about Python 3.1 or later, we can simply write with gzip.open(...) as fh:, since from Python 3.1 the gzip.open() function supports the context manager protocol. The save() function (not shown) is structurally the same as the load() function, only we open the file in write binary mode, use pickle.dump() to save the data, and don't return anything.

class CarRegistrationServer(socketserver.ThreadingMixIn,
                            socketserver.TCPServer): pass

This is the complete custom server class. If we wanted to create a server that used processes rather than threads, the only change would be to inherit the socketserver.ForkingMixIn class instead of the socketserver.ThreadingMixIn class. The term "mixin" is often used to describe classes that are specifically designed to be multiply-inherited. The socketserver module's classes can be used to create a variety of custom servers, including UDP servers and Unix TCP and UDP servers, by inheriting the appropriate pair of base classes. Note that the socketserver mixin class we used must always be inherited first. This is to ensure that the mixin class's methods are used in preference to the second class's methods for those methods that are provided by both, since Python looks for methods in the base classes in the order in which the base classes are specified, and uses the first suitable method it finds.
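So, on a Unix system (socketserver.ForkingMixIn relies on os.fork()), the process-per-connection variant the text describes would simply be:

class CarRegistrationServer(socketserver.ForkingMixIn,
                            socketserver.TCPServer): pass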
8,998 | The socket server creates a request handler (using the class it was given) to handle each request. Our custom RequestHandler class provides a method for each kind of request it can handle, plus the handle() method that it must have, since that is the only method used by the socket server. But before looking at the methods we will look at the class declaration and the class's class variables.

class RequestHandler(socketserver.StreamRequestHandler):

    CarsLock = threading.Lock()
    CallLock = threading.Lock()
    Call = dict(
        get_car_details=lambda self, *args: self.get_car_details(*args),
        change_mileage=lambda self, *args: self.change_mileage(*args),
        change_owner=lambda self, *args: self.change_owner(*args),
        new_registration=lambda self, *args: self.new_registration(*args),
        shutdown=lambda self, *args: self.shutdown(*args))

We have created a socketserver.StreamRequestHandler subclass since we are using a streaming (TCP) server; a corresponding socketserver.DatagramRequestHandler is available for UDP servers, or we could inherit the socketserver.BaseRequestHandler class for lower-level access. The RequestHandler.Cars dictionary is a class variable that was added in the main() function; it holds all the registration data. Adding additional attributes to objects (such as classes and instances) can be done outside the class (in this case in the main() function) without formality (as long as the object has a __dict__), and can be very convenient. Since we know that the class depends on this variable, some programmers would have added Cars = None as a class variable to document the variable's existence. Almost every request-handling method needs access to the cars data, but we must ensure that the data is never accessed by two methods (from two different threads) at the same time; if it is, the dictionary may become corrupted, or the program might crash. To avoid this we have a lock class variable that we will use to ensure that only one thread at a time accesses the Cars dictionary (threading, including the use of locks, is covered in the previous chapter). The Call dictionary is another class variable. Each key is the name of an action that the server can perform and each value is a function for performing the action. (The GIL ensures that accesses to the Cars dictionary are synchronized, but as noted earlier, we do not take advantage of this since it is a CPython implementation detail.)
8,999 | We cannot use the methods directly, as we did with the functions in the client's menu dictionary, because there is no self available at the class level. The solution we have used is to provide wrapper functions that will get self when they are called, and which in turn call the appropriate method with the given self and any other arguments. An alternative solution would be to create the Call dictionary after all the methods; that would allow us to create entries such as get_car_details=get_car_details, with Python able to find the get_car_details() method because the dictionary is created after the method is defined. We have used the first approach since it is more explicit and does not impose an order dependency on where the dictionary is created. Although the Call dictionary is only ever read after the class is created, since it is mutable we have played it extra-safe and created a lock for it to ensure that no two threads access it at the same time. (Again, because of the GIL, the lock isn't really needed for CPython.)

    def handle(self):
        SizeStruct = struct.Struct("!I")
        size_data = self.rfile.read(SizeStruct.size)
        size = SizeStruct.unpack(size_data)[0]
        data = pickle.loads(self.rfile.read(size))
        try:
            with self.CallLock:
                function = self.Call[data[0]]
            reply = function(self, *data[1:])
        except Finish:
            return
        data = pickle.dumps(reply, 3)
        self.wfile.write(SizeStruct.pack(len(data)))
        self.wfile.write(data)

Whenever a client makes a request a new thread is created with a new instance of the RequestHandler class, and then the instance's handle() method is called. Inside this method the data coming from the client can be read from the self.rfile file object, and data can be sent back to the client by writing to the self.wfile object--both of these objects are provided by socketserver, opened and ready for use. The struct.Struct is for the integer byte count that we need for the "length plus pickle" format we are using to exchange data between clients and the server. We begin by reading four bytes and unpacking this as the size integer so that we know the size of the pickle we have been sent. Then we read size bytes and unpickle them into the data variable. The read will block until the data is read. In this case we know that data will always be a tuple, with the first item being the requested action and the other items being the parameters, because that is the protocol we have established with the car registration clients.
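For comparison, the order-dependent alternative for the Call dictionary that was mentioned above would look something like this sketch (method bodies elided):

import socketserver

class RequestHandler(socketserver.StreamRequestHandler):

    def get_car_details(self, *args):
        ...

    def change_mileage(self, *args):
        ...

    # created after the methods, so the plain function objects are in scope;
    # Call["get_car_details"](self, *args) then calls the method directly
    Call = dict(get_car_details=get_car_details,
                change_mileage=change_mileage)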