Hash Tables

entry, which results in secondary clustering. This occurs because the probe equation is based solely on the original hash slot. A better approach for reducing secondary clustering is to base the probe sequence on the key itself. In double hashing, when a collision occurs, the key is hashed by a second function and the result is used as the constant factor in the linear probe:

    slot = (home + i * hp(key)) % M

While the step size remains constant throughout the probe, multiple keys that map to the same table entry will have different probe sequences. To reduce clustering, the second hash function should not be the same as the main hash function, and it should produce a valid index in the range [0 ... M-1]. A simple choice for the second hash function takes the form

    hp(key) = 1 + key % P

where P is some constant less than M. For example, suppose we define a second hash function of this form and use it with double hashing to build a hash table from our sample keys. This results in only two collisions, and the resulting hash table is illustrated in the accompanying figure. The double hashing technique is most commonly used to resolve collisions since it reduces both primary and secondary clustering. To ensure every table entry is visited during the probing, the table size must be a prime number. We leave it as an exercise to show why this is necessary.

[Figure: the hash table using double hashing.]

Rehashing

We have looked at how to use and manage a hash table, but how do we decide how big the hash table should be? If we know the number of entries that will be
stored in the table, we can easily create a table large enough to hold the entire collection. In many instances, however, there is no way to know up front how many keys will be stored in the hash table. In this case, we can start with a table of some given size and then grow or expand the table as needed to make room for more entries. We used a similar approach with the vector: when all available slots in the underlying array had been consumed, a new larger array was created and the contents of the vector copied to the new array. With a hash table, we create a new array larger than the original, but we cannot simply copy the contents from the old array to the new one. Instead, we have to rebuild or rehash the entire table by adding each key to the new array as if it were a new key being added for the first time. Remember, the search keys were added to the hash table based on the result of the hash function, and the result of the function is based on the size of the table. If we increase the size of the table, the function will return different hash values and the keys may be stored in different entries than in the original table. For example, suppose we create a larger hash table and insert our set of sample keys using a simple linear probe. Applying the hash function to the keys yields a new set of results, which includes a single collision. The original hash table using the linear probe is shown in part (a) of the figure and the new larger hash table is shown in part (b); you will notice the keys are stored in different locations due to the larger table size.

[Figure: the result of enlarging the hash table.]

As the table becomes more full, the more likely it is that collisions will occur. Experience has shown that hashing works best when the table is no more than approximately three quarters full. Thus, if the hash table is to be expanded, it should be done before the table becomes full. The ratio between the number of keys in the hash table and the size of the table is called the
load factor. In practice, a hash table should be expanded before the load factor reaches that point. The amount by which the table should be expanded can depend on the application, but a good rule of thumb is to at least double its size, as we indicated earlier.
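The load-factor rule of thumb can be sketched in a few lines of Python. This is an illustrative helper, not part of the book's hash table code; the 3/4 threshold and the double-then-add-one growth rule are the assumptions taken from the surrounding text.

```python
# Illustrative sketch: when and how much to expand a hash table.
# The 3/4 threshold mirrors the "three quarters full" guideline above;
# doubling and adding 1 yields an odd table size.

def load_factor(num_keys, table_size):
    # The load factor is the ratio of stored keys to table slots.
    return num_keys / table_size

def needs_rehash(num_keys, table_size, limit=0.75):
    # Expand before the table reaches the limit.
    return load_factor(num_keys, table_size) >= limit

def expanded_size(table_size):
    # At least double the size; adding 1 keeps the new size odd.
    return table_size * 2 + 1

print(load_factor(9, 16))    # 0.5625
print(needs_rehash(13, 16))  # True
print(expanded_size(16))     # 33
```

A production table would additionally advance the result of expanded_size() to the next prime number, for the reasons given in the text.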
Most of the probing techniques can benefit from a table size that is a prime number. To determine the actual size of the new table, we can first double the original size and then search for the first prime number greater than that value. Depending on the application and the type of probing used, you may be able to simply double the size and add one. Note that by adding one, the resulting size will be an odd number, which results in fewer divisors for the given table size.

Efficiency Analysis

The ultimate goal of hashing is to provide direct access to data items based on the search keys in the hash table. But, as we've seen, collisions routinely occur due to multiple keys mapping to the same table entry. The efficiency of the hash operations depends on the hash function, the size of the table, and the type of probe used to resolve collisions. The insertion and deletion operations both require a search to locate the slot into which a new key can be inserted or the slot containing the key to be deleted. Once the slot has been located, the insertion and deletion operations are simple and only require constant time. The time required to perform the search is the main contributor to the overall time of the three hash table operations: searching, insertions, and deletions. To evaluate the search performed in hashing, assume there are n elements currently stored in a table of size m. In the best case, which only requires constant time, the key maps directly to the table entry containing the target and no collision occurs. When a collision occurs, however, a probe is required to find the target key. In the worst case, the probe has to visit every entry in the table, which requires O(m) time. From this analysis, it appears as if hashing is no better than a basic linear search, which also requires linear time. The difference, however, is that hashing is very efficient in the average case. The average case assumes the keys are uniformly distributed throughout the table. It depends on the average probe length, and the average probe length depends
on the load factor. Given the load factor a = n/m, Donald Knuth, author of the definitive book series on data structures and algorithms, The Art of Computer Programming, derived equations for the average probe length. The times depend on the type of probe used in the search and whether the search was successful. When using a linear probe, the average number of comparisons required to locate a key in the hash table for a successful search is

    1/2 * (1 + 1/(1 - a))

and for an unsuccessful search

    1/2 * (1 + 1/(1 - a)^2)

When using a quadratic probe or double hashing, the average number of comparisons required to locate a key for a successful search is

    -log(1 - a) / a

and for an unsuccessful search

    1 / (1 - a)

The table below shows the average number of comparisons for both linear and quadratic probes when used with various load factors. As the load factor increases, the average number of comparisons becomes very large, especially for an unsuccessful search. The data in the table also shows that the quadratic and double hashing probes can allow for higher load factors than the linear probe.

[Table: average search times for both linear and quadratic probes at various load factors.]

Based on experiments and the equations above, we can conclude that the hash operations only require an average time of O(1) when the load factor is between 1/2 and 2/3. Compare this to the average times for the linear and binary searches, O(n) and O(log n), respectively, and we find that hashing provides an efficient solution for the search operation.

Separate Chaining

When a collision occurs, we have to probe the hash table to find another available slot. In the previous section, we reviewed several probing techniques that can be used to help reduce the number of collisions. But we can eliminate collisions entirely if we allow multiple keys to share the same table entry. To accommodate multiple keys, linked lists can be used to store the individual keys that map to the same entry. The linked lists are commonly referred to as chains, and this technique of collision resolution is known as separate chaining. In separate chaining, the hash table is constructed as an array of linked lists. The keys are mapped to an individual index in the usual way, but instead of storing
the key into the array elements, the keys are inserted into the linked list referenced from the corresponding entry; there's no need to probe for a different slot. New keys can be prepended to the linked list since the nodes are in no particular order, as illustrated in the accompanying figure.

[Figure: hash table using separate chaining.]

The search operation is much simpler when using separate chaining. After mapping the key to an entry in the table, the corresponding linked list is searched to determine if the key is in the table. When deleting a key, the key is again mapped in the usual way to find the linked list containing that key. After locating the list, the node containing the key is removed from the linked list, just as if we were removing any other item from a linked list. Since the keys are not stored in the array elements themselves, we no longer have to mark the entry as having been filled by a previously deleted key. Separate chaining is also known as open hashing since the keys are stored outside the table. The term closed hashing is used when the keys are stored within the elements of the table, as described in the previous section. To confuse things a bit, some computer scientists also use the term closed addressing to describe open hashing and open addressing to describe closed hashing. The use of the addressing terms refers to the possible locations of the keys in relation to the table entries. In open addressing, the keys may have been stored in an open slot different from the one to which they originally mapped, while in closed addressing, the key is contained within the entry to which it mapped. The table size used in separate chaining is not as important as in closed hashing since multiple keys can be stored in the various linked lists. But it still requires attention since a better key distribution can be achieved if the table size is a prime number. In addition, if the table is too small, the linked lists will grow larger with the addition of each new key. If the lists become too large, the table can be rehashed, just as we did when using closed hashing. The analysis of the efficiency for separate chaining is similar to that of closed hashing. As before, the search required to locate a key is the most time consuming
part of the hash operations. Mapping a key to an entry in the hash table can be done in one step, but the time to search the corresponding linked list is based on the length of that list. In the worst case, the list will contain all of the keys stored in the hash table, resulting in a linear time search. As with closed hashing, separate chaining is very efficient in the average case. The average time to locate a key within the hash table assumes the keys are uniformly distributed across the table, and it depends on the average length of the linked lists. If the hash table contains n keys and m entries, the average list length is n/m, which is the same as the load factor a. Deriving equations for the average number of searches in separate chaining is much easier than with closed hashing. The average number of comparisons required to locate a key in the hash table for a successful search is approximately 1 + a/2, and for an unsuccessful search it is approximately 1 + a. When the load factor is less than 2 (twice the number of keys as compared to the number of table entries), it can be shown that the hash operations only require O(1) time in the average case. This is a better average time than that for closed hashing, which is an advantage of separate chaining. The drawback to separate chaining, however, is the need for additional storage used by the link fields in the nodes of the linked lists.

Hash Functions

The efficiency of hashing depends in large part on the selection of a good hash function. As we saw earlier, the purpose of a hash function is to map a set of search keys to a range of index values corresponding to entries in a hash table. A "perfect" hash function will map every key to a different table entry, resulting in no collisions. But this is seldom achieved except in cases like our collection of products, in which the keys are within a small range or the keys are known beforehand. Instead, we try to design a good hash function that will distribute the keys across the range of hash table indices as evenly as possible. There are several important guidelines to consider in
designing or selecting a hash function:

- The computation should be simple in order to produce quick results.
- The resulting index cannot be random. When a hash function is applied multiple times to the same key, it must always return the same index value.
- If the key consists of multiple parts, every part should contribute in the computation of the resulting index value.
- The table size should be a prime number, especially when using the modulus operator. This can produce better distributions and fewer collisions as it tends to reduce the number of keys that share the same divisor.
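As a small illustration of these guidelines, the sketch below hashes a compound key with the division method so that every part contributes to the index. The multiplier 31, the (year, month, day) tuple key, and the table size 13 are hypothetical choices for this example, not values from the text.

```python
# Division-method hash for a multi-part key. The table size M is prime,
# and every component of the key influences the resulting index.

M = 13  # prime table size (hypothetical)

def hash_parts(parts):
    result = 0
    for part in parts:
        # Fold each part into the running value before taking the remainder.
        result = result * 31 + part
    return result % M

# Applying the function twice to the same key yields the same index:
# the result is deterministic, never random.
print(hash_parts((2024, 5, 17)))                               # 7
print(hash_parts((2024, 5, 17)) == hash_parts((2024, 5, 17)))  # True
```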
Integer keys are the easiest to hash, but there are many times when we have to deal with keys that are either strings or a mixture of strings and integers. When dealing with non-integer keys, the most common approach is to first convert the key to an integer value and then apply an integer-based hash function to that value. In this section, we first explore several hash functions that can be used with integers and then look at common techniques used to convert strings to integer values that can then be hashed.

Division

The simplest hash function for integer values is the one we have been using throughout the chapter. The integer key, or a mixed type key that has been converted to an integer, is divided by the size of the hash table, with the remainder becoming the hash table index:

    h(key) = key % M

Computing the remainder of an integer key is the easiest way to ensure the resulting index always falls within the legal range of indices. The division technique is one of the most commonly used hash functions, applied directly to an integer key or after converting a mixed type key to an integer.

Truncation

For large integers, some columns in the key value are ignored and not used in the computation of the hash table index. In this case, the index is formed by selecting the digits from specific columns and combining them into an integer within the legal range of indices. For example, if the keys are composed of integer values that all contain seven digits, we can concatenate the digits from, say, the first, third, and sixth columns (counting from right to left) to form the index value.

Folding

In this method, the key is split into multiple parts and then combined into a single integer value by adding or multiplying the individual parts. The resulting integer value is then either truncated or the division method is applied to fit it within the range of legal table entries. For example, given a key value consisting of seven digits, we can split it into three smaller
integer values and then sum these to obtain a new integer. The division method can then be used to obtain the hash table index. This method can also be used when the keys store data with explicit components, such as social security numbers or phone numbers.

Hashing Strings

Strings can also be stored in a hash table. The string representation has to be converted to an integer value that can be used with the division or truncation
methods to generate an index within the valid range. There are many different techniques available for this conversion. The simplest approach is to sum the ASCII values of the individual characters. For example, if we use this method to hash the string 'hashing', the result will be 738. This approach works well with small hash tables, but when used with larger tables, short strings will not hash to the larger index values; those entries will only be used when probed. For example, suppose we apply this method to strings containing seven characters, each with a maximum ASCII value of 127. Summing the ASCII values will yield a maximum value of 889. A second approach that can provide good results regardless of the string length uses a polynomial:

    s0 * a^(n-1) + s1 * a^(n-2) + ... + s(n-2) * a + s(n-1)

where a is a non-zero constant, si is the ith character of the string, and n is the length of the string. If we use this method with the string 'hashing' and a suitable choice of a, the resulting hash value can then be used with the division method to yield an index value within the valid range.

The HashMap Abstract Data Type

One of the most common uses of a hash table is for the implementation of a map. In fact, Python's dictionary is implemented using a hash table with closed hashing. The definition of the Map ADT allows for the use of any type of comparable key, which differs from Python's dictionary since the latter requires the keys to be hashable. That requirement can limit the efficient use of the dictionary since we must define our own hash function for any user-defined types that are to be used as dictionary keys. Our hash function must produce good results or the dictionary operations may not be very efficient. In this section, we provide an implementation for the map that is very similar to the approach used in implementing Python's dictionary. Since this version requires the keys to be hashable, we use the name HashMap to distinguish it from the more general Map ADT. For the implementation of the HashMap ADT, we are going to use a hash table with closed hashing and a double
hashing probe. The source code is provided in the listing below.

The Hash Table

In implementing the HashMap ADT, we must first decide how big the hash table should be. The HashMap ADT is supposed to be a general purpose structure that can store any number of key/value pairs. To maintain this property, we must allow the hash table to expand as needed. Thus, we can start with a relatively small table and allow it to expand as needed by rehashing each time the load factor is exceeded. The next question we need to answer is: what load factor
14,108 | hash tables should we useas we saw earliera load factor between / and / provides good performance in the average case for our implementation we are going to use load factor of / listing the hashmap py module implementation of the map adt using closed hashing and probe with double hashing from arrays import array class hashmap defines constants to represent the status of each table entry unused none empty _mapentrynonenone creates an empty map instance def __init__self )self _table array self _count self _maxcount len(self _tablelen(self _table/ returns the number of entries in the map def __len__self )return self _count determines if the map contains the given key def __contains__selfkey )slot self _findslotkeyfalse return slot is not none adds new entry to the map if the key does not exist otherwisethe new value replaces the current value associated with the key def addselfkeyvalue )if key in self slot self _findslotkeyfalse self _table[slotvalue value return false else slot self _findslotkeytrue self _table[slot_mapentrykeyvalue self _count + if self _count =self _maxcount self _rehash(return true returns the value associated with the key def valueofselfkey )slot self _findslotkeyfalse assert slot is not none"invalid map key return self _table[slotvalue removes the entry associated with the key def removeselfkey ) |
14,109 | returns an iterator for traversing the keys in the map def __iter__self )finds the slot containing the key or where the key can be added forinsert indicates if the search is for an insertionwhich locates the slot into which the new key can be added def _findslotselfkeyforinsert )compute the home slot and the step size slot self _hash key step self _hash key probe for the key len(self _tablewhile self _table[slotis not unused if forinsert and (self _table[slotis unused or self _table[slotis emptyreturn slot elif not forinsert and (self _table[slotis not empty and self _table[slotkey =keyreturn slot else slot (slot stepm rebuilds the hash table def _rehashself create new larger table origtable self _table newsize len(self _table self _table arraynewsize modify the size attributes self _count self _maxcount newsize newsize / add the keys from the original array to the new table for entry in origtable if entry is not unused and entry is not empty slot self _findslotkeytrue self _table[slotentry self _count + the main hash function for mapping keys to table entries def _hash selfkey )return abshash(keylen(self _tablethe second hash function used with double hashing probes def _hash selfkey )return abshash(key(len(self _table storage class for holding the key/value pairs class _mapentry def __init__selfkeyvalue )self key key self value value |
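To exercise the probe logic outside the class, here is a condensed, self-contained sketch of the same closed-hashing scheme: a plain Python list stands in for the Array class, entries are (key, value) tuples, and the keys 765, 431, and 96 are arbitrary sample values, not the book's data.

```python
# Minimal closed hashing with a double-hashing probe, mirroring the
# structure of the HashMap listing (without deletions or rehashing).
M = 7
table = [None] * M   # None marks an unused slot

def h1(key):
    return abs(hash(key)) % M

def h2(key):
    return 1 + abs(hash(key)) % (M - 2)

def insert(key, value):
    slot = h1(key)
    step = h2(key)
    # Probe until an unused slot, or the key itself, is found.
    while table[slot] is not None and table[slot][0] != key:
        slot = (slot + step) % M
    table[slot] = (key, value)

def value_of(key):
    slot = h1(key)
    step = h2(key)
    # An unused slot ends the probe: the key is not in the table.
    while table[slot] is not None:
        if table[slot][0] == key:
            return table[slot][1]
        slot = (slot + step) % M
    return None

insert(765, "abc")
insert(431, "def")
insert(96, "ghi")
print(value_of(431))  # def
print(value_of(999))  # None
```

Unlike the full HashMap, this sketch never rehashes, so it must always hold fewer keys than M for the probes to terminate.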
In the constructor, we create three attributes: _table stores the array used for the hash table, _count indicates the number of keys currently stored in the table, and _maxCount indicates the maximum number of keys that can be stored in the table before exceeding the load factor. Instead of using floating-point operations to determine if the load factor has been exceeded, we can store the maximum number of keys needed to reach that point. Each time the table is expanded, a new value of _maxCount is computed; for the initial table size of 7, this value will be 5. The key/value entries can be stored in the table using the same storage class _MapEntry as used in our earlier implementation. But we also need a way to flag an entry as having been previously used by a key that has now been deleted. The easiest way to do this is with the use of a dummy _MapEntry object. When a key is deleted, we simply store an alias of the dummy object reference in the corresponding table entry. For easier readability of the source code, we create two named constants to indicate the two special states for the table entries: an UNUSED entry, which is indicated by a null reference, is one that has not yet been used to store a key, and an EMPTY entry is one that had previously stored a key but has now been deleted. The third possible state of an entry, which is easily determined if the entry is not one of the other two states, is one that is currently occupied by a key.

Hash Functions

Our implementation will need two hash functions: the main function for mapping the key to a home position and the function used with the double hashing. For both functions, we are going to use the simple division method, in which the key value is divided by the size of the table and the remainder becomes the index to which the key maps. The division hash functions defined earlier assumed the search key is an integer value, but the HashMap ADT allows for the storage of any type of search key, which includes strings, floating-point values, and even user-defined types. To accommodate keys of various data types, we can use Python's built-in hash() function, which is automatically defined for all of the built-in types. It hashes the given key and returns an integer value that can be used in the division method. But the value returned by Python's hash() function can be any integer, not just positive values or those within a given range. We can still use the function: we simply take its absolute value and then divide it by the size of the table. The main hash function for our implementation is defined as

    h(key) = |hash(key)| % M

while the second function for use with double hashing is defined as

    hp(key) = 1 + |hash(key)| % (M - 2)

The size of our hash table will always be an odd number, so we subtract 2 from the size of the table in the second function to ensure the division is by an odd number. The two hash functions are implemented as the _hash1 and _hash2 methods at the bottom of the HashMap class.
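The two functions can be tried directly with integer keys, for which CPython's hash(n) equals n for small n, making the results deterministic. The table size 7 and the sample keys below are chosen for illustration.

```python
# The main and secondary division-based hash functions. abs() guards
# against hash() returning a negative value, and subtracting 2 from an
# odd table size keeps the second divisor odd.
M = 7

def hash1(key):
    return abs(hash(key)) % M

def hash2(key):
    return 1 + abs(hash(key)) % (M - 2)

for key in (54, 26, 93):
    print(key, hash1(key), hash2(key))
# 54 and 26 collide at home slot 5, but their probe steps differ
# (5 vs 2), so their probe sequences diverge after the first collision.
```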
To use objects of a user-defined class as keys in the dictionary, the class must implement both the __hash__ and __eq__ methods. The __hash__ method should hash the contents of the object and return an integer that can be used by either of our two hash functions, h() and hp(). The __eq__ method is needed for the equality comparison that determines if the key stored in a given slot is the target key.

Searching

As we have seen, a search has to be performed no matter which hash table operation we use. To aid in the search, we create the _findSlot() helper method. Searching the table to determine if a key is simply contained in the table and searching for a key to be deleted require the same sequence of steps. After mapping the key to its home position, we determine if the key was found at this location or if a probe has to be performed. When probing, we step through the keys using the step size returned by the second hash function. The probe continues until the key has been located or we encounter an unused slot (one containing a null reference). The search used to locate a slot for the insertion of a new key, however, has one major difference: the probe must also terminate if we encounter a table entry marked as empty from a previously deleted key, since a new key can be stored in such an entry. This minor difference between the two types of searches is handled by the forInsert argument. When True, a search is performed for the location where a new key can be inserted and the index of that location is returned. When the argument is False, a normal search is performed, and either the index of the entry containing the key is returned or None is returned when the key is not in the table. When used in the __contains__ and valueOf() methods, the _findSlot() method is called with a value of False for the forInsert argument.

Insertions

The add() method also uses the _findSlot() helper method. In fact, it's called twice. First, we determine if the key is in the table, which indirectly calls the __contains__ method. If the key is in the table, we have to locate the key through a normal search and modify its corresponding value. On the other hand, if the key is not in the table, _findSlot() is called with a value of True passed to the forInsert argument to locate the next available slot. Finally, if the key is new and has to be added to the table, we check the count and determine if it exceeds the load factor, in which case the table has to be rehashed. The remove operation and the implementation of an iterator for use with this new version of the Map ADT are left as exercises.

Rehashing

The _rehash() method is shown in the listing above. The first step is to create a new larger array. For simplicity, the new size is computed to be
twice the original size plus one, which ensures an odd value. A more efficient solution would ensure the new size is always a prime number by searching for the next prime number larger than twice the original size. The original array is saved in a temporary variable and the new array is assigned to the _table attribute. The reason for assigning the new array to the attribute at this time is that we will need to use the _findSlot() method to add the keys to the new array, and that method works off the _table attribute. The _count and _maxCount attributes are also reset; the value of _maxCount is set to be approximately two-thirds the size of the new table. Finally, the key/value pairs are added to the new array, one at a time. Instead of using the add() method, which first verifies the key is new, we perform each insertion directly within the for loop.

Application: Histograms

Graphical displays or charts of tabulated frequencies are very common in statistics. These charts, known as histograms, are used to show the distribution of data across discrete categories. A histogram consists of a collection of categories and counters. The number and types of categories can vary depending on the problem. The counters are used to accumulate the number of occurrences of values within each category for a given data collection. Consider the example histogram in the figure below: the five letter grades are the categories and the heights of the bars represent the value of the counters.

[Figure: a sample histogram for a distribution of grades.]

The Histogram Abstract Data Type

We can define an abstract data type for collecting and storing the frequency counts used in constructing a histogram. An ideal ADT would allow for building a general purpose histogram that can contain many different categories and be used with many different problems.
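Before defining the ADT, the core idea, each category mapped to a counter, can be previewed with Python's built-in dictionary; the grade values below are made-up sample data.

```python
# A frequency count over discrete categories: the essence of a histogram.
counts = dict.fromkeys("ABCDF", 0)

# Tally some sample letter grades (hypothetical data).
for grade in ['B', 'C', 'A', 'C', 'B', 'C', 'F', 'D', 'B']:
    counts[grade] += 1

# One '*' per occurrence, as in a text-based bar chart.
for category in "ABCDF":
    print(category, '*' * counts[category])
```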
Define: Histogram ADT

A histogram is a container that can be used to collect and store discrete frequency counts across multiple categories, representing a distribution of data. The category objects must be comparable.

- Histogram( catSeq ): Creates a new histogram containing the categories provided in the given sequence, catSeq. The frequency counts of the categories are initialized to zero.
- getCount( category ): Returns the frequency count for the given category, which must be valid.
- incCount( category ): Increments the count by 1 for the given category. The supplied category must be valid.
- totalCount(): Returns the sum of the frequency counts for all of the categories.
- iterator(): Creates and returns an iterator for traversing over the histogram categories.

Building a Histogram

The program in the listing below produces a text-based version of the grade-distribution histogram and illustrates the use of the Histogram ADT. The program extracts a collection of numeric grades from a text file and assigns a letter grade to each value based on the common 10-point scale. The frequency counts of the letter grades are tabulated and then used to produce a histogram.

Listing: the buildhist.py program.

    # Prints a histogram for a distribution of letter grades computed
    # from a collection of numeric grades extracted from a text file.
    from maphist import Histogram

    def main():
        # Create a Histogram instance for computing the frequencies.
        gradeHist = Histogram("ABCDF")

        # Open the text file containing the grades.
        gradeFile = open('cs101grades.txt', "r")

        # Extract the grades and increment the appropriate counter.
        for line in gradeFile:
            grade = int(line)
            gradeHist.incCount(letterGrade(grade))

        # Print the histogram chart.
        printChart(gradeHist)

    # Determines the letter grade for the given numeric value.
    def letterGrade(grade):
        if grade >= 90:
            return 'A'
        elif grade >= 80:
            return 'B'
        elif grade >= 70:
            return 'C'
        elif grade >= 60:
            return 'D'
        else:
            return 'F'

    # Prints the histogram as a horizontal bar chart.
    def printChart(gradeHist):
        print("          Grade Distribution")

        # Print the body of the chart.
        letterGrades = ('A', 'B', 'C', 'D', 'F')
        for letter in letterGrades:
            print("  |")
            print(letter, "+", end="")
            freq = gradeHist.getCount(letter)
            print('*' * freq)

        # Print the x-axis.
        print("  |")
        print("  +----+----+----+----+----+----+----+----")

    # Calls the main routine.
    main()

The buildhist.py program consists of three functions. The main() function drives the program: it extracts the numeric grades and builds an instance of the Histogram ADT, initializing the histogram to contain the five letter grades as its categories. The letterGrade() function is a helper function, which simply returns the letter grade for the given numeric value. The printChart() function prints the text-based histogram using the frequency counts computed in the main routine. Assuming a collection of numeric grades is extracted from the text file, the buildhist.py program would produce a text-based histogram such as the following:
              Grade Distribution
      |
    A +*****
      |
    B +********
      |
    C +**********
      |
    D +*****
      |
    F +**
      |
      +----+----+----+----+----+----+----+----

Implementation

To implement the Histogram ADT, we must select an appropriate data structure for storing the categories and corresponding frequency counts. There are several different structures and approaches that can be used, but the Map ADT provides an ideal solution since it already stores key/value mappings and allows for a full implementation of the Histogram ADT. To use a map, the categories can be stored in the key part of the key/value pairs and a counter (integer value) can be stored in the value part. When a category counter is incremented, the entry is located by its key and the corresponding value can be incremented and stored back into the entry. The implementation of the Histogram ADT, using an instance of the hash table version of the Map ADT as the underlying structure, is provided in the listing below.

Listing: the maphist.py module.

    # Implementation of the Histogram ADT using a hash map.
    from hashmap import HashMap

    class Histogram:
        # Creates a histogram containing the given categories.
        def __init__(self, catSeq):
            self._freqCounts = HashMap()
            for cat in catSeq:
                self._freqCounts.add(cat, 0)

        # Returns the frequency count for the given category.
        def getCount(self, category):
            assert category in self._freqCounts, "Invalid histogram category."
            return self._freqCounts.valueOf(category)

        # Increments the counter for the given category.
        def incCount(self, category):
            assert category in self._freqCounts, "Invalid histogram category."
            value = self._freqCounts.valueOf(category)
            self._freqCounts.add(category, value + 1)

        # Returns the sum of the frequency counts.
        def totalCount(self):
            total = 0
            for cat in self._freqCounts:
                total += self._freqCounts.valueOf(cat)
            return total

        # Returns an iterator for traversing the categories.
        def __iter__(self):
            return iter(self._freqCounts)

The iterator operation defined by the ADT is implemented by the __iter__ method, which is supposed to create and return an iterator object that can be used with the given collection. Since the Map ADT already provides an iterator for traversing over the keys, we can have Python access and return that iterator as if we had created our own. This is done using the iter() function, as shown in our implementation of the __iter__ method.

The Color Histogram

A histogram is used to tabulate the frequencies of multiple discrete categories. The Histogram ADT from the previous section works well when the collection of categories is small. Some applications, however, may deal with millions of distinct categories, none of which are known up front, and require a specialized version of the histogram. One such example is the color histogram, which is used to tabulate the frequency counts of individual colors within a digital image. Color histograms are used in areas of image processing and digital photography for image classification, object identification, and image manipulation. Color histograms can be constructed for any color space, but we limit our discussion to the more common discrete RGB color space. In the RGB color space, individual colors are specified by intensity values for the three primary colors: red, green, and blue. This color space is commonly used in computer applications and computer graphics because it is very convenient for modeling the human visual system. The intensity values in the RGB color
spacealso referred to as color componentscan be specified using either real values in the range [ or discrete values in the range [ the discrete version is the most commonly used for the storage of digital imagesespecially those produced by digital cameras and scanners with discrete values for the three color componentsmore than million colors can be representedfar more than humans are capable of distinguishing value of indicates no intensity for the given component while |
14,117 | indicates full intensity thuswhite is represented with all three components set to while black is represented with all three components set to we can define an abstract data type for color histogram that closely follows that of the general histogramdefine color histogram adt color histogram is container that can be used to collect and store frequency counts for multiple discrete rgb colors colorhistogram()creates new empty color histogram getcountredgreenblue )returns the frequency count for the given rgb colorwhich must be valid inccountredgreenblue )increments the count by for the given rgb color if the color was previously added to the histogram or the color is added to the histogram as new entry with count of totalcount()returns sum of the frequency counts for all colors in the histogram iterator ()creates and returns an iterator for traversing over the colors in the color histogram there are number of ways we can construct color histogrambut we need fast and memory-efficient approach the easiest approach would be to use three-dimensional array of size where each element of the array represents single color this approachhoweveris far too costly it would require array elementsmost of which would go unused on the other handthe advantage of using an array is that accessing and updating particular color is direct and requires no costly operations other options include the use of python list or linked list but these would be inefficient when working with images containing millions of colors in this we've seen that hashing can be very efficient technique when used with good hash function for the color histogramclosed hashing would not be an ideal choice since it may require multiple rehashes involving hundreds of thousandsif not millionsof colors separate chaining can be used with good resultsbut it requires the design of good hash function and the selection of an appropriately sized hash table different approach can be used that combines the advantages of 
the direct access of the - array and the limited memory use and fast searches possible with hashing and separate chaining instead of using - array to store the separate chainswe can use - array of size the colors can be mapped to specific chain by having the rows correspond to the red color component and the columns correspond to the green color component thusall colors having the |
14,118 | hash tables same red and green components will be stored in the same chainwith only the blue components differing figure illustrates this - array of linked lists ** ** figure - array of linked lists used to store color counts in color histogram given digital image consisting of distinct pixelsall of which may contain unique colorsthe histogram can be constructed in linear time this time is derived from the fact that searching for the existence of color can be done in constant time locating the specific - array entry in which the color should be stored is direct mapping to the corresponding array indices determining if the given color is contained in the corresponding linked list requires linear search over the entire list since all of the nodes in the linked list store colors containing the same red and green componentsthey only differ in their blue components given that there are only different blue component valuesthe list can never contain more than entries thusthe length of the linked list is independent of the number of pixels in the image this results in worst case time of ( to search for the existence of color in the histogram in order to increment its count or to add new color to the histogram search is required for each of the distinct image pixelsresulting in total time (nin the worst case after the histogram is constructeda traversal over the unique colors contained in the histogram is commonly performed we could traverse over the entire - arrayone element at timeand then traverse the linked list referenced from the individual elements but this can be time consuming since in practicemany of the elements will not contain any colors insteadwe can maintain single separate linked list that contains the individual nodes from the various hash chainsas illustrated in figure when new color is added to the histograma node is |
14,119 | created and stored in the corresponding chain if we were to include second link within the same nodes used in the chains to store the colors and color countswe can then easily add each node to separate linked list this list can then be used to provide complete traversal over the entries in the histogram without wasting time in visiting the empty elements of the - array the implementation of the color histogram is left as an exercise ** ** colorlist figure the individual chain nodes are linked together for faster traversals exercises assume an initially empty hash table with entries in which the hash function uses the division method show the contents of the hash table after the following keys are inserted (in the order listed)assuming the indicated type of probe is used (alinear probe (with (blinear probe (with (cquadratic probe (ddouble hashing [with hp(key(key (eseparate chaining |
14,120 | hash tables do the same as in exercise but use the following hash function to map the keys to the table entriesh(key( key show the contents of the hash table from exercise after rehashing with new table containing entries consider hash table of size that contains keys (awhat is the load factor(bwhat is the average number of comparisons required to determine if the collection contains the key ifi linear probing is used ii quadratic probing is used iii separate chaining is used do the same as in exercise but for hash table of size that contains keys show why the table size must be prime number in order for double hashing to visit every entry during the probe design hash function that can be used to map the two-character state abbreviations (including the one for the district of columbiato entries in hash table that results in no more than three collisions when used with table where programming projects implement the remove operation for the hashmap adt design and implement an iterator for use with the implementation of the hashmap adt modify the implementation of the hashmap adt to(ause linear probing instead of double hashing (buse quadratic probing instead of double hashing (cuse separate chaining instead of closed hashing design and implement program that compares the use of linear probingquadratic probingand double hashing on collection of string keys of varying lengths the program should extract collection of strings from text file and compute the average number of collisions and the average number of probes implement the color histogram adt using the - array of chains as described in the |
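The color histogram project above, built on the 2-D array of chains described in the chapter, can be sketched roughly as follows. This is only a sketch, not the book's solution: it uses a 256 x 256 Python list of chains keyed by the red and green components, with each chain entry holding a blue value and its count, and it mirrors the chapter's idea of also linking every entry into a single flat list for fast traversal. All names are illustrative, the iterator is assumed to yield (red, green, blue) tuples, and getCount() is relaxed to return 0 for absent colors instead of asserting.

```python
class ColorHistogram:
    """Sketch of a color histogram: a 256 x 256 table of chains indexed
    by (red, green); entries within a chain differ only in blue."""

    def __init__(self):
        # The 2-D table of chains; a chain is created lazily on first use.
        self._table = [[None] * 256 for _ in range(256)]
        # Flat list of all entries, used for traversal without visiting
        # the (mostly empty) table elements.
        self._colors = []

    def incCount(self, red, green, blue):
        # Locate the chain for this (red, green) pair; direct O(1) access.
        chain = self._table[red][green]
        if chain is None:
            chain = self._table[red][green] = []
        # A chain holds at most 256 entries (one per blue value), so this
        # search is O(1) with respect to the number of pixels.
        for entry in chain:
            if entry[0] == blue:
                entry[1] += 1
                return
        # New color: add it to its chain and to the flat traversal list.
        entry = [blue, 1, red, green]
        chain.append(entry)
        self._colors.append(entry)

    def getCount(self, red, green, blue):
        chain = self._table[red][green]
        if chain is not None:
            for entry in chain:
                if entry[0] == blue:
                    return entry[1]
        return 0   # relaxation of the ADT's precondition, for convenience

    def totalCount(self):
        return sum(entry[1] for entry in self._colors)

    def __iter__(self):
        # Traverse only the colors actually present in the histogram.
        return iter((e[2], e[3], e[0]) for e in self._colors)
```

A short use: after tallying the pixels of an image with incCount(), iterating over the histogram visits each distinct color exactly once, regardless of how sparse the 256 x 256 table is.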
14,121 | advanced sorting we introduced the sorting problem in and explored three basic sorting algorithmsbut there are many others most sorting algorithms can be divided into two categoriescomparison sorts and distribution sorts in comparison sortthe data items can be arranged in either ascending (from smallest to largestor descending (from largest to smallestorder by performing pairwise logical comparisons between the sort keys the pairwise comparisons are typically based on either numerical order when working with integers and reals or lexicographical order when working with strings and sequences distribution sorton the other handdistributes or divides the sort keys into intermediate groups or collections based on the individual key values for exampleconsider the problem of sorting list of numerical grades based on their equivalent letter grade instead of the actual numerical value the grades can be divided into groups based on the corresponding letter grade without having to make comparisons between the numerical values the sorting algorithms described in used nested iterative loops to sort sequence of values in this we explore two additional comparison sort algorithmsboth of which use recursion and apply divide and conquer strategy to sort sequences many of the comparison sorts can also be applied to linked listswhich we explore along with one of the more common distribution sorts merge sort the merge sort algorithm uses the divide and conquer strategy to sort the keys stored in mutable sequence the sequence of values is recursively divided into smaller and smaller subsequences until each value is contained within its own subsequences the subsequences are then merged back together to create sorted sequence for illustration purposeswe assume the mutable sequence is list |
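The grade example above can be made concrete with a small distribution-sort sketch: each numeric grade is dropped into a bucket chosen directly from its value, and the buckets are then concatenated, with no pairwise comparisons ever made between grades. The function name and bucket layout here are illustrative, not from the text.

```python
def distribution_sort_grades(grades):
    # One bucket per letter grade, ordered from lowest to highest.
    buckets = {'F': [], 'D': [], 'C': [], 'B': [], 'A': []}
    for g in grades:
        # The bucket is selected from the key's own value; no key is
        # ever compared against another key.
        if g >= 90:
            buckets['A'].append(g)
        elif g >= 80:
            buckets['B'].append(g)
        elif g >= 70:
            buckets['C'].append(g)
        elif g >= 60:
            buckets['D'].append(g)
        else:
            buckets['F'].append(g)
    # Concatenate the buckets to produce the grouped sequence.
    result = []
    for letter in ('F', 'D', 'C', 'B', 'A'):
        result.extend(buckets[letter])
    return result
```

Note that within a bucket the grades keep their original order; the distribution step only groups the keys by category, which is exactly the behavior described above.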
14,122 | advanced sorting algorithm description the algorithm starts by splitting the original list of values in the middle to create two sublistseach containing approximately the same number of values consider the list of integer values at the top of figure this list is first split following the element containing value these two sublists are then split in similar fashion to create four sublists and those four are split to create eight sublists figure recursively splitting list until each element is contained within its own list after the list has been fully subdivided into individual subliststhe sublists are then merged back togethertwo at timeto create new sorted list these sorted lists are themselves merged to create larger and larger lists until single sorted list has been constructed during the merging phaseeach pair of sorted sublists are merged to create new sorted list containing all of the elements from both sublists this process is illustrated in figure basic implementation given basic description of the merge sort algorithm from an abstract viewwe now turn our attention to the implementation details there are two major steps in the merge sort algorithmdividing the list of values into smaller and smaller sublists and merging the sublists back together to create sorted list the use of recursion provides simple solution to this problem the list can be subdivided by each recursive call and then merged back together as the recursion unwinds listing illustrates simple recursive function for use with python list if the supplied list contains single itemit is by definition sorted and the list is simply returnedwhich is the base case of the recursive definition if the list contains multiple itemsit has to be split to create two sublists of approximately |
Figure: The sublists are merged back together to create a sorted list.

equal size. The split is handled by first computing the midpoint of the list and then using the slice operation to create two new sublists. The left sublist is then passed to a recursive call of the pythonMergeSort() function. That portion of the list will be processed recursively until it is completely sorted and returned. The right half of the list is then processed in a similar fashion. After both the left and right sublists have been ordered, the two lists are merged using the mergeSortedLists() function introduced earlier, and the new sorted list is returned.

Listing: Implementation of the merge sort algorithm for use with Python lists.

    # Sorts a Python list in ascending order using the merge sort algorithm.
    def pythonMergeSort( theList ):
      # Check the base case: the list contains a single item.
      if len( theList ) <= 1 :
        return theList
      else :
        # Compute the midpoint.
        mid = len( theList ) // 2

        # Split the list and perform the recursive step.
        leftHalf = pythonMergeSort( theList[ : mid ] )
        rightHalf = pythonMergeSort( theList[ mid : ] )

        # Merge the two ordered sublists.
        newList = mergeSortedLists( leftHalf, rightHalf )
        return newList

The pythonMergeSort() function provides a simple recursive implementation of the merge sort algorithm, but it has several disadvantages. First, it relies on the use of the slice operation, which prevents us from using the function to sort an array of values, since the array structure does not provide a slice operation. Second,
14,124 | advanced sorting new physical sublists are created in each recursive call as the list is subdivided we learned in that the slice operation can be time consuming since new list has to be created and the contents of the slice copied from the original list new list is also created each time two sublists are merged during the unwinding of the recursionadding yet more time to the overall process finallythe sorted list is not contained within the same list originally passed to the function as was the case with the sorting algorithms presented earlier in improved implementation we can improve the implementation of the merge sort algorithm by using technique similar to that employed with the binary search algorithm from instead of physically creating sublists when the list is splitwe can use index markers to specify subsequence of elements to create virtual sublists within the original physical list as was done with the binary search algorithm figure shows the corresponding index markers used to split the sample list from figure the use of virtual sublists eliminates the need to repeatedly create new physical arrays or python list structures during each recursive call first mid last firstl lastl firstr lastr left sublist right sublist figure splitting list of values into two virtual sublists the new implementation of the merge sort algorithm is provided in listing the recmergesort(function is very similar to the earlier implementation since both require the same steps to implement the merge sort algorithm the difference is that recmergesort(works with virtual sublists instead of using the slice operation to create actual sublists this requires two index variablesfirst and lastfor indicating the range of elements within the physical sublist that comprise the virtual sublist this implementation of the merge sort algorithm requires the use of temporary array when merging the sorted virtual sublists instead of repeatedly creating new array and later deleting it each 
time sublists are mergedwe can create single array and use it for every merge operation since this array is needed inside the mergevirtuallists(functionit has to either be declared as global |
Listing: Improved implementation of the merge sort algorithm.

    # Sorts a virtual subsequence in ascending order using merge sort. The
    # elements that comprise the virtual subsequence are indicated by the
    # range [first...last]. tmpArray is temporary storage used in the
    # merging phase of the merge sort algorithm.
    def recMergeSort( theSeq, first, last, tmpArray ):
      # Check the base case: the virtual sequence contains a single item.
      if first == last :
        return
      else :
        # Compute the midpoint.
        mid = (first + last) // 2

        # Split the sequence and perform the recursive step.
        recMergeSort( theSeq, first, mid, tmpArray )
        recMergeSort( theSeq, mid + 1, last, tmpArray )

        # Merge the two ordered subsequences.
        mergeVirtualSeq( theSeq, first, mid + 1, last + 1, tmpArray )

variable or created and passed into the recursive function before the first call. Our implementation uses the latter approach.

The mergeVirtualSeq() function, provided in the listing on the next page, is a modified version of the mergeSortedLists() function introduced earlier. The original function was used to create a new list that contained the elements resulting from merging two sorted lists. This version is designed to work with two virtual mutable subsequences that are stored adjacent to each other within the physical sequence structure, theSeq. Since the two virtual subsequences are always adjacent within the physical sequence, they can be specified by three array index variables: left, the index of the first element in the left subsequence; right, the index of the first element in the right subsequence; and end, the index of the first element following the end of the right subsequence.

A second difference in this version is how the resulting merged sequence is returned. Instead of creating a new list structure, the merged sequence is stored back into the physical structure within the elements occupied by the two virtual subsequences. The tmpArray argument provides the temporary array needed for intermediate storage during the merging of the two subsequences. The array must be large enough to hold all of the elements from both subsequences. This temporary storage is
needed since the resulting sorted sequence is not returned by the function but instead is copied back to the original sequence structure during the merging operationthe elements from the two subsequences are saved into the temporary array after being mergedthe elements are copied from the temporary array back to the original structure we could create new array each time the function is calledwhich would then be deleted when the function terminates but that requires additional overhead that is compounded by the many calls to the mergevirtualseq(function during the execution of the recursive recmergesort(function to reduce |
Listing: Merging two ordered virtual sublists.

    # Merges the two sorted virtual subsequences [left...right) and
    # [right...end) using the tmpArray for intermediate storage.
    def mergeVirtualSeq( theSeq, left, right, end, tmpArray ):
      # Initialize two subsequence index variables.
      a = left
      b = right
      # Initialize an index variable for the resulting merged array.
      m = 0

      # Merge the two sequences together until one is empty.
      while a < right and b < end :
        if theSeq[a] < theSeq[b] :
          tmpArray[m] = theSeq[a]
          a += 1
        else :
          tmpArray[m] = theSeq[b]
          b += 1
        m += 1

      # If the left subsequence contains more items, append them to tmpArray.
      while a < right :
        tmpArray[m] = theSeq[a]
        a += 1
        m += 1

      # Or if the right subsequence contains more, append them to tmpArray.
      while b < end :
        tmpArray[m] = theSeq[b]
        b += 1
        m += 1

      # Copy the sorted subsequence back into the original sequence structure.
      for i in range( end - left ):
        theSeq[i + left] = tmpArray[i]

this overhead, implementations of the merge sort algorithm typically allocate a single array that is the same size as the original list and then simply pass that array into the mergeVirtualSeq() function. The use of the temporary array is illustrated in the figure on the next page with the merging of the two sublists formed from the second half of the original sequence.

The implementation of the earlier sorting algorithms only required the user to supply the array or list to be sorted. The recMergeSort() function, however,

NOTE: Wrapper Functions. A wrapper function is a function that provides a simpler and cleaner interface for another function and typically provides little or no additional functionality. Wrapper functions are commonly used with recursive functions that require additional arguments, since their initial invocation may not be as natural as that of an equivalent sequential function.
14,127 | first mid end theseq the two virtual subsequences are merged into the temporary array tmparray the elements are copied from the temporary array back into the original sequence theseq figure temporary array is used to merge two virtual subsequences requires not only the sequence structure but also the index markers and temporary array these extra arguments may not be as intuitive to the user as simply passing the sequence to be sorted in additionwhat happens if the user supplies incorrect range values or the temporary array is not large enough to merge the largest subsequencea better approach is to provide wrapper function for recmergesort(such as the one shown in listing the mergesort(function provides simpler interface as it only requires the array or list to be sorted the wrapper function handles the creation of the temporary array needed by the merge sort algorithm and initiates the first call to the recursive function listing wrapper function for the new implementation of the merge sort algorithm sorts an array or list in ascending order using merge sort def mergesorttheseq ) lentheseq create temporary array for use when merging subsequences tmparray arrayn call the private recursive merge sort function recmergesorttheseq - tmparray efficiency analysis we provided two implementations for the merge sort algorithmone that can only be used with lists and employs the slice operationand another that can be used with arrays or lists but requires the use of temporary array in merging virtual |
14,128 | advanced sorting subsequences both implementations run in ( log ntime to see how we obtain this resultassume an array of elements is passed to recmergesort(on the first invocation of the recursive function for simplicitywe can let be power of which results in subsequences of equal size each time list is split as we saw in the running time of recursive function is computed by evaluating the time required by each function invocation this evaluation only includes the time of the steps actually performed in the given function invocation the recursive steps are omitted since their times will be computed separately we can start by evaluating the time required for single invocation of the recmergesort(function since each recursive call reduces the size of the problemwe let represent the number of keys in the subsequence passed to the current instance of the function ( represents the size of the entire arraywhen the function is executedeither the base case or the divide and conquer steps are performed the base case occurs when the supplied sequence contains single item ( )which results in the function simply returning without having performed any operations this of course only requires ( time the dividing step is also constant time operation since it only requires computing the midpoint to determine where the virtual sequence will be split the real work is done in the conquering step by the mergevirtuallists(function this function requires (mtime in the worst case where represents the total number of items in both subsequences the analysis for the merging operation follows that of the mergesortedlists(from and is left as an exercise having determined the time required of the various operationswe can conclude that single invocation of the recmergesort(function requires (mtime given subsequence of keys the next step is to determine the total time required to execute all invocations of the recursive function this analysis is best described using recursive call tree 
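Before walking the call tree in detail, the log n factor can be checked empirically by instrumenting the recursion itself. The small sketch below (independent of the listings above; names are illustrative) splits a virtual range the same way recMergeSort() does and reports the number of levels in the resulting call tree; for n a power of 2 this is log2(n) + 1.

```python
def merge_sort_depth(first, last):
    # Returns the number of levels in the merge sort call tree for the
    # virtual subsequence [first...last], splitting at the midpoint
    # exactly as the recursive merge sort does.
    if first >= last:
        return 1                      # base case: a single element
    mid = (first + last) // 2
    return 1 + max(merge_sort_depth(first, mid),
                   merge_sort_depth(mid + 1, last))

# For n = 16 the call tree has log2(16) + 1 = 5 levels.
```

Since every level of the tree touches all n keys during merging, multiplying the level count by n gives the n log n behavior derived below.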
consider the call tree in figure which represents the merge sort algorithm when applied to sequence containing keys the values inside the function call boxes show the size of the subsequence passed to the function for that invocation since we know single invocation of the recmergesort(function requires ( figure recursive call tree for the merge sort algorithm for |
14,129 | timewe can determine the time required for each instance of the function based on the size of the subsequence processed by that instance to obtain the total running time of the merge sort algorithmwe need to compute the sum of the individual times in our sample call treewhere the first recursive call processes the entire key list this instance makes two recursive callseach processing half of the original key sequenceas shown on the second level of the call tree the two function instances at the second level of the call tree each make two recursive callsall of which process one-fourth of the original key sequence these recursive calls continue until the subsequence contains single key valueas illustrated in the recursive call tree while each invocation of the functionother than the initial callonly processes portion of the original key sequenceall keys are processed at each level if we can determine how many levels there are in the recursive call treewe can multiply this value by the number of keys to obtain the final run time when is power of the merge sort algorithm requires log levels of recursion thusthe merge sort algorithm requires ( log ntime since there are log levels and each level requires time the final analysis is illustrated graphically by the more general recursive call tree provided in figure when is not power of the only difference in the analysis is that the lowest level in the call tree will not be completely fullbut the call tree will still contain at most dlog ne levels number of levels time per level nn / / / / log / / / / / / / / total timen log figure time analysis of the merge sort algorithm quick sort the quick sort algorithm also uses the divide and conquer strategy but unlike the merge sortwhich splits the sequence of keys at the midpointthe quick sort partitions the sequence by dividing it into two segments based on selected pivot key in additionthe quick sort can be implemented to work with virtual subsequences without the need 
for temporary storage |
14,130 | advanced sorting algorithm description the quick sort is simple recursive algorithm that can be used to sort keys stored in either an array or list given the sequenceit performs the following steps the first key is selected as the pivotp the pivot value is used to partition the sequence into two segments or subsequencesl and gsuch that contains all keys less than the and contains all keys greater than or equal to the algorithm is then applied recursively to both and the recursion continues until the base case is reachedwhich occurs when the sequence contains fewer than two keys the two segments and the pivot value are merged to produce sorted sequence this is accomplished by copying the keys from segment back into the original sequencefollowed by the pivot value and then the keys from segment after this stepthe pivot key will end up in its proper position within the sorted sequence an abstract view of the partitioning stepin which much of the actual work is doneis illustrated in figure you will notice the size of the segments will vary depending on the value of the pivot in some instancesone segment may not contain any elements it depends on the pivot value and the relationship between that value and the other keys in the sequence when the recursive calls returnthe segments and pivot value are merged to produce sorted sequence this process is illustrated in figure figure an abstract view showing how quick sort partitions the sequence into segments based on the pivot value (shown with gray background |
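The five enumerated steps above translate almost directly into a short slicing-based sketch. Unlike the in-place implementation developed next, this version creates temporary lists for the two segments, so it is useful only as a picture of the algorithm (the function name is illustrative).

```python
def quick_sort_sliced(keys):
    # Base case: a sequence with fewer than two keys is already sorted.
    if len(keys) < 2:
        return keys
    # Step 1: the first key is selected as the pivot.
    p = keys[0]
    # Step 2: partition the remaining keys into L (less than the pivot)
    # and G (greater than or equal to the pivot).
    L = [k for k in keys[1:] if k < p]
    G = [k for k in keys[1:] if k >= p]
    # Steps 3-5: sort each segment recursively, then merge the sorted
    # L segment, the pivot, and the sorted G segment.
    return quick_sort_sliced(L) + [p] + quick_sort_sliced(G)
```

Note how the pivot lands in its final sorted position between the two segments, which is exactly the property the in-place partitioning step relies on.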
14,131 | figure an abstract showing how quick sort merges the sorted segments and pivot value back into the original sequence implementation simple implementation using the slice operation can be devised for the quick sort algorithm as was done with the merge sort but it would require the use of temporary storage an efficient solution can be designed to work with virtual subsequences or segments that does not require temporary storage howeverit is not as easily implemented since the partitioning must be done using the same sequence structure python implementation of the quick sort algorithm is provided in listing the quicksort(function is simple wrapper that is used to initiate the recursive call to recquicksort(the recursive function is rather simple and follows the enumerated steps described earlier note that first and last indicate the elements in the range [first lastthat comprise the current virtual segment the partitioning step is handled by the partitionseq(function this function rearranges the keys within the physical sequence structure by correctly positioning the pivot key within the sequence and placing all keys that are less than the pivot to the left and all keys that are greater to the right as shown herefirst pivot last pivot pos the final position of the pivot value also indicates the position at which the sequence is split to create the two segments the left segment consists of the elements between the first element and the pos element while the right segment consists of the elements between pos and lastinclusive the virtual segments |
Listing: Implementation of the quick sort algorithm.

    # Sorts an array or list using the recursive quick sort algorithm.
    def quickSort( theSeq ):
      n = len( theSeq )
      recQuickSort( theSeq, 0, n - 1 )

    # The recursive implementation using virtual segments.
    def recQuickSort( theSeq, first, last ):
      # Check the base case.
      if first >= last :
        return
      else :
        # Save the pivot value.
        pivot = theSeq[first]

        # Partition the sequence and obtain the pivot position.
        pos = partitionSeq( theSeq, first, last )

        # Repeat the process on the two subsequences.
        recQuickSort( theSeq, first, pos - 1 )
        recQuickSort( theSeq, pos + 1, last )

    # Partitions the subsequence using the first key as the pivot.
    def partitionSeq( theSeq, first, last ):
      # Save a copy of the pivot value.
      pivot = theSeq[first]

      # Find the pivot position and move the elements around the pivot.
      left = first + 1
      right = last
      while left <= right :
        # Find the first key larger than the pivot.
        while left <= right and theSeq[left] < pivot :
          left += 1

        # Find the last key in the sequence that is smaller than the pivot.
        while right >= left and theSeq[right] >= pivot :
          right -= 1

        # Swap the two keys if we have not completed this partition.
        if left < right :
          tmp = theSeq[left]
          theSeq[left] = theSeq[right]
          theSeq[right] = tmp

      # Put the pivot in the proper position.
      if right != first :
        theSeq[first] = theSeq[right]
        theSeq[right] = pivot

      # Return the index position of the pivot value.
      return right
14,133 | are passed to the recursive calls in lines and of listing using the proper index ranges after the recursive callsthe recquicksort(function returns in the earlier descriptionthe sorted segments and pivot value had to be merged and stored back into the original sequence but since we are using virtual segmentsthe keys are already stored in their proper position upon the return of the two recursive calls to help visualize the operation of the partitionseq(functionwe step through the first complete partitioning of the sample sequence the function begins by saving copy of the pivot value for easy reference and then initializes the two index markersleft and right the left marker is initialized to the first position following the pivot value while the right marker is set to the last position within the virtual segment the two markers are used to identify the range of elements within the sequence that will comprise the left and right segments first last left right the main loop is executed until one of the two markers crosses the other as they are shifted in opposite directions the left marker is shifted to the right by the loop in lines and of listing until key value larger than the pivot is found or the left marker crosses the right marker since the left marker starts at key larger than the pivotthe body of the outer loop is not executed if theseq is empty left right after the left marker is positionedthe right marker is then shifted to the left by the loop in lines and the marker is shifted until key value less than or equal to the pivot is located or the marker crosses the left marker the test for less than or equal allows for the correct sorting of duplicate keys in our examplethe right marker will be shifted to the position of the left right the two keys located at the positions marked by left and right are then swappedwhich will place them within the proper segment once the location of the pivot is found left right |
14,134 | advanced sorting after the two keys are swappedthe two markers are again shifted starting where they left off left right the left marker will be shifted to key value and the right marker to value left right left right once the two markers are shiftedthe corresponding keys are swapped left right and the process is repeated this timethe left marker will stop at value while the right marker will stop at value left right right left note that the right marker has crossed the left such that right leftresulting in the termination of the outer while loop when the two markers crossthe right marker indicates the final position of the pivot value in the resulting sorted list thusthe pivot value currently located in the first element and the element marked by right have to be swapped resulting in value being placed in element number the final sorted position of the pivot within the original sequence |
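The partition trace above can be reproduced in code. Below is a runnable sketch that follows the listing's functions (quickSort, recQuickSort, partitionSeq). The exact comparison operators in the loop conditions are reconstructed from the surrounding description, so treat them as an informed assumption rather than verbatim text; the sample keys are illustrative.

```python
# A runnable sketch of the quick sort from this section. The partition
# uses the first key as the pivot, mirroring the listing's approach.
def quickSort(theSeq):
    recQuickSort(theSeq, 0, len(theSeq) - 1)

def recQuickSort(theSeq, first, last):
    if first >= last:                  # base case: zero or one key
        return
    pos = partitionSeq(theSeq, first, last)
    recQuickSort(theSeq, first, pos - 1)   # sort the left segment
    recQuickSort(theSeq, pos + 1, last)    # sort the right segment

def partitionSeq(theSeq, first, last):
    pivot = theSeq[first]
    left = first + 1
    right = last
    while left <= right:
        # Shift the left marker to the first key >= the pivot.
        while left <= right and theSeq[left] < pivot:
            left += 1
        # Shift the right marker to the last key < the pivot.
        while right >= left and theSeq[right] >= pivot:
            right -= 1
        if left < right:               # swap keys into their proper segments
            theSeq[left], theSeq[right] = theSeq[right], theSeq[left]
    if right != first:                 # move the pivot into its final slot
        theSeq[first], theSeq[right] = theSeq[right], theSeq[first]
    return right

keys = [10, 23, 51, 18, 4, 31, 13, 5]
quickSort(keys)
print(keys)   # → [4, 5, 10, 13, 18, 23, 31, 51]
```

Running the sketch on the eight sample keys reproduces the behavior of the trace: the pivot 10 lands at its final sorted position after the first partitioning, and the recursion finishes the two segments.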
The if statement just before the return in partitionSeq() is included to prevent a swap from occurring when the right marker is at the same position as the pivot value. This situation will occur when there are no keys in the list that are smaller than the pivot. Finally, the function returns the pivot position for use in splitting the sequence into the two segments.

We are not limited to selecting the first key within the list as the pivot, but it is the easiest to implement. We could have chosen the last key instead. But in practice, using the first or last key as the pivot is a poor choice, especially when a subsequence is already sorted, since it results in one of the segments being empty. Choosing a key near the middle is a better choice that can be implemented with a few modifications to the code provided. We leave these modifications as an exercise.

Efficiency Analysis
The quick sort algorithm has an average or expected time of O(n log n) but runs in O(n^2) in the worst case, the analysis of which is left as an exercise. Even though quick sort is quadratic in the worst case, it approaches the average case in many instances and has the advantage of not requiring additional temporary storage, as is the case with the merge sort. Quick sort is a commonly used algorithm for implementing sorting in language libraries. Earlier versions of Python used quick sort to implement the sort() method of the list structure. The current version of Python uses a hybrid algorithm that combines the insertion and merge sort algorithms instead.

How Fast Can We Sort?
The comparison sort algorithms achieve their goal by comparing the individual sort keys to other keys in the list. We have reviewed five sorting algorithms. The first three (bubble, selection, and insertion) have a worst case time of O(n^2), while the merge sort has a worst case time of O(n log n). The quick sort, the more commonly used algorithm in language libraries, is O(n^2) in the worst case, but it has an expected or average time of O(n log n). The natural question is: can we do better than O(n log n)? For a comparison sort, the answer is no. It can be shown, with the use of a decision tree and by examining the permutations of all possible comparisons among the sort keys, that the worst case time for a comparison sort can be no better than O(n log n). This does not mean, however, that the sorting operation cannot be done faster than O(n log n); it simply means that we cannot achieve this with a comparison sort. In the next section, we examine a distribution sort algorithm that works in linear time. Distribution sort algorithms use techniques other than comparisons
14,136 | advanced sorting among the keys themselves to sort the sequence of keys while these distribution algorithms are fastthey are not general purpose sorting algorithms in other wordsthey cannot be applied to just any sequence of keys typicallythese algorithms are used when the keys have certain characteristics and for specific types of applications radix sort radix sort is fast distribution sorting algorithm that orders keys by examining the individual components of the keys instead of comparing the keys themselves for examplewhen sorting integer keysthe individual digits of the keys are compared from least significant to most significant this is special purpose sorting algorithm but can be used to sort many types of keysincluding positive integersstringsand floating-point values the radix sort algorithm also known as bin sort can be traced back to the time of punch cards and card readers card readers contained number of bins in which punch cards could be placed after being read by the card reader to sort values punched on cards the cards were first separated into different bins based on the value in the ones column of each value the cards would then be collected such that the cards in the bin representing zero would be placed on topfollowed by the cards in the bin for oneand so on through nine the cards were then sorted againbut this time by the tens column the process continued until the cards were sorted by each digit in the largest value the final result was stack of punch cards with values sorted from smallest to largest algorithm description to illustrate how the radix sort algorithm worksconsider the array of values shown at the top of figure as with the card reader versionbins are used to store the various keys based on the individual column values since we are sorting positive integerswe will need ten binsone for each digit the process starts by distributing the values among the various bins based on the digits in the ones columnas illustrated in step 
(aof figure if keys have duplicate digits in the ones columnthe values are placed in the bins in the order that they occur within the list thuseach duplicate is placed behind the keys already stored in the corresponding binas illustrated by the keys in bins and after the keys have been distributed based on the least significant digitthey are gathered back into the arrayone bin at timeas illustrated in step (bof figure the keys are taken from each binwithout rearranging themand inserted into the array with those in bin zero placed at the frontfollowed by those in bin onethen bin twoand so on until all of the keys are back in the sequence at this pointthe keys are only partially sorted the process must be repeated againbut this time the distribution is based on the digits in the tens column after distributing the keys the second timeas illustrated in step (cof figure they |
14,137 | distribute the keys across the bins based on the ones column bin bin bin bin bin bin bin bin bin bin distribute the keys across the bins based on the tens column bin bin bin bin bin bin bin bin bin gather the keys back into the array bin gather the keys back into the array figure sorting an array of integer keys using the radix sort algorithm |
14,138 | advanced sorting are once again gathered back into the arrayone bin at time as shown in step (dthe result is correct ordering of the keys from smallest to largestas shown at the bottom of figure in this examplethe largest value ( only contains two digits thuswe had to distribute and then gather the keys twiceonce for the ones column and once for the tens column if the largest value in the list had contain additional digitsthe process would have to be repeated for each digit in that value basic implementation the radix sortas indicated earlieris not general purpose algorithm insteadit' used in special cases such as sorting records by zip codesocial security numberor product codes the sort keys can be represented as integersrealsor strings different implementations are requiredhoweversince the individual key components (digits or charactersdiffer based on the type of key in additionwe must know the maximum number of digits or characters used by the largest key in order to know the number of iterations required to distribute the keys among the bins in this sectionwe implement version of the radix sort algorithm for use with positive integer values stored in mutable sequence firstwe must decide how to represent the bins used in distributing the values consider the following points related to the workings of the algorithmthe individual bins store groups of keys based on the individual digits keys with duplicate digits (in given columnare stored in the same binbut following any that are already there when the keys are gathered from the binsthey have to be stored back into the original sequence this is done by removing them from the bins in first-in first-out ordering you may notice the bins sound very much like queues and in fact they can be represented as such adding key to bin is equivalent to enqueuing the key while removing the keys from the bins to put them back into the sequence is easily handled with the dequeue operation since there are ten digitswe will 
need ten queues the queues can be stored in ten-element array to provide easy management in the distribution and gathering of the keys our implementation of the radix sort algorithm is provided in listing the function takes two argumentsthe list of integer values to be sorted and the maximum number of digits possible in the largest key value instead of relying on the user to supply the number of digitswe could easily have searched for the largest key value in the sequence and then computed the number of digits in that value the implementation of the radix sort uses two loops nested inside an outer loop the outer for loop iterates over the columns of digits with the number of iterations based on the user-supplied numdigits argument the first nested loop in lines - distributes the keys across the bins since the queues are stored in |
Listing: Implementation of the radix sort using an array of queues.

    # Sorts a sequence of positive integers using the radix sort algorithm.
    from llistqueue import Queue
    from array import Array

    def radixSort(intList, numDigits):
        # Create an array of queues to represent the bins.
        binArray = Array(10)
        for k in range(10):
            binArray[k] = Queue()

        # The value of the current column.
        column = 1

        # Iterate over the number of digits in the largest value.
        for d in range(numDigits):
            # Distribute the keys across the bins.
            for key in intList:
                digit = (key // column) % 10
                binArray[digit].enqueue(key)

            # Gather the keys from the bins and place them back in intList.
            i = 0
            for bin in binArray:
                while not bin.isEmpty():
                    intList[i] = bin.dequeue()
                    i += 1

            # Advance to the next column value.
            column *= 10

the ten-element array, the distribution is easily handled by determining the bin, or corresponding queue, to which each key has to be added (based on the digit in the current column being processed) and enqueuing it in that queue. To extract the individual digits, we can use the following arithmetic expression:

    digit = (key // column) % 10

where column is the value (1, 10, 100, and so on) of the current column being processed. The variable is initialized to 1 since we work from the least significant digit to the most significant. After distributing the keys and then gathering them back into the sequence, we can advance to the next column by simply multiplying the current column value by 10, as is done at the bottom of the outer loop. The second nested loop handles the gathering step. To remove the keys from the queues and place them back into the sequence, we must dequeue all of the keys from each of the ten queues and add them to the sequence in successive elements starting at index position zero.
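The listing depends on the text's Array and Queue classes. For quick experimentation, the same approach can be sketched with only the Python standard library, using collections.deque for the bins; the function name radix_sort and the sample values below are my own:

```python
from collections import deque

def radix_sort(int_list, num_digits):
    # Ten bins, one per decimal digit; deque gives O(1) enqueue/dequeue.
    bins = [deque() for _ in range(10)]
    column = 1
    for _ in range(num_digits):
        # Distribute the keys across the bins by the current digit.
        for key in int_list:
            bins[(key // column) % 10].append(key)
        # Gather the keys back into the sequence, bin 0 first through bin 9.
        i = 0
        for b in bins:
            while b:
                int_list[i] = b.popleft()
                i += 1
        # Advance to the next column (ones, tens, hundreds, ...).
        column *= 10

values = [23, 10, 18, 51, 5, 13, 31, 54]
radix_sort(values, 2)
print(values)   # → [5, 10, 13, 18, 23, 31, 51, 54]
```

Because each bin preserves the order in which duplicates arrive, the distribution passes never reorder keys that agree on the current digit, which is what makes the digit-by-digit approach correct.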
This implementation of the radix sort algorithm is straightforward, but it requires the use of multiple queues. To result in an efficient implementation, we must use the Queue ADT implemented as a linked list, or have direct access to the underlying list in order to use the Python list version efficiently.

Efficiency Analysis
To evaluate the radix sort algorithm, assume a sequence of n keys in which the largest key value contains d components, and each component contains a value between 0 and k - 1. Also assume we are using the linked list implementation of the Queue ADT, which results in O(1) time queue operations. The array used to store the queues and the creation of the queues themselves can be done in O(k) time. The distribution and gathering of the keys involves two steps, which are performed d times, one for each component. The distribution of the keys across the queues requires O(n) time since an individual queue can be accessed directly by subscript. Gathering the keys from the queues and placing them back into the sequence also requires O(n) time: even though the keys have to be gathered from k queues, there are n keys in total to be dequeued, resulting in the dequeue() operation being performed n times. The distribution and gathering steps are performed d times, resulting in a time of O(dn). Combining this with the initialization step, we have an overall time of O(k + dn). The radix sort is a special purpose algorithm, and in practice both k and d are constants specific to the given problem, resulting in a linear time algorithm. For example, when sorting a list of decimal integers, k = 10 while d depends only on the number of digits in the largest key. Thus, the sorting time depends only on the number of keys.

Sorting Linked Lists
The sorting algorithms introduced in the previous sections and earlier in the text can be used to sort keys stored in a mutable sequence. But what if we need to sort keys stored in an unsorted singly linked list? In this section, we explore that topic by reviewing two common algorithms that can be used to sort a linked list by modifying the
links to rearrange the existing nodes the techniques employed by any of the three quadratic sorting algorithms-bubbleselectionand insertion--presented in can be used to sort linked list instead of swapping or shifting the values within the sequencehoweverthe nodes are rearranged by unlinking each node from the list and then relinking them at different position linked list version of the bubble sort would rearrange the nodes within the same list by leap-frogging the nodes containing larger values over those with smaller values the selection and insertion sortson the other hand |
would create a new sorted linked list by selecting and unlinking nodes from the original list and adding them to the new list.

Figure: An unsorted singly linked list (origList).

Insertion Sort
A simple approach for sorting a linked list is to use the technique employed by the insertion sort algorithm: take each item from an unordered list and insert them, one at a time, into an ordered list. When used with a linked list, we can unlink each node, one at a time, from the original unordered list and insert it into a new ordered list using the technique described earlier in the text. The Python implementation is shown in the listing below. To create the sorted linked list using the insertion sort, we must unlink each node from the original list and insert them into a new ordered list. This is done in

Listing: Implementation of the insertion sort algorithm for use with a linked list.

    # Sorts a linked list using the technique of the insertion sort.
    # A reference to the new ordered list is returned.
    def llistInsertionSort(origList):
        # Make sure the list contains at least one node.
        if origList is None:
            return None

        # Iterate through the original list.
        newList = None
        while origList is not None:
            # Assign a temp reference to the first node.
            curNode = origList

            # Advance the original list reference to the next node.
            origList = origList.next

            # Unlink the first node and insert into the new ordered list.
            curNode.next = None
            newList = addToSortedList(newList, curNode)

        # Return the list reference of the new ordered list.
        return newList
four steps, as illustrated in the figure below and implemented in the loop body of the listing. Inserting the node into the new ordered list is handled by the addToSortedList() function, which simply implements the sorted-insertion operation from an earlier listing. A later figure illustrates the results after each of the remaining iterations of the insertion sort algorithm when applied to our sample linked list.

The insertion sort algorithm used with linked lists is O(n^2) in the worst case, just like the sequence-based version. The difference, however, is that the items do not have to be shifted to make room for the unsorted items as they are inserted into the sorted list. Instead, we need only modify the links to rearrange the nodes.

Figure: The individual steps performed in each iteration of the linked list insertion sort algorithm: (a) assign the temporary reference to the first node; (b) advance the list reference; (c) unlink the first node; and (d) insert the node into the new list.
Figure: The results after each iteration of the linked list insertion sort algorithm.
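The listing leaves addToSortedList() for a programming project. Below is a self-contained sketch of the whole process with a minimal node class and an illustrative helper; all names here are stand-ins, not necessarily the book's:

```python
class ListNode:
    """Minimal singly linked list node (a stand-in for the book's class)."""
    def __init__(self, data):
        self.data = data
        self.next = None

def add_to_sorted_list(head, node):
    # Insert node into the ordered list headed by head; return the new head.
    if head is None or node.data < head.data:
        node.next = head
        return node
    cur = head
    while cur.next is not None and cur.next.data <= node.data:
        cur = cur.next
    node.next = cur.next
    cur.next = node
    return head

def llist_insertion_sort(orig):
    # Unlink each node from the original list and insert it, one at a
    # time, into a new ordered list (the four steps from the figure).
    new_head = None
    while orig is not None:
        cur = orig
        orig = orig.next        # advance before relinking
        cur.next = None         # unlink the node
        new_head = add_to_sorted_list(new_head, cur)
    return new_head

# Build an unordered list (values pushed to the front) and sort it.
head = None
for v in [23, 51, 2, 18, 4, 31]:
    node = ListNode(v)
    node.next = head
    head = node
head = llist_insertion_sort(head)
out = []
while head is not None:
    out.append(head.data)
    head = head.next
print(out)   # → [2, 4, 18, 23, 31, 51]
```

Note that no node objects are created or copied during the sort itself; only the next links change, which is the advantage over the array version's element shifting.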
Merge Sort
The merge sort algorithm is an excellent choice for sorting a linked list. Unlike the sequence-based version, which requires additional storage, when used with a linked list the merge sort is efficient in both time and space. The linked list version, which works in the same fashion as the sequence version, is provided in the following listing.

Listing: The merge sort algorithm for linked lists.

    # Sorts a linked list using merge sort. A new head reference is returned.
    def llistMergeSort(theList):
        # If the list is empty (base case), return None.
        if theList is None:
            return None
        # A single-node list is already sorted; this second base case is
        # required for the recursion to terminate.
        if theList.next is None:
            return theList

        # Split the linked list into two sublists of equal size.
        rightList = _splitLinkedList(theList)
        leftList = theList

        # Perform the same operation on the left half ...
        leftList = llistMergeSort(leftList)
        # ... and the right half.
        rightList = llistMergeSort(rightList)

        # Merge the two ordered sublists.
        theList = _mergeLinkedLists(leftList, rightList)

        # Return the head pointer of the ordered sublist.
        return theList

    # Splits a linked list at the midpoint to create two sublists. The head
    # reference of the right sublist is returned. The left sublist is still
    # referenced by the original head reference.
    def _splitLinkedList(subList):
        # Assign a reference to the first and second nodes in the list.
        midPoint = subList
        curNode = midPoint.next

        # Iterate through the list until curNode falls off the end.
        while curNode is not None:
            # Advance curNode to the next node.
            curNode = curNode.next

            # If there are more nodes, advance curNode again and midPoint once.
            if curNode is not None:
                midPoint = midPoint.next
                curNode = curNode.next

        # Set rightList as the head pointer to the right sublist.
        rightList = midPoint.next
        # Unlink the right sublist from the left sublist.
        midPoint.next = None
        # Return the right sublist head reference.
        return rightList

    # Merges two sorted linked lists; returns the head reference for the
    # new list.
    def _mergeLinkedLists(subListA, subListB):
        # Create a dummy node and insert it at the front of the list.
        newList = ListNode(None)
        newTail = newList

        # Append nodes to the new list until one list is empty.
        while subListA is not None and subListB is not None:
            if subListA.data <= subListB.data:
                newTail.next = subListA
                subListA = subListA.next
            else:
                newTail.next = subListB
                subListB = subListB.next
            newTail = newTail.next
            newTail.next = None

        # If one of the lists contains more nodes, append them.
        if subListA is not None:
            newTail.next = subListA
        else:
            newTail.next = subListB

        # Return the new merged list, which begins with the first node
        # after the dummy node.
        return newList.next

The linked list is recursively subdivided into smaller linked lists during each recursive call, which are then merged back into a new ordered linked list. Since the nodes are not contained within a single object as the elements of an array are, the head reference of the new ordered list has to be returned after the list is sorted. To sort a linked list using the merge sort algorithm, the sort function would be called using the statement:

    theList = llistMergeSort(theList)

The implementation in the listing includes the recursive function and two helper functions. You will note that a wrapper function is not required with this version, since the recursive function only requires the head reference of the list being sorted as its single argument.

Splitting the List
The split operation is handled by the _splitLinkedList() helper function, which takes as an argument the head reference of the singly linked list to be split and returns the head reference for the right sublist. The left sublist can still be referenced by the original head reference. To split a linked list, we need to know the
14,146 | advanced sorting midpointor more specificallythe node located at the midpoint an easy way to find the midpoint would be to traverse through the list and count the number of nodes and then iterate the list until the node at the midpoint is located this is not the most efficient approach since it requires one and half traversals through the list insteadwe can devise solution that requires one complete list traversalas shown in lines - of listing this approach uses two external referencesmidpoint and curnode the two references are initialized with midpoint referencing the first node and curnode referencing the second node the two references are advanced through the list using loop as is done in normal list traversalbut the curnode reference will advance twice as fast as the midpoint reference the traversal continues until curnode becomes nullat which point the midpoint reference will be pointing to the last node in the left sublist figure illustrates the traversal required to find the midpoint of our sample linked list midpoint curnode sublist sublist midpoint curnode sublist midpoint curnode sublist midpoint curnode figure sequence of steps for finding the midpoint in linked list |
14,147 | after the midpoint is locatedthe link between the node referenced by midpoint and its successor can be removedcreating two sublistsas illustrated in figure before the link is removeda new head reference rightlist has to be created and initialized to reference the first node in the right sublist the rightlist head reference is returned by the function to provide access to the new sublist sublist midpoint rightlist curnode (asublist midpoint rightlist curnode (bfigure splitting the list after finding the midpoint(alink modifications required to unlink the last node of the left sublist from the right sublist and (bthe two sublists resulting from the split merging the lists the mergelinkedlists(functionprovided in lines - of listing manages the merging of the two sorted linked lists in we discussed an efficient solution for the problem of merging two sorted python listsand earlier in this that algorithm was adapted for use with arrays the array and python list versions are rather simple since we can refer to individual elements by index and easily append the values to the sequence structure merging two sorted linked lists requires several modifications to the earlier algorithm firstthe nodes from the two sublists will be removed from their respective list and appended to new sorted linked list we can use tail reference with the new sorted list to allow the nodes from the sublists to be appended in ( time secondafter all of the nodes have been removed from one of the two sublistswe do not have to iterate through the other list to append the nodes insteadwe can simply link the last node of the new sorted list to the first node in the remaining sublist finallywe can eliminate the special case of appending |
14,148 | advanced sorting the first node to the sorted list with the use of dummy node at the front of the listas illustrated in figure the dummy node is only temporary and will not be part of the final sorted list thusafter the two sublists have been mergedthe function returns reference to the second node in the list (the first real node following the dummy node)which becomes the head reference sublista sublistb (anewlist newtail dummy node (bfigure merging two ordered linked lists using dummy node and tail reference the linked list version of the merge sort algorithm is also ( log nfunction but it does not require temporary storage to merge the sublists the analysis of the run time is left as an exercise note dummy nodes dummy node is temporary node that is used to simplify link modifications when adding or removing nodes from linked list they are called dummy nodes because they contain no actual data but they are part of the physical linked structure |
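The dummy-node merge described above can be sketched as a standalone function. ListNode, from_list, and to_list below are minimal stand-ins for the book's node class and list-building code:

```python
class ListNode:
    """Minimal singly linked list node (a stand-in for the book's class)."""
    def __init__(self, data):
        self.data = data
        self.next = None

def merge_linked_lists(a, b):
    # The dummy node removes the special case of appending the first node.
    dummy = ListNode(None)
    tail = dummy
    while a is not None and b is not None:
        if a.data <= b.data:        # <= keeps the merge stable
            tail.next = a
            a = a.next
        else:
            tail.next = b
            b = b.next
        tail = tail.next
    # Link the remaining sublist in O(1) instead of iterating over it.
    tail.next = a if a is not None else b
    return dummy.next               # first real node after the dummy

def from_list(values):
    # Build a linked list preserving the order of values.
    head = None
    for v in reversed(values):
        node = ListNode(v)
        node.next = head
        head = node
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out

merged = merge_linked_lists(from_list([2, 18, 31]), from_list([4, 23, 51]))
print(to_list(merged))   # → [2, 4, 18, 23, 31, 51]
```

The two optimizations called out in the text are visible here: the tail reference makes each append O(1), and the leftover sublist is attached with a single link rather than a loop.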
14,149 | exercises given the following sequence of keys ( )trace the indicated algorithm to produce recursive call tree when sorting the values in descending order (amerge sort (bquick sort do the same as in exercise but produce recursive call tree when sorting the values in ascending order show the distribution steps performed by the radix sort when ordering the following list of keys( ( ( "ms""va""ak""la""ca""al""ga""tn""wa""dc analyze the quick sort algorithm to show the worst case time is ( analyze the mergevirtualseq(function and show that it is linear time operation in the worst case analyze the linked list version of the merge sort algorithm to show the worst case time is ( log an important property of sorting algorithms is stability sorting algorithm is stable if it preserves the original order of duplicate keys stability is important when sorting collection that has already been sorted by primary key that will now be sorted by secondary key for examplesuppose we have sequence of student records that have been sorted by name and now we want to sort the sequence by gpa since there can be many duplicate gpaswe want to order any duplicates by name thusif smith and green both have the same gpathen green would be listed before smith if the sorting algorithm used for this second sort is stablethen the proper ordering can be achieved since green would appear before smith in the original sequence (adetermine which of the comparison sorts presented in this and in are stable sorts (bfor any of the algorithms that are not stableprovide sequence containing some duplicate keys that shows the order of the duplicates is not preserved |
14,150 | advanced sorting programming projects implement the addtosortedlist(function for use with the linked list version of the insertion sort algorithm create linked list version of the indicated algorithm (abubble sort (bselection sort create new version of the quick sort algorithm that chooses different key as the pivot instead of the first element (aselect the middle element (bselect the last element write program to read list of grade point averages ( from text file and sort them in descending order select the most efficient sorting algorithm for your program some algorithms are too complex to analyze using simple big- notation or representative data set may not be easily identifiable in these caseswe must actually execute and test the algorithms on different sized data sets and compare the results special care must be taken to be fair in the actual implementation and execution of the different algorithms this is known as an empirical analysis we can also use an empirical analysis to verify and compare the time-complexities of family of algorithms such as those for searching or sorting design and implement program to evaluate the efficiency of the comparison sorts used with sequences by performing an empirical analysis using random numbers your program shouldprompt the user for the size of the sequencen generate random list of values (integersfrom the range [ nsort the original list using each of the sorting algorithmskeeping track of the number of comparisons performed by each algorithm compute the average number of comparisons for each algorithm and then report the results when performing the empirical analysis on family of algorithmsit is important that you use the same original sequence for each algorithm thusinstead of sorting the original sequenceyou must make duplicate copy of the original and sort that sequence in order to preserve the original for use with each algorithm |
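For the empirical-analysis project, one workable design is to wrap each key in a small class that counts its own comparisons; the sorting code then needs no instrumentation of its own. Everything below (CountedKey, the harness layout) is a suggestion, not code from the text:

```python
import random

class CountedKey:
    """Wraps an integer key and tallies every < comparison against it."""
    count = 0                        # shared comparison counter

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        CountedKey.count += 1
        return self.value < other.value

def insertion_sort(seq):
    # Standard insertion sort; every key comparison goes through __lt__.
    for i in range(1, len(seq)):
        item = seq[i]
        j = i
        while j > 0 and item < seq[j - 1]:
            seq[j] = seq[j - 1]
            j -= 1
        seq[j] = item

n = 100
original = [CountedKey(random.randrange(n)) for _ in range(n)]

# Sort a copy so the original ordering is preserved for the next algorithm.
CountedKey.count = 0
insertion_sort(list(original))
print("insertion sort comparisons:", CountedKey.count)
```

To compare a family of algorithms fairly, reset the counter, copy the same original list, and run each algorithm on its own copy, exactly as the project description requires.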
14,151 | binary trees we have introduced and used several sequential structures throughout the text such as the arraypython listlinked liststacksand queues these structures organize data in linear fashion in which the data elements have "beforeand "afterrelationship they work well with many types of problemsbut some problems require data to be organized in nonlinear fashion in this we explore the tree data structurewhich can be used to arrange data in hierarchical order trees can be used to solve many different problemsincluding those encountered in data miningdatabase systemsencryptionartificial intelligencecomputer graphicsand operating systems the tree structure tree structure consists of nodes and edges that organize data in hierarchical fashion the relationships between data elements in tree are similar to those of family tree"child,"parent,"ancestor,etc the data elements are stored in nodes and pairs of nodes are connected by edges the edges represent the relationship between the nodes that are linked with arrows or directed edges to form hierarchical structure resembling an upside-down tree complete with branchesleavesand even root formallywe can define tree as set of nodes that either is empty or has node called the root that is connected by edges to zero or more subtrees to form hierarchical structure each subtree is itself by definition tree classic example of tree structure is the representation of directories and subdirectories in file system the top tree in figure illustrates the hierarchical nature of student' home directory in the unix file system trees can be used to represent structured datawhich results in the subdivision of data into smaller and smaller parts simple example of this use is the division of book into its various parts of sectionsand subsectionsas illustrated by the |
14,152 | binary trees bottom tree in figure trees are also used for making decisions one that you are most likely familiar with is the phoneor menutree when you call customer service for most businesses todayyou are greeted with an automated menu that you have to traverse the various menus are nodes in tree and the menu options from which you can choose are branches to other nodes /home/smith/home/smithpicturespicturescoursescoursescs cs proj proj projsprojshwhwproj proj proj proj javajavamusicmusiccs cs pdfpdfasmasmdocumentsdocumentsodpodptexttextotherotherhwhwbook book contents contents sect sect preface preface sect sect sect sect sect sect sect sect sect sect sect sect sect sect sect sect ****sect sect sect sect sect sect ** **sect sect index index sect sect sect sect sect sect figure example tree structuresa unix file system home directory (topand the subdivision of book into its parts (bottomwe use many terms to describe the different characteristics and components of trees most of the terminology comes from that used to describe family relationships or botanical descriptions of trees knowing some of these terms will help you grasp the tree structure and its use in various applications root the topmost node of the tree is known as the root node it provides the single access point into the structure the root node is the only node in the tree that does not have an incoming edge (an edge directed toward itconsider the sample tree in figure (athe node with value is the root of the tree by definitionevery non-empty tree must contain root node |
14,153 | is the root node nodes tcrand form path from to ( (bfigure sample tree with(athe root nodeand (ba path from to path the other nodes in the tree are accessed by following the edges starting with the root and progressing in the direction of the arrow until the destination node is reached the nodes encountered when following the edges from starting node to destination form path as shown in figure ( )the nodes labeled tcrand form path from node to node parent the organization of the nodes form relationships between the data elements every nodeexcept the roothas parent nodewhich is identified by the incoming edge node can have only one parent (or incoming edgeresulting in unique path from the root to any other node in the tree there are number of parent nodes in the sample treeone is node xwhich is the parent of and gas shown in figure (achildren each node can have one or more child nodes resulting in parent-child hierarchy the children of node are identified by the outgoing edges (directed away from the is the parent of nodes and and are children of node and are siblings jj the gray nodes are interior nodes the white nodes are leaves ( (bfigure the sample tree with(athe parentchildand sibling relationshipsand (bthe distinction between interior and leaf nodes |
14,154 | binary trees nodefor examplenodes and are the children of all nodes that have the same parent are known as siblingsbut there is no direct access between siblings thuswe cannot directly access node from node or vice versa nodes nodes that have at least one child are known as interior nodes while nodes that have no children are known as leaf nodes the interior nodes of the sample tree are shown with gray backgrounds in figure (band the leaf nodes are shown in white subtree tree is by definition recursive structure every node can be the root of its own subtreewhich consists of subset of nodes and edges of the larger tree figure shows the subtree with node as its root every node is the root of its own subtree figure subtree with root node relatives all of the nodes in subtree are descendants of the subtree' root in the example treenodes jrkand are descendants of node the ancestors of node include the parent of the nodeits grandparentits great-grandparentand so on all the way up to the root the ancestors of node can also be identified by the nodes along the path from the root to the given node the root node is the ancestor of every node in the tree and every node in the tree is descendant of the root node binary tree illustrations the trees illustrated above used directed edges to indicate the parent-child relationship between the nodes but it' not uncommon to see trees drawn using straight lines or undirected edges when tree is drawn without arrowswe have to be able to deduce the parent-child relationship from the placement of the nodes thusthe parent is always placed above its children with binary treesthe left and right children are always drawn offset from the parent in the appropriate direction in order to easily identify the specific child node note |
14,155 | the binary tree trees can come in many different shapesand they can vary in the number of children allowed per node or in the way they organize data values within the nodes one of the most commonly used trees in computer science is the binary tree binary tree is tree in which each node can have at most two children one child is identified as the left child and the other as the right child in the remainder of the we focus on the use and construction of the binary tree in the next we will continue our discussion of binary trees but also explore other types properties binary trees come in many different shapes and sizes the shapes vary depending on the number of nodes and how the nodes are linked figure illustrates three different shapes of binary tree consisting of nine nodes there are number of properties and characteristics associated with binary treesall of which depend on the organization of the nodes within the tree ( (ba (cf level ii level level level level level level level figure three different arrangements of nine nodes in binary tree tree size the nodes in binary tree are organized into levels with the root node at level its children at level the children of level one nodes are at level and so on in |
In family tree terminology, each level corresponds to a generation. The binary tree in figure (b), for example, contains two nodes at level one (B and C), four nodes at level two (D, E, F, and G), and two nodes at level three (H and I). The root node always occupies level zero.

The depth of a node is its distance from the root, with distance being the number of levels that separate the two; a node's depth corresponds to the level it occupies. The same node can thus sit at a different depth in each of the three trees in the figure. The height of a binary tree is the number of levels in the tree, and the three binary trees in the figure have different heights. The width of a binary tree is the number of nodes on the level containing the most nodes. Finally, the size of a binary tree is simply the number of nodes in the tree. An empty tree has a height of 0 and a width of 0, and its size is 0.

[Figure: the possible slots for the placement of nodes in a binary tree, with level i holding up to 2^i nodes.]

A binary tree of size n can have a maximum height of n, which results when there is one node per level; this is the case with the binary tree in figure (c). What is the minimum height of a binary tree with n nodes? To determine this, we need to consider the maximum number of nodes at each level, since the nodes will have to be organized with each level at full capacity. The figure illustrates the slots for the possible placement of nodes within a binary tree. Since each node can have at most two children, each successive level in the tree doubles the number of nodes contained on the previous level. This corresponds to a given tree level i having capacity for 2^i nodes. If we sum the size of each level, when all of the levels are filled to capacity except possibly the last one, we find that the minimum height of a binary tree of size n is ⌊log₂ n⌋ + 1.
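Assuming the convention above that height counts levels (so a single node has height 1 and an empty tree has height 0), the level-capacity argument can be checked numerically. This is a minimal sketch; `min_height` is my own helper name, not something defined in the text:

```python
import math

def min_height(n):
    # Minimum number of levels needed to hold n nodes when every
    # level except possibly the last is filled to capacity.
    # Level i holds at most 2**i nodes, so a tree with h levels
    # holds at most 2**h - 1 nodes in total.
    if n == 0:
        return 0
    return math.floor(math.log2(n)) + 1

print(min_height(9))   # 4: nine nodes fit in levels of 1 + 2 + 4 + 2
print(min_height(15))  # 4: 15 = 2**4 - 1 fills four levels exactly
print(min_height(16))  # 5: one more node forces a fifth level
```

Note how the answer only grows when n crosses a power of two, which is exactly the point where the lowest level runs out of slots.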
14,157 | tree structure the height of the tree will be important in analyzing the time-complexities of various algorithms applied to binary trees the structural properties of binary trees can also play role in the efficiency of an algorithm in factsome algorithms require specific tree structures full binary tree is binary tree in which each interior node contains two children full trees come in many different shapesas illustrated in figure figure examples of full binary trees perfect binary tree is full binary tree in which all leaf nodes are at the same level the perfect tree has all possible node slots filled from top to bottom with no gapsas illustrated in figure figure perfect binary tree binary tree of height is complete binary tree if it is perfect binary tree down to height and the nodes on the lowest level fill the available slots from left to right leaving no gaps consider the two complete binary trees in figure if any of the three leaf nodes labeled abor in the left tree were missingthat tree would not be complete likewiseif either leaf node labeled or in the right tree were missingit would not be complete implementation binary trees are commonly implemented as dynamic structure in the same fashion as linked lists binary tree is data structure that can be used to implement many different abstract data types since the operations that binary tree supports |
[Figure: examples of complete binary trees, perfect down to height h-1 with the slots on the lowest level filled from left to right.]

depend on its application, we are going to create and work with the trees directly instead of creating a generic binary tree class.

Trees are generally illustrated as abstract structures with the nodes represented as circles or boxes and the edges as lines or arrows. To implement a binary tree, however, we must explicitly store in each node the links to the two children along with the data stored in that node. We define the _BinTreeNode storage class, shown in the listing below, for creating the nodes in a binary tree. Like other storage classes, the tree node class is meant for internal use only. The following figure illustrates the physical implementation of the sample binary tree from the earlier figure.

Listing: The binary tree node class.

  # The storage class for creating binary tree nodes.
  class _BinTreeNode :
    def __init__( self, data ):
      self.data = data
      self.left = None
      self.right = None

Tree Traversals

The operations that can be performed on a binary tree depend on the application, especially the construction of the tree. In this section, we explore the tree traversal operation, which is one of the most common operations performed on collections of data. Remember, a traversal iterates through a collection, one item at a time, in order to access or visit each item. The actual operation performed when "visiting" an item is application dependent, but it could involve something as simple as printing the data item or saving it to a file. With a linear structure such as a linked list, the traversal is rather easy since we can start with the first node and iterate through the nodes, one at a time, by following the links between the nodes. But how do we visit every node in a binary tree? There is no single path from the root to every other node in the tree. Remember, the links between the nodes lead us down into the tree. If we were to
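The size and height properties defined earlier can be computed directly from nodes linked in this fashion. A minimal sketch, assuming the _BinTreeNode class above; the helper function names are my own, not from the text:

```python
class _BinTreeNode:
    """Storage class for creating binary tree nodes."""
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def tree_size(node):
    # Number of nodes in the (sub)tree rooted at node.
    if node is None:
        return 0
    return 1 + tree_size(node.left) + tree_size(node.right)

def tree_height(node):
    # Number of levels; an empty tree has height 0.
    if node is None:
        return 0
    return 1 + max(tree_height(node.left), tree_height(node.right))

# Hand-link a small three-level tree.
root = _BinTreeNode('a')
root.left = _BinTreeNode('b')
root.right = _BinTreeNode('c')
root.left.left = _BinTreeNode('d')

print(tree_size(root), tree_height(root))  # 4 3
```

Both helpers follow the same pattern as the traversals developed next: a None link is the base case, and each node combines the results from its two subtrees.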
[Figure: the physical implementation of a binary tree, with each node storing its data and the links to its two children.]

simply follow the links, then once we reach a leaf node we cannot directly access any other node in the tree.

Preorder Traversal

A tree traversal must begin with the root node, since that is the only access into the tree. After visiting the root node, we can then traverse the nodes in its left subtree followed by the nodes in its right subtree. Since every node is the root of its own subtree, we can repeat the same process on each node, resulting in a recursive solution. The base case occurs when a null child link is encountered, since there will be no subtree to be processed from that link. The recursive operation can be viewed graphically, as illustrated in the figure.

[Figure: trees are traversed recursively: visit the node, traverse the left subtree, traverse the right subtree.]

Consider the binary tree in the figure. The dashed lines show the logical order in which the nodes would be visited during the traversal: A, B, D, E, H, C, F, G, I, J. This traversal is known as a preorder traversal since we first visit the node, followed by the subtree traversals. The recursive function for a preorder traversal of a binary tree is rather simple, as shown in the listing below. The subtree argument will either be a null reference or a reference to the root of a subtree in the binary tree. If the reference is not None, the node is first visited and then the two subtrees are traversed. By convention, the left subtree is always visited before the right subtree. The subtree argument
[Figure: the logical ordering of the nodes with a preorder traversal: visit the node, traverse the left subtree, traverse the right subtree.]

will be a null reference when the binary tree is empty or when we attempt to follow a non-existent link for one or both of the children. Given a binary tree of size n, a complete traversal of the binary tree visits each node once. If the visit operation only requires constant time, the tree traversal can be done in O(n).

Listing: Preorder traversal on a binary tree.

  def preorderTrav( subtree ):
    if subtree is not None :
      print( subtree.data )
      preorderTrav( subtree.left )
      preorderTrav( subtree.right )

Inorder Traversal

In the preorder traversal, we chose to first visit the node and then traverse both subtrees. Another traversal that can be performed is the inorder traversal, in which we first traverse the left subtree, then visit the node, followed by the traversal of the right subtree. The figure shows the logical ordering of the node visits in the example tree: D, B, H, E, A, F, C, I, G, J. The recursive function for an inorder traversal of a binary tree is provided in the listing below. It is almost identical to the preorder traversal function; the only difference is that the visit operation is moved to follow the traversal of the left subtree.

Listing: Inorder traversal on a binary tree.

  def inorderTrav( subtree ):
    if subtree is not None :
      inorderTrav( subtree.left )
      print( subtree.data )
      inorderTrav( subtree.right )
[Figure: the logical ordering of the nodes with an inorder traversal: traverse the left subtree, visit the node, traverse the right subtree.]

Postorder Traversal

We can also perform a postorder traversal, which can be viewed as the opposite of the preorder traversal. In a postorder traversal, the left and right subtrees of each node are traversed before the node is visited. The recursive function is provided in the listing below.

Listing: Postorder traversal on a binary tree.

  def postorderTrav( subtree ):
    if subtree is not None :
      postorderTrav( subtree.left )
      postorderTrav( subtree.right )
      print( subtree.data )

The example tree with the logical ordering of the node visits in a postorder traversal is shown in the figure. The nodes are visited in this order: D, H, E, B, F, I, J, G, C, A. You may notice that the root node is always visited first in a preorder traversal but last in a postorder traversal.

[Figure: the logical ordering of the nodes with a postorder traversal: traverse the left subtree, traverse the right subtree, visit the node.]
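The three depth-first orderings can be compared side by side on a small hand-built tree. This is a self-contained sketch: the node class mirrors the _BinTreeNode storage class, the traversal functions collect values into a list instead of printing, and the tree linking is reconstructed from the visit orders quoted in the text:

```python
class _BinTreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def preorder(node, out):
    if node is not None:
        out.append(node.data)          # visit, then both subtrees
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):
    if node is not None:
        inorder(node.left, out)
        out.append(node.data)          # visit between the subtrees
        inorder(node.right, out)

def postorder(node, out):
    if node is not None:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.data)          # visit after both subtrees

# Rebuild the example tree from the figures: root a, children b and c,
# b has children d and e, e has left child h, c has children f and g,
# and g has children i and j.
n = {ch: _BinTreeNode(ch) for ch in 'abcdefghij'}
n['a'].left, n['a'].right = n['b'], n['c']
n['b'].left, n['b'].right = n['d'], n['e']
n['e'].left = n['h']
n['c'].left, n['c'].right = n['f'], n['g']
n['g'].left, n['g'].right = n['i'], n['j']

pre, ino, post = [], [], []
preorder(n['a'], pre)
inorder(n['a'], ino)
postorder(n['a'], post)
print(''.join(pre))   # abdehcfgij
print(''.join(ino))   # dbheafcigj
print(''.join(post))  # dhebfijgca
```

The three functions differ only in where the visit step sits relative to the two recursive calls, which is exactly the point the listings above make.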
14,162 | binary trees breadth-first traversal the preorderinorderand postorder traversals are all examples of depth-first traversal that isthe nodes are traversed deeper in the tree before returning to higher-level nodes another type of traversal that can be performed on binary tree is the breadth-first traversal in breadth-first traversalthe nodes are visited by levelfrom left to right figure shows the logical ordering of the nodes in breadth-first traversal of the example tree ii jj figure the logical ordering of the nodes with breadth-first traversal recursion cannot be used to implement breadth-first traversal since the recursive calls must follow the links that lead deeper into the tree insteadwe must devise another approach your first attempt might be to visit node followed by its two children thusin the example tree we would visit node followed by nodes and cwhich is the correct ordering but what happens when we visit node bwe can' visit its two childrend and euntil after we have visited node what we need is way to remember or save the two children of until after has been visited likewisewhen visiting node cwe will have to save its two children until after the children of have been visited after visiting node cwe have saved four nodes--defand --which are the next four to be visitedin the order they were saved the best way to save node' children for later access is to use queue we can then use an iterative loop to move across the tree in the correct node order to produce breadth-first traversal listing uses queue to implement the breadth-first traversal the process starts by saving the root node and in turn priming the iterative loop during each iterationwe remove node from the queuevisit itand then add its children to the queue the loop terminates after all nodes have been visited expression trees arithmetic expressions such as ( + )*( - can be represented using an expression tree an expression tree is binary tree in which the operators are stored in the 
interior nodes and the operands (the variables or constant values) are stored
Listing: Breadth-first traversal on a binary tree.

  def breadthFirstTrav( binTree ):
    # Create a queue and add the root node to it.
    q = Queue()
    q.enqueue( binTree )

    # Visit each node in the tree.
    while not q.isEmpty() :
      # Remove the next node from the queue and visit it.
      node = q.dequeue()
      print( node.data )

      # Add the two children to the queue.
      if node.left is not None :
        q.enqueue( node.left )
      if node.right is not None :
        q.enqueue( node.right )

in the leaves. Once constructed, an expression tree can be used to evaluate the expression or for converting an infix expression to either prefix or postfix notation. The structure of the expression tree is based on the order in which the operators are evaluated. The operator in each internal node is evaluated after both its left and right subtrees have been evaluated. Thus, the lower an operator is in a subtree, the earlier it will be evaluated; the root node contains the operator to be evaluated last. The figure illustrates several sample expression trees.

[Figure: sample arithmetic expression trees.]

While Python provides the eval() function for evaluating an arithmetic expression stored as a string, the string must be parsed each time it is evaluated. This means the Python interpreter has to determine the order in which the operators are evaluated and then perform each of the corresponding operations. One way it can do this is with the use of an expression tree. After the expression has been parsed and the tree constructed, the evaluation step is quite simple, as you will see later in this section. This real-time evaluation of expression strings is not commonly available in compiled languages. When using such a language and a user-supplied expression has to be evaluated, an expression tree can be constructed and evaluated to obtain the result.
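The breadth-first listing relies on the book's Queue ADT. A minimal standalone sketch of the same technique, substituting Python's collections.deque for the Queue class and returning the visited values instead of printing them (those substitutions are my own, not the book's):

```python
from collections import deque

class _BinTreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def breadth_first(root):
    # Visit the nodes level by level, left to right.
    visited = []
    if root is None:
        return visited
    queue = deque([root])
    while queue:
        node = queue.popleft()          # next node in FIFO order
        visited.append(node.data)
        if node.left is not None:       # children wait their turn
            queue.append(node.left)
        if node.right is not None:
            queue.append(node.right)
    return visited

# Two-level example: a with children b and c; b with children d and e.
root = _BinTreeNode('a')
root.left, root.right = _BinTreeNode('b'), _BinTreeNode('c')
root.left.left, root.left.right = _BinTreeNode('d'), _BinTreeNode('e')
print(''.join(breadth_first(root)))  # abcde
```

The FIFO queue is what forces level order: both children of a node are saved before any grandchild is reached, exactly as described in the text.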
14,164 | binary trees expression tree abstract data type arithmetic expressions can consist of both unary (-an!and binary operators ( bwe only consider expressions containing binary operators and leave the inclusion of unary operators as an exercise binary operators are stored in an expression tree with the left subtree containing the left side of the operation and the right subtree containing the right side we define the expression tree adt below for use with arithmetic expressions consisting of operands comprised of singleinteger digits or single-letter variables define expression tree adt an expression tree is binary tree representation of an arithmetic expression that consists of various operators (+-*/%and operands comprised of single integer digits and single-letter variables within fully parenthesized expression expressiontreeexpstr )builds an expression tree for the expression given in expstr assume the string contains validfully parenthesized expression evaluatevardict )evaluates the expression tree and returns the numeric result the values of the single-letter variables are extracted from the supplied dictionary structure an exception is raised if there is division by zero error or an undefined variable is used tostring ()constructs and returns string representation of the expression the expression tree adt can be used to evaluate basic arithmetic expressions of any size the following example illustrates the use of the adtcreate dictionary containing values for the one-letter variables vars ' ' build the tree for sample expression and then evaluate it exptree expressiontree"( /( - ))print"the result "exptree evaluate(varswe can change the value assigned to variable and reevaluate vars[' ' print"the result "exptree evaluate(varsin the following sections we develop algorithms for constructing and evaluating arithmetic expression trees in order to implement the expressiontree class partial implementation is provided in listing all of the operations will 
require a recursive algorithm that is applied to the tree structure. Thus, each will call a helper method to which the root reference will be passed in order to initiate the recursion. If the helper methods were not used, the client or user code would have to have access to the root reference in order to pass it to the recursive operation.
Listing: The exptree.py module.

  class ExpressionTree :
    # Builds an expression tree for the expression string.
    def __init__( self, expStr ):
      self._expTree = None
      self._buildTree( expStr )

    # Evaluates the expression tree and returns the resulting value.
    def evaluate( self, varMap ):
      return self._evalTree( self._expTree, varMap )

    # Returns a string representation of the expression tree.
    def __str__( self ):
      return self._buildString( self._expTree )

  # Storage class for creating the tree nodes.
  class _ExpTreeNode :
    def __init__( self, data ):
      self.element = data
      self.left = None
      self.right = None

The constructor creates a single data field for storing the reference to the root node of the tree. The _buildTree() helper method is then called to actually construct the tree. The evaluate() and __str__ methods each call their own helper method and simply return the value returned by the helper. The nodes of the expression tree will be created by the _ExpTreeNode storage class shown in the listing. The helper methods will be developed in the following sections.

String Representation

Before looking at how we create and evaluate expression trees, let's consider the results of performing the three depth-first traversals on an arithmetic expression tree. Consider the larger expression tree from the earlier figure and suppose we perform a postorder traversal on the tree. What does the resulting ordering represent? If you look closely, you should notice it is the postfix representation of the expression. Thus, a postorder traversal can be used to convert an arithmetic expression tree to the equivalent postfix expression, while a preorder traversal will produce the equivalent prefix expression. So, in what order would the nodes be visited by an inorder traversal? The result appears to be the infix representation, but notice it is not correct, since the parentheses around the subexpression were omitted. Even though this result is incorrect, we can develop an algorithm that uses a combination of all three depth-first
[Figure: expression tree for the sample expression, with the addition at the root and the multiplication and division in its subtrees.]

traversals to produce the correct expression. Trying to determine the minimum sets of parentheses that are required can be difficult, but we can easily create a fully parenthesized expression. We know an inorder traversal produces the correct ordering of operators and operands for the resulting expression; we just have to figure out how to insert the parentheses. In a fully parenthesized expression, a pair of parentheses encloses each operator and its operands. Thus, we need to enclose each subtree within a pair of parentheses, as illustrated in the figure. A left parenthesis needs to be printed before a subtree is visited, whereas the right one needs to be printed after the subtree has been visited. We can combine all three traversals in a single recursive operation, as shown in the listing.

[Figure: the expression tree with braces grouping the subtrees.]

Tree Evaluation

Given an algebraic expression represented as a binary tree, we can develop an algorithm to evaluate the expression. Each subtree represents a valid subexpression, with those lower in the tree having higher precedence. Thus, the two subtrees of each interior node must be evaluated before the node itself. For example, in the expression tree from the figure, the addition operation cannot be performed until both subexpressions (the multiplication and the division) have been computed, as
Listing: The _buildString helper method.

  class ExpressionTree :
    # ...
    # Recursively builds a string representation of the expression tree.
    def _buildString( self, treeNode ):
      # If the node is a leaf, it's an operand.
      if treeNode.left is None and treeNode.right is None :
        return str( treeNode.element )
      else :
        # Otherwise, it's an operator.
        expStr = '('
        expStr += self._buildString( treeNode.left )
        expStr += str( treeNode.element )
        expStr += self._buildString( treeNode.right )
        expStr += ')'
        return expStr

their results are needed by the operation. Further, the division cannot be evaluated until the subtraction in its subtree has been computed. We have already discussed two versions of an algorithm that processes both subtrees of a node before the node itself. Remember, this was the technique employed by both the preorder and postorder traversals, and we can use one of these to evaluate an expression tree. The difference is that the visit operation is only applied to the operator (interior) nodes, and the visit becomes the evaluation of the operation applied to the values of both subtrees. The recursive function for evaluating an expression tree and returning the result is provided in the listing below.

Listing: Evaluating an expression tree.

  class ExpressionTree :
    # ...
    def _evalTree( self, subtree, varDict ):
      # See if the node is a leaf node, in which case return its value.
      if subtree.left is None and subtree.right is None :
        # Is the operand a literal digit?
        if subtree.element >= '0' and subtree.element <= '9' :
          return int( subtree.element )
        else :
          # Or is it a variable?
          assert subtree.element in varDict, "Invalid variable."
          return varDict[ subtree.element ]
      # Otherwise, it's an operator that needs to be computed.
      else :
        # Evaluate the expressions in the left and right subtrees.
        lvalue = self._evalTree( subtree.left, varDict )
        rvalue = self._evalTree( subtree.right, varDict )
        # Evaluate the operator using a helper method.
        return self._computeOp( lvalue, subtree.element, rvalue )

    # Computes the arithmetic operation based on the supplied op string.
    def _computeOp( self, left, op, right ):
      # ...
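The text leaves the implementation of _computeOp() as an exercise. One possible standalone sketch of it, written as a plain function; the name, the float result of '/', and the error handling are my own choices rather than the book's:

```python
def compute_op(left, op, right):
    # Apply one of the five binary operators the expression tree ADT
    # allows: + - * / %. Raises an exception on division by zero,
    # matching the behavior the ADT definition calls for.
    if op == '+':
        return left + right
    elif op == '-':
        return left - right
    elif op == '*':
        return left * right
    elif op == '/':
        if right == 0:
            raise ZeroDivisionError("division by zero in expression")
        return left / right
    elif op == '%':
        if right == 0:
            raise ZeroDivisionError("modulo by zero in expression")
        return left % right
    else:
        raise ValueError("unknown operator: " + op)

# Evaluating 9 / (8 - 3) bottom-up, the way _evalTree combines results.
print(compute_op(9, '/', compute_op(8, '-', 3)))  # 1.8
```

Dispatching on the operator string keeps the evaluator free of any precedence logic; by the time _computeOp is called, the tree structure has already fixed the evaluation order.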
14,168 | binary trees when leaf node is encounteredwe know it contains an operand but we must determine if that operand is single-integer digitin which case the integer value can be returnedor if it' single-letter variable in the case of the latterthe value for the variable must be located and returned from the user-supplied dictionary for interior nodesthe two subtrees are evaluated by recursively calling the evaltree(function after the two recursive calls returnthe operation represented by the interior node can be computed the computation is performed using the computeop(helper functionwhich performs the appropriate arithmetic operation based on the given operator the implementation of computeop(is left as an exercise the recursive call tree for the evalstr(method is shown in figure when applied to the expression tree from figure + / - figure the recursive call tree for the _evalstr(function tree construction you have seen how an expression tree is usednow let' look at how to construct the tree given an infix expression for simplicitywe assume the following( the expression is stored in string with no white space( the supplied expression is valid and fully parenthesized( each operand will be single-digit or single-letter variableand ( the operators will consist of +-*/and an expression tree is constructed by parsing the expression and evaluating the individual tokens as the tokens are evaluatednew nodes are inserted into the tree for both the operators and operands each set of parentheses will consist of an interior node containing the operator and two childrenwhich may be single valued or subtrees representing subexpressions the process starts with an empty root node set as the current noderoot current suppose we are building the tree for the expression ( * the action taken depends on the value of the current token the first token is left parenthesis |
14,169 | when left parenthesis is encountereda new node is created and linked into the tree as the left child of the current node we then descend down to the new nodemaking the left child the new current node root token'(root current current the next token is the operand when an operand is encounteredthe data value of the current node is set to contain the operand we then move up to the parent of the current node root token' root current current next comes the plus operator when an operator is encounteredthe data value of the current node is set to the operator new node is then created and linked into the tree as the right child of the current node we descend down to the new node root token'*root *current current the second operand repeats the same action taken with the first operandtoken' root root *current current finallythe right parenthesis is encountered and we move up to the parent of the current node in this casewe have reached the end of the expression and the tree is complete token')root current |
14,170 | binary trees constructing the expression tree involves performing one of five different steps for each token in the expression this same process can be used on larger expressions to construct each part of the tree consider figure which illustrates the steps required to build the tree for the expression (( * )+ the steps illustrated in the figure are described below* ( ( * ( ( ( ( ( * ( ( ( figure steps for building an expression tree for (( create an empty root node and mark it as the current node read the left parenthesisadd new node as the left child and descend down to the new node read the next left parenthesisadd new node as the left child and descend down to the new node read the operand set the value of the current node to the operand and move up to the parent of the current node read the operator *set the value of the current node to the operator and create new node linked as the right child then descend down to the new node read the operand set the value of the current node to the operand and move up to the parent of the current node read the right parenthesismove up to the parent of the current node read the operator +set the value of the current node to the operator and create new node linked as the right childdescend down to the new node |
Read the operand: set the value of the current node to the operand and move up to the parent of the current node. Read the right parenthesis: move up to the parent of the current node. Since this is the last token, we are finished and the expression tree is complete.

Having stepped through the construction of two sample expressions, we now turn our attention to the implementation of the method for building an expression tree. Throughout the process we have to descend down into the tree to construct each side of an operator and then back up when a right parenthesis is encountered. But how do we remember where we were in order to back up? There are two approaches we can use: one involves the use of a stack and the other recursion. In an earlier chapter you saw that backtracking is automatically handled by the recursion as the recursive calls unwind. Given this simplicity, we implement a recursive function to build an expression tree, as shown in the listing below.

Listing: Constructing an expression tree.

  class ExpressionTree :
    # ...
    def _buildTree( self, expStr ):
      # Build a queue containing the tokens in the expression string.
      expQ = Queue()
      for token in expStr :
        expQ.enqueue( token )

      # Create an empty root node.
      self._expTree = _ExpTreeNode( None )
      # Call the recursive function to build the expression tree.
      self._recBuildTree( self._expTree, expQ )

    # Recursively builds the tree given an initial root node.
    def _recBuildTree( self, curNode, expQ ):
      # Extract the next token from the queue.
      token = expQ.dequeue()

      # See if the token is a left paren: '('
      if token == '(' :
        curNode.left = _ExpTreeNode( None )
        self._recBuildTree( curNode.left, expQ )

        # The next token will be an operator.
        curNode.element = expQ.dequeue()
        curNode.right = _ExpTreeNode( None )
        self._recBuildTree( curNode.right, expQ )

        # The next token will be ')', remove it.
        expQ.dequeue()

      # Otherwise, the token is an operand; digits are converted to
      # integers later, during evaluation.
      else :
        curNode.element = token
14,172 | binary trees the recbuildtree(method takes two argumentsa reference to the current node and queue containing the tokens that have yet to be processed the use of the queue is the easiest way to keep track of the tokens throughout the recursive process we indicated earlier that the expression will be supplied as stringbut strings in python are immutablewhich makes it difficult to remove the tokens as they are processed the queuewhich was introduced in is the best choice since the tokens will be processed in fifo order the non-recursive buildtree(method creates queue and fills it with the tokens from the expressionas shown in lines - of listing the code in this method can actually be placed within the constructor we only used this helper method in order to hide the tree construction details in the initial presentation of the expressiontree class in listing until the actual operation was presented the recursive function assumes the root node has been created before the first invocation thusafter building the token queue in buildtree()an empty root node is created and the two structures are passed to the recursive function the recbuildtree(method implements the five operations for building the expression tree as described earlier if you review those stepsyou will notice the only times we descend down the tree is after encountering left parenthesis or an operator after moving down to either the left or right child nodethe next token encountered must be either left parenthesis or an operand thusthe function extracts the next token to be processed and then evaluates it to see if it is either or an operand if the token is an operandwe can set the data field of the current node with the integer value of the token and then return this takes us back to the parent of the current node the bulk of the work is done when left parenthesis is encountered the same sequence of stepsas shown in lines - is always performed since we are only working with binary operators firsta 
new left child is created and we descend down to the new node by making recursive call upon returning to this invocation of the functionwhich represents the parent of the new nodethe next token must contain an operator it is removed from the queue and assigned to the current node' data field new right child is then created and again we descend down to the new node to process the right side of the operator finallywhen the second recursive call returnsthe next token will be right parenthesiswhich can removed from the queue and discarded heaps heap is complete binary tree in which the nodes are organized based on their data entry values there are two variants of the heap structure max-heap has the propertyknown as the heap order property that for each non-leaf node the value in is greater than the value of its two children the largest value in max-heap will always be stored in the root while the smallest values will be stored in the leaf nodes the min-heap has the opposite property for each non-leaf node the value in is smaller than the value of its two children figure illustrates an example of max-heap and min-heap |
(a) min-heap    (b) max-heap
Figure: Examples of a heap.

Definition

The heap is a specialized structure with limited operations. We can insert a new value into a heap or extract and remove the root node's value from the heap. In this section, we explore these operations for use with a max-heap. Their application to a min-heap is identical except for the logical relationship between each node and its children.

Insertions

When a new value is inserted into a heap, the heap order property and the heap shape property (a complete binary tree) must be maintained. Suppose we want to add a large value to the max-heap in the figure. If we are to maintain the property of the max-heap, there are only a few places in the tree where it can be inserted, as shown in part (a) of the next figure. Contrast this to the possible locations if we were to add a smaller value, shown in part (b). Knowing the possible locations is only part of the problem. What happens to the values in the nodes where the new value must be stored in order to maintain the heap order property? In other words, if we insert the new value into the heap, it must be placed into one of those candidate nodes. Suppose we choose one of them; what becomes of the value currently stored there? It will have to
be moved to another node where it can be legally placed, and the value it displaces will have to be moved, and so on, until a new leaf node is created for the last value displaced.

Instead of starting from the top and searching for a node in the tree where the new value can be properly placed, we can start at the bottom and work our way up. This involves several steps, which we outline using the accompanying figure. First, we create a new node and fill it with the new value, as shown in part (a). The node is then attached as a leaf node at the only spot in the tree where the heap shape property can be maintained (part (b)); remember, a heap is a complete tree, and in such a tree the leaf nodes on the lowest level must be filled from left to right. As you will notice, the heap order property has been violated since the parent of the new node is smaller, but in a max-heap it is supposed to be larger.

(a) Create a new node for the value.  (b) Link the node as the last child.  (c) Sift-up: swap with the parent.  (d) Sift-up: swap with the new parent.
Figure: The steps to insert a value into the heap.

To restore the heap order property, the new value has to move up along the path, in reverse order from the root to the insertion point, until a node is found where it can be positioned properly. This operation is known as a sift-up. It is also known as an up-heap, bubble-up, percolate-up, or heapify-up, among other names. The sift-up operation compares the value in the new node to the value in its parent node. Since its parent is smaller, we know the new value belongs above the parent, and the two values are swapped, as shown in part (c). The new value is then compared to the value in its new parent node. Again, we find the parent is smaller and the two values have to be swapped, as shown in part (d). The comparison is repeated again, but this time we find the value is less than or equal to its parent and the process ends.

Now, suppose we add another value to the heap, as illustrated in the next figure. A new node is created, filled with the new value, and linked into the tree as the leftmost open leaf position. When the new value is sifted up, we find one swap is required, resulting in the final placement of the new value.

(a) (b)
Figure: Inserting a value into the heap: (a) create the new node and link it into the tree, and (b) sift the new value up the tree.

Extractions

When a value is extracted and removed from the heap, it can only come from the root node. Thus, in a max-heap, we always extract the largest value, and in a min-heap, we always extract the smallest value. After the value in the root has been removed, the binary tree is no longer a heap since there is now a gap in the root node, as illustrated in the accompanying figure.

Figure: Extracting a value from the max-heap leaves a hole at the root node.

To restore the tree to a heap, another value will have to take the place of the value extracted from the root, and a node has to be removed from the tree since
there is one less value in the heap. Since a heap requires a complete tree, there is only one leaf that can be removed: the rightmost node on the lowest level.

To maintain a complete tree and the heap order property, an extraction requires several steps. First, we copy and save the value from the root node, which will be returned after the extraction process has been completed. Next, the value from the rightmost node on the lowest level is copied to the root and that leaf node is removed from the tree, as shown in part (a) of the accompanying figure. This maintains the heap structure property requiring a complete tree, but it violates the heap order property since the copied value is smaller than its children.

To restore the heap order property, the copied value has to be sifted down the tree. The sift-down works in the same fashion as the sift-up used with an insertion. Starting at the root node, the node's value is compared to its children and swapped with the larger of the two. The sift-down is then applied to the node into which the smaller value was copied. This process continues until the smaller value is copied into a leaf node or a node whose children are even smaller. Parts (b) through (d) of the figure show the sift-down operation applied to the value in the root node: the value is swapped with the larger child at each level, resulting in a proper heap.

(a) Copy the last item to the root.  (b) Sift-down: first swap.  (c) Sift-down: second swap.  (d) Sift-down: third swap.
Figure: The steps in restoring a max-heap after extracting the root value.
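The insertion and extraction behavior described above can be observed quickly with Python's built-in heapq module. heapq maintains a min-heap, so negating the values simulates the max-heap used in this discussion. This is a sketch for experimentation, not the array-based implementation developed below:

```python
import heapq

# heapq provides a min-heap; storing negated values makes the smallest
# stored value correspond to the largest original value (a max-heap).
heap = []
for value in [37, 90, 12, 51, 28]:
    heapq.heappush(heap, -value)   # insert with an implicit sift-up

# Each extraction removes the root, i.e., the current maximum,
# and sifts the replacement value down to restore heap order.
extracted = [-heapq.heappop(heap) for _ in range(len(heap))]
assert extracted == [90, 51, 37, 28, 12]
```

Because every push and pop only walks one root-to-leaf path, each operation is logarithmic in the number of stored items.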
Implementation

Throughout our discussion, we have used the abstract view of a binary tree, with nodes and edges, to illustrate the heap structure. While a heap is a binary tree, it's seldom, if ever, implemented as a dynamic linked structure due to the need of navigating the tree both top-down and bottom-up. Instead, we can implement a heap using an array or vector to physically store the individual nodes with implicit links between the nodes. Suppose we number the nodes in the heap left to right by level, starting with zero, as shown in part (a) of the accompanying figure. We can then place the heap values within an array using these node numbers as indices into the array, as shown in part (b).

(a) (b)
Figure: A heap can be implemented using an array or vector.

Node Access

Since a heap is a complete tree, it will never contain holes resulting from missing internal nodes. Thus, the root will always be at position 0 within the array and its two children will always occupy elements 1 and 2. In fact, the children of any given node will always occupy the same elements within the array. This allows us to quickly locate the parent of any node, or the left and right child of any node. Given the array index i of a node, the index of the parent or children of that node can be computed as:

    parent = (i - 1) // 2
    left   = 2 * i + 1
    right  = 2 * i + 2
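The index arithmetic above can be captured in a few helper functions. A minimal sketch (the function names are our own, not part of the book's MaxHeap class):

```python
def parent_index(i):
    # Index of the parent of the node stored at index i (undefined for the root).
    return (i - 1) // 2

def left_index(i):
    # Index of the left child of the node stored at index i.
    return 2 * i + 1

def right_index(i):
    # Index of the right child of the node stored at index i.
    return 2 * i + 2

def has_left_child(i, count):
    # A node has a left child only if the computed index falls within
    # the portion of the array occupied by heap items.
    return left_index(i) < count

# For a heap of 10 items, the node at index 6 would have its left child
# at index 13, which is outside the range 0..9, so no child exists.
assert has_left_child(6, 10) == False
```

The same out-of-range test applied to the right-child index determines whether a right child exists.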
Determining if a node's child link is null is simply a matter of computing the index of the appropriate child and testing to see if the index is out of range. For example, to test whether a given node has a left child, we plug the node's index into the equation for computing the left child index; if the resulting index falls beyond the last heap item stored in the array, the node does not have a left child.

Class Definition

We define the MaxHeap class for our array-based implementation of the max-heap in the listing below. An array-based version of the heap structure is commonly used when the maximum capacity of the heap is known beforehand. If the maximum capacity is not known, then a Python list structure can be used instead. The array is created with a size equal to the maxSize argument supplied to the constructor and assigned to _elements. Since we will be adding one item at a time to the heap, the items currently in the heap will only use a portion of the array, with the remaining elements available for new items. The _count attribute keeps track of how many items are currently in the heap.

Listing: The arrayheap.py module.

    # An array-based implementation of the max-heap.
    class MaxHeap :
        # Create a max-heap with maximum capacity of maxSize.
        def __init__( self, maxSize ):
            self._elements = Array( maxSize )
            self._count = 0

        # Return the number of items in the heap.
        def __len__( self ):
            return self._count

        # Return the maximum capacity of the heap.
        def capacity( self ):
            return len( self._elements )

        # Add a new value to the heap.
        def add( self, value ):
            assert self._count < self.capacity(), "Cannot add to a full heap."
            # Add the new value to the end of the list.
            self._elements[ self._count ] = value
            self._count += 1
            # Sift the new value up the tree.
            self._siftUp( self._count - 1 )

        # Extract the maximum value from the heap.
        def extract( self ):
            assert self._count > 0, "Cannot extract from an empty heap."
            # Save the root value and copy the last heap value to the root.
            value = self._elements[0]
            self._count -= 1
            self._elements[0] = self._elements[ self._count ]
            # Sift the root value down the tree.
            self._siftDown( 0 )
            return value

        # Sift the value at the ndx element up the tree.
        def _siftUp( self, ndx ):
            if ndx > 0 :
                parent = (ndx - 1) // 2
                if self._elements[ndx] > self._elements[parent] :  # swap elements
                    tmp = self._elements[ndx]
                    self._elements[ndx] = self._elements[parent]
                    self._elements[parent] = tmp
                    self._siftUp( parent )

        # Sift the value at the ndx element down the tree.
        def _siftDown( self, ndx ):
            left = 2 * ndx + 1
            right = 2 * ndx + 2
            # Determine which node contains the larger value.
            largest = ndx
            if left < self._count and self._elements[left] >= self._elements[largest] :
                largest = left
            if right < self._count and self._elements[right] >= self._elements[largest] :
                largest = right
            # If the largest value is not in the current node (ndx), swap it
            # with the largest value and repeat the process.
            if largest != ndx :
                self._elements[ndx], self._elements[largest] = \
                    self._elements[largest], self._elements[ndx]
                self._siftDown( largest )

The first step when adding a new item to the heap is to link a new leaf node in the rightmost position on the lowest level. In the array implementation, this will always be the next position following the last heap item in the array. After inserting the new item into the array, it has to be sifted up the tree to find its correct position. The accompanying figure illustrates the modifications to the heap and the storage array when adding a value to the sample heap.

Figure: Inserting a value into the heap implemented as an array.

To extract the maximum value from the max-heap, we first have to copy and save the value in the root node, which we know is in index position 0. Next, the root value has to be replaced with the value from the leaf node that is in the rightmost position on the lowest level of the tree. In the array implementation, that leaf node will always be the last item of the heap stored in linear order within the array. After copying the last heap item to the root node, the new value in the root node has to be sifted down the tree to find its correct position.

The implementation of the sift-down operation is straightforward. After determining the indices of the node's left and right child, we determine which of the three values is larger: the value in the node, the value in the node's left child, or the value in the node's right child. If one of the two children contains a value greater than or equal to the value in the node, (1) it has to be swapped with the value in the current node and (2) the sift-down operation has to be repeated on that child. Otherwise, the proper position of the value being sifted down has been located and the base case of the recursive operation is reached.

Analysis

Inserting an item into a heap implemented as an array requires O(log n) time in the worst case. Inserting the new item at the end of the sequence of heap items can be done in O(1) time. After the new item is inserted, it has to be sifted up the tree. The worst case time of the sift-up operation is the maximum number of levels the new item can move up the tree. A new item always begins in a leaf node and may end up in the root node, which is a distance equal to the height of the tree. Since a heap is a complete binary tree, we know its height is always log n. Extracting an item from a heap implemented as an array also requires O(log n) time in the worst case, the analysis of which we leave as an exercise.

The Priority Queue Revisited

A priority queue, which was introduced earlier, works like a normal queue except each item is assigned a priority and the items with higher priority are
dequeued first. The bounded priority queue, in which the number of priorities is fixed, allows for an efficient implementation with the use of an array of queues. The unbounded priority queue does not place any restriction on the maximum positive integer value that can be used as a priority. With an unlimited number of priorities, the array of queues implementation would not be very efficient and could waste a lot of space. Instead, we would have to use either the Python list based or the linked list based implementation of the priority queue.

A min-heap can also be used to implement the general priority queue. The ordering of the heap nodes is based on the priority associated with each item in the queue. For example, the accompanying figure illustrates the contents of the heap for the example priority queue. Since lower values indicate higher priority, the item with the highest priority will always be in the root of the min-heap. When that item is dequeued, the item with the next highest priority will work its way to the top as the sift-down operation is performed.

Figure: Contents of the heap ("white", "black", "purple", "orange", "green", "yellow") used in the implementation of a priority queue.

When using a heap implemented as an array, the operations of the general priority queue are very efficient: both the enqueue and dequeue operations have worst case times of O(log n). An array-based version of the heap is sufficient in applications where the maximum capacity of the queue is known beforehand. If the heap is implemented using a Python list, the enqueue and dequeue operations have worst case times of O(n), since the underlying array may have to expand or shrink, but an amortized cost of O(log n). The table below compares the worst case times and amortized costs for various implementations of the unbounded priority queue.

    Implementation   enqueue (worst)  dequeue (worst)  enqueue (amortized)  dequeue (amortized)
    Python list      O(n)             O(n)             O(1)                 O(n)
    Linked list      O(1)             O(n)             O(1)                 O(n)
    Heap (array)     O(log n)         O(log n)         O(log n)             O(log n)
    Heap (list)      O(n)             O(n)             O(log n)             O(log n)

Table: Time-complexities for various implementations of the unbounded priority queue.
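As a point of comparison, Python's standard library provides a binary min-heap through the heapq module, which can serve as a quick sketch of a heap-based priority queue. This is not the book's array-based implementation; the class and the entry counter used for tie-breaking are our own additions so that items with equal priority dequeue in FIFO order:

```python
import heapq
import itertools

class HeapPriorityQueue:
    """A minimal heap-based priority queue: lower priority values dequeue first."""

    def __init__(self):
        self._heap = []                    # list managed by heapq as a min-heap
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def enqueue(self, item, priority):
        # O(log n): push a (priority, sequence, item) triple; the sequence
        # number keeps equal-priority items in FIFO order.
        heapq.heappush(self._heap, (priority, next(self._counter), item))

    def dequeue(self):
        # O(log n): pop the entry with the smallest priority value.
        priority, _, item = heapq.heappop(self._heap)
        return item

    def __len__(self):
        return len(self._heap)

pq = HeapPriorityQueue()
pq.enqueue("white", 0)
pq.enqueue("black", 1)
pq.enqueue("green", 1)
pq.enqueue("purple", 5)
assert pq.dequeue() == "white"
assert pq.dequeue() == "black"   # FIFO among equal priorities
```

Since heapq stores its items in a plain Python list, this sketch corresponds to the "heap (list)" row of the table: O(n) worst case when the underlying array resizes, O(log n) amortized.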
Heapsort

The simplicity and efficiency of the heap structure can be applied to the sorting problem. The heapsort algorithm builds a heap from a sequence of unsorted values and then extracts the items from the heap to create a sorted sequence.

A Simple Implementation

Consider the function in the listing below. We create a max-heap with enough capacity to store all of the values in theSeq. Each value from the sequence is then inserted into the heap. After that, the values are extracted from the heap, one at a time, and stored back into the original sequence structure in reverse order. Since we are using a max-heap, each time a value is extracted we get the next largest value in sorted order. The heapsort algorithm is very efficient and only requires O(n log n) time in the worst case. The construction of the heap requires O(n log n) time since there are n items in the sequence and each call to add() requires log n time. Extracting the values from the heap and storing them into the sequence structure also requires O(n log n) time.

Listing: A simple implementation of the heapsort algorithm.

    def simpleHeapSort( theSeq ):
        # Create an array-based max-heap.
        n = len( theSeq )
        heap = MaxHeap( n )
        # Build a max-heap from the list of values.
        for item in theSeq :
            heap.add( item )
        # Extract each value from the heap and store them back into the list.
        for i in range( n-1, -1, -1 ):
            theSeq[i] = heap.extract()

Sorting In Place

The implementation of the heapsort algorithm provided above has one drawback: it requires the use of additional storage to build the heap structure. But we don't actually need a second array. The entire process of building the heap and extracting the values can be done in place, that is, within the same sequence in which the original values are supplied. Suppose we are given an array of values and want to sort them using the heapsort algorithm. The first step is to construct a heap from this sequence of values. As you will see, we can do this within the same array without the need for additional storage. Remember, the nodes in the heap
occupy the elements of the array from front to back. We can keep the heap items at the front of the array and those values that have yet to be added to the heap at the end of the array. All we have to do is keep track of where the heap ends and where the sequence of remaining values begins.

(a) (b)
Figure: Adding the first two values to the heap.

If we consider the first value in the array, it constitutes a max-heap of one item, as shown in part (a) of the figure. When adding a value to a heap, it's copied to the first element in the array immediately following the last heap item and sifted up the tree. The next value from our sequence that is to be added to the heap is already in this position. Thus, all we have to do is apply the sift-up operation to the value, resulting in a max-heap with two items, as illustrated in part (b). We can repeat this process on each value in the array to create a max-heap consisting of all the values from the array. This process is illustrated in the next figure, which includes both the abstract view of the heap and the contents

Figure: Adding the remaining values to the heap.
of the corresponding array. The shaded part of the array indicates the items that are currently part of the heap, and the boldfaced value indicates the next item to be sifted up the tree.

We have shown it is quite easy to build a heap using the same array containing the values that are to be added to the heap. A similar approach can be used to extract the values from the heap and create a sorted array using the array containing the heap. Remember, when the root value is extracted from a heap, the value from the rightmost leaf at the lowest level is copied to the root node and then sifted down the tree. Consider the completed heap in part (a) of the accompanying figure. When a value is extracted from the heap, the last value in the array would be copied to the root node since it corresponds to the rightmost leaf node at the lowest level. Instead of simply copying this leaf value to the root, we can swap the two values, as shown in part (b). The next step in the process of extracting a value from the heap is to remove the leaf node from the heap. In the array representation, we do this by reducing a counter indicating the number of items in the heap. In part (c), the elements comprising the heap are shown with a white background and the value

(a) The original max-heap.  (b) Swap the first and last items in the heap.  (c) Remove the last item from the heap.  (d) Sift the root value down the tree.
Figure: The three steps performed to extract, in place, a single value from the heap.
just swapped with the root is shown in a gray background. Notice that this value is the largest value in the original array of unsorted values and, when sorted, belongs in this exact position at the end of the array. Finally, the value copied from the leaf to the root has to be sifted down, as illustrated in part (d). If we repeat this same process, swapping the root value with the last item in the subarray that comprises the heap, for each item in the heap, we end up with a sorted array of values in ascending order. The accompanying figure illustrates the remaining steps in extracting each value from the heap and storing them in the same array. The shaded part of the array shows the values that have been removed from the heap and placed in sorted order, while the elements with a white background are those that still comprise the heap. The boldfaced values indicate those that were affected by the sift-down operation.

Figure: The steps in extracting the values from the heap into the same array that will store the resulting sequence.

The implementation for this improved version of the heapsort algorithm is provided in the listing below. It does not use the MaxHeap class from earlier, but it does rely on siftUp() and siftDown() functions like those used with the class.
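The siftUp() and siftDown() helpers that the listing relies on are not shown in this section. A self-contained sketch of what they might look like for a plain Python list follows; the names match the text, but the exact signatures are our assumption (siftDown takes the index of the last item still in the heap):

```python
def siftUp(seq, ndx):
    # Move seq[ndx] up the implicit tree until its parent is not smaller.
    while ndx > 0:
        parent = (ndx - 1) // 2
        if seq[ndx] <= seq[parent]:
            break
        seq[ndx], seq[parent] = seq[parent], seq[ndx]
        ndx = parent

def siftDown(seq, last):
    # Sift the root value down within seq[0..last] to restore heap order.
    ndx = 0
    while True:
        left, right = 2 * ndx + 1, 2 * ndx + 2
        largest = ndx
        if left <= last and seq[left] > seq[largest]:
            largest = left
        if right <= last and seq[right] > seq[largest]:
            largest = right
        if largest == ndx:
            break                      # heap order restored
        seq[ndx], seq[largest] = seq[largest], seq[ndx]
        ndx = largest

# Build a max-heap in place by sifting up each successive element.
values = [10, 51, 2, 18, 4, 31, 13]
for i in range(len(values)):
    siftUp(values, i)
assert values[0] == 51    # the maximum is now at the root
```

Iterative loops are used here instead of the recursive calls shown in the MaxHeap class; either form performs the same sequence of swaps.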
Listing: Improved implementation of the heapsort algorithm.

    # Sorts a sequence in ascending order using the heapsort.
    def heapSort( theSeq ):
        n = len( theSeq )
        # Build a max-heap within the same array.
        for i in range( n ):
            siftUp( theSeq, i )
        # Extract each value and rebuild the heap.
        for j in range( n-1, 0, -1 ):
            tmp = theSeq[j]
            theSeq[j] = theSeq[0]
            theSeq[0] = tmp
            siftDown( theSeq, j-1 )

Application: Morse Code

Morse code is a type of character encoding originally designed in the late 1830s by Samuel Morse for use with his telegraph system. Morse code allowed messages to be transmitted long distances across telegraph wires and was extensively used by the American railroad companies. It was first used in 1844 to transmit messages between Washington and Baltimore. The original code used various patterns of dots and spaces to represent the letters of the alphabet. While this was sufficient for use in the United States, the code could not be used in Europe to transmit non-English text, which contains diacritic marks. To remedy this shortcoming, Friedrich Clemens Gerke improved on the original Morse code and developed a new version that was first used in 1848 to transmit messages in Germany. Gerke's version of the code, with minor changes, was standardized in 1865 and became known as International Morse Code. The original code developed by Samuel Morse became known as American Morse Code.

The modern International Morse Code represents various letters, symbols, and digits using sequences of dots (or dits), dashes (or dahs), short gaps, and long gaps. The short gaps are used to break the sequence between letters and the long gaps are used to separate words. The most famous is the sequence for SOS: (. . . - - - . . .).

At this point, you might be wondering why we are discussing Morse code and what it has to do with binary trees. Suppose you are given a sequence of dots and dashes and would like to know what it means. The most obvious way to decode the message is to look through a table for each part of the sequence and find the corresponding letter. When decoded, the example message reads: trees are fun.
Decision Trees

Another way to translate the message is with the use of a decision tree. A decision tree models a sequence of decisions or choices in which selections are made in stages from among multiple alternatives at each stage. The stages in the decision are represented as nodes while the branches indicate the decisions that can be made at each stage.

A common use of the decision tree, with which you should be familiar, is the dreaded automated phone menu. When the automated system answers your call, it starts at the root of the tree and offers several choices from which you can choose. After making your initial selection, you are presented with a submenu from which you must make a second selection, and then possibly a third selection, and so on. The presentations of the menu options by the automated system are the stages in the decision, represented in the tree as nodes. The menu choices from which you can select at each stage are indicated by branches from those nodes.

This same idea can be used to decode a Morse code sequence. While each code sequence is unique, the sequences do not have unique prefixes. For example, the sequences for the letters A (. -) and R (. - .) both begin with a dot followed by a dash; it's not until the third component of the sequence that we can fully distinguish between an A, whose sequence ends after two symbols, and an R, whose sequence continues with a dot. In other words, the two-symbol sequence (. -) for the letter A is a prefix of the sequence for R. A subset of the International Morse Code is shown here:

    A . -        B - . . .    C - . - .    D - . .      E .          F . . - .
    G - - .      H . . . .    I . .        J . - - -    K - . -      L . - . .
    M - -        N - .        O - - -     P . - - .    Q - - . -    R . - .
    S . . .      T -          U . . -     V . . . -    W . - -      X - . . -
    Y - . - -    Z - - . .

To help decode a sequence, we can build a decision tree that models Morse code, as illustrated in the accompanying figure. The nodes represent the letters and symbols that are part of Morse code and the branches provide a selection of either a dot (left branch) or a dash (right branch). The root node is empty and indicates the starting position when decoding a sequence. To decode a given sequence, we start at the root and follow the left or
right branch to the next node based on the current symbol in our sequence. For example, to decode (. - .), we start at the root and examine the first symbol. Since the first symbol is a dot, we follow the left branch to the next node, which leads us to node E. Each time we move to a node, we examine the next character in the sequence. Since the second symbol is a dash, we take the right branch from node E, leading us to node A. From that node, we take the left branch since the third symbol is a dot. This leads us to node R. After exhausting all of the symbols in the sequence, the last node visited will contain the character corresponding to the
Figure: Morse code modeled as a binary decision tree.

given sequence. In this case, the sequence (. - .) represents the letter R. The path of the steps through the tree to decode the sequence is shown in the accompanying figure.

Figure: Decoding the Morse code sequence (. - .).

What happens if we try to decode an invalid sequence? For example, try decoding the sequence (- . - . .). This will take us from the root node right to node T, left to node N, right to node K, and left to node C. The last symbol in the sequence is a dot, which indicates we are supposed to take the left branch at node C, but it has no left child, as illustrated in the accompanying figure. If a null child link is encountered during the navigation of the tree, we know the sequence is invalid.

The ADT Definition

We can define an abstract data type that can be used to store a Morse code tree for use in decoding Morse code sequences. The ADT only includes two operations: the constructor and the translate operation.
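The decoding walk described above can be sketched with an explicit binary decision tree. This is an illustrative sketch, not the Morse Code Tree ADT implementation (which is left as an exercise); the node class, helper names, and the small subset of letters used here are our own choices:

```python
class _DecisionNode:
    def __init__(self, symbol=None):
        self.symbol = symbol   # character stored at this node (None for the root)
        self.left = None       # branch followed on a dot
        self.right = None      # branch followed on a dash

def add_code(root, letter, code):
    # Walk the code sequence, creating empty nodes as needed, and store
    # the letter in the node reached at the end of the sequence.
    node = root
    for ch in code:
        if ch == '.':
            if node.left is None:
                node.left = _DecisionNode()
            node = node.left
        else:
            if node.right is None:
                node.right = _DecisionNode()
            node = node.right
    node.symbol = letter

def translate(root, code):
    # Follow left on a dot and right on a dash; a null link means the
    # sequence is invalid, so None is returned.
    node = root
    for ch in code:
        node = node.left if ch == '.' else node.right
        if node is None:
            return None
    return node.symbol

root = _DecisionNode()
for letter, code in [('E', '.'), ('T', '-'), ('A', '.-'), ('N', '-.'),
                     ('R', '.-.'), ('K', '-.-'), ('C', '-.-.')]:
    add_code(root, letter, code)

assert translate(root, '.-.') == 'R'     # dot, dash, dot
assert translate(root, '-.-..') is None  # invalid: no left child below C
```

The construction mirrors the build strategy described for the constructor: follow the branches for each letter's sequence, adding empty nodes along the way, and assign the letter to the last node visited.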
Figure: Decoding an invalid Morse code sequence.

Define    Morse Code Tree ADT

A Morse code tree is a decision tree that contains the letters of the alphabet and other special symbols in its nodes. The nodes are organized based on the Morse code sequence corresponding to each letter and symbol.

MorseCodeTree(): Builds the Morse code tree consisting of the letters of the alphabet and other special symbols.

translate( codeSeq ): Translates and returns the given Morse code sequence to its equivalent character if the sequence is valid, or returns None otherwise.

We leave the implementation of the ADT as an exercise. The tree has to be built as part of the constructor. Start with an empty root node and then add one letter at a time. When adding a letter, follow the branches corresponding to the code sequence representing the given letter. If a null child link is encountered, simply add a new empty node and continue following the branches. After reaching the end of the sequence, the letter being added to the tree is assigned to the last node visited.

Exercises

Given a binary tree of a given size, what is the minimum number of levels it can contain? What is the maximum number of levels?

Draw all possible binary trees that contain a given number of nodes.
What is the maximum number of nodes possible in a binary tree with a given number of levels?

Given the binary trees shown in the accompanying figure:
(a) Indicate all of the structure properties that apply to each tree: full, perfect, complete.
(b) Determine the size of each tree.
(c) Determine the height of each tree.
(d) Determine the width of each tree.

Consider the binary tree shown in the accompanying figure:
(a) Show the order the nodes will be visited in a:
    i. preorder traversal
    ii. inorder traversal
    iii. postorder traversal
    iv. breadth-first traversal
(b) Identify all of the leaf nodes.
(c) Identify all of the interior nodes.
(d) List all of the nodes on a given level.
(e) List all of the nodes in the path to each of several given nodes.
(f) Consider a given node and list the node's:
    i. descendants
    ii. ancestors
    iii. siblings
(g) Identify the depth of each of several given nodes.

Determine the arithmetic expression represented by each of the expression trees shown in the accompanying figure.

Build the expression tree for each of several given arithmetic expressions.

Consider a given set of values and use them to build a heap by adding one value at a time in the order listed:
(a) min-heap
(b) max-heap

Prove or show that the worst case time of the extraction operation on a heap implemented as an array is O(log n).

Prove or show that the insertion and extraction operations on a heap implemented as a Python list are O(n) in the worst case. Also show that each operation has an amortized cost of O(log n).
Programming Projects

Implement the function treeSize(root), which computes the number of nodes in a binary tree.

Implement the function treeHeight(root), which computes the height of a binary tree.

Implement the computeOp(lvalue, operator, rvalue) helper method used to compute the value of a binary operator when evaluating an expression tree. Assume all operands in the expression tree are single digits.

Modify the ExpressionTree class from the chapter to handle the unary operator and unary mathematical function.

Implement the general Priority Queue ADT using the min-heap implemented as an array. Instead of having the number of priority levels as an argument of the constructor, specify the maximum capacity of the queue. In addition, define the isFull() method that returns True when the queue is full and False otherwise.

Implement the general Priority Queue ADT using the min-heap implemented as a vector. Instead of having the number of priority levels as an argument of the constructor, specify the maximum capacity of the queue.

Complete the implementation of the Morse Code Tree ADT.

Add the operation getCodeSeq(symbol) to the Morse Code Tree ADT, which accepts a single-character symbol and returns the corresponding Morse code sequence for that symbol. None should be returned if the supplied symbol is invalid.

Design and implement a program that uses the Morse Code Tree ADT to decode Morse code sequences extracted from standard input. Your program should detect and report any invalid code sequences.
Search Trees

Searching, which has been discussed throughout the text, is a very common operation and has been studied extensively. A linear search of an array or Python list is very slow, but that can be improved with a binary search. Even with the improved search time, arrays and Python lists have a disadvantage when it comes to the insertion and deletion of search keys. Remember, a binary search can only be performed on a sorted sequence. When keys are added to or removed from an array or Python list, the order must be maintained. This can be time consuming since keys have to be shifted to make room when adding a new key or to close the gap when deleting an existing key. The use of a linked list provides faster insertions and deletions without having to shift the existing keys. Unfortunately, the only type of search that can be performed on a linked list is a linear search, even if the list is sorted.

In this chapter we explore some of the many ways the tree structure can be used in performing efficient searches. The tree structure, which was introduced in the last chapter, can be used to organize dynamic data in a hierarchical fashion. Trees come in various shapes and sizes depending on their application and the relationship between the nodes. When used for searching, each node contains a search key as part of its data entry (sometimes called the payload), and the nodes are organized based on the relationship between the keys. There are many different types of search trees, some of which are simply variations of others, and some that can be used to search data stored externally. But the primary goal of all search trees is to provide an efficient search operation for quickly locating a specific item contained in the tree.

Search trees can be used to implement many different types of containers, some of which may only need to store the search keys within each node of the tree. More commonly, however, applications associate data or a payload with each search key and use the structure in the same fashion as a Map ADT would be used. The Map ADT was
introduced earlier, at which time we implemented it using a list structure. Exercises in several chapters offered the opportunity to provide new implementations using various data structures, and in a previous chapter
one of those chapters, we implemented a hash table version of the Map ADT that improved the search times, but its efficiency depends on the type of keys stored in the map, since the choice of hash function can greatly impact the search operation. Throughout this chapter, we explore several different search trees, each of which we will use to implement a new version of the Map ADT. To help avoid confusion between the various implementations, we use a different class name for each implementation.

The Binary Search Tree

A binary search tree (BST) is a binary tree in which each node contains a search key within its payload, and the tree is structured such that, for each interior node V, all keys less than the key in node V are stored in the left subtree of V, and all keys greater than the key in node V are stored in the right subtree of V.

Consider a binary search tree containing integer search keys. The root node contains some key value; all keys in the root's left subtree are less than that value, and all of the keys in the right subtree are greater than it. If you examine every node in the tree, you will notice the same key relationship applies at every node. Given the relationship between the nodes, an inorder traversal will visit the nodes in increasing search key order.

[Figure: A binary search tree storing integer search keys.]

Our definition of the binary search tree precludes the storage of duplicate keys in the tree, which makes the implementation of the various operations much easier. It is also appropriate for some applications, but the restriction can be relaxed to allow duplicate keys, if needed. In addition, for illustration purposes, we only show the key within each node of our search trees; you should assume the corresponding data value is also stored in the nodes. A partial implementation of the binary search tree version of the Map ADT is shown in the listing below. The remaining code will be added as each operation is
Listing: Partial implementation of the Map ADT using a binary search tree.

class BSTMap :
    # Creates an empty map instance.
    def __init__( self ):
        self._root = None
        self._size = 0

    # Returns the number of entries in the map.
    def __len__( self ):
        return self._size

    # Returns an iterator for traversing the keys in the map.
    def __iter__( self ):
        return _BSTMapIterator( self._root )

# Storage class for the binary search tree nodes of the map.
class _BSTMapNode :
    def __init__( self, key, value ):
        self.key = key
        self.value = value
        self.left = None
        self.right = None

discussed throughout the chapter. As with any binary tree, a reference to the root node must be maintained for a binary search tree. The constructor defines the _root field for this purpose and also defines the _size field to keep track of the number of entries in the map; the latter is needed by the __len__ method. The definition of the private storage class used to create the tree nodes is shown at the end of the listing.

Searching

Given a binary search tree, you will eventually want to search the tree to determine if it contains a given key or to locate a specific element. In the last chapter, we saw that there is a single path from the root to every other node in a tree. If the binary search tree contains the target key, then there will be a unique path from the root to the node containing that key. The only question is, how do we know which path to take?

Since the root node provides the single access point into any binary tree, our search must begin there. The target value is compared to the key in the root node. If the root contains the target value, our search is over with a successful result. But if the target is not in the root, we must decide which of two possible paths to take. From the definition of the binary search tree, we know the key in the root node is larger than the keys in its left subtree and smaller than the keys in its right subtree. Thus, if the target is less than the root's key, we move left, and we move right if it is greater. We repeat the comparison on the root node of the subtree and
take the appropriate path. This process is repeated until the target is located or we encounter a null child link.
[Figure: The structure of a binary search tree is based on the search keys. The target is compared to the key X in a node: if target < X, search the left subtree; if target > X, search the right subtree.]

Suppose we want to search for a key in the example binary search tree. We begin by comparing the target to the root's key; since the target is less than the root, we move left. The target is then compared to the next node's key; this time we move right, since the target is larger. The comparisons continue down the tree until we examine a node containing the target, and we report a successful search. The path taken to find the key in the example tree is illustrated by the dashed directed lines in part (a) of the figure.

What if the target is not in the tree? We would repeat the same process used in the successful search, as illustrated in part (b) of the figure. The difference is what happens when we reach the last node on the path and compare it to the target. If the target were in the binary search tree, it would have to be in the left subtree of that node, but you will notice the node does not have a left child. If we continue in that direction, we will "fall off" the tree. Thus, reaching a null child link during the search for a target key indicates an unsuccessful search.

The binary search tree operations can be implemented iteratively or with the use of recursion. We implement recursive functions for each operation and leave the iterative versions as exercises. The _bstSearch() helper method, provided in the listing below, recursively navigates a binary search tree to find the node containing the target key. The method has two base cases: the target is contained in the current node, or a null child link is encountered. When a base case is reached, the method returns either a reference to the node containing the key or None, back through all of the recursive calls. The latter indicates the key was not

[Figure: Searching a binary search tree: (a) a successful search and (b) an unsuccessful search.]
Listing: Searching for a target key in a binary search tree.

class BSTMap :
    # ...
    # Determines if the map contains the given key.
    def __contains__( self, key ):
        return self._bstSearch( self._root, key ) is not None

    # Returns the value associated with the key.
    def valueOf( self, key ):
        node = self._bstSearch( self._root, key )
        assert node is not None, "Invalid map key."
        return node.value

    # Helper method that recursively searches the tree for a target key.
    def _bstSearch( self, subtree, target ):
        if subtree is None :                  # base case
            return None
        elif target < subtree.key :           # target is left of the subtree root
            return self._bstSearch( subtree.left, target )
        elif target > subtree.key :           # target is right of the subtree root
            return self._bstSearch( subtree.right, target )
        else :                                # base case
            return subtree

found in the tree. The recursive call is made by passing the link to either the left or right subtree, depending on the relationship between the target and the key in the current node.

You may be wondering why we return a reference to the node and not just a Boolean value to indicate the success or failure of the search. This allows us to use the same helper method to implement both the __contains__ and valueOf() methods of the map class. Both call the recursive helper method to locate the node containing the target key. In doing so, the root node reference has to be passed to the helper to initiate the recursion. The value returned from _bstSearch() can be evaluated to determine if the key was found in the tree, and the appropriate action can be taken for the corresponding Map ADT operation. A binary search tree can be empty, as indicated by a null root reference, so we must ensure any operation performed on the tree also works when the tree is empty. In the _bstSearch() method, this is handled by the first base case on the first call to the method.

Min and Max Values

Another operation similar to search that can be performed on a binary search tree is finding the minimum or maximum key value. Given the definition of the binary search tree, we know the minimum value is either in the root or in a node to its left. But
how do we know if the root contains the smallest value and the minimum is not somewhere in its left subtree? We could compare the root to its left child, but if you think about it, there is no need to compare the individual keys. The reason has to do
with the relationship between the keys. If the root node has keys in its left subtree, then it cannot possibly contain the minimum key value, since all of the keys to the left of the root are smaller than the root. What if the root node does not have a left child? In this case, the root contains the smallest key value, since all of the keys to the right are larger than the root. If we apply the same logic to the left child of the root node (assuming it has a left child), and then to that node's left child, and so on, we will eventually find the minimum key value. That value will be found in a node that is either a leaf or an interior node with no left child. It can be located by starting at the root and following the left child links until a null link is encountered, as illustrated in the figure. The maximum key value can be found in a similar fashion.

[Figure: Finding the minimum key in a binary search tree: traverse left as far as possible until reaching a node with no left child.]

The listing below provides a recursive helper method for finding the node that contains the minimum key value in the binary search tree. The method requires the root of the tree or of a subtree as an argument. It returns either a reference to the node containing the smallest key value or None when the tree is empty.

Listing: Finding the element with the minimum key value in a binary search tree.

class BSTMap :
    # ...
    # Helper method for finding the node containing the minimum key.
    def _bstMinimum( self, subtree ):
        if subtree is None :
            return None
        elif subtree.left is None :
            return subtree
        else :
            return self._bstMinimum( subtree.left )
Insertions

When a binary search tree is constructed, the keys are added one at a time. As the keys are inserted, a new node is created for each key and linked into its proper position within the tree. Suppose we want to build a binary search tree from a list of keys by inserting them in the order they are listed. The accompanying figure illustrates the steps in building the tree, which you can follow as we describe the process.

[Figure: Building a binary search tree by inserting a list of keys one at a time, parts (a) through (f).]

We start by inserting the first value. A node is created and its data field set to that value; since the tree is initially empty, this first node becomes the root of the tree (part a). Next, we insert the second value. Since it is smaller than the root's key, it has to be inserted to the left of the root, which means it becomes the left child of the root (part b). The third value is then inserted in a node linked as the right child of the root, since it is larger than the root's key (part c). What happens when the fourth value is inserted? The root already has both its left and right children. When new keys are inserted, we do not modify the data fields of existing nodes or the links between existing nodes. Thus, there is only one location in which the key can be inserted into our current tree and still maintain the search tree property (part d). You may have noticed the pattern that is forming as new nodes are added to the binary tree: each new node is inserted as a leaf in its proper position such that the binary search tree property is maintained. We conclude this example by inserting the last two keys into the tree (parts e and f).

Working through this example by hand, it was easy to see where each new node had to be linked into the tree. But how do we insert the new keys in program code? Suppose we want to insert a new key into the tree we built by hand. What happens if we use the _bstSearch() method and search for that key? The search will lead us to the node below which the key belongs, and we then fall off the tree when attempting to follow its left
child.
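The observation that every new key is attached at the null link where a search falls off can be sketched as a recursive insertion. This is a stand-alone sketch with a hypothetical node class and example keys, not the BSTMap implementation itself:

```python
# Hypothetical stand-alone node class for illustration.
class _Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(subtree, key):
    # Descend exactly as in a search; a null link marks the leaf
    # position where the new key belongs.
    if subtree is None:
        return _Node(key)
    if key < subtree.key:
        subtree.left = bst_insert(subtree.left, key)
    elif key > subtree.key:
        subtree.right = bst_insert(subtree.right, key)
    # Equal keys fall through untouched: the tree stores no duplicates.
    return subtree

def inorder(node, out):
    # An inorder traversal visits the keys in increasing order.
    if node is not None:
        inorder(node.left, out)
        out.append(node.key)
        inorder(node.right, out)
```

Building a tree from example keys such as [60, 25, 100, 35, 17, 80] and traversing it inorder yields the keys in sorted order, confirming that the search tree property is maintained after every insertion.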