Sequences

Computers are really good at dealing with large amounts of information. They can repeat a task over and over again without getting bored. When they repeat a task they are generally doing the same thing to similar data or objects, so it is natural to want to organize those objects into some kind of structure so that our program can easily move from one object to the next. How objects are added to a sequence or collection, and how we move from one item to the next, has some impact on how we might want to organize the collection of data in a program.

In this chapter we look at different ways of organizing data into a sequence. We'll also examine how to use Python to make working with sequences convenient; operator overloading in Python lets us build sequences that we can manipulate with intuitive operations. Finally, we'll examine how the organization of a sequence affects the computational complexity of operations on it.

An abstract data type is a term that is used to describe a way of organizing data. Lists are one way of organizing a sequence of data, but in this chapter we'll discover other ways of organizing sequences as well. Ascending and descending sequences, linked lists, stacks, and queues are all abstract data types that we'll explore in this chapter.

Chapter Goals

In this chapter you will read about different ways of organizing data within a program. By the end of the chapter you should be able to answer these questions:

  - When presented with an algorithm that requires you to maintain a sequence of data, which organizational scheme fits best?
  - What are the trade-offs of selecting one type of sequence as opposed to another?
  - What are some interesting algorithms that use lists, linked lists, stacks, or queues?
  - What sorting algorithm is most commonly used when sorting a sequence of ordered values?
  - What search algorithms are possible in a sequence?
  - What is the complexity of many of the common operations on sequences, and how is that complexity affected by the underlying organization of the data?

You will also be presented with a few interesting programming problems that will help you learn to select and use appropriate data structures to solve some interesting problems.

Lists

In the first and second chapters we developed a sequence called PyList. The PyList class is really just a repackaging of the Python list class. The example sequence demonstrates some of the operators that are supported by Python. In this section we want to look more deeply into how lists are implemented. There are many operations supported on lists; the table below is a subset of them. Each of the operations in the table has an associated complexity, and the performance of an algorithm depends on the complexity of the operations used in implementing that algorithm. In the following sections we'll further develop our own list datatype, called PyList, using the built-in list only for setting and getting elements in a list.

The indexed get and indexed set operations can be observed to have O(1) complexity. This complexity is achieved because the memory of a computer is randomly accessible, which is why it is called Random Access Memory. In an earlier chapter we spent some time demonstrating that each location within a list is accessible in the same amount of time, regardless of list size and the location being retrieved. In the following sections we'll enhance the PyList datatype to support the operations given in this table.

Fig.: Complexity of list operations

Operation       Complexity      Usage                Method
List creation   O(n) or O(1)    x = list(y)          calls __init__(y)
Indexed get     O(1)            a = x[i]             x.__getitem__(i)
Indexed set     O(1)            x[i] = a             x.__setitem__(i,a)
Concatenate     O(n)            z = x + y            x.__add__(y)
Append          O(1)            x.append(a)          x.append(a)
Insert          O(n)            x.insert(i,e)        x.insert(i,e)
Delete          O(n)            del x[i]             x.__delitem__(i)
Equality        O(n)            x == y               x.__eq__(y)
Iterate         O(n)            for a in x:          x.__iter__()
Length          O(1)            len(x)               x.__len__()
Membership      O(n)            a in x               x.__contains__(a)
Sort            O(n log n)      x.sort()             x.sort()
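As a rough, hedged illustration of two rows in this table (not code from the book), the following sketch times an indexed get against a membership test on built-in Python lists of increasing size. The indexed get stays roughly constant while the membership test grows with the list length.

import timeit

# Indexed get is O(1); membership ("in") is O(n) on built-in lists.
for n in [1000, 10000, 100000]:
    lst = list(range(n))
    get_time = timeit.timeit(lambda: lst[n // 2], number=10000)
    in_time = timeit.timeit(lambda: (n + 1) in lst, number=100)
    print(n, "indexed get:", get_time, "membership:", in_time)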
The PyList Datatype

In the first couple of chapters we began developing our PyList data structure. To support the O(1) complexity of the append operation, a PyList contains empty locations that can be filled when append is called, as first described earlier. We'll keep track of the number of locations being used and the actual size of the internal list in our PyList objects. So, we'll need three pieces of information: the list itself, called items; the size of the internal list, called size; and the number of locations in the internal list that are currently being used, called numItems. While we wouldn't have to keep track of the size of the list, because we could call the len function, we'll store the size in the object to avoid the overhead of calling len in multiple places in the code.

All the used locations in the internal list will occur at the beginning of the list. In other words, there will be no holes in the middle of the list that we would have to worry about. We'll call this assumption an invariant on our data structure. An invariant is something that is true before and after any method call on the data structure. The invariant for this list is that the internal list will have its first numItems locations filled, with no holes. The constructor below can also be passed a list for its initial contents.

Storing all the items at the beginning of the list, without holes, also means that we can randomly access elements of the list in O(1) time. We don't have to search for the proper location of an element; indexing into the PyList will simply index into the internal items list to find the proper element, as seen in the next sections.

The PyList Constructor

class PyList:
    def __init__(self, contents=[], size=10):
        # The contents argument allows the programmer to construct a list
        # with these initial contents. The size argument lets the
        # programmer pick an initial size for the internal list, which is
        # useful when a specific number of items will be added right away.
        self.items = [None] * size
        self.numItems = 0
        self.size = size

        for e in contents:
            self.append(e)

The constructor builds a PyList object by creating a list of None values. None is the special value in Python for references that point at nothing. The figure below shows a sample list after it was created and three items were appended to it. The special None value is indicated in the figure by the three horizontal lines where the empty slots in the list point. The initial size of the internal items list comes from the default value of the size parameter, but a user could pass a larger size initially if they wanted to. This is only the initial size; the list will still grow when it needs to. The contents parameter lets the programmer pass in a list or sequence to put in the list initially.
Fig.: A sample PyList object

For instance, the object in the figure could have been created by writing the following:

samplelist = PyList(["a", "b", "c"])

Each element of the sequence is added as a separate list item. The complexity of creating a PyList object is O(1) if no value is passed to the constructor, and O(n) if a sequence is passed to the constructor, where n is the number of elements in the sequence.

PyList Get and Set

def __getitem__(self, index):
    if index >= 0 and index < self.numItems:
        return self.items[index]

    raise IndexError("PyList index out of range")

def __setitem__(self, index, val):
    if index >= 0 and index < self.numItems:
        self.items[index] = val
        return

    raise IndexError("PyList assignment index out of range")

Our PyList class is a wrapper for the built-in list class. So, to implement the get item and set item operations on PyList, we'll use the get and set operations on the built-in list class. The code is given here. The complexity of both operations is O(1). In both cases, we want to make sure the index is in the range of acceptable indices. If it is not, we'll raise an IndexError exception, just as the built-in list class does.
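As a short usage sketch (assuming the PyList class and the methods shown in this chapter are defined in the same module), indexing behaves like the built-in list, including the exception on a bad index:

lst = PyList(["a", "b", "c"])
print(lst[0])        # indexed get, O(1)
lst[1] = "z"         # indexed set, O(1)

try:
    lst[10]          # index outside 0..numItems-1
except IndexError as e:
    print(e)         # PyList index out of range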
PyList Concatenate

def __add__(self, other):
    result = PyList(size=self.numItems + other.numItems)

    for i in range(self.numItems):
        result.append(self.items[i])

    for i in range(other.numItems):
        result.append(other.items[i])

    return result

To concatenate two lists we must build a new list that contains the contents of both. This is an accessor method because it does not mutate either list; instead, it builds a new list. We can do this operation in O(n) time, where n is the sum of the lengths of the two lists. In the code above, the size is set to the needed size for the result of concatenating the two lists, so the complexity of the __add__ method is O(n), where n is the combined length of the two lists. The initial size of the result does not have to be set, because append has O(1) complexity, as we saw earlier. However, since we know the size of the resulting list, setting the initial size should speed up the concatenation operation slightly.
PyList Append

# This method is hidden since it starts with two underscores.
# It is only available for the class itself to use.
def __makeroom(self):
    # Increase list size by 1/4 to make more room.
    # Add 1 in case self.size is 0.
    newlen = (self.size // 4) + self.size + 1
    newlst = [None] * newlen
    for i in range(self.numItems):
        newlst[i] = self.items[i]

    self.items = newlst
    self.size = newlen

def append(self, item):
    if self.numItems == self.size:
        self.__makeroom()

    self.items[self.numItems] = item
    self.numItems += 1   # same as writing self.numItems = self.numItems + 1

Earlier we learned that the append method has O(1) amortized complexity. When appending, we just add one more item to the end of the self.items list if there is room. In the description of the constructor we decided that PyList objects would contain a list that has room for more elements, and when appending we can make use of that extra space. Once in a while (i.e., after appending some number of items), the internal self.items list fills up. At that time we must increase the size of the items list, by an amount proportional to its current length, to make room for the new item we are appending.

As we learned in the earlier discussion of amortized complexity, to make the append operation run in O(1) amortized time we can't just add one more location each time we need more space. It turns out that adding 25% more space each time is enough to guarantee O(1) amortized complexity. The exact choice of 25% is not significant; adding a smaller fixed percentage more space each time would still give O(1) amortized complexity. At the other extreme, we could double the internal list size each time we needed more room, as we did earlier in the text. However, 25% seems like a reasonable amount to expand the list without gobbling up too much memory in the computer. We just need a few more cyber dollars stored up for each append operation to pay for expanding the list when we run out of room. The code above implements the append operation with an amortized complexity of O(1). Integer division by 4 is very quick in a computer because it can be implemented by shifting the bits of the integer to the right, so computing the new length, when needed, is relatively quick.

The Python interpreter implements append in a similar way. The Python interpreter is implemented in C, so the interpreter uses C code. Python also chooses to increase the list size by other amounts; in the interpreter, list sizes grow through a sequence like 0, 4, 8, 16, 25, 35, 46, 58, and so on. The additional space to add to the internal list is calculated from the newly needed size of the list, and the increment grows as the list grows. That leads to an amortized complexity of O(1) for the append operation in the Python interpreter.

PyList Insert

def insert(self, i, e):
    if self.numItems == self.size:
        self.__makeroom()

    if i < self.numItems:
        for j in range(self.numItems - 1, i - 1, -1):
            self.items[j + 1] = self.items[j]
        self.items[i] = e
        self.numItems += 1
    else:
        self.append(e)

To insert into this sequential list we must make room for the new element. Given the way the list is organized, there is no choice but to copy each element after the point where we want to insert the new value to the next location in the list. This works best if we start from the right end of the list and work our way back to the point where the new value will be inserted. The complexity of this operation is O(n), where n is the number of elements in the list after the insertion point. The index i is the location where the new value is to be inserted. If the index provided is larger than the size of the list, the new item, e, is appended to the end of the list.
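A small, hedged sketch (again assuming the PyList class above, including its private __makeroom method) makes the growth visible by printing the internal size each time it changes while items are appended:

lst = PyList()
lastsize = lst.size
print("initial internal size:", lastsize)

for i in range(100):
    lst.append(i)
    if lst.size != lastsize:
        # With 25% growth the sizes form a roughly geometric series,
        # which is what gives append its O(1) amortized cost.
        lastsize = lst.size
        print("grew to", lastsize, "after appending", lst.numItems, "items")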
PyList Delete

def __delitem__(self, index):
    for i in range(index, self.numItems - 1):
        self.items[i] = self.items[i + 1]
    self.numItems -= 1   # same as writing self.numItems = self.numItems - 1

When deleting an item at a specific index in the list, we must move everything after the item down to preserve our invariant that there are no holes in the internal list. This results in an O(n) implementation in the average and worst case, where n is the number of items after the index in the list. The code above accomplishes deletion. In the Python interpreter, to conserve space, if a list reaches a point after a deletion where less than half of the locations within the internal list are being used, then the size of the available space is reduced by one half.

PyList Equality Test

def __eq__(self, other):
    if type(other) != type(self):
        return False

    if self.numItems != other.numItems:
        return False

    for i in range(self.numItems):
        if self.items[i] != other.items[i]:
            return False

    return True

Checking for equality of two lists requires the two lists to be of the same type. If they are of different types, then we'll say they are not equal. In addition, the two lists must have the same length; if they are not the same length, they cannot be equal. If these two preconditions are met, then the lists are equal if all the elements in the two lists are equal. The code above implements equality testing of two PyList objects. Equality testing is an O(n) operation.

PyList Iteration

def __iter__(self):
    for i in range(self.numItems):
        yield self.items[i]

The ability to iterate over a sequence is certainly a requirement. Sequences hold a collection of similar data items and we frequently want to do something with each item in a sequence. Of course, the complexity of iterating over any sequence is O(n), where n is the size of the sequence. The code above accomplishes this for the PyList sequence. The yield call in Python suspends the execution of the __iter__ method and returns the yielded item to the iterator.
PyList Length

def __len__(self):
    return self.numItems

If the number of items were not kept track of within the PyList object, then counting the number of items in the list would be an O(n) operation. Instead, if we keep track of the number of items in the list as items are appended or deleted, then we need only return the value of numItems from the object, resulting in O(1) complexity.

PyList Membership

def __contains__(self, item):
    for i in range(self.numItems):
        if self.items[i] == item:
            return True

    return False

Testing for membership in a list means checking to see if an item is one of the items in the list. The only way to do this is to examine each item in sequence in the list. If the item is found then True is returned; otherwise False is returned. This results in O(n) complexity. This idea of searching for an item in a sequence is so common that computer scientists have given it a name: it is called linear search, named for its O(n) complexity.

PyList String Conversion

def __str__(self):
    s = "["
    for i in range(self.numItems):
        s = s + repr(self.items[i])
        if i < self.numItems - 1:
            s = s + ", "
    s = s + "]"
    return s

It is convenient to be able to convert a list to a string so it can be printed. Python includes two methods that can be used for converting to a string. The first you are probably already familiar with: the str function calls the __str__ method on an object to create a string representation of itself suitable for printing. The code above implements the __str__ method for the PyList class.
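The __contains__ method above is an instance of linear search. As a standalone, hedged illustration (not code from the book), the same idea can be written as a function over any Python sequence that returns the index of the match, or -1 when the item is not present:

def linear_search(seq, item):
    # Examine each element in turn; O(n) in the length of seq.
    for i in range(len(seq)):
        if seq[i] == item:
            return i
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 15))   # 2
print(linear_search([4, 8, 15, 16, 23, 42], 99))   # -1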
PyList String Representation

def __repr__(self):
    s = "PyList(["
    for i in range(self.numItems):
        s = s + repr(self.items[i])
        if i < self.numItems - 1:
            s = s + ", "
    s = s + "])"
    return s

The other method for converting an object to a string has a different purpose. Python includes a function called eval that will take a string containing an expression and evaluate the expression in the string. For instance, eval("6 + 5") results in 11, and eval("[1, 2, 3]") results in the list [1, 2, 3]. The repr function in Python calls the __repr__ method on a class. This method, if defined, should return a string representation of an object that is suitable to be given to the eval function. In the case of the PyList class, the repr form of the string would be something like "PyList([1, 2, 3])" for a PyList sequence containing those items. The code above accomplishes this. It is nearly identical to the __str__ code, except that "PyList(" prefixes the sequence and ")" closes it. Notice that in both the __str__ and __repr__ code, repr is called on the elements of the list. Calling repr is necessary because otherwise a list containing strings like ["hi", "there"] would be converted to [hi, there] in its str or repr representation.

Cloning Objects

It is interesting to note that we now have a method of making a copy of an object. If x is a PyList object, then eval(repr(x)) is a copy, or clone, of this object. Since all the items in the PyList object are also cloned by evaluating the representation of the object, cloning an object like this is called a deep clone or deep copy of the object.

It is also possible to make what is called a shallow copy of an object. A shallow copy occurs when the object is copied, but items in the object are shared with the clone. If we wish to create a shallow copy of a PyList object called x, we would write the following:

x = PyList([1, 2, 3])
y = PyList(x)

Here, y is a shallow copy of x because both x and y share the items 1, 2, and 3. In most cases whether some items are shared or not probably doesn't matter. In this case it doesn't matter if items are shared, because x and y contain integers and integers are immutable. However, if the shared items are mutable, then you may care about shallow or deep clones of objects. When working with a shallow clone of an object that contains mutable items, the programmer must be aware that the items in the collection might change values without any call to a method on the object. This won't happen to a deep clone of an object.
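A small, hedged sketch (assuming the PyList class above, with __repr__ defined) contrasts the two kinds of copies when the items are mutable:

inner = [1, 2]
x = PyList([inner, [3, 4]])

shallow = PyList(x)          # items are shared with x
deep = eval(repr(x))         # items are rebuilt from their repr

inner.append(99)             # mutate one of the shared items

print(shallow[0])            # [1, 2, 99]  the shallow copy sees the change
print(deep[0])               # [1, 2]      the deep copy does not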
Which is better, shallow cloning or deep cloning? It depends on the application being written. One is not necessarily better than the other. There is an additional performance and memory cost for making deep clones, but they are safer. The type of application being developed will probably determine which type of cloning is chosen, should clones of objects be useful in the application.

Item Ordering

Now let us turn our attention to implementing the sort method on our PyList data type. To sort a sequence of items, the items in the sequence must be ordered in some way. For instance, consider a class that is used to represent Cartesian coordinates on a plane. We'll call the class Point and it will contain an (x, y) pair. We'll order the Point objects by their directed distance from the x-axis; in other words, they will be ordered by their y-coordinates. Here is our Point class. For reasons that will be obvious soon, our Point class will inherit from RawTurtle.

The Point Class

class Point(turtle.RawTurtle):
    def __init__(self, canvas, x, y):
        super().__init__(canvas)
        canvas.register_shape("dot", ((3, 0), (2, 2), (0, 3), (-2, 2),
                                      (-3, 0), (-2, -2), (0, -3), (2, -2)))
        self.shape("dot")
        self.speed(0)
        self.penup()
        self.goto(x, y)

    def __str__(self):
        return "(" + str(self.xcor()) + "," + str(self.ycor()) + ")"

    def __lt__(self, other):
        return self.ycor() < other.ycor()

Objects of the Point class have an ordering because we have defined the less than operator (i.e., <) by writing an __lt__ method in the class. Once defined, this less than operator orders all the elements of the class. Most of the built-in classes or types in Python already have an implementation for the __lt__ method, so we don't have to define this method for types like int, float, and str. Strings are compared lexicographically in Python, as they are in pretty much every language that supports string comparison. Lexicographic ordering means that strings are compared from left to right until one character is found to be different than the character at the corresponding position in the other string. In other words, sorting a sequence of strings means they will end up alphabetized, like you would see in a dictionary.

Under some conditions lists are orderable, too. For lists to have an ordering, the elements at corresponding indices within the lists must be orderable. Consider these sample comparisons:
>>> lst = [1, 2, 3]
>>> lst2 = list("abc")
>>> lst2
['a', 'b', 'c']
>>> lst < lst2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < str()
>>> lst2 = [2, 3, 4]
>>> lst < lst2
True
>>> lst3 = [1, 2, 4]
>>> lst < lst3
True
>>> lst4 = [1, 3, 'a']
>>> lst < lst4
True

Comparing lst and the list of strings did not work because the items in the two lists are not orderable: you can't compare an integer and a string. However, two lists with similar elements can be compared. List comparison is performed lexicographically, like strings. Note the last example: the two lists [1, 2, 3] and [1, 3, 'a'] can be compared because the lexicographic ordering does not require that 3 and 'a' be compared.

There are some types in Python which have no natural ordering. For instance, there is no natural ordering for dictionaries in Python, so you cannot sort a list of dictionaries. Of course, it's unclear why you would want to, as well; since there is no natural ordering, there doesn't seem to be a reason to sort them. However, if there were a way to create an ordering on a set of dictionaries, you could define it yourself by writing your own class that inherits from the dict class and defines its own __lt__ method, just as we did for the Point class.
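As a turtle-free, hedged illustration of the same idea (the Interval class here is made up and not from the book), defining __lt__ alone is enough for Python's sorting machinery to order instances of a class:

class Interval:
    # A hypothetical class ordered by its width, just to show that
    # defining __lt__ is all that list.sort() and sorted() need.
    def __init__(self, low, high):
        self.low = low
        self.high = high

    def __lt__(self, other):
        return (self.high - self.low) < (other.high - other.low)

    def __repr__(self):
        return "Interval(%d,%d)" % (self.low, self.high)

intervals = [Interval(0, 10), Interval(3, 5), Interval(2, 9)]
intervals.sort()
print(intervals)   # narrowest first: Interval(3,5), Interval(2,9), Interval(0,10)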
Once we have an ordering of elements in a list, we can sort the elements according to that ordering. Lists have a sort method that will sort the items of a list according to their ordering. The code below illustrates how the items of a list can be sorted.

Calling the Sort Method

def main():
    t = turtle.Turtle()
    t.ht()
    screen = t.getscreen()

    lst = []

    for i in range(3):
        for j in range(6):
            pair = Point(screen, i, j)
            lst.append(pair)

    lst.sort()

    for p in lst:
        print(p)

When this code is called, it prints the points in order of their directed distance from the x-axis; that is, the points appear sorted by their y-coordinates. But just how does this sort method work, and what is its cost? In other words, what kind of sorting algorithm does Python use, and what is its computational complexity? We explore these questions in the next sections.

Selection Sort

In the last section we learned that we can call a method called sort on a list to sort the items in the list in ascending order. Ascending order is determined by the less than operator as defined on the items. So how does the sorting algorithm work? One of the early sorting algorithms was called selection sort, and it serves as a good starting place to understand sorting algorithms. However, this is not the sorting algorithm used by Python; we'll find out why soon.

The selection sort algorithm is pretty simple to describe. The algorithm begins by finding the smallest value to place in the first position in the list. It does this by doing a linear search through the list and, along the way, remembering the index of the smallest item it finds. The algorithm uses the guess and check pattern by first guessing that the smallest item is the first item in the list and then checking the subsequent items to see if it made an incorrect guess. This part of the algorithm is the selection part. The select function, shown below, does this selection.
Selection Sort's Select Function

def select(seq, start):
    minIndex = start

    for j in range(start + 1, len(seq)):
        if seq[minIndex] > seq[j]:
            minIndex = j

    return minIndex

The start argument tells the select function where to start looking for the smallest item. It searches from start to the end of the sequence for the smallest item.

The selection sort algorithm works by finding the smallest item using the select function and placing that item into the first position of the sequence. The value already in the first position must be put someplace else, so it is simply swapped with the location of the value that is being moved. The algorithm proceeds by next looking for the second smallest value in the sequence. Since the smallest value is now in the first location in the sequence, the selection sort algorithm starts looking from the second position in the list for the smallest value. When the smallest value is found (which is really the second smallest value for the list), the value in the second position and this value are swapped. Then the selection sort algorithm looks for the smallest item starting at the third location in the sequence. This pattern repeats until all the items in the sequence have been sorted. The selSort function does the actual sorting for the algorithm.

The Selection Sort Code

def selSort(seq):
    for i in range(len(seq) - 1):
        minIndex = select(seq, i)
        tmp = seq[i]
        seq[i] = seq[minIndex]
        seq[minIndex] = tmp

We can visualize the selection sort algorithm by running an animation of it sorting. The animation is pictured in the snapshot below, taken after it has sorted more than half the values in a sequence. The green dots represent items that are now in their proper location in the sequence. The height of a dot from the x-axis (i.e., its y-value) is its value, and its horizontal position is its position in the list. In this animation all the values in the sequence are being sorted into ascending order. The upper-right corner represents the values that have not yet been sorted. The algorithm starts looking for the next smallest value just to the right of the green diagonal line. It finds the minimum value (i.e., closest to the x-axis) by going through all the remaining unsorted dots. Once it finds the smallest, shortest dot, it swaps it with the left-most unsorted dot to put it into its sorted position in the list. The complete code for this animation is given later in the text and can be downloaded from the website accompanying this text. Try it out!
Fig.: Selection sort snapshot

Consider sorting a small list of values, as depicted in the next figure, which shows the list after each call of the select function from the selSort function. After each call, the next element of the list is placed in its final location. Sorting the list leads to the intermediate steps shown. Each time the select function is called, the new smallest element is swapped with the first location in the rest of the list, moving the next smallest element into its location within the sorted list. To find each new smallest element we call select, which must run through the rest of the list looking for the minimum element. After each pass, the list is one item closer to being completely sorted.

It turns out that this early attempt at writing a sorting algorithm is not that great. The complexity of this algorithm is O(n²), because each time through the for loop in the selSort function we call the select function, which has its own for loop. The outer for loop is executed n - 1 times, and each time it is executed, the inner for loop must go through one less item looking for the smallest value that is left to be sorted. So, the first time we execute the body of the inner for loop n - 1 times, then n - 2 times the second time select is called, then n - 3 times, and so on. We have seen this pattern before: the sum of the first n integers has an n² term in its formula. Therefore, selection sort is O(n²). This means that as we try to sort larger lists, the algorithm will really start to slow down. You should never use this algorithm for sorting, even on small lists; we can do much better.
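This sum can be checked directly. The sketch below (not from the book) counts the comparisons that selSort would perform on a list of n items, using the fact that select(seq, start) makes n - 1 - start comparisons, and compares the total with the closed form n(n-1)/2:

# select is called with start = 0, 1, ..., n-2, and each call performs
# (n - 1 - start) comparisons, so the total is (n-1) + (n-2) + ... + 1.
n = 1000
total = sum(n - 1 - start for start in range(n - 1))
print(total)                 # 499500
print(n * (n - 1) // 2)      # 499500, the same value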
Fig.: Selection sort of a list

Merge Sort

Divide and conquer, as the ancient Romans might have said, is an effective battle strategy. It turns out this concept is very important when writing algorithms. The merge sort algorithm is one instance of a divide and conquer algorithm. Divide and conquer algorithms are usually written recursively, but don't necessarily have to be. The basic premise is that we divide a problem into two pieces. Each of the two pieces is easier to solve than trying to tackle the whole problem at once, because the two pieces are each smaller.
The Merge Sort Code

def merge(seq, start, mid, stop):
    lst = []
    i = start
    j = mid

    # Merge the two lists while each has more elements
    while i < mid and j < stop:
        if seq[i] < seq[j]:
            lst.append(seq[i])
            i += 1
        else:
            lst.append(seq[j])
            j += 1

    # Copy in the rest of the start to mid sequence
    while i < mid:
        lst.append(seq[i])
        i += 1

    # Many merge sort implementations copy the rest of the sequence
    # from j to stop at this point. This is not necessary since in
    # the next part of the code the same part of the sequence would
    # be copied right back to the same place.
    # while j < stop:
    #     lst.append(seq[j])
    #     j += 1

    # Copy the elements back to the original sequence
    for i in range(len(lst)):
        seq[start + i] = lst[i]

def mergeSortRecursively(seq, start, stop):
    # We must use >= here only when the sequence we are sorting
    # is empty. Otherwise start == stop - 1 in the base case.
    if start >= stop - 1:
        return

    mid = (start + stop) // 2

    mergeSortRecursively(seq, start, mid)
    mergeSortRecursively(seq, mid, stop)
    merge(seq, start, mid, stop)

def mergeSort(seq):
    mergeSortRecursively(seq, 0, len(seq))

The merge sort algorithm takes this divide and conquer strategy to the extreme. It divides the list, then divides it again and again, until we are left with lists of size 1. A sublist of length 1 is already sorted. Two sorted sublists can be merged into one sorted list in O(n) time, and a list can be divided into lists of size 1 by repeatedly splitting in O(log n) levels. The split lists are then merged together level by level in O(n) time per level, resulting in a complexity of O(n log n) for merge sort. The merge sort code appears above. The merge function takes care of merging two adjacent sublists: the first sublist runs from start to mid - 1, and the second sublist runs from mid to stop - 1.
The elements of the two sorted sublists are copied, in O(n) time, to a new list. Then the sorted list is copied back into the original sequence, again in O(n) time. In the merge function, the first while loop takes care of merging the two sublists until one or the other sublist is empty. The second and third while loops take care of finishing up whichever sublist had the left-over elements. Only one sublist will have left-over elements, so only one condition on the second and third while loops will ever be true.

Notice that the third while loop in the code is commented out. Copying the elements from j to stop in the third while loop is not necessary, since they would only be copied right back to the same place when the contents of lst are copied back to the seq sequence. This optimization speeds up merge sort a little bit. One other optimization is to pre-allocate one more list in which to copy values, and then alternate between merging in the original and the pre-allocated copy. In this way the overhead of creating and appending to lists is avoided. Coding either of these two optimizations does not improve the computational complexity of the algorithm, but can improve its overall performance slightly. One criticism of the merge sort algorithm is that the elements of the two sublists cannot be merged without copying to a new list and then back again. Other sorting methods, like quicksort, have the same O(n log n) complexity as merge sort and do not require an extra list.

The mergeSort function calls a helper function to get everything started. It calls the mergeSortRecursively function with the sequence and the start and stop values, which indicate that the entire list should be sorted. The start and stop parameters are used when splitting the list. The list is not physically split when calling mergeSortRecursively. Instead, the start and stop values are used to compute the mid point between them, and then the two halves are recursively sorted. Since each sublist is smaller, we can rest assured that the recursive call does its job and sorts the two sublists. Then we are left to merge the two sorted sublists by calling the merge function. The base case for the recursive function is when the sublist size is 1. At that point we have a sorted sublist.

In the snapshot figure below, the entire left half of the list has been sorted. In addition, three sublists in the right half have been sorted, and the third and fourth sublists are in the process of being merged together. The green dots represent the portions of the sequence that are sorted, and the black dots indicate the unsorted portion of the original sequence. The red lines at the bottom reflect the recursive calls that are currently on the run-time stack. The length of a red line shows the portion of the sequence that is being sorted by its corresponding recursive call. The blue line underscores the two sublists currently being merged together.

The argument that merge sort runs in O(n log n) time needs just a bit of explanation. The repetitive splitting of the list results in O(log n) splits. In the end we have lists of size 1. If we were to count every merge that occurs, there would be n/2 merges at the bottom, followed by n/4 merges at the next level, and so on, leading to this sum:

the number of merges = n/2 + n/4 + ... + 2 + 1, which is approximately n

This analysis would seem to suggest that the complexity of the merge sort algorithm is O(n²), since there are roughly n merges, each of which is O(n) itself. However, the algorithm is not O(n²).
Fig.: Merge sort snapshot
Fig.: Merge sort merges

To see why, consider sorting an eight-element list, as depicted in the merges figure. After repeatedly splitting the lists we get down to lists of size one, as shown in the first row of the figure. The individual items are merged two at a time to form sorted lists of two, shown in the second row of items. While there are four merges that take place at the lowest level of the merge sort, the four merges are each for lists of two elements (i.e., one from each list), and together they form a list of eight items. So we can group all these four merges together to find that all the merges at that deepest level take O(n) time. Not each, but all of the merges at the deepest level, when combined, are O(n).
In the second row of the figure, two merges are done for the lists of length two. However, each merge produces one half of the list: the purple half is one merge, and the green half includes the items that are in the second merge. Together, these two merges include all eight items. So, at the second deepest level, again at most n items are merged in O(n) time. Finally, the last merge is of all the items in yellow from the two sorted sublists. This merge also takes O(n) time, since it merges all the items in the list, resulting in the sorted list seen in the last row of the figure.

So, while merging is an O(n) operation, the merges take place on sublists of the n items in the list, which means that we can count the merging at each level as O(n) and don't have to count each individual merge operation as O(n). Since there are log n levels to the merge sort algorithm, and each level takes O(n) to merge, the algorithm is O(n log n).
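As a quick usage sketch (assuming the mergeSort function from the listing above is in scope; the values are made up), the whole list is sorted in place:

data = [20, 30, 10, 50, 70, 60, 40, 80]
mergeSort(data)
print(data)   # [10, 20, 30, 40, 50, 60, 70, 80]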
Quicksort

In a sense, the quicksort algorithm is the exact opposite of the merge sort algorithm. It is also the most widely used and one of the most efficient sorting algorithms known. Quicksort is again a divide and conquer algorithm and is usually written recursively. But, where merge sort splits the list until we reach a size of 1 and then merges sorted lists, the quicksort algorithm does the merging first and then splits the list. We can't merge an unsorted list; instead, we partition it into two lists. What we want is to prepare the list so quicksort can be called recursively. But, if we are to be successful, the two sublists must somehow be easier to sort than the original. This preparation for splitting is called partitioning.

To partition a list we pick a pivot element. Think of quicksort partitioning a list into all the items bigger than the pivot and all the elements smaller than the pivot. We put all the bigger items to the right of the pivot and all the littler items to the left of the pivot. Once we have done this, two things are true:

  1. The pivot is in its final location in the list.
  2. The two sublists are now smaller and can therefore be quicksorted.

Once the two sublists are sorted, the entire list will be in sorted order, because the left sublist holds the values ascending up to the pivot, the pivot is in the right spot, and the values greater than the pivot are all in their correct locations, too.

Quicksort is a divide and conquer algorithm. To get the best performance, we would like to divide the sequence right down the middle into two equal sized sequences. This would mean that we would have to pick the value exactly in the middle to be the pivot, since the pivot is used to divide the two lists. Unfortunately this isn't possible if we are to do it efficiently. For quicksort to have O(n log n) complexity, like merge sort, it must partition the list in O(n) time. We must choose a pivot quickly and we must choose it well. If we don't choose a pivot close to the middle, we will not get the O(n log n) complexity we hope for. It turns out that choosing a random pivot from the list is good enough. One way to guarantee a random choice of the pivot is to have the quicksort algorithm start by randomizing the sequence. The quicksort algorithm is given below.

The Quicksort Code

import random

def partition(seq, start, stop):
    # The pivotIndex comes from the start location in the list.
    pivotIndex = start
    pivot = seq[pivotIndex]
    i = start + 1
    j = stop - 1

    while i <= j:
        # while i <= j and seq[i] <= pivot:
        while i <= j and not pivot < seq[i]:
            i += 1
        # while i <= j and seq[j] > pivot:
        while i <= j and pivot < seq[j]:
            j -= 1

        if i < j:
            tmp = seq[i]
            seq[i] = seq[j]
            seq[j] = tmp
            i += 1
            j -= 1

    seq[pivotIndex] = seq[j]
    seq[j] = pivot

    return j

def quicksortRecursively(seq, start, stop):
    if start >= stop - 1:
        return

    # pivotIndex ends up in between the two halves,
    # where the pivot value is in its final location.
    pivotIndex = partition(seq, start, stop)

    quicksortRecursively(seq, start, pivotIndex)
    quicksortRecursively(seq, pivotIndex + 1, stop)

def quicksort(seq):
    # Randomize the sequence first.
    for i in range(len(seq)):
        j = random.randint(0, len(seq) - 1)
        tmp = seq[i]
        seq[i] = seq[j]
        seq[j] = tmp

    quicksortRecursively(seq, 0, len(seq))

Once the list is randomized, picking a random pivot becomes easier. The partition function picks the first item in the sequence as the pivot. The partitioning starts from both ends and works its way to the middle.
Essentially, every time a value bigger than the pivot is found on the left side and a value smaller than the pivot is found on the right side, the two values are swapped. Once we reach the middle from both sides, the pivot is swapped into place. Once the sequence is partitioned, the quicksort algorithm is called recursively on the two halves. Variables i and j are the indices of the left and right values, respectively, during the partitioning process.

If you look at the partition code, the two commented while loop conditions are probably easier to understand than the uncommented code. However, the uncommented code only uses the less than operator. Quicksort is the sorting algorithm used by the sort method on lists, and it only requires that the less than operator be defined between items in the sequence. By writing the two while loops as we have, the only required ordering is defined by the less than operator, just as Python requires.

Fig.: Quicksort snapshot

The snapshot in the figure shows the effect of partitioning on a sequence. In this figure, the sequence has been partitioned twice already. The first partitioning picked a pivot that was almost dead center; however, the second partitioning picked a pivot that was not so good. The red line indicates the part of the sequence that is currently being partitioned. See how the left-most value in that sub-sequence is the pivot value. The two green dots are the pivot values that are already in their correct locations. All values above the pivot will end up in the partition to the right of the pivot, and all values to the left of the pivot are less than the pivot. This is the nature of quicksort. Again, by amortized complexity we can find that the quicksort algorithm runs in O(n log n) time.
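As a hedged illustration of that postcondition (assuming the partition function from the listing above is in scope; the values are made up), a single call leaves the pivot, the old first element, at the returned index, with nothing larger to its left and nothing smaller or equal to its right:

data = [54, 26, 93, 17, 77, 31, 44, 55, 20]
pivot = data[0]
j = partition(data, 0, len(data))

print(data)                     # the pivot 54 now sits at index j
print("pivot", pivot, "is at index", j)
print(all(x <= pivot for x in data[:j]))      # True
print(all(x > pivot for x in data[j + 1:]))   # True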
Fig.: Quicksorting a list

Consider sorting a list with quicksort. The figure depicts such a list after each call to the partition function. The pivot in each call is identified by the orange colored item. The partition function partitions the list extending to the right of its pivot. After partitioning, the pivot is moved to its final location by swapping it with the last item that is less than the pivot. Then partitioning is performed on the resulting two sublists.

The randomization done in the first step of quicksort helps to pick a more random pivot value. This has real consequences in the quicksort algorithm, especially when the sequence passed to quicksort has a chance of being sorted already. If the sequence given to quicksort is sorted, or almost sorted, in either ascending or descending order, then quicksort will not achieve O(n log n) complexity. In fact, the worst case complexity of the algorithm is O(n²). If the pivot chosen is always the next least or greatest value, then the partitioning will not divide the problem into two smaller sublists, as it did for the well-chosen pivots in the figure. The algorithm will simply put one value in place and end up with one big partition of all the rest of the values. If this happened each time a pivot was chosen, it would lead to O(n²) complexity. Randomizing the list prior to quicksorting it helps to ensure that this does not happen.

Merge sort is not affected by the choice of a pivot, since no choice is necessary. Therefore, merge sort does not have a worst case or best case to consider; it will always achieve O(n log n) complexity. Even so, quicksort performs better in practice than merge sort, because the quicksort algorithm does not need to copy to a new list and then back again. The quicksort algorithm is the de facto standard of sorting algorithms.

Two-Dimensional Sequences

Sometimes programmers need to represent two-dimensional sequences in a program. This can be done quite easily by creating a list of lists. The main list can represent either the columns or the rows of the matrix.
Fig.: A two-dimensional matrix

If the main list contains references to the rows, then the matrix is said to be in row major form. If the main list contains references to the columns of the matrix, then it is in column major form. Most of the time, matrices are constructed in row major form. In the figure, a matrix is drawn with row major orientation, but the matrix could represent either row major or column major form; the actual organization of the data is the same either way. The items reference points at the main list, and the items list contains references to each of the rows of the matrix.
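As a small, hedged illustration (not code from the book), a row major grid can be built and indexed as a list of row lists:

# Build a 3x4 matrix in row major form: the outer list holds the rows.
rows, cols = 3, 4
matrix = []
for r in range(rows):
    rowlst = []
    for c in range(cols):
        rowlst.append(0)
    matrix.append(rowlst)

matrix[1][2] = 7                      # row 1, column 2
print(matrix[1])                      # [0, 0, 7, 0]
print(len(matrix), len(matrix[0]))    # 3 rows, 4 columns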
For example, consider a program that plays tic tac toe against a human opponent. We would need to represent the board that tic tac toe is played on. To do so, we'll create a Board class that mimics our PyList class from the previous section. The organization of our Board class follows the row major picture above. The outline for the Board class is given below.

The Board Class

class Board:
    # When a board is constructed, you may want to make a copy of the
    # board. This can be a shallow copy of the board because Turtle
    # objects are immutable from the perspective of a Board object.
    def __init__(self, board=None):
        self.items = []
        for i in range(3):
            rowlst = []
            for j in range(3):
                if board == None:
                    rowlst.append(Dummy())
                else:
                    rowlst.append(board[i][j])

            self.items.append(rowlst)

    # The __getitem__ method is used to index into the board. It should
    # return a row of the board. That row itself is indexable (it is just
    # a list), so accessing a row and column in the board can be written
    # board[row][column] because of this method.
    def __getitem__(self, index):
        return self.items[index]

    # This method should return True if the two boards, self and other,
    # represent exactly the same state.
    def __eq__(self, other):
        pass

    # This method will mutate this board to contain all Dummy turtles.
    # This way the board can be reset when a new game is selected. It
    # should not be used except when starting a new game.
    def reset(self):
        screen.tracer(0)
        for i in range(3):
            for j in range(3):
                self.items[i][j].goto(-100, -100)
                self.items[i][j] = Dummy()

        screen.tracer(1)

    # This method should return an integer representing the state of the
    # board. If the computer has won, return 1. If the human has won,
    # return -1. Otherwise, return 0.
    def eval(self):
        pass

    # This method should return True if the board is completely filled up
    # (no Dummy turtles). Otherwise, it should return False.
    def full(self):
        pass

    # This method should draw the X's and O's of this board on the screen.
    def drawXOs(self):
        for row in range(3):
            for col in range(3):
                if self[row][col].eval() != 0:
                    self[row][col].st()
                    self[row][col].goto(col * 100 + 50, row * 100 + 50)

        screen.update()

Because each row is itself a list in the Board class, we can just use the built-in list class for the rows of the matrix. Each location in each row of the matrix can hold either an X, an O, or a Dummy object. The Dummy objects are there for convenience and represent an open location in the board. The __eq__, eval, and full methods are left as an exercise for the student. The purpose of each will be described in the next section.
Many games, both animated and otherwise, are easy to implement using tkinter and turtle graphics. Animated characters or tokens in a game can be implemented as a turtle that moves around on the screen as necessary. For the tic tac toe game, the X's and O's can be implemented as RawTurtles. A RawTurtle is just like a Turtle object, except that we must provide the canvas where a RawTurtle moves around. The code below contains the three classes that define the X's, the O's, and the special Dummy class that is a placeholder for open locations in the board.

The X, O, and Dummy Classes

Human = -1
Computer = 1

# This class is just for placeholder objects when no move has been made
# yet at a position in the board. Having eval() return 0 is convenient
# when no move has been made.
class Dummy:
    def __init__(self):
        pass

    def eval(self):
        return 0

    def goto(self, x, y):
        pass

# In the X and O classes below the constructor begins by initializing the
# RawTurtle part of the object with the call to super().__init__(canvas).
# The super() call returns the class of the superclass (the class above
# the X or O in the class hierarchy). In this case, the superclass is
# RawTurtle. Then, calling __init__ on the superclass initializes the
# part of the object that is a RawTurtle.
class X(turtle.RawTurtle):
    def __init__(self, canvas):
        super().__init__(canvas)
        self.ht()
        self.getscreen().register_shape("X",
            ((-10, -10), (-4, -10), (0, -6), (4, -10), (10, -10), (4, 0),
             (10, 10), (4, 10), (0, 6), (-4, 10), (-10, 10), (-4, 0),
             (-10, -10)))
        self.shape("X")
        self.penup()
        self.speed(5)
        self.goto(-100, -100)

    def eval(self):
        return Computer

class O(turtle.RawTurtle):
    def __init__(self, canvas):
        super().__init__(canvas)
        self.ht()
        self.shape("circle")
        self.penup()
        self.speed(5)
        self.goto(-100, -100)

    def eval(self):
        return Human
The Minimax Algorithm

The Dummy, X, and O classes all have an eval method that returns either a 1 for a computer move, a -1 for a human move, or a 0 for no move yet. The values for these moves are used in an algorithm called minimax. The minimax algorithm is a recursive algorithm that is used in two person game playing, where one player is the computer and each player has a choice of some number of moves that can be made while taking turns.

The minimax algorithm is simple. The idea is that when it is the computer's turn, it should pick the move that will be best for the computer. Each possible move is analyzed to find the value that would be best for the computer. We'll let 1 represent the best move for the computer; the worst move the computer could make will be represented by -1. Move values can range between 1 and -1. When it is the computer's turn, it will pick the move that results in the maximum move value. That's the max portion of minimax. To find the best move, the computer will play out the game, alternating between the best move it could make and the best move the human could make. The best move for the human would be a -1. When it is the human's turn, the computer will assume that the human will make the best move he or she can make. This is the min part of minimax.

The minimax function is given two arguments: the player (either -1 for human or 1 for computer) and the board for the game. The base case for this recursive function checks to see if one of three things has occurred:

  1. The current board is a win for the computer. In that case minimax returns a 1 for a computer win.
  2. The current board is a win for the human. In that case minimax returns a -1 for a human win.
  3. The current board is full. In that case, since neither the human nor the computer won, minimax returns a 0.

The recursive part of the minimax function examines the player argument. If the player is the computer, then the function tries each possible move in the board by making a computer move. It places a computer move in that spot in a copy of the board and calls minimax with the human as the next player on that board. The algorithm uses the guess and check pattern to find the maximum of all possible values that come back from the recursive calls to minimax. The minimax function then returns that maximum value.

When minimax is called with the human as the next player to make a move, it does the same thing as when the computer is the player: it makes a human move in a copy of the board and recursively calls minimax on the copy of the board with the computer as the next player to play. The algorithm uses the guess and check pattern to find the minimum of all possible values that come back from calling minimax recursively.
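As a game-agnostic, hedged sketch of this recursion (this is not the book's tic tac toe solution, which is left as an exercise; the helper functions is_computer_win, is_human_win, is_full, possible_moves, and apply_move are hypothetical and would have to be supplied by a particular game):

COMPUTER = 1
HUMAN = -1

def minimax(player, board):
    # Base cases: someone has won, or the board is full (a draw).
    if is_computer_win(board):
        return 1
    if is_human_win(board):
        return -1
    if is_full(board):
        return 0

    # Try every possible move on a copy of the board and recurse with
    # the other player; the computer maximizes, the human minimizes.
    values = []
    for move in possible_moves(board):
        child = apply_move(board, move, player)   # returns a new board copy
        values.append(minimax(-player, child))

    if player == COMPUTER:
        return max(values)
    else:
        return min(values)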
There is one little tricky part to minimax. The minimax algorithm is called with a board and the next player to make a move. It returns a number somewhere between -1 and 1, indicating how likely it is that the computer or the human will win given that board. However, it does not tell you which move is the best to make. To deal with this, we can have the code that executes the computer's turn do a little of the work. For the computer turn code to find the best move, it makes a move in a copy of the board, calls minimax with the human as the next player to make a move, and then records the value that comes back from the call to minimax. The computer turn code uses the guess and check pattern to find the maximum value for all possible moves and the associated move which resulted in that value. After finding the best move, the computer's turn ends by the computer making that move in the board and returning, so the human can make their next move.

The tic tac toe code, which can be found on the text's accompanying website, contains the outline for the game. The minimax function and the computer turn code are left as an exercise for the reader. Minimax can be used in many two person games of perfect information, such as checkers and Connect Four. The term perfect information means that both players can see the whole state of the game. Poker is a game of imperfect information, so it would not be suitable for the minimax algorithm. It should be noted that tic tac toe has a small enough search space that the computer can solve the game; that means it will never lose. Most games, however, are not solvable, at least with the average computer. For instance, Connect Four has a much larger search space and can't be completely solved. Heuristics are applied to games like Connect Four to estimate how good or bad a board is after the minimax algorithm has searched as deep as it can, given its time constraints. Games like these are often studied in the field of artificial intelligence. Artificial intelligence includes the study of search algorithms that may be used even when the search space is too large to use an exhaustive search. Later in this text, a chapter covers a few heuristic search algorithms, along with a heuristic applied to the minimax algorithm for the game of Connect Four.

Linked Lists

Sequences can be organized in several different ways. The PyList sequence was a randomly accessible list. This means that we can access any element of the list in O(1) time to either store or retrieve a value. Appending an item was possible in O(1) time using amortized complexity analysis, but inserting an item took O(n) time, where n was the number of items after the location where the new item was being inserted. If a programmer wants to insert a large number of items towards the beginning of a list, a different organization for a sequence might be better suited to their needs.

A linked list is an organization of a list where each item in the list is in a separate node. Linked lists look like the links in a chain: each link is attached to the next link by a reference that points to the next link in the chain. When working with a linked list, each link in the chain is called a node. Each node consists of two pieces of information: an item, which is the data associated with the node, and a link to the next node in the linked list, often called next. The code below defines the Node class that can be used to create the nodes in a linked list.
The Node Class

class Node:
    def __init__(self, item, next=None):
        self.item = item
        self.next = next

    def getItem(self):
        return self.item

    def getNext(self):
        return self.next

    def setItem(self, item):
        self.item = item

    def setNext(self, next):
        self.next = next

In the Node class there are two pieces of information: the item, which is a reference to a value in the list, and the next reference, which points to the next node in the sequence.

Fig.: A sample LinkedList object

The figure shows a linked list with three elements added to it. There are four nodes in this figure; the first node is a dummy node. Having one extra dummy node at the beginning of the sequence eliminates a lot of special cases that must be considered when working with the linked list. An empty sequence still has a dummy node. Nodes in linked lists are represented as a rounded rectangle with two halves. The left half of a node is a reference to the item, or value, for that node in the sequence. The right half is a reference to the next node in the sequence, or to the special value None if it is the last node in the sequence. The figure also depicts the three pieces of information a linked list keeps: a reference to the first node in the sequence, so we can traverse the nodes when necessary; a reference to the last node in the list, which makes it possible to append an item to the list in O(1) time; and a count of the number of items, so we don't have to count the nodes when someone wants to retrieve the list size.
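As a short, hedged sketch (assuming the Node class above; the string items are made up), three nodes can be chained together by hand and then traversed by following the next references:

n3 = Node("c")
n2 = Node("b", n3)
n1 = Node("a", n2)

cursor = n1
while cursor != None:
    print(cursor.getItem())     # prints a, b, c in order
    cursor = cursor.getNext()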
The table below contains the computational complexity of various list operations on the LinkedList datatype presented in this section.

Fig.: Complexity of LinkedList operations

Operation       Complexity      Usage                  Method
List creation   O(len(y))       x = LinkedList(y)      calls __init__(y)
Indexed get     O(n)            a = x[i]               x.__getitem__(i)
Indexed set     O(n)            x[i] = a               x.__setitem__(i,a)
Concatenate     O(n)            z = x + y              x.__add__(y)
Append          O(1)            x.append(a)            x.append(a)
Insert          O(n)            x.insert(i,e)          x.insert(i,e)
Delete          O(n)            del x[i]               x.__delitem__(i)
Equality        O(n)            x == y                 x.__eq__(y)
Iterate         O(n)            for a in x:            x.__iter__()
Length          O(1)            len(x)                 x.__len__()
Membership      O(n)            a in x                 x.__contains__(a)
Sort            N/A             N/A                    N/A

Many of the operations appear to have the same complexity as the list datatype operations presented earlier. There are some important differences, though. The following sections will provide the implementations for some of these operations and point out the differences as compared to the list datatype operations given earlier.

The LinkedList Constructor

class LinkedList:
    # This class is used internally by the LinkedList class. It is
    # invisible from outside this class due to the two underscores that
    # precede the class name. Python mangles names so that they are not
    # recognizable outside the class when two underscores precede a name
    # but aren't followed by two underscores at the end of the name
    # (i.e. an operator name).
    class __Node:
        def __init__(self, item, next=None):
            self.item = item
            self.next = next

        def getItem(self):
            return self.item

        def getNext(self):
            return self.next

        def setItem(self, item):
            self.item = item

        def setNext(self, next):
            self.next = next

    def __init__(self, contents=[]):
        # Here we keep a reference to the first node in the linked list
        # and the last item in the linked list. They both point to a
        # dummy node to begin with. This dummy node will always be in
        # the first position in the list and will never contain an item.
        # Its purpose is to eliminate special cases in the code below.
        self.first = LinkedList.__Node(None, None)
        self.last = self.first
        self.numItems = 0

        for e in contents:
            self.append(e)
Creating a LinkedList object has exactly the same complexity as constructing a list object. If an empty list is created, then the time taken is O(1), and if a list is copied, then it is an O(n) operation. A LinkedList object has references to both ends of the linked list. The reference to the head of the list points to a dummy node. Having a dummy node at the beginning eliminates many special cases that would exist, when the list was empty, if no dummy node were used. Declaring the __Node class within the LinkedList class, and preceding the name with two underscores, hides the __Node class from any code outside the LinkedList class. The idea here is that only the LinkedList class needs to know about the __Node class. Initially, both the first and the last references point to the dummy node. The append method is used to add elements to the LinkedList should a list be passed to the constructor.

LinkedList Get and Set

def __getitem__(self, index):
    if index >= 0 and index < self.numItems:
        cursor = self.first.getNext()
        for i in range(index):
            cursor = cursor.getNext()

        return cursor.getItem()

    raise IndexError("LinkedList index out of range")

def __setitem__(self, index, val):
    if index >= 0 and index < self.numItems:
        cursor = self.first.getNext()
        for i in range(index):
            cursor = cursor.getNext()

        cursor.setItem(val)
        return

    raise IndexError("LinkedList assignment index out of range")

Implementations for the indexed get and set operations are included here largely as an example of traversing a linked list; they are of little practical value. If random access to a list is desired, then the list class should be used. Linked lists are not randomly accessible: they require a linear search through the nodes to access a particular location in the list. Each of these operations is O(n), where n is the value of the index.
14,530 | linkedlist concatenate def __add__(self,other)if type(self!type(other)raise typeerror("concatenate undefined for str(type(self)str(type(other)) result linkedlist( cursor self first getnext( while cursor !noneresult append(cursor getitem()cursor cursor getnext( cursor other first getnext( while cursor !noneresult append(cursor getitem()cursor cursor getnext( return result concatenation is an accessor method that returns new list comprised of the two original lists the operation is once again (nfor linked lists as it was for the pylist datatype presented in fig in this concatenation code variable called cursor is used to step through the nodes of the two lists this is the common method of stepping through linked list set the cursor to the first element of the linked list then use while loop that terminates when the cursor reaches the end ( the special value noneeach time through the while loop the cursor is advanced by setting it to the next node in the sequence notice in the code that the dummy node from both lists ( self and otheris skipped when concatenating the two lists the dummy node in the new list was created when the constructor was called linkedlist append def append(self,item)node linkedlist __node(itemself last setnext(nodeself last node self numitems + the code in sect is the first time we see small advantage of linkedlist over list the append operation has complexity of ( for linkedlists where with lists the complexity was an amortized ( complexity each append will always take the same amount of time with linkedlist linkedlist is also never bigger than it has to be howeverlinkedlists take up about twice the space of randomly accessible list since there has to be room for both the reference to the item and the reference to the next node in the list this copy belongs to 'acha |
14,531 | sequences the code for the append method is quite simple since the self last reference points at the node immediately preceding the place where we want to put the new nodewe just create new node and make the last one point at it then we make the new node the new self last node and increment the number of items by linkedlist insert def insert(self,index,item)cursor self first if index self numitemsfor in range(index)cursor cursor getnext( node linkedlist __node(itemcursor getnext()cursor setnext(nodeself numitems + elseself append(itemthe insert operationwhile it has the same complexity as insert on listis quite bit different for linked lists inserting into list is (noperation where is the number of elements that are in the list after the insertion point since they must all be moved down to make room for the new item when working with linkedlist the is the number of elements that appear before the insertion point because we must search for the correct insertion point this means that while inserting at the beginning of list is (noperationinserting at the beginning of linkedlist is ( operation if you will have to do many inserts near the beginning of listthen linked list may be better to use than randomly accessible list other linked list operations the other linked list operations are left as an exercise for the reader in many casesthe key to working with linked list is to get reference to the node that preceds the location you want to work with for instanceto delete node from linked list you merely want to make the next field of the preceeding node point to the node following the node you wish to delete consider the sample linked list in fig to delete the second item ( the " "from this list we want to remove the node that contains the reference to the "bto do this we can use cursor to find the node preceding the one that references the "bonce foundthe next field of the cursor can be made to point to the node after the "bas shown in fig changing the next pointer of the cursor' node to point to the node after the "bresults in the node containing the "bdropping out of the list since there are no other references to the node containing the " "and actually to the "bitselfthe two objects are garbage collected this copy belongs to 'acha |
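The deletion steps just described translate directly into code. The sketch below is one possible __delitem__ method for the LinkedList class; the text leaves this operation as an exercise, so treat it as an illustration rather than the book's own implementation.

    def __delitem__(self, index):
        if index >= 0 and index < self.numItems:
            # The dummy node precedes position 0, so after index steps the
            # cursor references the node just before the one being deleted.
            cursor = self.first
            for i in range(index):
                cursor = cursor.getNext()

            delNode = cursor.getNext()
            # Unlink the node by making its predecessor point past it. With
            # no remaining references to it, the node is garbage collected.
            cursor.setNext(delNode.getNext())

            if delNode is self.last:
                self.last = cursor

            self.numItems -= 1
            return

        raise IndexError("LinkedList index out of range")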
14,532 | fig deleting node from linked list finallythe sort operation is not applicable on linked lists efficient sorting algorithms require random access to list insertion sorta ( algorithmwould workbut it would be highly inefficient if sorting were requiredit would be much more efficient to copy the linked list to randomly accessible listsort itand then build new sorted linked list from the sorted list stacks and queues there are two other sequential data structures that are very common in computer programming stack is data structure where access is only at one end of the sequence new values are pushed onto the stack to add them to the sequence and popped off the stack to remove them from the sequence the run-time stackdescribed in chap was one such instance of stack stacks are used in many algorithms in computer science stacks can be used to evaluate numeric expressions they are useful when parsing information they can be used to match parenthesis in programs and expressions the operations on stack are given in fig stacks are called last in/first out or lifo data structures the last item pushed is the first item popped stack class can be implemented in at least couple different operation stack creation pop push top isempty complexity ( ( ( ( ( usage =stack( = pop( push(aa= top( isempty(description calls the constructor returns the last item pushed and removes it from pushes the itemaon the stacks returns the top item without popping returns true if has no pushed items fig complexity of stack operations this copy belongs to 'acha |
14,533 | sequences ways to achieve the computation complexities outlined in this table either list or linked list will suffice the code in sect is an implementation of stack with list the implementation is pretty straight-forward the main program in sect tests the stack datatype with couple tests to make sure the code operates correctly the stack class code class stackdef __init__(self)self items [ def pop(self)if self isempty()raise runtimeerror("attempt to pop an empty stack" topidx len(self items)- item self items[topidxdel self items[topidxreturn item def push(self,item)self items append(item def top(self)if self isempty()raise runtimeerror("attempt to get top of empty stack" topidx len(self items)- return self items[topidx def isempty(self)return len(self items= def main() stack(lst list(range( )lst [ for in lsts push( if top(= print("test passed"elseprint("test failed" while not isempty()lst append( pop() lst reverse( if lst !lstprint("test failed"elseprint("test passed" this copy belongs to 'acha |
14,534 | trys pop(print("test failed" except runtimeerrorprint("test passed"exceptprint("test failed" trys top(print("test failed" except runtimeerrorprint("test passed"exceptprint("test failed" if __name__=="__main__"main(this codeif saved in file called stack py can be imported into other modules when this module is run by itselfthe test main function will execute when this module is imported into another programthe main function will not execute because the __name__ variable will not be equal to "__main__a queue is like stack in many ways except that instead of being lifo data structurequeues are fifo or first in/first out data structures the first item pushedis the first item popped when we are working with queue we talk of enqueueing an iteminstead of pushing it when removing an item from the queue we talk of dequeueing the item instead of popping it as we did from stack the table in fig provides details of the queue operations and their complexities implementing queue with the complexities given in this table is bit trickier than implementing the stack to implement queue with these complexities we need to be able to add to one end of sequence and remove from the other end of the sequence in ( time this suggests the use of linked list certainlya linked list would work to get the desired complexities howeverwe can still use list if we are willing to accept an amortized complexity of ( for the dequeue operation this queue class code implements queue with list and achieves an amortized complexity of ( for the dequeue operation operation queue creation dequeue enqueue front isempty complexity ( ( ( ( ( usage =queue( = dequeue( enqueue(aa= front( isempty(description calls the constructor returns the first item enqueued and removes it from enqueues the itemaon the queueq returns the front item without dequeueing the item returns true if has not enqueued items fig complexity of queue operations this copy belongs to 'acha |
14,535 | sequences class queuedef __init__(self)self items [self frontidx def __compress(self)newlst [for in range(self frontidx,len(self items))newlst append(self items[ ] self items newlst self frontidx def dequeue(self)if self isempty()raise runtimeerror("attempt to dequeue an empty queue" when queue is half fullcompress it this achieves an amortized complexity of ( while not letting the list continue to grow unchecked if self frontidx len(self items)self __compress( item self items[self frontidxself frontidx + return item def enqueue(self,item)self items append(item def front(self)if self isempty()raise runtimeerror("attempt to access front of empty queue" return self items[self frontidx def isempty(self)return self frontidx =len(self items def main() queue(lst list(range( )lst [ for in lstq enqueue( if front(= print("test passed"elseprint("test failed" while not isempty()lst append( dequeue() if lst !lstprint("test failed"elseprint("test passed"this copy belongs to 'acha |
14,536 | for in lstq enqueue( lst [ while not isempty()lst append( dequeue() if lst !lstprint("test failed"elseprint("test passed" tryq dequeue(print("test failed" except runtimeerrorprint("test passed"exceptprint("test failed" tryq front(print("test failed" except runtimeerrorprint("test passed"exceptprint("test failed" if __name__=="__main__"main(infix expression evaluation an infix expression is an expression where operators appear in between their operands for example( is an infix expression because appears between the and and appears between its operands pythonbeing programming languagecan evaluate infix expressions howeverlet' say we would like to write program that would evaluate expressions entered by the user can we do this in python programit turns out we canand very easily python includes function that will treat string like an expression to be evaluated and will return the result of that evaluation the function is called eval here is program that uses the eval function def main()expr input("please enter an infix expression"result eval(exprprint("the result of",expr,"is",result if __name__ ="__main__"main(this copy belongs to 'acha |
14,537 | sequences this is certainly very interestingalbeit shortprogram the eval function does an awful lot of work for us buthow does it workit turns out we can write our own eval function using couple of stacks in this section we describe an infix expression evaluator to make our job bit easierwe'll insist that the user enter spaces between all operators (including parensand operands so for instancethe user might interact as follows please enter an infix expression the result of the infix evaluator algorithm uses two stacksan operator stack and an operand stack the operator stack will hold operators and left parens the operand stack holds numbers the algorithm proceeds by scanning the tokens of the infix expression from left to right you can do this quite easily in python by splitting the input string and then iterating over the list of strings from the input the tokens are operators (including parensand numbers each operator has precedence associated with it multiplication and division operators have the highest precedence while addition and subtraction are next finally the left paren and right paren precedence are the lowest function called precedence can be written so that given an operatorthe precendence function returns the proper precedence value you can decide on your precedence values given the prescribed restrictions to begin the operand stack and the operator stack are initialized to empty stacks and left paren is pushed on the operator stack for each token in the input we do the following if the token is an operator then we need to operate on the two stacks with the given operator operating is described in sect if the token is number then we push it on the number stack after scanning all the input and operating when required we operate on the stacks one more time with an extra right paren operator after operating the final time the operator stack should be empty and the operand stack should have one number on it which is the result you pop the operand stack and print the number as your result the operate procedure the operate procedure should be separate function the operate procedure is given an operatorthe operator stackand operand stack as arguments to operate we do the followingif the given operator is left paren we push it on the operator stack and return otherwisewhile the precedence of the given operator is less than or equal to the precedence of the top operator on the operator stack proceed as followsthis copy belongs to 'acha |
14,538 | pop the top operator from the operator stack call this the topop if topop is +-*or then operate on the number stack by popping the operandsdoing the operationand pushing the result if topop is left paren then the given operator should be right paren if sowe are done operating and we return immediately when the precedence of the given operator is greater than the precedence of topop we terminate the loop and push the given operator on the operator stack before returning from operating example to see how this algorithm works it will help to look at an example in the figures the right-most underlined token is the token currently being processed and all tokens to the left have already be processed the operand stack and the operator stack are labeled in the diagrams the topop operator and the current operator are given as well in fig the tokens have been processed and pushed onto their respective stacks the right paren is now being processed and operate was called to process this operator in the operate procedure the topop is "+and "+has higher precedence than ")so the right paren cannot be pushed on top of the addition operator sowe pop the operator stack and operate since we popped "+we then pop the two numbers from the operand stackadd them togetherand push the result as shown in fig after pushing the on the stack we found that the top operator was left paren in that case we popped the left paren and returned immediately leaving just the single left paren on the operator stack then the evaluator proceeded by finding the "*operator and pushing it and then finding the and pushing it on the operand stack nextthe "-operator cannot be pushed onto the operator stack because its precedence is lower then "*operate is called and the topop dictates that we pop two numbersmultiply themand push the result as shown in fig the "-operator has higher precedence than the "(so it is just pushed onto the operator stack the is processed and pushed onto the operand stack as shown in fig to finish up the evaluation of the infix expressionoperate is called one more time with ")as the operator this forces the to be subtracted from the leaving on the operand stack and the operator stack empty the final result is and the evaluator pops the result from the operand stack and returns it as the result and that is how the python eval function works it turns out that the python eval function is bit more sophisticated than this example howeverthe effect of evaluating simple infix expressions is the same radix sort the radix sort algorithm sorts sequence of strings lexicographicallyas they would appear in phonebook or dictionary to implement this algorithm the strings are this copy belongs to 'acha |
[Figures: infix evaluation steps; diagrams not reproduced.]
14,540 | read from the source ( file or the internetand they are placed in queue called the mainqueue as the strings are read and placed in the queue the algorithm keeps track of the length of the longest string that will be sorted we'll call this length longest for the radix sort algorithm to work correctlyall the strings in the mainqueue need to be the same lengthwhich is not likely of course if string is shorter than the longest string it can be padded with blanks to assistit is handy to have function that returns character of string like the charat function in sect the charat function def charat( , )if len( ireturn return [ithe charat function returns the ith character of the string and blank if is greater than or equal to the length of with the use of this function the strings in the mainqueue can be different lengths and the charat function will make them look like they are the same length in addition to the mainqueuethere are queues created and placed into list of queuescalled queuelist there are different possible ascii values and one queue is created for each ascii letter since most ascii characters are in the range of - the algorithm probably won' use the queues at indices - the sorting algorithm works by removing string from mainqueue it looks at the last characterstarting at longest- in each string and placing the string on queue that corresponds to the character' ascii value soall strings ending with 'ago on the 'aqueue and so on to find the index into queuelist the ord function is used for instancewriting ord(" "would return which is the index to use into the queuelist for the character "athenstarting with the first queue in the queuelistall strings are dequeued from each queue and placed on the main queue we empty each queue first before proceeding to the next queue in the queuelist this is repeated until all letter queues are empty thenwe go back to removing all elements from the main queue againthis time looking at the second to the last letter of each word each string is placed on queue in the queuelist depending on its second to last letter the process is repeated until we get to the first letter of each string when we are doneall strings are on the main queue in sorted order to complete the algorithm all strings are removed from the mainqueue one more time in sorted order radix sort example to see how radix sort works we'll consider an example where the words batfarmbarncarhatand cat are sorted alphabetically in the figures in this section each this copy belongs to 'acha |
14,541 | sequences queue is drawn vertically with the front of the queue being at the bottom of the box and the rear of the queue being at the top of each box while there are queues plus mainqueue created by radix sortthe example will show just the queues that are used while sorting these words the first queue in the list is the queue for spaces in string figure depicts the strings on the mainqueue after they have been read from their source the first step of the algorithm processes the mainqueue by emptying it and placing each string in the queue that corresponds to its fourth letter (the maximum string lengththis results in farm and barn being placed on the and queues the other strings are placed on the space queue as shown in fig thenall the strings are dequeued from the three non-empty queues and placed back on the mainqueue in the order that they were dequeued as shown in fig once againthe process is repeated for the third letter in each string this results in using the and queues as shown in fig againthe strings are brought back to the mainqueue in the order they were dequeued as depicted in fig and the process is repeated again for the second letter in each string all strings have an as their second character so they all end up on the queue (fig fig radix sort step fig radix sort step -- th letter this copy belongs to 'acha |
[Figures: radix sort steps for the 3rd letter pass; diagrams not reproduced.] They all go back to the mainQueue as shown in the corresponding figure, and in this case there is no change from the previous step. Finally, we look at the first letter in each string and the sort is almost complete. Bringing all the strings back to the mainQueue results in the mainQueue containing all the strings in sorted order. The mainQueue can be emptied at this point and the strings can be processed in sorted order.
[Figures: radix sort steps for the 2nd letter and 1st letter passes; diagrams not reproduced.] Radix sort is pretty simple. It is called radix sort because the radix is like a decimal point moving backwards through the string: we move the decimal point one character at a time until we reach the first character in each string.
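Putting the description above into code, a radix sort might look like the following sketch. It assumes the Queue class developed earlier in this chapter (with its enqueue, dequeue, and isEmpty methods) is in scope and that the strings contain ordinary ASCII characters; the helper and variable names are simply the ones used in the description.

    def charAt(s, i):
        # Return the ith character of s, or a blank if s is too short.
        # Padding with blanks makes every string look as long as the
        # longest string being sorted.
        if len(s) - 1 < i:
            return " "
        return s[i]

    def radixSort(words):
        mainQueue = Queue()
        longest = 0

        for word in words:
            mainQueue.enqueue(word)
            longest = max(longest, len(word))

        # One queue for each of the 256 possible ASCII values.
        queueList = [Queue() for i in range(256)]

        # Work from the last character position back toward the first.
        for pos in range(longest - 1, -1, -1):
            # Distribute the strings over the letter queues according to
            # the character at this position.
            while not mainQueue.isEmpty():
                word = mainQueue.dequeue()
                queueList[ord(charAt(word, pos))].enqueue(word)

            # Empty each letter queue, in order, back onto the main queue.
            for q in queueList:
                while not q.isEmpty():
                    mainQueue.enqueue(q.dequeue())

        result = []
        while not mainQueue.isEmpty():
            result.append(mainQueue.dequeue())
        return result

    print(radixSort(["bat", "farm", "barn", "car", "hat", "cat"]))
    # prints ['barn', 'bat', 'car', 'cat', 'farm', 'hat']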
14,544 | fig radix sort step summary this explored the use of linear sequences in computer programming these sequences come in many forms including randomly accessible listsmatriceslinked listsstacksand queues we also saw that two-dimensional matrix is just list of lists the also explored operations as related to these datatypes and the complexity of these operations algorithms were also an important part of four the selection sortmerge sortand quicksort algorithms were studied along with their computational complexities minimax was presented as an interesting case study on using two-dimensional matrices and recursion in program the infix evaluator and radix sort algorithms were also presented as examples of using stacks and queues having read this you should have an understanding of abstract data types like listsstacksand queues you should understand how an abstract data type is implementedhow the implementation can affect the complexity of its operationsand at least few algorithms that use these data types in their implementations review questions answer these short answermultiple choiceand true/false questions to test your mastery of the what is the best caseworst caseand average case complexity of selection sortmerge sortand the quicksort algorithms how can the append operation achieve ( complexity when it sometimes runs out of space to append another item what is the complexity of the concatenation operatorthe operatorfor lists what is the complexity of the deletion operator for liststhis copy belongs to 'acha |
14,545 | sequences when sorting items in listwhat method must be defined for those elementswhy why does quicksort perform better than merge sort under what conditions would it be possible for merge sort to perform better than quicksort summarize what happens when list is partitioned summarize what happens when two lists are merged in the merge sort algorithm what is the purpose of the start parameter to the select function of the selection sort algorithm what are the advantages of linked list over randomly accessible list implementation of list data type what are the advantages of randomly accessible list over linked list implementation of list data type how does stack differ from queue in how we access it what is the complexity of the radix sort algorithm programming problems write program that times the quicksortthe merge sortand the built-in sort algorithms in python to discover which one is better and to see their relative speeds to do this you should implement two sequence classesa qsequence and msequence the qsequence can inherit from the pylist class and should implement its own sort algorithm using the quicksort code presented in this the msequence should be similar class using the merge sort algorithm you can sort anything you like if you choose to sort objects of some class you defineremember that you must implement the __lt__ method for that class be sure to randomize the elements in the sequence prior to sorting them using quicksort generate an xml file in the plot format to plot three sequences plot the time it takes to both randomize and quicksort sequence then plot the time it takes to merge sort sequence finallyplot the time it takes to sort the same sequence using the built-in sort method the complexity of merge sort and quicksort is ( log nso by computational complexity the three algorithms are equivalent what does the experimental data reveal about the two algorithmsput comment at the top of your program giving your answer to effectively test these three sorting algorithms you should sort items up to list size of at least elements you can time the sorts in increments of depending on your computeryou may need to play with the exact numbers to some degree to get good looking graph the merge sort algorithm is not as commonly used as the quicksort algorithm because quicksort is an inplace sort while merge sort requires at least space for one extra copy of the list in factmerge sort can be implemented with exactly one extra copy of the list in this exercise you are to re-implement the merge sort this copy belongs to 'acha |
14,546 | programming problems algorithm to use one extra copy of the list instead of allocating new list each time two lists are merged the extra list is allocated prior to calling the recursive part of the merge sort algorithm thenwith each alternating level of recursion the merge sort algorithm copies to the other listflipping between the two lists to accomplish thisthe mergesortrecursively function should be given new list of lists called lists instead of the seq list at lists [ is the seq list and at lists [ is the extra copy of the list one extra parameter is given to the mergesortrecursively function the index of the list to merge from is also provided this index will flip back and forth between and as each recursive call to mergesortrecursively occurs this flipping of to to and so on can be accomplished using modular arithmetic by writinglisttomergefromindex (listtomergefromindex the percent sign is the remainder after dividing by adding and finding the remainder after dividing by two means that listtomergefromindex flips between and sothe call is made as mergesortrecursively(listtomergefromindexlistsstartstopthe mergesortrecursively function must return the new index of the list to merge from one part of this new mergesortrecursively function is little tricky there may be one more level of recursion on the left or right side for the two recursive calls to mergesortrecursively if this is the casethen the result from either the left or right half must be copied into the same list as the other half before the two halves can be successfully merged when mergesortrecursively returns to the mergesort function the result of sorting may be in the original sequence or it may be in the copy if it is in the copythen the result must be copied back to the original sequence before returning complete this two list version of merge sort as described in this exercise and test it thoroughly then time this version of merge sort and compare those timings with the version of merge sort presented in the and with the quicksort implementation presented in this construct an xml file in the format read by the plotdata py program and plot their corresponding timing diagrams to see how this algorithm performs when compared to the other two complete the tic tac toe program described in the section on -dimensional matrices use the code from sect as your starting point then complete the sections that say they are left as an exercise for the student complete the linkedlist datatype by implementing the deleteequalityiteratelengthand membership operations make sure they have the complexity given in the linkedlist complexities table thenimplement test program in your main function to thorougly test the operations you implemented call the module linkedlist py so that you can import this into other programs that may need it implement queue data type using linked list implementation create set of test cases to throroughly test your datatype place the datatype in file called queue py and create main function that runs your test cases this copy belongs to 'acha |
14,547 | sequences implement priority queue data type using linked list implementation in priority queueelements on the queue each have priority where the lower the number the higher the priority the priorities are usually just numbers the priority queue has the usual enqueuedequeueand empty methods when value is enqueued it is compared to the priority of other items and placed in front of all items that have lower priority ( higher priority number implement stack data type using linked list implementation create set of test cases to throroughly test your datatype place the datatype in file called stack py and create main function that runs your test cases implement the infix evaluator program described in the you should accept input and produce output as described in that section of the text the input tokens should will all be separated by blanks to make retrieval of the tokens easy don' forget to convert your number tokens from strings to floats when writing the program implement the radix sort algorithm described in the use the algorithm to sort list of words you find on the internet or elsewhere write main program that tests your radix sort algorithm searching sequence of items for particular item takes (ntime on average where is the number of items in the list howeverif the list is sorted firstthen searching for an item within the list can be done in (log ntime by using divide and conquer approach this type of search is called binary search the binary search algorithm starts by looking for the item in the middle of the sequence if it is not found there then because the list is sorted the binary search algorithm knows whether to look in the left or right side of the sequence binary search reports true or false depending on whether the item is found it is often written recursively being given sequence and the beginning and ending index values in which to search for the item for instanceto search an entire sequence called seqbinary search might be called as binarysearch(seq, ,len(seq)- write program that builds pylist or just python list of valuessorts themand then looks up random values within the list compute the lookup times for lists of various sizes and record your results in the plotdata py format so you can visualize your results you should see (log ncurve if you implemented binary search correctly this copy belongs to 'acha |
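For the last problem, a recursive binary search along the lines described might be sketched as follows; the function name and parameters are taken from the problem statement, and the rest is one possible way to fill it in.

    def binarySearch(seq, item, start, stop):
        # Search the sorted sequence seq for item between the start and stop
        # indices, inclusive. Each call discards half of the remaining range,
        # which is what gives the O(log n) behavior.
        if start > stop:
            return False

        mid = (start + stop) // 2

        if seq[mid] == item:
            return True

        if item < seq[mid]:
            return binarySearch(seq, item, start, mid - 1)

        return binarySearch(seq, item, mid + 1, stop)

    # To search an entire sorted sequence:
    # binarySearch(seq, x, 0, len(seq) - 1)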
14,548 | sets and maps in the last we studied sequences which are used to keep track of lists of things where duplicate values are allowed for instancethere can be two sixes in sequence or list of integers in this we look at sets where duplicate values are not allowed after examining sets we'll move on to talk about maps maps may also be called dictionaries or hash tables the term hash table actually suggests an implementation of set or map the primary focus of this is in understanding hashing hashing is very important concept in computer science because it is very efficient method of searching for value to begin the we'll motivate our interest in hashingthen we'll develop hashing algorithm for finding values in set we'll also apply hashing to the building of sets and maps then we'll look at an important technique that uses hashing called memoization and we'll apply that technique to couple of problems goals in this you learn how to implement couple of abstract datatypessets and maps you will read about the importance of hashing for both of these datatypes you'll also learn about the importance of understanding the difference between mutable and immutable data by the end of the you should be able to answer these questions what is the complexity of finding value in setwhat is the load factor and how does it affect the overall efficiency of lookup in hash tablewhen would you use an immutable setwhen might you want mutable setwhen is there an advantage to using memoization in problemunder what circumstances can it be useful to use map or dictionary(cspringer international publishing switzerland lee and hubbarddata structures and algorithms with pythonundergraduate topics in computer sciencedoi -- this copy belongs to 'acha |
14,549 | sets and maps againthere will be some interesting programming problem challenges in this including optimization of the tic tac toe game first presented in the last and sudoku puzzle solver read on to discover what you need to know to solve these interesting problems playing sudoku many people enjoy solving sudoku puzzles to solve sudoku puzzle you must find the correct numbers to fill matrix all numbers must be - each row must have one each of - in it the same is true for each column finallythere are nine squares within the matrix that must also have each of - in them to beginyou are given puzzle with some of the locations known as shown in fig your job is to find the rest of the numbers given the numbers that already appear in the puzzle common way to solve these puzzles is by the process of elimination it helps to write down the possible values for puzzle and then eliminate the possible values one by one for instancethe puzzle above can be annotated with possible values in for the unknowns as shown in fig to begin solving the puzzlewe can immediately eliminate and from the second column of the puzzle for those cells that do not contain or none of those numbers could appear in any other cell in the second column because they are already known likewisethe numbers and can be eliminated from some cells fig sudoku puzzle this copy belongs to 'acha |
14,550 | fig annotated sudoku puzzle in the third row in the puzzle because those numbers already appear in other cells applying rules like these reduces the number of possible values for each cell in the puzzle figure shows the puzzle after applying some of these rules if we spend some time thinking about sudoku and how to solve it we can derive two rules that can be used to solve many sudoku puzzles these two rules can be applied to any group within the puzzle group is collection of nine cells that appear in rowcolumnor square within the puzzle within each cell is set of numbers representing the possible values for the cell at some point in the process of reducing the puzzle here are the two rules rule the first rule is generalization of the process that we used above to remove some values from cells within group look for cells that contain the same set of possible values if the cardinality of the set ( the number of items in the setmatches the number of duplicate sets foundthen the items of the duplicate sets may safely be removed from all non-duplicate sets in the group this rule applies even in the degenerative case where the number of duplicate sets is and the size of that set is the degenerative case is what we used above to remove single items from other sets this rule can be applied to fig to remove the from all cells in the th row of the puzzle except the th column where appears by itself rule the second rule looks at each cell within group and throws away all items that appear in other cells in the group if we are left with only one value in the chosen cellthen it must appear in this cell and the cell may be updated by throwing this copy belongs to 'acha |
14,551 | sets and maps fig sudoku puzzle after one pass away all other values that appear in the chosen cell applying this rule to the fifth row in fig results in the fourth column being reduced to containing because does not appear in any other cell in the th row this rule also applies in the last row of the puzzle where is only possible in the second column after removing and from that cell because they appear within other cells in that row note that the puzzle in fig is not fully reduced the reduction process can be applied iteratively until no more reductions are possible the sudoku solver algorithm keeps applying this reduction process until no more changes are made during pass of reductions on all the groups within the puzzle applying these two rules in this manner will fully reduce many sudoku puzzles sets the reduction algorithm for sudoku puzzles manipulates sets of numbers and eliminates possible values from those sets as the reduction progresses set is collection that does not allow duplicate values sets can be composed of any values integersemployee objectscharactersstringsliterally any object in python could be an element of some set set has cardinality the cardinality of set is the number of items in it this copy belongs to 'acha |
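For example, in Python (the order in which a set's elements print may vary):

    s = set(["dog", "cat", "bird"])

    print(len(s))        # 3, the cardinality of the set
    print("cat" in s)    # True
    s.add("cat")         # adding a duplicate leaves the set unchanged
    print(len(s))        # still 3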
14,552 | operation set creation complexity ( set creation ( cardinality membership ( ( nonmembership disjoint ( subset (nsuperset (nunion (nintersection (nset difference (nsymmetric difference set copy (no(no(nusage =set([iterable]description calls the set constructor to create set iterable is an optional initial contents in which case we have (ncomplexity =frozenset([iterable]calls the frozenset constructor for immutable set objects to create frozenset object len(sthe number of elements in is returned in returns true if is in and false otherwise not in returns true if is not in and false otherwise isdisjoint(treturns true if and share no elementsand false otherwise issubset(treturns true if is subset of tand false otherwise issuperset(treturns true if is superset of and false otherwise union(treturns new set which contains all elements in and intersection(treturns new set which contains only the elements in both and difference(treturns new set which contains the elements of that are not in symmetric_difference(treturns new set which contains difference(tunion( difference( ) copy(returns shallow copy of fig set and frozen set operations sets are objects that have several commonly defined operations on them these operations are sometimes binary operations involving more than one setand sometimes retrieve information about just one set the table in fig describes commonly defined operations on sets and their associated computational complexities python has built-in support for two types of setsthe set and frozenset classes the frozenset class is immutable objects of the set class can be mutated in fig the variable must be set and the variable must be an iterable sequencewhich would include sets infix operators are also defined as syntactic sugar for some of the operations defined in fig for subset containment you can write < for proper subset you can write proper subset is subset that has at least one less element than its superset for superset you can write > or for proper superset for the union operationwriting is equivalent to writing union(tand for intersections& is equivalent to writing intersection(twriting is the same as writing difference(tand ^ is equivalent to the symmetric difference operator the operations in fig are not defined on the frozenset class since they mutate the set they are only defined on the set class againthere are operators for some of the methods presented in fig the mutator union method can be written |= intersection update can be written as &= finallythe symmetric difference update operator is written ^= while these operators are convenientthey are not well-known and code written by calling the methods in the table above will be more descriptive this copy belongs to 'acha |
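The short interaction below shows a few of the operations both ways, with the method call in the code and the equivalent operator noted in the comment; the particular sets are just an illustration.

    A = set([1, 2, 3, 4])
    B = set([3, 4, 5])

    print(A.union(B))                  # same as A | B  -> {1, 2, 3, 4, 5}
    print(A.intersection(B))           # same as A & B  -> {3, 4}
    print(A.difference(B))             # same as A - B  -> {1, 2}
    print(A.symmetric_difference(B))   # same as A ^ B  -> {1, 2, 5}
    print(set([1, 2]).issubset(A))     # same as set([1, 2]) <= A -> True
    print(A.isdisjoint(B))             # False, since A and B share 3 and 4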
14,553 | sets and maps operation union intersection complexity (no(nusage update(ts intersection_update(tset difference symmetric difference add remove (no(no( ( difference_update(ts symmetric_difference _update(ts add(es remove(ediscard ( discard(epop clear ( ( pop( clear(description adds the contents of to updates to contain only the intersection of the elements from and subtracts from the elements of updates with the symmetric difference of and add the element to the set remove the element from the set this raises keyerror if does not exist in remove the element if it exists in and ignore it otherwise remove an arbitrary element of remove all the elements of leaving the set empty fig mutable set operations the computational complexities presented above are surprisinghow can set membership be tested in ( timefrom what has been presented so farit should take (ntime to test set membership after allwe would have to look at all the elements in the setor at least half on averageto know if an item was in the set how can the union of two sets be computed in (ntime if we are to insure there are no duplicates in the setit would seem that the union of two sets would take ( time to compute unless the set could be sorted in some way but sorting elements of set is not always possible since not all elements of sets have an ordering hashing if it is possible to implement set membership test in ( timethen we can implement the other operations above with the complexities we have indicated without ( membership testtaking the union of two sets would take lot longer as indicated above testing set membership in ( time is accomplished using hashing hashing is an extremely important concept in computer science and is related to random access in computer as we saw back in chap accessing any location within list can be accomplished in ( time this is the principle of random access randomly accessible list means any location within the list can be accessed in ( time to access location in list we need the index of the location we wish to access the index serves as the address of an item in the list once we have stored an item in the listwe must remember its index if we wish to retrieve it in ( time without the index we would have to search for the item in the list which would take (ntimenot ( time soif we wanted to implement set where we could test membership in ( time we might think of storing the items of the set in list we would somehow have to remember the index where each item was stored to find it again in ( time this seems improbable at first howeverwhat if the item could be used to figure out its addressthis is the insight that led to hashing each object in the computer must this copy belongs to 'acha |
14,554 | be stored as string of zeroes and ones since computers speak binary these zeroes and ones can be interpreted however we likeincluding as the index into list this concept is so important that python (and many other modern languageshas included function called hash that can be called on any object to return an integer value for an object we'll call this value the object' hash code or hash value consider these calls to the hash function python ( : feb : : [gcc (apple inc build )on darwin type "help""copyright""creditsor "licensefor more information hash("abc"- hash(" " hash( hash( hash( hash(true hash(false hash([ , , ]traceback (most recent call last)file ""line in typeerrorunhashable type'listwhile most objects are hashablenot every object is in particularmutable objects like lists may not be hashable because when an object is mutated its hash value may also change this has consequences when using hash values in data structures as we'll see later in this in addition to built-in typespython let' the programmer have some control over hash codes by implementing __hash__ method on class if you write __hash__ method for class you can return whatever hash value integer you like for instances of that class notice that calling hash on the string "abcreturned negative value while other calls to hash returned extremely large integers clearly some work has to be done to convert this hash integer into an acceptable index into list read on to discover how hash values are converted into list indices the hashset class we can use hash value to compute an index into list to obtain ( item lookup complexity to hide the details of the list and the calling of the hash function to find the index of the itema set class can be written we'll call our set class hashsetnot to be confused with the built-in set class of python the built-in set class uses hashingtoo the hashset class presented in this section shows you how the set this copy belongs to 'acha |
14,555 | sets and maps class is implemented to beginhashset objects will contain list and the number of items in the list initially the list will contain bunch of none values the list must be built with some kind of value in it none serves as null value for places in the list where no value has been stored the list isn' nearly big enough to have location for every possible hash value yetthe list can' possibly be big enough for all possible hash values anyway in factas we saw in the last sectionsome hash values are negative and clearly indices into list are not negative the conversion of hash value to list index is explained in more detail in sect the hashset constructor is given in sect the hashset constructor class hashsetdef __init__(self,contents=[])self items [none self numitems for item in contentsself add(item storing an item to store an item in hash set we first compute its index using the hash function there are two problems that must be dealt with firstthe list that items are stored in must be finite in length and definitely cannot be as long as the unique hash values we would generate by calling the hash function since the list must be shorter than the maximum hash valuewe pick size for our list and then divide hash values by the length of this list the remainder ( the result of the operatorcalled the mod operatoris used as the index into the list the remainder after dividing by the length of the list will always be between and the length of the list minus one even if the hash value is negative integer using the mod operator will give us valid indices into list of whatever size we choose there is another problem we must deal with hash values are not necessarily unique hash values are integers and there are only finitely many integers possible in computer in additionbecause we divide hash values by the length of the listthe remaindersor list indiceswill be even less unique than the original hash values if the list length is then hash value of and - will both result in trying to store value at index in the list this isn' possible of course collision resolution consider trying to store both "cowand "foxusing hashing in list whose length is the hash value of "cowis - and the hash value of "foxis when we mod both values by the remainder is for both hash values indicating they should both be stored at the fifth location in list this copy belongs to 'acha |
14,556 | when two objects need to be stored at the same index within the hash set listbecause their computed indices are identicalwe call this collision it is necessary to define collision resolution scheme to deal with this there are many different schemes that are possible we'll explore scheme called linear probing when collision occurs while using linear probingwe advance to the next location in the list to see if that location might be available we can tell if location is available if we find none value in that spot in the list it turns out that there is one other value we might find in the list that means that location is available special type of object called __placeholder object might also be stored in the list the reason for this class will become evident in the next section for nowa none or __placeholder object indicates an open location within the hash set list the code in sect takes care of adding an item into the hashset list and is helper function for the actual add method hashset add helper function def __add(item,items)idx hash(itemlen(itemsloc - while items[idx!noneif items[idx=itemitem already in set return false if loc and type(items[idx]=hashset __placeholderloc idx idx (idx len(items if loc loc idx items[locitem return true the code in sect does not add an item that is already in the list the while loop is the linear probing part of the code the index idx is incrementedmod the length of the listuntil either the item is found or none value is encountered finding none value indicates the end of the linear chain and hence the end of any linear searching that must be done to determine if the item is already in the set if the item is not in the listthen the item is added either at the location of the first __placeholder object found in the searchor at the location of the none value at the end of the chain there is one more issue that must be dealt with when adding value imagine that only one position was open in the hash set list what would happen in the code abovethe linear search would result in searching the entire list if the list were fullthe result would be an infinite loop we don' want either to happen in factwe want to be able to add an item in amortized ( time to insure that we get an amortized complexity of ( )the list must never be full or almost full this copy belongs to 'acha |
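To watch linear probing on a small scale, the stand-alone sketch below (independent of the HashSet class, and without the duplicate check) inserts a few strings into a list of length 7, walking forward whenever a slot is already occupied.

    def probeInsert(items, item):
        # Start at the item's home index and walk forward, wrapping around,
        # until an empty slot is found. Assumes the list is never full.
        idx = hash(item) % len(items)
        while items[idx] is not None:
            idx = (idx + 1) % len(items)
        items[idx] = item

    items = [None] * 7
    for word in ["cow", "fox", "dog"]:
        probeInsert(items, word)

    print(items)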
14,557 | sets and maps the load factor the fullness of the hash set list is called its load factor we can find the load factor of hash set by dividing the number of items stored in the list by its length really small load factor means the list is much bigger than the number of items stored in it and the chance there is collision is small high load factor means more efficient space utilizationbut higher chance of collision experimentation can help to determine optimal load factorsbut reasonable maximum load factor is full when adding value into the listif the resulting load factor is greater than then all the values in the list must be transferred to new list to transfer the values to new list the values must be hashed again because the new list is different length this process is called rehashing in the hash set implementation we chose to double the size of the list when rehashing was necessary the code in sect calls the __add function from sect this code and the __add method are in the hashset class the __add and __rehash functions are hidden helper functions used by the publicly accessible add method hashset add def __rehash(oldlistnewlist)for in oldlistif !none and type( !hashset __placeholderhashset __add( ,newlist return newlist def add(selfitem)if hashset __add(item,self items)self numitems + load self numitems len(self itemsif load > self items hashset __rehash(self items,[none]* *len(self items)since the load factor is managedthe amortized complexity of adding value to the list is ( this means the length of any chain within the list will be finite length independent of the number of items in the hash set deleting an item deleting value from hash set means first finding the item this may involve doing linear search in the chain of values that reside at location in the list if the value to be deleted is the last in chain then it can be replaced with none if it is in the middle of chain then we cannot replace it with none because this would cut the chain of values insteadthe item is replaced with __placeholder object place holder object does not break chain and linear probe continues to search skipping over placeholder objects when necessary the remove helper function is given in sect this copy belongs to 'acha |
14,558 | hashset remove helper function class __placeholderdef __init__(self)pass def __eq__(self,other)return false def __remove(item,items)idx hash(itemlen(items while items[idx!noneif items[idx=itemnextidx (idx len(itemsif items[nextidx=noneitems[idxnone elseitems[idxhashset __placeholder(return true idx (idx len(items return false when removing an itemthe load factor may get too low to be efficiently using space in memory when the load factor dips below %the list is again rehashed to decrease the list size by one half to increase the load factor the remove method is provided in sect hashset remove def remove(selfitem)if hashset __remove(item,self items)self numitems - load max(self numitems len(self itemsif load < self items hashset __rehash(self items,[none]*int(len(self items)/ )elseraise keyerror("item not in hashset"for the same reason that adding value can be done in ( timedeleting value can also be done with an amortized complexity of ( the discard method is nearly the same of the remove method presented in sect except that no exception is raised if the item is not in the set when it is discarded finding an item to find an item in hash set involves hashing the item to find its address and then searching the possible chain of values the chain terminates with none if the item is in the chain somewhere then the __contains__ method will return true and false will be returned otherwise the method in sect is called when item in set is written in program this copy belongs to 'acha |
14,559 | sets and maps hashset membership def __contains__(selfitem)idx hash(itemlen(self itemswhile self items[idx!noneif self items[idx=itemreturn true idx (idx len(self items return false finding an item results in ( amortized complexity as well the chains are kept short as long as most hash values are evenly distributed and the load factor is kept from approaching iterating over set to iterate over the items of set we need to define the __iter__ method to yield the elements of the hashset the method traverses the list of items skipping over placeholder elements and none references here is the code for the iterator def __iter__(self)for in range(len(self items))if self items[ !none and type(self items[ ]!hashset __placeholderyield self items[iother set operations many of the other set operations on the hashset are left as an exercise for the reader howevermost of them can be implemented in terms of the methods already presented in this consider the difference_update method it can be implemented using the iteratorthe membership testand the discard method the code in sect provides the implementation for the difference_update method hashset difference update def difference_update(selfother)for item in otherself discard(itemthe difference_update method presented in sect is mutator method because it alters the sequence referenced by self compare that with the difference method in sect which does not mutate the object referenced by self insteadthe difference method returns new set which consists of the difference of self and the other set this copy belongs to 'acha |
14,560 | hashset difference def difference(selfother)result hashset(selfresult difference_update(otherreturn result the difference method is implemented using the difference_update method on the result hashset notice that new set is returned the hash set referenced by self is not updated the code is simple and it has the added benefit that if difference_update is correctly writtenso will this method programmers should always avoid writing duplicate code when possible and difference and difference_udpate are nearly identical except that the difference method performs the difference on newly constructed set instead of the set that self references solving sudoku using set or hashset datatypewe now have the tools to solve most sudoku puzzles puzzle can be read from file where known values are represented by their digit and unknown values are ' as in the puzzle below reading the dateable can be done line at time splitting the line will provide string with each known and unknown value as separate item in the list when an is encountered set containing all values - can be constructedjust as you would if solving sudoku by hand when known value is found set with the known number in it can be constructed the sets are added to two-dimensional matrix the matrix is list of lists so reading line corresponds to reading row of the matrix each line becomes list of sets each list of sets is added to list we'll call the matrix somatrix[row][colis one set within the sudoku puzzle there are sets within sudoku puzzle howevereach set is member of three groupsthe rowthe columnand the square in which it resides the rules presented in the sudoku puzzle description above each deal with reducing set within one of these groups after reading the input from the file and forming the matrix with the sets groups are formed by creating list of groups shallow copy of each row is first appended to the list of groups shallow copy means that each of the sets is the same set that was created when the puzzle was read this copy belongs to 'acha |
14,561 | sets and maps deep copy would create copy of each of the sets shallow copy of list does not copy the sets within the list calling list on list will make shallow copy another group is formed for each column and those groups are appended to the list of groups finallya group is formed for each square and those groups are appended to the list of groups when all donethere is one groups list with groups in it when forming these groups it is critical that the same set appears in each of three groups this is because when row is reducedwe want the changes in that row to be reflected in the columns and squares where the elements of the row also appear solving sudoku puzzle means reducing the number of items in each set of group according to the two rules presented in sect writing function called reducegroup that is given list of sets to reduce can help the function reducegroup should return true if it was able to reduce the group and false if it was not given this reducegroup functionthe reduce function is as defined in sect the sudoku reduce function def reduce(matrix)changed true groups getgroups(matrix while changedchanged reducegroups(groupsthis algorithm is quite simpleand yet very powerful it is different than any other algorithms that have been presented so far in this text the concept is quite simplekeep reducing until no more reductions are possible we are guaranteed that it will terminate since each iteration of the algorithm reduces the number of items in some of the sets of the puzzle we never increase the size of any of these sets once we return from this function we may or may not have solution there are some puzzles that will not be solved by this sudoku solver because the two rules that are presented above are not powerful enough for all puzzles there are some situations where the number of items in set cannot be reduced by looking at just one group you would have to look at more than one group at the same time to figure out how to reduce the number of sets hold onyou don' need to figure out any more rules the next will present an algorithm for solving all sudoku puzzleseven the very hardest of them the rules presented in this will solve the sudoku puzzle given in this section and many others sudoku puzzles one through six on the text' website can be solved by this sudoku solver the reducegroups function above would presumably call reducegroup on each of the groups in its list and it would return true if any of the groups are reduced and false otherwise this copy belongs to 'acha |
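A reduceGroup function that applies rule 1 and rule 2 from earlier in the chapter, together with the reduceGroups function it implies, might be sketched as follows. This is not the text's own code; it assumes getGroups returns the 27 groups as lists of nine set objects that are shared with the matrix, so a reduction made through one group is visible to the other two groups each cell belongs to.

    def reduceGroup(group):
        changed = False

        # Rule 1: if k cells hold exactly the same set of k candidates, those
        # candidates can be removed from every other cell in the group.
        for cell in group:
            duplicates = [c for c in group if c == cell]
            if len(duplicates) == len(cell):
                for other in group:
                    if other != cell and not other.isdisjoint(cell):
                        other.difference_update(cell)
                        changed = True

        # Rule 2: if a cell contains a candidate that appears in no other
        # cell of the group, that candidate must be the cell's value.
        for cell in group:
            rest = set()
            for other in group:
                if other is not cell:
                    rest.update(other)
            unique = cell - rest
            if len(unique) == 1 and len(cell) > 1:
                cell.intersection_update(unique)
                changed = True

        return changed

    def reduceGroups(groups):
        # Report True if any group was reduced during this pass so that
        # reduce keeps iterating until nothing changes.
        changed = False
        for group in groups:
            if reduceGroup(group):
                changed = True
        return changed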
14,562 | maps map in computer science is not like the map you used to read when going someplace in your car the term map is more mathematical term referring to function that maps domain to range you may have already used map in python maps are called by many names including dictionarieshash tablesand hash maps they are all the same data structure map or dictionary maps set of unique keys to their associated values much the way function maps value in the domain to the range key is what we provide to map when we want to look for the key/value pair the keys of map are unique there can only be one copy of specific key value in the dictionary at time as we saw in onepython has built-in support for dictionaries or maps here is some sample interaction with dictionary in the python shell python ( : feb : : [gcc (apple inc build )on darwin type "help""copyright""creditsor "licensefor more information { ["dog""catd["batman""jokerd["superman""lex lutherfor key in dprint(keybatman dog superman for key in dprint(key, [key]batman joker dog cat superman lex luther len( ["dog""skunkd["dog"'skunka mapor dictionaryis lot like set set and dictionary both contain unique values the set datatype contains group of unique values map contains set of unique keys that map to associated values like setswe can look up key in the mapand its associated valuein ( time as you might expectmapslike setsare implemented using hashing while the underlying implementation is the samemaps and sets are used differently the table below provides the methods and operators of maps or dictionaries and their associated complexities the operations in the table above have the expected complexities given hashing implementation as was presented in sect the interesting difference is that key/value pairs are stored in the dictionary as opposed to just the items of set the key part of the key/value pair is used to determine if key is in the dictionary as you might expect the value is returned when appropriate this copy belongs to 'acha |
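a few of the dictionary operations in the table are less familiar than indexing and iteration. here is a short demonstration using python's built-in dict; the sample keys and values are made up for illustration.

d = {"dog": "cat", "batman": "joker"}

print(d.get("dog"))                      # cat
print(d.get("skunk", "unknown"))         # unknown -- a default instead of a KeyError
d.setdefault("superman", "lex luther")   # adds the pair only if the key is absent
print("superman" in d)                   # True

for key, value in d.items():             # iterate over key/value pairs
    print(key, value)

sidekick = d.pop("batman")               # removes the pair and returns "joker"
print(sidekick, len(d))                  # joker 2

e = d.copy()                             # a shallow copy of the dictionary
d.clear()
print(len(d), len(e))                    # 0 2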
14,563 | sets and maps the hashmap class hashmap classlike the dict class in pythonuses hashing to achieve the complexities outlined in the table in fig private __kvpair class is defined instances of __kvpair hold the key/value pairs as they are added to the hashmap object with the addition of __getitem__ method on the hashset classthe hashset class could be used for the hashmap class implementation the additional __getitem__ method for the hashset is given in sect hashset get item one extra hashset method for use with the hashmap class def __getitem__(selfitem)idx hash(itemlen(self itemswhile self items[idx!noneif self items[idx=itemreturn self items[idx idx (idx len(self items return none operation dictionary creation complexity ( usage {[iterable]size membership ( ( len(dk in nonmembership add lookup ( not in ( ( [kv [klookup ( get( [,default]remove key/value pair items ( del [ko( items(keys ( keys(values ( values(pop ( pop(kpop item set default ( ( update (nd popitem( setdefault( [default] update(eclear dictionary copy ( (nd clear( copy(description calls the constructor to create dictionary iterable is an optional initial contents in which case it is (ncomplexity the number of key/value pairs in the dictionary returns true if is key in and false otherwise returns true if is not key in and false otherwise adds ( ,vas key/value pair in returns the value associated with the keyk keyerror exception is raised if is not in returns for the key/value pair ( ,vif is not in returns default or none if not specified removes the ( ,vkey value pair from raises keyerror if is not in returns view of the key/value pairs in the view updates as changes returns view of the keys in the view updates as changes returns view of the values in the view updates as changes returns the value associated with key and deletes the item raises keyerror if is not in return an abritrary key/value pair( , )from sets as key in and maps to default or none if not specified updates the dictionarydwith the contents of dictionary removes all key/value pairs from returns shallow copy of fig dictionary operations this copy belongs to 'acha |
14,564 | thento implement the hashmap we can use hashset as shown in sect in the __kvpair class definition it is necessary to define the __eq__ method so that keys are compared when comparing two items in the hash map the __hash__ method of __kvpair hashes only the key value since keys are used to look up key/value pairs in the hash map the implementation provided in sect is partial the other methods are left as an exercise for the reader the hashmap class class hashmapclass __kvpairdef __init__(self,key,value)self key key self value value def __eq__(self,other)if type(self!type(other)return false return self key =other key def getkey(self)return self key def getvalue(self)return self value def __hash__(self)return hash(self key def __init__(self)self hset hashset hashset( def __len__(self)return len(self hset def __contains__(self,item)return hashset __kvpair(item,nonein self hset def not__contains__(self,item)return item not in self hset def __setitem__(self,key,value)self hset add(hashmap __kvpair(key,value) def __getitem__(self,key)if hashmap __kvpair(key,nonein self hsetval self hset[hashmap __kvpair(key,none)getvalue(return val raise keyerror("key str(keynot in hashmap" def __iter__(self)for in self hsetyield getkey(this copy belongs to 'acha |
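assuming the HashMap class above is completed as described (the remaining methods are left as an exercise), it can be used much like a built-in dict. a short usage sketch; the sample data is made up and the class definition above is assumed to be in scope.

def main():
    phone = HashMap()

    # __setitem__ adds a key/value pair to the map
    phone["kent"] = "555-1234"
    phone["wayne"] = "555-9876"

    # __contains__ and __getitem__ locate a pair by its key in O(1) time
    if "kent" in phone:
        print(phone["kent"])

    # __iter__ yields the keys of the map
    for name in phone:
        print(name, phone[name])

    print(len(phone))

if __name__ == "__main__":
    main()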
14,565 | sets and maps the provided implementation in sect helps to demonstrate the similarities between the implementation of the hashset class and the hashmap classor between the set and dict classes in python the two types of data structures are both implemented using hashing both rely heavily on ( membership test while understanding how the hashmap class is implemented is importantmost programming languages include some sort of hash map in their library of built-in typesas does python it is important to understand the complexity of the methods on hash mapbut just as important is understanding when to use hash map and how it can be used read on to see how you can use hash map in code you write to make it more efficient memoization memoization is an interesting programming technique that can be employed when you write functions that may get called more than once with the same arguments the idea behind memoization is to do the work of computing value in function once thenwe make note to ourselves so when the function is called with the same arguments againwe return the value we just computed again this avoids going to the work of computing the value all over again powerful example of this is the recursive fibonacci function the fibonacci sequence is defined as follows fib( fib( fib(nfib( - fib( - this sequence can be computed recursively by writing python function as follows def fib( )if = return if = return return fib( - fib( - howeverwe would never want to use this function for anything but simple demonstration of small fibonacci number the function cannot be used to computing something as big as fib( even running the function with an argument of will take very long time on even the fastest computers consider what happens to compute fib( to do that fib( and fib( must first be computed then the two results can be added together to find fib( butto compute fib( the values fib( and fib( must be computed now we are computing fib( twice to compute fib( )but to compute fib( we must compute fib( and fib( butfib( must be computed to find fib( as well figure shows all the calls to fib to compute fib( this copy belongs to 'acha |
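the duplicated work can also be measured directly by counting the calls. a small sketch follows; the global counter is added only for this demonstration and is not part of the text's fib function.

callCount = 0

def fib(n):
    # the same recursive fib, instrumented to count how often it is called
    global callCount
    callCount += 1
    if n == 0:
        return 1
    if n == 1:
        return 1
    return fib(n-1) + fib(n-2)

for n in [5, 10, 20, 25]:
    callCount = 0
    fib(n)
    print(n, callCount)

# this version prints 5 15, 10 177, 20 21891, and 25 242785, which shows
# the number of calls roughly doubling every time n grows by one.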
14,566 | fig computing fib( as you can see from fig it takes lot of calls to the fib function to compute fib( now imagine how many calls it would take to compute fib( to compute fib( we first have to compute fib( and then compute fib( it took calls to fib to compute fib( and from the figure we can see that it takes calls to compute fib( including the call to fib( it will take calls to fib to compute fib( computing fib( will take calls or calls computing fib(nthis way more than doubles the number of calls to compute fib( - this is called exponential growth the complexity of the fib function is ( function with exponential complexity is worthless except for very small values of all is not lost there are better ways of computing the fibonacci sequence the way to improve the efficiency is to avoid all that unnecessary work once fib( has been computedwe shouldn' compute it again we already did that work there are at least couple of ways of improving the efficiency one method involves removing the recursion and computing fib(nwith loopwhich is probably the best option howeverthe recursive function is closer to the original definition we can improve the recursive version of the function with memoization in sect the memo dictionary serves as our mapping from values of to their fib(nresult memoized fibonacci function memo { def fib( )if in memoreturn memo[ if = memo[ this copy belongs to 'acha |
14,567 | sets and maps return if = memo[ return val fib( - fib( - memo[nval return val def main()print(fib( ) if __name__ ="__main__"main(the memoized fib function in sect records any value returned by the function in its memo the memo variable is accessed from the enclosing scope the memo is not created locally because we want it to persist from one call of fib to the next each time fib is called with new value of the answer is recorded in the memo when fib(nis called subsequent time for some nthe memoized result is looked up and returned the resultthe memoized fib function now has (ncomplexity and it can compute fib( almost instantly without memoizationit would take , , , , , , , calls to the fib function to compute fib( assuming each function call completed in microsecondsit would take roughly million years to compute fib( with memoization it takes calls to fib and assuming microseconds per callthat' microseconds or / of second this is an extreme example of the benefit of memoizationbut it can come in handy in many situations for instancein the tic tac toe problem of chap the minimax function is called on many boards that are identical the minimax function does not care if an is placed in the upper-right corner first followed by the lower-left corner or vice-versa yetthe way minimax is written it will be called to compute the value of the same board multiple times memoizing minimax speeds up the playing of tic tac toe correlating two sources of information another use of map or dictionary is in correlating data from different sources assume you are given list of cities and the zip code or codes within those cities you want to provide service where people can look up the zip code for city in the usa soyou'll be given city by the web page that provides you the information you have to use that city to find list of possible zip codes you could search the list of cities to find the corresponding list of zip codes oryou could create dictionary from city name to zip code list then when given city name you check to see if it is in the dictionary and if soyou can look up the corresponding list of zip codes in ( time this copy belongs to 'acha |
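the city to zip code correlation described above can be sketched with a dictionary whose values are lists. the record format, names, and sample data below are assumptions made for illustration.

# correlate two sources of information: a list of (city, zipcode) records
# is turned into a dictionary from city name to a list of zip codes.
records = [("onalaska", "54650"), ("la crosse", "54601"),
           ("la crosse", "54602"), ("la crosse", "54603")]

cityToZip = {}

# building the map is O(n) overall
for city, zipcode in records:
    if city not in cityToZip:
        cityToZip[city] = []
    cityToZip[city].append(zipcode)

# each lookup is then O(1) on average
city = "la crosse"
if city in cityToZip:
    print(cityToZip[city])
else:
    print("no zip codes found for", city)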
14,568 | summary in this we explored the implementation and some uses of sets and maps in python hashing is an important concept hashing data structures must be able to handle collisions within the hash table by collision resolution strategy the resolution strategy explored in this was linear probing there are other collision resolution strategies possible any collision resolution strategy must have way of handling new values being added to chain and existing values being deleted from chain the key feature of hashing is the amortized ( complexity for membership testing and lookup within the table the ability to test membership or lookup value in ( time makes many algorithms efficient that otherwise might not run efficiently on large data sets memoization is one important use of dictionary or map by memoizing function we avoid doing any redundant work another important use of maps or dictionaries is in correlating sources of information when we are given information from two different sources and must match those two sourcesa map or dictionary will make that correlation efficient review questions answer these short answermultiple choiceand true/false questions to test your mastery of the what type of value is hash code hash codes can be both positive and negative how does hash code get converted into value that can be used in hash table once you find the proper location with hash tablehow do you know if the item you are looking for is in the table or notbe careful to answer this completely why is collision resolution strategy needed when working with hash table what is the difference between map and set in this the hashset was used to implement the hashmap class what if we turned things aroundhow could dictionary in python be used to implement setdescribe how this might be done by describing the add and membership methods of set and how they would be implemented if internally the set used dictionary how does the load factor affect the complexity of the membership test on the set datatype what is rehashing when is memoization an effective programming technique true or falsememoization would help make the factorial function run fasterjustify your answer this copy belongs to 'acha |
14,569 | sets and maps def fact( )if = return return fact( - def main() fact( print(" is",xif __name__ ="__main__"main( programming problems complete the sudoku puzzle as described in the the program should read text file prompt the user for the name of the text file the text file should be placed in the same directory or folder as the program so it can easily be found by your program there are six sample sudoku puzzles that you can solve available on the text' website write the program to read text file like those you find on the text' website print both the unsolved and solved problem to the screen as shown below please enter sudoku puzzle file namesudoku txt solving this puzzle solution valid solutionthis copy belongs to 'acha |
14,570 | complete the hashset class found in the by implementing the methods described in the two tables of set operations thenwrite main function to test these operations save the class in file called hashset py so it can be imported into other programs if you call your main function in hashset py with the if __name__ ="__main__statementthen when you import it into another program your hashset py main function will not be executedbut when you run hashset py on its ownits main function will run to test your hashset class memoize the tic tac toe program from chap to improve its performance to do this each board must have hash value you should implement __hash__ method for the board class the hash value should be unique to board' configuration in other wordsthe 'so'sand dummy objects should factor into the hash value for the board so that each board has its own unique hash value then memoize the minimax function to remember the value found for particular board' configuration the minimax function should start by checking whether or not the value for this board has already been computed and the function should return it if it has write version of the hashset class that allows you to specify the maximum and minimum allowable load factor then run number of tests where you plot the average time taken to add an item to set given different maximum load factors also gather information about the average time it takes to test the membership of an item in set for different maximum load factors from this information you should be able to see some of the space/time trade-off in hash tables generate xml data in the plot format from these experimental results and plot the data to see what it tells you from the gathered informationexpress your opinion about the optimal load factor for the hashset class comment on the optimal maximum load factor at the top of the program that performs your tests this copy belongs to 'acha |
14,571 | trees when we see tree in our everyday lives the roots are generally in the ground and the leaves are up in the air the branches of tree spread out from the roots in more or less organized fashion the word tree is used in computer science when talking about way data may be organized trees have some similarities to the linked list organization found in chap in tree there are nodes which have links to other nodes in linked list each node has one linkto the next node in the list in tree each node may have two or more links to other nodes tree is not sequential data structure it is organized like treeexcept the root is at the top of tree data structures and the leaves are at the bottom tree in computer science is usually drawn inverted when compared to the trees we see in nature there are many uses for trees in computer science sometimes they show the structure of bunch of function calls as we saw when examining the fibonacci function as depicted in fig figure depicts call tree of the fib function for computing fib( unlike real trees it has root (at the topand leaves at the bottom there are relationships between the nodes in this tree the fib( call has left sub-tree and right sub-tree the fib( node is child of the fib( node the fib( node is sibling to the fib( node to the right of it leaf node is node with no children the leaf nodes in fig represent calls to the fib function which matched the base cases of the function in this we'll explore trees and when it makes sense to build and or use tree in program not every program will need tree data structure neverthelesstrees are used in many types of programs knowledge of them is not only necessityproper use of them can greatly simplify some types of programs goals this introduces trees and some algorithms that use trees by the end of the you should be able to answer these questions how are trees constructedhow can we traverse tree(cspringer international publishing switzerland lee and hubbarddata structures and algorithms with pythonundergraduate topics in computer sciencedoi -- this copy belongs to 'acha |
14,572 | trees fig the call tree for computing fib( how are expressions and trees relatedwhat is binary search treeunder what conditions is binary search tree usefulwhat is depth first search and how does it relate to trees and search problemswhat are the three types of tree traversals we can do on binary treeswhat is grammar and what can we do with grammarread on to discover trees and their uses in computer science abstract syntax trees and expressions trees have many applications in computer science they are used in many different types of algorithms for instanceevery python program you write is converted to treeat least for little whilebefore it is executed by the python interpreter internallya python program is converted to tree-like structure called an abstract syntax treeoften abbreviated astbefore it is executed we can build our own abstract syntax trees for expressions so we can see how tree might be evaluated and why we would want to evaluate tree in chap linked lists were presented as way of organizing list trees may be stored using similar kind of structure if node in tree has two childrenthen that node would have two links to its children as opposed to linked list which has one link to the next node in the sequence consider the expression ( we can construct an abstract syntax tree for this expression as shown in fig since the operation is the last operation performed when evaluating this functionthe node will be at the root of the tree it has two subtreesthe expression to the left of the and then to the right of the this copy belongs to 'acha |
14,573 | fig the ast for ( similarlynodes for the other operators and operands can be constructed to yield the tree shown in fig to represent this in the computerwe could define one class for each type of node we'll define timesnodea plusnodeand numnode class so we can evaluate the abstract syntax treeeach node in the tree will have one eval method defined on it the code in sect defines these classesthe eval methodsand main function that builds the example tree in fig constructing asts class timesnodedef __init__(selfleftright)self left left self right right def eval(self)return self left eval(self right eval( class plusnodedef __init__(selfleftright)self left left self right right def eval(self)return self left eval(self right eval( class numnodedef __init__(selfnum)self num num def eval(self)return self num def main() numnode( numnode( plusnode( ,yt timesnode(pnumnode( )root plusnode(tnumnode( ) this copy belongs to 'acha |
14,574 | trees print(root eval() if __name__ ="__main__"main(in sect the tree is built from the bottom ( the leavesup to the root the code above contains an eval function for each node calling eval on the root node will recursively call eval on every node in the treecausing the result to be printed to the screen once an ast is builtevaluating such tree is accomplished by doing recursive traversal of the tree the eval methods together are the recursive function in this example we say that the eval methods are mutually recursive since all the eval methods together form the recursive function prefix and postfix expressions expressionsas we normally write themare said to be in infix form an infix expression is an expression written with the binary operators in between their operands expressions can be written in other forms though another form for expressions is postfix in postfix expression the binary operators are written after their operands the infix expression ( can be written in postfix form as postfix expressions are well-suited for evaluation with stack when we come to an operand we push the value on the stack when we come to an operatorwe pop the operands from the stackdo the operationand push the result evaluating expressions in this manner is quite easy for humans to do with little practice hewlett-packard has designed many calculators that use this postfix evaluation method in factin the early years of computinghewlett-packard manufactured whole line of computers that used stack to evaluate expressions in the same way the hp was one such computer in more recent times many virtual machines are implemented as stack machines including the java virtual machineor jvmand the python virtual machine as another example of tree traversalconsider writing method that returns string representation of an expression the string is built as the result of traversal of the abstract syntax tree to get string representing an infix version of the expressionyou perform an inorder traversal of the ast to get postfix expression you would do postfix traversal of the tree the inorder methods in sect perform an inorder traversal of an ast ast tree traversal class timesnodedef __init__(selfleftright)self left left self right right this copy belongs to 'acha |
14,575 | def eval(self)return self left eval(self right eval( def inorder(self)return "(self left inorder(self right inorder(") class plusnodedef __init__(selfleftright)self left left self right right def eval(self)return self left eval(self right eval( def inorder(self)return "(self left inorder(self right inorder(") class numnodedef __init__(selfnum)self num num def eval(self)return self num def inorder(self)return str(self num the inorder methods in sect provide for an inorder traversal because each binary operator is added to the string in between the two operands to do postorder traversal of the tree we would write postorder method that would add each binary operator to the string after postorder traversing the two operands note that because of the way postorder traversal is writtenparentheses are never needed in postfix expressions one other traversal is possiblecalled preorder traversal in preorder traversaleach binary operator is added to the string before its two operands given the infix expression ( the prefix equivalent is againbecause of the way prefix expression is writtenparentheses are never needed in prefix expressions parsing prefix expressions abstract syntax trees are almost never constructed by hand they are often built automatically by an interpreter or compiler when python program is executed the python interpreter scans it and builds an abstract syntax tree of the program this part of the python interpreter is called parser parser is programor part of programthat reads file and automatically builds an abstract syntax tree of the expression ( source program)and reports syntax error if the program or expression is not properly formed the exact details of how this is accomplished this copy belongs to 'acha |
14,576 | trees is beyond the scope of this text howeverfor some simple expressionslike prefix expressionsit is relatively easy to build parser ourselves in middle school we learned when checking to see if sentence is properly formed we should use the english grammar grammar is set of rules that dictate how sentence in language can be put together in computer science we have many different languages and each language has its own grammar prefix expressions make up language we call them the language of prefix expressions and they have their own grammarcalled context-free grammar context-free grammar for prefix expressions is given in sect the prefix expression grammar ( ,ewhere {et {identifiernumber+* is defined by the set of productions number grammargconsists of three setsa set of non-terminals symbols denoted by na set of terminals or tokens called and setpof productions one of the nonterminals is designated the start symbol of the grammar for this grammarthe special symbol is the start symbol and only non-terminal of the grammar the symbol stands for any prefix expression in this grammar there are three productions that provide the rules for how prefix expressions can be constructed the productions state that any prefix expression is composed of (you can read as is composed of plus sign followed by two prefix expressionsa multiplication symbol followed by two prefix expressionsor just number the grammar is recursive so every time you see in the grammarit can be replaced by another prefix expression this grammar is very easy to convert to function that given queue of tokens will build an abstract syntax tree of prefix expression functionlike the function in sect that reads tokens and returns an abstract syntax tree is called parser since the grammar is recursivethe parsing function is recursive as well it has base case firstfollowed by the recursive cases the code in sect provides that function prefix expression parser import queue def ( )if isempty()raise valueerror("invalid prefix expression" token dequeue( this copy belongs to 'acha |
14,577 | if token ="+"return plusnode( ( ), ( ) if token ="*"return timesnode( ( ), ( ) return numnode(float(token) def main() input("please enter prefix expression" lst split( queue queue( for token in lstq enqueue(token root ( print(root eval()print(root inorder() if __name__ ="__main__"main(in sect the parameter is queue of the tokens read from the file or string code to call this function is provided in the main function of sect the main function gets string from the user and enqueues all the tokens in the string (tokens must be separated by spaceson queue of tokens then the queue is passed to the function this function is based on the grammar given above the function looks at the next token and decides which rule to apply each call to the function returns an abstract syntax tree calling from the main function results in parsing the prefix expression and building its corresponding tree this example gives you little insight into how python reads program and constructs an abstract syntax tree for it python program is parsed according to grammar and an abstract syntax tree is constructed from the program the python interpreter then interprets the program by traversing the tree this parser in sect is called top-down parser not all parsers are constructed this way the prefix grammar presented in this text is grammar where the top-down parser construction will work in particulara grammar cannot have any left-recursive rules if we are to create top-down parser for it left recursive rules occur in the postfix grammar given in sect the postfix expression grammar ( ,ewhere {et {identifiernumber+* is defined by the set of productions number this copy belongs to 'acha |
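postfix expressions like those generated by this grammar are easy to evaluate directly with a stack, as described earlier in the chapter: operands are pushed, and each operator pops two operands and pushes the result. a minimal sketch, using a python list as the stack; the function name postfixEval is an assumption and tokens are assumed to be separated by spaces.

def postfixEval(expr):
    stack = []

    for token in expr.split():
        if token == "+":
            # pop the two operands, add them, and push the result
            right = stack.pop()
            left = stack.pop()
            stack.append(left + right)
        elif token == "*":
            right = stack.pop()
            left = stack.pop()
            stack.append(left * right)
        else:
            # an operand is pushed onto the stack
            stack.append(float(token))

    # the final value left on the stack is the result
    return stack.pop()

print(postfixEval("5 4 + 6 * 3 +"))   # prints 57.0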
14,578 | trees in this grammar the first and second productions have an expression composed of an expressionfollowed by another expressionfollowed by an addition or multiplication token if we tried to write recursive function for this grammarthe base case would not come first the recursive case would come first and hence the function would not be written correctly since the base case must come first in recursive function this type of production is called left-recursive rule grammars with left-recursive rules are not suitable for top-down construction of parser there are other ways to construct parsers that are beyond the scope of this text you can learn more about parser construction by studying book on compiler construction or programming language implementation binary search trees binary search tree is tree where each node has up to two children in additionall values in the left subtree of node are less than the value at the root of the tree and all values in the right subtree of node are greater than or equal to the value at the root of the tree finallythe left and right subtrees must also be binary search trees this definition makes it possible to write class where values may be inserted into the tree while maintaining the definition the code in sect accomplishes this the binarysearchtree class class binarysearchtree this is node class that is internal to the binarysearchtree class class __node def __init__(self,val,left=none,right=none) self val val self left left self right right def getval(self)return self val def setval(self,newval) self val newval def getleft(self)return self left def getright(self)return self right def setleft(self,newleft) self left newleft def setright(self,newright) self right newright this method deserves little explanation it does an inorder traversal of the nodes of the tree yielding all the values in this waywe get this copy belongs to 'acha |
14,579 | the values in ascending order def __iter__(self)if self left !nonefor elem in self leftyield elem yield self val if self right !none for elem in self right yield elem below are the methods of the binarysearchtree class def __init__(self)self root none def insert(self,val) the __insert function is recursive and is not passed self parameter it is static function (not method of the classbut is hidden inside the insert function so users of the class will not know it exists def __insert(root,val) if root =nonereturn binarysearchtree __node(val if val root getval() root setleft(__insert(root getleft(),val) else root setright(__insert(root getright(),val) return root self root __insert(self root,val def __iter__(self)if self root !nonereturn self root __iter__( elsereturn [__iter__( def main() input("enter list of numbers" lst split( tree binarysearchtree( for in lsttree insert(float( ) for in treeprint( if __name__ ="__main__"main(when the program in sect is run with list of values (they must have an orderingit will print the values in ascending order for instanceif is entered at the keyboardthe program behaves as follows this copy belongs to 'acha |
14,580 | trees enter list of numbers from this example it appears that binary search tree can produce sorted list of values when traversed howlet' examine how this program behaves with this input initiallythe tree reference points to binarysearchtree object where the root pointer points to none as shown in fig into the tree in fig we insert the the insert method is called which immediately calls the __insert function on the root of the tree the __insert function is given treewhich in this case is none ( an empty treeand the __insert function returns new tree with the value inserted the root instance variable is set equal to this new tree as shown in fig which is the consequence of line of the code in sect in the following figures the dashed line indicates the new reference that is assigned to point to the new node each time the __insert function is called new tree is returned and the root instance variable is re-assigned on line most of the time it is re-assigned to point to the same node nowthe next value to be inserted is the inserting the calls __insert on the root node containing when this is doneit recursively calls __insert on the right subtreewhich is none (and not picturedthe result is new right subtree is created and the right subtree link of the node containing is made to point to it as shown in fig which is the consequence of line in sect again the dashed arrows indicate the new references that are assigned during the insert it doesn' hurt anything to reassign the references and the code works very nicely in the recursive __insert we always reassign the reference on lines and after inserting new value into the tree likewiseafter inserting new valuethe root reference is reassigned to the new tree after inserting the new value on line of the code in sect fig an empty binarysearchtree object this copy belongs to 'acha |
14,581 | fig the tree after inserting fig the tree after inserting fig the tree after inserting nextthe is inserted into the tree as shown in fig the ended up to the right of the to preserve the binary search tree property the is inserted into the left subtree of the because is less than the is inserted next and because it is less than the it is inserted into the left subtree of the node containing because that subtree contains the is inserted into the left subtree of the node containing this is depicted in fig inserting the next means the value is inserted to the left of the and to the right of the this preserves the binary search tree property as shown in fig this copy belongs to 'acha |
14,582 | trees fig the tree after inserting fig the tree after inserting to insert the it must go to the right of all nodes inserted so far since it is greater than all nodes in the tree this is depicted in fig the goes to the right of the and to the left of the in fig the only place the can go is to the right of the left of the and right of the in fig the final tree is pictured in fig this is binary search tree since all nodes with subtrees have values less than the node in the left subtree and values greater than or equal to the node in the right subtree while both subtrees also conform to the binary search tree property the final part of the program in sect iterates over the tree in the main function this calls the __iter__ method of the binarysearchtree class this __iter__ method returns an iterator over the root' __node object the __node' __iter__ method is interesting because it is recursive traversal of the tree when for elem in self left is writtenthis calls the __iter__ method on the left subtree after all the elements in the left subtree are yieldedthe value at the root of the tree is yieldedthis copy belongs to 'acha |
14,583 | fig the tree after inserting fig the tree after inserting fig the tree after inserting this copy belongs to 'acha |
14,584 | trees fig the final binarysearchtree object contents then the values in the right subtree are yielded by writing for elem in self right the result of this recursive function is an inorder traversal of the tree binary search trees are of some academic interest howeverthey are not used much in practice in the average caseinserting into binary search tree takes (log ntime to insert items into binary search tree would take ( log ntime soin the average case we have an algorithm for sorting sequence of ordered items howeverit takes more space than list and the quicksort algorithm can sort list with the same big-oh complexity in the worst casebinary search trees suffer from the same problem that quicksort suffers from when the items are already sortedboth quicksort and binary search trees perform poorly the complexity of inserting items into binary search tree becomes ( in the worst case the tree becomes stick if the values are already sorted and essentially becomes linked list there are couple of nice properties of binary search trees that random access list does not have inserting into tree can be done in (log ntime in the average case while inserting into list would take (ntime deleting from binary search tree can also be done in (log ntime in the average case looking up value in binary search tree can also be done in (log ntime in the average case if we have lots of insertdeleteand lookup operations for some algorithma tree-like structure may be useful butbinary search trees cannot guarantee the (log ncomplexity it turns out that there are implementations of search tree structures that can guarantee (log ncomplexity or better for insertingdeletingand searching for values few examples are splay treesavl-treesand -trees which are all studied later in this text search spaces sometimes we have problem that may consist of many different states we may want to find particular state of the problem which we'll call the goal consider sudoku puzzles sudoku puzzle has statereflecting how much of it we have this copy belongs to 'acha |
14,585 | solved we are seeking goal which is the solution of the puzzle we could randomly try value in cell of the puzzle and try to solve the puzzle after having made that guess the guess would lead to new state of the puzzle butif the guess were wrong we may have to go back and undo our guess wrong guess could lead to dead end this process of guessingtrying to finish the puzzleand undoing bad guesses is called depth first search looking for goal by making guesses is called depth first search of problem space when dead end is found we may have to backtrack backtracking involves undoing bad guesses and then trying the next guess to see if the problem can be solved by making the new guess the description here leads to the depth first search algorithm in sect depth-first search algorithm def dfs(currentgoal)if current =goalreturn [current for next in adjacent(current)result dfs(nextif result !nonereturn [currentresult return none the depth first search algorithm may be written recursively in this code the depth first search algorithm returns the path from the current node to the goal node the backtracking occurs if the for loop completes without finding an appropriate adjacent node in that casenone is returned and the previous recursive call of dfs goes on to the next adjacent node to look for the goal on that path in the last an algorithm was presented for solving sudoku puzzles that works for many puzzlesbut not all in these casesdepth first search can be applied to the puzzle after reducing the problem as far as possible it is important to first apply the rules of the last to reduce the puzzle because otherwise the search space is too big to search in reasonable amount of time the solve function in sect includes depth first search that will solve any sudoku puzzle assuming that the reduce function applies the rules of the last to all the groups within puzzle the copy module must be imported for this code to run correctly sudoku depth-first search def solutionviable(matrix)check that no set is empty for in range( )for in range( )if len(matrix[ ][ ]= return false return true def solve(matrix)this copy belongs to 'acha |
14,586 | trees reduce(matrix if not solutionviable(matrix)return none if solutionok(matrix)return matrix print("searching " for in range( )for in range( )if len(matrix[ ][ ] for in matrix[ ][ ]mcopy copy deepcopy(matrixmcopy[ ][jset([ ] result solve(mcopy if result !nonereturn result return none in the solve function of sect reduce is called to try to solve the puzzle with the rules of the last after calling reduce we check to see if the puzzle is still solvable ( no empty setsif notthe solve function returns none the search proceeds by examining each location within the matrix and each possible value that the location could hold the for loop tries all possible values for cell with more than one possibility if the call to reduce solves the puzzlethe solutionok function will return true and the solve function will return the matrix otherwisethe depth first search proceeds by looking for cell in the matrix with more than one choice the function makes copy of the matrix called mcopy and makes guess as to the value in that location in mcopy it then recursively calls solve on mcopy the solve function returns none if no solution is found and the solved puzzle if solution is found sowhen solve is called recursivelyif none is returnedthe function continues to search by trying another possible value initially calling solve can be accomplished as shown in sect assuming that matrix is matrix of sets representing sudoku puzzle calling sudoku' solve function print("begin solving" matrix solve(matrix if matrix =noneprint("no solution found!!!"return this copy belongs to 'acha |
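the call to copy.deepcopy inside solve matters: a shallow copy of the matrix would still share the very same set objects, so a guess made while searching would corrupt the original puzzle. a small illustration, with made-up sets, follows.

import copy

row = [set([1, 2, 3]), set([4])]
matrix = [row]

shallow = list(matrix)            # shares the inner list and its sets
shallow[0][0].clear()
print(matrix[0][0])               # set() -- the original was changed too

matrix = [[set([1, 2, 3]), set([4])]]
deep = copy.deepcopy(matrix)      # copies every list and every set
deep[0][0].clear()
print(matrix[0][0])               # {1, 2, 3} -- the original is untouched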
14,587 | search spaces if non-none matrix is returnedthen the puzzle is solved and the solution may be printed this is one example where no tree is ever constructedyet the search space is shaped like tree and depth first search can be used to search the problem space summary tree-like structures appear in many problems in computer science tree datatype can hold information and allow quick insertdeleteand search times while binary search trees are not used in practicethe principles governing them are used in many advanced data structures like -treesavl-treesand splay trees understanding how references point to objects and how this can be used to build datatype like tree is an important concept for computer programmers to understand search spaces are often tree-like when making decision between several choices leads to another decision search space is not datatypeso in this case no tree is built howeverthe space that is searched has tree-like structure the key to doing depth first search of space is to remember where you were so you can backtrack when choice leads to dead end backtracking is often accomplished using recursion many algorithms that deal with trees are naturally recursive depth first searchtree traversalsparsingand abstract syntax evaluation may all be recursively implemented recursion is powerful mechanism to have in your toolbox for solving problems review questions answer these short answermultiple choiceand true/false questions to test your mastery of the is the root of tree in computer science at the top or bottom of tree how many roots can tree have full binary tree is tree that is full at each level of the treemeaning there is no room for another node at any level of the treeexcept at the leaves how many nodes are in full binary tree with three levelshow about levelshow about levels in full binary treewhat is the relationship between the number of leaves in the tree and the total number of nodes in the tree when constructing treefor which is it easiest to write codea bottom-up or top-down construction of the tree what term is used when wrong choice is made and another choice must be attempted when searching for value in tree how does search space differ from tree datatypethis copy belongs to 'acha |
14,588 | trees describe non-recursive algorithm for doing an inorder traversal of tree hintyour algorithm will need stack to get this to work write some code to build tree for the infix expression be sure to follow the precedence of operators and in your tree you may assume the plusnode and timesnode classes from the are already defined provide the prefix and postfix forms of programming problems write program that asks the user to enter prefix expression thenthe program should print out the infix and postfix forms of that expression finallyit should print the result of evaluating the expression interacting with the program should look like this please enter prefix expression the infix form is((( the postfix form is the result is if the prefix expression is malformedthe program should print that the expression is malformed and it should quit it should not try to print the infix or postfix forms of the expression in this case write program that reads list of numbers from the user and lets the user insertdeleteand search for values in the tree the program should be menu driven allowing for insertingsearchingand deleting from binary search tree inserting into the tree should allow for multiple inserts as follows binary search tree program make choice insert into tree delete from tree lookup value choice insert insert insert insert insert insert insert insert insertmake choice insert into tree delete from tree lookup value choice value yes is in the tree this copy belongs to 'acha |
14,589 | programming problems make choice insert into tree delete from tree lookup value choice value has been deleted from the tree make choice insert into tree delete from tree lookup value choice value was not in the tree the hardest part of this program is deleting from the tree you can write recursive function to delete value in some waysthe delete from tree function is like the insert function given in the you will want to write two functionsone that is method to call on binary search tree to delete valuethe other would be hidden recursive delete from tree function the recursive function should be given tree and value to delete it should return the tree after deleting the value from the tree the recursive delete function must be handled in three cases as follows case the value to delete is in node that has no children in this casethe recursive function can return an empty tree ( nonebecause that is the tree after deleting the value from it this would be the case if the were deleted from the binary search tree in fig in fig the right subtree of the node containing is now none and therefore the node containing is gone from the tree case the value to delete is in node that has one child in this casethe recursive function can return the child as the tree after deleting the value this would be the case if deleting from the tree in fig in this caseto delete fig the tree after deleting this copy belongs to 'acha |
14,590 | trees fig the tree after deleting the node containing from the tree you simply return the tree for the node containing so it ends up being linked to the node containing in fig the node containing is eliminated by making the left subtree of the node containing point at the right subtree of the node containing case this is is hardest case to implement when the value to delete is in node that has two childrenthen to delete the node we want to use another functioncall it getrightmostto get the right-most value of tree then you use this function to get the right-most value of the left subtree of the node to delete instead of deleting the nodeyou replace the value of the node with the right-most value of the left subtree then you delete the right-most value of the left subtree from the left subtree in fig the is eliminated by setting the node containing to the right-most value of the left subtree then is deleted from the left subtree fig the tree after deleting this copy belongs to 'acha |
14,591 | programming problems complete the sudoku program as described in chap and augment it with the depth first search described in sect to complete sudoku program that is capable of solving any sudoku puzzle it should solve these puzzles almost instantly if it is taking long time to solve puzzle it is likely because your reduce function is not reducing the puzzle as described in chap to complete this exercise you will need two functionsthe solutionok function and the solutionviable function the solutionviable function is given in the and returns true if none of the sets in the matrix are empty the solutionok function returns true if the solution is valid solution this can be checked very easily if any of the sets in the matrix do not contain contain exactly element then the solution is not okay and false should be returned if the union of any group within sudoku puzzle does not contain elements then the solution is not okay and false should be returned otherwisethe solution is okay and true should be returned after completing this program you should be able to solve sudoku problems like sudoku txt or sudoku txt which are available for download on the text' website design an orderedtreeset class which can be used to insert itemsdelete itemsand lookup items in an average case of (log ntime implement the in operator on this class for set containment also implement an iterator that returns the items of the set in ascending order the design of this set should allow items of any type to be added to the set as long as they implement the __lt__ operator this orderedtreeset class should be written in file called orderedtreeset py the main function of this module should consist of test program for your orderedtreeset class that thoroughly tests your code the main function should be called using the standard if statement that distinguishes between the module being imported or run itself design an orderedtreemap class which uses an orderedtreeset class in its implementation to organize this correctly you should create two modulesan orderedtreeset py module and an orderedtreemap py module have the ordered treemap class use the orderedtreeset class in its implementation the way hashset and hashmap were implemented in chap design test cases to thoroughly test your orderedtreemap class this copy belongs to 'acha |
14,592 | graphs many problems in computer science and mathematics can be reduced to set of states and set of transitions between these states graph is mathematical representation of problems like these in the last we saw that trees serve variety of purposes in computer science trees are graphs howevergraphs are more general than trees abstracting away the details of problem and studying it in its simplest form often leads to new insight as resultmany algorithms have come out of the research in graph theory graph theory was first studied by mathematicians many of the algorithms in graph theory are named for the mathematician that developed or discovered them dijkstra and kruskal are two such mathematicians and this covers algorithms developed by them representing graph can be done one of several different ways the correct way to represent graph depends on the algorithm being implemented graph theory problems include graph coloringfinding path between two states or nodes in graphor finding shortest path through graph among many others there are many algorithms that have come from the study of graphs to understand the formulation of these problems it is good to learn little graph notation which is presented in this as well goals this covers the representation of graphs it also covers few graph algorithms depth first search of graph is presentedalong with breadth first search dijkstra' algorithm is famous in computer science and has many applications from networking to construction planning kruskal' algorithm is another famous algorithm used to find minimum weighted spanning tree by the end of the you should have basic understanding of graph theory and how many problems in computer science can be posed in the form of graphs to begin we'll study some notation and depth first search of graph then we'll examine couple of greedy algorithms that answer some interesting questions about graphs greedy algorithms are algorithms that never make wrong choice in finding (cspringer international publishing switzerland lee and hubbarddata structures and algorithms with pythonundergraduate topics in computer sciencedoi -- this copy belongs to 'acha |
14,593 | graphs solution we'll examine two of these algorithms called kruskal' algorithm and dijkstra' algorithmboth named for the people that formulated the algorithm to solve their respective problems graph notation little notation will help in the graph definitions in this set is an unordered collection of items for instancev { is the set of the first natural numbers subset of set is some collectionpossibly emptyof items from its superset the set { is subset of the cardinality of set is its size or number of elements the cardinality of the set is written as | the cardinality of is and is so | and | graph ( ,eis defined by set of verticesnamed vand set of edgesnamed the set of edges are subsets of where each member of has cardinality in other wordsedges are denoted by pairs of vertices consider the simpleundirected graph given in fig the sets { and {{ },{ },{ },{ },{ },{ },{ },{ },{ },{ },{ },{ },{ },{ },{ }define this graph since each edge is itself set of cardinality the order of the vertices in each edge set does not matter for instance{ is the same edge as { many problems can be formulated in terms of graph for instancewe might ask how many colors it would take to color map so that no two countries that shared border were colored the same in this problem the vertices in fig would represent countries and two countries that share border would have an edge between them the problem can then be restated as finding the minimum number of colors required to color each vertex in the graph so that no two vertices that share an edge have the same color directed graph ( ,eis defined in the same way as an undirected graph except that the set of edgeseis set of tuples instead of subsets by defining {(vi vj where vi vj means that edges can be traversed in one direction fig an undirected graph this copy belongs to 'acha |
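the notation above translates directly into python's set types. a minimal sketch with made-up vertices and edges: frozensets are used for undirected edges because the members of a set must be hashable, and tuples are used for directed edges.

V = {0, 1, 2, 3, 4}

# undirected edges are two-element sets, so frozenset works as an edge
E = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3}),
     frozenset({3, 4}), frozenset({4, 0})}

print(len(V), len(E))             # the cardinalities |V| and |E|
print(frozenset({1, 0}) in E)     # True: {0,1} and {1,0} are the same edge

# directed edges are ordered pairs, so tuples are used instead
Edirected = {(0, 1), (1, 2), (2, 3)}
print((1, 0) in Edirected)        # False: direction matters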
14,594 | fig directed graph only in fig we can move from vertex to vertex along the edge ( , )but we cannot move from vertex to at least not without going through some other verticesbecause the edge ( , is not in the set path in graph is series of edgesnone repeatedthat can be traversed in order to travel from one vertex to another in graph cycle in graph is path which begins and ends with the same vertex the last covered trees in computer science now armed with some notation from graph theory we can give formal definition of tree tree is directedconnected acyclic graph an acyclic graph is graph without any cycles sometimes in graph theory tree is defined as an acyclic connected graph dropping the requirement that it be directed graph in this casea tree may be defined as graph which is fully connectedbut has only one path between any two vertices both directed and undirected graphs can be used to model many different kinds of problems the graph in fig might represent register allocation in cpu the vertices could represent symbolically named registers and two registers that were both in use at the same time would have an edge between them the question that might be asked is"how many physical registers of the machine are required for the symbolic registers of this computation?it turns out that register allocation and map coloring represent the same problem when we abstract away the detailsthe problem boils down to graph coloring problem an answer to "how many colors are required to color the map?would answer "how many physical registers are required for this computation?and vice-versa weighted graph is graph where every edge has weight assigned to it more formallya weighted graph ( , ,wis graph with the given set of verticesvand edgese in additiona weighted graph has weight functionwthat maps edges to real numbers so the signature of is given by we real weighted graphs can be used to represent the state of many different problems for instancea weighted graph might provide information about roads and intersections cost/benefit analysis can sometimes be expressed in terms of weighted graph the weights can represent the available capacity of network connections between nodes in network this copy belongs to 'acha |
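the weight function w can be represented as a dictionary that maps each edge to a number. a hedged sketch follows; the vertices, edges, and mileages are made up for illustration.

V = {"a", "b", "c", "d"}
E = {frozenset({"a", "b"}), frozenset({"b", "c"}),
     frozenset({"c", "d"}), frozenset({"a", "d"})}

# w : E -> real, here interpreted as miles of road on each edge
w = {frozenset({"a", "b"}): 4.0,
     frozenset({"b", "c"}): 2.5,
     frozenset({"c", "d"}): 3.0,
     frozenset({"a", "d"}): 6.5}

# the weight of an edge is looked up in O(1) time
print(w[frozenset({"b", "a"})])   # 4.0

# total miles if every edge were plowed
print(sum(w[e] for e in E))       # 16.0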
14,595 | graphs fig weightedundirected graph weighted graph can be used to represent the state of many different kinds of problems figure depicts weighted graph which represents roads and intersections searching graph many problems have been formulated in terms of graph theory one of the more common problems is discovering path from one vertex to another in graph the question might bedoes path exist from vertex vi to vj and if sowhat edges must you traverse to get thereperforming depth first search on graph is similar to the algorithm first presented in chap but we must be wary of getting stuck in cycle within graph consider searching for path from vertex to vertex in the directed graph of fig the blue lines in fig highlight the path between these vertices in the this copy belongs to 'acha |
14,596 | fig path from vertex to vertex graphthere seems to be only one choice in most cases howeverwhen the search reaches vertex we must choose between two edges one edge takes us back to vertex which we have already visited the other edge takes us closer to the final path another choice is made at vertex if the edge to is wrongly examinedwe must have way of backing up and trying the other edge to vertex searching graph in this manner is also called depth first searchas first discussed in chap and requires the ability to backtrack consider when vertex is encountered if choice is made to go to vertex we must be able to back up to fix that choice and go to vertex instead stack data structure or recursion handles the backtracking depth first search must also avoid possible cycles within the graph the avoidance of cycles is accomplished by maintaining set of visited vertices when vertex is visitedit is added to the visited set if vertex is in the visited setthen it is not examined again later in the search should cycle take the search back to the same vertex an iterative ( non-recursivegraph depth first search algorithm begins by initializing the visited set to the empty set and by creating stack for backtracking the start vertex is pushed onto the stack to begin the algorithm steps similar to those taken in sect are executed to find the goal this code is pseudo-codebut presents the necessary details iterative depth first search of graph def graphdfs(gstartgoal) ( ,eis the graph with verticesvand edgese , stack stack(visited set(stack push(start while not stack isempty() vertex is popped from the stack this is called the current vertex current stack pop(this copy belongs to 'acha |
14,597 |         # The current vertex is added to the visited set.
        visited.add(current)

        # If the current vertex is the goal vertex, then we discontinue the
        # search, reporting that we found the goal.
        if current == goal:
            return True  # or perhaps return the path to the goal

        # Otherwise, for every adjacent vertex, v, to the current vertex in the
        # graph, v is pushed on the stack of vertices yet to search, unless v is
        # already in the visited set, in which case the edge leading to v is ignored.
        for v in adjacent(current, E):
            if not v in visited:
                stack.push(v)

    # If we get this far, then we did not find the goal.
    return False  # or return an empty path

If the while loop above terminates, the stack was empty and therefore no path to the goal exists. This algorithm implements a depth first search of a graph. It can also be implemented recursively if pushing on the stack is replaced with a recursive call to depth first search. When implemented recursively, the depth first search function is passed the current vertex and a mutable visited set, and it returns either the path to the goal or, alternatively, a boolean value indicating that the goal or target was found. Given the graph pictured earlier, the search returns True.

The iterative version of depth first search can be modified to do a breadth first search of a graph if the stack is replaced with a queue. Breadth first search is an exhaustive search, meaning that it looks at all paths at the same time, but it will also find the shortest path, with the least number of edges, between any two vertices in a graph. Performing a breadth first search on large graphs may take too long to be of practical use.
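As a minimal sketch of how the pseudo-code might look as runnable Python, the fragment below uses a plain list as the stack and a dictionary of adjacency sets in place of the adjacent function; these representation choices, along with the sample graph, are assumptions for illustration, not the book's own code. Swapping the list for a collections.deque and popping from the left turns the same loop into a breadth first search.

from collections import deque

def graph_dfs(adjacency, start, goal):
    # adjacency maps each vertex to the set of vertices reachable along one edge.
    stack = [start]            # a Python list used as the stack of vertices to visit
    visited = set()
    while stack:
        current = stack.pop()  # depth first: take the most recently pushed vertex
        if current == goal:
            return True
        if current in visited:
            continue           # already expanded; skip duplicates left on the stack
        visited.add(current)
        for v in adjacency.get(current, set()):
            if v not in visited:
                stack.append(v)
    return False

def graph_bfs(adjacency, start, goal):
    # Identical structure, but popping from the left of a queue gives breadth first search.
    queue = deque([start])
    visited = set()
    while queue:
        current = queue.popleft()
        if current == goal:
            return True
        if current in visited:
            continue
        visited.add(current)
        for v in adjacency.get(current, set()):
            if v not in visited:
                queue.append(v)
    return False

# Example usage on a small, made-up directed graph:
adjacency = {0: {1}, 1: {2, 4}, 2: {3}, 4: {5}, 5: {2}}
print(graph_dfs(adjacency, 0, 3))   # prints True
print(graph_bfs(adjacency, 0, 3))   # prints True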
14,598 | Kruskal's Algorithm

Consider for a moment a county which is responsible for plowing roads in the winter but is running out of money due to an unexpected amount of snow. The county supervisor has been told to reduce costs by plowing only the necessary roads for the rest of the winter. The supervisor wants to find the smallest number of total miles that must be plowed so any person can travel from one point to any other point in the county, but not necessarily by the shortest route. In other words, the supervisor wants to minimize the miles of plowed roads while guaranteeing you can still get anywhere you need to in the county.

Joseph Kruskal was an American computer scientist and mathematician who lived from 1928 to 2010. He imagined this problem, formalized it in terms of a weighted graph, and devised an algorithm to solve it. His algorithm was first published in the Proceedings of the American Mathematical Society [ ] and is commonly called Kruskal's algorithm.

Fig. A minimum weighted spanning tree

The last chapter introduced trees by using them in various algorithms like binary search. The definition doesn't change from the last chapter, but trees, in the context of graph theory, are a subset of the set of all possible graphs. A tree is just a graph without any cycles. In addition, it is relatively easy to prove that a tree must contain one less edge than its number of vertices; otherwise, it would not be a tree.

Clearly the weighted graph shown earlier is not a tree: there are many cycles within the graph. Kruskal's paper presented an algorithm to find a minimum weighted spanning tree for such a graph. The figure contains a minimum weighted spanning tree for that graph, with the tree edges highlighted in orange. We don't say the minimum weighted spanning tree because, in general, there could be more than one minimum weighted spanning tree; in this case, there is likely only one possible.

Kruskal's algorithm is a greedy algorithm. The designation greedy means that the algorithm always chooses the first alternative when presented with a list of alternatives and never makes a mistake, or wrong choice, when choosing. In other words, no backtracking is required in Kruskal's algorithm.

The algorithm begins by sorting all the edges in ascending order of their weights. Assuming that the graph is fully connected, the spanning tree will contain |V| - 1 edges. The algorithm forms sets of all the vertices in the graph, one set for each vertex, initially containing just that vertex that corresponds to the set.
14,599 | In the example in the figure there are initially as many sets as vertices, each containing one vertex.

The algorithm proceeds as follows until |V| - 1 edges have been added to the set of spanning tree edges. The next shortest edge is examined. If the two vertex end points of the edge are in different sets, then the edge may be safely added to the set of spanning tree edges; a new set is formed from the union of the two vertex sets, and the two previous sets are dropped from the list of vertex sets. If the two vertex endpoints of the edge are already in the same set, the edge is ignored. That's the entire algorithm.

The algorithm is greedy because it always chooses the next smallest edge unless doing so would form a cycle. If a cycle would be formed by adding an edge, it is known right away, without having to undo any mistake or backtrack.

Fig. A snapshot of Kruskal's algorithm

Consider the snapshot in the figure: at this point the algorithm has already formed a forest of trees, but not a spanning tree yet. The edges in orange are part of the spanning tree.
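The steps just described can be prototyped in a few lines of Python. The sketch below is an assumption for illustration, not the book's implementation: edges are represented as (weight, u, v) tuples, and the per-vertex sets are tracked with a dictionary that maps each vertex to the set it currently belongs to.

def kruskal(vertices, weighted_edges):
    # weighted_edges is a list of (weight, u, v) tuples.
    # Start with one set per vertex, each containing just that vertex.
    set_of = {v: {v} for v in vertices}
    spanning_tree = []

    # Examine edges in ascending order of weight (the greedy choice).
    for weight, u, v in sorted(weighted_edges):
        if set_of[u] is not set_of[v]:
            # The endpoints are in different sets: the edge cannot form a
            # cycle, so add it and merge the two sets into one.
            spanning_tree.append((u, v, weight))
            merged = set_of[u] | set_of[v]
            for vertex in merged:
                set_of[vertex] = merged
            if len(spanning_tree) == len(vertices) - 1:
                break
        # Otherwise both endpoints are already in the same set and the
        # edge is ignored because it would create a cycle.

    return spanning_tree

# Example usage on a small weighted graph (weights invented for illustration):
edges = [(4, 'A', 'B'), (2, 'B', 'C'), (5, 'A', 'C'), (1, 'C', 'D'), (7, 'B', 'D')]
print(kruskal(['A', 'B', 'C', 'D'], edges))
# prints [('C', 'D', 1), ('B', 'C', 2), ('A', 'B', 4)]

A production implementation would normally replace the explicit set merging with a union-find (disjoint set) structure, which makes the membership and merge operations much faster on large graphs.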