The next shortest edge is considered next. When the edge's two endpoint vertices are already members of the same set, adding the edge cannot be done: it would form a cycle, so the edge is skipped; it cannot be part of the minimum weighted spanning tree. When the next shortest edge joins two vertices that belong to different sets, the edge is added to the minimum weighted spanning tree edges and a new set, the union of the two, is formed, replacing its previous two subsets, as depicted in the figure.

Figure: a snapshot of Kruskal's algorithm in progress.

The next two shortest edges after that again cannot be added, since in each case the endpoints are already in the same set and adding the edge would form a cycle, so each is skipped. The algorithm proceeds in this manner until the resulting spanning tree is formed with |V| - 1 edges (assuming the graph is fully connected).
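The walkthrough above can be summarized in a few lines of code. The following is a minimal sketch of Kruskal's algorithm, not the text's exact implementation; it assumes edges are given as (weight, u, v) tuples over vertices numbered 0 to numVertices - 1, and it uses a simple list-based partition (a data structure developed later in this chapter) to detect when two endpoints are already in the same set.

    def kruskal(numVertices, edges):
        # Each vertex starts as the root of its own one-vertex set.
        parent = list(range(numVertices))

        def findRoot(v):
            # Follow parent links until a vertex is its own parent.
            while parent[v] != v:
                v = parent[v]
            return v

        tree = []
        for weight, u, v in sorted(edges):   # shortest edges first
            rootU, rootV = findRoot(u), findRoot(v)
            if rootU != rootV:               # different sets: no cycle formed
                parent[rootU] = rootV        # union the two sets
                tree.append((u, v, weight))
            # otherwise skip the edge: it would form a cycle
        return tree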
Proof of Correctness

Proving that Kruskal's algorithm correctly finds a minimum weighted spanning tree can be done with a proof by contradiction. The proof starts by recognizing that there must be |V| - 1 edges in the spanning tree. Then we assume that some other edge would be better to add to the spanning tree than the edges picked by the algorithm. The new edge must be part of one and only one cycle; if adding the new edge formed two or more cycles, then there would have had to be a cycle in the tree before adding the new edge. One of the edges in this newly formed cycle must be deleted from the minimum weighted spanning tree to once again make it a tree, and for the new tree to be better, the deleted edge must have weight greater than the newly added edge. But that is impossible: all the old edges in the cycle were chosen before the new edge (the new edge was skipped because choosing it would have formed a cycle), so each of them has weight less than or equal to the new edge's weight. Dropping an old edge of the same weight results in a spanning tree with the same weight. Therefore the new spanning tree has the same weight as the original spanning tree, and that contradicts our assumption that a better edge could be found.

Kruskal's Complexity Analysis

The complexity of Kruskal's algorithm depends on sorting the list of edges and then forming the union of sets as the algorithm proceeds. Sorting a list, as was shown in the chapter where we looked at the complexity of quicksort, is O(|E| log |E|). Sorting the list is one half of Kruskal's algorithm; the other half is choosing the correct edges. Recall that each vertex starts in a set by itself and that an edge belongs to the minimum weighted spanning tree only if its two endpoint vertices are in separate sets. If so, then the union of the two sets containing the endpoints is formed, and this union replaces the previous two sets going forward.

Three operations are required to implement this part of the algorithm. First, we must discover the set for each endpoint of the edge being considered for addition to the spanning tree. Then the two sets must be compared for equality. Finally, the union of the two sets must be formed, and any necessary updates must be performed so the two endpoint vertices now refer to the union of the two sets instead of their original sets.

One way to implement these operations would be to create a list of sets where each position in the list corresponds to one vertex in the graph. The vertices are conveniently numbered in the example in the text, but vertices can be reassigned integer identifiers starting at 0 otherwise. The set corresponding to a vertex can then be determined in O(1) time, since indexed lookup in a list is a constant time operation.
If we make sure there is only one copy of each set, we can determine whether two sets are the same in O(1) time as well: we can just compare their references to see whether they are the same set or not. The keyword is in Python will accomplish this, so if we want to know whether s1 and s2 refer to the same set we can write s1 is s2, and this operation is O(1).

The third operation requires forming a new set from the previous two. This operation will be performed |V| - 1 times in the worst case. The first time this operation occurs, one vertex will be added to an existing set; the second time, two vertices may be added to an existing set; and so on. So in the end, the overall worst case complexity of this operation is O(|V|^2), assuming once again that the graph is connected. Clearly, this is the expensive operation of this algorithm. The next section presents a data structure that improves on this considerably.

The Partition Data Structure

To improve on the third required operation, the merging of two sets into one set, a specialized data structure called a partition may be used. The partition data structure contains a list of integers with one entry for each vertex. Initially, the list simply contains integers which match their indices:

    [0, 1, 2, 3, 4, 5, ...]

Think of this list as a list of trees, representing the sets of connected edges in the spanning forest constructed so far. A tree's root is indicated when the value at a location within the list matches its index. Initially, each vertex within the partition is in its own set because each vertex is the root of its own tree.

Discovering the set for a vertex means tracing a tree back to its root. Consider what happens when an edge is considered for adding to the minimum weighted spanning tree. To find the set containing one endpoint, we look at the endpoint's position in the partition list. If the value found there does not match the index, the vertex is not the root of its own tree, and we continue at the index given by that value, repeating until we reach a position whose value matches its index: the root of the tree (i.e., set) containing the vertex. Doing the same for the other endpoint yields the root of its tree. If both endpoints trace back to the same root, they are already in the same set, and the edge cannot be added to the minimum spanning tree since a cycle would be formed. When the next edge considered has endpoints whose roots differ, the two vertices are not in the same set, and that edge is added to the minimum spanning tree edges.
The third operation that must be performed is the merging of the sets containing the two endpoints. This is where the partition comes in handy. Having found the roots of the two trees, we simply make the root of one of the trees point to the root of the other tree. At that point, the tree rooted at the first root has been altered to be rooted at the other root instead. That's all that is needed to merge two sets: the root of one tree is made to point to the root of the other tree when two sets are merged into one.

The partition data structure combines the three required operations from the previous section into one method called sameSetAndUnion. This method is given two vertex numbers. The method returns True if the two vertices are in the same set (i.e., have the same root). If they do not have the same root, then the root of one tree is made to point to the other, and the method returns False.

The sameSetAndUnion method first finds the roots of the two vertices given to it. In the worst case this could take O(|V|) time, leading to an overall complexity of O(|V|^2). However, in practice these set trees are very flat. For instance, in the example presented in this chapter, the average depth of the set trees is small, meaning that on average it takes only a few comparisons to find the root of a set tree. The average complexity of sameSetAndUnion is much better than the solution considered in the previous section: it is much closer to O(log |V|). This means that the second part of Kruskal's algorithm, using this partition data structure, exhibits O(|E| log |V|) complexity in the average case.

In a connected graph the number of edges must be no less than one less than the total number of vertices. Sorting the edges takes O(|E| log |E|) time, and the second part of Kruskal's algorithm takes O(|E| log |V|) time. Since the number of edges is at least on the same order as the number of vertices in a connected graph, O(|E| log |V|) <= O(|E| log |E|), so we can say that the overall average complexity of Kruskal's algorithm is O(|E| log |E|). In practice, Kruskal's algorithm is very efficient and finds a minimum weighted spanning tree quickly, even for large graphs with many edges.
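The partition described in this section amounts to only a few lines of Python. The following is a minimal sketch consistent with the description above; the attribute name parent is an assumption, and no path compression or union-by-rank is attempted.

    class Partition:
        def __init__(self, numVertices):
            # Initially every vertex is the root of its own tree.
            self.parent = list(range(numVertices))

        def __findRoot(self, v):
            # Trace parent links until an index matches its value.
            while self.parent[v] != v:
                v = self.parent[v]
            return v

        def sameSetAndUnion(self, u, v):
            rootU = self.__findRoot(u)
            rootV = self.__findRoot(v)
            if rootU == rootV:
                return True              # already in the same set
            self.parent[rootU] = rootV   # merge: one root points at the other
            return False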
Dijkstra's Algorithm

Edsger Dijkstra was a Dutch computer scientist who lived from 1930 to 2002. In 1959 he published a short paper [ ] that commented on Kruskal's solution to the minimum spanning tree problem and provided an alternative that might in some cases be more efficient. More importantly, he provided an algorithm for finding the minimum cost path between any two vertices in a weighted graph. This algorithm can be, and sometimes is, generalized to find the minimum cost path between a source vertex and all other vertices in a graph. This algorithm is known as Dijkstra's algorithm.

Figure: minimum cost paths and total cost from a source vertex.

The figure shows the result of running Dijkstra's algorithm on the graph first presented earlier in this chapter. The purple edges show the minimum cost paths from the source vertex to all other vertices in the graph. The orange values are the minimum cost of reaching each vertex from the source vertex. Efficiently finding a minimum cost path from one vertex to another is used in all kinds of problems, including network routing, trip planning, and other planning problems where vertices represent intermediate goals and edges represent the cost of transitioning between intermediate goals. These kinds of planning problems are very common.

Dijkstra's algorithm proceeds in a greedy fashion from the single source vertex. Each vertex, v, in the graph is assigned a cost, which is the sum of the weighted edges on the path from the source to v. Initially the source vertex is assigned cost 0. All other vertices are initially assigned infinite cost; anything greater than the sum of all weights in the graph can serve as an infinite value.
Dijkstra's algorithm shares some commonality with depth first search. The algorithm proceeds as depth first search proceeds, but starts with a single source, eventually visiting every node within the graph. There are two sets that Dijkstra's algorithm maintains. The first is an unvisited set: a set of vertices that still need to be considered while looking for minimum cost paths. The unvisited set serves the same purpose as the stack when performing depth first search on a graph. The visited set is the other set used by the algorithm. The visited set contains all vertices which already have their minimum cost and path computed; it serves the same purpose as the visited set in depth first search of a graph.

To keep track of the minimum cost path from the source to a vertex, v, it is only necessary to keep track of the previous vertex on the path to v. For each vertex, v, we keep track of the previous vertex on its path from the source.

Initially the source vertex, with its cost of 0, is added to the unvisited set. Then the algorithm proceeds as follows as long as there is at least one vertex in the unvisited set.

1. Remove the vertex we'll call current from the unvisited set with the least cost. All other paths to this vertex must have greater cost, because otherwise they would have been in the unvisited set with smaller cost.
2. Add current to the visited set.
3. For every vertex, adjacent, that is adjacent to current, check to see if adjacent is in the visited set or not. If adjacent is in the visited set, then we already know the minimum cost of reaching this vertex from the source, so don't do anything.
4. If adjacent is not in the visited set, compute a new cost for arriving at adjacent by traversing the edge, e, from current to adjacent. The new cost can be found by adding the cost of getting to current and e's weight. If this new cost is better than the current cost of getting to adjacent, then update adjacent's cost and remember that current is the previous vertex of adjacent. Also, add adjacent to the unvisited set.

When this algorithm terminates, the cost of reaching all vertices in the graph has been computed, assuming that all vertices are reachable from the source vertex. In addition, the minimum cost path to each vertex can be determined from the previous vertex information that was maintained as the algorithm executed.
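The steps above map naturally onto Python's heapq module. The following is a minimal sketch, not the text's implementation; it assumes the graph is represented as a dictionary mapping each vertex to a list of (adjacent, weight) pairs, and it uses a priority queue for the unvisited set, a shortcut discussed in the complexity analysis that follows.

    import heapq

    def dijkstra(graph, source):
        # Returns (cost, previous) dictionaries for every reachable vertex.
        cost = {source: 0}
        previous = {source: None}
        visited = set()
        unvisited = [(0, source)]    # priority queue of (cost, vertex) pairs

        while unvisited:
            currentCost, current = heapq.heappop(unvisited)
            if current in visited:
                continue             # an out-of-date queue entry; skip it
            visited.add(current)
            for adjacent, weight in graph[current]:
                if adjacent in visited:
                    continue         # minimum cost already known
                newCost = currentCost + weight
                if newCost < cost.get(adjacent, float('inf')):
                    cost[adjacent] = newCost
                    previous[adjacent] = current
                    heapq.heappush(unvisited, (newCost, adjacent))
        return cost, previous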
Dijkstra's Complexity Analysis

In the first step of Dijkstra's algorithm, the next current vertex is always the unvisited vertex with smallest cost. By always picking the vertex with smallest cost so far, we can be guaranteed that no other cheaper path exists to this vertex, since we always proceed by considering the next cheapest vertex on our search to find cheapest paths in the graph.

The number of edges of any vertex in a simple, undirected graph will always be less than the total number of vertices in the graph. Each vertex becomes the current vertex exactly once in the algorithm. Assume finding the next current vertex takes O(|V|) time. Since this happens |V| times, the complexity of the first step is O(|V|^2) over the course of running the algorithm. The rest of the steps consider those edges adjacent to current. Since the number of edges of any vertex in a simple, undirected graph will always be less than |V|, the rest of the algorithm runs in less than O(|V|^2) time. So the complexity of Dijkstra's algorithm is O(|V|^2), assuming that the first step takes O(|V|) time to find the next current vertex.

It turns out that selecting the next current vertex can be done in O(log |V|) time if we use a priority queue for our unvisited set. Priority queues and their implementation are discussed in the chapter on heaps. Using a priority queue, Dijkstra's algorithm will run in O(|E| log |V|) time.

Graph Representations

How a graph, G = (V, E), is represented within a program depends on what the program needs to do. Consider the directed graph in the figure. The graph itself can be stored in an XML file containing vertices and edges, as shown in the listing below. A weighted graph would include a weight attribute for each edge in the graph. In this XML file format the vertexId is used by the edges to indicate which vertices they are attached to. The labels, which appear in the figure, are only labels and are not used within the XML file to associate edges with vertices.

Listing: a graph XML file.
The x and y vertex attributes are not required in any graph representation, but to draw a graph it is nice to have location information for the vertices. All this information is stored in the XML file, but what about the three algorithms presented in this chapter? What information is actually needed by each algorithm?

When searching a graph by depth first search, vertices are pushed onto a stack as the search proceeds. In that case the vertex information must be stored for use by the search. Since edges have the vertexId of their edge endpoints, it would be nice to have a way to quickly look up vertices within the graph; a map or dictionary from vertexId to vertices would be convenient. It makes sense to create a class to hold the vertex information, like the class definition below.

A Vertex Class

    class Vertex:
        def __init__(self, vertexId, x, y, label):
            self.vertexId = vertexId
            self.x = x
            self.y = y
            self.label = label
            self.adjacent = []
            self.previous = None

In this Vertex class definition for directed graphs, it makes sense to store the edges with the vertex, since edges connect vertices. The adjacent list can hold the list of adjacent vertices. When running depth first search, a map of vertexId to Vertex for each of the vertices in the graph provides the needed information for the algorithm.

When implementing Kruskal's algorithm, a list of edges is the important feature of the graph. The class definition below provides a less-than method, which allows edge objects to be sorted; that is crucial for Kruskal's algorithm. The vertices themselves are not needed by the algorithm. A list of edges and the partition data structure suffice for running Kruskal's algorithm.
An Edge Class

    class Edge:
        def __init__(self, v1, v2, weight=0):
            self.v1 = v1
            self.v2 = v2
            self.weight = weight

        def __lt__(self, other):
            return self.weight < other.weight

Running Dijkstra's algorithm benefits from having both the edge and vertex objects. The weight of each edge is needed by the algorithm, so storing the weight in the edge and associating vertices and edges is useful.

There are other potential representations for graphs. For instance, a two-dimensional matrix could be used to represent edges between vertices. The rows and columns of the matrix represent the vertices, and the weight of an edge from vertex vi to vertex vj would be recorded at matrix[i][j]. Such a representation is called an adjacency matrix. Adjacency matrices tend to be sparsely populated and are not used much in practice due to their wasted space.

The chosen graph representation depends on the work being done. Vertices with adjacency information may be enough. An edge list is enough for Kruskal's algorithm. Vertex and edge information is required for Dijkstra's algorithm. An adjacency matrix may be required for some situations. As programmers we need to be mindful about wasted space, algorithm needs, and the efficiency of our algorithms, and the implications that the choice of data representation has on our programs.
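As a small illustration of the adjacency matrix representation described above, the following sketch (function name assumed) builds a matrix from a list of Edge objects like the class shown earlier:

    def buildAdjacencyMatrix(numVertices, edges):
        # matrix[i][j] holds the weight of the edge from vertex i to
        # vertex j, or None where no edge exists.
        matrix = [[None] * numVertices for _ in range(numVertices)]
        for e in edges:
            matrix[e.v1][e.v2] = e.weight
        return matrix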
Summary

Graph notation was covered in this chapter. Several terms and definitions were given for various types of graphs, including weighted and directed graphs. The chapter presented three graph theory algorithms: depth first search, Kruskal's algorithm, and Dijkstra's algorithm. Through looking at those algorithms we also explored graph representations and their use in these various algorithms. After reading this chapter you should know the following.

- A graph is composed of vertices and edges.
- A graph may be directed or undirected.
- A tree is a graph where one path exists between any two vertices.
- A spanning tree is a subset of a graph which includes all the vertices in a connected graph.
- A minimum weighted spanning tree is found by running Kruskal's algorithm.
- Dijkstra's algorithm finds the minimum cost of reaching all vertices in a graph from a given source vertex.
- Choosing a graph representation depends on the work to be done.
- Some typical graph representations are a vertex list with adjacency information, an edge list, or an adjacency matrix.

Review Questions

Answer these short answer, multiple choice, and true/false questions to test your mastery of the chapter.

1. In the definition of a graph, G = (V, E), what do the V and the E stand for?
2. What is the difference in the definition of E in directed and undirected graphs?
3. In depth first search, what is the purpose of the visited set?
4. How is backtracking accomplished in depth first search of a graph? Explain how the backtracking happens.
5. What is a path in a graph, and how does that differ from a cycle?
6. What is a tree? For the graph in the figure, provide three trees that include the vertices shown.
7. Why does Kruskal's algorithm never make a mistake when selecting edges for the minimum weighted spanning tree?
8. Why does Dijkstra's algorithm never make a mistake when computing the cost of paths to vertices?
9. What graph representation is best for Kruskal's algorithm? Why?
10. Why is the previous vertex stored by Dijkstra's algorithm? What purpose does the previous vertex have, and why is it stored?

Programming Problems

1. Write a program to find a path between two given vertices in the graph shown in the figure. Be sure to print the path (i.e., the sequence of vertices) that must be traversed in the path between the two vertices. An XML file describing this graph can be found on the text's website.
2. Modify the first problem to find the shortest path between the two vertices in terms of the number of edges traversed. In other words, ignore the weights in this problem. Use breadth first search to find this solution.
3. Write the code and perform Dijkstra's algorithm on the graph in the figure to find the minimum cost of visiting all other vertices from a chosen source vertex of the graph.
4. Write the code and perform Kruskal's algorithm on either the directed graph in the figure or the undirected example found in the chapter. The XML files for both graphs can be found on the text's website.
5. Not every graph must be represented explicitly. Sometimes it is just as easy to write a function that, given a vertex, will compute the vertices that are adjacent to it (i.e., that have edges between them). For instance, consider the water bucket problem.
Figure: a sample weighted, directed graph.

There are two buckets in this problem: a 3 gallon bucket and a 5 gallon bucket. Your job is to put exactly 4 gallons in the 5 gallon bucket. The rules of the game say that you can completely fill a bucket of water, you can pour one bucket into another, and you can completely dump a bucket out on the ground. You cannot partially fill up a bucket, but you can pour one bucket into another. You are to write a program that tells you how to start with two empty buckets and end with 4 gallons in the 5 gallon bucket.

To complete this problem you must implement a depth first search of a graph. The vertices in this problem consist of the state of the problem, which is given by the amount of water in each bucket. Along with the search algorithm, you must also implement an adjacent function that, given a vertex containing this state information, will return a list of states that may be adjacent to it. It may be easier to generate some extra adjacent states and then filter out the unreasonable ones before returning the list from adjacent. For instance, it may be easier to generate a state with more gallons than a bucket can hold and then throw that state out later, by removing states from the list which have more gallons than allowed in that bucket.
The program should print out the list of actions to take to get from no water in either bucket to four gallons in the five gallon pail. The solution may not be the absolute best solution, but it should be a valid solution that is printed when the program is completed.

6. A bipartite graph is a graph where the vertices may be divided into two sets such that no two vertices in the same set have an edge between them. All edges in the graph go between vertices that appear in different sets. A program can test to see if a graph is bipartite by doing a traversal of the graph, like a depth first search, and looking for odd cycles. A graph is bipartite if and only if it does not contain an odd cycle. Write a program that, given a graph, decides if it is bipartite or not. The program need only print Yes, it is bipartite, or No, it is not bipartite.
7. Extend the program from the previous exercise to print the set of vertices in each of the two bipartite sets if the graph is found to be bipartite.
Membership Structures

In the chapter on sets and maps we covered data structures that support insertion, deletion, membership testing, and iteration. For some applications, testing membership may be enough; iteration and deletion may not be necessary. The classic example is that of a spell checker. Consider the job of a spell checker: a simple one may detect errors in spelling, while a more advanced spell checker may suggest alternatives of correctly spelled words.

Clearly a spell checker is provided with a large dictionary of words. Using the list of words, the spell checker determines whether a word you have typed is in the dictionary and is therefore a correct word. If the word does not appear in the dictionary, the word processor or editor may underline the word, indicating it may be incorrectly spelled. In some cases the word processor may suggest an alternative, correctly spelled word. In some cases, the word processor may simply correct the misspelling. How do these spell checkers and correctors work? What kind of data structures do they use?

Goals

At first glance, a hash set (i.e., a Python dictionary) might seem an appropriate data structure for spell checking. Lookup time within the set could be done in O(1) time. However, the tradeoff is in the size of this hash map. A typical English dictionary might contain hundreds of thousands of words, and the amount of space required to store that many words would be quite large.

In this chapter we'll cover two data structures that are designed to test membership within a set. The first, a bloom filter, has significantly smaller space requirements and provides a very fast membership test. The other is a trie (pronounced "try"), a data structure which has features that would not be readily available to a hash set implementation and may take up less space than a hash set.
Bloom Filters

Bloom filters are named for their creator, Burton Howard Bloom, who originally proposed this idea in 1970. Since then many authors have covered the implementations of bloom filters, including Alan Tharp [ ]. Wikipedia, while not always the authoritative source, has a very good discussion of bloom filters as well [ ].

A bloom filter shares some ideas with hash sets while using considerably less space. A bloom filter is a data structure employing statistical probability to determine if an item is a member of a set of values. Bloom filters are not 100% accurate. A bloom filter will never report a false negative for set membership, meaning that it will never report that an item doesn't belong to a set when it actually does. However, a bloom filter will sometimes report a false positive: it may report an item is in a set when it is actually not.

Consider the problem of spell checking. A spell checker needs to know if a typed word is correctly typed by looking it up in the dictionary. With a bloom filter, the typed word can be given to the bloom filter, which will report that it is or is not a correctly typed word. In some cases it may report a word is correct when it is not.

A bloom filter is an array of bits along with a set of hashing functions. The number of bits in the filter and the number of hashing functions influence the accuracy of the bloom filter. The exact number of bits and hash functions will be discussed later. Consider a small bloom filter with three independent hash functions. Initially all the bits in the filter are set to 0, as shown in the figure.

Consider adding the word cow to the bloom filter. The three independent hash functions hash the word cow, modulo the number of bits in the filter, to three indices, and the bits at those indices are set to 1 to remember that cow has been added to the filter. Now consider adding the word cat to the same filter. The three hash values, modulo the filter size, yield three more indices, and inserting cat sets the bits at those indices to 1; some of them may already have been set by inserting cow. Finally, inserting dog into the filter sets the bits at dog's three hash indices, resulting in the bloom filter shown in the next figure.

Figure: an empty bloom filter.
Figure: after inserting cow into the bloom filter.
Figure: after inserting cow, cat, and dog into the bloom filter.

Looking up an item in a bloom filter requires hashing the value again with the same hash functions, generating the indices into the bit array. If the value at all of those indices in the bit array is one, then the lookup function reports success, and otherwise failure.

Consider looking up a value that is not in the bloom filter of the figure. If we look up fox, the three hash function calls return three indices. Even if the bits at two of those indices are 1, a 0 at the third index means the lookup function reports that fox is not in the bloom filter. Now consider looking up the value rabbit in the same bloom filter. Hashing rabbit with the three hash functions can yield three indices whose bits all contain 1, even though rabbit was never added; in that case the bloom filter incorrectly reports that rabbit has been added to the filter. This is a false positive, and while not desirable, must be acceptable if a bloom filter is to be used.

If a bloom filter is to be useful, it must never report a false negative, and from these examples it should be clear that false negatives are impossible. False positives must be kept to a minimum. In fact, it is possible to determine on average how often a bloom filter will report a false positive. The probability calculation depends on three factors: the hashing functions, the number of items added to the bloom filter, and the number of bits used in the bloom filter. The analysis of these factors is covered in the next sections.

The Hashing Functions

Each item added to a bloom filter must be hashed by some number of hash functions which are completely independent of each other. Each hashing function must also be uniformly distributed over the range of bit indices in the bit array. This second requirement is true of hashing functions for hash sets and hash tables as well. Uniform distribution is guaranteed by the built-in hash functions of Python and most other languages.

In the examples above, three hashing functions were required. Sometimes the required number of hashing functions can be much higher, depending on the number of items being inserted and the number of bits in the bit array. Creating the required number of independent, uniformly distributed hashing functions might seem like a daunting problem, but it can be solved in at least a couple of ways. Some hashing functions allow a seed value to be provided; in this case, different seed values could be used to create different hashing functions. Another equally effective way of generating independent hashing functions is to append some known value to the end of each item before it is hashed. For instance, a 0 might be appended to the item before hashing it to get the first hash function, a 1 could be appended to get the second hash function, and likewise a 2 might be appended to get the third hash function value. So looking up rabbit in the bloom filter is accomplished by hashing rabbit0, rabbit1, and rabbit2 with the same hashing function. Since the hashing function is uniformly distributed, the values returned by the three hashed values will be independent of each other. And all items with 0 appended will themselves be uniformly distributed; likewise for items with 1 appended and with 2 appended.
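The ideas above are enough to sketch a small bloom filter class. This is a minimal illustration under the append-a-digit scheme just described; the class name and the use of Python's built-in hash are assumptions, not the text's exact code. Note that Python's built-in string hash is randomized per process, so a real spell checker would substitute a stable hash function.

    class BloomFilter:
        def __init__(self, numBits, numHashes):
            self.numBits = numBits
            self.numHashes = numHashes
            self.bits = [0] * numBits

        def __indices(self, item):
            # Derive k independent hash values by appending 0, 1, 2, ...
            # to the item before hashing it.
            return [hash(item + str(i)) % self.numBits
                    for i in range(self.numHashes)]

        def add(self, item):
            for index in self.__indices(item):
                self.bits[index] = 1

        def __contains__(self, item):
            # True only if every bit for the item is set. This may
            # occasionally be a false positive, never a false negative.
            return all(self.bits[index] == 1
                       for index in self.__indices(item))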
The Bloom Filter Size

It is possible to find the required bloom filter size given a number of items to insert and a desired false positive probability. The probability of any one location within a bloom filter not being set by a hash function while inserting an item is given by the following formula, where the filter consists of m bits:

    1 - 1/m

If the bloom filter uses k hash functions, then the probability that a bit in the bit array is not set by any of the k hash functions required for inserting an item is given by this formula:

    (1 - 1/m)^k

If n items are inserted into the bloom filter, then raising this formula to n will provide the probability that a bit within the bloom filter's bit array is still zero after inserting all n items. So we have:

    (1 - 1/m)^(nk)

So, the probability that a bit in the bloom filter is 1 after inserting n items while using k hashing functions is given by this formula:

    1 - (1 - 1/m)^(nk)

Now consider looking up an item that was not added to the bloom filter. The probability that it will report a false positive can be found by computing the likelihood that the location selected by each of the k hashing functions contains a 1. This is expressed as follows:

    p = (1 - (1 - 1/m)^(nk))^k

This formula contains a sequence that can be approximated using the natural log [ ] as:

    p = (1 - e^(-kn/m))^k
Using this formula it is possible to solve for m given an n and a desired probability, p, of false positives. The formula is as follows:

    m = -(n ln p) / (ln 2)^2

Finally, solving for k above results in the following formula:

    k = (m / n) ln 2

These two formulas tell us how many bits are required in our filter to guarantee a maximum specified rate of false positives, and we can also compute the required number of hash functions. For instance, for an English dictionary containing hundreds of thousands of words and a desired false positive rate of no more than 1% (expressed as 0.01 in the formula), the formulas call for a bit array of a few million bits and seven hashing functions.

The number of bits in this example may seem excessive. However, recall that they are bits: an efficient implementation requires only a few hundred kilobytes of storage. By contrast, storing every character of every word in the dictionary, at a minimum of one byte per character, would require several times that much. The bloom filter represents quite a savings in space. In addition, during experiments the lookup time using the bloom filter was never more than a few microseconds. The lookup time is bounded by the number and efficiency of the hash functions used to compute the desired values. Assuming that the hash functions are dependent on the length of the string being hashed, the lookup time is O(lk), where l is the length of the item being looked up and k is the number of hash functions.
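The two sizing formulas can be turned into a few lines of Python. This sketch (the function name is assumed) computes the required number of bits and hash functions for a given number of items and target false positive rate:

    import math

    def bloomParameters(numItems, falsePositiveRate):
        # m = -(n ln p) / (ln 2)^2   bits in the filter
        # k = (m / n) ln 2           number of hash functions
        m = -(numItems * math.log(falsePositiveRate)) / (math.log(2) ** 2)
        k = (m / numItems) * math.log(2)
        return math.ceil(m), round(k)

    # For example, 100,000 items with at most 1% false positives:
    # bloomParameters(100000, 0.01) yields 958,506 bits and 7 hashes.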
Drawbacks of a Bloom Filter

Besides the obvious false positive potential, the bloom filter can only report yes or no. It can't suggest alternatives for items that might be close to being spelled correctly. A bloom filter has no memory of which bits were set by which items, so a yes or no answer is the best we can get, with even a yes answer not being correct in some circumstances. The next section presents a trie data structure that will not report false positives and can be used to find alternatives for incorrectly spelled words.

The Trie Datatype

A trie is a data structure that is designed for retrieval; the name is pronounced like the word "try". A trie is not meant to be used when deleting values from a data structure is required. It is meant only for retrieval of items based on a key value.

Figure: after inserting cow, cat, rat, rabbit, and dog into a trie.

Tries are appropriate when key values are made up of more than one unit and when the individual units of a key may overlap with other item keys. In fact, the more overlap the key units have, the more compact the trie data structure becomes. In the problem of spell checking, words are made up of characters, and these characters are the individual units of the keys. Many words overlap in a dictionary, like a, an, and ant.

A trie may be implemented in several different ways. In this text we'll concentrate on the linked trie, which is a series of linked lists making up a matrix. Matrix implementations lead to sparsely populated arrays which take up much more room with empty locations; a linked trie has overhead for pointers, but is not sparsely populated. The trie data structure begins with an empty linked list. Each node in the linked trie list contains three values: a unit of the key (in the spell checker instance this is a character of the word), a next pointer that points to the next node in the list, which would contain some other unit (i.e., character) appearing at the same position within a key (i.e., word), and a follows pointer, which points at a node that contains the next unit within the same key. In the figure, the follows pointer is in yellow while the next pointer field is in red.
When items are inserted into the trie, a sentinel unit is added. In the case of the spell checker, a '$' character is appended to the end of every word. The sentinel is needed because words like rat are prefixes to words like ratchet. Without the sentinel character it would be unclear whether a word ended or was only a prefix of some other word in the trie. Keys with a common prefix share that prefix and are not repeated; the next pointer is used when more than one possible next character is possible. This saves space in the data structure. The trade-off is that the next and follows pointers take extra space in each node.

The Trie Class

    class Trie:
        def __insert(node, item):
            '''This is the recursive insert function.'''

        def __contains(node, item):
            '''This is the recursive membership test.'''

        class TrieNode:
            def __init__(self, item, next=None, follows=None):
                self.item = item
                self.next = next
                self.follows = follows

        def __init__(self):
            self.start = None

        def insert(self, item):
            self.start = Trie.__insert(self.start, item)

        def __contains__(self, item):
            return Trie.__contains(self.start, item)

Inserting into a Trie

Inserting values into a trie can be done either iteratively, with a loop, or recursively. To recursively insert into a trie, the insert method can call an __insert function. It is easier to write the recursive code as a function and not a method of the Trie class because the node value passed to the function may be None. To insert into the trie, the __insert function operates as follows.

1. If the key is empty (i.e., no units are left in the key), return None as the empty node.
2. If the node is None, then a new node is created with the next unit of the key, and the rest of the key is inserted and added to the follows link.
3. If the first unit of the key matches the unit of the current node, then the rest of the key is inserted into the follows link of the node.
4. Otherwise, the key is inserted into the next link of the node.
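Here is one way the recursive __insert function might look, following the four steps above. This is a sketch consistent with the class outline, not necessarily the text's exact code; it is written to sit inside the Trie class, and it assumes the caller has already appended the sentinel to the key.

    # Inside class Trie:
    def __insert(node, item):
        # 1. An empty key has no units left to insert.
        if len(item) == 0:
            return None
        # 2. Off the end of a chain: make a new node for the first
        #    unit and insert the rest of the key along its follows link.
        if node is None:
            return Trie.TrieNode(item[0], None, Trie.__insert(None, item[1:]))
        # 3. The first unit matches this node: insert the rest of the
        #    key along the follows link.
        if node.item == item[0]:
            node.follows = Trie.__insert(node.follows, item[1:])
            return node
        # 4. Otherwise the key belongs somewhere along the next link.
        node.next = Trie.__insert(node.next, item)
        return node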
Building the trie recursively is simple. However, an iterative version would work just as well. The iterative version would require a loop and a pointer to the current node, along with the remaining key to insert. The iterative insert algorithm would behave in a similar fashion to the steps outlined above, but would need to keep track of the previous node as well as the current node so that links could be set correctly.

Membership in a Trie

Checking membership in a trie can also be accomplished recursively. The steps include a base case which might not be completely intuitive at first: the empty key is reported as a member of any trie, because this works when checking membership with the sentinel unit added to the trie. Returning True for an empty key is completely safe, because any real key will consist of at least the sentinel character. In the algorithm outlined here, the sentinel is assumed to have already been added to the key. The steps for membership testing are as follows.

1. If the length of the key is 0, then report success by returning True.
2. If the node we are looking at is None, then report failure by returning False.
3. If the first unit of the key matches the unit in the current node, then check membership of the rest of the key starting with the follows node.
4. Otherwise, check membership of the key starting with the next node in the trie.

Again, this code might be implemented iteratively with a while loop, keeping track of the current node and the remainder of the key. Either a recursive or an iterative implementation will work equally well.
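A sketch of the recursive membership test, mirroring the four steps above (again written to sit inside the Trie class; details are assumptions):

    # Inside class Trie:
    def __contains(node, item):
        # 1. An empty key: everything matched, including the sentinel.
        if len(item) == 0:
            return True
        # 2. Ran off the end of a chain: the key is not present.
        if node is None:
            return False
        # 3. The first unit matches: check the rest along the follows link.
        if node.item == item[0]:
            return Trie.__contains(node.follows, item[1:])
        # 4. Otherwise look for the first unit along the next link.
        return Trie.__contains(node.next, item)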
Comparing Tries and Bloom Filters

Bloom filters are clearly faster for testing membership than a trie. However, the trie works acceptably well: in a simple experiment, both the longest bloom filter lookup and the longest trie lookup took only a matter of microseconds. Of course the trie takes more space, but common prefixes share nodes in a trie, saving some space over storing each word distinctly in a data structure, as in a hash set.

For purposes of spell checking, a trie has distinct advantages, since spelling alternatives can be easily found. Common typographical errors fall into one of four categories.

1. Transposition of characters, like teh instead of the.
2. Dropped characters, like thei instead of their.
3. Extra characters, like thre instead of the.
4. Incorrect characters, like thare instead of there.

If, in searching a trie, a word is not found, these alternatives can also be searched for to find a selection of alternative spellings. What's more, these alternative spellings can be searched for in parallel in a trie to quickly put together a list of alternatives. A bloom filter cannot be used to find alternative spellings, since that information is lost once entered into the filter. Of course, a trie will never report a false positive either, as is possible with a bloom filter.

Summary

Tries and bloom filters are two data structures for testing membership. Bloom filters are relatively small and will produce false positives some percentage of the time. Tries are larger, don't produce false positives, and can be used to find alternative key values that are close to the key being sought.

While either data structure will work for spell checking, spelling correction would be aided by a trie, while a bloom filter would not help. As far as efficiency goes, bloom filters more efficiently test set membership, subject to the false positives that are sometimes produced. However, a trie also operates efficiently, while taking more space than a bloom filter. In the informal tests performed on both, the bloom filter and the trie each tested membership of words in the dictionary in microseconds.

Size requirements are also a concern, of course. The example dictionary used in the development of both the bloom filter and the trie in this chapter contained hundreds of thousands of words. The bloom filter for this dictionary of words was a few hundred kilobytes in size. Assuming that the next and follows pointers take a few bytes each and the key units (i.e., word characters) take a byte each, the size of the trie is measured in megabytes. While the bloom filter is much smaller than the trie, both are well within the limits of what computers are capable of storing.
Review Questions

Answer these short answer, multiple choice, and true/false questions to test your mastery of the chapter.

1. Which datatype, the trie or the bloom filter, is susceptible to false positives? What is a false positive in this context?
2. Does a bloom filter require more or less storage than a trie?
3. When spell checking, which datatype can be used for spelling correction?
4. How can you generate more than one hashing function for use in a bloom filter?
5. Add the words "a", "an", "ant", "bat", and "batter" to a trie. Draw the trie data structure showing its structure after inserting the words in the order given here.
6. Why is a sentinel needed in a trie?
7. Why is a sentinel not needed in a bloom filter?
8. What must be true of keys to be able to store them in a trie?
9. Which datatype, trie or bloom filter, is more efficient in terms of space? Which is more efficient in terms of speed?

Programming Problems

1. Go to the text's website and download the dictionary of words. Build a bloom filter for this list of words and use it to spellcheck the Declaration of Independence, printing all the misspelled words to the screen.
2. Go to the text's website and download the dictionary of words. Build a trie datatype for this list of words and use it to spellcheck the Declaration of Independence, printing all misspelled words to the screen.
3. Create a trie as in the previous exercise, but also print suggested replacements for all misspelled words. This is a tough assignment. Suggested replacements should not differ from the original in more than one of the ways suggested in the chapter.
Heaps

The word heap is used in a couple of different contexts in computer science. A heap sometimes refers to an area of memory used for dynamic (i.e., run-time) memory allocation. Another meaning, and the topic of this chapter, is a data structure that is conceptually a complete binary tree. Heaps are used in implementing priority queues, the heapsort algorithm, and some graph algorithms. Heaps are somewhat like binary search trees in that they maintain an ordering of the items within the tree. However, a heap does not maintain a complete ordering of its items. This has some implications for how a heap may be used.

Goals

By the end of this chapter you should be able to answer the following questions.

- What is a heap and how is it used?
- What is the computational complexity of adding and deleting items from a heap?
- Would you use a heap to look up items or not?
- When would you use a heap?
- In the heapsort algorithm, why is it advantageous to construct a largest-on-top heap?

Key Ideas

To understand heaps we'll start with a definition. A largest-on-top heap is a complete, ordered tree such that every node is greater than or equal to all of its children (if it has any). An example will help illustrate this definition. Conceptually, a heap is a tree that is full on all levels except possibly the lowest level, which is filled in from left to right. It takes the general shape shown in the figure.
Figure: heap shape.

Conceptually a heap is a tree, but heaps are generally not stored as trees. A complete tree is a tree that is full on all levels except the lowest level, which is filled in from left to right. Because heaps are complete trees, they may be stored in an array.

An example will help in understanding heaps and the complete property better. Consider a largest-on-top heap with the root node stored at index 0 in an array. Conceptually, the first figure is a heap containing integers. The data in this conceptual version is stored in an array by traversing the tree level by level, starting from the root node. The conceptual heap in the first figure would be stored in an array as organized in the second figure.

Figure: a sample heap.
Figure: heap organization.

There are two properties that a heap exhibits. They are:

- Heap structure property: the elements of the heap form a complete, ordered tree.
- Heap order property: every parent is greater than or equal to all of its children (including all of its descendants).

The heap in the first figure maintains these two properties, and the array implementation of this heap in the second figure also maintains them. To see how the properties are maintained in the array implementation, we need to be able to compute the locations of children and parents.
The children of any element of the array can be calculated from the index of the parent:

    leftChildIndex = 2 * parentIndex + 1
    rightChildIndex = 2 * parentIndex + 2

Using these formulas on the sample heap, we can see that the children of the root node (at index 0) are the elements at index 1 and index 2. Likewise, the children of the node at index 1 are located at index 3 and index 4, which we can verify are the same children as in the conceptual model. Of course, not every node has a child, or even two children. If the computed leftChildIndex is greater than or equal to the number of values in the heap, then the node in question is a leaf node (and if only the rightChildIndex is out of range, the node has a single child).

It is also possible to go in the other direction. Given a child's index, we can discover where the parent is located:

    parentIndex = (childIndex - 1) // 2

The // in the previous formula represents integer division. It means that the result is always an integer; if there were a fractional part, we round down to the next lower integer. So, for example, the parent of the node at index 4 is computed as parentIndex = (4 - 1) // 2 = 1, which we can confirm against the conceptual model. It should be noted that not every node in a heap has a parent. In particular, the root node, at index 0, does not have a parent. All other nodes in a heap have parents.
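These index calculations are one-liners in Python. A small sketch (function names assumed):

    def leftChildIndex(parentIndex):
        return 2 * parentIndex + 1

    def rightChildIndex(parentIndex):
        return 2 * parentIndex + 2

    def parentIndex(childIndex):
        # Integer division rounds down, so both children map back
        # to the same parent.
        return (childIndex - 1) // 2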
Building a Heap

Now that we've seen what a heap looks like, we'll investigate building a heap. Heaps can be built either largest on top or smallest on top; we'll build a largest-on-top heap. A Heap class will encapsulate the data and methods needed to build a heap. Heap objects contain a list and a count of the number of items currently stored in the heap. We'll call this count the size of the heap. To encapsulate the data we'll want a method that will take a sequence of values and build a heap from it. We'll call this method buildFrom. A private method will also be useful: the buildFrom method will call __siftUpFrom to get each successive element of the sequence into its correct position within the heap.

The buildFrom Method

    def buildFrom(self, aSequence):
        '''aSequence is an instance of a sequence collection which
           understands the comparison operators. The elements of
           aSequence are copied into the heap and ordered to build
           a heap.'''

    def __siftUpFrom(self, childIndex):
        '''childIndex is the index of a node in the heap. This method
           sifts that node up as far as necessary to ensure that the
           path to the root satisfies the heap condition.'''

The sequence of values passed to the buildFrom method will be copied into the heap. Then each subsequent value in the list will be sifted up into its final location in the heap. We'll trace this through, showing the resulting heap at each stage.

To begin, the list is copied into the heap object in the order given. Then __siftUpFrom is called on each subsequent element, starting with the second element; calling __siftUpFrom on the root of the heap would have no effect. Normally, the parent index of the node is computed, and the parent will already be greater than the other child (if there is one). If the value at the current child index is greater than the value at the parent index, then the two are swapped and the process repeats. This process repeats as many times as necessary, either until the root node is reached (i.e., index 0 of the list) or until the new node is in the proper location to maintain the heap property.

In the trace, the first time some movement occurs is when a value greater than its parent is added to the heap: the two are swapped to arrive at the heap shown in the figure. At that point the first few elements of the list make up a heap, but the newly added value may still not be in its final position, and we need to sift it up further. Looking at the conceptual view of the heap (i.e., the tree), a child greater than its parent clearly violates the heap property, so the child and parent are swapped to sift the value up, as shown in the figure.

Figure: building a heap, part one.

Without looking at the conceptual model, you can still compute the parent of any node. When the new value is at some child index in the list, the parent index is computed with the parentIndex formula, the two values are compared, and the swap is made when the child is greater. If the value is still not in the right place, the parent index is computed again and the values are compared and swapped once more. The last iteration of sifting up occurs when the value reaches the root (i.e., index 0) of the heap, or when the comparison fails. After swapping the two values, we get the heap shown in the figure.

Figure: building a heap, part two.
Figure: building a heap, part three.
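From this description, __siftUpFrom can be sketched in a few lines. The data and size attribute names here are assumptions, not necessarily the text's exact code.

    def __siftUpFrom(self, childIndex):
        # Swap the child with its parent until the parent is at least
        # as large, or until the child reaches the root.
        while childIndex > 0:
            parentIndex = (childIndex - 1) // 2
            if self.data[childIndex] <= self.data[parentIndex]:
                break    # the heap property holds along this path
            self.data[childIndex], self.data[parentIndex] = \
                self.data[parentIndex], self.data[childIndex]
            childIndex = parentIndex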
The Heapsort Algorithm, Version 1

Heaps have two basic operations. You can add a value to a heap. You can also delete, and retrieve, the maximum value from a heap if the heap is a largest-on-top heap. Using these two operations, or variations of them, we can devise a sorting algorithm by building a heap with a list of values and then removing the values one by one in descending order. We'll call these two parts of the algorithm phase I and phase II.
To implement phase I we'll need one new method in our Heap class, the addToHeap method.

The addToHeap Method

    def addToHeap(self, newObject):
        '''If the heap is full, double its current capacity.
           Add the newObject to the heap, maintaining it as a
           heap of the same type. Answer newObject.'''

This new method can use the __siftUpFrom private method to get the new element to its final destination within the heap. Version 1 of phase I calls addToHeap n times. This results in O(n log n) complexity. The specific steps of phase I include:

1. Double the capacity of the heap if necessary.
2. data[size] = newObject
3. __siftUpFrom(size)
4. size += 1

As you can see, __siftUpFrom will be called n times, once for each element of the heap. Each time __siftUpFrom is called, the heap will have grown by one element.
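Those steps translate directly into code. A minimal sketch, assuming the heap stores its items in self.data with self.size tracking the count:

    def addToHeap(self, newObject):
        if self.size == len(self.data):       # the heap is full
            # double the current capacity
            self.data.extend([None] * max(1, len(self.data)))
        self.data[self.size] = newObject
        self.__siftUpFrom(self.size)
        self.size += 1
        return newObject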
Consider the heap in the figure just before one of the later passes: we are about to sift a newly added value up to its rightful location within the heap. Conceptually we have the picture of the heap shown in the figure. To move the value to the correct location, we must compute the parent index from the child index at each step, as shown in the table.

Figure: adding a value to the heap.
Figure: conceptual view while adding a value to the heap.

Table: child and parent indices. Each row shows a childIndex, the parentIndex computed as (childIndex - 1) // 2, and the decision made: a swap at each level while the child is greater than its parent, and stop when it is not.

Sifting up the new value in the heap's list results in two swaps before it reaches its final location: the figure shows the value swapped with its parent, and then swapped again with its new parent. At this point no more swaps are done, because the parent above it is greater; the value has reached its proper position within the heap.

Figure: the heap after moving the value to its correct location.

Analysis of Version 1, Phase I

The approach taken in version 1 of phase I is slow, as we shall see. Consider a perfect, complete binary tree: one which is completely full on all levels, with d levels, as shown in the figure.

Figure: a perfect binary tree.

Consider the relationship between the number of levels and the number of items in the heap, as shown in the table.
Table: heap levels versus heap size.

    level l    number of nodes at level l
    1          2^0 = 1
    2          2^1 = 2
    3          2^2 = 4
    ...        ...
    d          2^(d-1)

For a heap with n items in it, the value of n can be computed by adding up all the nodes at each level in the heap's tree. To simplify our argument, we'll assume that the heap is a full binary tree:

    n = 1 + 2 + 4 + ... + 2^(d-1)

This is the sum of a geometric sequence. The sum of a geometric sequence can be computed as follows:

    a + a r + a r^2 + ... + a r^(m-1) = a (1 - r^m) / (1 - r)    if r != 1

Applying this formula to our equation above, the number of nodes in a complete binary tree (i.e., a full binary heap) with d levels is given by:

    n = (1 - 2^d) / (1 - 2) = 2^d - 1

We can solve this equation for d. Doing so we get:

    d = ceil(log2(n + 1))

The ceil above is the ceiling operator, and it simply means that we should round up to the next highest integer. Rounding up takes into account that not every heap tree is completely full, so there may be some values of n that won't give us an integer for d otherwise. The following inequality will be useful in determining the computational complexity of phase I of the heapsort algorithm:

    log2(n) < ceil(log2(n + 1)) <= log2(n) + 1    for n >= 1

So far we have been able to determine that the height of a complete binary tree (i.e., the number of levels) is equivalent to the ceiling of the log, base 2, of the number of elements in the tree plus one. Phase I of our algorithm appends each value to the end of the list, where it is sifted up to its final location within the heap. Since sifting up will go through at most d levels, and since the heap grows by one each time, the following summation describes an upper limit of a value that is proportional to the amount of work that must be done in phase I:

    sum_{i=1}^{n} ceil(log2(i + 1))
Applying the inequality presented above to each term, we get a lower and an upper bound for our sum. The lower bound comes from replacing each ceiling term with log2(i); for the upper bound, each ceiling term is at most log2(i + 1) + 1, and the n ones that are part of the summation can be factored out:

    sum_{i=1}^{n} log2(i) < sum_{i=1}^{n} ceil(log2(i + 1)) <= n + sum_{i=1}^{n} log2(i + 1)

We now have a lower and an upper bound for our sum, and essentially the same summation appears in both the lower and the upper bound. But what does sum_{i=1}^{n} log2(i) equal? The following equivalence will help in determining this summation:

    log2(i) = ln(i) / ln(2)

To determine what the summation above is equal to, we can establish a couple of inequalities that bound the sum from above and below. In the figure, the summation can be visualized as the green area: the first term in the summation provides the first green rectangle, the second green rectangle corresponds to the second term in the summation, and so on. The black line in the figure is the plot of the log, base 2, of x. Clearly, the area covered by the green rectangles is bigger than the area under the curve.

Figure: plot of log2(x), with the summation drawn as rectangles above and below the curve.
The area under the curve can be found by taking the definite integral from 1 to n. From this we get the following inequality:

    integral from 1 to n of log2(x) dx < sum_{i=1}^{n} log2(i)

Now consider shifting the entire green area to the right by one. In the figure above, that's the orange area. The orange and green areas are exactly the same size; the orange is just shifted right by one. Now look at the plot of the log, base 2, of x: the area below the curve is now clearly bigger than the orange area. If we imagine this graph going out to n + 1 (since we shifted the orange area to the right), we get the following inequality:

    sum_{i=1}^{n} log2(i) < integral from 1 to n+1 of log2(x) dx

Putting the two inequalities together, we have a lower and an upper bound for our summation:

    integral from 1 to n of log2(x) dx < sum_{i=1}^{n} log2(i) < integral from 1 to n+1 of log2(x) dx

It is easier to integrate using the natural log, so we'll rewrite the integral as follows:

    integral of log2(x) dx = (1 / ln(2)) * integral of ln(x) dx

The constant term can be factored out of the integral, so we'll look at the following integral:

    integral from 1 to n of ln(x) dx

We can find the result of the definite integral that appears above by doing integration by parts. The integration by parts rule is as follows:

    integral of u dv = u v - integral of v du
applying this to our integral we have the following ln and dv dx du dx and    ln dx ln dx  ln    dx ln - ln ( we have proved that the lower bound is proportional to log similarlywe could prove that the upper bound is also proportional to log therefore the work done by inserting elements into heap using the __siftupfrom method is th ( log nwe can do betterif the values in the heap were in the correct order we could achieve (ncomplexity using different approach we will be able to achieve (ncomplexity in all cases phase ii laterwe will investigate how to improve the performance of phase recall that phase of the heapsort algorithm builds heap from list of values phase ii takes the elements out of the heapone at timeand places them in list to save spacethe same list that was used for the heap may be used for the list of values to be returned each pass of phase ii takes one item from the list and places it where it belongs and the size of the heap is decremented by one the key operation is the __siftdownfromto method (fig the siftdownfromto method def __siftdownfromto(selffromindexlastindex)'''fromindex is the index of an element in the heap predata[fromindex lastindexsatisfies the heap conditionexcept perhaps for the element data[fromindexpostthat element is sifted down as far as neccessary to maintain the heap structure for data[fromindex lastindex''to illustrate this methodlet' take our small heap example and start extracting the values from it consider the heap in fig where both the conceptual view and the organization of that heap are shown is at the top of the heap and is also the largest value this copy belongs to 'acha
To illustrate this method, let's take our small heap example and start extracting the values from it. Consider the heap just before phase II begins, where both the conceptual view and the organization of that heap are shown.

(Fig.: just before phase II. Fig.: after swapping the first and last values.)

The largest value is at the top of the heap. If sorted, that value would go at the end of the list, so we swap it with the last value in the heap. By doing this, the largest value is in its final position within the sorted list. The value that was swapped into the root is not in the correct location within the heap, so we call the __siftDownFromTo method to sift it down from the root position, to at most the size - 1 location. The __siftDownFromTo method does its work, swapping the sifted value with the bigger of its two children at each step, and stopping once the sifted value is bigger than both of its children. We then have the view of the heap shown in the figure after the first pass of phase II.

The second pass of phase II swaps the new root, the largest remaining value, with the last value of the shrunken heap, moving it to its final location in the sorted list. It then sifts the swapped value down to its rightful location within the heap, producing the picture in the next figure.
(Fig.: after the first pass of phase II. Fig.: after the second pass of phase II.)

During the third pass of phase II, the next largest value is put in its final location, swapped with the last value of the shrinking heap to make room for it. Although __siftDownFromTo is called, no movement of values within the heap occurs in our example, because the value swapped to the top happens to be the largest value remaining in the heap. During the fourth and final pass, the last two heap values are swapped. No call to __siftDownFromTo is necessary this time, since the heap has size 1 after the swap. Since a heap of size 1 is already sorted and in the right place, we can decrement the size to 0. The list is now sorted in place, without using an additional array.

(Fig.: after the third pass of phase II. Fig.: after the fourth and final pass of phase II.)
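Putting the passes together, phase II is a short loop. Here is a minimal sketch, assuming the heap occupies the first self.size positions of a Python list self.data; the method name __phaseII is an assumption for illustration:

    def __phaseII(self):
        lastIndex = self.size - 1
        while lastIndex > 0:
            # Move the largest remaining value to its final position.
            self.data[0], self.data[lastIndex] = self.data[lastIndex], self.data[0]
            lastIndex -= 1
            # Restore the heap property in the shrunken heap.
            self.__siftDownFromTo(0, lastIndex)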
Analysis of Phase II

The work of phase II is in the calls to the __siftDownFromTo method, which is called n - 1 times. Each call must sift down an element in a tree that shrinks by one element each time. Earlier in this chapter, we did the analysis to determine that the amount of work in the average and worst case is proportional to

$$\sum_{i=2}^{n} \log_2 i = \Theta(n \log n)$$

The best case of phase II would require that all values in the heap are identical. In that case, the computational complexity would be O(n), since the values would never sift down.
This best case scenario brings up a good point: if we could limit how far down each value is sifted, we might be able to speed up phase I. That's the topic of our next section.

The Heapsort Algorithm Version 2

In version one, the heapsort algorithm attained O(n log n) complexity during phase I and phase II. In version two, we will be able to speed phase I of the heapsort algorithm up to O(n) complexity. We do this by limiting how far each newly inserted value must be sifted down. The idea is a pretty simple, yet powerful, technique. Rather than inserting each element at the top of the heap, we'll build the heap, or heaps, from the bottom up. This means that we'll approach the building of our heap by starting at the end of the list rather than the beginning.

An example will help make this more clear. Consider the list of values in the figure below that we wish to sort using heapsort. Rather than starting from the first element of the list, we'll start from the other end. There is no need to start with the last element, as we will see. We need to pick a node that is a parent of some node in the tree: since the final heap is a binary heap, the property we have is that half the nodes of the tree are leaf nodes and cannot be parents of any node within the heap. We can compute the index of the first parent as follows:

    parentIndex = (size - 2) // 2

The size above is the size of the list to be sorted. Note that because the list has indices 0 through size - 1, we must subtract two to compute the proper parentIndex in all cases. The node at parentIndex is where we start building our heaps from the bottom up: it will be the first parent, and we'll sift it down as far as necessary.

(Fig.: the list to be heapsorted.)
The children of the node at parentIndex are found at the following two indices:

    childIndex1 = 2 * parentIndex + 1
    childIndex2 = 2 * parentIndex + 2

In our example, the second of these indices is beyond the last index of the list, so the __siftDownFromTo method will not consider childIndex2. After comparing the first parent and its one child, we see that those two nodes do in fact form a heap, as shown in the figure below. We will show this in the following figures by joining the nodes of a heap with an arrow. We now have one heap fewer than we started with and, more importantly, we only had to sift the parent down one position at the most.

Next, we move back one more element in the list and call __siftDownFromTo, specifying to start from this node. Doing so causes the sift down method to pick the larger of the two children to swap with, forming a heap out of those three values as a result. This is depicted in the second figure.

(Fig.: after forming a sub-heap. Fig.: after forming a second sub-heap.)
Finally, we move backward in the list one more element, to the first element of the list. This time we only need to look at the values of the two children, because they will already be the largest values in their respective heaps. Calling __siftDownFromTo on the first element of the list will pick the maximum of the two children and swap the root with that value, resulting in the situation shown in the first figure below. This doesn't form a complete heap yet: we still need to move the sifted value down again, and __siftDownFromTo takes care of moving it down the heap as shown in the second figure.

(Fig.: sifting the root down. Fig.: the final heap using version 2 of phase I.)
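The whole of version 2 phase I is then a loop over the parents, from the last parent back to the root. A minimal sketch, under the same assumed names (self.data and self.size) as before:

    def __buildFrom(self):
        # Build the heap from the bottom up by sifting down every
        # parent, starting with the last parent and ending at the root.
        lastIndex = self.size - 1
        parentIndex = (self.size - 2) // 2
        while parentIndex >= 0:
            self.__siftDownFromTo(parentIndex, lastIndex)
            parentIndex -= 1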
Analysis of Heapsort Version 2

Recall that phase II is when the values are in a heap and extracted one at a time to form the sorted list. Version 2 phase II of the heapsort algorithm is identical to version 1 and has the same complexity, O(n log n). Version 2 phase I, on the other hand, has changed from the top-down approach of version 1 to building the heap from the bottom up. We claimed that the complexity of this new phase I is O(n), where n is the number of nodes in the list. Stated more formally, we have this claim: for a perfect binary tree of height h, containing 2^h - 1 nodes, the sum of the lengths of its maximum comparison paths is O(2^h - h).

Consider binary heaps of heights 1, 2, and so on, up to height h. From the example for version 2 of the algorithm, it should be clear that the maximum path length for any call to __siftDownFromTo is determined as shown in the following table.

Table: maximum path length for __siftDownFromTo

    Level    Max path length    Number of nodes at level
    1        h - 1              2^0
    2        h - 2              2^1
    ...      ...                ...
    h - 1    1                  2^(h-2)
    h        0                  2^(h-1)

Notice that 2^(h-1) represents half the nodes in the final heap (the leaf nodes), and that the max path length for that half of the nodes is 0. It is this observation that leads to a more efficient algorithm for building a heap from the bottom up. If we add up all these maximum path lengths, then we have an upper bound, S, for the amount of work to be done during phase I of version 2 of this algorithm:

$$S = 0 \cdot 2^{h-1} + 1 \cdot 2^{h-2} + 2 \cdot 2^{h-3} + \cdots + (h-1) \cdot 2^0$$

The value S is an upper bound of the work to be done: the sum of the maximum path lengths. We can eliminate most of the terms in this sum with a little manipulation of the formula. The value of S can be computed as S = 2S - S. Using this formula, we can write:

$$2S - S = \left[0 \cdot 2^h + 1 \cdot 2^{h-1} + 2 \cdot 2^{h-2} + \cdots + (h-1) \cdot 2^1\right] - \left[0 \cdot 2^{h-1} + 1 \cdot 2^{h-2} + \cdots + (h-1) \cdot 2^0\right]$$

If we line up the terms in the equation above by matching powers of 2, we can subtract like terms. In the first like term we see 1 · 2^(h-1) - 0 · 2^(h-1), which simplifies to 2^(h-1).
(Fig.: a binary heap of height 4.)

Similarly, the other like terms simplify, so we end up with the following formula for S:

$$S = (2^{h-1} + 2^{h-2} + \cdots + 2^1) - (h - 1) = (2^h - 2) - (h - 1) = 2^h - h - 1 = O(n)$$

where n = 2^h - 1 is the number of nodes. In the next-to-last step of the simplification above we have the sum of the first h - 1 powers of 2, also known as the sum of a geometric sequence. This sum is equal to 2^h - 2, which can be proven with a simple proof by induction. So, we have just proved that version 2 of phase I is O(n). Phase II is still O(n log n), so the overall complexity of heapsort is O(n log n).

Consider a binary heap of height 4, as pictured in the figure. In such a heap, using the sift down method, the first sifting occurs one level above the leaves, where we have four nodes that may each travel down one level in the tree. One level higher, we have two nodes that may each travel down two levels. Finally, the root node may travel down three levels. We have the following sum of maximum path lengths:

    4 * 1 + 2 * 2 + 1 * 3 = 11 = 2^4 - 4 - 1
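The closed form can also be checked numerically. The following short script (an addition for illustration, not from the original text) sums the maximum path lengths level by level and compares the total against 2^h - h - 1:

    # Verify that the level-by-level sum of maximum sift-down path
    # lengths in a perfect binary tree of height h equals 2**h - h - 1.
    for h in range(1, 11):
        total = sum((h - level) * 2 ** (level - 1) for level in range(1, h + 1))
        assert total == 2 ** h - h - 1, (h, total)
    print("Closed form verified for h = 1..10")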
Comparison to Other Sorting Algorithms

The heapsort algorithm operates in O(n log n) time, the same complexity as the quicksort algorithm. A key difference is in the movement of individual values. In quicksort, values are always moved toward their final location. Heapsort moves values first to form a heap, then moves them again to arrive at their final location.

(Fig.: comparison of several sorting algorithms.)

When sorting a list, quicksort is more efficient than heapsort in practice, even though they have the same computational complexity. Examining the figure, we see selection sort operating with Θ(n²) complexity, which is not acceptable except for very short lists. The quicksort algorithm behaves more favorably than the heapsort algorithm, as is expected. The built-in sort, which is implemented in C, runs the fastest of all, due to being implemented in C.

Summary

This chapter introduced heaps and the heapsort algorithm. Building a heap can be done efficiently, in O(n) time. A heap guarantees the top element will be either the biggest or smallest element of an ordered collection of values. Using this principle, we can implement many algorithms and datatypes using heaps. The heapsort algorithm was presented in this chapter as one example of a use for heaps.

Heaps are not good for looking up values: looking up a value in a heap would take O(n) time and would be no better than a linear search of a list. This is because there is no ordering of the elements within a heap, except that the largest (or smallest) value is on top. You cannot determine where in a heap a value is located without searching the entire heap, unless the value happens to be greater than or equal to the largest value and you have a largest-on-top heap. Likewise, if you have a smallest-on-top heap and are looking for a value, you would have to look at all values unless the value you are searching for is less than or equal to the smallest value.

Commonly, heaps are used to implement priority queues, where the elements of a queue are ordered according to some kind of priority value. An element can be added to an existing heap in O(log n) time, and an element can be removed from a heap in O(log n) time as well. This makes a heap the logical choice for a priority queue implementation. Priority queues are useful in message passing frameworks, and especially in some graph algorithms and heuristic search algorithms.
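As an illustration of this use of heaps, here is a minimal smallest-on-top priority queue sketch built on Python's heapq module; writing your own heap-based version is left as a programming problem below:

    import heapq

    class PriorityQueue:
        def __init__(self):
            self.heap = []

        def enqueue(self, item, priority):
            # O(log n): tuples compare by priority first, so the
            # smallest priority stays on top of the heap.
            heapq.heappush(self.heap, (priority, item))

        def dequeue(self):
            # O(log n): remove and return the highest-priority item.
            priority, item = heapq.heappop(self.heap)
            return item

    pq = PriorityQueue()
    pq.enqueue("low priority work", 10)
    pq.enqueue("urgent work", 1)
    print(pq.dequeue())  # prints "urgent work"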
Review Questions

Answer these short answer, multiple choice, and true/false questions to test your mastery of the chapter.

1. State the heap property for a largest-on-top heap.
2. When removing a value from a heap, which value are you likely to remove? Why?
3. After removing a value from a heap, what steps do you have to take to ensure you still have a heap?
4. If you had a heap of height h, what would be the total maximum travel distance for all nodes in the heap as you built it using version 2, phase I of the heapsort algorithm?
5. Use __siftUpFrom (from version 1 of the heapsort algorithm), adding a new element to the growing heap on each pass, to construct a largest-on-top heap from the given integers. Sketch a new picture of the binary heap each time the structure changes.
6. Use __siftDownFromTo (from version 2 of the heapsort algorithm) on the same data as in the previous problem, sketching a new picture of the binary tree each time the structure changes.
7. Using the final heap from the previous problem, execute phase II of the heapsort algorithm, using __siftDownFromTo to sort the data in increasing order. Sketch a new picture of the binary tree each time the structure changes.
8. Redo the previous two problems, this time showing the data in arrays (i.e. lists) with starting index 0, rather than drawing the tree structures. Show the new values of the structure after each pass, using the given data.
9. Why does heapsort operate less efficiently than quicksort?
10. When is a heap commonly used?
Programming Problems

1. Implement a version of the heapsort algorithm. Run your own tests using heapsort and quicksort to compare the execution time of the two sorting algorithms. Output your data in the plot format and plot your data using the plotdata.py program provided on the text website.
2. Implement both version 1 and version 2 of the heapsort algorithm and compare the execution times of the two variations. Gather experimental data in the XML format accepted by the plotdata.py program and plot that data to see the difference between using version 1 and version 2 of the heapsort algorithm.
3. Implement a smallest-on-top heap and use it in implementing a priority queue. A priority queue has enqueue and dequeue methods. When enqueueing an item on a priority queue, a priority is provided. Elements enqueued on the queue include both the data item and the priority. Write a test program to test your priority queue data structure.
4. Use the priority queue from the last exercise to implement Dijkstra's algorithm from the chapter on graphs. The priority queue implementation of Dijkstra's algorithm is more efficient. The priority of each element is the cost so far of each vertex added to the priority queue. By dequeueing from the priority queue, we automatically get the next lowest cost vertex from the queue without searching, resulting in O(|V| log |V|) complexity instead of O(|V|²).
5. Use the heapsort algorithm, either version 1 or version 2, to implement Kruskal's algorithm from the chapter on graphs. Use one of the sample graph XML files found on the text website as your input data to test your program.
Balanced Binary Search Trees

Earlier in this text, binary search trees were defined along with a recursive insert algorithm. The discussion of binary search trees pointed out that they have problems in some cases: binary search trees can become unbalanced, actually quite often. When a tree is unbalanced, the complexity of insert, delete, and lookup operations can get as bad as O(n).

This problem with unbalanced binary search trees was the motivation for the development of height-balanced AVL trees by Adelson-Velskii and Landis, two Soviet computer scientists, in 1962. AVL trees were named for these two inventors. Their paper on AVL trees described the first algorithm for maintaining balanced binary search trees. Balanced binary search trees provide O(log n) insert, delete, and lookup operations. In addition, a balanced binary search tree maintains its items in sorted order: an infix traversal of a binary search tree will yield its items in ascending order, and this traversal can be accomplished in O(n) time, assuming the tree is already built.

The HashSet and HashMap classes provide very efficient insert, delete, and lookup operations as well, more efficient than the corresponding binary search tree operations. Heaps also provide O(log n) insert and delete operations. But neither hash tables nor heaps maintain their elements as an ordered sequence. If you want to perform many insert and delete operations and need to iterate over a sequence in ascending or descending order, perhaps many times, then a balanced binary search tree data structure may be more appropriate.

Goals

This chapter describes why binary search trees can become unbalanced. Then it goes on to describe several implementations of two types of height-balanced trees: AVL trees and splay trees. By the end of this chapter you should be able to implement your own AVL or splay tree datatype, with either iteratively or recursively implemented operations.
Binary Search Trees

A binary search tree comes in handy when a large number of insert, delete, and lookup operations are required by an application, while at times it is necessary to traverse the items in ascending or descending order. Consider a website like Wikipedia that provides access to a large set of online materials. Imagine the designers of the website want to keep a log of all the users that have accessed the website within the last hour. The website might operate as follows.

Each visitor accesses the website with a unique cookie. When a visitor accesses the site, their cookie, along with a date and time, is recorded in a log on the site's server. If they have accessed the site within the last two hours, their cookie and access time may already be recorded; in that case, their last access date and time is updated. Every hour, a snapshot is generated as to who is currently accessing the site. The snapshot is to be generated in ascending order of the unique cookie numbers. After a patron has been inactive for at least an hour, according to the snapshot, their information is deleted from the record of website activity.

Since the site is quite large, with thousands, if not tens of thousands or more, people accessing it every hour, the data structure to hold this information must be fast. It must be fast to insert, lookup, and delete entries. It must also be quick to take the snapshot, since the website will hold up all requests while the snapshot is taken. If the number of users that come and go during an hour on a site like Wikipedia is typically higher than the number that stay around for long periods of time, it may be most efficient to rebuild the tree from the activity log rather than delete each entry after it has been inactive for at least an hour. This would be true if the number of people still active on the site is much smaller than the number of inactive entries in the snapshot of the log. In this case, rebuilding the log after deleting inactive patrons must be fast as well.

A binary search tree is a logical choice for the organization of this log, if we could guarantee O(log n) lookup, insert, and delete, along with O(n) time to take a snapshot. However, a binary search tree has one big problem. Recall that as the snapshot is taken, the log may be rebuilt with only the recently active users, and furthermore the cookies will be accessed in ascending order while rebuilding the log. Consider the insert operation on binary search trees, shown earlier in the text. When the binary search tree is rebuilt, the items to insert into the new tree will be added in ascending order. The result is an unbalanced tree.

Binary search tree insert:

    def __insert(root, val):
        if root == None:
            return BinarySearchTree.__Node(val)
        if val < root.getVal():
            root.setLeft(BinarySearchTree.__insert(root.getLeft(), val))
        else:
            root.setRight(BinarySearchTree.__insert(root.getRight(), val))
        return root

If items are inserted into a binary search tree in ascending order, the effect is that execution always takes the else branch, and the setRight call puts the new value in the rightmost location of the binary search tree, since it is the largest value inserted so far. The resulting tree is a stick, extending down and to the right, without any balance to the tree. Inserting the next bigger value will result in traversing each and every value that has already been inserted to find the location of the new value. This means that the first value takes zero comparisons to insert, while the second requires one comparison to find its final location, the third value requires two comparisons, and so on. The total number of comparisons to build the tree is O(n²), as proved earlier in the text. This complexity will be much too slow for any site getting a reasonable amount of activity in an hour. In addition, when the height of the binary search tree is n, where n is the number of values in the tree, the lookup, insert, and delete times are O(n) for both the worst and average cases. When the tree is a stick, or even close to being a stick, the efficiency characteristics of a binary search tree are no better than those of a linked list.
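To see the degenerate behavior concretely, here is a small self-contained sketch (an illustration, not from the original text) that inserts ascending values into a plain binary search tree and reports the tree's height:

    class Node:
        def __init__(self, val):
            self.val = val
            self.left = None
            self.right = None

    def insert(root, val):
        if root is None:
            return Node(val)
        if val < root.val:
            root.left = insert(root.left, val)
        else:
            root.right = insert(root.right, val)
        return root

    def height(root):
        if root is None:
            return 0
        return 1 + max(height(root.left), height(root.right))

    root = None
    for v in range(1, 101):
        root = insert(root, v)
    print(height(root))  # prints 100: the tree is a stick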
AVL Trees

A binary search tree that stays balanced would provide everything that is required by the website log described in the last section. AVL trees are binary search trees with additional information to maintain their balance. The height of an AVL tree is guaranteed to be O(log n), thus guaranteeing that lookup, insert, and delete operations will all complete in O(log n) time. With these guarantees, an AVL tree can be built in O(n log n) time from a sequence of items. Moreover, AVL trees, like binary search trees, can be traversed using an inorder traversal, yielding their items in ascending order in O(n) time.

Definitions

To understand how AVL trees work, a few definitions are in order.

Height(tree): The height of a tree is one plus the maximum height of its subtrees. The height of a leaf node is one.

Balance(tree): The balance of a node in a binary tree is height(right subtree) - height(left subtree).

AVL tree: An AVL tree is a binary tree in which the balance of every node in the tree is -1, 0, or 1.

Implementation Alternatives

Looking back at the implementation of binary search trees earlier in the text, inserting a value into a tree can be written recursively. Inserting into an AVL tree can also be implemented recursively. It is also possible to implement inserting a value into an AVL tree iteratively, using a loop and a stack. This chapter explores both alternatives. Additionally, the balance of an AVL tree can be maintained using either the height of each node in the tree or the balance of each node in the tree. Implementations of AVL tree nodes store either their balance or their height. As values are inserted into the tree, the balance or height values of affected nodes are updated to reflect the addition of the new item in the tree.

AVLNode with stored balance:

    class AVLTree:
        class AVLNode:
            def __init__(self, item, balance=0, left=None, right=None):
                self.item = item
                self.left = left
                self.right = right
                self.balance = balance

            def __repr__(self):
                return "AVLTree.AVLNode(" + repr(self.item) + \
                       ",balance=" + repr(self.balance) + \
                       ",left=" + repr(self.left) + \
                       ",right=" + repr(self.right) + ")"

Whether implementing insert recursively or iteratively, the node class from earlier in the text must be extended slightly to accommodate either the balance or the height of the node, as in the code fragment above. The first implementation of AVLTree that we'll explore is a balance-storing, iterative version of the algorithm. Notice that the AVLNode implementation is buried inside the AVLTree class to hide it from users of the AVLTree class. While Python does not actually prevent access to the AVLNode class from outside the AVLTree class, by convention users of the AVLTree data structure should know to leave the internals of the tree alone. AVL trees are created by users of this data structure, but not AVL nodes; the creation of nodes is handled by the AVLTree class.

The AVLNode constructor has default values for balance, left, and right, which makes it easy to construct AVLTrees when debugging code. The __repr__ function prints an AVLNode in a form that can be used to construct such a node: calling print(repr(node)) will print a node so it can be provided to Python to construct a sample tree. The repr(self.left) and repr(self.right) calls are recursive calls to the __repr__ function, so the entire tree rooted at self is printed. The same __iter__ function from the earlier binary search tree implementation will work to traverse an AVLTree; the iterator function will yield all the values of the tree in ascending order.
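For reference, an inorder iterator for the node class might look like the following sketch (an assumption for illustration, not necessarily the book's exact code):

    def __iter__(self):
        # Yield the left subtree, then this item, then the right
        # subtree, producing the items in ascending order.
        if self.left is not None:
            for item in self.left:
                yield item
        yield self.item
        if self.right is not None:
            for item in self.right:
                yield item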
Examples in this chapter will refer to the balance of nodes in an AVL tree. It turns out that storing the balance of a node is sufficient to correctly implement height-balanced AVL trees, but perhaps a bit more difficult to maintain than the height of each node in the tree. Later in the chapter, modifications to these algorithms are discussed that maintain the height of each node instead. Whether storing height or balance in AVL trees, the complexity of the tree operations is not affected.

AVL Tree Iterative Insert

As described in the last section, there are two variants of the insert algorithm for height-balanced AVL trees: insert can be performed iteratively or recursively. The balance can also be stored explicitly, or it can be computed from the height of each subtree. This section describes how to maintain the balance explicitly, without maintaining the height of each node.

Iteratively inserting a new value in a height-balanced AVL tree requires keeping track of the path to the newly inserted value. To maintain that path, a stack is used; we'll call this stack the path stack in the algorithm. To insert a new node, we follow the unique search path from the root to the new node's location, pushing each node on the path stack as we proceed, just as if we were adding it to a binary search tree. We insert the new item where it belongs, according to the binary search tree property. Then, the algorithm proceeds by popping values from the path stack and adjusting their balances until a node is found that has a balance not equal to zero before being adjusted. This node, which is the closest ancestor with a non-zero balance, is called the pivot. Based on the pivot and the location of the new value, there are three mutually exclusive cases to consider, described below. After making the adjustments in case 3 below, there may be a new root node for the subtree rooted at the pivot. If this is the case, the parent of the pivot is the next node on the path stack and can be linked to the new subtree. If the path stack is empty after popping the pivot, then the pivot was the root of the tree; in this case, the root pointer of the AVL tree can be made to point to the new root node of the tree.

As mentioned above, one of three cases will arise when inserting a new value into the tree.

Case 1: No Pivot. There is no pivot node; in other words, the balance of each node along the path was 0. In this case, just adjust the balance of each node on the search path, based on the relative value of the new key with respect to the key of each node. You can use the path stack to examine the path to the new node. This case is depicted in the first figure below, where a new value is to be added to the AVL tree; in each node, the value is shown on the left and the balance on the right. Each node along the search path is pushed onto the path stack. The balance of the new node is set to 0. Each ancestor whose right subtree received the new value has its balance increased by one, and each ancestor whose left subtree received the new value has its balance decreased by one. The second figure depicts the tree after inserting the new value.
(Fig.: AVL tree case 1, no pivot node. Fig.: AVL tree case 2, no rotate.)

Case 2: Adjust Balances. The pivot node exists. Further, the subtree of the pivot node in which the new node was added has the smaller height. In this case, just change the balance of the nodes along the search path, from the new node up to the pivot node. The balances of the nodes above the pivot node are unaffected. This is true because the height of the subtree rooted at the pivot node is not changed by the insertion of the new node.

The figure depicts this case. A new item is about to be added below the pivot node. Since the value to be inserted goes into the pivot's shorter subtree, the new node could possibly help to better balance the tree, and the AVL tree remains an AVL tree. The balances of nodes up to the pivot must be adjusted; balances above the pivot need not be adjusted, because they are unaffected. The figure also depicts what the tree looks like after inserting the new value into the tree.

Case 3: The pivot node exists. This time, however, the new node is added to the subtree of the pivot of larger height (the subtree in the direction of the imbalance). This will cause the pivot node to have a balance of -2 or 2 after inserting the new node, so the tree will no longer be an AVL tree. There are two subcases here, requiring either a single rotation or a double rotation to restore the tree to AVL status. Call the child of the pivot node in the direction of the imbalance the bad child.

Subcase A: Single Rotation. This subcase occurs when the new node is added to the subtree of the bad child which is also in the direction of the imbalance. The solution is a rotation at the pivot node in the opposite direction of the imbalance.
(Fig.: AVL tree case 3A, single rotation.)

After the rotation, the tree is still a binary search tree. In addition, the subtree rooted at the pivot is balanced once again, decreasing its overall height by one. The figure illustrates this subcase. A new value is to be inserted to the left of the pivot's left child; doing so would cause the balance of the pivot node to decrease to -2. The pivot is the closest ancestor with improper balance, and the yellow node is the bad child. In addition, the new value is being inserted in the same direction as the imbalance: the imbalance is on the left, and the new value is being inserted on the left. The solution is to rotate the subtree rooted at the pivot to the right, resulting in the tree pictured in the figure.

Subcase B: Double Rotation. This subcase occurs when the new node is added to the subtree of the bad child which is in the opposite direction of the imbalance. For this subcase, call the child node of the bad child which lies on the search path the bad grandchild; in some cases, there may not be a bad grandchild. In the figure, the bad grandchild is the purple node. The solution is as follows:

1. Perform a single rotation at the bad child in the direction of the imbalance.
2. Perform a single rotation at the pivot away from the imbalance.

(Fig.: AVL tree case 3B, double rotation.)
(Fig.: AVL tree case 3B, step 1: rotate toward the imbalance.)

Again, the tree is still a binary search tree, and the height of the subtree in the position of the original pivot node is not changed by the double rotation. The figure illustrates this situation. The pivot in this case is the root of the tree; its right child is the bad child, and the bad grandchild lies below the bad child on the search path. The imbalance in the tree is to the right of the pivot, yet the new value is being inserted to the left of the bad child. The first step is a rotation to the right at the bad child. This brings the bad grandchild up, somewhat helping to balance the right side of the tree. The second step, depicted in the next figure, rotates to the left at the pivot, bringing the whole tree into balance again.

The trickiest part of this algorithm is updating the balances correctly. First, only the pivot, bad child, and bad grandchild have balances that may change. If there is no bad grandchild, then the pivot's and bad child's balances will be zero. If there is a bad grandchild, as is the case here, then there is a little more work to determining the balances of the pivot and the bad child. When the bad grandchild exists, its balance is 0 after the double rotation. The balances of the bad child and pivot depend on the direction of the rotation and on the value of the new item relative to the bad grandchild's item. This can be analyzed on a case by case basis to determine the balances of both the pivot and the bad child. In the next section we examine how the balances are calculated.

(Fig.: AVL tree case 3B, step 2: rotate away from the imbalance.)
Rotations

Both cases 1 and 2 are trivial to implement, as they simply adjust balances. Case 3 is by far the hardest of the cases to implement. Rotating a subtree is the operation that keeps the tree balanced as new nodes are inserted into it. For case 3, the tree is in a state where a new node is going to be added to the tree, causing an imbalance that must be dealt with. There are two possibilities.

The figure below depicts the first of these possible situations. The new node may be inserted to the left of the bad child, A, when the subtree anchored at the pivot node is already weighted to the left. The pivot node, B, is the nearest ancestor with a non-zero balance. For node B to have balance -1 before inserting the new node, its right subtree must have height h while its left subtree has height h + 1. Adding the new node into the subtree of the bad child would result in the pivot having balance -2, which is not allowed. A right rotation resolves the problem and maintains the binary search tree property. The bad child's right subtree moves in the rotation, but before the rotation all values in it must have been less than B and greater than A; after the rotation this would also be true, which means the tree remains a binary search tree. Inserting a value to the right of the bad child when the imbalance is to the right results in an analogous situation, requiring a left rotation.

Notice that in either rotation, the balances of nodes A and B become zero. This only applies to case 3A and does not hold in the case of a double rotation: after a single rotation in either direction, the balance of both nodes, the pivot and the bad child, becomes zero, and case 3A is not possible under any other circumstances. For case 3B we must deal not only with a pivot and bad child, but also a bad grandchild. As described in the previous section, this case occurs when inserting a new value under the bad child in the opposite direction of the imbalance.

(Fig.: AVL tree case 3, right rotation.)
(Fig.: AVL tree case 3, left rotation.)

For instance, the subtree in the figure is weighted to the left and the new node is inserted to the right of the bad child. An analogous situation occurs when the subtree is weighted to the right and the new node is inserted into the left subtree of the bad child. When either situation occurs, a double rotation is needed to bring the tree back into balance.

The figure suggests that there are two possible subcases, but there are actually three. It is possible there is no bad grandchild: in that case, the newly inserted node will end up in the location that would have been occupied by the bad grandchild. Otherwise, the new node might be inserted to the left or to the right of the bad grandchild, node C in the figure. Either way, the first step is to rotate left at the bad child, node A. Then a right rotation at the pivot, node B, completes the rebalancing of the tree.

Again, the trickiest part of this implementation is the calculation of the balance of each node. The bad grandchild, node C in the figure, becomes the new root of the subtree and always has a balance of 0. If there is no bad grandchild, then the new subtree root is the newly inserted value. If there was a bad grandchild, and the new item was less than the bad grandchild's item, the balance of the bad child is 0 and the balance of the old pivot is 1. If the new item was inserted to the right of the bad grandchild, then the balance of the bad child is -1 and the balance of the old pivot is 0. All other balances remain the same, including balances above the pivot, because the overall height of the tree before inserting the new value and after inserting the new value has not changed.

Again, an analogous situation occurs in the mirror image of the figure. When a new value is inserted into the left subtree of a bad child which is in the right subtree of the pivot, and the tree is already weighted more heavily to the right, then a double rotation is also required, rotating first right at the bad child and then left at the pivot.
(Fig.: AVL tree case 3B, steps 1 and 2.)
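The rotations themselves re-link just a few references. Here is a minimal sketch of the two single rotations, using assumed attribute names left and right; the balance updates described above are handled separately by the caller, and this is an illustration rather than the book's exact code:

    def rotateRight(pivot):
        badChild = pivot.left
        # The bad child's right subtree moves: its values lie between
        # the bad child's and the pivot's, so the BST property holds.
        pivot.left = badChild.right
        badChild.right = pivot
        # The bad child is the new root of this subtree.
        return badChild

    def rotateLeft(pivot):
        badChild = pivot.right
        pivot.right = badChild.left
        badChild.left = pivot
        return badChild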
AVL Tree Recursive Insert

When implementing a recursive function, it is much easier to write it as a stand-alone function as opposed to a method of a class. This is because a stand-alone function may be called on nothing (i.e. None in the case of Python), while a method must always have a non-null self reference. Writing recursive functions as methods leads to special cases for self. For instance, the insert method, if written recursively, is easier to implement if it calls __insert as its recursive function.

The __insert function for ordinary binary search trees, shown earlier, won't suffice for height-balanced AVL trees. The insert algorithm must take into account the current balance of the tree and operate to maintain the balance, as we discussed in the three cases presented in the previous section.

The recursive insert AVL tree class declaration:

    class AVLTree:
        class AVLNode:
            def __init__(self, item, balance=0, left=None, right=None):
                self.item = item
                self.left = left
                self.right = right
                self.balance = balance

            # Other methods to be written here, like __iter__
            # and __repr__, as seen earlier.

        def __init__(self, root=None):
            self.root = root

        def insert(self, item):
            def __insert(root, item):
                # Code to be written here.
                return root

            self.pivotFound = False
            self.root = __insert(self.root, item)

        def __repr__(self):
            return "AVLTree(" + repr(self.root) + ")"

        def __iter__(self):
            return iter(self.root)

The shell of the recursive implementation is given above. The algorithm proceeds much like a combination of the three cases presented earlier, along with the recursive binary search tree insert. There is no path stack in the recursive implementation; instead, the run-time stack serves that purpose. As each recursive call to __insert returns, there is an opportunity to rebalance the tree: as the code works its way back up from the recursive calls, the balances of each node can be adjusted accordingly. Adjusting balances before returning implements cases one and two as described earlier in the chapter.
Case three is detected when a balance of -2 or 2 results from rebalancing. In that case, the pivot is found and rebalancing according to case 3 can occur. Should a pivot be found, no balancing need occur above the pivot. This is the purpose of the self.pivotFound variable, initialized in the insert method of the code above: this flag can be set to True to avoid any balancing above the pivot node, should one be found. Balances are adjusted just as described in the case by case analysis earlier in the chapter; in the worst case, the balances of the pivot and bad child will need to be adjusted.

Implementing both the iterative and the recursive versions of insert into AVL trees helps illustrate the special cases that must be handled in the iterative version, while the recursive version does not need special cases. The recursive version does not need special case handling because of the way __insert works: the function is always given the root node of a tree in which to insert the new item, and returns the root node of the tree after inserting that item. Since it works in such a regular way, special case handling is not necessary.

Maintaining Balance Versus Height

The two implementations presented so far, the recursive and iterative insert algorithms for AVL trees, maintained the balance of each node. As an alternative, the height of each node could be maintained. In this case, the height of a leaf node is 1; the height of any other node is 1 plus the maximum height of its two subtrees; and the height of an empty tree, or None, is 0.

AVLNode with stored height:

    class AVLNode:
        def __init__(self, item, height=1, left=None, right=None):
            self.item = item
            self.left = left
            self.right = right
            self.height = height

        def balance(self):
            return AVLTree.height(self.right) - AVLTree.height(self.left)

If the height of nodes is maintained instead of balances, all heights on the path to the new item's inserted location must be adjusted on the way back up the tree. Unlike balances, it is not possible to stop adjusting heights at the pivot node. After a rotation, the heights of the pivot and bad child must also be recomputed, as the rotation may change their heights. Since heights are computed bottom-up, all heights on the path, including the heights of the pivot and bad child, should be recomputed in bottom-up fashion. The code above provides a partial declaration of an AVLNode storing the height of the tree rooted at the node. In this implementation, the balance of any node can be computed from the heights of its two subtrees.
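The balance method above calls an AVLTree.height helper that is not shown. Following the definitions just given (an empty tree, or None, has height 0), one way it might be written is sketched here:

    class AVLTree:
        @staticmethod
        def height(node):
            # None represents an empty tree and has height 0.
            if node is None:
                return 0
            return node.height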
Deleting an Item from an AVL Tree

Deleting a value from an AVL tree can be accomplished in the same way as deleting from an ordinary binary search tree. However, it is necessary to adjust balances on the way back from deleting the final leaf node. This can be done either by maintaining a path stack, if delete is implemented iteratively, or by adjusting balances or heights while returning from the recursive calls in a recursive implementation of delete. In either case, when the adjusted balance of a node on the path reaches 2, a left rotation is required to rebalance the tree; if the adjusted balance of a node on the path results in -2, then a right rotation is required. These rotations may cascade back up the path to the root of the tree.

Splay Trees

AVL trees are always balanced, since the balance of each node is computed and maintained to be either -1, 0, or 1. Because they are balanced, they guarantee O(log n) lookup, insert, and delete time. An AVL tree is a binary search tree, so it also maintains its items in sorted order, allowing iteration from the smallest to the largest item in O(n) time. While there don't seem to be many downsides to this data structure, there is a possible improvement in the form of splay trees.

One of the criticisms of AVL trees is that each node must maintain its balance. The extra work and extra space that are required for this balance maintenance might be unnecessary. What if a binary search tree could maintain its balance well enough without storing the balance in each node? Storing the balance or the height of each node increases the size of the data in memory. This was a bigger concern when memory sizes were smaller; but maintaining the extra information takes extra time as well. What if we could not only reduce the overall data size, but also eliminate some of the work in maintaining the balance of a binary search tree?

The improvement to AVL trees incorporates the concept of spatial locality. This idea reflects the nature of interaction with large data sets: access to a large data set is often localized, meaning that the same piece or several pieces of data might be accessed several times over a short period of time, and then may not be accessed for some time while some other relatively small subset of the data is accessed, by either inserting new values or looking up old values. Spatial locality means that a relatively small subset of data is accessed over a short period of time.

In terms of our example at the beginning of the chapter, a tree containing cookies may have cookies that are assigned when a user first visits the website. A user coming into the website will interact for a while and then leave, probably not coming back soon again. The set of users who are interacting with the web server will change over time, but it is always a relatively small subset compared to the overall number of entries in the tree. If we could store the cookies of the recent users closer to the top of the tree, we might be able to improve the overall time for looking up and inserting new values in the tree. The complexity won't improve; inserting an item will still take O(log n) time. But the overall time to insert or look up an item might improve a little bit. This is the motivation for a splay tree.
In a splay tree, each insert or lookup moves the inserted or looked-up value to the root of the tree through a process called splaying. When deleting a value, the parent may be splayed to the root of the tree. A splay tree is still a binary search tree. Splay trees usually remain well-balanced but, unlike an AVL tree, a splay tree does not contain any balance or height information. Splaying a node to the root involves a series of rotates, much like the rotates of AVL trees, but with a slight difference.

It is interesting to note that while splay trees are designed to exploit spatial locality in the data, they are not dependent on spatial locality to perform well. Splay trees function as well as, or better than, AVL trees in practice, even on completely random data sets.

There are several things that are interesting about splay trees. First, the splaying process does not require the balance or any other information about the height of subtrees; the binary search tree structure is good enough. Splay trees don't stay perfectly balanced all the time. However, because they stay relatively balanced, they are balanced enough to get an average case complexity of O(log n) for insert, lookup, and delete operations. This idea that they are good enough is the basis for what is called amortized complexity, which is discussed later in this chapter.

Splaying is relatively simple to implement. In this text we cover two bottom-up splay tree implementations: splay trees can be implemented either iteratively or recursively, and we examine both implementations here. Earlier in the text, binary search tree insert was implemented recursively. If splaying is to be done recursively, the splay can be part of the insert function. If written iteratively, a stack can be used in the splaying process. The following sections cover both the iterative and recursive implementations, but first we examine the rotations that are used in splaying.

Splay Rotations

Each time a value is inserted or looked up, the node containing that value is splayed to the top through a series of rotate operations. Unlike AVL trees, a splay tree employs a double rotation to move a node up to the level of its grandparent, if a grandparent exists. Through a series of double rotations, the node will either make it to the root or to a child of the root. If the splayed node makes it to a child of the root, a single rotation is used to bring it to the root.

The single rotate functions are often labelled zig or zag, while the double rotations are called zig-zig or zig-zag operations, depending on the direction of the movement of the splayed node: sometimes the node moves with a zig-zag motion, while other times it moves with a zig-zig motion.

Splaying happens when a value is inserted into, looked up in, or deleted from a splay tree. When a value is looked up, either the searched-for value is splayed to the top or, if the value is not found in the tree, the would-be parent of the value is splayed to the top.
Deletion from the tree can be implemented like delete from any other binary search tree; when a value is deleted from a binary search tree, the parent of the deleted node is splayed to the root of the tree.

The example in the figure below depicts the splay operations that result from inserting a sequence of new nodes (shown in green) into a splay tree. Each newly inserted value is splayed to the root of the tree before the next value is inserted. Depending on where a new value lands, moving it to the root may be accomplished through a zig-zig rotation called a double-right rotation, a zig-zag rotation called a right-left rotation, a double-left rotation possibly followed by a single left rotation, or a double-right rotation followed by a left-right rotation. The double-right rotation is often called a zig-zig rotation, as is the double-left rotation; the left-right and right-left rotations are often called zig-zag rotations. The end result in each case has the newly inserted node, or looked-up node, splayed to the root of the tree. The following figures depict these splay operations and give some intuitive understanding of why splay trees work as well as they do.

(Fig.: splay tree double-right rotate.)
(Fig.: splay tree double-left rotate. Fig.: splay tree right-left rotate.)
(Fig.: splay tree left-right rotate.)

After the rotate operations depicted in the figures, the subtree rooted at the splayed child appears to be more balanced than before those rotations. Notice that doing a left-right rotation is not the same as doing a left rotation followed by a right rotation: the splay left-right rotate yields a different result. Likewise, the splay right-left rotate yields a different result than a right rotation followed by a left rotation. Splay zig-zag rotates are designed this way to help balance the tree. The figures depict trees that might be slightly out of balance before the rotation, brought into much better balance by the right-left rotation or the left-right rotation.

Iterative Splaying

Each time a value is inserted or looked up, it is splayed to the root of the splay tree through a series of rotations, as described in the previous section. The double rotation operations will either move the value to the root or to a child of the root of the tree. If the double rotates result in the newly inserted value being a child of the root of the tree, a single rotate is used to move the newly inserted value to the root, as depicted in the example figure below.
(Fig.: splay tree example.)

Inserting a new value into a binary search tree without recursion is possible using a while loop. The while loop moves from the root of the tree to the leaf node which will become the new node's parent, at which point the loop terminates, the new node is created, and the parent is hooked up to its new child. After inserting the new node, it must be splayed to the top.

To splay, it is necessary to know the path that was taken through the tree to the newly inserted node. This path can be recorded using a stack: as the insert loop passes through each node in the tree, that node is pushed onto the stack. The end result is that all nodes on the path, from the root to the new child, are pushed onto this path stack.

Finally, splaying can occur by emptying this path stack. First the child is popped from the stack. Then, the rest of the stack is emptied as follows. If two more nodes are available on the stack, they are the parent and grandparent of the newly inserted node; in that case a double rotate can be performed, resulting in the root of the newly rotated subtree being the newly inserted node. Which double rotation is required can be determined from the values of the grandparent, parent, and child. If only one node remains on the stack, it is the parent of the newly inserted node, and a single rotation will bring the newly inserted node to the root of the splay tree.

Implementing splay in the manner described here works well when looking up a value in the tree, whether it is found or not. When a value is found, it will be added to the path stack. When a value is not found, the parent should be splayed to the top, which naturally occurs, because the parent will be left on the top of the path stack when splaying is performed.
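Sketched below is one way the splaying loop might be written. The helper methods doubleRotate, rotateLeft, and rotateRight are assumptions for illustration (each returning the new root of the subtree it rotates), and this is not the book's exact code:

    def splay(self, pathStack):
        child = pathStack.pop()
        while len(pathStack) >= 2:
            parent = pathStack.pop()
            grandparent = pathStack.pop()
            # The relative order of the three items determines which
            # double rotation applies; child ends up as the new root
            # of the rotated subtree.
            child = self.doubleRotate(grandparent, parent, child)
            # Re-link the rotated subtree to the node above it, if any.
            if len(pathStack) == 0:
                self.root = child
            elif child.item < pathStack[-1].item:
                pathStack[-1].left = child
            else:
                pathStack[-1].right = child
        if len(pathStack) == 1:
            # Only the parent remains: one single rotation finishes.
            parent = pathStack.pop()
            if child.item < parent.item:
                self.root = self.rotateRight(parent)
            else:
                self.root = self.rotateLeft(parent)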
One method of deleting a node from a splay tree is accomplished by deleting just as you would in a binary search tree. If the node to delete has zero or one child, it is trivial to delete the node. If the node to delete has two children, then the leftmost value in its right subtree can replace the value in the node to delete, and that leftmost value can be deleted from the right subtree. The parent of the deleted node is then splayed to the top of the tree.

Another method of deletion requires splaying the deleted node to the root of the tree first. Then the rightmost value of the left subtree is splayed to the root of that subtree. After this splaying, the left subtree's root node has an empty right subtree, and the original right subtree can be attached there. The root of the original left subtree becomes the root of the newly constructed splay tree.

Recursive Splaying

Implementing splaying recursively follows the recursive insert operation on binary search trees; the splaying is combined with this recursive insert function. As the recursive insert follows the path down the tree, it builds a rotate string of "R" and "L" characters. If the new item is inserted to the right of the current root node, then a left rotate will be required to splay the newly inserted node up the tree, and an "L" is added to the rotate string. Otherwise, a right rotate will be required, and an "R" is added to the rotate string.

As the recursive insert function returns, the path to the newly inserted node is retraced by the returning function. The last two characters in the rotate string dictate what double rotation is required. A dictionary, or hash table, takes care of mapping "RR", "RL", "LR", and "LL" to the appropriate rotate functions. The hash table lookup is used to call the appropriate rotation, and the rotate string is truncated (or re-initialized to the empty string, depending on when the "R" and "L" characters are added to the rotate string). When the recursive insert is finished, any required single rotation will be recorded in the rotate string and can be performed.

It should be noted that implementing splaying using a rotate string and hash table like this requires about one half the conditional statements to determine the required rotations, as compared to the iterative algorithm described above. When inserting a new node, the path must be determined by comparing the value to insert to each node on the path to its location in the tree. In the iterative description above, the values on the path are compared again during splaying. In this recursive description, the new item is only compared to each item on the path once. This has an impact on performance, as shown later in the chapter.

Looking up a value using this recursive implementation works similarly to insert, splaying either the found value, or its parent if it is not found, to the root of the tree. Deleting a value again can be done recursively, by first looking up the value to delete, resulting in it being splayed to the root of the tree, and then performing the method of root removal described in the previous section.
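Here is a sketch of how the dictionary dispatch might look, under the convention described above ("R" recorded when the search went left, "L" when it went right). The rotateLeft and rotateRight helpers are the assumed single rotations from earlier, each taking a subtree root and returning the new root; this is an illustration, not the book's exact code:

    def rotateRR(top):
        # Zig-zig: the path went left, left. Rotate right at the top,
        # then right again at the node that rose.
        return rotateRight(rotateRight(top))

    def rotateLL(top):
        # Zig-zig mirror image: the path went right, right.
        return rotateLeft(rotateLeft(top))

    def rotateRL(top):
        # Zig-zag: the path went left, then right. Rotate left at the
        # bad child first, then right at the top. Note this differs
        # from two successive rotations at the top.
        top.left = rotateLeft(top.left)
        return rotateRight(top)

    def rotateLR(top):
        # Zig-zag mirror image: the path went right, then left.
        top.right = rotateRight(top.right)
        return rotateLeft(top)

    # The last two characters of the rotate string pick the rotation.
    rotates = {"RR": rotateRR, "LL": rotateLL, "RL": rotateRL, "LR": rotateLR}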
Performance Analysis

In the worst case, a splay tree may become a stick, resulting in O(n) complexity for each lookup, insert, and delete operation, while AVL trees guarantee O(log n) time for lookup, insert, and delete operations. It would appear that AVL trees might have better performance. However, this does not seem to be the case in practice. In one experiment, a large number of inserts and random lookups were performed using a pre-generated dataset. The insert and lookup operations were identified in the dataset, with all looked-up values being found in the tree. The average combined insert and lookup times were recorded for an AVL tree, a splay tree implemented iteratively, and the recursive implementation of splay tree insert and lookup. The results show that the recursive splay tree implementation performs better on a random set of values than the AVL tree implementation. The experiment suggests that splay trees also exhibit O(log n) complexity in practice for insert and lookup operations.

In the figures of the splay rotations we got an intuitive understanding of how splay trees maintain balance through their specialized double rotations. However, it is not a very convincing argument to say that the double rotations appear to make the tree more balanced. This idea is formalized using amortized complexity. Amortization is an accounting term used when an expense is spread over a number of years, as opposed to expensing it all in one year. This same principle can be applied to the expense of finding or inserting a value in a splay tree. The complete analysis is done on a case by case basis; it is not presented in this text, but may be found in other texts and on-line.

(Fig.: average insert/lookup time.)
These proofs show that splay trees do indeed operate as efficiently as AVL trees on randomly accessed data. In addition, the splaying operation used when inserting or looking up a value exploits spatial locality in the data: data values that are frequently looked up will make their way toward the top of the tree, so as to be more efficiently looked up in the future. While taking advantage of spatial locality is certainly desirable if present in the data, it does not improve the worst-case computational complexity of splay tree insert and lookup operations. However, the worst case does not arise in the average case on randomly inserted and looked-up values. In fact, the recursive implementation of splay trees presented in the previous section exhibits O(log n) average insert and lookup time on a randomly distributed set of values, and performs better on a random sample than the AVL tree implementation.

Insert, lookup, and delete operations on an AVL tree can be completed in O(log n) time in the average case; this holds for splay trees as well. Traversal of an AVL or splay tree runs in O(n) time and yields its items in ascending or descending order (depending on how the iterator is written). While the quicksort algorithm can sort the items of a list just as efficiently, AVL and splay trees are data structures that allow many insert and delete operations while still maintaining the ordering of their elements. An AVL or splay tree may be a practical choice if a data structure is needed that efficiently implements lookup, delete, and insert operations while also allowing the sequence of values to be iterated over in ascending or descending order.

The advantage of AVL trees lies in their ability to maintain the ordering of elements while guaranteeing efficient lookup, insert, and delete complexity. Splay trees work just as well in almost all cases and, in the case of the recursive splay tree implementation described in this chapter, even better than the AVL tree implementation on random data sets. The difference between the AVL tree and the recursive splay tree performance numbers is the difference between maintaining the balance explicitly in the AVL tree and getting good enough balance in the splay tree.

Summary

This chapter presented several implementations of height-balanced AVL trees and splay trees. Recursive and iterative insert algorithms were presented. Both balance-maintaining and height-maintaining AVL nodes were discussed. The recursive insert algorithms for both AVL and splay trees result in very clean code without many special cases, while the iterative versions need a few more if statements to handle some conditions. In some instances the iterative version may be slightly more efficient than the recursive version, since there is a cost associated with function calls in any language; but the experimental results obtained in this chapter seem to suggest that the recursive implementations operate very efficiently when written in Python.
14,666
Review Questions

Answer these short answer, multiple choice, and true/false questions to test your mastery of the chapter.

1. What is the balance of a node in an AVL tree?
2. How does the balance of a node relate to its height?
3. How does an AVL tree make use of the balance of a node?
4. What is a pivot node?
5. What is a bad child in relationship to AVL trees?
6. What is the path stack, and when is it necessary?
7. After doing a right rotation, where are the pivot node and the bad child in the subtree that was originally rooted at the pivot?
8. Why is the balance of the root of a subtree always 0 after the rotation code for a given case is executed?
9. In the two subcases of that case, what node becomes the root node of the subtree rooted at the pivot after executing the algorithm on each of the subcases?
10. Why does the AVL tree insert algorithm always complete in O(log n) time? Do a case-by-case analysis to justify your answer for each of the three cases involved in inserting a value.
11. What is the purpose of the rotate string in the recursive insert splay tree implementation?
12. Why does it seem that the recursive splay tree insert and lookup implementation operates faster than the AVL tree implementation?

Programming Problems

1. Write an AVL tree implementation that maintains balances in each node and implements insert iteratively. Write a test program to thoroughly test your program on some randomly generated data.
2. Write an AVL tree implementation that maintains balances in each node and implements insert recursively. Write a test program to thoroughly test your program on some randomly generated data.
3. Write an AVL tree implementation that maintains heights in each node and implements insert recursively. Write a test program to thoroughly test your program on some randomly generated data.
4. Write an AVL tree implementation that maintains heights in each node and implements insert iteratively. Write a test program to thoroughly test your program on some randomly generated data.
5. Complete one of the previous programming problems, then implement the delete operation for AVL trees. Finally, write a test program to thoroughly test your data structure as values are inserted into and deleted from your tree. You should test your code to make sure it maintains all heights correctly and the ordering of all values in the tree.
6. Implement two of the programming problems above, and then write a test program that generates a random list of integers. Time inserting the values into the first implementation and then time inserting each value into the second implementation. Record all times in the XML format needed by the plotData.py program, and plot the timing of the two algorithms to compare their relative efficiency.
7. Write a splay tree implementation with recursive insert and lookup functions. Implement an AVL tree, either iteratively or recursively, where the height of each node is maintained. Run a test where trees are built from the same list of values. When you generate the list of values, duplicate values should be considered a lookup. Write the data file with an I or an L followed by a value, indicating whether a lookup or insert operation should be performed. Generate an XML file in the format used by the plotData.py program to compare your performance results.
8. Write a splay tree implementation with recursive insert and lookup functions. Compare it to one of the other balanced binary tree implementations detailed in this chapter. Run a test where trees are built from the same list of values. When you generate the list of values, duplicate values should be considered a lookup. Write the data file with an I or an L followed by a value, indicating whether a lookup or insert operation should be performed. Generate an XML file in the format used by the plotData.py program to compare your performance results.
B-Trees

This chapter covers one of the more important data structures of the last thirty years. B-trees are primarily used by relational databases to efficiently implement an operation called join. B-trees have other properties that are also useful for databases, including ordering of rows within a table, fast delete capability, and sequential access.

Goals

This chapter introduces some terminology from relational databases to motivate the need for B-trees. The chapter goes on to introduce the B-tree data structure and its implementation. By the end of this chapter you should have an understanding of B-trees and their advantages over other data structures, and you should be able to demonstrate your understanding by implementing a B-tree that can be used to efficiently process joins in relational databases.

Relational Databases

While this is not a database text, we will cover a bit of database terminology to demonstrate the need for a B-tree and its use in a relational database. A relational database consists of entities and relationships between these entities. A database schema is a collection of entities and their relationships. A schema is specified by an entity relationship diagram, often abbreviated ER-diagram, or a logical data structure [ ]. The figure below provides an ER-diagram for a database called the Dairy Database. It is used to formulate rations for dairy cattle to maximize milk production. Each box in the figure represents an entity in the database. Of particular interest in this text are the Feed, FeedAttribute, and FeedAttribType entities. A feed, like corn silage or alfalfa, is composed of many different nutrients. Nutrients are things like calcium, iron, phosphorus, protein, sugar, and so on.
[Figure: Dairy Database entity relationship diagram]

In the Dairy Database these nutrients are called FeedAttribTypes. There is a many-to-many relationship between feeds and FeedAttribTypes: a feed has many feed attributes, or nutrients, and each nutrient or feed attribute type appears in more than one feed.
[Figure: Many-to-many relationship]

This relationship is depicted in the figure, where the forks on the two ends of the line represent the many-to-many relationship between feeds and feed attribute types. Many-to-many relationships cannot be represented in a relational database without going through a process called reification. Reification introduces new entities that remove many-to-many relationships. When a many-to-many relationship appears within a logical data structure, it indicates there may be missing attributes. In this case, the quantity of each nutrient within a feed was missing. The new FeedAttribute entity eliminates the many-to-many relationship by introducing two one-to-many relationships, and one-to-many relationships can be represented in relational databases.

Every entity in a relational database must have a unique identifier. The Feed entities are uniquely identified by their FeedID attribute. The other attributes are important, but do not have to be unique. Each FeedID must be unique, and it cannot be null or empty for any feed. Likewise, a FeedAttribTypeID field uniquely identifies each feed nutrient; there is a unique FeedAttribTypeID for calcium, iron, and so on. The FeedAttribute entity has a unique id made up of two fields: together, the FeedID and the FeedAttribTypeID identify a unique instance of a nutrient for a particular feed. The Value was the missing attribute that was introduced by reifying the many-to-many relationship.

The logical data structure describes the schema for feeds and nutrients in the Dairy Database. A relational database is composed of tables, and the schema provides the definition of these tables. The Feed table consists of rows and columns. Each row in the Feed table describes one feed; the columns of the Feed table are each of the attributes of a feed. The example below provides a subset of this table

[Figure: Logical data structure]
with a subset of the columns of this table. The ellipses indicate omitted rows and fields within the database. The full table is available as Feed.tbl on the text website.

The Feed Table (the integer, date, and float columns are elided here)

    'corn silag   ...
    'almond hul   ...
    'molasswet    ...
    'liq cit pl   ...
    'whey         ...
    'sf corn      ...
    'dry min      ...
    'min plts     ...
    'mineral      ...
    'hay lact     ...
    'dry hay      ...
    'oat hay      ...
    'hlg          ...
    'cuphay       ...
    'hay #        ...
    'bmr csilage  ...
    'wheat sil    ...
    'corn silag   ...
    'almond hul   ...
    'closeplt     ...
    'corn %fat    ...
    'dry min      ...
    'comm mix     ...
    'on farm      ...
    'big sq       ...
    'hay# -       ...
    'hay# -       ...

Normally a relational database would store a table like the Feed table in a binary format that would be unreadable except by a computer. The Feed.tbl file is written in ASCII format to be human readable with a simple text editor, but the principles are the same. Each row within the table represents one record of the table, which in this case is one instance of a feed. The records are each the same size to make reading the table easy. Within any record we can find the name of the feed by going to the correct column for the feed name, which is the fourth field within each record and starts 30 bytes or characters into each record. Ten bytes or characters are allocated to each integer field. (The first column was edited to better fit on the page.) There are a number of records, or feeds, within the sample Feed.tbl table provided on the text website.

The FeedAttribType Table (the integer columns are elided here)

    'P'    'Phosphorus as % of DM'
    'Ca'   'Calcium as % of DM'
    'RFV'  'Relative Feed Value (calculated)'
    'S'    'Sulfur as % of DM'
    'K'    'Potassium as % of DM'
    'Mg'   'Magnesium as % of DM'
    'Fat'  'Fat as % of DM'

The table above contains a subset of the records in the FeedAttribType table, available as FeedAttribType.tbl on the text website. The full table has many more rows, each containing several fields. As with the Feed table, the FeedAttribType table is organized into rows and columns.

A subset of the FeedAttribute table is provided below. Each feed attribute is comprised of the corresponding FeedID, the FeedAttribTypeID, and the amount of that nutrient for the given feed, which is called the Value column within the table.

The FeedAttribute Table (each row holds a FeedID, a FeedAttribTypeID, and a Value; the numbers are elided here)

Storing the feed data this way is flexible. New nutrients can easily be added; feeds can be added as well. Feed attributes can be stored if available or omitted.

Occasionally, programs that use relational databases need access to data from more than one table and need to correlate the data between the tables. For instance, it may be convenient to temporarily construct a table that contains the feed number, feed name, nutrient name, and value of that nutrient for the corresponding feed. We may want to compute the average phosphorus content within all feeds, or we may wish to calculate the average content for each nutrient type within the database. In that case a table like the one below would be very useful.

A Temporary Table (the feed numbers and nutrient values are elided here)

    ... 'corn silag ...  (one row per nutrient: Ca, RFV, Mg, Fat, and so on)
    ... 'almond hul ...  (one row per nutrient: Ca, RFV, Mg, Fat, and so on)
    ...
Relational databases are often called SQL databases. SQL stands for Structured Query Language, a language for querying relational databases. SQL can be used to build temporary tables like the one above. The SQL statement to build this table would be written as:

    SELECT Feed.FeedNum, Feed.Name, FeedAttribType.Name, FeedAttribute.Value
    WHERE Feed.FeedID = FeedAttribute.FeedID
      AND FeedAttribute.FeedAttribTypeID = FeedAttribType.FeedAttribTypeID

This SQL statement is known as a join of three tables, because three tables will be joined together to form the result. It is up to the relational database to translate this query into commands that read the three tables and efficiently construct a new temporary table as the result of the join.

If we were to implement our own relational database, the join operation for these three tables might be programmed similarly to the code below. Don't be misled: relational databases don't program specific joins like this one, but the joining of the three tables might be functionally equivalent to this code. The entire program is available as joinquery.py on the text's website. The readField function here in the text is abbreviated for space, as are the char field widths and the exact column index arguments, but it reads any type of field from a table file.

The join algorithm picks one of the tables and reads it from beginning to end. In this case, the FeedAttribute table is read from beginning to end. For each feed attribute, the matching feed id from the Feed table must be located. In the code below, this involves reading, on average, half the Feed table to supply the feed number and feed name for each line of the query. Likewise, to supply the feed attribute name, on average half the FeedAttribType table is read for each line of the query output. The complexity of this operation is O(n*m), where n is the number of records in FeedAttribute.tbl and m is the maximum of the number of records in FeedAttribType.tbl and Feed.tbl. This is O(n^2) performance if n is roughly equivalent to m. Whether the two are roughly equivalent or not, the performance of this query, even on our small sample tables, is not great: it takes on the order of seconds to run the query as written, even on a machine with a fast processor, plenty of RAM, and a solid state hard drive.

Programming the Joining of Tables

import datetime

def readField(record, colTypes, fieldNum):
    # fieldNum is zero based.
    # record is a string containing the record.
    # colTypes is the list of column types for the record.
    offset = 0
    for i in range(fieldNum):
        colType = colTypes[i]
        if colType == "int":
            offset += 10
        elif colType[:4] == "char":
            size = int(colType[4:])
            offset += size
        elif colType == "float":
            offset += 20   # float fields occupy a fixed width (abbreviated here)
    # The code to extract val, the value of the requested field found at
    # offset within the record, is omitted here for space.
    return val

def main():
    # SELECT Feed.FeedNum, Feed.Name, FeedAttribType.Name, FeedAttribute.Value
    # WHERE Feed.FeedID = FeedAttribute.FeedID
    #   AND FeedAttribute.FeedAttribTypeID = FeedAttribType.FeedAttribTypeID

    attribTypeCols = ["int", "char", "char", "int", "int", "int", "int"]   # char widths elided
    feedCols = ["int", "int", "int", "char", "datetime", "float", "float",
                "int", "char", "int"]                                      # char widths elided
    feedAttributeCols = ["int", "int", "float"]

    before = datetime.datetime.now()

    feedAttributeTable = open("FeedAttribute.tbl", "r")

    for record in feedAttributeTable:
        feedID = readField(record, feedAttributeCols, 0)
        feedAttribTypeID = readField(record, feedAttributeCols, 1)
        value = readField(record, feedAttributeCols, 2)

        # Scan the Feed table from the beginning for the matching feed.
        feedTable = open("Feed.tbl", "r")
        feedFeedID = -1
        while feedFeedID != feedID:
            feedRecord = feedTable.readline()
            feedFeedID = readField(feedRecord, feedCols, 0)
        feedNum = readField(feedRecord, feedCols, 2)
        feedName = readField(feedRecord, feedCols, 3)

        # Scan the FeedAttribType table for the matching attribute type.
        feedAttribTypeTable = open("FeedAttribType.tbl", "r")
        feedAttribTypeIDID = -1
        while feedAttribTypeIDID != feedAttribTypeID:
            feedAttribTypeRecord = feedAttribTypeTable.readline()
            feedAttribTypeIDID = readField(feedAttribTypeRecord, attribTypeCols, 0)
        feedAttribTypeName = readField(feedAttribTypeRecord, attribTypeCols, 1)

        print(feedNum, feedName, feedAttribTypeName, value)

    after = datetime.datetime.now()
    deltaT = after - before
    milliseconds = deltaT.total_seconds() * 1000
    print("Time for the query without indexing was", milliseconds, "milliseconds.")

if __name__ == "__main__":
    main()

The code above suffers because the two tables, Feed.tbl and FeedAttribType.tbl, are read sequentially each time through the outer loop to find the matching feed and feed attribute type, respectively. We can improve the efficiency of this query if we recognize that disk drives are random access devices. That means we can position the read head of a disk drive anywhere within a file. We don't have to start at the beginning of a table to begin looking for a matching feed or feed attribute type; we can jump around within the table to find the matching record.
The readRecord Function

def readRecord(file, recNum, recSize):
    # Position the read head at the start of the requested record
    # and read exactly one record's worth of bytes.
    file.seek(recNum * recSize)
    record = file.read(recSize)
    return record

Python includes a seek method on files to position the read head of the disk to a byte offset within the file. The read method on files reads a given number of bytes and returns them as a string. To test this readRecord function, and the functionality of the seek method, a program was written to randomly access the records in the FeedAttribute.tbl file. The results of that experiment, shown in the figure below, demonstrate that accessing any record within the file took about the same amount of time, regardless of its position within the file. As with any experiment, there were a few anomalies, but the vast majority of records were accessed in the same, or nearly the same, amount of time.

[Figure: Access time for randomly read records in a file]

Let's say we were to organize the Feed.tbl and FeedAttribType.tbl files so that the records were sorted in increasing order by their keys: the Feed.tbl file would be sorted by FeedID and the FeedAttribType.tbl would be sorted by FeedAttribTypeID. Then we could use binary search on these two files to find the matching records for each feed attribute. Since the tables are randomly accessible, the query time could be reduced from O(n*m) to O(n log m).
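The random-access experiment described above might be reproduced with a short script along the following lines. This is a minimal sketch, not the text's test program: the record count, record size, and sample count used here are hypothetical values, and readRecord is repeated so the sketch is self-contained.

import datetime
import random

def readRecord(file, recNum, recSize):
    file.seek(recNum * recSize)
    return file.read(recSize)

def timeRandomReads(filename, numRecords, recSize, samples=1000):
    # Time how long it takes to read randomly chosen fixed-size records
    # from the file, one seek plus one read per sample.
    file = open(filename, "rb")
    times = []
    for _ in range(samples):
        recNum = random.randrange(numRecords)
        before = datetime.datetime.now()
        readRecord(file, recNum, recSize)
        after = datetime.datetime.now()
        times.append((recNum, (after - before).total_seconds() * 1000))
    file.close()
    return times

# Example usage (hypothetical record count and size):
# for recNum, ms in timeRandomReads("FeedAttribute.tbl", 3000, 32):
#     print(recNum, ms)

Plotting the resulting (record number, time) pairs would produce a flat cloud of points like the one in the figure above, since a seek to any offset costs about the same.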
However, we can't assume that a database table will always, or ever, be sorted according to one field. Databases can have new records added and old records deleted at any time. This is where the need for a B-tree comes from. A B-tree is a tree structure that is built over the top, so to speak, of a database table to provide O(log n) lookup time to any record within the database table. While the records themselves may be in any order, the B-tree provides the O(log n) search complexity into the table.

A B-tree is built by inserting records or items into the tree. Once built, the index provides efficient lookup of any record based on the key value stored in the B-tree. Consider the code below. The first part of main builds the Feed.tbl index and the second part builds the FeedAttribType.tbl index. Once built, the indices are used when programming the query. The query loop no longer contains two while loops to look up the corresponding records in the two tables; instead, the B-trees are consulted to find the corresponding records. When programmed this way, the query runs roughly three times faster than the original, non-indexed query. The sample query here uses relatively small tables. Imagine the speedup possible when either the Feed.tbl or FeedAttribType.tbl table contains millions of records. In that case, the original query would not have completed in an acceptable amount of time, while the indexed query given here would have completed in roughly the same amount of time as before, or perhaps a second longer at worst.

An Efficient Join

def main():
    # SELECT Feed.FeedNum, Feed.Name, FeedAttribType.Name, FeedAttribute.Value
    # WHERE Feed.FeedID = FeedAttribute.FeedID
    #   AND FeedAttribute.FeedAttribTypeID = FeedAttribType.FeedAttribTypeID

    attribTypeCols = ["int", "char", "char", "int", "int", "int", "int"]   # char widths elided
    feedCols = ["int", "int", "int", "char", "datetime", "float", "float",
                "int", "char", "int"]                                      # char widths elided
    feedAttributeCols = ["int", "int", "float"]

    feedAttributeTable = open("FeedAttribute.tbl", "r")

    # Build or load the Feed table index.
    if os.path.isfile("Feed.idx"):
        indexFile = open("Feed.idx", "r")
        feedTableRecLength = int(indexFile.readline())
        feedIndex = eval(indexFile.readline())
    else:
        feedIndex = BTree(3)   # a hypothetical degree; the value used in the text is elided
        feedTable = open("Feed.tbl", "r")
        offset = 0
        for record in feedTable:
            feedID = readField(record, feedCols, 0)
            anItem = Item(feedID, offset)
            feedIndex.insert(anItem)
            offset += 1
            feedTableRecLength = len(record)
        print("Feed table index created.")
        indexFile = open("Feed.idx", "w")
        indexFile.write(str(feedTableRecLength) + "\n")
        indexFile.write(repr(feedIndex) + "\n")
        indexFile.close()

    # Build or load the FeedAttribType table index.
    if os.path.isfile("FeedAttribType.idx"):
        indexFile = open("FeedAttribType.idx", "r")
        attribTypeTableRecLength = int(indexFile.readline())
        attribTypeIndex = eval(indexFile.readline())
    else:
        attribTypeIndex = BTree(3)   # a hypothetical degree; the value used in the text is elided
        attribTable = open("FeedAttribType.tbl", "r")
        offset = 0
        for record in attribTable:
            feedAttribTypeID = readField(record, attribTypeCols, 0)
            anItem = Item(feedAttribTypeID, offset)
            attribTypeIndex.insert(anItem)
            offset += 1
            attribTypeTableRecLength = len(record)
        print("Attrib type table index created.")
        indexFile = open("FeedAttribType.idx", "w")
        indexFile.write(str(attribTypeTableRecLength) + "\n")
        indexFile.write(repr(attribTypeIndex) + "\n")
        indexFile.close()

    feedTable = open("Feed.tbl", "rb")
    feedAttribTypeTable = open("FeedAttribType.tbl", "rb")

    before = datetime.datetime.now()

    for record in feedAttributeTable:
        feedID = readField(record, feedAttributeCols, 0)
        feedAttribTypeID = readField(record, feedAttributeCols, 1)
        value = readField(record, feedAttributeCols, 2)

        # Use the index to jump directly to the matching feed record.
        lookupItem = Item(feedID, None)
        item = feedIndex.retrieve(lookupItem)
        offset = item.getValue()
        feedRecord = readRecord(feedTable, offset, feedTableRecLength)
        feedNum = readField(feedRecord, feedCols, 2)
        feedName = readField(feedRecord, feedCols, 3)

        # Use the other index for the matching attribute type record.
        lookupItem = Item(feedAttribTypeID, None)
        item = attribTypeIndex.retrieve(lookupItem)
        offset = item.getValue()
        feedAttribTypeRecord = readRecord(feedAttribTypeTable, offset,
                                          attribTypeTableRecLength)
        feedAttribTypeName = readField(feedAttribTypeRecord, attribTypeCols, 1)

        print(feedNum, feedName, feedAttribTypeName, value)

    after = datetime.datetime.now()
    deltaT = after - before
    milliseconds = deltaT.total_seconds() * 1000
    print("Time for the query with indexing was", milliseconds, "milliseconds.")

Clearly we need the functionality of a B-tree to make queries like this one possible and efficient in relational database joins. The next section goes on to explain the organization of a B-tree, the advantages of B-trees, and how they are implemented.

B-Tree Organization

A B-tree is a balanced tree. Each node in a B-tree consists of alternating pointers and items, as shown in the sample B-tree figure below. B-trees consist of nodes; each node in a B-tree contains pointers to other nodes and items in an alternating sequence. The items in a node are arranged sequentially in order of their keys (in the figure, the key is the first value in each tuple).
[Figure: A sample B-tree]

A pointer to the left of an item points to another B-tree node that contains items that are all less than the item; a pointer to the right of an item points to a node where all the items are greater than the item. In the figure, the items in one child node are all less than the separating item in the parent, while the items in the next child node are all greater than it. B-trees are always balanced, meaning that all the leaf nodes appear on the same level of the tree. A B-tree may contain as many items and pointers as desired in each node, and there will always be one more pointer than there are items in a node. B-trees don't have to fill each node. The degree of a B-tree is the minimum number of items that a B-tree node may contain, except for the root node. The capacity of a node is always twice its degree. The requirements of a B-tree are as follows:

1. Every node except the root node must contain between degree and 2*degree items.
2. Every node contains one more pointer than the number of items in the node.
3. All leaf nodes are at the same level within a B-tree.
4. The items within a B-tree node are ordered in ascending (or descending) order. All nodes have their items in the same order, either ascending or descending.
5. The items in the subtree to the left of an item are all less than that item.
6. The items in the subtree to the right of an item are all greater than that item.

To maintain these properties, inserting and deleting items from the tree must be done with some care. Inserting an item can cause splitting of a node; deleting from a tree sometimes requires rebalancing of the tree. Looking up an item in a B-tree is performed much the same way lookup is performed in a binary search tree. The node is examined to find the item. If it is not found, then the pointer is followed that lies between the items that are less than and greater than the item to be found. If this leads to a leaf node and the item is not found in the leaf node, the item is reported as not in the tree.
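The lookup procedure just described might be sketched as follows. This is a minimal sketch rather than the text's implementation (which is left as an exercise later in the chapter): the BTreeNode class, its field names, and the retrieve function are hypothetical, and the node stores bare keys rather than key/value tuples.

class BTreeNode:
    # A hypothetical B-tree node: items are kept sorted, children[i]
    # points to the subtree of keys less than items[i], and children[-1]
    # points to the subtree of keys greater than items[-1].
    def __init__(self, degree):
        self.degree = degree
        self.items = []
        self.children = []   # empty for leaf nodes

    def isLeaf(self):
        return len(self.children) == 0

def retrieve(node, key):
    # Examine one node at a time, following the pointer that lies
    # between the items bracketing the key.
    while node is not None:
        i = 0
        while i < len(node.items) and node.items[i] < key:
            i += 1
        if i < len(node.items) and node.items[i] == key:
            return node.items[i]
        if node.isLeaf():
            return None   # reported as not in the tree
        node = node.children[i]
    return None

Because there are always len(items) + 1 children in a non-leaf node, the index i computed above always selects a valid child, whether the key is smaller than every item, between two items, or larger than every item.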
The Advantages of B-Trees

A B-tree may contain entire records instead of just key/value pairs, as in the sample B-tree shown earlier, where the key/value pairs are the FeedID and record number of each record in the Feed table. For instance, the entire record for a feed might be stored directly in the B-tree where its key/value pair currently appears. In the examples in this text the B-tree and the database table are stored separately. This has the advantage that more than one B-tree index could be built over the Feed table. The sample B-tree is built over the FeedID field, but some other unique field might be used to build another B-tree over the table if desired. By storing the B-tree and the table separately, multiple indices are possible.

As mentioned earlier in the chapter, B-trees provide O(log_d n) lookup time, where d is the degree of the B-tree and n is the number of items in the tree. Hash tables provide faster lookup time than a B-tree, so why not use a hash table instead? Unlike a hash table, a B-tree provides ordered sequential access to the index. You can iterate over the items in a B-tree much like binary trees provide iteration, and iteration over a B-tree provides the items, or keys, in ascending (or descending) order. A hash table does not provide an ordering of its keys.

B-trees provide O(log n) insert, delete, and lookup time as well. While not as efficient as hash tables in this regard, B-tree nodes are often quite large, producing a very flat tree. In that case, the time for these three operations often comes close to that of a hash table.

B-trees are often constructed with literally millions of items. When a B-tree reaches this size, holding all the nodes in memory at one time may consume a lot of RAM. This is a great advantage of B-trees over hash tables: a B-tree may be stored in a file itself. Since files are randomly accessible on disk, a B-tree's node may be thought of as a record in a file. The nodes of the sample B-tree could be thought of as records within a file, with the record numbers serving as the pointer values. To search the B-tree it is only necessary to start with the root node in memory; then, when a pointer is followed during the search, the record corresponding to the new node is read into memory.

Deleting a record from a table with a million records or more in it could be an expensive operation if the table has to be completely rewritten. If sequential access to the underlying table is handled through the B-tree, or if the entire file is stored in the nodes of the B-tree, deletion of a row or record in the table gets much simpler. For instance, in the figure below, one feed's key has been deleted from the B-tree while its record remains in the Feed.tbl file.
[Figure: A sample B-tree with a key deleted]

If sequential access is always handled through the B-tree, it will appear that the feed has been deleted from the table. The search can proceed by reading one record at a time from disk. Typically a pool of records would be held in memory for a B-tree, and records would be replaced in memory using some sort of node replacement scheme. In this way a fixed amount of RAM can be allocated to hold a B-tree, an amount that would typically be much smaller than the total size of the tree. In addition, since a B-tree can be stored in a file, it is not necessary to reconstruct the B-tree each time it is needed: the efficient join code stores the B-trees in two files named Feed.idx and FeedAttribType.idx and reads each index from its file the next time the program is run.

Deleting an item from the table in this way is a O(log n) operation, while deleting by rewriting the entire file would take O(n) time. When n is millions of records, the difference between O(log n) and O(n) is significant. The same goes for inserting a new row or record within the Feed table: adding one new record to the end of the file can be done quickly, without rewriting the entire file, and when a B-tree is used the newly inserted item automatically maintains its sorted position.

To summarize, B-trees have several characteristics that make them attractive for use in relational databases and for providing access to large quantities of ordered data. These properties include:

1. Ordered sequential access over the key values in O(n) time.
2. O(log n) insert time, while maintaining the ordering of the items.
3. O(log n) delete time of items within the B-tree. If sequential access is handled through the B-tree, then O(log n) delete time is provided for the underlying table as well.
4. B-trees can be stored in a file, and B-tree nodes can be read on an as-needed basis, allowing B-trees to be larger than available memory.
5. A B-tree index stored in a file does not have to be rebuilt each time it is needed in a program.

It is this final point that makes B-trees and their derivatives so valuable to relational database implementations. Relational databases need B-trees and their derivative implementations to efficiently process join operations while also providing many of the advantages listed above.
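To make the node-as-record idea concrete, here is a minimal sketch of reading B-tree nodes on demand from a file of fixed-size records. It is a sketch only, under stated assumptions: the parseNode function, the cache layout, and the pool size are hypothetical, not the text's implementation, and readRecord is the function defined earlier in the chapter.

def loadNode(indexFile, recNum, recSize, cache, parseNode):
    # Nodes are fixed-size records in the index file. The cache is a
    # dict mapping record number to node, standing in for the node
    # replacement pool described above.
    if recNum in cache:
        return cache[recNum]
    record = readRecord(indexFile, recNum, recSize)
    node = parseNode(record)   # hypothetical: decode one node from its record
    if len(cache) > 64:        # a hypothetical fixed pool size
        cache.pop(next(iter(cache)))   # evict an arbitrary cached node
    cache[recNum] = node
    return node

A search would call loadNode once for the root's record number and again each time a pointer (record number) is followed, so only a bounded number of nodes ever occupy RAM at once.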
B-Tree Implementation

Looking up a value in a B-tree is relatively simple and is left as an exercise for the reader. Inserting and deleting values are where all the action is. Alan Tharp [ ] provides a great discussion of both inserting and deleting values in a B-tree. In this text we provide new examples and suggest both iterative and recursive implementations of both operations.

B-Tree Insert

Inserting an item in a B-tree involves finding the leaf node which should contain the item. It may also involve splitting, if no room is left in the leaf node. When a leaf node reaches its capacity, which is two times its degree, and a new item is being inserted, the 2*degree+1 items are sorted and the median value (i.e. the middle value) is promoted up the tree to the parent node. In this way, splitting may cascade up the tree.

To see the splitting process in action, consider building the B-tree shown in the accompanying figures by inserting a sequence of keys one at a time. When the first item is inserted, the B-tree is empty, consisting of one empty node, and the item is simply added into that node. The next two items are inserted in similar fashion, at which point the node is full. The next item to be inserted will cause a split: the node splits into two nodes, with the left subtree being the original node and the right subtree containing a new node. The middle value is promoted up to the parent. In this case there is no parent, since we split the root node; in this special case a new root node is created to hold the promoted value. Three more values are inserted without incident. When one more value is inserted, a leaf node again splits and promotes its middle value, this time up to an existing parent. There is room in the parent, so the new item is added to it.

[Figure: Inserting into an empty B-tree]
[Figure: After two more inserts]
[Figure: After splitting as a result of an insert]
[Figure: After three more inserts]
[Figure: A further insert causes splitting]

Inserting an item causes one of two possible outcomes: either the leaf node has room in it to add the new item, or the leaf node splits, resulting in a middle value and a new node being promoted to the parent. This suggests that a recursive implementation is appropriate for inserting a new item. The recursive algorithm is given an item to insert, returns two values (the promoted key and the new right node, if there is one), and proceeds as follows:

1. If this is a leaf node and there is room for it, make room and store the item in the node.
2. Otherwise, if this is a leaf node, make a new node. Sort the new item and the old items. Choose the middle item to promote to the parent. Take the items after the middle and put them into the new node. Return a tuple of the middle item and the new right node.
3. If this is a non-leaf node, call insert recursively on the appropriate subtree. Consult the return value of the recursive call to see if there is a newly promoted key and right subtree. If so, take the appropriate action to store the new item and subtree pointer in the node. If there is no room to store the promoted value, split again as described in step 2.
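These three steps might be sketched as follows. This is a minimal sketch under stated assumptions, not the text's implementation: it reuses the hypothetical BTreeNode class sketched earlier, stores bare keys, and returns a (promoted, newRight) pair of Nones when no split occurred.

import bisect

def insert(node, item):
    # Returns (promoted, newRight); both are None if no split occurred.
    if node.isLeaf():
        bisect.insort(node.items, item)       # step 1: store in sorted position
        if len(node.items) <= 2 * node.degree:
            return None, None
        return split(node)                    # step 2: node is too full, split it
    # Step 3: recurse into the child that brackets the item.
    i = bisect.bisect_left(node.items, item)
    promoted, newRight = insert(node.children[i], item)
    if promoted is None:
        return None, None
    node.items.insert(i, promoted)            # store the promoted key
    node.children.insert(i + 1, newRight)     # and the new subtree pointer
    if len(node.items) <= 2 * node.degree:
        return None, None
    return split(node)                        # a cascading split

def split(node):
    # Promote the median; move the upper half into a new right sibling.
    mid = len(node.items) // 2
    right = BTreeNode(node.degree)
    promoted = node.items[mid]
    right.items = node.items[mid + 1:]
    node.items = node.items[:mid]
    if not node.isLeaf():
        right.children = node.children[mid + 1:]
        node.children = node.children[:mid + 1]
    return promoted, right

A BTree wrapper class would call insert on its root and, when a promotion comes back, create a new root holding the promoted item with the old root as its left child and the returned node as its right child, as the text describes next.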
Step 3 above automatically handles any cascading splits that must occur after the recursive call. The algorithm looks for any promoted value and handles it, either by adding it into the node or by splitting again. An iterative version of insert would proceed in a similar manner, except that the path to the newly inserted item would have to be maintained on a stack. Then, after inserting into or splitting the leaf node, the stack of nodes on the path to the leaf would be popped one at a time, handling any promoted values, until the stack was emptied.

When writing insert as a recursive function it makes sense to implement it as a method of a B-tree node class. Then the insert method on a B-tree class can call the recursive insert on the B-tree node class. In this way, if the root node is split, the B-tree insert method can deal with it by creating a new root node from the promoted value and the left and right subtrees. Recall that the old root is the new left subtree in the newly created node.

B-Tree Delete

Deleting from a B-tree can be written recursively or iteratively, like the insert algorithm. When an item is deleted from a B-tree, rebalancing may be required. Recall that every node of a B-tree, except the root node, must contain at least degree items. There are just a few rules that can be followed to delete items from the tree while maintaining the balance requirements:

1. If the node containing the item is a leaf node and the node has more than degree items in it, then the item may simply be deleted.
2. If the node containing the item is a leaf node and has degree or fewer items in it before deleting the value, then rebalancing is required.
3. If the node is a non-leaf node, then the least value of the right subtree can replace the item in the node.

Rebalancing can be accomplished in one of two ways:

1. If a sibling of the unbalanced node contains more than degree items, then some of those items can be rotated into the current node.
2. If no rotation from a sibling is possible, then a sibling and the unbalanced node, along with the item that separates them in the parent, can be coalesced into one node. This reduces by one the number of items in the parent, which in turn may cause cascading rotations or coalescing to occur.

An example will help to illustrate the delete and rebalancing algorithm. Consider deleting an item from a leaf of the B-tree built in the previous section. This causes
the node containing it to become unbalanced. Rebalancing is accomplished by borrowing items from its left sibling, as depicted in the first figure below. Notice that one item rotates up to the parent and the separating item from the parent rotates down into the unbalanced node. This is necessary to maintain the ordering within the nodes: the rotation travels through the parent to redistribute the items between the two nodes.

Next, consider deleting an item from a node that has no right sibling and whose left sibling doesn't have enough items to redistribute. In that case the node and its left sibling are coalesced into one node, along with the separating item from the parent (the root in this example), producing the B-tree shown in the second figure. A third deletion causes a left rotation with the right sibling, resulting in the B-tree depicted in the third figure.

[Figure: After a delete, rebalanced by rotation from the left sibling]
[Figure: After a delete, rebalanced by coalescing]
[Figure: After a delete, rebalanced by a left rotation]

Continuing the example, assume that an item in a non-leaf node is deleted from the tree. The item is in a non-leaf node, so in this case the least value from the right subtree replaces the deleted item. This must be followed up with deleting
[Figure: After deleting an item from a non-leaf node]
[Figure: After a delete that empties the root node]

that value, the least value of the right subtree, from the right subtree itself. The result is depicted in the first figure above.

The next deletion causes the two sibling nodes to coalesce along with the separating item in the parent (the root in this case). The result is an empty root node, as shown in the second figure above. In this case, the delete method in the B-tree class must recognize the situation and update the root node pointer to point to the correct node, because the old root node is no longer the root node of the B-tree. Deleting any more of the items simply reduces the number of items in the root node.

Again, the delete method on B-tree nodes may be implemented recursively. The B-tree node delete method is given the item to delete and does not need to return anything. The recursive algorithm proceeds as follows:

1. If the item to delete is in the current node, then we do one of two things depending on whether it is a leaf node or not:
   (a) If the node is a leaf node, the item is deleted from the node without regard to rebalancing.
   (b) If the node is a non-leaf node, then the smallest valued item from the right subtree replaces the item, and the smallest valued item is then deleted from the right subtree.
2. If the item is not in the current node, then delete is called recursively on the correct subtree.
3. After delete returns, rebalancing of the child on the path to the deleted item may be needed. If the child node is out of balance, first try rotating a value from a left
or right sibling. If that can't be done, then coalesce the child node with a left or right sibling.

If the algorithm is implemented iteratively instead of recursively, a stack is needed to keep track of the path from the root node to the node containing the item to delete. After deleting the item, the stack is emptied, and as each node is popped from the stack, rebalancing of the child node on the path may be required as described in the steps above.

Summary

B-trees are very important data structures, especially for relational databases. In order for join operations to be implemented efficiently, indices are needed over at least some tables in a relational database. B-trees are also important because they can be stored in record format on disk, meaning that the entire index does not need to be present in RAM at any one time. This means that B-trees can be created even for tables that consist of millions of records.

B-trees have many important properties, including O(log n) lookup, insert, and delete time. B-trees always remain balanced, regardless of the order of insertions and deletions. B-trees can also provide sequential access of records within a table in sorted order, either ascending or descending.

Due to the balance requirement in B-trees, splitting of nodes may be required during item insertion, and rebalancing of nodes may be required during item deletion. Rebalancing takes the form of rotation of items or coalescing of nodes; rotation to redistribute items is the preferred method of rebalancing. Both the insert and delete operations may be implemented either recursively or iteratively. In either case the splitting or rebalancing may result in cascading splitting or rebalancing as the effects ripple up through the tree on the path taken to insert or delete the item. If implemented iteratively, both the insert and delete algorithms require a stack to record the path from the root node to the inserted or deleted item so that this ripple effect can be handled. In the recursive case no stack is required, since the run-time stack remembers the path from the root node to the inserted or deleted item.

There are derivative implementations of B-trees that have been created. B+-trees and B#-trees are two other variations that are not covered in this text. Alan Tharp [ ], among others, covers both of these derivative implementations.

Review Questions

Answer these short answer, multiple choice, and true/false questions to test your mastery of the chapter.
1. How does the use of an index improve the efficiency of the sample join operation presented in this chapter?
2. What advantages does a B-tree have over a hash table implementation of an index?
3. What advantages does a hash table have over a B-tree implementation of an index?
4. How can a B-tree index be created over a table with millions of records and still be usable? What challenges could this pose, and how does a B-tree provide a means to deal with those challenges?
5. Starting with the final B-tree figure from the insert examples, insert a new item and draw a picture of the resulting B-tree.
6. Starting with the same figure, delete an item that forces rebalancing and draw a picture of the resulting B-tree.
7. When does a node get coalesced? What does that mean? Provide a short example, different from any example in the text.
8. When does a rotation correct an imbalance in a node? Provide a short example, different from any example in the text.
9. Insert a sequence of consecutive values into an empty B-tree of small degree to demonstrate your understanding of the insert algorithm. Draw pictures, but you can combine pictures that don't require splitting. At each split be sure to draw a completely new picture.
10. Delete several values from the tree you constructed in the previous review question, showing the rebalanced tree after each deletion.

Programming Problems

1. Write a B-tree class and a B-tree node class. Implement the insert and delete algorithms described in this chapter. Implement a lookup method as well. Use this implementation to efficiently run the join operation presented in this chapter. Compare the time this algorithm takes to run to the time the non-indexed join takes to run. Write the two methods recursively.
2. Write the B-tree class with iterative, non-recursive, implementations of insert and delete. In this case the insert and delete methods of the B-tree class don't necessarily have to call insert and delete on B-tree nodes.
3. Since the example tables in this chapter are rather small, after completing one of the previous problems, run the query code again using a dictionary for the index. Compare the amount of time taken to implement the query in this way with the B-tree implementation. Comment on the experimental results.
Heuristic Search

This text has focused on the interaction of algorithms with data structures. Many of the algorithms presented in this text deal with search and how to organize data so searching can be done efficiently. Many problems involve searching for an answer among many possible solutions, not all of which are correct. Sometimes there are so many possibilities that no algorithm can be written that will efficiently find a correct solution amongst all the possible solutions. In these cases, we may be able to use a rule of thumb, most often called a heuristic in computer science, to eliminate some of these possibilities from our search space. If the heuristic does not eliminate possible solutions, it may at least help us order the possible solutions so we look at better possible solutions first, whatever better might mean.

In an earlier chapter, depth first search of a graph was presented. Sometimes search spaces for graphs or other problems grow to such an enormous size that it is impossible to blindly search for a goal node. This is where a heuristic can come in handy. This chapter uses searching a maze, which is really just a type of graph, as an example to illustrate several search algorithms that are related to depth first or breadth first search. Several applications of these search algorithms are also presented or discussed.

Heuristic search is often covered in texts on artificial intelligence [ ]. As problems in AI are better understood, algorithms arise that become more commonplace over time. The heuristic algorithms presented in this chapter are covered in more detail in an AI text, but as data sizes grow, heuristic search will become more and more necessary in all sorts of applications. AI techniques may be useful in many search problems, and so they are covered in this chapter to provide an introduction to search algorithms designed to deal with large or infinite search spaces.

Goals

By the end of this chapter you will have been presented with examples of depth first and breadth first search. Hill climbing, best first search, and the A* (pronounced A-star) algorithm will also be presented. In addition, heuristics will be applied to search in two-person game playing as well.
While heuristic search is not the solution to every problem, as data sizes grow, the use of heuristics will become more important. This chapter provides the necessary information to choose between at least some of these techniques to improve performance and solve some interesting large problems that would otherwise be unsolvable in a reasonable amount of time.

Depth First Search

We first encountered depth first search earlier in the text, where we discussed search spaces and using depth first search to find a solution to some Sudoku puzzles. There, the depth first search algorithm was also generalized a bit to handle search spaces that include cycles. To prevent getting stuck in a cycle, a visited set was used to avoid looking at vertices that had already been considered. A slightly modified version of the depth first search for graphs is presented below. In this version, the path from the start to the goal is returned if the goal is found; otherwise, the empty list is returned to indicate the goal was not found.

Iterative Depth First Search of a Graph

def graphDFS(G, start, goal):
    # G = (V, E) is the graph with vertices, V, and edges, E.
    V, E = G
    stack = Stack()
    visited = set()
    stack.push([start])   # The stack is a stack of paths.

    while not stack.isEmpty():
        # A path is popped from the stack.
        path = stack.pop()
        current = path[0]   # The last vertex visited is first in the path.
        if not current in visited:
            # The current vertex is added to the visited set.
            visited.add(current)

            # If the current vertex is the goal vertex, then we discontinue
            # the search, reporting that we found the goal.
            if current == goal:
                return path   # Return the path to the goal.

            # Otherwise, for every adjacent vertex, v, to the current vertex
            # in the graph, v is pushed on the stack of paths yet to search
            # unless v is already in the path, in which case the edge
            # leading to v is ignored.
            for v in adjacent(current, E):
                if not v in path:
                    stack.push([v] + path)

    # If we get this far, then we did not find the goal.
    return []   # Return an empty path.
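Here is a quick usage sketch for graphDFS on a tiny graph. The Stack class and the adjacent function are assumptions made for this sketch rather than definitions from the text: Stack is taken to be a minimal push/pop/isEmpty wrapper around a list, and adjacent simply returns the successors of a vertex in an edge set.

class Stack:
    # A minimal stack, assumed for the sketch above.
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()
    def isEmpty(self):
        return len(self.items) == 0

def adjacent(vertex, E):
    # Return all vertices reachable from vertex by one edge.
    return [w for (v, w) in E if v == vertex]

# A tiny directed graph: 1 -> 2 -> 4 and 1 -> 3 -> 4.
V = {1, 2, 3, 4}
E = {(1, 2), (1, 3), (2, 4), (3, 4)}

# Prints one path from 1 to 4. Note that paths are built goal-first,
# so the output is something like [4, 2, 1].
print(graphDFS((V, E), 1, 4))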
The algorithm consists of a while loop that finds a path from the start node to the goal node. When there is a choice of direction on this path, all choices are pushed onto the stack. By pushing all choices, if a path leads to a dead end, the algorithm just doesn't push anything new onto the stack. The next time through the loop, the next path is popped from the stack, resulting in the algorithm backtracking to the point where it last made a decision on the direction it was going.

Maze Representation

How should the maze be represented? Data representation is such an important part of any algorithm. The maze consists of rows and columns, so we can think of each location in the maze as a tuple of (row, column). These tuples can be added to a hash set for lookup in O(1) time. By using a hash set we can also determine the adjacent (row, column) locations in O(1) time for any location within the maze. When a maze is read from a file, the (row, column) pairs are added to a hash set. The adjacent function then must be given a location and the maze hash set to determine the adjacent locations.

DFS Example

Consider searching the maze in the figure below. Let's assume that our depth first search algorithm prefers to go up if possible when searching a maze. If it can't go up, then it prefers to go down. Next, preference is given to going left in the maze, followed lastly by going right. Assume we start at the top of the maze and want to exit at the bottom. Note that going on the diagonal is not considered in the examples presented in this chapter, since otherwise moves where two corners in the maze meet would be possible.

[Figure: Depth first search of a maze]
Diagonal moves would have the effect of moving through what looks like walls in the maze in some circumstances. According to our direction preference, the algorithm proceeds by making its first two steps (shown in red in the figure) and then travels to the left into region A. When it reaches the end of region A, there are no possible moves adjacent to the last step that have not already been visited, so the loop cannot find anything to push onto the stack. However, when an earlier step was originally considered, all the other choices were pushed onto the stack, including the red step that appears to its right. When nothing is pushed onto the stack at the dead end in region A, the next top value on the stack is that red step. The unvisited nodes adjacent to it are then pushed onto the stack, and the last location pushed is considered next, so the depth first search proceeds to the left again, examining all the locations in region B. When region B is exhausted, backtracking occurs again, leading the search into region C. Exhausting the possibilities on that path causes further backtracking, and the remaining lettered regions are explored in the same way.

When the search reaches a certain red step, the depth first search prefers to go up, proceeds to the top of the maze, and enters the dead-end region there. We can tell by looking at the maze that entering this region will lead nowhere, but depth first search does not know or care about this. It just blindly considers the next possible path to the goal until that path leads to the goal, or until all possible next steps are exhausted and it backtracks. Backtracking out of that region leads back down, where the algorithm prefers to go down first and proceeds on a wild goose chase leading away from the goal until it runs out of possible next steps and backtracks once more. Finally, that path leads to the goal.

There are some things to notice about this search. First, as mentioned before, it was a blind search that uses backtracking to eventually find the goal. In this example the depth first search examined every location in the maze, but that is not always the case. Depth first search did find a solution, but it wasn't the optimal solution. If the depth first search were programmed to go right first, it would have found a solution much faster, and it would have found the optimal solution for this maze. Unfortunately, of course, that won't work for all mazes. And while the maze search space is finite, what if the maze were infinite in size and we went to the left when we should have started going right? The algorithm would blindly proceed going left forever, never finding a solution. The drawbacks of depth first search are as follows:

1. Depth first search cannot handle infinite search spaces unless it gets lucky and proceeds down a path that leads to the goal.
2. It does not necessarily find an optimal solution.
3. Unless we are lucky, the search order in a finite space may lead to exhaustively trying every possible path.

We may be able to do better using either breadth first search or heuristic search. Read on to see how these algorithms work.
Breadth First Search

Breadth first search was first mentioned earlier in the text. The code for breadth first search differs in a small way from depth first search: instead of a stack, a queue is used to store the alternative choices. The change to the code is small, but the impact on the performance of the algorithm is quite big. Depth first search goes down one path until the path leads to the goal or no more steps can be taken; when a path is exhausted and does not end at the goal, backtracking occurs. In contrast, breadth first search explores all paths from the starting location at the same time. This is because each alternative is enqueued onto the queue, and each alternative is then dequeued in turn. This has an effect on how the search proceeds.

BFS Example

Breadth first search takes one step on each path each time through the while loop. So after the first step, each of the possible second steps is taken, then each of the possible third steps, and so on. You can see that the number of alternatives grows at each step; in this maze there were a couple of choices at some steps and up to five choices at others. The number of choices at each step is called the branching factor of a problem. A branching factor of one would mean that there is no choice from one step to the next. A branching factor of two means the problem doubles in size at each step. Since breadth first search takes a step in each direction at each step, a branching factor of two would be bad: it means the size of the search space grows exponentially (assuming no repeated states). Breadth first search is not a good search in this case unless the goal node is very near the start node.

The breadth first search shown in the figure below covers nearly as much of the maze as the blind depth first search did; only a few locations are left unvisited. However, the breadth first search found the optimal solution to this maze. In fact, breadth first search will always find the optimal solution if it is given enough time. Breadth first search also deals well with infinite search spaces: because it branches out from the source, exploring all possible paths simultaneously, it will never get stuck going down some infinite path forever. It may help to visualize pouring water into the maze; the water will fill the maze from the source and find the shortest way to the goal.

The advantages and disadvantages of breadth first search are as follows:

1. Breadth first search can deal with infinite search spaces.
2. Breadth first search will always find the optimal goal.
3. It may not perform well at all when the problem has too high a branching factor. In fact, it may take millions of years or more to use breadth first search on some problems.
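The change from depth first search might be sketched as follows. This is a minimal sketch, assuming the same adjacent function as in the depth first search sketch and using Python's collections.deque as the queue rather than a dedicated Queue class.

from collections import deque

def graphBFS(G, start, goal):
    # Identical to graphDFS except that a queue replaces the stack,
    # so all partial paths are extended one step at a time.
    V, E = G
    queue = deque()
    visited = set()
    queue.append([start])

    while len(queue) > 0:
        path = queue.popleft()   # dequeue the oldest partial path
        current = path[0]
        if not current in visited:
            visited.add(current)
            if current == goal:
                return path
            for v in adjacent(current, E):
                if not v in path:
                    queue.append([v] + path)

    return []

Because the oldest path is always extended next, the first time the goal is dequeued its path is a shortest one, which is why breadth first search finds optimal solutions.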
[Figure: Breadth first search of a maze]

While it would be nice to be able to find optimal solutions to problems, breadth first search is not really all that practical to use. Most interesting problems have high enough branching factors that breadth first search is impractical.

Hill Climbing

Depth first search was impractical because it blindly searched for a solution. If the search is truly blind, then sometimes we'll get lucky and find a solution quickly, while other times we might not find a solution at all, depending on the size of the search space, especially when there are infinite branches. If we had some more information about where the goal is, then we might be able to improve the depth first search algorithm. Think of trying to summit a mountain: we can see the peak of the mountain, so we know the general direction we want to take to get there. We want to climb the hill, and that is where the name of this algorithm comes from. Anyone who has climbed mountains knows that sometimes what appears to be a route up the mountain leads to a dead end, and sometimes what appears to be a route to the top only leads to a smaller peak close by. These false peaks are called localized maxima, and hill climbing can suffer from finding a localized maximum and thinking that it is the overall goal that was sought.

Hill Climbing Example

The next figure features the same maze with hill climbing applied to the search. To climb the hill we apply a heuristic to help in searching the maze, which we can do if we know the exit point of the maze.
If we know the location of the goal, we can employ the Manhattan distance as a heuristic to guide us towards it. We don't know the length of the path that will lead to the solution, since we don't know all the details of the maze, but we can estimate the distance from where we are to the goal if we know the location of the goal and our current location.

The Manhattan distance is a measure of the number of rows and columns that separate any two locations on a maze or map: it is the count of rows we must move down plus the columns we must move over to reach the goal. The distance is called the Manhattan distance because it would be like walking between buildings in Manhattan, or city blocks in any city. The Manhattan distance is either exact or an under-estimate of the total distance to the goal. In this example it happens to be exact, but in general a direct route to the goal may not be possible, in which case the Manhattan distance is an under-estimate. This is important, because over-estimating the distance will mean that hill climbing ends up working like depth first search again; the heuristic would not improve the performance of the algorithm. For instance, if we took the easy approach and said that our distance was always the same fixed amount from the goal, no real hill climbing would occur.

The example shows that the algorithm chooses to go down first if possible, and then to the right; the goal location is known, and the minimum Manhattan distance orders the choices to be explored. Going left or up is not an option unless nothing else is available. So the algorithm proceeds down and to the right until it reaches a point where it has no choice on this path but to go up. Hill climbing performs like depth first search in that it won't give up on a path until it reaches a dead end. While hill climbing does not find the optimal solution in this maze, it does find a solution, and it examines far fewer locations in this case than breadth first or depth first search.

[Figure: Hill climbing search of a maze]
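A sketch of the heuristic, and of the small change that turns depth first search into hill climbing, follows. This is a minimal sketch under stated assumptions: maze locations are (row, column) tuples in a hash set, the Stack class is the one assumed in the earlier sketch, and the function names are hypothetical.

def manhattan(location, goal):
    # Number of rows plus number of columns separating the two locations.
    row, col = location
    goalRow, goalCol = goal
    return abs(goalRow - row) + abs(goalCol - col)

def adjacentInMaze(location, maze):
    # The four non-diagonal neighbors that are open locations in the
    # maze, where the maze is a set of (row, column) tuples.
    row, col = location
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [loc for loc in candidates if loc in maze]

def hillClimbMaze(maze, start, goal):
    # Depth first search, except that at each step the choices are
    # sorted by estimated distance before being pushed, so the most
    # promising choice ends up on top of the stack.
    stack = Stack()
    visited = set()
    stack.push([start])
    while not stack.isEmpty():
        path = stack.pop()
        current = path[0]
        if not current in visited:
            visited.add(current)
            if current == goal:
                return path
            choices = [v for v in adjacentInMaze(current, maze) if not v in path]
            # Push the best choice (smallest estimate) last so it is popped first.
            for v in sorted(choices, key=lambda loc: manhattan(loc, goal),
                            reverse=True):
                stack.push([v] + path)
    return []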
heuristic search the location of the goal must be known prior to starting the search you must have heuristic that can be applied that will either under-estimate or provide an exact length of the path to the goal the better the heuristicthe better the hill climbing search algorithm hill climbing can perform well even in large search spaces hill climbing can handle infinite search branches if the heuristic can avoid them hill climbing may suffer from local maxima or peaks hill climbing may not find an optimal solution like breadth first search to implement hill climbing the alternative choices at each step are sorted according to the heuristic before they are placed on the stack otherwisethe code is exactly the same as that of depth first search closed knight' tour hill climbing can be used in solving the closed knight' tour problem solving this problem involves moving knight from the game of chess around chess board (or any size boardthe knight must be moved two squares followed by one square in the perpendicular directionforming an on the chessboard the closed knight tour problem is to find path that visits every location on the board through sequence of legal knight moves that starts and ends in the same location with no square being visited twice except the starting and ending location since we want to find path through the boardthe solution can be represented as path from start to finish each node in the path is move on the board move is valid if it is on the board and is not already in the path in this waythe board itself never has to be explicitly built generating possible moves for knight could be rather complex if you try to write code to deal with the edges of the board in generalwhen adjacent nodes have to be generated and special cases occur on boundariesit is far easier to generate set of possibly invalid moves along with the valid moves in the case of moving knight aroundthere are eight possible moves in the general case after generating all possible movesthe invalid moves are obvious and can be filtered out using this techniqueboundary conditions are handled in uniform manner once instead of with each separate possible move the code is much cleaner and the logic is much easier to understand figure provides solution to the closed knight' tour problem for board the tour starts in the lower left corner where two edges were not drawn so you can see where the tour began and ended while it was being computed the tour took few minutes to find using heuristic to sort the choices of next location least constrained heuristic was applied to sort the new choices before adding them to the stack the least constrained next choice was the choice that would have the most choices next sorting the next moves in this fashion avoids looking at paths that lead to dead ends by generally staying closer to the edges of the board where the next move has the most choices in other wordsit avoids moving to the middle this copy belongs to 'acha
Fig. A closed knight's tour

This heuristic is not perfect, and some backtracking is still required to find the solution. Nevertheless, without this heuristic there would be no hope of solving the problem in a reasonable amount of time for a board of this size. In fact, the solution can't be found in a reasonable amount of time with a simple depth first search, unless you get lucky and search in the correct direction at each step. With the heuristic and hill climbing applied, the solution can be found in just a few seconds.

The N-Queens Problem

To solve the n-queens problem, n queens must be placed on an n × n chess board so that no two queens are in the same column, row, or diagonal. Solving this using depth first search would not work: the search space is too large, and you would simply have to get very lucky to find a solution using brute force.

The n-queens problem does have the unique feature that when a queen is placed on the board, all other locations in the row, column, or diagonals it was placed in are no longer possible candidates for future moves. Removing these possible moves from the list of available locations is called forward checking. Forward checking decreases the size of the search space at each step.
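As a minimal sketch of forward checking (the helper names here are illustrative, not from the text), a placement can be checked against the remaining open locations like this:

    # A minimal sketch of forward checking. Locations are
    # (row, column) pairs.

    def conflicts(queen, loc):
        # True if loc shares a row, column, or diagonal with queen.
        (qrow, qcol), (row, col) = queen, loc
        return (qrow == row or qcol == col
                or abs(qrow - row) == abs(qcol - col))

    def forward_check(available, queen):
        # Keep only the open locations that do not conflict with
        # the newly placed queen.
        return {loc for loc in available if not conflicts(queen, loc)}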
The choice of the next row in which to place a queen is another unique feature of the n-queens problem: the solution won't be easier to find if a random row is picked instead of simply the next row in the sequence of rows. So the search for the solution is only over which column to place the next queen in.

To aid in forward checking, the board can be represented as a tuple: (queen locations, available locations). The first item in the tuple is the list of placed queens. The second item is the list of available locations on the board. Forward checking can pick one of the available locations in the next row; at that point, all locations in the second part of the tuple that conflict with the chosen queen placement can be eliminated. Thus forward checking removes all the possible locations that are no longer viable given a choice of placement for a queen.

The hill climbing part of solving the n-queens problem comes into play when the choice of which column to place a queen in is made. The column chosen is the one that least constrains future choices. Like the knight's tour, the n-queens problem benefits when the next choice made leaves the maximum number of choices later. Using this heuristic, forward checking, and the simple selection of the next row in which to place a queen, it is possible to solve the n-queens problem in a reasonable amount of time. One solution is shown in the figure.

Fig. An n-queens solution

To review, implementing hill climbing requires that the alternative choices at each step be sorted according to the heuristic before they are placed on the stack. Otherwise, the code is exactly the same as that of depth first search. In some cases, like the knight's tour and the n-queens problem, any solution is an optimal solution. But, as noted above, when searching a maze, hill climbing does not necessarily find an optimal solution.
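Pulling the n-queens pieces together, the successor generation might be sketched as follows, reusing the forward_check helper from the sketch above. The (placed queens, available locations) representation is the one described in the text; the function name and details are illustrative.

    # A sketch of generating next states for n-queens, reusing the
    # forward_check helper sketched earlier. A state is a tuple of
    # (list of placed queens, available locations).

    def next_states(placed, available):
        row = len(placed)  # rows are filled in order: 0, 1, 2, ...
        states = []
        for loc in (l for l in available if l[0] == row):
            remaining = forward_check(available, loc)
            states.append((len(remaining), placed + [loc], remaining))
        # Least constraining column first: the placement leaving the
        # most open locations is tried before the others.
        states.sort(key=lambda s: s[0], reverse=True)
        return [(queens, remaining) for _, queens, remaining in states]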
Wouldn't it be nice if we could combine breadth first search and hill climbing?

Best First Search

Breadth first search can find an optimal solution and deal with infinite search spaces, but it is not very efficient and can be used only on smaller problems. Hill climbing is more efficient, but may not find an optimal solution. Combining the two, we get best first search. In best first search we order the entire queue according to the distance of each current node to the goal, using the same heuristic as hill climbing.

Best First Example

Consider the example in the figure. The first step moves closer to the goal by going down one row. A step to the right would be an equally good move (actually a better one, knowing the optimal solution), but the step below looks better because it is closer to the eventual goal. So best first proceeds down and to the right, like hill climbing, until it reaches the step, shown in red, where it is forced to move up and away from the goal. At that point the red step looks just as good as the blue step along the bottom: the Manhattan distance of both is the same. Once the red step is taken, the blue step in the middle of the maze looks just as good.

The effect of heading away from the goal is to start searching all paths simultaneously. That's how best first works: it explores one path while it is moving toward the goal, and multiple paths when moving away from the goal.

The code for best first search is a lot like breadth first search, except that a priority queue is used to sort the possible next steps on the queue according to their estimated distance from the goal. Best first has the advantage of considering multiple paths, like breadth first search, when heading away from the goal, while performing like hill climbing when heading toward the goal.

Fig. Best first search of the maze
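A minimal sketch of this follows, using Python's heapq module as the priority queue and assuming the manhattan and adjacent helpers from the earlier hill climbing sketch.

    # A sketch of best first search, assuming the manhattan and
    # adjacent helpers from the earlier hill climbing sketch.

    import heapq

    def best_first(maze, start, goal, adjacent):
        # Each entry is (estimated distance to goal, location, path).
        frontier = [(manhattan(start, goal), start, [start])]
        visited = set()
        while frontier:
            _, loc, path = heapq.heappop(frontier)
            if loc == goal:
                return path
            if loc in visited:
                continue
            visited.add(loc)
            for nxt in adjacent(maze, loc):
                if nxt not in visited:
                    heapq.heappush(frontier,
                                   (manhattan(nxt, goal), nxt, path + [nxt]))
        return None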
In the example shown here we did not do better than hill climbing. Of course, that is only this example; in general, hill climbing may do worse than best first. It all depends on the order in which locations are searched in the search space. However, neither hill climbing nor best first found the optimal solution, as breadth first search does. They both got stuck heading into the long path in the middle of the maze.

A* Search

Wouldn't it be nice to be able to give up on some paths if they seem too long? That's the idea behind the A* algorithm. In this search, the next choices are sorted by their estimate of the distance to the goal (the Manhattan distance in our maze examples) plus the distance of the path so far. The effect of this is that paths are abandoned (for a while, anyway) if they appear to be taking too long to reach the goal.

A* Example

In the figure, the same path is first attempted by going down and to the right until the step at the bottom of the maze is reached. Then that path is abandoned, because the length of the path so far plus the Manhattan distance at the red step is better than taking another step along the bottom of the maze. Again the search goes down and to the right, eventually filling the same region as in the best first example. At this point the search continues across the top to the step where it again goes down, at which point the red step looks better than taking another step to the left. The search gives up on the blue path and then proceeds to the goal from the red step.

Fig. A* search of the maze
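The only change from the best first sketch is the priority: the cost of the path so far plus the heuristic estimate. A sketch, under the same assumptions as above:

    # A sketch of A* search: the priority is now the path cost so
    # far plus the heuristic estimate. Same assumed helpers as in
    # the best first sketch.

    import heapq

    def a_star(maze, start, goal, adjacent):
        # Each entry is (cost + estimate, cost so far, location, path).
        frontier = [(manhattan(start, goal), 0, start, [start])]
        visited = set()
        while frontier:
            _, cost, loc, path = heapq.heappop(frontier)
            if loc == goal:
                return path
            if loc in visited:
                continue
            visited.add(loc)
            for nxt in adjacent(maze, loc):
                if nxt not in visited:
                    heapq.heappush(frontier,
                                   (cost + 1 + manhattan(nxt, goal),
                                    cost + 1, nxt, path + [nxt]))
        return None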