class TreeMap(LinkedBinaryTree, MapBase):
    """Sorted map implementation using a binary search tree."""

    # --------------------- override Position class ---------------------
    class Position(LinkedBinaryTree.Position):
        def key(self):
            """Return key of map's key-value pair."""
            return self.element()._key

        def value(self):
            """Return value of map's key-value pair."""
            return self.element()._value

    # ----------------------- nonpublic utilities -----------------------
    def _subtree_search(self, p, k):
        """Return Position of p's subtree having key k, or last node searched."""
        if k == p.key():                                  # found match
            return p
        elif k < p.key():                                 # search left subtree
            if self.left(p) is not None:
                return self._subtree_search(self.left(p), k)
        else:                                             # search right subtree
            if self.right(p) is not None:
                return self._subtree_search(self.right(p), k)
        return p                                          # unsuccessful search

    def _subtree_first_position(self, p):
        """Return Position of first item in subtree rooted at p."""
        walk = p
        while self.left(walk) is not None:                # keep walking left
            walk = self.left(walk)
        return walk

    def _subtree_last_position(self, p):
        """Return Position of last item in subtree rooted at p."""
        walk = p
        while self.right(walk) is not None:               # keep walking right
            walk = self.right(walk)
        return walk

Code Fragment: Beginning of the TreeMap class, based on a binary search tree.
    def first(self):
        """Return the first Position in the tree (or None if empty)."""
        return self._subtree_first_position(self.root()) if len(self) > 0 else None

    def last(self):
        """Return the last Position in the tree (or None if empty)."""
        return self._subtree_last_position(self.root()) if len(self) > 0 else None

    def before(self, p):
        """Return the Position just before p in the natural order.

        Return None if p is the first position.
        """
        self._validate(p)                      # inherited from LinkedBinaryTree
        if self.left(p) is not None:
            return self._subtree_last_position(self.left(p))
        else:
            # walk upward
            walk = p
            above = self.parent(walk)
            while above is not None and walk == self.left(above):
                walk = above
                above = self.parent(walk)
            return above

    def after(self, p):
        """Return the Position just after p in the natural order.

        Return None if p is the last position.
        """
        # symmetric to before(p)
        self._validate(p)
        if self.right(p) is not None:
            return self._subtree_first_position(self.right(p))
        else:
            walk = p
            above = self.parent(walk)
            while above is not None and walk == self.right(above):
                walk = above
                above = self.parent(walk)
            return above

    def find_position(self, k):
        """Return position with key k, or else neighbor (or None if empty)."""
        if self.is_empty():
            return None
        else:
            p = self._subtree_search(self.root(), k)
            self._rebalance_access(p)          # hook for balanced tree subclasses
            return p

Code Fragment: Navigational methods of the TreeMap class.
    def find_min(self):
        """Return (key,value) pair with minimum key (or None if empty)."""
        if self.is_empty():
            return None
        else:
            p = self.first()
            return (p.key(), p.value())

    def find_ge(self, k):
        """Return (key,value) pair with least key greater than or equal to k.

        Return None if there does not exist such a key.
        """
        if self.is_empty():
            return None
        else:
            p = self.find_position(k)          # may not find exact match
            if p.key() < k:                    # p's key is too small
                p = self.after(p)
            return (p.key(), p.value()) if p is not None else None

    def find_range(self, start, stop):
        """Iterate all (key,value) pairs such that start <= key < stop.

        If start is None, iteration begins with minimum key of map.
        If stop is None, iteration continues through the maximum key of map.
        """
        if not self.is_empty():
            if start is None:
                p = self.first()
            else:
                # we initialize p with logic similar to find_ge
                p = self.find_position(start)
                if p.key() < start:
                    p = self.after(p)
            while p is not None and (stop is None or p.key() < stop):
                yield (p.key(), p.value())
                p = self.after(p)

Code Fragment: Some of the sorted map operations for the TreeMap class.
    def __getitem__(self, k):
        """Return value associated with key k (raise KeyError if not found)."""
        if self.is_empty():
            raise KeyError('Key Error: ' + repr(k))
        else:
            p = self._subtree_search(self.root(), k)
            self._rebalance_access(p)          # hook for balanced tree subclasses
            if k != p.key():
                raise KeyError('Key Error: ' + repr(k))
            return p.value()

    def __setitem__(self, k, v):
        """Assign value v to key k, overwriting existing value if present."""
        if self.is_empty():
            leaf = self._add_root(self._Item(k, v))   # from LinkedBinaryTree
        else:
            p = self._subtree_search(self.root(), k)
            if p.key() == k:
                p.element()._value = v         # replace existing item's value
                self._rebalance_access(p)      # hook for balanced tree subclasses
                return
            else:
                item = self._Item(k, v)
                if p.key() < k:
                    leaf = self._add_right(p, item)   # inherited from LinkedBinaryTree
                else:
                    leaf = self._add_left(p, item)    # inherited from LinkedBinaryTree
        self._rebalance_insert(leaf)           # hook for balanced tree subclasses

    def __iter__(self):
        """Generate an iteration of all keys in the map in order."""
        p = self.first()
        while p is not None:
            yield p.key()
            p = self.after(p)

Code Fragment: Map operations for accessing and inserting items in the TreeMap class. Reverse iteration can be implemented with __reversed__, using a symmetric approach to __iter__.
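Before continuing with deletion, here is a minimal usage sketch of the map behaviors defined so far. It assumes the TreeMap code above (together with its LinkedBinaryTree and MapBase ancestors from earlier chapters) has been collected into a module; the module name tree_map is hypothetical.

    from tree_map import TreeMap

    m = TreeMap()
    for k in (38, 25, 51, 17, 31):   # insertions via __setitem__
        m[k] = str(k)
    print(m[31])                     # access via __getitem__ -> '31'
    print(list(m))                   # __iter__ yields keys in sorted order
    print(m.find_ge(30))             # least (key, value) with key >= 30 -> (31, '31')
    del m[25]                        # removal via __delitem__, defined below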
    def delete(self, p):
        """Remove the item at given Position."""
        self._validate(p)                      # inherited from LinkedBinaryTree
        if self.left(p) and self.right(p):     # p has two children
            replacement = self._subtree_last_position(self.left(p))
            self._replace(p, replacement.element())   # from LinkedBinaryTree
            p = replacement                    # now p has at most one child
        parent = self.parent(p)
        self._delete(p)                        # inherited from LinkedBinaryTree
        self._rebalance_delete(parent)         # if root deleted, parent is None

    def __delitem__(self, k):
        """Remove item associated with key k (raise KeyError if not found)."""
        if not self.is_empty():
            p = self._subtree_search(self.root(), k)
            if k == p.key():
                self.delete(p)                 # rely on positional version
                return                         # successful deletion complete
            self._rebalance_access(p)          # hook for balanced tree subclasses
        raise KeyError('Key Error: ' + repr(k))

Code Fragment: Support for deleting an item from a TreeMap, located either by position or by key.

Performance of a Binary Search Tree

An analysis of the operations of our TreeMap class is given in the table below. Almost all operations have a worst-case running time that depends on h, where h is the height of the current tree. This is because most operations rely on a constant amount of work for each node along a particular path of the tree, and the maximum path length within a tree is proportional to the height of the tree.

Most notably, our implementations of map operations __getitem__, __setitem__, and __delitem__ each begin with a call to the _subtree_search utility, which traces a path downward from the root of the tree, using O(1) time at each node to determine how to continue the search. Similar paths are traced when looking for a replacement during a deletion, or when computing a position's inorder predecessor or successor. We note that although a single call to the after method has worst-case running time of O(h), the n successive calls made during a call to __iter__ require a total of O(n) time, since each edge is traced at most twice; in a sense, those calls have O(1) amortized time bounds. A similar argument can be used to prove the O(s + h) worst-case bound for a call to find_range that reports s results (see the exercises).
Operation                                                        Running Time
k in T                                                           O(h)
T[k], T[k] = v                                                   O(h)
T.delete(p), del T[k]                                            O(h)
T.find_position(k)                                               O(h)
T.first(), T.last(), T.find_min(), T.find_max()                  O(h)
T.before(p), T.after(p)                                          O(h)
T.find_lt(k), T.find_le(k), T.find_gt(k), T.find_ge(k)           O(h)
T.find_range(start, stop)                                        O(s + h)
iter(T), reversed(T)                                             O(n)

Table: Worst-case running times of the operations for a TreeMap T. We denote the current height of the tree with h, and the number of items reported by find_range as s. The space usage is O(n), where n is the number of items stored in the map.

A binary search tree T is therefore an efficient implementation of a map with n entries only if its height is small. In the best case, T has height h = ceil(log(n + 1)) - 1, which yields logarithmic-time performance for all the map operations. In the worst case, however, T has height n, in which case it would look and feel like an ordered list implementation of a map. Such a worst-case configuration arises, for example, if we insert items with keys in increasing or decreasing order.

Figure: Example of a binary search tree with linear height, obtained by inserting entries with keys in increasing order.

We can nevertheless take comfort that, on average, a binary search tree with n keys generated from a random series of insertions and removals of keys has expected height O(log n). The justification of this statement is beyond the scope of the book, requiring careful mathematical language to precisely define what we mean by a random series of insertions and removals, and sophisticated probability theory.

In applications where one cannot guarantee the random nature of updates, it is better to rely on variations of search trees, presented in the remainder of this chapter, that guarantee a worst-case height of O(log n), and thus O(log n) worst-case time for searches, insertions, and deletions.
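To see how strongly insertion order drives the height, the following self-contained sketch (a bare-bones binary search tree, not the TreeMap class above) compares sorted and random insertion orders.

    import random

    class Node:
        __slots__ = 'key', 'left', 'right'
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def height(root):
        """Count nodes on the longest root-to-leaf path (None has height 0)."""
        if root is None:
            return 0
        return 1 + max(height(root.left), height(root.right))

    n = 500
    chain = None
    for k in range(n):                       # increasing keys
        chain = insert(chain, k)
    shuffled = None
    for k in random.sample(range(n), n):     # random insertion order
        shuffled = insert(shuffled, k)
    print(height(chain))                     # 500: linear height
    print(height(shuffled))                  # typically around 20: O(log n) expected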
Balanced Search Trees

In the closing of the previous section, we noted that if we could assume a random series of insertions and removals, the standard binary search tree supports O(log n) expected running times for the basic map operations. However, we may only claim O(n) worst-case time, because some sequences of operations may lead to an unbalanced tree with height proportional to n. In the remainder of this chapter, we explore four search tree algorithms that provide stronger performance guarantees. Three of the four data structures (AVL trees, splay trees, and red-black trees) are based on augmenting a standard binary search tree with occasional operations to reshape the tree and reduce its height.

The primary operation to rebalance a binary search tree is known as a rotation. During a rotation, we "rotate" a child to be above its parent, as diagrammed in the figure below.

Figure: A rotation operation in a binary search tree. A rotation can be performed to transform the left formation into the right, or the right formation into the left. All keys in subtree T1 are less than that of position x, all keys in subtree T2 are between those of positions x and y, and all keys in subtree T3 are greater than that of position y.

To maintain the binary search tree property through a rotation, we note that if position x was a left child of position y prior to the rotation (and therefore the key of x is less than the key of y), then y becomes the right child of x after the rotation, and vice versa. Furthermore, we must relink the subtree of items with keys that lie between the keys of the two positions that are being rotated. For example, in the figure the subtree labeled T2 represents items with keys that are known to be greater than that of position x and less than that of position y. In the first configuration of that figure, T2 is the right subtree of position x; in the second configuration, it is the left subtree of position y.

Because a single rotation modifies a constant number of parent-child relationships, it can be implemented in O(1) time with a linked binary tree representation.
In the context of a tree-balancing algorithm, a rotation allows the shape of a tree to be modified while maintaining the search tree property. If used wisely, this operation can be performed to avoid highly unbalanced tree configurations. For example, a rightward rotation from the first formation of the figure to the second reduces the depth of each node in subtree T1 by one, while increasing the depth of each node in subtree T3 by one. (Note that the depth of nodes in subtree T2 is unaffected by the rotation.)

One or more rotations can be combined to provide broader rebalancing within a tree. One such compound operation we consider is a trinode restructuring. For this manipulation, we consider a position x, its parent y, and its grandparent z. The goal is to restructure the subtree rooted at z in order to reduce the overall path length to x and its subtrees. Pseudo-code for a restructure(x) method is given below and illustrated in the next figure. In describing a trinode restructuring, we temporarily rename the positions x, y, and z as a, b, and c, so that a precedes b and b precedes c in an inorder traversal of T. There are four possible orientations mapping x, y, and z to a, b, and c, as shown in the figure, which are unified into one case by our relabeling. The trinode restructuring replaces z with the node identified as b, makes the children of this node be a and c, and makes the children of a and c be the four previous children of x, y, and z (other than x and y), while maintaining the inorder relationships of all the nodes in T.

Algorithm restructure(x):
  Input: A position x of a binary search tree T that has both a parent y and a grandparent z
  Output: Tree T after a trinode restructuring (which corresponds to a single or double rotation) involving positions x, y, and z

  1. Let (a, b, c) be a left-to-right (inorder) listing of the positions x, y, and z, and let (T1, T2, T3, T4) be a left-to-right (inorder) listing of the four subtrees of x, y, and z not rooted at x, y, or z.
  2. Replace the subtree rooted at z with a new subtree rooted at b.
  3. Let a be the left child of b and let T1 and T2 be the left and right subtrees of a, respectively.
  4. Let c be the right child of b and let T3 and T4 be the left and right subtrees of c, respectively.

Code Fragment: The trinode restructuring operation in a binary search tree.

In practice, the modification of a tree T caused by a trinode restructuring operation can be implemented through case analysis either as a single rotation (as in parts a and b of the figure below) or as a double rotation (as in parts c and d). The double rotation arises when position x has the middle of the three relevant keys and is first rotated above its parent, and then above what was originally its grandparent. In any of the cases, the trinode restructuring is completed with O(1) running time.
Figure: Schematic illustration of a trinode restructuring operation: parts (a) and (b) require a single rotation; parts (c) and (d) require a double rotation.
Python Framework for Balancing Search Trees

Our TreeMap class, introduced above, is a concrete map implementation that does not perform any explicit balancing operations. However, we designed that class to also serve as a base class for other subclasses that implement more advanced tree-balancing algorithms. A summary of our inheritance hierarchy is shown in the figure below.

Figure: Our hierarchy of balanced search trees: LinkedBinaryTree and MapBase serve as bases of TreeMap, which in turn is the base of AVLTreeMap, SplayTreeMap, and RedBlackTreeMap. Recall that TreeMap inherits multiply from LinkedBinaryTree and MapBase.

Hooks for Rebalancing Operations

Our implementation of the basic map operations includes strategic calls to three nonpublic methods that serve as hooks for rebalancing algorithms:

- A call to _rebalance_insert(p) is made from within the __setitem__ method immediately after a new node is added to the tree at position p.
- A call to _rebalance_delete(p) is made each time a node has been deleted from the tree, with position p identifying the parent of the node that has just been removed. Formally, this hook is called from within the public delete(p) method, which is indirectly invoked by the public __delitem__(k) behavior.
- We also provide a hook, _rebalance_access(p), that is called when an item at position p of a tree is accessed through a public method such as __getitem__. This hook is used by the splay tree structure (described later in this chapter) to restructure a tree so that more frequently accessed items are brought closer to the root.

We provide trivial declarations of these three methods, in the code fragment below, having bodies that do nothing (using the pass statement). A subclass of TreeMap may override any of these methods to implement a nontrivial action to rebalance a tree. This is another example of the template method design pattern.
    def _rebalance_insert(self, p):
        pass

    def _rebalance_delete(self, p):
        pass

    def _rebalance_access(self, p):
        pass

Code Fragment: Additional code for the TreeMap class (continued from the earlier fragments), providing stubs for the rebalancing hooks.

Nonpublic Methods for Rotating and Restructuring

A second form of support for balanced search trees is our inclusion of nonpublic utility methods _rotate and _restructure that, respectively, implement a single rotation and a trinode restructuring (described at the beginning of this section). Although these methods are not invoked by the public TreeMap operations, we promote code reuse by providing these implementations in this class so that they are inherited by all balanced-tree subclasses.

Our implementations are provided in the code fragment below. To simplify the code, we define an additional _relink utility that properly links parent and child nodes to each other, including the special case in which a "child" is a None reference. The focus of the _rotate method then becomes redefining the relationship between the parent and child, relinking a rotated node directly to its original grandparent, and shifting the "middle" subtree (that labeled as T2 in the earlier rotation figure) between the rotated nodes. For the trinode restructuring, we determine whether to perform a single or double rotation, as originally described in the schematic figure.

Factory for Creating Tree Nodes

We draw attention to an important subtlety in the design of both our TreeMap class and the original LinkedBinaryTree subclass. The low-level definition of a node is provided by the nested _Node class within LinkedBinaryTree. Yet, several of our tree-balancing strategies require that auxiliary information be stored at each node to guide the balancing process. Those classes will override the nested _Node class to provide storage for an additional field.

Whenever we add a new node to the tree, as within the _add_right method of the LinkedBinaryTree (originally given in an earlier chapter), we intentionally instantiate the node using the syntax self._Node, rather than the qualified name LinkedBinaryTree._Node. This is vital to our framework! When the expression self._Node is applied to an instance of a tree (sub)class, Python's name resolution follows the inheritance structure. If a subclass has overridden the definition for the _Node class, instantiation of self._Node relies on the newly defined node class. This technique is an example of the factory method design pattern, as we provide a subclass the means to control the type of node that is created within methods of the parent class.
    def _relink(self, parent, child, make_left_child):
        """Relink parent node with child node (we allow child to be None)."""
        if make_left_child:                    # make it a left child
            parent._left = child
        else:                                  # make it a right child
            parent._right = child
        if child is not None:                  # make child point to parent
            child._parent = parent

    def _rotate(self, p):
        """Rotate Position p above its parent."""
        x = p._node
        y = x._parent                          # we assume this exists
        z = y._parent                          # grandparent (possibly None)
        if z is None:
            self._root = x                     # x becomes root
            x._parent = None
        else:
            self._relink(z, x, y == z._left)   # x becomes a direct child of z
        # now rotate x and y, including transfer of middle subtree
        if x == y._left:
            self._relink(y, x._right, True)    # x._right becomes left child of y
            self._relink(x, y, False)          # y becomes right child of x
        else:
            self._relink(y, x._left, False)    # x._left becomes right child of y
            self._relink(x, y, True)           # y becomes left child of x

    def _restructure(self, x):
        """Perform trinode restructure of Position x with parent/grandparent."""
        y = self.parent(x)
        z = self.parent(y)
        if (x == self.right(y)) == (y == self.right(z)):   # matching alignments
            self._rotate(y)                    # single rotation (of y)
            return y                           # y is new subtree root
        else:                                  # opposite alignments
            self._rotate(x)                    # double rotation (of x)
            self._rotate(x)
            return x                           # x is new subtree root

Code Fragment: Additional code for the TreeMap class (continued from the earlier fragments), providing nonpublic utilities for balanced search tree subclasses.
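To isolate the pointer surgery that _relink and _rotate perform, here is a standalone sketch applying the same single-rotation logic to a bare node record. This is an illustration under simplified assumptions, not the TreeMap code itself.

    class RotNode:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right, self.parent = key, left, right, None
            for child in (left, right):
                if child is not None:
                    child.parent = self

    def rotate(x):
        """Rotate node x above its parent y; return the subtree's new root (x)."""
        y = x.parent
        z = y.parent                       # possibly None, if y was the root
        if x is y.left:                    # rightward rotation: x's right subtree
            y.left = x.right               # (the middle subtree T2) moves to y
            if x.right is not None:
                x.right.parent = y
            x.right = y
        else:                              # leftward rotation, symmetric
            y.right = x.left
            if x.left is not None:
                x.left.parent = y
            x.left = y
        y.parent = x
        x.parent = z
        if z is not None:                  # reattach x where y used to hang
            if z.left is y:
                z.left = x
            else:
                z.right = x
        return x

    # y = 20 with left child 10; rotating 10 above 20:
    y = RotNode(20, left=RotNode(10))
    root = rotate(y.left)
    print(root.key, root.right.key)        # 10 20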
AVL Trees

The TreeMap class, which uses a standard binary search tree as its data structure, should be an efficient map data structure, but its worst-case performance for the various operations is linear time, because it is possible that a series of operations results in a tree with linear height. In this section, we describe a simple balancing strategy that guarantees worst-case logarithmic running time for all the fundamental map operations.

Definition of an AVL Tree

The simple correction is to add a rule to the binary search tree definition that will maintain a logarithmic height for the tree. Although we originally defined the height of a subtree rooted at position p of a tree to be the number of edges on the longest path from p to a leaf, it is easier for explanation in this section to consider the height to be the number of nodes on such a longest path. By this definition, a leaf position has height 1, while we trivially define the height of a "null" child to be 0.

In this section, we consider the following height-balance property, which characterizes the structure of a binary search tree T in terms of the heights of its nodes.

Height-Balance Property: For every position p of T, the heights of the children of p differ by at most 1.

Any binary search tree T that satisfies the height-balance property is said to be an AVL tree, named after the initials of its inventors: Adel'son-Vel'skii and Landis. An example of an AVL tree is shown in the figure below.

Figure: An example of an AVL tree. The keys of the items are shown inside the nodes, and the heights of the nodes are shown above the nodes (with empty subtrees having height 0).
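As a concrete restatement of the property, the following sketch verifies it over a bare node record with key, left, and right attributes (hypothetical fields, used only for illustration), following the convention above that a leaf has height 1 and a "null" child height 0.

    def check_avl(node):
        """Return the subtree's height; raise if the height-balance property fails."""
        if node is None:
            return 0
        hl, hr = check_avl(node.left), check_avl(node.right)
        if abs(hl - hr) > 1:
            raise ValueError('height-balance property violated at ' + repr(node.key))
        return 1 + max(hl, hr)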
An immediate consequence of the height-balance property is that a subtree of an AVL tree is itself an AVL tree. The height-balance property also has the important consequence of keeping the height small, as shown in the following proposition.

Proposition: The height of an AVL tree storing n entries is O(log n).

Justification: Instead of trying to find an upper bound on the height of an AVL tree directly, it turns out to be easier to work on the "inverse problem" of finding a lower bound on the minimum number of nodes n(h) of an AVL tree with height h. We will show that n(h) grows at least exponentially. From this, it will be an easy step to derive that the height of an AVL tree storing n entries is O(log n).

We begin by noting that n(1) = 1 and n(2) = 2, because an AVL tree of height 1 must have exactly one node and an AVL tree of height 2 must have at least two nodes. Now, an AVL tree with the minimum number of nodes having height h, for h >= 3, is such that both its subtrees are AVL trees with the minimum number of nodes: one with height h - 1 and the other with height h - 2. Taking the root into account, we obtain the following formula that relates n(h) to n(h - 1) and n(h - 2), for h >= 3:

    n(h) = 1 + n(h - 1) + n(h - 2).     (1)

At this point, the reader familiar with the properties of Fibonacci progressions will already see that n(h) is a function exponential in h. To formalize that observation, we proceed as follows. Formula (1) implies that n(h) is a strictly increasing function of h. Thus, we know that n(h - 1) > n(h - 2). Replacing n(h - 1) with n(h - 2) in Formula (1) and dropping the 1, we get, for h >= 3,

    n(h) > 2 * n(h - 2).     (2)

Formula (2) indicates that n(h) at least doubles each time h increases by 2, which intuitively means that n(h) grows exponentially. To show this fact in a formal way, we apply Formula (2) repeatedly, yielding the following series of inequalities:

    n(h) > 2 * n(h - 2) > 4 * n(h - 4) > 8 * n(h - 6) > ... > 2^i * n(h - 2i).     (3)

That is, n(h) > 2^i * n(h - 2i), for any integer i such that h - 2i >= 1. Since we already know the values of n(1) and n(2), we pick i so that h - 2i is equal to either 1 or 2; that is, we pick i = ceil(h/2) - 1. By substituting this value of i in Formula (3), we obtain, for h >= 3,

    n(h) > 2^(ceil(h/2) - 1) * n(h - 2*ceil(h/2) + 2) >= 2^(h/2 - 1) * n(1) >= 2^(h/2 - 1).

By taking logarithms of both sides, we obtain log(n(h)) > h/2 - 1, from which we get

    h < 2 log(n(h)) + 2,

which implies that an AVL tree storing n entries has height at most 2 log n + 2.
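The recurrence is easy to tabulate; a quick numeric check such as the sketch below confirms the Fibonacci-like exponential growth that the proof derives.

    def min_avl_size(h, _memo={1: 1, 2: 2}):
        """Minimum number of nodes of an AVL tree of height h, per Formula (1)."""
        if h not in _memo:
            _memo[h] = 1 + min_avl_size(h - 1) + min_avl_size(h - 2)
        return _memo[h]

    for h in (5, 10, 20, 30):
        print(h, min_avl_size(h))   # 12, 143, 17710, 2178308: roughly
                                    # doubling every two levels of height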
By the proposition above and the analysis of binary search trees given earlier, the operation __getitem__, in a map implemented with an AVL tree, runs in time O(log n), where n is the number of items in the map. Of course, we still have to show how to maintain the height-balance property after an insertion or deletion.

Update Operations

Given a binary search tree T, we say that a position is balanced if the absolute value of the difference between the heights of its children is at most 1, and we say that it is unbalanced otherwise. Thus, the height-balance property characterizing AVL trees is equivalent to saying that every position is balanced.

The insertion and deletion operations for AVL trees begin similarly to the corresponding operations for (standard) binary search trees, but with post-processing for each operation to restore the balance of any portions of the tree that are adversely affected by the change.

Insertion

Suppose that tree T satisfies the height-balance property, and hence is an AVL tree, prior to the insertion of a new item. An insertion of a new item in a binary search tree, as described earlier, results in a new node at a leaf position p. This action may violate the height-balance property (see, for example, the figure below), yet the only positions that may become unbalanced are ancestors of p, because those are the only positions whose subtrees have changed. Therefore, let us describe how to restructure T to fix any unbalance that may have occurred.
Figure: An example insertion in an AVL tree: (a) after adding a new node, two of its ancestors become unbalanced; (b) a trinode restructuring restores the height-balance property. We show the heights of nodes above them, and we identify the nodes x, y, and z and the subtrees T1, T2, T3, and T4 participating in the trinode restructuring.

We restore the balance of the nodes in the binary search tree T by a simple "search-and-repair" strategy. In particular, let z be the first position we encounter in going up from p toward the root of T such that z is unbalanced (see part a of the figure above). Also, let y denote the child of z with higher height (and note that y must be an ancestor of p). Finally, let x be the child of y with higher height (there cannot be a tie, and position x must also be an ancestor of p, possibly p itself). We rebalance the subtree rooted at z by calling the trinode restructuring method, restructure(x), originally described in the previous section. An example of such a restructuring in the context of an AVL insertion is portrayed in the figure above.

To formally argue the correctness of this process in reestablishing the AVL height-balance property, we consider the implication of z being the nearest ancestor of p that became unbalanced after the insertion of p. It must be that the height of y increased by one due to the insertion and that it is now 2 greater than that of its sibling. Since y remains balanced, it must be that it formerly had subtrees with equal heights, and that the subtree containing x has increased its height by one. That subtree increased either because x = p, and thus its height changed from 0 to 1, or because x previously had equal-height subtrees and the height of the one containing p has increased by 1. Letting h >= 0 denote the height of the tallest child of x, this scenario might be portrayed as in the next figure.

After the trinode restructuring, we see that each of x, y, and z has become balanced. Furthermore, the node that becomes the root of the subtree after the restructuring has height h + 2, which is precisely the height that z had before the insertion of the new item. Therefore, any ancestor of z that became temporarily unbalanced becomes balanced again, and this one restructuring restores the height-balance property globally.
Figure: Rebalancing of a subtree during a typical insertion into an AVL tree: (a) before the insertion; (b) after an insertion in subtree T3 causes an imbalance at z; (c) after restoring balance with a trinode restructuring. Notice that the overall height of the subtree after the insertion is the same as before the insertion.
Deletion

Recall that a deletion from a regular binary search tree results in the structural removal of a node having either zero or one children. Such a change may violate the height-balance property in an AVL tree. In particular, if position p represents the parent of the removed node in tree T, there may be an unbalanced node on the path from p to the root of T. (See the figure below.) In fact, there can be at most one such unbalanced node; the justification of this fact is left as an exercise.

Figure: Deletion of an item from an AVL tree: (a) after removing a node, the root becomes unbalanced; (b) a (single) rotation restores the height-balance property.

As with insertion, we use trinode restructuring to restore balance in the tree T. In particular, let z be the first unbalanced position encountered going up from p toward the root of T. Also, let y be the child of z with larger height (note that position y is the child of z that is not an ancestor of p), and let x be the child of y defined as follows: if one of the children of y is taller than the other, let x be the taller child of y; else (both children of y have the same height), let x be the child of y on the same side as y (that is, if y is the left child of z, let x be the left child of y, else let x be the right child of y). In any case, we then perform a restructure(x) operation. (See the figure above.)

The restructured subtree is rooted at the middle position denoted as b in the description of the trinode restructuring operation. The height-balance property is guaranteed to be locally restored within the subtree of b (see the exercises). Unfortunately, this trinode restructuring may reduce the height of the subtree rooted at b by 1, which may cause an ancestor of b to become unbalanced. So, after rebalancing z, we continue walking up T looking for unbalanced positions. If we find another, we perform a restructure operation to restore its balance, and continue marching up T looking for more, all the way to the root. Still, since the height of T is O(log n), where n is the number of entries, by the earlier proposition, O(log n) trinode restructurings are sufficient to restore the height-balance property.
Performance of AVL Trees

By the earlier proposition, the height of an AVL tree with n items is guaranteed to be O(log n). Because the standard binary search tree operations had running times bounded by the height, and because the additional work in maintaining balance factors and restructuring an AVL tree can be bounded by the length of a path in the tree, the traditional map operations run in worst-case logarithmic time with an AVL tree. We summarize these results in the table below, and illustrate this performance in the figure that follows.

Operation                                                        Running Time
k in T                                                           O(log n)
T[k], T[k] = v                                                   O(log n)
T.delete(p), del T[k]                                            O(log n)
T.find_position(k)                                               O(log n)
T.first(), T.last(), T.find_min(), T.find_max()                  O(log n)
T.before(p), T.after(p)                                          O(log n)
T.find_lt(k), T.find_le(k), T.find_gt(k), T.find_ge(k)           O(log n)
T.find_range(start, stop)                                        O(s + log n)
iter(T), reversed(T)                                             O(n)

Table: Worst-case running times of operations for an n-item sorted map realized as an AVL tree T, with s denoting the number of items reported by find_range.

Figure: Illustrating the running time of searches and updates in an AVL tree. The time performance is O(1) per level, broken into a down phase, which typically involves searching, and an up phase, which typically involves updating height values and performing local trinode restructurings (rotations). Both phases follow a path of O(log n) length, for a worst-case total of O(log n) time.
Python Implementation

A complete implementation of an AVLTreeMap class is provided in the two code fragments below. It inherits from the standard TreeMap class and relies on the balancing framework described earlier. We highlight two important aspects of our implementation. First, the AVLTreeMap overrides the definition of the nested _Node class, as shown in the first fragment, in order to provide support for storing the height of the subtree rooted at a node. We also provide several utilities involving the heights of nodes, and the corresponding positions.

To implement the core logic of the AVL balancing strategy, we define a utility, named _rebalance, that suffices as a hook for restoring the height-balance property after an insertion or a deletion. Although the inherited behaviors for insertion and deletion are quite different, the necessary post-processing for an AVL tree can be unified. In both cases, we trace an upward path from the position p at which the change took place, recalculating the height of each position based on the (updated) heights of its children, and using a trinode restructuring operation if an imbalanced position is reached. If we reach an ancestor with height that is unchanged by the overall map operation, or if we perform a trinode restructuring that results in the subtree having the same height it had before the map operation, we stop the process; no further ancestor's height will change. To detect the stopping condition, we record the "old" height of each node and compare it to the newly calculated height.

class AVLTreeMap(TreeMap):
    """Sorted map implementation using an AVL tree."""

    # ----------------------- nested _Node class -----------------------
    class _Node(TreeMap._Node):
        """Node class for AVL maintains height value for balancing."""
        __slots__ = '_height'              # additional data member to store height

        def __init__(self, element, parent=None, left=None, right=None):
            super().__init__(element, parent, left, right)
            self._height = 0               # will be recomputed during balancing

        def left_height(self):
            return self._left._height if self._left is not None else 0

        def right_height(self):
            return self._right._height if self._right is not None else 0

Code Fragment: AVLTreeMap class (continued in the next fragment).
    # --------------- positional-based utility methods ---------------
    def _recompute_height(self, p):
        p._node._height = 1 + max(p._node.left_height(), p._node.right_height())

    def _isbalanced(self, p):
        return abs(p._node.left_height() - p._node.right_height()) <= 1

    def _tall_child(self, p, favorleft=False):   # parameter controls tiebreaker
        if p._node.left_height() + (1 if favorleft else 0) > p._node.right_height():
            return self.left(p)
        else:
            return self.right(p)

    def _tall_grandchild(self, p):
        child = self._tall_child(p)
        # if child is on left, favor left grandchild; else favor right grandchild
        alignment = (child == self.left(p))
        return self._tall_child(child, alignment)

    def _rebalance(self, p):
        while p is not None:
            old_height = p._node._height          # trivially 0 if new node
            if not self._isbalanced(p):           # imbalance detected!
                # perform trinode restructuring, setting p to resulting root,
                # and recompute new local heights after the restructuring
                p = self._restructure(self._tall_grandchild(p))
                self._recompute_height(self.left(p))
                self._recompute_height(self.right(p))
            self._recompute_height(p)             # adjust for recent changes
            if p._node._height == old_height:     # has height changed?
                p = None                          # no further changes needed
            else:
                p = self.parent(p)                # repeat with parent

    # ------------------- override balancing hooks -------------------
    def _rebalance_insert(self, p):
        self._rebalance(p)

    def _rebalance_delete(self, p):
        self._rebalance(p)

Code Fragment: AVLTreeMap class (continued from the previous fragment).
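As a brief usage sketch (assuming the AVLTreeMap and TreeMap code above lives in a hypothetical module avl_tree_map): sorted insertions, which would degenerate a plain binary search tree into a chain, are handled gracefully.

    from avl_tree_map import AVLTreeMap

    m = AVLTreeMap()
    for k in range(1, 17):      # increasing keys; the tree stays O(log n) tall
        m[k] = k * k
    print(m[10])                # 100
    print(m.find_min())         # (1, 1)
    del m[8]                    # deletion triggers _rebalance on the way up
    print(list(m)[:5])          # [1, 2, 3, 4, 5]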
Splay Trees

The next search tree structure we study is known as a splay tree. This structure is conceptually quite different from the other balanced search trees we discuss in this chapter, for a splay tree does not strictly enforce a logarithmic upper bound on the height of the tree. In fact, there are no additional height, balance, or other auxiliary data associated with the nodes of this tree.

The efficiency of splay trees is due to a certain move-to-root operation, called splaying, that is performed at the bottommost position p reached during every insertion, deletion, or even a search. (In essence, this is a tree variant of the move-to-front heuristic that we explored for lists.) Intuitively, a splay operation causes more frequently accessed elements to remain nearer to the root, thereby reducing the typical search times. The surprising thing about splaying is that it allows us to guarantee a logarithmic amortized running time for insertions, deletions, and searches.

Splaying

Given a node x of a binary search tree T, we splay x by moving x to the root of T through a sequence of restructurings. The particular restructurings we perform are important, for it is not sufficient to move x to the root of T by just any sequence of restructurings. The specific operation we perform to move x up depends upon the relative positions of x, its parent y, and (if it exists) x's grandparent z. There are three cases that we consider.

Zig-zig: The node x and its parent y are both left children or both right children. (See the figure below.) We promote x, making y a child of x and z a child of y, while maintaining the inorder relationships of the nodes in T.

Figure: Zig-zig: (a) before; (b) after. There is another symmetric configuration where x and y are left children.
Zig-zag: One of x and y is a left child and the other is a right child. (See the figure below.) In this case, we promote x by making x have y and z as its children, while maintaining the inorder relationships of the nodes in T.

Figure: Zig-zag: (a) before; (b) after. There is another symmetric configuration where x is a right child and y is a left child.

Zig: x does not have a grandparent. (See the figure below.) In this case, we perform a single rotation to promote x over y, making y a child of x, while maintaining the relative inorder relationships of the nodes in T.

Figure: Zig: (a) before; (b) after. There is another symmetric configuration where x is originally a left child of y.

We perform a zig-zig or a zig-zag when x has a grandparent, and we perform a zig when x has a parent but not a grandparent. A splaying step consists of repeating these restructurings at x until x becomes the root of T. An example of the splaying of a node is shown in the next two figures.
Figure: Example of splaying a node: (a) the splaying starts with a zig-zag; (b) after the zig-zag; (c) the next step will be a zig-zig. (Continues in the next figure.)
Figure: Example of splaying a node: (d) after the zig-zig; (e) the next step is again a zig-zig; (f) after the zig-zig. (Continued from the previous figure.)
When to Splay

The rules that dictate when splaying is performed are as follows:

- When searching for key k, if k is found at position p, we splay p; else we splay the leaf position at which the search terminates unsuccessfully. For example, the splaying in the previous two figures would be performed after either a successful or an unsuccessful search.
- When inserting key k, we splay the newly created internal node where k gets inserted. A sequence of insertions in a splay tree is shown in the figure below.

Figure: A sequence of insertions in a splay tree: (a) initial tree; (b) after an insertion, but before a zig step; (c) after splaying; (d) after another insertion, but before a zig-zag step; (e) after splaying; (f) after a third insertion, but before a zig-zig step; (g) after splaying.
- When deleting a key k, we splay the position p that is the parent of the removed node; recall that, by the removal algorithm for binary search trees, the removed node may be that originally containing k, or a descendant node with a replacement key. An example of splaying following a deletion is shown in the figure below.

Figure: Deletion from a splay tree: (a) the deletion of an item from the root node is performed by moving to the root the key of its inorder predecessor w, deleting w, and splaying the parent p of w; (b) splaying p starts with a zig-zig; (c) after the zig-zig; (d) the next step is a zig; (e) after the zig.
Python Implementation

Although the mathematical analysis of a splay tree's performance is complex (see the next subsection), the implementation of splay trees is a rather simple adaptation of the standard binary search tree code. The code fragment below provides a complete implementation of a SplayTreeMap class, based upon the underlying TreeMap class and use of the balancing framework described earlier. It is important to note that our original TreeMap class makes calls to the _rebalance_access method, not just from within the __getitem__ method, but also during __setitem__ when modifying the value associated with an existing key, and after any map operations that result in a failed search.

class SplayTreeMap(TreeMap):
    """Sorted map implementation using a splay tree."""

    # ------------------------- splay operation -------------------------
    def _splay(self, p):
        while p != self.root():
            parent = self.parent(p)
            grand = self.parent(parent)
            if grand is None:
                # zig case
                self._rotate(p)
            elif (parent == self.left(grand)) == (p == self.left(parent)):
                # zig-zig case
                self._rotate(parent)       # move PARENT up
                self._rotate(p)            # then move p up
            else:
                # zig-zag case
                self._rotate(p)            # move p up
                self._rotate(p)            # move p up again

    # ------------------- override balancing hooks -------------------
    def _rebalance_insert(self, p):
        self._splay(p)

    def _rebalance_delete(self, p):
        if p is not None:
            self._splay(p)

    def _rebalance_access(self, p):
        self._splay(p)

Code Fragment: Complete implementation of the SplayTreeMap class.
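As a usage sketch (assuming the SplayTreeMap and TreeMap code above lives in a hypothetical module splay_tree_map): every access splays, so a just-read key becomes the root of the underlying tree.

    from splay_tree_map import SplayTreeMap

    m = SplayTreeMap()
    for k in (8, 3, 10, 4, 6):
        m[k] = str(k)
    _ = m[4]                    # successful search: _rebalance_access splays key 4
    print(m.root().key())       # 4: the accessed key is now at the root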
Amortized Analysis of Splaying

After a zig-zig or zig-zag, the depth of position p decreases by two, and after a zig the depth of p decreases by one. Thus, if p has depth d, splaying p consists of a sequence of floor(d/2) zig-zigs and/or zig-zags, plus one final zig if d is odd. Since a single zig-zig, zig-zag, or zig affects a constant number of nodes, it can be done in O(1) time. Thus, splaying a position p in a binary search tree T takes time O(d), where d is the depth of p in T. In other words, the time for performing a splaying step for a position p is asymptotically the same as the time needed just to reach that position in a top-down search from the root of T.

Worst-Case Time

In the worst case, the overall running time of a search, insertion, or deletion in a splay tree T of height h is O(h), since the position we splay might be the deepest position in the tree. Moreover, it is possible for h to be as large as n, as in the earlier figure of a tree with linear height. Thus, from a worst-case point of view, a splay tree is not an attractive data structure.

In spite of its poor worst-case performance, a splay tree performs well in an amortized sense. That is, in a sequence of intermixed searches, insertions, and deletions, each operation takes on average logarithmic time. We perform the amortized analysis of splay trees using the accounting method.

Amortized Performance of Splay Trees

For our analysis, we note that the time for performing a search, insertion, or deletion is proportional to the time for the associated splaying. So let us consider only splaying time.

Let T be a splay tree with n keys, and let w be a node of T. We define the size n(w) of w as the number of nodes in the subtree rooted at w. Note that this definition implies that the size of a nonleaf node is one more than the sum of the sizes of its children. We define the rank r(w) of a node w as the logarithm in base 2 of the size of w; that is, r(w) = log(n(w)). Clearly, the root of T has the maximum size, n, and the maximum rank, log n, while each leaf has size 1 and rank 0.

We use cyber-dollars to pay for the work we perform in splaying a position x in T, and we assume that one cyber-dollar pays for a zig, while two cyber-dollars pay for a zig-zig or a zig-zag. Hence, the cost of splaying a position at depth d is d cyber-dollars. We keep a virtual account storing cyber-dollars at each position of T. Note that this account exists only for the purpose of our amortized analysis, and does not need to be included in a data structure implementing the splay tree T.
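For intuition about these quantities, the following sketch computes the sizes n(w), the ranks r(w), and their sum r(T) over a bare node record with left and right fields (a hypothetical representation used only for illustration).

    from math import log2

    class SNode:
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    def rank_sum(node):
        """Return (n(node), sum of ranks r(w) = log2(n(w)) over the subtree)."""
        if node is None:
            return 0, 0.0
        left_size, left_ranks = rank_sum(node.left)
        right_size, right_ranks = rank_sum(node.right)
        size = 1 + left_size + right_size
        return size, left_ranks + right_ranks + log2(size)

    # a left chain of 3 nodes: r(T) = log2(3) + log2(2) + log2(1)
    chain = SNode(SNode(SNode()))
    print(rank_sum(chain))   # (3, 2.584962500721156)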
An Accounting Analysis of Splaying

When we perform a splaying, we pay a certain number of cyber-dollars (the exact value of the payment will be determined at the end of our analysis). We distinguish three cases:

- If the payment is equal to the splaying work, then we use it all to pay for the splaying.
- If the payment is greater than the splaying work, we deposit the excess in the accounts of several nodes.
- If the payment is less than the splaying work, we make withdrawals from the accounts of several nodes to cover the deficiency.

We show below that a payment of O(log n) cyber-dollars per operation is sufficient to keep the system working, that is, to ensure that each node keeps a nonnegative account balance.

An Accounting Invariant for Splaying

We use a scheme in which transfers are made between the accounts of the nodes to ensure that there will always be enough cyber-dollars to withdraw for paying for splaying work when needed. In order to use the accounting method to perform our analysis of splaying, we maintain the following invariant:

    Before and after a splaying, each node w of T has r(w) cyber-dollars in its account.

Note that the invariant is "financially sound," since it does not require us to make a preliminary deposit to endow a tree with zero keys.

Let r(T) be the sum of the ranks of all the nodes of T. To preserve the invariant after a splaying, we must make a payment equal to the splaying work plus the total change in r(T). We refer to a single zig, zig-zig, or zig-zag operation in a splaying as a splaying substep. Also, we denote the rank of a node w of T before and after a splaying substep with r(w) and r'(w), respectively. The following proposition gives an upper bound on the change of r(T) caused by a single splaying substep. We will repeatedly use this lemma in our analysis of a full splaying of a node to the root.
Proposition: Let d be the variation of r(T) caused by a single splaying substep (a zig, zig-zig, or zig-zag) for a node x in T. We have the following:

- d <= 3(r'(x) - r(x)) - 2 if the substep is a zig-zig or zig-zag;
- d <= 3(r'(x) - r(x)) if the substep is a zig.

Justification: We use the fact (see the appendix) that, if a > 0, b > 0, and c > a + b, then

    log a + log b <= 2 log c - 2.     (4)

Let us consider the change in r(T) caused by each type of splaying substep.

Zig-zig: Since the size of each node is one more than the size of its two children, note that only the ranks of x, y, and z change in a zig-zig operation, where y is the parent of x and z is the parent of y. Also, r'(x) = r(z), r'(y) <= r'(x), and r(y) >= r(x). Thus,

    d = r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z)
      = r'(y) + r'(z) - r(x) - r(y)
      <= r'(x) + r'(z) - 2r(x).     (5)

Note that n(x) + n'(z) <= n'(x). Thus, by Formula (4), r(x) + r'(z) <= 2r'(x) - 2; that is, r'(z) <= 2r'(x) - r(x) - 2. This inequality and Formula (5) imply

    d <= r'(x) + (2r'(x) - r(x) - 2) - 2r(x) <= 3(r'(x) - r(x)) - 2.

Zig-zag: Again, by the definition of size and rank, only the ranks of x, y, and z change, where y denotes the parent of x and z denotes the parent of y. Also, r(x) <= r(y) and r(z) = r'(x). Thus,

    d = r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z)
      = r'(y) + r'(z) - r(x) - r(y)
      <= r'(y) + r'(z) - 2r(x).     (6)

Note that n'(y) + n'(z) <= n'(x); hence, by Formula (4), r'(y) + r'(z) <= 2r'(x) - 2. Thus, by Formula (6),

    d <= 2r'(x) - 2 - 2r(x) <= 3(r'(x) - r(x)) - 2.

Zig: In this case, only the ranks of x and y change, where y denotes the parent of x. Also, r'(y) <= r(y) and r'(x) >= r(x). Thus,

    d = r'(y) + r'(x) - r(y) - r(x) <= r'(x) - r(x) <= 3(r'(x) - r(x)).
Proposition: Let T be a splay tree with root t, and let D be the total variation of r(T) caused by splaying a node x at depth d. We have

    D <= 3(r(t) - r(x)) - d + 2.

Justification: Splaying node x consists of c = ceil(d/2) splaying substeps, each of which is a zig-zig or a zig-zag, except possibly the last one, which is a zig if d is odd. Let r_0(x) = r(x) be the initial rank of x, and for i = 1, ..., c, let r_i(x) be the rank of x after the ith substep and d_i be the variation of r(T) caused by the ith substep. By the previous proposition, the total variation D of r(T) caused by splaying x is

    D = sum_{i=1..c} d_i
      <= sum_{i=1..c} (3(r_i(x) - r_{i-1}(x)) - 2) + 2
      = 3(r_c(x) - r_0(x)) - 2c + 2
      <= 3(r(t) - r(x)) - d + 2.

By this proposition, if we make a payment of 3(r(t) - r(x)) + 2 cyber-dollars towards the splaying of node x, we have enough cyber-dollars to maintain the invariant, keeping r(w) cyber-dollars at each node w in T, and to pay for the entire splaying work, which costs d cyber-dollars. Since the size of the root t is n, its rank is r(t) = log n. Given that r(x) >= 0, the payment to be made for splaying is O(log n) cyber-dollars.

To complete our analysis, we have to compute the cost for maintaining the invariant when a node is inserted or deleted. When inserting a new node w into a splay tree with n keys, the ranks of all the ancestors of w are increased. Namely, let w_0, w_1, ..., w_d be the ancestors of w, where w_0 = w, w_i is the parent of w_{i-1}, and w_d is the root. For i = 1, ..., d, let n'(w_i) and n(w_i) be the size of w_i before and after the insertion, respectively, and let r'(w_i) and r(w_i) be the rank of w_i before and after the insertion. We have n'(w_i) = n(w_i) + 1. Also, since n(w_i) + 1 <= n(w_{i+1}), for i = 0, 1, ..., d - 1, we have the following for each i in this range:

    r'(w_i) = log(n'(w_i)) = log(n(w_i) + 1) <= log(n(w_{i+1})) = r(w_{i+1}).

Thus, the total variation of r(T) caused by the insertion is

    sum_{i=1..d} (r'(w_i) - r(w_i)) <= r'(w_d) + sum_{i=1..d-1} (r(w_{i+1}) - r(w_i))
                                     = r'(w_d) - r(w_1)
                                     <= log n.

Therefore, a payment of O(log n) cyber-dollars is sufficient to maintain the invariant when a new node is inserted.
When deleting a node w from a splay tree with n keys, the ranks of all the ancestors of w are decreased. Thus, the total variation of r(T) caused by the deletion is negative, and we do not need to make any payment to maintain the invariant when a node is deleted. Therefore, we may summarize our amortized analysis in the following proposition (which is sometimes called the "balance proposition" for splay trees).

Proposition: Consider a sequence of m operations on a splay tree, each one a search, insertion, or deletion, starting from a splay tree with zero keys. Also, let n_i be the number of keys in the tree after operation i, and n be the total number of insertions. The total running time for performing the sequence of operations is

    O(m + sum_{i=1..m} log n_i),

which is O(m log n).

In other words, the amortized running time of performing a search, insertion, or deletion in a splay tree is O(log n), where n is the size of the splay tree at the time. Thus, a splay tree can achieve logarithmic-time amortized performance for implementing a sorted map ADT. This amortized performance matches the worst-case performance of AVL trees, (2,4) trees, and red-black trees, but it does so using a simple binary tree that does not need any extra balance information stored at each of its nodes. In addition, splay trees have a number of other interesting properties that are not shared by these other balanced search trees. We explore one such additional property in the following proposition (which is sometimes called the "static optimality" proposition for splay trees).

Proposition: Consider a sequence of m operations on a splay tree, each one a search, insertion, or deletion, starting from a splay tree T with zero keys. Also, let f(i) denote the number of times the entry i is accessed in the splay tree, that is, its frequency, and let n denote the total number of entries. Assuming that each entry is accessed at least once, then the total running time for performing the sequence of operations is

    O(m + sum_{i=1..n} f(i) log(m / f(i))).

We omit the proof of this proposition, but it is not as hard to justify as one might imagine. The remarkable thing is that this proposition states that the amortized running time of accessing an entry i is O(log(m / f(i))).
(2,4) Trees

In this section, we consider a data structure known as a (2,4) tree. It is a particular example of a more general structure known as a multiway search tree, in which internal nodes may have more than two children. Other forms of multiway search trees are discussed later in the book.

Multiway Search Trees

Recall that general trees are defined so that internal nodes may have many children. In this section, we discuss how general trees can be used as multiway search trees. Map items stored in a search tree are pairs of the form (k, v), where k is the key and v is the value associated with the key.

Definition of a Multiway Search Tree

Let w be a node of an ordered tree. We say that w is a d-node if w has d children. We define a multiway search tree to be an ordered tree T that has the following properties, which are illustrated in the figure below:

- Each internal node of T has at least two children. That is, each internal node is a d-node such that d >= 2.
- Each internal d-node w of T with children c_1, ..., c_d stores an ordered set of d - 1 key-value pairs (k_1, v_1), ..., (k_{d-1}, v_{d-1}), where k_1 <= ... <= k_{d-1}.
- Let us conventionally define k_0 = -infinity and k_d = +infinity. For each item (k, v) stored at a node in the subtree of w rooted at c_i, for i = 1, ..., d, we have that k_{i-1} <= k <= k_i.

That is, if we think of the set of keys stored at w as including the special fictitious keys k_0 = -infinity and k_d = +infinity, then a key k stored in the subtree of T rooted at a child node c_i must be "in between" two keys stored at w. This simple viewpoint gives rise to the rule that a d-node stores d - 1 regular keys, and it also forms the basis of the algorithm for searching in a multiway search tree.

By the above definition, the external nodes of a multiway search tree do not store any data and serve only as "placeholders." These external nodes can be efficiently represented by None references, as has been our convention with binary search trees. However, for the sake of exposition, we will discuss these as actual nodes that do not store anything. Based on this definition, there is an interesting relationship between the number of key-value pairs and the number of external nodes in a multiway search tree.

Proposition: An n-item multiway search tree has n + 1 external nodes.

We leave the justification of this proposition as an exercise.
Figure: (a) A multiway search tree T; (b) search path in T for an unsuccessful search; (c) search path in T for a successful search.
Searching in a Multiway Tree

Searching for an item with key k in a multiway search tree T is simple. We perform such a search by tracing a path in T starting at the root. (See parts b and c of the figure above.) When we are at a d-node w during this search, we compare the key k with the keys k_1, ..., k_{d-1} stored at w. If k = k_i for some i, the search is successfully completed. Otherwise, we continue the search in the child c_i of w such that k_{i-1} < k < k_i. (Recall that we conventionally define k_0 = -infinity and k_d = +infinity.) If we reach an external node, then we know that there is no item with key k in T, and the search terminates unsuccessfully.

Data Structures for Representing Multiway Search Trees

Earlier in the book, we discussed a linked data structure for representing a general tree. This representation can also be used for a multiway search tree. When using a general tree to implement a multiway search tree, we must store at each node w one or more key-value pairs associated with that node. That is, we need to store with w a reference to some collection that stores the items for w.

During a search for key k in a multiway search tree, the primary operation needed when navigating a node is finding the smallest key at that node that is greater than or equal to k. For this reason, it is natural to model the information at a node itself as a sorted map, allowing use of the find_ge(k) method. We say such a map serves as a secondary data structure to support the primary data structure represented by the entire multiway search tree. This reasoning may at first seem like a circular argument, since we need a representation of a (secondary) ordered map to represent a (primary) ordered map. We can avoid any circular dependence, however, by using the bootstrapping technique, where we use a simple solution to a problem to create a new, more advanced solution.

In the context of a multiway search tree, a natural choice for the secondary structure at each node is the SortedTableMap class from an earlier chapter. Because we want to determine the associated value in case of a match for key k, and otherwise the corresponding child c_i such that k_{i-1} < k < k_i, we recommend having each key k_i in the secondary structure map to the pair (v_i, c_i).

With such a realization of a multiway search tree T, processing a d-node w while searching for an item of T with key k can be performed using a binary search operation in O(log d) time. Let d_max denote the maximum number of children of any node of T, and let h denote the height of T. The search time in a multiway search tree is therefore O(h log d_max). If d_max is a constant, the running time for performing a search is O(h).

The primary efficiency goal for a multiway search tree is to keep the height as small as possible. We next discuss a strategy that caps d_max at 4 while guaranteeing a height h that is logarithmic in n, the total number of items stored in the map.
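The per-node step can be sketched with Python's bisect module standing in for the secondary sorted map's find_ge; the node representation here (parallel sorted lists of keys, values, and children) is an assumption made for illustration.

    from bisect import bisect_left

    def node_step(keys, values, children, k):
        """One step of multiway search: ('found', v) or ('descend', child)."""
        i = bisect_left(keys, k)        # O(log d) binary search
        if i < len(keys) and keys[i] == k:
            return 'found', values[i]
        return 'descend', children[i]   # child c_i with k_{i-1} < k < k_i

    # a 3-node holding keys 10 and 24, with three child slots A, B, C:
    print(node_step([10, 24], ['x', 'y'], ['A', 'B', 'C'], 24))  # ('found', 'y')
    print(node_step([10, 24], ['x', 'y'], ['A', 'B', 'C'], 17))  # ('descend', 'B')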
(2,4)-Tree Operations

A multiway search tree that keeps the secondary data structures stored at each node small and also keeps the primary multiway tree balanced is the (2,4) tree, which is sometimes called a 2-4 tree or a 2-3-4 tree. This data structure achieves these goals by maintaining two simple properties (see the figure below):

Size Property: Every internal node has at most four children.
Depth Property: All the external nodes have the same depth.

Figure: A (2,4) tree.

Again, we assume that external nodes are empty and, for the sake of simplicity, we describe our search and update methods assuming that external nodes are real nodes, although this latter requirement is not strictly needed.

Enforcing the size property for (2,4) trees keeps the nodes in the multiway search tree simple. It also gives rise to the alternative name "2-3-4 tree," since it implies that each internal node in the tree has 2, 3, or 4 children. Another implication of this rule is that we can represent the secondary map stored at each internal node using an unordered list or an ordered array, and still achieve O(1)-time performance for all operations (since d_max = 4). The depth property, on the other hand, enforces an important bound on the height of a (2,4) tree.

Proposition: The height of a (2,4) tree storing n items is O(log n).

Justification: Let h be the height of a (2,4) tree T storing n items. We justify the proposition by showing the claim

    (1/2) log(n + 1) <= h <= log(n + 1).

To justify this claim, note first that, by the size property, we can have at most 4 nodes at depth 1, at most 4^2 nodes at depth 2, and so on. Thus, the number of external nodes in T is at most 4^h. Likewise, by the depth property and the definition
of a (2,4) tree, we must have at least 2 nodes at depth 1, at least 2^2 nodes at depth 2, and so on. Thus, the number of external nodes in T is at least 2^h. In addition, by the earlier proposition, the number of external nodes in T is n + 1. Therefore, we obtain

    2^h <= n + 1 <= 4^h.

Taking the logarithm in base 2 of the terms of the above inequalities, we get that

    h <= log(n + 1) <= 2h,

which justifies our claim when the terms are rearranged.

This proposition states that the size and depth properties are sufficient for keeping a multiway tree balanced. Moreover, this proposition implies that performing a search in a (2,4) tree takes O(log n) time and that the specific realization of the secondary structures at the nodes is not a crucial design choice, since the maximum number of children d_max is a constant.

Maintaining the size and depth properties requires some effort after performing insertions and deletions in a (2,4) tree, however. We discuss these operations next.

Insertion

To insert a new item (k, v), with key k, into a (2,4) tree T, we first perform a search for k. Assuming that T has no item with key k, this search terminates unsuccessfully at an external node z. Let w be the parent of z. We insert the new item into node w and add a new child y (an external node) to w on the left of z.

Our insertion method preserves the depth property, since we add a new external node at the same level as existing external nodes. Nevertheless, it may violate the size property. Indeed, if a node w was previously a 4-node, then it would become a 5-node after the insertion, which causes the tree to no longer be a (2,4) tree. This type of violation of the size property is called an overflow at node w, and it must be resolved in order to restore the properties of a (2,4) tree. Let c_1, ..., c_5 be the children of w, and let k_1, ..., k_4 be the keys stored at w. To remedy the overflow at node w, we perform a split operation on w as follows (see the figures below):

- Replace w with two nodes w' and w'', where w' is a 3-node with children c_1, c_2, c_3 storing keys k_1 and k_2, and w'' is a 2-node with children c_4, c_5 storing key k_4.
- If w is the root of T, create a new root node u; else, let u be the parent of w.
- Insert key k_3 into u and make w' and w'' children of u, so that if w was child i of u, then w' and w'' become children i and i + 1 of u, respectively.

As a consequence of a split operation on node w, a new overflow may occur at the parent u of w. If such an overflow occurs, it triggers in turn a split at node u. (See the second figure below.) A split operation either eliminates the overflow or propagates it into the parent of the current node. A sequence of insertions in a (2,4) tree is shown in a subsequent figure.
Figure: A node split: (a) overflow at a 5-node w; (b) the third key of w inserted into the parent u of w; (c) node w replaced with a 3-node w' and a 2-node w''.

Figure: An insertion in a (2,4) tree that causes a cascading split: (a) before the insertion; (b) insertion causing an overflow; (c) a split; (d) after the split a new overflow occurs; (e) another split, creating a new root node; (f) final tree.
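The split rule itself is mechanical; here is a sketch on a simplified node record (a sorted key list plus a child list, with parent bookkeeping omitted), which is an assumption made for illustration rather than a full (2,4)-tree implementation.

    def split(keys, children):
        """Split an overflowed 5-node: return (w1, promoted key k3, w2)."""
        assert len(keys) == 4 and len(children) == 5
        w1 = (keys[:2], children[:3])   # 3-node: keys k1,k2; children c1,c2,c3
        w2 = (keys[3:], children[3:])   # 2-node: key k4; children c4,c5
        return w1, keys[2], w2          # k3 is sent up to the parent

    print(split([5, 10, 12, 15], ['c1', 'c2', 'c3', 'c4', 'c5']))
    # (([5, 10], ['c1', 'c2', 'c3']), 12, ([15], ['c4', 'c5']))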
Figure: A sequence of insertions into a (2,4) tree: (a) initial tree with one item; (b) insertion of a second item; (c) insertion of a third item; (d) an insertion that causes an overflow; (e) a split, which causes the creation of a new root node; (f) after the split; (g) another insertion; (h) an insertion that causes an overflow; (i) a split; (j) after the split; (k) another insertion; (l) a final insertion.
Analysis of Insertion in a (2,4) Tree

Because d_max is at most 4, the original search for the placement of a new key k uses O(1) time at each level, and thus O(log n) time overall, since the height of the tree is O(log n) by the earlier proposition. The modifications to a single node to insert a new key and child can be implemented to run in O(1) time, as can a single split operation. The number of cascading split operations is bounded by the height of the tree, and so that phase of the insertion process also runs in O(log n) time. Therefore, the total time to perform an insertion in a (2,4) tree is O(log n).

Deletion

Let us now consider the removal of an item with key k from a (2,4) tree T. We begin such an operation by performing a search in T for an item with key k. Removing an item from a (2,4) tree can always be reduced to the case where the item to be removed is stored at a node w whose children are external nodes. Suppose, for instance, that the item with key k that we wish to remove is stored in the ith item (k_i, v_i) at a node z that has only internal-node children. In this case, we swap the item (k_i, v_i) with an appropriate item that is stored at a node w with external-node children as follows (see the figure below):

1. We find the rightmost internal node w in the subtree rooted at the ith child of z, noting that the children of node w are all external nodes.
2. We swap the item (k_i, v_i) at z with the last item of w.

Once we ensure that the item to remove is stored at a node w with only external-node children (because either it was already at w or we swapped it into w), we simply remove the item from w and remove the ith external node of w.

Removing an item (and a child) from a node w as described above preserves the depth property, for we always remove an external child from a node w with only external children. However, in removing such an external node, we may violate the size property at w. Indeed, if w was previously a 2-node, then it becomes a 1-node with no items after the removal, which is not allowed in a (2,4) tree. This type of violation of the size property is called an underflow at node w. To remedy an underflow, we check whether an immediate sibling of w is a 3-node or a 4-node. If we find such a sibling s, then we perform a transfer operation, in which we move a child of s to w, a key of s to the parent of w and s, and a key of the parent to w, as in the sketch that follows. If w has only one sibling, or if both immediate siblings of w are 2-nodes, then we perform a fusion operation, in which we merge w with a sibling, creating a new node w', and move a key from the parent of w to w'.
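As one concrete case, the following sketch shows a transfer in which the donor sibling s lies immediately to the left of the underflowed node w; the dict-based node records and index convention are assumptions made only for illustration.

    def transfer_from_left(parent, i, s, w):
        """Fix underflow at w (child i of parent) by borrowing from left sibling s."""
        w['keys'].insert(0, parent['keys'][i - 1])    # separator key moves down to w
        parent['keys'][i - 1] = s['keys'].pop()       # s's last key moves up to parent
        w['children'].insert(0, s['children'].pop())  # s's last child shifts over to w

    parent = {'keys': [10], 'children': None}   # parent's child list elided here
    s = {'keys': [4, 7], 'children': ['a', 'b', 'c']}
    w = {'keys': [], 'children': ['d']}         # underflowed 1-node
    transfer_from_left(parent, 1, s, w)
    print(parent['keys'], s['keys'], w['keys'], w['children'])
    # [7] [4] [10] ['c', 'd']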
13,241 | ( ( ( ( ( ( ( (hfigure sequence of removals from ( tree(aremoval of causing an underflow(ba transfer operation(cafter the transfer operation(dremoval of causing an underflow(ea fusion operation( after the fusion operation(gremoval of (hafter removing |
A fusion operation at node w may cause a new underflow to occur at the parent u of w, which in turn triggers a transfer or fusion at u (see the figure below). Hence, the number of fusion operations is bounded by the height of the tree, which is O(log n) by the earlier height proposition. If an underflow propagates all the way up to the root, then the root is simply deleted (see parts (c) and (d) of the figure below).

Figure: A propagating sequence of fusions in a (2,4) tree: (a) a removal that causes an underflow; (b) a fusion, which causes another underflow; (c) a second fusion operation, which causes the root to be removed; (d) the final tree.

Performance of (2,4) Trees

The asymptotic performance of a (2,4) tree is identical to that of an AVL tree in terms of the sorted map ADT, with guaranteed logarithmic bounds for most operations. The time complexity analysis for a (2,4) tree having n key-value pairs is based on the following:

- The height of a (2,4) tree storing n entries is O(log n), by the earlier proposition.
- A split, transfer, or fusion operation takes O(1) time.
- A search, insertion, or removal of an entry visits O(log n) nodes.

Thus, (2,4) trees provide for fast map search and update operations. (2,4) trees also have an interesting relationship to the data structure we discuss next.
Red-Black Trees

Although AVL trees and (2,4) trees have a number of nice properties, they also have some disadvantages. For instance, AVL trees may require many restructure operations (rotations) to be performed after a deletion, and (2,4) trees may require many split or fusing operations to be performed after an insertion or removal. The data structure we discuss in this section, the red-black tree, does not have these drawbacks; it uses O(1) structural changes after an update in order to stay balanced.

Formally, a red-black tree is a binary search tree (see the earlier treatment of binary search trees) with nodes colored red and black in a way that satisfies the following properties:

Root Property: The root is black.
Red Property: The children of a red node (if any) are black.
Depth Property: All nodes with zero or one children have the same black depth, defined as the number of black ancestors. (Recall that a node is its own ancestor.)

An example of a red-black tree is shown in the figure below.

Figure: An example of a red-black tree, with "red" nodes drawn in white; all nodes with at most one child share a common black depth.

We can make the red-black tree definition more intuitive by noting an interesting correspondence between red-black trees and (2,4) trees (excluding their trivial external nodes). Namely, given a red-black tree, we can construct a corresponding (2,4) tree by merging every red node w into its parent, storing the entry from w at its parent, and with the children of w becoming ordered children of the parent. For example, the red-black tree just shown corresponds to the earlier (2,4) tree, as illustrated in the next figure. The depth property of the red-black tree corresponds to the depth property of the (2,4) tree, since exactly one black node of the red-black tree contributes to each node of the corresponding (2,4) tree.

Conversely, we can transform any (2,4) tree into a corresponding red-black tree by coloring each node w black and then performing the following transformations, as illustrated in the figure that follows.
Figure: An illustration that the red-black tree of the previous figure corresponds to the earlier (2,4) tree, based on the highlighted grouping of red nodes with their black parents.

- If w is a 2-node, then keep the (black) children of w as is.
- If w is a 3-node, then create a new red node y, give w's last two (black) children to y, and make the first child of w and y be the two children of w.
- If w is a 4-node, then create two new red nodes y and z, give w's first two (black) children to y, give w's last two (black) children to z, and make y and z be the two children of w.

Notice that a red node always has a black parent in this construction.

Proposition: The height of a red-black tree storing n entries is O(log n).

Figure: A correspondence between nodes of a (2,4) tree and a red-black tree: (a) 2-node; (b) 3-node, which admits two symmetric red-black orientations; (c) 4-node.
Justification: Let T be a red-black tree storing n entries, and let h be the height of T. We justify this proposition by establishing the following fact:

  log(n + 1) - 1 <= h <= 2 log(n + 1)

Let d be the common black depth of all nodes of T having zero or one children. Let T' be the (2,4) tree associated with T, and let h' be the height of T' (excluding trivial leaves). Because of the correspondence between red-black trees and (2,4) trees, we know that h' = d. Hence, by the earlier proposition for (2,4) trees, d = h' <= log(n + 1). By the red property, no two consecutive nodes on a root-to-leaf path are red, so h <= 2d. Thus, we obtain h <= 2 log(n + 1). The other inequality, log(n + 1) - 1 <= h, follows from the general fact that a binary tree with n nodes has height at least log(n + 1) - 1.

Red-Black Tree Operations

The algorithm for searching in a red-black tree T is the same as that for a standard binary search tree. Thus, searching in a red-black tree takes time proportional to the height of the tree, which is O(log n) by the proposition just given. The correspondence between (2,4) trees and red-black trees provides important intuition that we will use in our discussion of how to perform updates in red-black trees; in fact, the update algorithms for red-black trees can seem mysteriously complex without this intuition. Split and fuse operations of a (2,4) tree will be effectively mimicked by recoloring neighboring red-black tree nodes. A rotation within a red-black tree will be used to change orientations of a 3-node between the two forms shown in part (b) of the correspondence figure.

Insertion

Now consider the insertion of a key-value pair (k, v) into a red-black tree T. The algorithm initially proceeds as in a standard binary search tree. Namely, we search for k in T until we reach a null subtree, and we introduce a new leaf x at that position, storing the item. In the special case that x is the only node of T, and thus the root, we color it black. In all other cases, we color x red. This action corresponds to inserting (k, v) into a node of the (2,4) tree T' with external children. The insertion preserves the root and depth properties of T, but it may violate the red property. Indeed, if x is not the root of T and the parent y of x is red, then we have a parent and child (namely, y and x) that are both red. Note that by the root property, y cannot be the root of T, and by the red property (which was previously satisfied), the parent z of y must be black. Since x and its parent are red, but x's grandparent z is black, we call this violation of the red property a double red at node x. To remedy a double red, we consider two cases.
Case 1: The sibling s of y is black (or None). (See the figure below.) In this case, the double red denotes the fact that we have added the new node to a corresponding 3-node of the (2,4) tree T', effectively creating a malformed 4-node. This formation has one red node (y) that is the parent of another red node (x), while we want it to have the two red nodes as siblings instead. To fix this problem, we perform a trinode restructuring of T. The trinode restructuring is done by the operation restructure(x), which consists of the following steps (see again the figure below; this operation was also discussed earlier in the chapter):

1. Take node x, its parent y, and grandparent z, and temporarily relabel them as a, b, and c, in left-to-right order, so that a, b, and c will be visited in this order by an inorder tree traversal.
2. Replace the grandparent z with the node labeled b, and make nodes a and c the children of b, keeping inorder relationships unchanged.

After performing the restructure(x) operation, we color b black and we color a and c red. Thus, the restructuring eliminates the double-red problem. Notice that the portion of any path through the restructured part of the tree is incident to exactly one black node, both before and after the trinode restructuring. Therefore, the black depth of the tree is unaffected.

Figure: Restructuring a red-black tree to remedy a double red: (a) the four configurations for x, y, and z before restructuring; (b) after restructuring.
Case 2: The sibling s of y is red. (See the figure below.) In this case, the double red denotes an overflow in the corresponding (2,4) tree T'. To fix the problem, we perform the equivalent of a split operation. Namely, we do a recoloring: we color y and s black and their parent z red (unless z is the root, in which case it remains black). Notice that unless z is the root, the portion of any path through the affected part of the tree is incident to exactly one black node, both before and after the recoloring. Therefore, the black depth of the tree is unaffected by the recoloring unless z is the root, in which case it is increased by one.

However, it is possible that the double-red problem reappears after such a recoloring, albeit higher up in the tree T, since z may have a red parent. If the double-red problem reappears at z, then we repeat the consideration of the two cases at z. Thus, a recoloring either eliminates the double-red problem at node x, or propagates it to the grandparent z of x. We continue going up T performing recolorings until we finally resolve the double-red problem (with either a final recoloring or a trinode restructuring). Thus, the number of recolorings caused by an insertion is no more than half the height of tree T, that is, O(log n) by the earlier proposition.

Figure: Recoloring to remedy the double-red problem: (a) before recoloring and the corresponding 5-node in the associated (2,4) tree before the split; (b) after recoloring and the corresponding nodes in the associated (2,4) tree after the split.

As further examples, the next two figures show a sequence of insertion operations in a red-black tree.
13,248 | ( ( ( ( ( ( ( ( ( ( ( (lfigure sequence of insertions in red-black tree(ainitial tree(binsertion of (cinsertion of which causes double red(dafter restructuring(einsertion of which causes double red( after recoloring (the root remains black)(ginsertion of (hinsertion of (iinsertion of which causes double red(jafter restructuring(kinsertion of which causes double red(lafter recoloring (continues in figure |
13,249 | ( ( ( ( (qfigure sequence of insertions in red-black tree(minsertion of which causes double red(nafter restructuring(oinsertion of which causes double red(pafter recoloring there is again double redto be handled by restructuring(qafter restructuring (continued from figure |
Deletion

Deleting an item with key k from a red-black tree T initially proceeds as for a binary search tree. Structurally, the process results in the removal of a node that has at most one child (either the node originally containing key k or its inorder predecessor) and the promotion of its remaining child (if any).

If the removed node was red, this structural change does not affect the black depths of any paths in the tree, nor introduce any red violations, and so the resulting tree remains a valid red-black tree. In the corresponding (2,4) tree T', this case denotes the shrinking of a 3-node or 4-node. If the removed node was black, then it either had zero children or it had one child that was a red leaf (because the null subtree of the removed node has black height 0). In the latter case, the removed node represents the black part of a corresponding 3-node, and we restore the red-black properties by recoloring the promoted child to black.

The more complex case is when a (nonroot) black leaf is removed. In the corresponding (2,4) tree, this denotes the removal of an item from a 2-node. Without rebalancing, such a change results in a deficit of one for the black depth along the path leading to the deleted item. By necessity, the removed node must have a sibling whose subtree has black height 1 (given that this was a valid red-black tree prior to the deletion of the black leaf).

To remedy this scenario, we consider a more general setting with a node z that is known to have two subtrees, T_heavy and T_light, such that the root of T_light (if any) is black and such that the black depth of T_heavy is exactly one more than that of T_light, as portrayed in the figure below. In the case of a removed black leaf, z is the parent of that leaf and T_light is trivially the empty subtree that remains after the deletion. We describe the more general case of a deficit because our algorithm for rebalancing the tree will, in some cases, push the deficit higher in the tree (just as the resolution of a deletion in a (2,4) tree sometimes cascades upward). We let y denote the root of T_heavy. (Such a node exists because T_heavy has black height at least one.)

Figure: Portrayal of a deficit between the black heights of subtrees of node z. The gray color used for y and z denotes the fact that these nodes may be colored either black or red.
We consider three possible cases to remedy a deficit.

Case 1: Node y is black and has a red child x. (See the figure below.) We perform a trinode restructuring, as originally described earlier. The operation restructure(x) takes the node x, its parent y, and grandparent z, labels them temporarily left to right as a, b, and c, and replaces z with the node labeled b, making it the parent of the other two. We color a and c black, and give b the former color of z.

Notice that the path to T_light in the result includes one additional black node after the restructure, thereby resolving its deficit. In contrast, the number of black nodes on paths to any of the other three subtrees illustrated in the figure remains unchanged.

Resolving this case corresponds to a transfer operation in the (2,4) tree T' between the two children of the node with z. The fact that y has a red child assures us that it represents either a 3-node or a 4-node. In effect, the item previously stored at z is demoted to become a new 2-node to resolve the deficiency, while an item stored at y or its child is promoted to take the place of the item previously stored at z.

Figure: Resolving a black deficit in T_light by performing a trinode restructuring as restructure(x). Two possible configurations are shown (two other configurations are symmetric). The gray color of z in the left figures denotes the fact that this node may be colored either red or black. The root of the restructured portion is given that same color, while the children of that node are both colored black in the result.
Case 2: Node y is black and both children of y are black (or None). Resolving this case corresponds to a fusion operation in the corresponding (2,4) tree, as y must represent a 2-node. We do a recoloring: we color y red, and, if z is red, we color it black (see the figure below). This does not introduce any red violation, because y does not have a red child.

In the case that z was originally red, and thus the parent in the corresponding (2,4) tree is a 3-node or a 4-node, this recoloring resolves the deficit (see part (a) of the figure below). The path leading to T_light includes one additional black node in the result, while the recoloring did not affect the number of black nodes on the path to the subtrees of T_heavy.

In the case that z was originally black, and thus the parent in the corresponding (2,4) tree is a 2-node, the recoloring has not increased the number of black nodes on the path to T_light. In fact, it has reduced the number of black nodes on the path to T_heavy (see part (b) of the figure below). After this step, the two children of z will have the same black height. However, the entire tree rooted at z has become deficient, thereby propagating the problem higher in the tree; we must repeat consideration of all three cases at the parent of z as a remedy.

Figure: Resolving a black deficit in T_light by a recoloring operation: (a) when z is originally red, reversing the colors of y and z resolves the black deficit in T_light, ending the process; (b) when z is originally black, recoloring causes the entire subtree of z to have a black deficit, requiring a cascading remedy.
Case 3: Node y is red. (See the figure below.) Because y is red and T_heavy has black depth at least 1, z must be black and the two subtrees of y must each have a black root and a black depth equal to that of T_heavy. In this case, we perform a rotation about y and z, and then recolor y black and z red. This denotes a reorientation of a 3-node in the corresponding (2,4) tree.

This does not immediately resolve the deficit, as the new subtree of z is an old subtree of y with a black root and a black height equal to that of the original T_heavy. We reapply the algorithm to resolve the deficit at z, knowing that the new child of z that is the root of T_heavy is now black, and therefore that either Case 1 or Case 2 applies. Furthermore, the next application will be the last, because Case 1 is always terminal and Case 2 will be terminal given that z is red.

Figure: A rotation and recoloring about red node y and black node z, assuming a black deficit at z. This amounts to a change of orientation in the corresponding 3-node of a (2,4) tree. This operation does not affect the black depth of any paths through this portion of the tree. Furthermore, because y was originally red, the new subtree of z must have a black root and must have black height equal to the original T_heavy. Therefore, a black deficit remains at node z after the transformation.

In the figure that follows, we show a sequence of deletions on a red-black tree. A dashed edge in those figures represents a branch with a black deficiency that has not yet been resolved. We illustrate a Case 1 restructuring in parts (c) and (d); we illustrate a Case 2 recoloring in parts (f) and (g). Finally, we show an example of a Case 3 rotation between parts (i) and (j), concluding with a Case 2 recoloring in part (k).
13,254 | ( ( ( ( ( ( ( ( ( ( (kfigure sequence of deletions from red-black tree(ainitial tree(bremoval of (cremoval of causing black deficit to the right of (handled by restructuring)(dafter restructuring(eremoval of ( removal of causing black deficit to the right of (handled by recoloring)(gafter recoloring(hremoval of (iremoval of causing black deficit to the right of (handled initially by rotation)(jafter the rotation the black deficit needs to be handled by recoloring(kafter the recoloring |
Performance of Red-Black Trees

The asymptotic performance of a red-black tree is identical to that of an AVL tree or a (2,4) tree in terms of the sorted map ADT, with guaranteed logarithmic time bounds for most operations. The primary advantage of a red-black tree is that an insertion or deletion requires only a constant number of restructuring operations. (This is in contrast to AVL trees and (2,4) trees, both of which require a logarithmic number of structural changes per map operation in the worst case.) That is, an insertion or deletion in a red-black tree requires logarithmic time for a search, and may require a logarithmic number of recoloring operations that cascade upward. Yet we show, in the following propositions, that there are a constant number of rotations or restructure operations for a single map operation.

Proposition: The insertion of an item in a red-black tree storing n items can be done in O(log n) time and requires O(log n) recolorings and at most one trinode restructuring.

Justification: Recall that an insertion begins with a downward search, the creation of a new leaf node, and then a potential upward effort to remedy a double-red violation. There may be logarithmically many recoloring operations due to an upward cascading of Case 2 applications, but a single application of the Case 1 action eliminates the double-red problem with a trinode restructuring. Therefore, at most one restructuring operation is needed for a red-black tree insertion.

Proposition: The algorithm for deleting an item from a red-black tree with n items takes O(log n) time and performs O(log n) recolorings and at most two restructuring operations.

Justification: A deletion begins with the standard binary search tree deletion algorithm, which requires time proportional to the height of the tree; for red-black trees, that height is O(log n). The subsequent rebalancing takes place along an upward path from the parent of a deleted node. We considered three cases to remedy a resulting black deficit. Case 1 requires a trinode restructuring operation, yet completes the process, so this case is applied at most once. Case 2 may be applied logarithmically many times, but it only involves a recoloring of up to two nodes per application. Case 3 requires a rotation, but this case can only apply once, because if the rotation does not resolve the problem, the very next action will be a terminal application of either Case 1 or Case 2. In the worst case, there will be O(log n) recolorings from Case 2, a single rotation from Case 3, and a trinode restructuring from Case 1.
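Before turning to the implementation, it can be instructive to express the three defining properties as executable checks. The following sketch is our own validation aid, not part of the book's class, and assumes a hypothetical node object with red, left, and right attributes:

  def check_red_black(root):
    """Return True if a tree satisfies the root, red, and depth properties."""
    if root is None:
      return True                       # an empty tree is trivially valid
    if root.red:
      return False                      # root property: the root must be black

    depths = set()                      # black depths seen at nodes with <= 1 child

    def visit(node, black_depth):
      if not node.red:
        black_depth += 1                # a node counts as its own (black) ancestor
      else:
        for child in (node.left, node.right):
          if child is not None and child.red:
            return False                # red property violated: red child of red node
      if node.left is None or node.right is None:
        depths.add(black_depth)         # depth property applies to this node
      return all(visit(c, black_depth)
                 for c in (node.left, node.right) if c is not None)

    return visit(root, 0) and len(depths) == 1   # all black depths must be equal

Running such a checker after every update is a convenient way to test a red-black tree implementation during development.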
Python Implementation

A complete implementation of a RedBlackTreeMap class is provided in the code fragments that follow. It inherits from the standard TreeMap class and relies on the balancing framework described earlier. We begin by overriding the definition of the nested _Node class to introduce an additional Boolean field to denote the current color of a node. Our constructor intentionally sets the color of a new node to red, to be consistent with our approach for inserting items. We define several additional utility functions, at the top of the second code fragment, that aid in setting the color of nodes and querying various conditions.

When an element has been inserted as a leaf in the tree, the _rebalance_insert hook is called, allowing us the opportunity to modify the tree. The new node is red by default, so we need only look for the special case of the new node being the root (in which case it should be colored black), or the possibility that we have a double-red violation because the new node's parent is also red. To remedy such violations, we closely follow the case analysis described earlier.

The rebalancing after a deletion also follows the case analysis described earlier. An additional challenge is that by the time the rebalance hook is called, the old node has already been removed from the tree. That hook is invoked on the parent of the removed node. Some of the case analysis depends on knowing about the properties of the removed node. Fortunately, we can reverse engineer that information by relying on the red-black tree properties. In particular, if p denotes the parent of the removed node, it must be that:

- If p has no children, the removed node was a red leaf.
- If p has one child, the removed node was a black leaf, causing a deficit, unless that one remaining child is a red leaf.
- If p has two children, the removed node was a black node with one red child, which was promoted.

(Each of these facts is posed as an exercise.)

class RedBlackTreeMap(TreeMap):
  """Sorted map implementation using a red-black tree."""

  class _Node(TreeMap._Node):
    """Node class for red-black tree maintains bit that denotes color."""
    __slots__ = '_red'                  # add additional data member to the Node class

    def __init__(self, element, parent=None, left=None, right=None):
      super().__init__(element, parent, left, right)
      self._red = True                  # new node red by default

Code Fragment: The beginning of the RedBlackTreeMap class (continued in the next code fragment).
  #--------------------- positional-based utility methods ---------------------
  # we consider a nonexistent child to be trivially black
  def _set_red(self, p): p._node._red = True
  def _set_black(self, p): p._node._red = False
  def _set_color(self, p, make_red): p._node._red = make_red
  def _is_red(self, p): return p is not None and p._node._red
  def _is_red_leaf(self, p): return self._is_red(p) and self.is_leaf(p)

  def _get_red_child(self, p):
    """Return a red child of p (or None if no such child)."""
    for child in (self.left(p), self.right(p)):
      if self._is_red(child):
        return child
    return None

  #--------------------- support for insertions ---------------------
  def _rebalance_insert(self, p):
    self._resolve_red(p)                     # new node is always red

  def _resolve_red(self, p):
    if self.is_root(p):
      self._set_black(p)                     # make root black
    else:
      parent = self.parent(p)
      if self._is_red(parent):               # double red problem
        uncle = self.sibling(parent)
        if not self._is_red(uncle):          # Case 1: misshapen 4-node
          middle = self._restructure(p)      # do trinode restructuring
          self._set_black(middle)            # and then fix colors
          self._set_red(self.left(middle))
          self._set_red(self.right(middle))
        else:                                # Case 2: overfull 5-node
          grand = self.parent(parent)
          self._set_red(grand)               # grandparent becomes red
          self._set_black(self.left(grand))  # its children become black
          self._set_black(self.right(grand))
          self._resolve_red(grand)           # recur at red grandparent

Code Fragment: Continuation of the RedBlackTreeMap class (continued from the previous code fragment and concluded in the next).
  #--------------------- support for deletions ---------------------
  def _rebalance_delete(self, p):
    if len(self) == 1:
      self._set_black(self.root())           # special case: ensure that root is black
    elif p is not None:
      n = self.num_children(p)
      if n == 1:                             # deficit exists unless child is a red leaf
        c = next(self.children(p))
        if not self._is_red_leaf(c):
          self._fix_deficit(p, c)
      elif n == 2:                           # removed black node with red child
        if self._is_red_leaf(self.left(p)):
          self._set_black(self.left(p))
        else:
          self._set_black(self.right(p))

  def _fix_deficit(self, z, y):
    """Resolve black deficit at z, where y is the root of z's heavier subtree."""
    if not self._is_red(y):                  # y is black; will apply Case 1 or 2
      x = self._get_red_child(y)
      if x is not None:                      # Case 1: y is black and has red child x; do "transfer"
        old_color = self._is_red(z)
        middle = self._restructure(x)
        self._set_color(middle, old_color)   # middle gets old color of z
        self._set_black(self.left(middle))   # children become black
        self._set_black(self.right(middle))
      else:                                  # Case 2: y is black, but no red children; recolor as "fusion"
        self._set_red(y)
        if self._is_red(z):
          self._set_black(z)                 # this resolves the problem
        elif not self.is_root(z):
          self._fix_deficit(self.parent(z), self.sibling(z))   # recur upward
    else:                                    # Case 3: y is red; rotate misaligned 3-node and repeat
      self._rotate(y)
      self._set_black(y)
      self._set_red(z)
      if z == self.right(y):
        self._fix_deficit(z, self.left(z))
      else:
        self._fix_deficit(z, self.right(z))

Code Fragment: Conclusion of the RedBlackTreeMap class (continued from the previous code fragment).
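As a brief usage sketch of our own (not from the book), the following assumes the TreeMap class hierarchy from the earlier code fragments is importable and supports the full sorted map interface, including __delitem__; the module name here is a placeholder:

  from redblack_tree_map import RedBlackTreeMap   # hypothetical module name

  M = RedBlackTreeMap()
  for k in (44, 17, 78, 32, 50, 88, 48, 62, 54):
    M[k] = str(k)                     # each __setitem__ triggers _rebalance_insert

  print(list(M))                      # keys are generated in sorted order
  print(M.find_min())                 # the (key, value) pair with minimum key
  del M[17]                           # removal triggers _rebalance_delete
  print(M.find_min())                 # a new minimum after the deletion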
13,259 | exercises for help with exercisesplease visit the sitewww wiley com/college/goodrich reinforcement - if we insert the entries ( )( )( , )( )and ( )in this orderinto an initially empty binary search treewhat will it look liker- insertinto an empty binary search treeentries with keys (in this orderdraw the tree after each insertion - how many different binary search trees can store the keys { } - dr amongus claims that the order in which fixed set of entries is inserted into binary search tree does not matter--the same tree results every time give small example that proves he is wrong - dr amongus claims that the order in which fixed set of entries is inserted into an avl tree does not matter--the same avl tree results every time give small example that proves he is wrong - our implementation of the treemap subtree search utilityfrom code fragment relies on recursion for large unbalanced treepython' default limit on recursive depth may be prohibitive give an alternative implementation of that method that does not rely on the use of recursion - do the trinode restructurings in figures and result in single or double rotationsr- draw the avl tree resulting from the insertion of an entry with key into the avl tree of figure - draw the avl tree resulting from the removal of the entry with key from the avl tree of figure - explain why performing rotation in an -node binary tree when using the array-based representation of section takes (ntime - give schematic figurein the style of figure showing the heights of subtrees during deletion operation in an avl tree that triggers trinode restructuring for the case in which the two children of the node denoted as start with equal heights what is the net effect of the height of the rebalanced subtree due to the deletion operationr- repeat the previous problemconsidering the case in which ' children start with different heights |
13,260 | - the rules for deletion in an avl tree specifically require that when the two subtrees of the node denoted as have equal heightchild should be chosen to be "alignedwith (so that and are both left children or both right childrento better understand this requirementrepeat exercise assuming we picked the misaligned choice of why might there be problem in restoring the avl property with that choicer- perform the following sequence of operations in an initially empty splay tree and draw the tree after each set of operations insert keys in this order search for keys in this order delete keys in this order - what does splay tree look like if its entries are accessed in increasing order by their keysr- is the search tree of figure (aa ( treewhy or why notr- an alternative way of performing split at node in ( tree is to partition into and with being -node and -node which of the keys or do we store at ' parentwhyr- dr amongus claims that ( tree storing set of entries will always have the same structureregardless of the order in which the entries are inserted show that he is wrong - draw four different red-black trees that correspond to the same ( tree - consider the set of keys { draw ( tree storing as its keys using the fewest number of nodes draw ( tree storing as its keys using the maximum number of nodes - consider the sequence of keys ( draw the result of inserting entries with these keys (in the given orderinto an initially empty ( tree an initially empty red-black tree - for the following statements about red-black treesprovide justification for each true statement and counterexample for each false one subtree of red-black tree is itself red-black tree node that does not have sibling is red there is unique ( tree associated with given red-black tree there is unique red-black tree associated with given ( tree - explain why you would get the same output in an inorder listing of the entries in binary search treet independent of whether is maintained to be an avl treesplay treeor red-black tree |
13,261 | - consider tree storing , entries what is the worst-case height of in the following casesa is binary search tree is an avl tree is splay tree is ( tree is red-black tree - draw an example of red-black tree that is not an avl tree - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has zero childrenthe removed node was red leaf - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has one childthe deletion has caused black deficit at pexcept for the case when the one remaining child is red leaf - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has two childrenthe removed node was black and had one red child creativity - explain how to use an avl tree or red-black tree to sort comparable elements in ( log ntime in the worst case - can we use splay tree to sort comparable elements in ( log ntime in the worst casewhy or why notc- repeat exercise - for the treemap class - show that any -node binary tree can be converted to any other -node binary tree using (nrotations - for key that is not found in binary search tree prove that both the greatest key less than and the least key greater than lie on the path traced by the search for - in section we claim that the find range method of binary search tree executes in ( htime where is the number of items found within the range and is the height of the tree our implementationin code fragment begins by searching for the starting keyand then repeatedly calling the after method until reaching the end of the range each call to after is guaranteed to run in (htime this suggests weaker (shbound for find rangesince it involves (scalls to after prove that this implementation achieves the stronger ( hbound |
13,262 | - describe how to perform an operation remove range(startstopthat removes all the items whose keys fall within range(startstopin sorted map that is implemented with binary search tree and show that this method runs in time ( )where is the number of items removed and is the height of - repeat the previous problem using an avl treeachieving running time of ( log nwhy doesn' the solution to the previous problem trivially result in an ( log nalgorithm for avl treesc- suppose we wish to support new method count range(startstopthat determines how many keys of sorted map fall in the specified range we could clearly implement this in ( htime by adapting our approach to find range describe how to modify the search tree structure to support (hworst-case time for count range - if the approach described in the previous problem were implemented as part of the treemap classwhat additional modifications (if anywould be necessary to subclass such as avltreemap in order to maintain support for the new methodc- draw schematic of an avl tree such that single remove operation could require (log ntrinode restructurings (or rotationsfrom leaf to the root in order to restore the height-balance property - in our avl implementationeach node stores the height of its subtreewhich is an arbitrarily large integer the space usage for an avl tree can be reduced by instead storing the balance factor of nodewhich is defined as the height of its left subtree minus the height of its right subtree thusthe balance factor of node is always equal to - or except during an insertion or removalwhen it may become temporarily equal to - or + reimplement the avltreemap class storing balance factors rather than subtree heights - if we maintain reference to the position of the leftmost node of binary search treethen operation find min can be performed in ( time describe how the implementation of the other map methods need to be modified to maintain reference to the leftmost position - if the approach described in the previous problem were implemented as part of the treemap classwhat additional modifications (if anywould be necessary to subclass such as avltreemap in order to accurately maintain the reference to the leftmost positionc- describe modification to the binary search tree implementation having worst-case ( )-time performance for methods after(pand before(pwithout adversely affecting the asymptotics of any other methods |
13,263 | - if the approach described in the previous problem were implemented as part of the treemap classwhat additional modifications (if anywould be necessary to subclass such as avltreemap in order to maintain the efficiencyc- for standard binary search treetable claims ( )-time performance for the delete(pmethod explain why delete(pwould run in ( time if given solution to exercise - - describe modification to the binary search tree data structure that would support the following two index-based operations for sorted map in (htimewhere is the height of the tree at index( )return the position of the item at index of sorted map index of( )return the index of the item at position of sorted map - draw splay treet together with the sequence of updates that produced itand red-black treet on the same set of ten entriessuch that preorder traversal of would be the same as preorder traversal of - show that the nodes that become temporarily unbalanced in an avl tree during an insertion may be nonconsecutive on the path from the newly inserted node to the root - show that at most one node in an avl tree becomes temporarily unbalanced after the immediate deletion of node as part of the standard delitem map operation - let and be ( trees storing and entriesrespectivelysuch that all the entries in have keys less than the keys of all the entries in describe an (log log )-time method for joining and into single tree that stores all the entries in and - repeat the previous problem for red-black trees and - justify proposition - the boolean indicator used to mark nodes in red-black tree as being "redor "blackis not strictly needed when we have distinct keys describe scheme for implementing red-black tree without adding any extra space to standard binary search tree nodes - let be red-black tree storing entriesand let be the key of an entry in show how to construct from in (log ntimetwo red-black trees and such that contains all the keys of less than kand contains all the keys of greater than this operation destroys - show that the nodes of any avl tree can be colored "redand "blackso that becomes red-black tree |
13,264 | - the standard splaying step requires two passesone downward pass to find the node to splayfollowed by an upward pass to splay the node describe method for splaying and searching for in one downward pass each substep now requires that you consider the next two nodes in the path down to xwith possible zig substep performed at the end describe how to perform the zig-zigzig-zagand zig steps - consider variation of splay treescalled half-splay treeswhere splaying node at depth stops as soon as the node reaches depth / perform an amortized analysis of half-splay trees - describe sequence of accesses to an -node splay tree where is oddthat results in consisting of single chain of nodes such that the path down alternates between left children and right children - as positional structureour treemap implementation has subtle flaw position instance associated with an key-value pair (kvshould remain valid as long as that item remains in the map in particularthat position should be unaffected by calls to insert or delete other items in the collection our algorithm for deleting an item from binary search tree may fail to provide such guaranteein particular because of our rule for using the inorder predecessor of key as replacement when deleting key that is located in node with two children given an explicit series of python commands that demonstrates such flaw - how might the treemap implementation be changed to avoid the flaw described in the previous problemprojects - perform an experimental study to compare the speed of our avl treesplay treeand red-black tree implementations for various sequences of operations - redo the previous exerciseincluding an implementation of skip lists (see exercise - - implement the map adt using ( tree (see section - redo the previous exerciseincluding all methods of the sorted map adt (see section - redo exercise - providing positional supportas we did for binary search trees (section )so as to include methods first)last)before( )after( )and find position(keach item should have distinct position in this abstractioneven though several items may be stored at single node of tree |
13,265 | - write python class that can take any red-black tree and convert it into its corresponding ( tree and can take any ( tree and convert it into its corresponding red-black tree - in describing multisets and multimaps in section we describe general approach for adapting traditional map by storing all duplicates within secondary container as value in the map give an alternative implementation of multimap using binary search tree such that each entry of the map is stored at distinct node of the tree with the existence of duplicateswe redefine the search tree property so that all items in the left subtree of position with key have keys that are less than or equal to kwhile all items in the right subtree of have keys that are greater than or equal to use the public interface given in code fragment - prepare an implementation of splay trees that uses top-down splaying as described in exercise - perform extensive experimental studies to compare its performance to the standard bottom-up splaying implemented in this - the mergeable heap adt is an extension of the priority queue adt consisting of operations add(kv)min)remove minand merge( )where the merge(hoperations performs union of the mergeable heap with the present oneincorporating all items into the current one while emptying describe concrete implementation of the mergeable heap adt that achieves (log nperformance for all its operationswhere denotes the size of the resulting heap for the merge operation - write program that performs simple -body simulationcalled "jumping leprechauns this simulation involves leprechaunsnumbered to it maintains gold value gi for each leprechaun iwhich begins with each leprechaun starting out with million dollars worth of goldthat isgi for each in additionthe simulation also maintainsfor each leprechaunia place on the horizonwhich is represented as double-precision floating-point numberxi in each iteration of the simulationthe simulation processes the leprechauns in order processing leprechaun during this iteration begins by computing new place on the horizon for iwhich is determined by the assignment xi xi rgi where is random floating-point number between - and the leprechaun then steals half the gold from the nearest leprechauns on either side of him and adds this gold to his gold valuegi write program that can perform series of iterations in this simulation for given numbernof leprechauns you must maintain the set of horizon positions using sorted map data structure described in this |
13,266 | notes some of the data structures discussed in this are extensively covered by knuth in his sorting and searching book [ ]and by mehlhorn in [ avl trees are due to adel'son-vel'skii and landis [ ]who invented this class of balanced search trees in binary search treesavl treesand hashing are described in knuth' sorting and searching [ book average-height analyses for binary search trees can be found in the books by ahohopcroftand ullman [ and cormenleisersonrivest and stein [ the handbook by gonnet and baeza-yates [ contains number of theoretical and experimental comparisons among map implementations ahohopcroftand ullman [ discuss ( treeswhich are similar to ( trees red-black trees were defined by bayer [ variations and interesting properties of red-black trees are presented in paper by guibas and sedgewick [ the reader interested in learning more about different balanced tree data structures is referred to the books by mehlhorn [ and tarjan [ ]and the book by mehlhorn and tsakalidis [ knuth [ is excellent additional reading that includes early approaches to balancing trees splay trees were invented by sleator and tarjan [ (see also [ ] |
Sorting and Selection

Contents
  Why Study Sorting Algorithms?
  Merge-Sort
    Divide-and-Conquer
    Array-Based Implementation of Merge-Sort
    The Running Time of Merge-Sort
    Merge-Sort and Recurrence Equations
    Alternative Implementations of Merge-Sort
  Quick-Sort
    Randomized Quick-Sort
    Additional Optimizations for Quick-Sort
  Studying Sorting through an Algorithmic Lens
    Lower Bound for Sorting
    Linear-Time Sorting: Bucket-Sort and Radix-Sort
    Comparing Sorting Algorithms
  Python's Built-In Sorting Functions
    Sorting According to a Key Function
  Selection
    Prune-and-Search
    Randomized Quick-Select
    Analyzing Randomized Quick-Select
  Exercises
Why Study Sorting Algorithms?

Much of this chapter focuses on algorithms for sorting a collection of objects. Given a collection, the goal is to rearrange the elements so that they are ordered from smallest to largest (or to produce a new copy of the sequence with such an order). As we did when studying priority queues, we assume that such a consistent order exists. In Python, the natural order of objects is typically defined using the < operator, having the following properties:

- Irreflexive property: k < k is never true.
- Transitive property: if k1 < k2 and k2 < k3, then k1 < k3.

The transitive property is important, as it allows us to infer the outcome of certain comparisons without taking the time to perform those comparisons, thereby leading to more efficient algorithms.

Sorting is among the most important, and well studied, of computing problems. Data sets are often stored in sorted order, for example, to allow for efficient searches with the binary search algorithm. Many advanced algorithms for a variety of problems rely on sorting as a subroutine.

Python has built-in support for sorting data, in the form of the sort method of the list class that rearranges the contents of a list, and the built-in sorted function that produces a new list containing the elements of an arbitrary collection in sorted order. Those built-in functions use advanced algorithms (some of which we will describe in this chapter) and they are highly optimized. A programmer should typically rely on calls to the built-in sorting functions, as it is rare to have a special enough circumstance to warrant implementing a sorting algorithm from scratch. (A short demonstration of these built-ins appears below.)

With that said, it remains important to have a deep understanding of sorting algorithms. Most immediately, when calling the built-in function, it is good to know what to expect in terms of efficiency, and how that may depend upon the initial order of elements or the type of objects that are being sorted. More generally, the ideas and approaches that have led to advances in the development of sorting algorithms carry over to algorithm development in many other areas of computing.

We have introduced several sorting algorithms already in this book:

- Insertion-sort
- Selection-sort
- Bubble-sort
- Heap-sort

In this chapter, we present four other sorting algorithms, called merge-sort, quick-sort, bucket-sort, and radix-sort, and then discuss the advantages and disadvantages of the various algorithms. We will also explore another technique used in Python for sorting data according to an order other than the natural order defined by the < operator.
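As a brief demonstration of the two built-in entry points just mentioned (standard Python, with no assumptions beyond the language itself):

  data = [5, 2, 9, 1]
  copy = sorted(data)                  # built-in function: returns a new sorted list
  print(copy)                          # [1, 2, 5, 9]
  print(data)                          # [5, 2, 9, 1]; the original is unchanged

  data.sort()                          # list method: rearranges the list in place
  print(data)                          # [1, 2, 5, 9]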
Merge-Sort

Divide-and-Conquer

The first two algorithms we describe in this chapter, merge-sort and quick-sort, use recursion in an algorithmic design pattern called divide-and-conquer. We have already seen the power of recursion in describing algorithms in an elegant manner. The divide-and-conquer pattern consists of the following three steps:

1. Divide: If the input size is smaller than a certain threshold (say, one or two elements), solve the problem directly using a straightforward method and return the solution so obtained. Otherwise, divide the input data into two or more disjoint subsets.
2. Conquer: Recursively solve the subproblems associated with the subsets.
3. Combine: Take the solutions to the subproblems and merge them into a solution to the original problem.

Using Divide-and-Conquer for Sorting

We will first describe the merge-sort algorithm at a high level, without focusing on whether the data is an array-based (Python) list or a linked list; we will soon give concrete implementations for each. To sort a sequence S with n elements using the three divide-and-conquer steps, the merge-sort algorithm proceeds as follows:

1. Divide: If S has zero or one element, return S immediately; it is already sorted. Otherwise (S has at least two elements), remove all the elements from S and put them into two sequences, S1 and S2, each containing about half of the elements of S; that is, S1 contains the first ceil(n/2) elements of S, and S2 contains the remaining floor(n/2) elements.
2. Conquer: Recursively sort sequences S1 and S2.
3. Combine: Put back the elements into S by merging the sorted sequences S1 and S2 into a sorted sequence.

In reference to the divide step, we recall that the notation floor(x) indicates the floor of x, that is, the largest integer k such that k <= x. Similarly, the notation ceil(x) indicates the ceiling of x, that is, the smallest integer m such that x <= m.
We can visualize an execution of the merge-sort algorithm by means of a binary tree T, called the merge-sort tree. Each node of T represents a recursive invocation (or call) of the merge-sort algorithm. We associate with each node v of T the sequence S that is processed by the invocation associated with v. The children of node v are associated with the recursive calls that process the subsequences S1 and S2 of S. The external nodes of T are associated with individual elements of S, corresponding to instances of the algorithm that make no recursive calls.

The figure below summarizes an execution of the merge-sort algorithm by showing the input and output sequences processed at each node of the merge-sort tree. The step-by-step evolution of the merge-sort tree is shown in the figures that follow. This algorithm visualization in terms of the merge-sort tree helps us analyze the running time of the merge-sort algorithm. In particular, since the size of the input sequence roughly halves at each recursive call of merge-sort, the height of the merge-sort tree is about log n (recall that the base of the logarithm is 2 if omitted).

Figure: Merge-sort tree T for an execution of the merge-sort algorithm on a small sequence: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T.
13,271 | ( ( ( ( ( ( figure visualization of an execution of merge-sort each node of the tree represents recursive call of merge-sort the nodes drawn with dashed lines represent calls that have not been made yet the node drawn with thick lines represents the current call the empty nodes drawn with thin lines represent completed calls the remaining nodes (drawn with thin lines and not emptyrepresent calls that are waiting for child invocation to return (continues in figure |
13,272 | ( ( ( ( ( (lfigure visualization of an execution of merge-sort (combined with figures and |
Figure: Visualization of an execution of merge-sort, panels (m)-(p). (Continued from the previous figure.) Several invocations are omitted between panels (m) and (n). Note the merging of the two halves performed in panel (p).

Proposition: The merge-sort tree associated with an execution of merge-sort on a sequence of size n has height ceil(log n).

We leave the justification of this proposition as a simple exercise. We will use it to analyze the running time of the merge-sort algorithm.

Having given an overview of merge-sort and an illustration of how it works, let us consider each of the steps of this divide-and-conquer algorithm in more detail. Dividing a sequence of size n involves separating it at the element with index ceil(n/2), and recursive calls can be started by passing these smaller sequences as parameters. The difficult step is combining the two sorted sequences into a single sorted sequence. Thus, before we present our analysis of merge-sort, we need to say more about how this is done.
Array-Based Implementation of Merge-Sort

We begin by focusing on the case when a sequence of items is represented as an (array-based) Python list. The merge function (in the code fragment below) is responsible for the subtask of merging two previously sorted sequences, S1 and S2, with the output copied into S. We copy one element during each pass of the while loop, conditionally determining whether the next element should be taken from S1 or S2. The divide-and-conquer merge-sort algorithm is given in the next code fragment.

We illustrate a step of the merge process in the figure below. During the process, index i represents the number of elements of S1 that have been copied to S, while index j represents the number of elements of S2 that have been copied to S. Assuming S1 and S2 both have at least one uncopied element, we copy the smaller of the two elements being considered. Since i + j objects have been previously copied, the next element is placed in S[i + j]. If we reach the end of one of the sequences, we must copy the next element from the other.

def merge(S1, S2, S):
  """Merge two sorted Python lists S1 and S2 into properly sized list S."""
  i = j = 0
  while i + j < len(S):
    if j == len(S2) or (i < len(S1) and S1[i] < S2[j]):
      S[i+j] = S1[i]                   # copy ith element of S1 as next item of S
      i += 1
    else:
      S[i+j] = S2[j]                   # copy jth element of S2 as next item of S
      j += 1

Code Fragment: An implementation of the merge operation for Python's array-based list class.

Figure: A step in the merge of two sorted arrays for which S2[j] < S1[i]. We show the arrays before the copy step in (a) and after it in (b).
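As a small illustrative test of our own, the following merges two sorted lists into a properly sized destination list:

  S1 = [2, 5, 8, 11, 12, 14]
  S2 = [3, 9, 10, 18, 19]
  S = [None] * (len(S1) + len(S2))     # merge expects a properly sized output list
  merge(S1, S2, S)
  print(S)                             # [2, 3, 5, 8, 9, 10, 11, 12, 14, 18, 19]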
def merge_sort(S):
  """Sort the elements of Python list S using the merge-sort algorithm."""
  n = len(S)
  if n < 2:
    return                             # list is already sorted
  # divide
  mid = n // 2
  S1 = S[0:mid]                        # copy of first half
  S2 = S[mid:n]                        # copy of second half
  # conquer (with recursion)
  merge_sort(S1)                       # sort copy of first half
  merge_sort(S2)                       # sort copy of second half
  # merge results
  merge(S1, S2, S)                     # merge sorted halves back into S

Code Fragment: An implementation of the recursive merge-sort algorithm for Python's array-based list class (using the merge function defined in the previous code fragment).

The Running Time of Merge-Sort

We begin by analyzing the running time of the merge algorithm. Let n1 and n2 be the number of elements of S1 and S2, respectively. It is clear that the operations performed inside each pass of the while loop take O(1) time. The key observation is that during each iteration of the loop, one element is copied from either S1 or S2 into S (and that element is considered no further). Therefore, the number of iterations of the loop is n1 + n2. Thus, the running time of algorithm merge is O(n1 + n2).

Having analyzed the running time of the merge algorithm used to combine subproblems, let us analyze the running time of the entire merge-sort algorithm, assuming it is given an input sequence of n elements. For simplicity, we restrict our attention to the case where n is a power of 2. We leave it to an exercise to show that the result of our analysis also holds when n is not a power of 2.

When evaluating the merge-sort recursion, we rely on the analysis technique introduced earlier: we account for the amount of time spent within each recursive call, but exclude any time spent waiting for successive recursive calls to terminate. In the case of our merge_sort function, we account for the time to divide the sequence into two subsequences, and for the call to merge to combine the two sorted sequences, but we exclude the two recursive calls to merge_sort.
A merge-sort tree, as portrayed in the earlier visualization figures, can guide our analysis. Consider a recursive call associated with a node v of the merge-sort tree. The divide step at node v is straightforward; this step runs in time proportional to the size of the sequence for v, based on the use of slicing to create copies of the two list halves. We have already observed that the merging step also takes time that is linear in the size of the merged sequence. If we let i denote the depth of node v, the time spent at node v is O(n/2^i), since the size of the sequence handled by the recursive call associated with v is equal to n/2^i.

Looking at the tree T more globally, as shown in the figure below, we see that, given our definition of "time spent at a node," the running time of merge-sort is equal to the sum of the times spent at the nodes of T. Observe that T has exactly 2^i nodes at depth i. This simple observation has an important consequence, for it implies that the overall time spent at all the nodes of T at depth i is O(2^i * n/2^i), which is O(n). By the earlier proposition, the height of T is ceil(log n). Thus, since the time spent at each of the roughly log n levels of T is O(n), we have the following result:

Proposition: Algorithm merge-sort sorts a sequence S of size n in O(n log n) time, assuming two elements of S can be compared in O(1) time.

Figure: A visual analysis of the running time of merge-sort. Each node represents the time spent in a particular recursive call, labeled with the size of its subproblem. The tree has height O(log n), the time per level is O(n), and the total time is O(n log n).
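As a quick sanity check of the implementation above, one can compare its results against Python's built-in sorted function on random inputs (a test harness of our own, not from the book):

  import random

  for trial in range(100):
    data = [random.randrange(1000) for _ in range(random.randrange(64))]
    expected = sorted(data)
    merge_sort(data)                   # sorts the list in place
    assert data == expected
  print('merge_sort agreed with sorted() on all trials')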
Merge-Sort and Recurrence Equations

There is another way to justify that the running time of the merge-sort algorithm is O(n log n). Namely, we can deal more directly with the recursive nature of the merge-sort algorithm. In this section, we present such an analysis of the running time of merge-sort, and in so doing, introduce the mathematical concept of a recurrence equation (also known as a recurrence relation).

Let the function t(n) denote the worst-case running time of merge-sort on an input sequence of size n. Since merge-sort is recursive, we can characterize function t(n) by means of an equation where the function t(n) is recursively expressed in terms of itself. In order to simplify our characterization of t(n), let us restrict our attention to the case when n is a power of 2. (We leave the problem of showing that our asymptotic characterization still holds in the general case as an exercise.) In this case, we can specify the definition of t(n) as

  t(n) = b                   if n <= 1
  t(n) = 2t(n/2) + cn        otherwise

where b > 0 and c > 0 are constants. An expression such as the one above is called a recurrence equation, since the function appears on both the left- and right-hand sides of the equal sign. Although such a characterization is correct and accurate, what we really desire is a big-Oh type of characterization of t(n) that does not involve the function t(n) itself. That is, we want a closed-form characterization of t(n).

We can obtain a closed-form solution by repeatedly applying the definition of the recurrence equation, assuming n is relatively large. For example, after one more application of the equation above, we can write a new recurrence for t(n) as

  t(n) = 2(2t(n/2^2) + (cn/2)) + cn = 2^2 t(n/2^2) + 2cn

If we apply the equation again, we get t(n) = 2^3 t(n/2^3) + 3cn. At this point, we should see a pattern emerging, so that after applying this equation i times, we get

  t(n) = 2^i t(n/2^i) + icn

The issue that remains, then, is to determine when to stop this process. To see when to stop, recall that we switch to the closed form t(n) = b when n <= 1, which will occur when 2^i = n. In other words, this will occur when i = log n. Making this substitution yields

  t(n) = 2^(log n) t(n/2^(log n)) + (log n)cn = nt(1) + cn log n = nb + cn log n

That is, we get an alternative justification of the fact that t(n) is O(n log n).
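We can corroborate the closed form numerically by evaluating the recurrence directly and comparing it with nb + cn log n; the following quick check is ours, with b = c = 1 assumed for concreteness:

  import math

  def t(n, b=1, c=1):
    """Evaluate the merge-sort recurrence directly, for n a power of 2."""
    return b if n <= 1 else 2 * t(n // 2, b, c) + c * n

  for n in (1, 2, 8, 64, 1024):
    closed = n * 1 + 1 * n * math.log2(n)    # nb + cn log n with b = c = 1
    print(n, t(n), int(closed))              # the two values agree exactly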
Alternative Implementations of Merge-Sort

Sorting Linked Lists

The merge-sort algorithm can easily be adapted to use any form of a basic queue as its container type. In the code fragment below, we provide such an implementation, based on use of the LinkedQueue class described earlier in the book. The O(n log n) bound for merge-sort applies to this implementation as well, since each basic operation runs in O(1) time when implemented with a linked list. We show an example execution of this version of the merge algorithm in the figure that follows.

def merge(S1, S2, S):
  """Merge two sorted queue instances S1 and S2 into empty queue S."""
  while not S1.is_empty() and not S2.is_empty():
    if S1.first() < S2.first():
      S.enqueue(S1.dequeue())
    else:
      S.enqueue(S2.dequeue())
  while not S1.is_empty():             # move remaining elements of S1 to S
    S.enqueue(S1.dequeue())
  while not S2.is_empty():             # move remaining elements of S2 to S
    S.enqueue(S2.dequeue())

def merge_sort(S):
  """Sort the elements of queue S using the merge-sort algorithm."""
  n = len(S)
  if n < 2:
    return                             # queue is already sorted
  # divide
  S1 = LinkedQueue()                   # or any other queue implementation
  S2 = LinkedQueue()
  while len(S1) < n // 2:              # move the first n//2 elements to S1
    S1.enqueue(S.dequeue())
  while not S.is_empty():              # move the rest to S2
    S2.enqueue(S.dequeue())
  # conquer (with recursion)
  merge_sort(S1)                       # sort first half
  merge_sort(S2)                       # sort second half
  # merge results
  merge(S1, S2, S)                     # merge sorted halves back into S

Code Fragment: An implementation of merge-sort using a basic queue.
Figure: Example of an execution of the merge algorithm, as implemented in the preceding code fragment using queues. Panels (a)-(i) show the contents of S1, S2, and S as successive elements are moved during the merge.
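The queue-based implementation relies only on the basic queue interface: len, is_empty, first, enqueue, and dequeue. If the book's LinkedQueue class is not at hand, a minimal stand-in such as the following sketch of ours behaves equivalently for experimentation:

  from collections import deque

  class LinkedQueue:
    """A minimal queue exposing the interface assumed by the code above."""
    def __init__(self):
      self._data = deque()
    def __len__(self):
      return len(self._data)
    def is_empty(self):
      return len(self._data) == 0
    def first(self):
      return self._data[0]
    def enqueue(self, e):
      self._data.append(e)
    def dequeue(self):
      return self._data.popleft()

  Q = LinkedQueue()
  for e in (85, 24, 63, 45, 17, 31, 96, 50):
    Q.enqueue(e)
  merge_sort(Q)                        # the queue-based merge-sort defined above
  print([Q.dequeue() for _ in range(len(Q))])   # [17, 24, 31, 45, 50, 63, 85, 96]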
bottom-up (nonrecursive) merge-sort

There is a nonrecursive version of array-based merge-sort, which runs in O(n log n) time. It is a bit faster than recursive merge-sort in practice, as it avoids the extra overheads of recursive calls and temporary memory at each level. The main idea is to perform merge-sort bottom-up, performing the merges level by level going up the merge-sort tree. Given an input array of elements, we begin by merging every successive pair of elements into sorted runs of length two. We merge these runs into runs of length four, merge these new runs into runs of length eight, and so on, until the array is sorted. To keep the space usage reasonable, we deploy a second array that stores the merged runs (swapping input and output arrays after each iteration). We give a Python implementation in the code fragment below. A similar bottom-up approach can be used for sorting linked lists (see the exercises).

    import math

    def merge(src, result, start, inc):
        """Merge src[start:start+inc] and src[start+inc:start+2*inc] into result."""
        end1 = start + inc                      # boundary for run 1
        end2 = min(start + 2 * inc, len(src))   # boundary for run 2
        x, y, z = start, start + inc, start     # index into run 1, run 2, result
        while x < end1 and y < end2:
            if src[x] < src[y]:
                result[z] = src[x]; x += 1      # copy from run 1 and increment x
            else:
                result[z] = src[y]; y += 1      # copy from run 2 and increment y
            z += 1                              # increment z to reflect new result
        if x < end1:
            result[z:end2] = src[x:end1]        # copy remainder of run 1 to output
        elif y < end2:
            result[z:end2] = src[y:end2]        # copy remainder of run 2 to output

    def merge_sort(S):
        """Sort the elements of Python list S using the merge-sort algorithm."""
        n = len(S)
        logn = math.ceil(math.log(n, 2))
        src, dest = S, [None] * n               # make temporary storage for dest
        for i in (2**k for k in range(logn)):   # pass i creates all runs of length 2i
            for j in range(0, n, 2 * i):        # each pass merges two length-i runs
                merge(src, dest, j, i)
            src, dest = dest, src               # reverse roles of lists
        if S is not src:
            S[0:n] = src[0:n]                   # additional copy to get results to S

Code fragment: an implementation of the nonrecursive merge-sort algorithm.
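A quick usage check (added here; it assumes the two functions above are defined as shown):

    data = [85, 24, 63, 45, 17, 31, 96, 50]
    merge_sort(data)
    print(data)        # [17, 24, 31, 45, 50, 63, 85, 96]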
quick-sort

The next sorting algorithm we discuss is called quick-sort. Like merge-sort, this algorithm is also based on the divide-and-conquer paradigm, but it uses this technique in a somewhat opposite manner, as all the hard work is done before the recursive calls.

high-level description of quick-sort

The quick-sort algorithm sorts a sequence S using a simple recursive approach. The main idea is to apply the divide-and-conquer technique, whereby we divide S into subsequences, recur to sort each subsequence, and then combine the sorted subsequences by a simple concatenation. In particular, the quick-sort algorithm consists of the following three steps:

Divide: If S has at least two elements (nothing needs to be done if S has zero or one element), select a specific element x from S, which is called the pivot. As is common practice, choose the pivot x to be the last element in S. Remove all the elements from S and put them into three sequences:
- L, storing the elements in S less than x
- E, storing the elements in S equal to x
- G, storing the elements in S greater than x
Of course, if the elements of S are distinct, then E holds just one element: the pivot itself.

Conquer: Recursively sort sequences L and G.

Combine: Put back the elements into S in order by first inserting the elements of L, then those of E, and finally those of G.

[Figure: visual schematic of the quick-sort algorithm; split S using pivot x, recur on L and G, then concatenate L, E, and G.]
Like merge-sort, the execution of quick-sort can be visualized by means of a binary recursion tree, called the quick-sort tree. The figure below summarizes an execution of the quick-sort algorithm by showing the input and output sequences processed at each node of the quick-sort tree; the step-by-step evolution of the quick-sort tree is shown in the figures that follow.

Unlike merge-sort, however, the height of the quick-sort tree associated with an execution of quick-sort is linear in the worst case. This happens, for example, if the sequence consists of n distinct elements and is already sorted. Indeed, in this case, the standard choice of the last element as pivot yields a subsequence L of size n - 1, while subsequence E has size 1 and subsequence G has size 0. At each invocation of quick-sort on subsequence L, the size decreases by 1. Hence, the height of the quick-sort tree is n - 1.

[Figure: quick-sort tree T for an execution of the quick-sort algorithm on a sequence with 8 elements: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T. The pivot used at each level of the recursion is shown in bold.]
[Figure: visualization of quick-sort. Each node of the tree represents a recursive call. The nodes drawn with dashed lines represent calls that have not been made yet; the node drawn with thick lines represents the running invocation; the empty nodes drawn with thin lines represent terminated calls; the remaining nodes represent suspended calls (that is, active invocations that are waiting for a child invocation to return). Note the divide steps performed in the early panels. (Continues in the next figure.)]
[Figure: visualization of an execution of quick-sort, continued. Note the concatenation step performed in one of the panels. (Continues in the next figure.)]
[Figure: visualization of an execution of quick-sort, continued. Several invocations between two of the panels have been omitted. Note the concatenation steps performed in the final panels. (Continued from the previous figure.)]
performing quick-sort on general sequences

In the code fragment below, we give an implementation of the quick-sort algorithm that works on any sequence type that operates as a queue. This particular version relies on the LinkedQueue class from an earlier section; we provide a more streamlined implementation of quick-sort using an array-based sequence in a later section. Our implementation chooses the first item of the queue as the pivot (since it is easily accessible), and then it divides sequence S into queues L, E, and G of elements that are respectively less than, equal to, and greater than the pivot. We then recur on the L and G lists, and transfer elements from the sorted lists L, E, and G back to S. All of the queue operations run in O(1) worst-case time when implemented with a linked list.

    def quick_sort(S):
        """Sort the elements of queue S using the quick-sort algorithm."""
        n = len(S)
        if n < 2:
            return                      # list is already sorted
        # divide
        p = S.first()                   # using first as arbitrary pivot
        L = LinkedQueue()
        E = LinkedQueue()
        G = LinkedQueue()
        while not S.is_empty():         # divide S into L, E, and G
            if S.first() < p:
                L.enqueue(S.dequeue())
            elif p < S.first():
                G.enqueue(S.dequeue())
            else:                       # S.first() must equal pivot
                E.enqueue(S.dequeue())
        # conquer (with recursion)
        quick_sort(L)                   # sort elements less than p
        quick_sort(G)                   # sort elements greater than p
        # concatenate results
        while not L.is_empty():
            S.enqueue(L.dequeue())
        while not E.is_empty():
            S.enqueue(E.dequeue())
        while not G.is_empty():
            S.enqueue(G.dequeue())

Code fragment: quick-sort for a sequence S implemented as a queue.
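A brief usage sketch (added here, assuming the LinkedQueue class used above is available in scope):

    Q = LinkedQueue()
    for x in (85, 24, 63, 45, 17, 31, 96, 50):
        Q.enqueue(x)
    quick_sort(Q)
    print([Q.dequeue() for _ in range(len(Q))])   # [17, 24, 31, 45, 50, 63, 85, 96]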
running time of quick-sort

We can analyze the running time of quick-sort with the same technique used for merge-sort. Namely, we can identify the time spent at each node of the quick-sort tree T and sum up the running times for all the nodes. Examining the code fragment above, we see that the divide step and the final concatenation of quick-sort can be implemented in linear time. Thus, the time spent at a node v of T is proportional to the input size s(v) of v, defined as the size of the sequence handled by the invocation of quick-sort associated with node v. Since subsequence E has at least one element (the pivot), the sum of the input sizes of the children of v is at most s(v) - 1.

Let s_i denote the sum of the input sizes of the nodes at depth i for a particular quick-sort tree T. Clearly, s_0 = n, since the root r of T is associated with the entire sequence. Also, s_1 <= n - 1, since the pivot is not propagated to the children of r. More generally, it must be that s_i < s_(i-1), since the elements of the subsequences at depth i all come from distinct subsequences at depth i - 1, and at least one element from depth i - 1 does not propagate to depth i because it is in a set E (in fact, one element from each node at depth i - 1 does not propagate to depth i). We can therefore bound the overall running time of an execution of quick-sort as O(n * h), where h is the overall height of the quick-sort tree T for that execution.

Unfortunately, in the worst case, the height of a quick-sort tree is Θ(n), as observed earlier. Thus, quick-sort runs in O(n^2) worst-case time. Paradoxically, if we choose the pivot as the last element of the sequence, this worst-case behavior occurs for problem instances when sorting should be easy: when the sequence is already sorted.

Given its name, we would expect quick-sort to run quickly, and it often does in practice. The best case for quick-sort on a sequence of distinct elements occurs when subsequences L and G have roughly the same size. In that case, as we saw with merge-sort, the tree has height O(log n), and therefore quick-sort runs in O(n log n) time; we leave the justification of this fact as an exercise. What is more, we can observe an O(n log n) running time even if the split between L and G is not as perfect. For example, if every divide step caused one subsequence to have one-fourth of the elements and the other to have three-fourths of the elements, the height of the tree would remain O(log n), and thus the overall performance O(n log n).

We will see in the next section that introducing randomization in the choice of a pivot makes quick-sort essentially behave in this way on average, with an expected running time that is O(n log n).
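This contrast between the worst case and typical behavior is easy to observe empirically. The following self-contained sketch (an added illustration, not the book's code; the helper quick_sort_last_pivot is hypothetical) times a simple last-element-pivot quick-sort on an already-sorted list versus a shuffled one:

    import random
    import sys
    import time

    sys.setrecursionlimit(100000)       # sorted input drives recursion depth to n

    def quick_sort_last_pivot(a):
        """Plain quick-sort with the last element as pivot (illustrative only)."""
        if len(a) < 2:
            return a
        p = a[-1]
        L = [x for x in a[:-1] if x < p]
        G = [x for x in a[:-1] if x >= p]
        return quick_sort_last_pivot(L) + [p] + quick_sort_last_pivot(G)

    def timed(data):
        start = time.perf_counter()
        quick_sort_last_pivot(data)
        return time.perf_counter() - start

    n = 2000
    print('already sorted:', timed(list(range(n))))   # quadratic behavior
    data = list(range(n))
    random.shuffle(data)
    print('shuffled:      ', timed(data))             # near n log n behavior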
randomized quick-sort

One common method for analyzing quick-sort is to assume that the pivot will always divide the sequence in a reasonably balanced manner. We feel such an assumption would presuppose knowledge about the input distribution that is typically not available, however. For example, we would have to assume that we will rarely be given "almost" sorted sequences to sort, which are actually common in many applications. Fortunately, this assumption is not needed in order for us to match our intuition to quick-sort's behavior.

In general, we desire some way of getting close to the best-case running time for quick-sort. The way to get close to the best-case running time, of course, is for the pivot to divide the input sequence S almost equally. If this outcome were to occur, then it would result in a running time that is asymptotically the same as the best-case running time. That is, having pivots close to the "middle" of the set of elements leads to an O(n log n) running time for quick-sort.

picking pivots at random

Since the goal of the partition step of the quick-sort method is to divide the sequence S with sufficient balance, let us introduce randomization into the algorithm and pick as the pivot a random element of the input sequence. That is, instead of picking the pivot as the first or last element of S, we pick an element of S at random as the pivot, keeping the rest of the algorithm unchanged. This variation of quick-sort is called randomized quick-sort. The following proposition shows that the expected running time of randomized quick-sort on a sequence with n elements is O(n log n). This expectation is taken over all the possible random choices the algorithm makes, and is independent of any assumptions about the distribution of the possible input sequences the algorithm is likely to be given.

Proposition: The expected running time of randomized quick-sort on a sequence S of size n is O(n log n).

Justification: We assume two elements of S can be compared in O(1) time. Consider a single recursive call of randomized quick-sort, and let n denote the size of the input for this call. Say that this call is "good" if the pivot chosen is such that subsequences L and G have size at least n/4 and at most 3n/4 each; otherwise, a call is "bad."

Now, consider the implications of our choosing a pivot uniformly at random. Note that there are n/2 possible good choices for the pivot for any given call of size n of the randomized quick-sort algorithm. Thus, the probability that any call is good is 1/2. Note further that a good call will at least partition a list of size n into two lists of size n/4 and 3n/4, and a bad call could be as bad as producing a single call of size n - 1.
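A minimal sketch of the randomized variant (added here, not the book's code), using Python lists rather than queues; the only change from plain quick-sort is the random pivot choice:

    import random

    def randomized_quick_sort(a):
        """Quick-sort a list, choosing the pivot uniformly at random."""
        if len(a) < 2:
            return a
        p = a[random.randrange(len(a))]   # pivot chosen uniformly at random
        L = [x for x in a if x < p]       # elements less than the pivot
        E = [x for x in a if x == p]      # elements equal to the pivot
        G = [x for x in a if x > p]       # elements greater than the pivot
        return randomized_quick_sort(L) + E + randomized_quick_sort(G)

    print(randomized_quick_sort([85, 24, 63, 45, 17, 31, 96, 50]))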
Now consider a recursion trace for randomized quick-sort. This trace defines a binary tree T such that each node in T corresponds to a different recursive call on a subproblem of sorting a portion of the original list. Say that a node v in T is in size group i if the size of v's subproblem is greater than (3/4)^(i+1) n and at most (3/4)^i n. Let us analyze the expected time spent working on all the subproblems for nodes in size group i. By the linearity of expectation, the expected time for working on all these subproblems is the sum of the expected times for each one. Some of these nodes correspond to good calls and some correspond to bad calls. But note that, since a good call occurs with probability 1/2, the expected number of consecutive calls we have to make before getting a good call is 2. Moreover, notice that as soon as we have a good call for a node in size group i, its children will be in size groups higher than i. Thus, for any element x from the input list, the expected number of nodes in size group i containing x in their subproblems is 2. In other words, the expected total size of all the subproblems in size group i is 2n. Since the nonrecursive work we perform for any subproblem is proportional to its size, this implies that the total expected time spent processing subproblems for nodes in size group i is O(n).

The number of size groups is log_(4/3) n, since repeatedly multiplying by 3/4 is the same as repeatedly dividing by 4/3. That is, the number of size groups is O(log n). Therefore, the total expected running time of randomized quick-sort is O(n log n). In fact, we can show that the running time of randomized quick-sort is O(n log n) with high probability (see the exercises).

[Figure: visual time analysis of the quick-sort tree; each node is shown labeled with the size of its subproblem. The number of size groups is O(log n) and the expected time per size group is O(n), for a total expected time of O(n log n).]
additional optimizations for quick-sort

An algorithm is in-place if it uses only a small amount of memory in addition to that needed for the original input. Our implementation of heap-sort, from an earlier section, is an example of such an in-place sorting algorithm. Our implementation of quick-sort given above does not qualify as in-place because we use additional containers L, E, and G when dividing a sequence S within each recursive call. Quick-sort of an array-based sequence can be adapted to be in-place, and such an optimization is used in most deployed implementations.

Performing the quick-sort algorithm in-place requires a bit of ingenuity, however, for we must use the input sequence itself to store the subsequences for all the recursive calls. We show algorithm inplace_quick_sort, which performs in-place quick-sort, in the code fragment below. Our implementation assumes that the input sequence, S, is given as a Python list of elements. In-place quick-sort modifies the input sequence using element swapping and does not explicitly create subsequences. Instead, a subsequence of the input sequence is implicitly represented by a range of positions specified by a leftmost index a and a rightmost index b.

    def inplace_quick_sort(S, a, b):
        """Sort the list from S[a] to S[b] inclusive using the quick-sort algorithm."""
        if a >= b:
            return                          # range is trivially sorted
        pivot = S[b]                        # last element of range is pivot
        left = a                            # will scan rightward
        right = b - 1                       # will scan leftward
        while left <= right:
            # scan until reaching value equal or larger than pivot (or right marker)
            while left <= right and S[left] < pivot:
                left += 1
            # scan until reaching value equal or smaller than pivot (or left marker)
            while left <= right and pivot < S[right]:
                right -= 1
            if left <= right:               # scans did not strictly cross
                S[left], S[right] = S[right], S[left]   # swap values
                left, right = left + 1, right - 1       # shrink range
        # put pivot into its final place (currently marked by left index)
        S[left], S[b] = S[b], S[left]
        # make recursive calls
        inplace_quick_sort(S, a, left - 1)
        inplace_quick_sort(S, left + 1, b)

Code fragment: in-place quick-sort for a Python list S.
The divide step is performed by scanning the array simultaneously using local variables left, which advances forward, and right, which advances backward, swapping pairs of elements that are in reverse order, as shown in the figure below. When these two indices pass each other, the division step is complete, and the algorithm completes by recurring on these two sublists. There is no explicit "combine" step, because the concatenation of the two sublists is implicit to the in-place use of the original list.

It is worth noting that if a sequence has duplicate values, we are not explicitly creating three sublists L, E, and G, as in our original quick-sort description. We instead allow elements equal to the pivot (other than the pivot itself) to be dispersed across the two sublists. One exercise explores the subtlety of our implementation in the presence of duplicate keys, and another describes an in-place algorithm that strictly partitions S into three sublists L, E, and G.

[Figure: divide step of in-place quick-sort, using index l as shorthand for identifier left, and index r as shorthand for identifier right. Index l scans the sequence from left to right, and index r scans the sequence from right to left. A swap is performed when l is at an element as large as the pivot and r is at an element as small as the pivot. A final swap with the pivot completes the divide step.]
Although the implementation we describe in this section for dividing the sequence into two pieces is in-place, we note that the complete quick-sort algorithm needs space for a stack proportional to the depth of the recursion tree, which in this case can be as large as n - 1. Admittedly, the expected stack depth is O(log n), which is small compared to n. Nevertheless, a simple trick lets us guarantee the stack size is O(log n). The main idea is to design a nonrecursive version of in-place quick-sort using an explicit stack to iteratively process subproblems (each of which can be represented with a pair of indices marking subarray boundaries). Each iteration involves popping the top subproblem, splitting it in two (if it is big enough), and pushing the two new subproblems. The trick is that when pushing the new subproblems, we should first push the larger subproblem and then the smaller one. In this way, the sizes of the subproblems will at least double as we go down the stack; hence, the stack can have depth at most O(log n). We leave the details of this implementation as an exercise.

pivot selection

Our implementation in this section blindly picks the last element as the pivot at each level of the quick-sort recursion. This leaves it susceptible to the Θ(n^2)-time worst case, most notably when the original sequence is already sorted, reverse sorted, or nearly sorted. As described earlier, this can be improved upon by using a randomly chosen pivot for each partition step. In practice, another common technique for choosing a pivot is to use the median of three values, taken respectively from the front, middle, and tail of the array. This median-of-three heuristic will more often choose a good pivot, and computing a median of three may require lower overhead than selecting a pivot with a random number generator. For larger data sets, the median of more than three potential pivots might be computed.

hybrid approaches

Although quick-sort has very good performance on large data sets, it has rather high overhead on relatively small data sets. For example, the process of quick-sorting a sequence of eight elements, as illustrated in the figures earlier, involves considerable bookkeeping. In practice, a simple algorithm like insertion-sort will execute faster when sorting such a short sequence. It is therefore common, in optimized sorting implementations, to use a hybrid approach, with a divide-and-conquer algorithm used until the size of a subsequence falls below some threshold (perhaps 50 elements); insertion-sort can be directly invoked upon portions with length below the threshold. We will further discuss such practical considerations later, when comparing the performance of various sorting algorithms. A sketch combining these two heuristics follows.
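Here is a small sketch combining the median-of-three heuristic with an insertion-sort cutoff (an added illustration, not the book's code; the cutoff value of 50 and the helper names are assumptions made for the example):

    CUTOFF = 50                              # illustrative threshold, not from the text

    def _insertion_sort(S, a, b):
        """Insertion-sort S[a:b+1] in place."""
        for j in range(a + 1, b + 1):
            cur = S[j]
            k = j
            while k > a and S[k - 1] > cur:
                S[k] = S[k - 1]
                k -= 1
            S[k] = cur

    def hybrid_quick_sort(S, a, b):
        """In-place quick-sort with median-of-three pivots and a small-range cutoff."""
        if b - a + 1 <= CUTOFF:
            _insertion_sort(S, a, b)         # small ranges: insertion-sort is faster
            return
        mid = (a + b) // 2
        # median-of-three: order S[a], S[mid], S[b], then use the median as pivot
        if S[mid] < S[a]:
            S[a], S[mid] = S[mid], S[a]
        if S[b] < S[a]:
            S[a], S[b] = S[b], S[a]
        if S[b] < S[mid]:
            S[mid], S[b] = S[b], S[mid]
        S[mid], S[b] = S[b], S[mid]          # move the median into pivot position b
        pivot = S[b]
        left, right = a, b - 1               # same partitioning as inplace_quick_sort
        while left <= right:
            while left <= right and S[left] < pivot:
                left += 1
            while left <= right and pivot < S[right]:
                right -= 1
            if left <= right:
                S[left], S[right] = S[right], S[left]
                left, right = left + 1, right - 1
        S[left], S[b] = S[b], S[left]        # put pivot into its final place
        hybrid_quick_sort(S, a, left - 1)
        hybrid_quick_sort(S, left + 1, b)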
studying sorting through an algorithmic lens

Recapping our discussions on sorting to this point, we have described several methods with either a worst-case or expected running time of O(n log n) on an input sequence of size n. These methods include merge-sort and quick-sort, described in this chapter, as well as heap-sort, described earlier. In this section, we study sorting as an algorithmic problem, addressing general issues about sorting algorithms.

lower bound for sorting

A natural first question to ask is whether we can sort any faster than O(n log n) time. Interestingly, if the computational primitive used by a sorting algorithm is the comparison of two elements, this is in fact the best we can do; comparison-based sorting has an Ω(n log n) worst-case lower bound on its running time. (Recall the Ω(*) notation from an earlier section.) To focus on the main cost of comparison-based sorting, let us only count comparisons, for the sake of a lower bound.

Suppose we are given a sequence S = (x_0, x_1, ..., x_(n-1)) that we wish to sort, and assume that all the elements of S are distinct (this is not really a restriction, since we are deriving a lower bound). We do not care if S is implemented as an array or a linked list, for the sake of our lower bound, since we are only counting comparisons. Each time a sorting algorithm compares two elements x_i and x_j (that is, it asks, "is x_i < x_j?"), there are two outcomes: "yes" or "no." Based on the result of this comparison, the sorting algorithm may perform some internal calculations (which we are not counting here) and will eventually perform another comparison between two other elements of S, which again will have two outcomes. Therefore, we can represent a comparison-based sorting algorithm with a decision tree T (recall the earlier example). That is, each internal node v in T corresponds to a comparison, and the edges from position v to its children correspond to the computations resulting from either a "yes" or "no" answer.

It is important to note that the hypothetical sorting algorithm in question probably has no explicit knowledge of the tree T. The tree T simply represents all the possible sequences of comparisons that a sorting algorithm might make, starting from the first comparison (associated with the root) and ending with the last comparison (associated with the parent of an external node).

Each possible initial order, or permutation, of the elements in S will cause our hypothetical sorting algorithm to execute a series of comparisons, traversing a path in T from the root to some external node. Let us associate with each external node v in T, then, the set of permutations of S that cause our sorting algorithm to end up in v. The most important observation in our lower-bound argument is that each external node v in T can represent the sequence of comparisons for at most one permutation of S. The justification for this claim is simple.
If two different permutations P1 and P2 of S are associated with the same external node, then there are at least two objects x_i and x_j such that x_i is before x_j in P1 but x_i is after x_j in P2. At the same time, the output associated with v must be a specific reordering of S, with either x_i or x_j appearing before the other. But if P1 and P2 both cause the sorting algorithm to output the elements of S in this order, then that implies there is a way to trick the algorithm into outputting x_i and x_j in the wrong order. Since this cannot be allowed by a correct sorting algorithm, each external node of T must be associated with exactly one permutation of S. We use this property of the decision tree associated with a sorting algorithm to prove the following result:

Proposition: The running time of any comparison-based algorithm for sorting an n-element sequence is Ω(n log n) in the worst case.

Justification: The running time of a comparison-based sorting algorithm must be greater than or equal to the height of the decision tree T associated with this algorithm, as described above (see the figure below). By the argument above, each external node in T must be associated with one permutation of S. Moreover, each permutation of S must result in a different external node of T. The number of permutations of n objects is

    n! = n (n - 1) (n - 2) ... 2 * 1.

Thus, T must have at least n! external nodes. By an earlier proposition, the height of T is at least log(n!). This immediately justifies the proposition, because there are at least n/2 terms that are greater than or equal to n/2 in the product n!; hence,

    log(n!) >= log((n/2)^(n/2)) = (n/2) log(n/2),

which is Ω(n log n).

[Figure: visualizing the lower bound for comparison-based sorting; a decision tree with at least n! external nodes has minimum height, and thus worst-case running time, at least log(n!).]
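The inequality log(n!) >= (n/2) log(n/2) is easy to confirm numerically (an added check, not part of the text):

    import math

    # compare log(n!) with the (n/2) log(n/2) lower-bound estimate
    for n in (8, 64, 1024):
        log_factorial = sum(math.log2(k) for k in range(2, n + 1))
        estimate = (n / 2) * math.log2(n / 2)
        print(n, round(log_factorial, 1), round(estimate, 1))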
linear-time sorting: bucket-sort and radix-sort

In the previous section, we showed that Ω(n log n) time is necessary, in the worst case, to sort an n-element sequence with a comparison-based sorting algorithm. A natural question to ask, then, is whether there are other kinds of sorting algorithms that can be designed to run asymptotically faster than O(n log n) time. Interestingly, such algorithms exist, but they require special assumptions about the input sequence to be sorted. Even so, such scenarios often arise in practice, such as when sorting integers from a known range or sorting character strings, so discussing them is worthwhile. In this section, we consider the problem of sorting a sequence of entries, each a key-value pair, where the keys have a restricted type.

bucket-sort

Consider a sequence S of n entries whose keys are integers in the range [0, N-1], for some integer N >= 2, and suppose that S should be sorted according to the keys of the entries. In this case, it is possible to sort S in O(n + N) time. It might seem surprising, but this implies, for example, that if N is O(n), then we can sort S in O(n) time. Of course, the crucial point is that, because of the restrictive assumption about the format of the elements, we can avoid using comparisons.

The main idea is to use an algorithm called bucket-sort, which is not based on comparisons, but on using keys as indices into a bucket array B that has cells indexed from 0 to N-1. An entry with key k is placed in the "bucket" B[k], which itself is a sequence (of entries with key k). After inserting each entry of the input sequence S into its bucket, we can put the entries back into S in sorted order by enumerating the contents of the buckets B[0], B[1], ..., B[N-1] in order. We describe the bucket-sort algorithm below.

Algorithm bucket_sort(S):
    Input: sequence S of entries with integer keys in the range [0, N-1]
    Output: sequence S sorted in nondecreasing order of the keys
    let B be an array of N sequences, each of which is initially empty
    for each entry e in S do
        k = the key of e
        remove e from S and insert it at the end of bucket (sequence) B[k]
    for i = 0 to N-1 do
        for each entry e in sequence B[i] do
            remove e from B[i] and insert it at the end of S

Code fragment: bucket-sort.
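A minimal Python rendering of this pseudo-code (added here as an illustration; it assumes entries are (key, value) tuples with integer keys in [0, N-1]):

    def bucket_sort(S, N):
        """Stable bucket-sort of a list S of (key, value) pairs with keys in [0, N-1]."""
        B = [[] for _ in range(N)]        # one (initially empty) bucket per key
        for entry in S:                   # process S front to back for stability
            B[entry[0]].append(entry)     # add entry to the end of its bucket
        result = []
        for bucket in B:                  # enumerate buckets B[0], ..., B[N-1] in order
            result.extend(bucket)
        S[:] = result                     # put entries back into S in sorted order

    pairs = [(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd')]
    bucket_sort(pairs, 4)
    print(pairs)   # [(0, 'd'), (1, 'b'), (3, 'a'), (3, 'c')]: equal keys keep order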
It is easy to see that bucket-sort runs in O(n + N) time and uses O(n + N) space. Hence, bucket-sort is efficient when the range N of values for the keys is small compared to the sequence size n, say N = O(n) or N = O(n log n). Still, its performance deteriorates as N grows compared to n.

An important property of the bucket-sort algorithm is that it works correctly even if there are many different elements with the same key. Indeed, we described it in a way that anticipates such occurrences.

stable sorting

When sorting key-value pairs, an important issue is how equal keys are handled. Let S = ((k_0, v_0), ..., (k_(n-1), v_(n-1))) be a sequence of such entries. We say that a sorting algorithm is stable if, for any two entries (k_i, v_i) and (k_j, v_j) of S such that k_i = k_j and (k_i, v_i) precedes (k_j, v_j) in S before sorting (that is, i < j), entry (k_i, v_i) also precedes entry (k_j, v_j) after sorting. Stability is important for a sorting algorithm because applications may want to preserve the initial order of elements with the same key.

Our informal description of bucket-sort guarantees stability as long as we ensure that all sequences act as queues, with elements processed and removed from the front of a sequence and inserted at the back. That is, when initially placing elements of S into buckets, we should process S from front to back, and add each element to the end of its bucket. Subsequently, when transferring elements from the buckets back to S, we should process each B[i] from front to back, with those elements added to the end of S.

radix-sort

One of the reasons that stable sorting is so important is that it allows the bucket-sort approach to be applied to more general contexts than to sort integers. Suppose, for example, that we want to sort entries with keys that are pairs (k, l), where k and l are integers in the range [0, N-1], for some integer N >= 2. In a context such as this, it is common to define an order on these keys using the lexicographic (dictionary) convention, where (k_1, l_1) < (k_2, l_2) if k_1 < k_2, or if k_1 = k_2 and l_1 < l_2. This is a pairwise version of the lexicographic comparison function, which can be applied to equal-length character strings, or to tuples of length d.

The radix-sort algorithm sorts a sequence S of entries with keys that are pairs, by applying a stable bucket-sort on the sequence twice: first using one component of the pair as the key when ordering, and then using the second component. But which order is correct? Should we first sort on the k's (the first component) and then on the l's (the second component), or should it be the other way around?
To gain intuition before answering this question, we consider the following example.

Example: Consider the following sequence S (we show only the keys):

    S = ((3,3), (1,5), (2,5), (1,2), (2,3), (1,7), (3,2), (2,2)).

If we sort S stably on the first component, then we get the sequence

    S1 = ((1,5), (1,2), (1,7), (2,5), (2,3), (2,2), (3,3), (3,2)).

If we then stably sort this sequence S1 using the second component, we get the sequence

    S1,2 = ((1,2), (2,2), (3,2), (2,3), (3,3), (1,5), (2,5), (1,7)),

which is unfortunately not a sorted sequence. On the other hand, if we first stably sort S using the second component, then we get the sequence

    S2 = ((1,2), (3,2), (2,2), (3,3), (2,3), (1,5), (2,5), (1,7)).

If we then stably sort sequence S2 using the first component, we get the sequence

    S2,1 = ((1,2), (1,5), (1,7), (2,2), (2,3), (2,5), (3,2), (3,3)),

which is indeed a sequence that is lexicographically ordered.

So, from this example, we are led to believe that we should first sort using the second component and then again using the first component. This intuition is exactly right. By first stably sorting by the second component and then again by the first component, we guarantee that if two entries are equal in the second sort (by the first component), then their relative order in the starting sequence (which is sorted by the second component) is preserved. Thus, the resulting sequence is guaranteed to be sorted lexicographically every time. We leave to a simple exercise the determination of how this approach can be extended to triples and other d-tuples of numbers. We can summarize this section as follows:

Proposition: Let S be a sequence of n key-value pairs, each of which has a key (k_1, k_2, ..., k_d), where k_i is an integer in the range [0, N-1] for some integer N >= 2. We can sort S lexicographically in time O(d(n + N)) using radix-sort.

Radix-sort can be applied to any key that can be viewed as a composite of smaller pieces that are to be sorted lexicographically. For example, we can apply it to sort character strings of moderate length, as each individual character can be represented as an integer value. (Some care is needed to properly handle strings with varying lengths.)
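A short sketch of radix-sort for general d-tuples (added here, not the book's code), built from repeated stable bucket-sorts, sorting on the last component first:

    def radix_sort(S, d, N):
        """Lexicographically sort a list S of d-tuples of integers in [0, N-1]."""
        for i in reversed(range(d)):           # component d-1 first, component 0 last
            B = [[] for _ in range(N)]
            for t in S:                        # stable: scan front to back
                B[t[i]].append(t)
            S[:] = [t for bucket in B for t in bucket]

    S = [(3, 3), (1, 5), (2, 5), (1, 2), (2, 3), (1, 7), (3, 2), (2, 2)]
    radix_sort(S, 2, 8)
    print(S)   # the lexicographically sorted sequence from the example above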
comparing sorting algorithms

At this point, it might be useful for us to take a moment and consider all the algorithms we have studied in this book to sort an n-element sequence.

considering running time and other factors

We have studied several methods, such as insertion-sort and selection-sort, that have O(n^2)-time behavior in the average and worst case. We have also studied several methods with O(n log n)-time behavior, including heap-sort, merge-sort, and quick-sort. Finally, the bucket-sort and radix-sort methods run in linear time for certain types of keys. Certainly, the selection-sort algorithm is a poor choice in any application, since it runs in O(n^2) time even in the best case. But, of the remaining sorting algorithms, which is the best?

As with many things in life, there is no clear "best" sorting algorithm from the remaining candidates. There are trade-offs involving efficiency, memory usage, and stability. The sorting algorithm best suited for a particular application depends on the properties of that application. In fact, the default sorting algorithm used by computing languages and systems has evolved greatly over time. We can offer some guidance and observations, therefore, based on the known properties of the "good" sorting algorithms.

insertion-sort

If implemented well, the running time of insertion-sort is O(n + m), where m is the number of inversions (that is, the number of pairs of elements out of order). Thus, insertion-sort is an excellent algorithm for sorting small sequences (say, fewer than 50 elements), because insertion-sort is simple to program, and small sequences necessarily have few inversions. Also, insertion-sort is quite effective for sorting sequences that are already "almost" sorted. By "almost," we mean that the number of inversions is small. But the O(n^2)-time performance of insertion-sort makes it a poor choice outside of these special contexts.

heap-sort

Heap-sort, on the other hand, runs in O(n log n) time in the worst case, which is optimal for comparison-based sorting methods. Heap-sort can easily be made to execute in-place, and is a natural choice on small- and medium-sized sequences, when input data can fit into main memory. However, heap-sort tends to be outperformed by both quick-sort and merge-sort on larger sequences. A standard heap-sort does not provide a stable sort, because of the swapping of elements.
quick-sort

Although its O(n^2)-time worst-case performance makes quick-sort susceptible in real-time applications where we must make guarantees on the time needed to complete a sorting operation, we expect its performance to be O(n log n)-time, and experimental studies have shown that it outperforms both heap-sort and merge-sort on many tests. Quick-sort does not naturally provide a stable sort, due to the swapping of elements during the partitioning step. For decades, quick-sort was the default choice for a general-purpose, in-memory sorting algorithm. Quick-sort was included as the qsort sorting utility provided in C language libraries, and was the basis for sorting utilities on Unix operating systems for many years. It was also the standard algorithm for sorting arrays in Java through version 6 of that language. (We discuss Java 7 below.)

merge-sort

Merge-sort runs in O(n log n) time in the worst case. It is quite difficult to make merge-sort run in-place for arrays, and without that optimization, the extra overhead of allocating a temporary array and copying between the arrays is less attractive than in-place implementations of heap-sort and quick-sort for sequences that can fit entirely in a computer's main memory. Even so, merge-sort is an excellent algorithm for situations where the input is stratified across various levels of the computer's memory hierarchy (e.g., cache, main memory, external memory). In these contexts, the way that merge-sort processes runs of data in long merge streams makes the best use of all the data brought as a block into a level of memory, thereby reducing the total number of memory transfers.

The GNU sorting utility (and most current versions of the Linux operating system) relies on a multiway merge-sort variant. Since 2003, the standard sort method of Python's list class has been a hybrid approach named tim-sort (designed by Tim Peters), which is essentially a bottom-up merge-sort that takes advantage of some initial runs in the data while using insertion-sort to build additional runs. Tim-sort has also become the default algorithm for sorting arrays in Java 7.

bucket-sort and radix-sort

Finally, if an application involves sorting entries with small integer keys, character strings, or d-tuples of keys from a discrete range, then bucket-sort or radix-sort is an excellent choice, for it runs in O(d(n + N)) time, where [0, N-1] is the range of integer keys (and d = 1 for bucket-sort). Thus, if d(n + N) is significantly "below" the n log n function, then this sorting method should run faster than even quick-sort, heap-sort, or merge-sort.
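As a tiny illustration of the stability that tim-sort provides (added here, not from the text):

    # Python's built-in sort (tim-sort) is stable: equal keys keep their order.
    records = [('b', 2), ('a', 1), ('c', 2), ('d', 1)]
    records.sort(key=lambda pair: pair[1])
    print(records)   # [('a', 1), ('d', 1), ('b', 2), ('c', 2)]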