Python's Built-In Sorting Functions

Python provides two built-in ways to sort data. The first is the sort method of the list class. As an example, suppose that we define the following list:

    colors = ['red', 'green', 'blue', 'cyan', 'magenta', 'yellow']

That method has the effect of reordering the elements of the list into order, as defined by the natural meaning of the < operator for those elements. In the above example, with elements that are strings, the natural order is defined alphabetically. Therefore, after a call to colors.sort(), the order of the list would become:

    ['blue', 'cyan', 'green', 'magenta', 'red', 'yellow']

Python also supports a built-in function, named sorted, that can be used to produce a new ordered list containing the elements of any existing iterable container. Going back to our original example, the syntax sorted(colors) would return a new list of those colors, in alphabetical order, while leaving the contents of the original list unchanged. This second form is more general because it can be applied to any iterable object as a parameter; for example, sorted('green') returns ['e', 'e', 'g', 'n', 'r'].

Sorting According to a Key Function

There are many situations in which we wish to sort a list of elements, but according to some order other than the natural order defined by the < operator. For example, we might wish to sort a list of strings from shortest to longest (rather than alphabetically). Both of Python's built-in sort functions allow a caller to control the notion of order that is used when sorting. This is accomplished by providing, as an optional keyword parameter, a reference to a secondary function that computes a key for each element of the primary sequence; the primary elements are then sorted based on the natural order of their keys. (This technique was discussed earlier in the context of the built-in min and max functions.) A key function must be a one-parameter function that accepts an element as a parameter and returns a key. For example, we could use the built-in len function when sorting strings by length, as a call len(s) for string s returns its length. To sort our colors list based on length, we use the syntax colors.sort(key=len) to mutate the list, or sorted(colors, key=len) to generate a new ordered list, while leaving the original alone. When sorted with the length function as a key, the contents are:

    ['red', 'blue', 'cyan', 'green', 'yellow', 'magenta']

These built-in functions also support a keyword parameter, reverse, that can be set to True to cause the sort order to be from largest to smallest.
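A minimal, runnable illustration of these calls, using the colors list from the example above (the outputs shown are what Python's stable sort produces):

    colors = ['red', 'green', 'blue', 'cyan', 'magenta', 'yellow']

    print(sorted(colors))            # ['blue', 'cyan', 'green', 'magenta', 'red', 'yellow']
    print(sorted(colors, key=len))   # ['red', 'blue', 'cyan', 'green', 'yellow', 'magenta']
    print(sorted(colors, key=len, reverse=True))
                                     # ['magenta', 'yellow', 'green', 'blue', 'cyan', 'red']

    colors.sort(key=len)             # mutates the list in place; returns None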
The Decorate-Sort-Undecorate Design Pattern

Python's support for a key function when sorting is implemented using what is known as the decorate-sort-undecorate design pattern. It proceeds in three steps:

1. Each element of the list is temporarily replaced with a "decorated" version that includes the result of the key function applied to the element.
2. The list is sorted based upon the natural order of the keys.
3. The decorated elements are replaced by the original elements.

[Figure: a list of "decorated" strings, using their lengths as decoration, after the list has been sorted by those keys.]

Although there is already built-in support for this in Python, if we were to implement such a strategy ourselves, a natural way to represent a "decorated" element is using the same composition strategy that we used for representing key-value pairs within a priority queue. The code fragment from that earlier section includes just such an _Item class, defined so that the < operator for items relies upon the given keys. With such composition, we could trivially adapt any sorting algorithm to use the decorate-sort-undecorate pattern, as demonstrated below with merge-sort:

    def decorated_merge_sort(data, key=None):
        """Demonstration of the decorate-sort-undecorate pattern."""
        if key is not None:
            for j in range(len(data)):               # decorate each element
                data[j] = _Item(key(data[j]), data[j])
        merge_sort(data)                             # sort with existing algorithm
        if key is not None:
            for j in range(len(data)):               # undecorate each element
                data[j] = data[j]._value

Code Fragment: An approach for implementing the decorate-sort-undecorate pattern based upon the array-based merge-sort. The _Item class is identical to that which was used in the PriorityQueueBase class.
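For reference, here is a minimal sketch of the _Item composition class that the pattern above relies on, consistent with the behavior the text describes (the < operator compares items by key); the _key and _value field names mirror the priority-queue composition convention:

    class _Item:
        """Lightweight composite to store a decorated (key, value) pair."""
        __slots__ = '_key', '_value'

        def __init__(self, k, v):
            self._key = k
            self._value = v

        def __lt__(self, other):
            return self._key < other._key   # compare items based on their keys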
Selection

As important as it is, sorting is not the only interesting problem dealing with a total order relation on a set of elements. There are a number of applications in which we are interested in identifying a single element in terms of its rank relative to the sorted order of the entire set. Examples include identifying the minimum and maximum elements, but we may also be interested in, say, identifying the median element, that is, the element such that half of the other elements are smaller and the remaining half are larger. In general, queries that ask for an element with a given rank are called order statistics.

Defining the Selection Problem

In this section, we discuss the general order-statistic problem of selecting the kth smallest element from an unsorted collection of n comparable elements. This is known as the selection problem. Of course, we can solve this problem by sorting the collection and then indexing into the sorted sequence at index k − 1. Using the best comparison-based sorting algorithms, this approach would take O(n log n) time, which is obviously an overkill for the cases where k = 1 or k = n, because we can easily solve the selection problem for these values of k in O(n) time. Thus, a natural question to ask is whether we can achieve an O(n) running time for all values of k (including the interesting case of finding the median, where k = ⌊n/2⌋).

Prune-and-Search

We can indeed solve the selection problem in O(n) time for any value of k. Moreover, the technique we use to achieve this result involves an interesting algorithmic design pattern. This design pattern is known as prune-and-search or decrease-and-conquer. In applying this design pattern, we solve a given problem that is defined on a collection of n objects by pruning away a fraction of the n objects and recursively solving the smaller problem. When we have finally reduced the problem to one defined on a constant-sized collection of objects, we then solve the problem using some brute-force method. Returning back from all the recursive calls completes the construction. In some cases, we can avoid using recursion, in which case we simply iterate the prune-and-search reduction step until we can apply a brute-force method and stop. Incidentally, the binary search method described earlier in this book is an example of the prune-and-search design pattern.
Randomized Quick-Select

In applying the prune-and-search pattern to finding the kth smallest element in an unordered sequence of n elements, we describe a simple and practical algorithm, known as randomized quick-select. This algorithm runs in O(n) expected time, taken over all possible random choices made by the algorithm; this expectation does not depend whatsoever on any randomness assumptions about the input distribution. We note though that randomized quick-select runs in O(n²) time in the worst case, the justification of which is left as an exercise. Another exercise asks for a modification of randomized quick-select to define a deterministic selection algorithm that runs in O(n) worst-case time. The existence of this deterministic algorithm is mostly of theoretical interest, however, since the constant factor hidden by the big-Oh notation is relatively large in that case.

Suppose we are given an unsorted sequence S of n comparable elements together with an integer k ∈ [1, n]. At a high level, the quick-select algorithm for finding the kth smallest element in S is similar to the randomized quick-sort algorithm described earlier. We pick a "pivot" element from S at random and use this to subdivide S into three subsequences L, E, and G, storing the elements of S less than, equal to, and greater than the pivot, respectively. In the prune step, we determine which of these subsets contains the desired element, based on the value of k and the sizes of those subsets. We then recur on the appropriate subset, noting that the desired element's rank in the subset may differ from its rank in the full set. An implementation of randomized quick-select is shown below.

    import random

    def quick_select(S, k):
        """Return the kth smallest element of list S, for k from 1 to len(S)."""
        if len(S) == 1:
            return S[0]
        pivot = random.choice(S)           # pick random pivot element from S
        L = [x for x in S if x < pivot]    # elements less than pivot
        E = [x for x in S if x == pivot]   # elements equal to pivot
        G = [x for x in S if pivot < x]    # elements greater than pivot
        if k <= len(L):
            return quick_select(L, k)      # kth smallest lies in L
        elif k <= len(L) + len(E):
            return pivot                   # kth smallest equal to pivot
        else:
            j = k - len(L) - len(E)        # new selection parameter
            return quick_select(G, j)      # kth smallest is jth in G

Code Fragment: Randomized quick-select algorithm.
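A quick sanity check of the function above (the input values here are illustrative, not from the book):

    data = [7, 4, 9, 3, 2, 8, 5]
    print(quick_select(data, 1))   # 2  (the minimum)
    print(quick_select(data, 4))   # 5  (the median of seven elements)
    print(quick_select(data, 7))   # 9  (the maximum)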
Analyzing Randomized Quick-Select

Showing that randomized quick-select runs in O(n) time requires a simple probabilistic argument. The argument is based on the linearity of expectation, which states that if X and Y are random variables and c is a number, then E(X + Y) = E(X) + E(Y) and E(cX) = cE(X), where we use E(Z) to denote the expected value of the expression Z.

Let t(n) be the running time of randomized quick-select on a sequence of size n. Since this algorithm depends on random events, its running time, t(n), is a random variable. We want to bound E(t(n)), the expected value of t(n). Say that a recursive invocation of our algorithm is "good" if it partitions S so that the size of each of L and G is at most 3n/4. Clearly, a recursive call is good with probability at least 1/2. Let g(n) denote the number of consecutive recursive calls we make, including the present one, before we get a good one. Then we can characterize t(n) using the following recurrence equation:

    t(n) ≤ bn · g(n) + t(3n/4),

where b ≥ 1 is a constant. Applying the linearity of expectation for n > 1, we get

    E(t(n)) ≤ E(bn · g(n) + t(3n/4)) = bn · E(g(n)) + E(t(3n/4)).

Since a recursive call is good with probability at least 1/2, and whether a recursive call is good or not is independent of its parent call being good, the expected value of g(n) is at most the expected number of times we must flip a fair coin before it comes up "heads." That is, E(g(n)) ≤ 2. Thus, if we let T(n) be shorthand for E(t(n)), then we can write the case for n > 1 as

    T(n) ≤ T(3n/4) + 2bn.

To convert this relation into a closed form, let us iteratively apply this inequality assuming n is large. So, for example, after two applications,

    T(n) ≤ T((3/4)²n) + 2b(3/4)n + 2bn.

At this point, we should see that the general case is

    T(n) ≤ 2bn · Σ_{i=0}^{⌈log_{4/3} n⌉} (3/4)^i.

In other words, the expected running time is at most 2bn times a geometric sum whose base is a positive number less than 1. Thus, by the standard bound on geometric summations, T(n) is O(n).

Proposition: The expected running time of randomized quick-select on a sequence S of size n is O(n), assuming two elements of S can be compared in O(1) time.
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

R- Give a complete justification of the proposition.

R- In the merge-sort tree shown in the figures, some edges are drawn as arrows. What is the meaning of a downward arrow? How about an upward arrow?

R- Show that the running time of the merge-sort algorithm on an n-element sequence is O(n log n), even when n is not a power of 2.

R- Is our array-based implementation of merge-sort stable? Explain why or why not.

R- Is our linked-list-based implementation of merge-sort stable? Explain why or why not.

R- An algorithm that sorts key-value entries by key is said to be straggling if, any time two entries e_i and e_j have equal keys, but e_i appears before e_j in the input, then the algorithm places e_i after e_j in the output. Describe a change to the merge-sort algorithm to make it straggling.

R- Suppose we are given two n-element sorted sequences A and B, each with distinct elements, but potentially some elements that are in both sequences. Describe an O(n)-time method for computing a sequence representing the union A ∪ B (with no duplicates) as a sorted sequence.

R- Suppose we modify the deterministic version of the quick-sort algorithm so that, instead of selecting the last element in an n-element sequence as the pivot, we choose the element at index ⌊n/2⌋. What is the running time of this version of quick-sort on a sequence that is already sorted?

R- Consider a modification of the deterministic version of the quick-sort algorithm where we choose the element at index ⌊n/2⌋ as our pivot. Describe the kind of sequence that would cause this version of quick-sort to run in Ω(n²) time.

R- Show that the best-case running time of quick-sort on a sequence of size n with distinct elements is O(n log n).

R- Suppose the function inplace_quick_sort is executed on a sequence with duplicate elements. Prove that the algorithm still correctly sorts the input sequence. What happens in the partition step when there are elements equal to the pivot? What is the running time of the algorithm if all the input elements are equal?
R- If the outermost while loop of our implementation of inplace_quick_sort were changed to use condition left < right (rather than left <= right), there would be a flaw. Explain the flaw and give a specific input sequence on which such an implementation fails.

R- If the conditional within our inplace_quick_sort implementation were changed to use condition left < right (rather than left <= right), there would be a flaw. Explain the flaw and give a specific input sequence on which such an implementation fails.

R- Following our analysis of randomized quick-sort, show that the probability that a given input element x belongs to more than 2 log n subproblems in size group i is at most 1/n².

R- Of the n! possible inputs to a given comparison-based sorting algorithm, what is the absolute maximum number of inputs that could be correctly sorted with just n comparisons?

R- Jonathan has a comparison-based sorting algorithm that sorts the first k elements of a sequence of size n in O(n) time. Give a big-Oh characterization of the biggest that k can be.

R- Is the bucket-sort algorithm in-place? Why or why not?

R- Describe a radix-sort method for lexicographically sorting a sequence S of triplets (k, l, m), where k, l, and m are integers in the range [0, N − 1], for some N ≥ 2. How could this scheme be extended to sequences of d-tuples (k₁, k₂, ..., k_d), where each k_i is an integer in the range [0, N − 1]?

R- Suppose S is a sequence of n values, each equal to 0 or 1. How long will it take to sort S with the merge-sort algorithm? What about quick-sort?

R- Suppose S is a sequence of n values, each equal to 0 or 1. How long will it take to sort S stably with the bucket-sort algorithm?

R- Given a sequence S of n values, each equal to 0 or 1, describe an in-place method for sorting S.

R- Give an example input list that requires merge-sort and heap-sort to take O(n log n) time to sort, but insertion-sort runs in O(n) time. What if you reverse this list?

R- What is the best algorithm for sorting each of the following: general comparable objects, long character strings, 32-bit integers, double-precision floating-point numbers, and bytes? Justify your answer.

R- Show that the worst-case running time of quick-select on an n-element sequence is Ω(n²).
Creativity

C- Linda claims to have an algorithm that takes an input sequence S and produces an output sequence T that is a sorting of the n elements in S.
(a) Give an algorithm, is_sorted, that tests in O(n) time if T is sorted.
(b) Explain why the algorithm is_sorted is not sufficient to prove a particular output T of Linda's algorithm is a sorting of S.
(c) Describe what additional information Linda's algorithm could output so that her algorithm's correctness could be established on any given S and T in O(n) time.

C- Describe and analyze an efficient method for removing all duplicates from a collection A of n elements.

C- Augment the PositionalList class to support a method named merge with the following behavior: if A and B are PositionalList instances whose elements are sorted, the syntax A.merge(B) should merge all elements of B into A so that A remains sorted and B becomes empty. Your implementation must accomplish the merge by relinking existing nodes; you are not to create any new nodes.

C- Augment the PositionalList class to support a method named sort that sorts the elements of a list by relinking existing nodes; you are not to create any new nodes. You may use your choice of sorting algorithm.

C- Implement a bottom-up merge-sort for a collection of items by placing each item in its own queue, and then repeatedly merging pairs of queues until all items are sorted within a single queue.

C- Modify our in-place quick-sort implementation to be a randomized version of the algorithm, as discussed in this chapter.

C- Consider a version of deterministic quick-sort where we pick as our pivot the median of the d last elements in the input sequence of n elements, for a fixed, constant odd number d ≥ 3. What is the asymptotic worst-case running time of quick-sort in this case?

C- Another way to analyze randomized quick-sort is to use a recurrence equation. In this case, we let T(n) denote the expected running time of randomized quick-sort, and we observe that, because of the worst-case partitions for good and bad splits, we can write

    T(n) ≤ (1/2)(T(3n/4) + T(n/4)) + (1/2)T(n − 1) + bn,

where bn is the time needed to partition a list for a given pivot and concatenate the result sublists after the recursive calls return. Show, by induction, that T(n) is O(n log n).
C- Our high-level description of quick-sort describes partitioning the elements into three sets L, E, and G, having keys less than, equal to, or greater than the pivot, respectively. However, our in-place quick-sort implementation does not gather all elements equal to the pivot into a set E. An alternative strategy for an in-place, three-way partition is as follows. Loop through the elements from left to right, maintaining indices i, j, and k and the invariant that all elements of slice S[0:i] are strictly less than the pivot, all elements of slice S[i:j] are equal to the pivot, and all elements of slice S[j:k] are strictly greater than the pivot; elements of S[k:n] are as yet unclassified. In each pass of the loop, classify one additional element, performing a constant number of swaps as needed. Implement an in-place quick-sort using this strategy.

C- Suppose we are given an n-element sequence S such that each element in S represents a different vote for president, where each vote is given as an integer representing a particular candidate, yet the integers may be arbitrarily large (even if the number of candidates is not). Design an O(n log n)-time algorithm to see who wins the election S represents, assuming the candidate with the most votes wins.

C- Consider the voting problem from the previous exercise, but now suppose that we know the number k < n of candidates running, even though the integer IDs for those candidates can be arbitrarily large. Describe an O(n log k)-time algorithm for determining who wins the election.

C- Consider the voting problem again, but now suppose the integers 1 to k are used to identify k < n candidates. Design an O(n)-time algorithm to determine who wins the election.

C- Show that any comparison-based sorting algorithm can be made to be stable without affecting its asymptotic running time.

C- Suppose we are given two sequences A and B of n elements, possibly containing duplicates, on which a total order relation is defined. Describe an efficient algorithm for determining if A and B contain the same set of elements. What is the running time of this method?

C- Given an array A of n integers in the range [0, n² − 1], describe a simple method for sorting A in O(n) time.

C- Let S₁, S₂, ..., S_k be k different sequences whose elements have integer keys in the range [0, N − 1], for some parameter N ≥ 2. Describe an algorithm that produces k respective sorted sequences in O(n + N) time, where n denotes the sum of the sizes of those sequences.

C- Given a sequence S of n elements, on which a total order relation is defined, describe an efficient method for determining whether there are two equal elements in S. What is the running time of your method?
C- Let S be a sequence of n elements on which a total order relation is defined. Recall that an inversion in S is a pair of elements x and y such that x appears before y in S but x > y. Describe an algorithm running in O(n log n) time for determining the number of inversions in S.

C- Let S be a sequence of n integers. Describe a method for printing out all the pairs of inversions in S in O(n + k) time, where k is the number of such inversions.

C- Let S be a random permutation of n distinct integers. Argue that the expected running time of insertion-sort on S is Ω(n²). (Hint: note that half of the elements ranked in the top half of a sorted version of S are expected to be in the first half of S.)

C- Let A and B be two sequences of n integers each. Given an integer m, describe an O(n log n)-time algorithm for determining if there is an integer a in A and an integer b in B such that m = a + b.

C- Given a set of n integers, describe and analyze a fast method for finding the ⌈log n⌉ integers closest to the median.

C- Bob has a set A of n nuts and a set B of n bolts, such that each nut in A has a unique matching bolt in B. Unfortunately, the nuts in A all look the same, and the bolts in B all look the same as well. The only kind of comparison that Bob can make is to take a nut-bolt pair (a, b), such that a is in A and b is in B, and test it to see if the threads of a are larger, smaller, or a perfect match with the threads of b. Describe and analyze an efficient algorithm for Bob to match up all of his nuts and bolts.

C- Our quick-select implementation can be made more space-efficient by initially computing only the counts for the sets L, E, and G, creating only the new subset that will be needed for recursion. Implement such a version.

C- Describe an in-place version of the quick-select algorithm in pseudo-code, assuming that you are allowed to modify the order of elements.

C- Show how to use a deterministic O(n)-time selection algorithm to sort a sequence of n elements in O(n log n) worst-case time.

C- Given an unsorted sequence S of n comparable elements, and an integer k, give an O(n log k) expected-time algorithm for finding the O(k) elements that have rank ⌈n/k⌉, 2⌈n/k⌉, 3⌈n/k⌉, and so on.

C- Space aliens have given us a function, alien_split, that can take a sequence S of n integers and partition S in O(n) time into sequences S₁, S₂, ..., S_k of size at most ⌈n/k⌉ each, such that the elements in S_i are less than or equal to every element in S_{i+1}, for i = 1, 2, ..., k − 1, for a fixed number k < n. Show how to use alien_split to sort S in O(n log n / log k) time.

C- Read the documentation of the reverse keyword parameter of Python's sorting functions, and describe how the decorate-sort-undecorate paradigm could be used to implement it, without assuming anything about the key type.
C- Show that randomized quick-sort runs in O(n log n) time with probability at least 1 − 1/n, that is, with high probability, by answering the following:
(a) For each input element x, define c_{i,j}(x) to be a 0/1 random variable that is 1 if and only if element x is in j + 1 subproblems that belong to size group i. Argue why we need not define c_{i,j} for j ≥ n.
(b) Let x_{i,j} be a 0/1 random variable that is 1 with probability 1/2^j, independent of any other events, and let L = ⌈log_{4/3} n⌉. Argue why Σ_{i=0}^{L−1} Σ_{j=0}^{n−1} c_{i,j}(x) ≤ Σ_{i=0}^{L−1} Σ_{j=0}^{n−1} x_{i,j}.
(c) Show that the expected value of Σ_{i=0}^{L−1} Σ_{j=0}^{n−1} x_{i,j} is at most 2L.
(d) Show that the probability that Σ_{i=0}^{L−1} Σ_{j=0}^{n−1} x_{i,j} exceeds twice its expectation is at most 1/n², using the Chernoff bound that states that if X is the sum of a finite number of independent 0/1 random variables with expected value μ > 0, then Pr(X > 2μ) < 2^{−μ}.
(e) Argue why the previous claim proves randomized quick-sort runs in O(n log n) time with probability at least 1 − 1/n.

C- We can make the quick-select algorithm deterministic, by choosing the pivot of an n-element sequence as follows: Partition the set S into ⌈n/5⌉ groups of size 5 each (except possibly for one group). Sort each little set and identify the median element in this set. From this set of ⌈n/5⌉ "baby" medians, apply the selection algorithm recursively to find the median of the baby medians. Use this element as the pivot and proceed as in the quick-select algorithm. Show that this deterministic quick-select algorithm runs in O(n) time by answering the following questions (please ignore floor and ceiling functions if that simplifies the mathematics, for the asymptotics are the same either way):
(a) How many baby medians are less than or equal to the chosen pivot? How many are greater than or equal to the pivot?
(b) For each baby median less than or equal to the pivot, how many other elements are less than or equal to the pivot? Is the same true for those greater than or equal to the pivot?
(c) Argue why the method for finding the deterministic pivot and using it to partition S takes O(n) time.
(d) Based on these estimates, write a recurrence equation to bound the worst-case running time t(n) for this selection algorithm. (Note that in the worst case there are two recursive calls: one to find the median of the baby medians, and one to recur on the larger of L and G.)
(e) Using this recurrence equation, show by induction that t(n) is O(n).
Projects

P- Implement a nonrecursive, in-place version of the quick-sort algorithm, as described at the end of the section on quick-sort.

P- Experimentally compare the performance of in-place quick-sort and a version of quick-sort that is not in-place.

P- Perform a series of benchmarking tests on a version of merge-sort and quick-sort to determine which one is faster. Your tests should include sequences that are "random" as well as "almost" sorted.

P- Implement deterministic and randomized versions of the quick-sort algorithm and perform a series of benchmarking tests to see which one is faster. Your tests should include sequences that are very "random" looking as well as ones that are "almost" sorted.

P- Implement an in-place version of insertion-sort and an in-place version of quick-sort. Perform benchmarking tests to determine the range of values of n where quick-sort is on average better than insertion-sort.

P- Design and implement a version of the bucket-sort algorithm for sorting a list of n entries with integer keys taken from the range [0, N − 1], for N ≥ 2. The algorithm should run in O(n + N) time.

P- Design and implement an animation for one of the sorting algorithms described in this chapter. Your animation should illustrate the key properties of this algorithm in an intuitive manner.

Chapter Notes

Knuth's classic text on sorting and searching contains an extensive history of the sorting problem and algorithms for solving it. Huang and Langston show how to merge two sorted lists in-place in linear time. The standard quick-sort algorithm is due to Hoare. Several optimizations for quick-sort are described by Bentley and McIlroy. More information about randomization, including Chernoff bounds, can be found in the appendix and the book by Motwani and Raghavan. The quick-sort analysis given in this chapter is a combination of the analysis given in an earlier Java edition of this book and the analysis of Kleinberg and Tardos. One of this chapter's exercises is due to Littman. Gonnet and Baeza-Yates analyze and compare experimentally several sorting algorithms. The term "prune-and-search" comes originally from the computational geometry literature (such as in the work of Clarkson and of Megiddo); the term "decrease-and-conquer" is from Levitin.
Text Processing

Contents

Abundance of Digitized Text
Notations for Strings and the Python str Class
Pattern-Matching Algorithms
    Brute Force
    The Boyer-Moore Algorithm
    The Knuth-Morris-Pratt Algorithm
Dynamic Programming
    Matrix Chain-Product
    DNA and Text Sequence Alignment
Text Compression and the Greedy Method
    The Huffman Coding Algorithm
    The Greedy Method
Tries
    Standard Tries
    Compressed Tries
    Suffix Tries
    Search Engine Indexing
Exercises
Text Processing

Abundance of Digitized Text

Despite the wealth of multimedia information, text processing remains one of the dominant functions of computers. Computers are used to edit, store, and display documents, and to transport documents over the Internet. Furthermore, digital systems are used to archive a wide range of textual information, and new data is being generated at a rapidly increasing pace. A large corpus can readily surpass a petabyte of data (which is equivalent to a thousand terabytes, or a million gigabytes). Common examples of digital collections that include textual information are:

- Snapshots of the World Wide Web, as Internet document formats HTML and XML are primarily text formats, with added tags for multimedia content
- All documents stored locally on a user's computer
- Email archives
- Customer reviews
- Compilations of status updates on social networking sites such as Facebook
- Feeds from microblogging sites such as Twitter and Tumblr

These collections include written text from hundreds of international languages. Furthermore, there are large data sets (such as DNA) that can be viewed computationally as "strings" even though they are not language. In this chapter we explore some of the fundamental algorithms that can be used to efficiently analyze and process large textual data sets. In addition to having interesting applications, text-processing algorithms also highlight some important algorithmic design patterns.

We begin by examining the problem of searching for a pattern as a substring of a larger piece of text, for example, when searching for a word in a document. The pattern-matching problem gives rise to the brute-force method, which is often inefficient but has wide applicability. Next, we introduce an algorithmic technique known as dynamic programming, which can be applied in certain settings to solve a problem in polynomial time that appears at first to require exponential time to solve. We demonstrate the application of this technique to the problem of finding partial matches between strings that may be similar but not perfectly aligned. This problem arises when making suggestions for a misspelled word, or when trying to match related genetic samples. Because of the massive size of textual data sets, the issue of compression is important, both in minimizing the number of bits that need to be communicated through a network and in reducing the long-term storage requirements for archives. For text compression, we can apply the greedy method, which often allows us to approximate solutions to hard problems, and for some problems (such as in text compression) actually gives rise to optimal algorithms. Finally, we examine several special-purpose data structures that can be used to better organize textual data in order to support more efficient run-time queries.
Notations for Strings and the Python str Class

We use character strings as a model for text when we discuss algorithms for text processing. Character strings can come from a wide variety of sources, including scientific, linguistic, and Internet applications. Indeed, the following are examples of such strings:

    S = "CGTAAACTGCTTTAATCAAACGC"
    T = "http://www.wiley.com"

The first string, S, comes from DNA applications, and the second string, T, is the Internet address (URL) for the publisher of this book. We refer to the appendix for an overview of the operations supported by Python's str class.

To allow fairly general notions of a string in our algorithm descriptions, we only assume that characters of a string come from a known alphabet, which we denote as Σ. For example, in the context of DNA, there are four symbols in the standard alphabet, Σ = {A, C, G, T}. This alphabet Σ can, of course, be a subset of the ASCII or Unicode character sets, but it could also be something more general. Although we assume that an alphabet has a fixed finite size, denoted as |Σ|, that size can be nontrivial, as with Python's treatment of the Unicode alphabet, which allows for more than a million distinct characters. We therefore consider the impact of |Σ| in our asymptotic analysis of text-processing algorithms.

Several string-processing operations involve breaking large strings into smaller strings. In order to be able to speak about the pieces that result from such operations, we will rely on Python's indexing and slicing notations. For the sake of notation, we let S denote a string of length n. In that case, we let S[j] refer to the character at index j for 0 ≤ j ≤ n − 1. We let the notation S[j:k] for 0 ≤ j ≤ k ≤ n denote the slice (or substring) of S consisting of characters S[j] up to and including S[k − 1], but not S[k]. By this definition, note that substring S[j:j+m] has length m and that substring S[j:j] is trivially the null string, having length 0. In accordance with Python conventions, the substring S[j:k] is also the null string when j > k. In order to distinguish some special kinds of substrings, let us refer to any substring of the form S[0:k] for 0 ≤ k ≤ n as a prefix of S; such a prefix results in Python when the first index is omitted from slice notation, as in S[:k]. Similarly, any substring of the form S[j:n] for 0 ≤ j ≤ n is a suffix of S; such a suffix results in Python when the second index is omitted from slice notation, as in S[j:]. For example, if we again take S to be the string of DNA given above, then "CGTAA" is a prefix of S, "CGC" is a suffix of S, and "C" is both a prefix and a suffix of S. Note that the null string is a prefix and a suffix of any string.
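A short, runnable illustration of these notations, using the DNA string above:

    S = 'CGTAAACTGCTTTAATCAAACGC'
    print(S[1:3])   # 'GT'    (characters at indices 1 and 2)
    print(S[:5])    # 'CGTAA' (a prefix of S)
    print(S[-3:])   # 'CGC'   (equivalently S[20:], a suffix of S)
    print(S[2:2])   # ''      (the null string)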
Pattern-Matching Algorithms

In the classic pattern-matching problem, we are given a text string T of length n and a pattern string P of length m, and want to find whether P is a substring of T. If so, we may want to find the lowest index j within T at which P begins, such that T[j:j+m] equals P, or perhaps to find all indices of T at which pattern P begins.

The pattern-matching problem is inherent to many behaviors of Python's str class, such as P in T, T.find(P), T.index(P), T.count(P), and is a subtask of more complex behaviors such as T.partition(P), T.split(P), and T.replace(P, Q). In this section, we present three pattern-matching algorithms (with increasing levels of difficulty). For simplicity, we model the outward semantics of our functions upon the find method of the string class, returning the lowest index at which the pattern begins, or −1 if the pattern is not found.

Brute Force

The brute-force algorithmic design pattern is a powerful technique for algorithm design when we have something we wish to search for or when we wish to optimize some function. When applying this technique in a general situation, we typically enumerate all possible configurations of the inputs involved and pick the best of all these enumerated configurations. In applying this technique to design a brute-force pattern-matching algorithm, we derive what is probably the first algorithm that we might think of for solving the problem: we simply test all the possible placements of P relative to T. An implementation of this algorithm is shown below.

    def find_brute(T, P):
        """Return the lowest index of T at which substring P begins (or else -1)."""
        n, m = len(T), len(P)                 # introduce convenient notations
        for i in range(n - m + 1):            # try every potential starting index within T
            k = 0                             # an index into pattern P
            while k < m and T[i + k] == P[k]: # kth character of P matches
                k += 1
            if k == m:                        # if we reached the end of pattern,
                return i                      # substring T[i:i+m] matches P
        return -1                             # failed to find a match starting with any i

Code Fragment: An implementation of a brute-force pattern-matching algorithm.
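A quick check of find_brute, using the text and pattern from the example that follows:

    T = 'abacaabaccabacabaabb'
    P = 'abacab'
    print(find_brute(T, P))       # 10, since T[10:16] == 'abacab'
    print(find_brute(T, 'xyz'))   # -1, the pattern does not occur in T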
Performance

The analysis of the brute-force pattern-matching algorithm could not be simpler. It consists of two nested loops, with the outer loop indexing through all possible starting indices of the pattern in the text, and the inner loop indexing through each character of the pattern, comparing it to its potentially corresponding character in the text. Thus, the correctness of the brute-force pattern-matching algorithm follows immediately from this exhaustive search approach.

The running time of brute-force pattern matching in the worst case is not good, however, because, for each candidate index in T, we can perform up to m character comparisons to discover that P does not match T at the current index. Referring to the code fragment above, we see that the outer for loop is executed at most n − m + 1 times, and the inner while loop is executed at most m times. Thus, the worst-case running time of the brute-force method is O(nm).

Example: Suppose we are given the text string T = "abacaabaccabacabaabb" and the pattern string P = "abacab".

[Figure: example run of the brute-force pattern-matching algorithm on T and P; the character comparisons performed are indicated with numerical labels.]
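To see the O(nm) worst case concretely, consider a text of all a's and a pattern that matches until its final character (an illustrative adversarial input, not taken from the book):

    T = 'a' * 20
    P = 'a' * 5 + 'b'
    # Every one of the n - m + 1 alignments matches the first m - 1 characters
    # before failing, so roughly n * m comparisons are performed overall.
    print(find_brute(T, P))   # -1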
The Boyer-Moore Algorithm

At first, it might seem that it is always necessary to examine every character in T in order to locate a pattern P as a substring or to rule out its existence. But this is not always the case. The Boyer-Moore pattern-matching algorithm, which we study in this section, can sometimes avoid comparisons between P and a sizable fraction of the characters in T. In this section, we describe a simplified version of the original algorithm by Boyer and Moore.

The main idea of the Boyer-Moore algorithm is to improve the running time of the brute-force algorithm by adding two potentially time-saving heuristics. Roughly stated, these heuristics are as follows:

Looking-Glass Heuristic: When testing a possible placement of P against T, begin the comparisons from the end of P and move backward to the front of P.

Character-Jump Heuristic: During the testing of a possible placement of P within T, a mismatch of text character T[i] = c with the corresponding pattern character P[k] is handled as follows. If c is not contained anywhere in P, then shift P completely past T[i] (for it cannot match any character in P). Otherwise, shift P until an occurrence of character c in P gets aligned with T[i].

We will formalize these heuristics shortly, but at an intuitive level, they work as an integrated team. The looking-glass heuristic sets up the other heuristic to allow us to avoid comparisons between P and whole groups of characters in T. In this case at least, we can get to the destination faster by going backwards, for if we encounter a mismatch during the consideration of P at a certain location in T, then we are likely to avoid lots of needless comparisons by significantly shifting P relative to T using the character-jump heuristic. The character-jump heuristic pays off big if it can be applied early in the testing of a potential placement of P against T.

[Figure: a simple example demonstrating the intuition of the Boyer-Moore pattern-matching algorithm. The original comparison results in a mismatch with a character of the text that is nowhere in the pattern, so the entire pattern is shifted beyond its location. The second comparison is also a mismatch, but the mismatched character occurs elsewhere in the pattern; the pattern is next shifted so that its last occurrence of that character is aligned with the corresponding character in the text. The remainder of the process is not illustrated in this figure.]
The example in the figure above is rather basic, because it only involves mismatches with the last character of the pattern. More generally, when a match is found for that last character, the algorithm continues by trying to extend the match with the second-to-last character of the pattern in its current alignment. That process continues until either matching the entire pattern, or finding a mismatch at some interior position of the pattern. If a mismatch is found, and the mismatched character of the text does not occur in the pattern, we shift the entire pattern beyond that location, as originally illustrated. If the mismatched character occurs elsewhere in the pattern, we must consider two possible subcases depending on whether its last occurrence is before or after the character of the pattern that was aligned with the mismatched one. Those two cases are illustrated in the figure below.

[Figure: additional rules for the character-jump heuristic of the Boyer-Moore algorithm. We let i represent the index of the mismatched character in the text, k represent the corresponding index in the pattern, and j represent the index of the last occurrence of T[i] within the pattern. We distinguish two cases: (a) j < k, in which case we shift the pattern by k − j units, and thus index i advances by m − (j + 1) units; (b) j > k, in which case we shift the pattern by one unit, and index i advances by m − k units.]

In case (b) of the figure, we slide the pattern only one unit. It would be more productive to slide it rightward until finding another occurrence of mismatched character T[i] in the pattern, but we do not wish to take time to search for another occurrence.
The efficiency of the Boyer-Moore algorithm relies on creating a lookup table that quickly determines where a mismatched character occurs elsewhere in the pattern. In particular, we define a function last(c) as follows: if c is in P, last(c) is the index of the last (rightmost) occurrence of c in P; otherwise, we conventionally define last(c) = −1.

If we assume that the alphabet is of fixed, finite size, and that characters can be converted to indices of an array (for example, by using their character code), the last function can be easily implemented as a lookup table with worst-case O(1)-time access to the value last(c). However, the table would have length equal to the size of the alphabet (rather than the size of the pattern), and time would be required to initialize the entire table. We prefer to use a hash table to represent the last function, with only those characters from the pattern occurring in the structure. The space usage for this approach is proportional to the number of distinct alphabet symbols that occur in the pattern, and thus O(m). The expected lookup time remains independent of the problem (although the worst-case bound is O(m)). Our complete implementation of the Boyer-Moore pattern-matching algorithm is given below.

    def find_boyer_moore(T, P):
        """Return the lowest index of T at which substring P begins (or else -1)."""
        n, m = len(T), len(P)          # introduce convenient notations
        if m == 0:
            return 0                   # trivial search for empty string
        last = {}                      # build 'last' dictionary
        for k in range(m):
            last[P[k]] = k             # later occurrence overwrites
        # align end of pattern at index m-1 of text
        i = m - 1                      # an index into T
        k = m - 1                      # an index into P
        while i < n:
            if T[i] == P[k]:           # a matching character
                if k == 0:
                    return i           # pattern begins at index i of text
                else:
                    i -= 1             # examine previous character
                    k -= 1             # of both T and P
            else:
                j = last.get(T[i], -1) # last(T[i]) is -1 if not found
                i += m - min(k, j + 1) # case analysis for jump step
                k = m - 1              # restart at end of pattern
        return -1

Code Fragment: An implementation of the Boyer-Moore algorithm.
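As a sanity check, here is the 'last' table that the implementation builds for the pattern of the running example, together with a search (the dictionary contents were computed by hand here):

    P = 'abacab'
    # last == {'a': 4, 'b': 5, 'c': 3}   -- rightmost index of each character in P
    print(find_boyer_moore('abacaabaccabacabaabb', P))   # 10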
The correctness of the Boyer-Moore pattern-matching algorithm follows from the fact that each time the method makes a shift, it is guaranteed not to "skip" over any possible matches, for last(c) is the location of the last occurrence of c in P.

[Figure: an illustration of the Boyer-Moore pattern-matching algorithm on an input string similar to the earlier example, including a summary of the last(c) function; the character comparisons performed are indicated with numerical labels.]

Performance

If using a traditional lookup table, the worst-case running time of the Boyer-Moore algorithm is O(nm + |Σ|). Namely, the computation of the last function takes time O(m + |Σ|), and the actual search for the pattern takes O(nm) time in the worst case, the same as the brute-force algorithm. (With a hash table, the dependence on |Σ| is removed.) An example of a text-pattern pair that achieves the worst case is a text consisting entirely of a's and a pattern consisting of a b followed by m − 1 a's. The worst-case performance, however, is unlikely to be achieved for English text, for, in that case, the Boyer-Moore algorithm is often able to skip large portions of text. Experimental evidence on English text shows that the average number of comparisons done per character is 0.24 for a five-character pattern string.

We have actually presented a simplified version of the Boyer-Moore algorithm. The original algorithm achieves running time O(n + m + |Σ|) by using an alternative shift heuristic to the partially matched text string, whenever it shifts the pattern more than the character-jump heuristic. This alternative shift heuristic is based on applying the main idea from the Knuth-Morris-Pratt pattern-matching algorithm, which we discuss next.
The Knuth-Morris-Pratt Algorithm

In examining the worst-case performances of the brute-force and Boyer-Moore pattern-matching algorithms on specific instances of the problem, such as that given in the earlier example, we should notice a major inefficiency. For a certain alignment of the pattern, if we find several matching characters but then detect a mismatch, we ignore all the information gained by the successful comparisons after restarting with the next incremental placement of the pattern. The Knuth-Morris-Pratt (or "KMP") algorithm, discussed in this section, avoids this waste of information and, in so doing, it achieves a running time of O(n + m), which is asymptotically optimal. That is, in the worst case any pattern-matching algorithm will have to examine all the characters of the text and all the characters of the pattern at least once. The main idea of the KMP algorithm is to precompute self-overlaps between portions of the pattern so that when a mismatch occurs at one location, we immediately know the maximum amount to shift the pattern before continuing the search.

[Figure: a motivating example for the Knuth-Morris-Pratt algorithm, searching for the pattern "amalgamation". If a mismatch occurs at the indicated location, the pattern could be shifted to the second alignment, without explicit need to recheck the partial match with the prefix "ama". If the mismatched character is not an "l", then the next potential alignment of the pattern can take advantage of the common "a".]

The Failure Function

To implement the KMP algorithm, we will precompute a failure function, f, that indicates the proper shift of P upon a failed comparison. Specifically, the failure function f(k) is defined as the length of the longest prefix of P that is a suffix of P[1:k+1] (note that we did not include P[0] here, since we will shift at least one unit). Intuitively, if we find a mismatch upon character P[k+1], the function f(k) tells us how many of the immediately preceding characters can be reused to restart the pattern. The example below describes the value of the failure function for the example pattern from the figure.
Example: Consider the pattern P = "amalgamation" from the figure above. The Knuth-Morris-Pratt (KMP) failure function, f(k), for the string P is as shown in the following table:

    k      0  1  2  3  4  5  6  7  8  9  10 11
    P[k]   a  m  a  l  g  a  m  a  t  i  o  n
    f(k)   0  0  1  0  0  1  2  3  0  0  0  0

Implementation

Our implementation of the KMP pattern-matching algorithm is shown below. It relies on a utility function, compute_kmp_fail, discussed afterward, to compute the failure function efficiently. The main part of the KMP algorithm is its while loop, each iteration of which performs a comparison between the character at index j in T and the character at index k in P. If the outcome of this comparison is a match, the algorithm moves on to the next characters in both T and P (or reports a match if reaching the end of the pattern). If the comparison failed, the algorithm consults the failure function for a new candidate character in P, or starts over with the next index in T if failing on the first character of the pattern (since nothing can be reused).

    def find_kmp(T, P):
        """Return the lowest index of T at which substring P begins (or else -1)."""
        n, m = len(T), len(P)           # introduce convenient notations
        if m == 0:
            return 0                    # trivial search for empty string
        fail = compute_kmp_fail(P)      # rely on utility to precompute
        j = 0                           # index into text
        k = 0                           # index into pattern
        while j < n:
            if T[j] == P[k]:            # P[0:1+k] matched thus far
                if k == m - 1:          # match is complete
                    return j - m + 1
                j += 1                  # try to extend match
                k += 1
            elif k > 0:
                k = fail[k - 1]         # reuse suffix of P[0:k]
            else:
                j += 1
        return -1                       # reached end without match

Code Fragment: An implementation of the KMP pattern-matching algorithm. The compute_kmp_fail utility function is given afterward.
Constructing the KMP Failure Function

To construct the failure function, we use the method shown below, which is a "bootstrapping" process that compares the pattern to itself as in the KMP algorithm. Each time we have two characters that match, we set f(j) = k + 1. Note that since we have j > k throughout the execution of the algorithm, f(k − 1) is always well defined when we need to use it.

    def compute_kmp_fail(P):
        """Utility that computes and returns KMP 'fail' list."""
        m = len(P)
        fail = [0] * m                  # by default, presume overlap of 0 everywhere
        j = 1
        k = 0
        while j < m:                    # compute f(j) during this pass, if nonzero
            if P[j] == P[k]:            # k + 1 characters match thus far
                fail[j] = k + 1
                j += 1
                k += 1
            elif k > 0:                 # k follows a matching prefix
                k = fail[k - 1]
            else:                       # no match found starting at j
                j += 1
        return fail

Code Fragment: An implementation of the compute_kmp_fail utility in support of the KMP pattern-matching algorithm. Note how the algorithm uses the previous values of the failure function to efficiently compute new values.
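A quick check of the two functions on the running examples:

    print(compute_kmp_fail('amalgamation'))
    # [0, 0, 1, 0, 0, 1, 2, 3, 0, 0, 0, 0]   -- matches the failure-function table above

    print(find_kmp('abacaabaccabacabaabb', 'abacab'))   # 10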
Performance

Excluding the computation of the failure function, the running time of the KMP algorithm is clearly proportional to the number of iterations of the while loop. For the sake of the analysis, let us define s = j − k. Intuitively, s is the total amount by which the pattern P has been shifted with respect to the text T. Note that throughout the execution of the algorithm, we have s ≤ n. One of the following three cases occurs at each iteration of the loop:

- If T[j] = P[k], then j and k each increase by 1, and thus, s does not change.
- If T[j] ≠ P[k] and k > 0, then j does not change and s increases by at least 1, since in this case s changes from j − k to j − f(k − 1), which is an addition of k − f(k − 1), which is positive because f(k − 1) < k.
- If T[j] ≠ P[k] and k = 0, then j increases by 1 and s increases by 1, since k does not change.

Thus, at each iteration of the loop, either j or s increases by at least 1 (possibly both); hence, the total number of iterations of the while loop in the KMP pattern-matching algorithm is at most 2n. Achieving this bound, of course, assumes that we have already computed the failure function for P. The algorithm for computing the failure function runs in O(m) time. Its analysis is analogous to that of the main KMP algorithm, yet with a pattern of length m compared to itself. Thus, we have:

Proposition: The Knuth-Morris-Pratt algorithm performs pattern matching on a text string of length n and a pattern string of length m in O(n + m) time.

The correctness of this algorithm follows from the definition of the failure function. Any comparisons that are skipped are actually unnecessary, for the failure function guarantees that all the ignored comparisons are redundant; they would involve comparing the same matching characters over again.

[Figure: an illustration of the KMP pattern-matching algorithm on the same input strings as in the earlier example, including the failure function for the pattern; the character comparisons performed by the primary algorithm are indicated with numerical labels (additional comparisons would be performed during the computation of the failure function). Note the use of the failure function to avoid redoing one of the comparisons between a character of the pattern and a character of the text. Also note that the algorithm performs fewer overall comparisons than the brute-force algorithm run on the same strings.]
Dynamic Programming

In this section, we discuss the dynamic programming algorithm-design technique. This technique is similar to the divide-and-conquer technique, in that it can be applied to a wide variety of different problems. Dynamic programming can often be used to take problems that seem to require exponential time and produce polynomial-time algorithms to solve them. In addition, the algorithms that result from applications of the dynamic programming technique are usually quite simple, often needing little more than a few lines of code to describe some nested loops for filling in a table.

Matrix Chain-Product

Rather than starting out with an explanation of the general components of the dynamic programming technique, we begin by giving a classic, concrete example. Suppose we are given a collection of n two-dimensional matrices for which we wish to compute the mathematical product

    A = A₀ · A₁ · A₂ ⋯ A_{n−1},

where A_i is a d_i × d_{i+1} matrix, for i = 0, 1, 2, ..., n − 1. In the standard matrix multiplication algorithm (which is the one we will use), to multiply a d × e matrix B times an e × f matrix C, we compute the product, A, as

    A[i][j] = Σ_{k=0}^{e−1} B[i][k] · C[k][j].

This definition implies that matrix multiplication is associative, that is, it implies that B · (C · D) = (B · C) · D. Thus, we can parenthesize the expression for A any way we wish and we will end up with the same answer. However, we will not necessarily perform the same number of primitive (that is, scalar) multiplications in each parenthesization, as is illustrated in the following example.

Example: Let B be a 2 × 10 matrix, let C be a 10 × 50 matrix, and let D be a 50 × 20 matrix. Computing B · (C · D) requires 2·10·20 + 10·50·20 = 10400 multiplications, whereas computing (B · C) · D requires 2·10·50 + 2·50·20 = 3000 multiplications.

The matrix chain-product problem is to determine the parenthesization of the expression defining the product A that minimizes the total number of scalar multiplications performed. As the example above illustrates, the differences between parenthesizations can be dramatic, so finding a good solution can result in significant speedups.
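A tiny sketch to verify the arithmetic in the example above (the helper function mults is ours, not from the book):

    def mults(p, q, r):
        """Scalar multiplications to multiply a p-by-q matrix by a q-by-r matrix."""
        return p * q * r

    # B: 2x10, C: 10x50, D: 50x20
    print(mults(10, 50, 20) + mults(2, 10, 20))   # B*(C*D): 10400
    print(mults(2, 10, 50) + mults(2, 50, 20))    # (B*C)*D: 3000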
Defining Subproblems

One way to solve the matrix chain-product problem is to simply enumerate all the possible ways of parenthesizing the expression for A and determine the number of multiplications performed by each one. Unfortunately, the set of all different parenthesizations of the expression for A is equal in number to the set of all different binary trees that have n leaves. This number is exponential in n. Thus, this straightforward ("brute-force") algorithm runs in exponential time, for there are an exponential number of ways to parenthesize an associative arithmetic expression.

We can significantly improve the performance achieved by the brute-force algorithm, however, by making a few observations about the nature of the matrix chain-product problem. The first is that the problem can be split into subproblems. In this case, we can define a number of different subproblems, each of which is to compute the best parenthesization for some subexpression A_i · A_{i+1} ⋯ A_j. As a concise notation, we use N_{i,j} to denote the minimum number of multiplications needed to compute this subexpression. Thus, the original matrix chain-product problem can be characterized as that of computing the value of N_{0,n−1}. This observation is important, but we need one more in order to apply the dynamic programming technique.

Characterizing Optimal Solutions

The other important observation we can make about the matrix chain-product problem is that it is possible to characterize an optimal solution to a particular subproblem in terms of optimal solutions to its subproblems. We call this property the subproblem optimality condition. In the case of the matrix chain-product problem, we observe that, no matter how we parenthesize a subexpression, there has to be some final matrix multiplication that we perform. That is, a full parenthesization of a subexpression A_i · A_{i+1} ⋯ A_j has to be of the form (A_i ⋯ A_k) · (A_{k+1} ⋯ A_j), for some k ∈ {i, i+1, ..., j−1}. Moreover, for whichever k is the correct one, the products (A_i ⋯ A_k) and (A_{k+1} ⋯ A_j) must also be solved optimally. If this were not so, then there would be a global optimal that had one of these subproblems solved suboptimally. But this is impossible, since we could then reduce the total number of multiplications by replacing the current subproblem solution by an optimal solution for the subproblem. This observation implies a way of explicitly defining the optimization problem for N_{i,j} in terms of other optimal subproblem solutions. Namely, we can compute N_{i,j} by considering each place k where we could put the final multiplication and taking the minimum over all such choices.
Designing a Dynamic Programming Algorithm

We can therefore characterize the optimal subproblem solution, N_{i,j}, as

    N_{i,j} = min_{i ≤ k < j} {N_{i,k} + N_{k+1,j} + d_i · d_{k+1} · d_{j+1}},

where N_{i,i} = 0, since no work is needed for a single matrix. That is, N_{i,j} is the minimum, taken over all possible places to perform the final multiplication, of the number of multiplications needed to compute each subexpression plus the number of multiplications needed to perform the final matrix multiplication.

Notice that there is a sharing of subproblems going on that prevents us from dividing the problem into completely independent subproblems (as we would need to do to apply the divide-and-conquer technique). We can, nevertheless, use the equation for N_{i,j} to derive an efficient algorithm by computing N_{i,j} values in a bottom-up fashion, and storing intermediate solutions in a table of N_{i,j} values. We can begin simply enough by assigning N_{i,i} = 0 for i = 0, 1, ..., n − 1. We can then apply the general equation for N_{i,j} to compute the N_{i,i+1} values, since they depend only on the N_{i,i} and N_{i+1,i+1} values that are available. Given the N_{i,i+1} values, we can then compute the N_{i,i+2} values, and so on. Therefore, we can build N_{i,j} values up from previously computed values until we can finally compute the value of N_{0,n−1}, which is the number that we are searching for. A Python implementation of this dynamic programming solution is given below.

    def matrix_chain(d):
        """d is a list of n+1 numbers such that the size of the kth matrix is d[k]-by-d[k+1].

        Return an n-by-n table such that N[i][j] represents the minimum number of
        multiplications needed to compute the product of Ai through Aj inclusive.
        """
        n = len(d) - 1                  # number of matrices
        N = [[0] * n for i in range(n)] # initialize n-by-n result to zero
        for b in range(1, n):           # number of products in subchain
            for i in range(n - b):      # start of subchain
                j = i + b               # end of subchain
                N[i][j] = min(N[i][k] + N[k+1][j] + d[i]*d[k+1]*d[j+1]
                              for k in range(i, j))
        return N

Code Fragment: Dynamic programming algorithm for the matrix chain-product problem.

Thus, we can compute N_{0,n−1} with an algorithm that consists primarily of three nested loops (the third of which computes the min term). Each of these loops iterates at most n times per execution, with a constant amount of additional work within. Therefore, the total running time of this algorithm is O(n³).
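Running the implementation on the dimensions of the earlier example (d encodes a 2×10, a 10×50, and a 50×20 matrix):

    d = [2, 10, 50, 20]
    N = matrix_chain(d)
    print(N[0][2])   # 3000, achieved by the parenthesization (B*C)*D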
DNA and Text Sequence Alignment

A common text-processing problem, which arises in genetics and software engineering, is to test the similarity between two text strings. In a genetics application, the two strings could correspond to two strands of DNA, for which we want to compute similarities. Likewise, in a software engineering application, the two strings could come from two versions of source code for the same program, for which we want to determine changes made from one version to the next. Indeed, determining the similarity between two strings is so common that the Unix and Linux operating systems have a built-in program, named diff, for comparing text files.

Given a string X = x₀x₁x₂⋯x_{n−1}, a subsequence of X is any string that is of the form x_{i₁}x_{i₂}⋯x_{i_k}, where i_j < i_{j+1}; that is, it is a sequence of characters that are not necessarily contiguous but are nevertheless taken in order from X. For example, the string AAAG is a subsequence of the string CGATAATTGAGA.

The DNA and text similarity problem we address here is the longest common subsequence (LCS) problem. In this problem, we are given two character strings, X = x₀x₁x₂⋯x_{n−1} and Y = y₀y₁y₂⋯y_{m−1}, over some alphabet (such as the alphabet {A, C, G, T} common in computational genetics) and are asked to find a longest string S that is a subsequence of both X and Y.

One way to solve the longest common subsequence problem is to enumerate all subsequences of X and take the largest one that is also a subsequence of Y. Since each character of X is either in or not in a subsequence, there are potentially 2ⁿ different subsequences of X, each of which requires O(m) time to determine whether it is a subsequence of Y. Thus, this brute-force approach yields an exponential-time algorithm that runs in O(2ⁿm) time, which is very inefficient. Fortunately, the LCS problem is efficiently solvable using dynamic programming.

The Components of a Dynamic Programming Solution

As mentioned above, the dynamic programming technique is used primarily for optimization problems, where we wish to find the "best" way of doing something. We can apply the dynamic programming technique in such situations if the problem has certain properties:

Simple Subproblems: There has to be some way of repeatedly breaking the global optimization problem into subproblems. Moreover, there should be a way to parameterize subproblems with just a few indices, like i, j, k, and so on.

Subproblem Optimization: An optimal solution to the global problem must be a composition of optimal subproblem solutions.

Subproblem Overlap: Optimal solutions to unrelated subproblems can contain subproblems in common.
Applying Dynamic Programming to the LCS Problem

Recall that in the LCS problem, we are given two character strings, X and Y, of length n and m, respectively, and are asked to find a longest string S that is a subsequence of both X and Y. Since X and Y are character strings, we have a natural set of indices with which to define subproblems: indices into the strings X and Y. Let us define a subproblem, therefore, as that of computing the value L_{j,k}, which we will use to denote the length of a longest string that is a subsequence of both prefixes X[0:j] and Y[0:k]. This definition allows us to rewrite L_{j,k} in terms of optimal subproblem solutions. The definition depends on which of two cases we are in (see the figure below).

[Figure: The two cases in the longest common subsequence algorithm for computing L_{j,k}: (a) x_{j-1} = y_{k-1}; (b) x_{j-1} ≠ y_{k-1}.]

x_{j-1} = y_{k-1}: In this case, we have a match between the last character of X[0:j] and the last character of Y[0:k]. We claim that this character belongs to a longest common subsequence of X[0:j] and Y[0:k]. To justify this claim, let us suppose it is not true. There has to be some longest common subsequence x_{a_1} x_{a_2} ... x_{a_c} = y_{b_1} y_{b_2} ... y_{b_c}. If x_{a_c} = x_{j-1} or y_{b_c} = y_{k-1}, then we get the same sequence by setting a_c = j - 1 and b_c = k - 1. Alternately, if x_{a_c} ≠ x_{j-1} and y_{b_c} ≠ y_{k-1}, then we can get an even longer common subsequence by adding x_{j-1} = y_{k-1} to the end. Thus, a longest common subsequence of X[0:j] and Y[0:k] ends with x_{j-1}. Therefore, we set

    L_{j,k} = L_{j-1,k-1} + 1    if x_{j-1} = y_{k-1}.

x_{j-1} ≠ y_{k-1}: In this case, we cannot have a common subsequence that includes both x_{j-1} and y_{k-1}. That is, we can have a common subsequence end with x_{j-1} or one that ends with y_{k-1} (or possibly neither), but certainly not both. Therefore, we set

    L_{j,k} = max{ L_{j-1,k}, L_{j,k-1} }    if x_{j-1} ≠ y_{k-1}.

We note that because the slice X[0:0] is the empty string, L_{0,k} = 0 for k = 0, 1, ..., m; similarly, because the slice Y[0:0] is the empty string, L_{j,0} = 0 for j = 0, 1, ..., n.
The LCS Algorithm

The definition of L_{j,k} satisfies subproblem optimization, for we cannot have a longest common subsequence without also having longest common subsequences for the subproblems. Also, it uses subproblem overlap, because a subproblem solution L_{j,k} can be used in several other problems (namely, the problems L_{j+1,k}, L_{j,k+1}, and L_{j+1,k+1}).

Turning this definition of L_{j,k} into an algorithm is actually quite straightforward. We create an (n+1) x (m+1) array, L, defined for 0 <= j <= n and 0 <= k <= m. We initialize all entries to 0, in particular so that all entries of the form L_{j,0} and L_{0,k} are zero. Then, we iteratively build up values in L until we have L_{n,m}, the length of a longest common subsequence of X and Y. We give a Python implementation of this algorithm in the code fragment below.

  def lcs(X, Y):
    """Return table such that L[j][k] is length of LCS for X[0:j] and Y[0:k]."""
    n, m = len(X), len(Y)                  # introduce convenient notations
    L = [[0] * (m+1) for k in range(n+1)]  # (n+1) x (m+1) table
    for j in range(n):
      for k in range(m):
        if X[j] == Y[k]:                   # align this match
          L[j+1][k+1] = L[j][k] + 1
        else:                              # choose to ignore one character
          L[j+1][k+1] = max(L[j][k+1], L[j+1][k])
    return L

Code Fragment: Dynamic programming algorithm for the LCS problem.

The running time of the LCS algorithm is easy to analyze, for it is dominated by two nested for loops, with the outer one iterating n times and the inner one iterating m times. Since the if-statement and assignment inside the loop each requires O(1) primitive operations, this algorithm runs in O(nm) time. Thus, the dynamic programming technique can be applied to the longest common subsequence problem to improve significantly over the exponential-time brute-force solution to the LCS problem.

The lcs function of the code fragment above computes the length of the longest common subsequence (stored as L[n][m]), but not the subsequence itself. Fortunately, it is easy to extract the actual longest common subsequence if given the complete table of L_{j,k} values computed by the lcs function. The solution can be reconstructed back to front by reverse engineering the calculation of length L_{n,m}. At any position L_{j,k}, if x_{j-1} = y_{k-1}, then the length is based on the common subsequence associated with length L_{j-1,k-1}, followed by common character x_{j-1}. We can record x_{j-1} as part of the sequence, and then continue the analysis from L_{j-1,k-1}. If x_{j-1} ≠ y_{k-1},
then we can move to the larger of L_{j,k-1} and L_{j-1,k}. We continue this process until reaching some L_{j,k} = 0 (for example, if j or k is 0 as a boundary case). A Python implementation of this strategy is given in the code fragment below. This function constructs a longest common subsequence in O(n + m) additional time, since each pass of the while loop decrements either j or k (or both). An illustration of the algorithm for computing the longest common subsequence is given in the figure below.

  def lcs_solution(X, Y, L):
    """Return the longest common subsequence of X and Y, given LCS table L."""
    solution = []
    j, k = len(X), len(Y)
    while L[j][k] > 0:                 # common characters remain
      if X[j-1] == Y[k-1]:
        solution.append(X[j-1])
        j -= 1
        k -= 1
      elif L[j-1][k] >= L[j][k-1]:
        j -= 1
      else:
        k -= 1
    return ''.join(reversed(solution)) # return left-to-right version

Code Fragment: Reconstructing the longest common subsequence.

[Figure: Illustration of the algorithm for constructing a longest common subsequence from the array L, for X = "GTTCCTAATA" and Y = "CGATAATTGAGA". Each diagonal step on the highlighted path represents the use of a common character (with that character's respective indices in the sequences highlighted in the margins).]
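To see both functions in action, the following sketch runs them on the two strings from the figure above. The table entry L[n][m] reports the LCS length (six for this pair), and lcs_solution recovers one particular subsequence achieving that length.

  # Usage sketch for lcs and lcs_solution (defined in the code fragments above).
  X = 'GTTCCTAATA'
  Y = 'CGATAATTGAGA'
  L = lcs(X, Y)
  print(L[len(X)][len(Y)])      # 6, the length of a longest common subsequence
  print(lcs_solution(X, Y, L))  # one particular common subsequence of length 6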
Text Compression and the Greedy Method

In this section, we consider an important text-processing task, text compression. In this problem, we are given a string X defined over some alphabet, such as the ASCII or Unicode character sets, and we want to efficiently encode X into a small binary string (using only the characters 0 and 1). Text compression is useful in any situation where we wish to reduce bandwidth for digital communications, so as to minimize the time needed to transmit our text. Likewise, text compression is useful for storing large documents more efficiently, so as to allow a fixed-capacity storage device to contain as many documents as possible.

The method for text compression explored in this section is the Huffman code. Standard encoding schemes, such as ASCII, use fixed-length binary strings to encode characters (with 7 or 8 bits in the traditional or extended ASCII systems, respectively). The Unicode system was originally proposed as a 16-bit fixed-length representation, although common encodings reduce the space usage by allowing common groups of characters, such as those from the ASCII system, to be represented with fewer bits. The Huffman code saves space over a fixed-length encoding by using short code-word strings to encode high-frequency characters and long code-word strings to encode low-frequency characters. Furthermore, the Huffman code uses a variable-length encoding specifically optimized for a given string X over any alphabet. The optimization is based on the use of character frequencies, where we have, for each character c, a count f(c) of the number of times c appears in the string X.

To encode the string X, we convert each character in X to a variable-length code-word, and we concatenate all these code-words in order to produce the encoding Y for X. In order to avoid ambiguities, we insist that no code-word in our encoding be a prefix of another code-word in our encoding. Such a code is called a prefix code, and it simplifies the decoding of Y to retrieve X (see the figure below). Even with this restriction, the savings produced by a variable-length prefix code can be significant, particularly if there is a wide variance in character frequencies (as is the case for natural language text in almost every written language).

Huffman's algorithm for producing an optimal variable-length prefix code for X is based on the construction of a binary tree T that represents the code. Each edge in T represents a bit in a code-word, with an edge to a left child representing a "0" and an edge to a right child representing a "1". Each leaf v is associated with a specific character, and the code-word for that character is defined by the sequence of bits associated with the edges in the path from the root of T to v (see the figure below). Each leaf v has a frequency, f(v), which is simply the frequency in X of the character associated with v. In addition, we give each internal node v in T a frequency, f(v), that is the sum of the frequencies of all the leaves in the subtree rooted at v.
[Figure: An illustration of an example Huffman code for the input string X = "a fast runner need never be afraid of the dark": (a) frequency of each character of X; (b) Huffman tree T for string X. The code for a character c is obtained by tracing the path from the root of T to the leaf where c is stored, and associating a left child with 0 and a right child with 1; tracing such paths yields, for example, the code-words for "r" and "h".]

The Huffman Coding Algorithm

The Huffman coding algorithm begins with each of the d distinct characters of the string X to encode being the root node of a single-node binary tree. The algorithm proceeds in a series of rounds. In each round, the algorithm takes the two binary trees with the smallest frequencies and merges them into a single binary tree. It repeats this process until only one tree is left (see the pseudocode below).

Each iteration of the while loop in Huffman's algorithm can be implemented in O(log d) time using a priority queue represented with a heap. In addition, each iteration takes two nodes out of Q and adds one in, a process that will be repeated d - 1 times before exactly one node is left in Q. Thus, this algorithm runs in O(n + d log d) time.

Although a full justification of this algorithm's correctness is beyond our scope here, we note that its intuition comes from a simple idea: any optimal code can be converted into an optimal code in which the code-words for the two lowest-frequency characters, a and b, differ only in their last bit. Repeating the argument for a string with a and b replaced by a character c gives the following:

Proposition: Huffman's algorithm constructs an optimal prefix code for a string of length n with d distinct characters in O(n + d log d) time.
Algorithm Huffman(X):
  Input: String X of length n with d distinct characters
  Output: Coding tree for X

  Compute the frequency f(c) of each character c of X.
  Initialize a priority queue Q.
  for each character c in X do
    Create a single-node binary tree T storing c.
    Insert T into Q with key f(c).
  while len(Q) > 1 do
    (f1, T1) = Q.remove_min()
    (f2, T2) = Q.remove_min()
    Create a new binary tree T with left subtree T1 and right subtree T2.
    Insert T into Q with key f1 + f2.
  (f, T) = Q.remove_min()
  return tree T

Code Fragment: Huffman coding algorithm.

The Greedy Method

Huffman's algorithm for building an optimal encoding is an example application of an algorithmic design pattern called the greedy method. This design pattern is applied to optimization problems, where we are trying to construct some structure while minimizing or maximizing some property of that structure.

The general formula for the greedy method pattern is almost as simple as that for the brute-force method. In order to solve a given optimization problem using the greedy method, we proceed by a sequence of choices. The sequence starts from some well-understood starting condition, and computes the cost for that initial condition. The pattern then asks that we iteratively make additional choices by identifying the decision that achieves the best cost improvement from all of the choices that are currently possible. This approach does not always lead to an optimal solution.

But there are several problems that it does work for, and such problems are said to possess the greedy-choice property. This is the property that a global optimal condition can be reached by a series of locally optimal choices (that is, choices that are each the current best from among the possibilities available at the time), starting from a well-defined starting condition. The problem of computing an optimal variable-length prefix code is just one example of a problem that possesses the greedy-choice property.
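The pseudocode above translates readily to Python. The following is a minimal sketch, assuming the standard library's heapq module as the priority queue; the tiebreak counter inserted into each heap entry is our own device (not part of the pseudocode) so that entries with equal frequencies never force a comparison between incompatible tree objects.

  from heapq import heappush, heappop
  from collections import Counter

  def huffman(text):
      """Return the root of a Huffman coding tree for nonempty string text.

      A leaf is a single character; an internal node is a (left, right) pair.
      """
      freq = Counter(text)                      # frequency f(c) of each character c
      Q = []                                    # heap of (frequency, tiebreak, tree)
      for tiebreak, (c, f) in enumerate(freq.items()):
          heappush(Q, (f, tiebreak, c))         # single-node tree storing c
      tiebreak = len(Q)
      while len(Q) > 1:                         # merge two smallest-frequency trees
          f1, _, T1 = heappop(Q)
          f2, _, T2 = heappop(Q)
          heappush(Q, (f1 + f2, tiebreak, (T1, T2)))
          tiebreak += 1
      return heappop(Q)[2]

  def codewords(tree, prefix=''):
      """Map each character to its code-word: '0' for a left child, '1' for a right."""
      if isinstance(tree, str):                 # a leaf
          return {tree: prefix or '0'}          # degenerate single-character case
      left, right = tree
      codes = codewords(left, prefix + '0')
      codes.update(codewords(right, prefix + '1'))
      return codes

  T = huffman('a fast runner need never be afraid of the dark')
  print(codewords(T))                           # one optimal prefix code for X

Note that ties among equal frequencies mean the optimal tree is not unique, so the code-words printed here need not match those of the figure, although the total encoded length will.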
Tries

The pattern-matching algorithms presented earlier in this chapter speed up the search in a text by preprocessing the pattern (to compute the failure function in the Knuth-Morris-Pratt algorithm or the last function in the Boyer-Moore algorithm). In this section, we take a complementary approach, namely, we present string searching algorithms that preprocess the text. This approach is suitable for applications where a series of queries is performed on a fixed text, so that the initial cost of preprocessing the text is compensated by a speedup in each subsequent query (for example, a Web site that offers pattern matching in Shakespeare's Hamlet or a search engine that offers Web pages on the Hamlet topic).

A trie (pronounced "try") is a tree-based data structure for storing strings in order to support fast pattern matching. The main application for tries is in information retrieval. Indeed, the name "trie" comes from the word "retrieval." In an information retrieval application, such as a search for a certain DNA sequence in a genomic database, we are given a collection S of strings, all defined using the same alphabet. The primary query operations that tries support are pattern matching and prefix matching. The latter operation involves being given a string X, and looking for all the strings in S that contain X as a prefix.

Standard Tries

Let S be a set of s strings from alphabet Σ such that no string in S is a prefix of another string. A standard trie for S is an ordered tree T with the following properties (see the figure below):

- Each node of T, except the root, is labeled with a character of Σ.
- The children of an internal node of T have distinct labels.
- T has s leaves, each associated with a string of S, such that the concatenation of the labels of the nodes on the path from the root to a leaf v of T yields the string of S associated with v.

Thus, a trie T represents the strings of S with paths from the root to the leaves of T. Note the importance of assuming that no string in S is a prefix of another string. This ensures that each string of S is uniquely associated with a leaf of T. (This is similar to the restriction for prefix codes with Huffman coding, as described in the previous section.) We can always satisfy this assumption by adding a special character that is not in the original alphabet Σ at the end of each string.

An internal node in a standard trie T can have anywhere between 1 and |Σ| children. There is an edge going from the root r to one of its children for each character that is first in some string in the collection S. In addition, a path from the root of T to an internal node v at depth k corresponds to a k-character prefix X[0:k]
[Figure: Standard trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}.]

of a string X of S. In fact, for each character c that can follow the prefix X[0:k] in a string of the set S, there is a child of v labeled with character c. In this way, a trie concisely stores the common prefixes that exist among a set of strings.

As a special case, if there are only two characters in the alphabet, then the trie is essentially a binary tree, with some internal nodes possibly having only one child (that is, it may be an improper binary tree). In general, although it is possible that an internal node has up to |Σ| children, in practice the average degree of such nodes is likely to be much smaller. For example, the trie shown in the figure above has several internal nodes with only one child. On larger data sets, the average degree of nodes is likely to get smaller at greater depths of the tree, because there may be fewer strings sharing the common prefix, and thus fewer continuations of that pattern. Furthermore, in many languages, there will be character combinations that are unlikely to naturally occur.

The following proposition provides some important structural properties of a standard trie:

Proposition: A standard trie storing a collection S of s strings of total length n from an alphabet Σ has the following properties:
- The height of T is equal to the length of the longest string in S.
- Every internal node of T has at most |Σ| children.
- T has s leaves.
- The number of nodes of T is at most n + 1.

The worst case for the number of nodes of a trie occurs when no two strings share a common nonempty prefix; that is, except for the root, all internal nodes have one child.
A trie T for a set S of strings can be used to implement a set or map whose keys are the strings of S. Namely, we perform a search in T for a string X by tracing down from the root the path indicated by the characters in X. If this path can be traced and terminates at a leaf node, then we know X is a key in the map. For example, in the trie of the figure above, tracing the path for "bull" ends up at a leaf. If the path cannot be traced, or the path can be traced but terminates at an internal node, then X is not a key in the map. In that example, the path for "bet" cannot be traced and the path for "be" ends at an internal node. Neither such word is in the map. It is easy to see that the running time of the search for a string of length m is O(m * |Σ|), because we visit at most m + 1 nodes of T and we spend O(|Σ|) time at each node determining the child having the subsequent character as a label. The O(|Σ|) upper bound on the time to locate a child with a given label is achievable, even if the children of a node are unordered, since there are at most |Σ| children. We can improve the time spent at a node to be O(log |Σ|) or expected O(1), by mapping characters to children using a secondary search table or hash table at each node, or by using a direct lookup table of size |Σ| at each node, if |Σ| is sufficiently small (as is the case for DNA strings). For these reasons, we typically expect a search for a string of length m to run in O(m) time.

From the discussion above, it follows that we can use a trie to perform a special type of pattern matching, called word matching, where we want to determine whether a given pattern matches one of the words of the text exactly. Word matching differs from standard pattern matching because the pattern cannot match an arbitrary substring of the text, only one of its words. To accomplish this, each word of the original document must be added to the trie (see the figure below). A simple extension of this scheme supports prefix-matching queries. However, arbitrary occurrences of the pattern in the text (for example, the pattern is a proper suffix of a word or spans two words) cannot be efficiently performed.

To construct a standard trie for a set S of strings, we can use an incremental algorithm that inserts the strings one at a time. Recall the assumption that no string of S is a prefix of another string. To insert a string X into the current trie T, we trace the path associated with X in T, creating a new chain of nodes to store the remaining characters of X when we get stuck. The running time to insert X with length m is similar to a search, with worst-case O(m * |Σ|) performance, or expected O(m) if using secondary hash tables at each node. Thus, constructing the entire trie for set S takes expected O(n) time, where n is the total length of the strings of S.

There is a potential space inefficiency in the standard trie that has prompted the development of the compressed trie, which is also known (for historical reasons) as the Patricia trie. Namely, there are potentially a lot of nodes in the standard trie that have only one child, and the existence of such nodes is a waste. We discuss the compressed trie next.
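To make the incremental construction concrete, here is a minimal sketch of our own (not the book's implementation): each node maps characters to children with a Python dict, playing the role of the expected O(1) secondary hash table mentioned above, and a sentinel character '$' is appended to each word so that no stored string is a prefix of another, as the definition requires.

  class StandardTrie:
      """A standard trie over strings; nodes are nested dicts keyed by character."""
      _END = '$'                          # sentinel so no key is a prefix of another

      def __init__(self, words=()):
          self._root = {}
          for w in words:
              self.insert(w)

      def insert(self, word):
          node = self._root
          for c in word + self._END:      # trace/create the path for word
              node = node.setdefault(c, {})

      def __contains__(self, word):
          node = self._root
          for c in word + self._END:      # search fails if the path cannot be traced
              if c not in node:
                  return False
              node = node[c]
          return True

  # Word matching on the string set of the earlier figure.
  T = StandardTrie(['bear', 'bell', 'bid', 'bull', 'buy', 'sell', 'stock', 'stop'])
  print('bull' in T)    # True: the path traces all the way to a leaf
  print('be' in T)      # False: the path ends at an internal node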
[Figure: Word matching with a standard trie: (a) text to be searched (articles and prepositions, which are also known as stop words, excluded); (b) standard trie for the words in the text, with leaves augmented with indications of the index at which the given word begins in the text. For example, the leaf for the word "stock" notes each of the indices at which that word begins in the text.]
Compressed Tries

A compressed trie is similar to a standard trie but it ensures that each internal node in the trie has at least two children. It enforces this rule by compressing chains of single-child nodes into individual edges (see the figure below). Let T be a standard trie. We say that an internal node v of T is redundant if v has one child and is not the root. For example, the standard trie of the earlier figure has eight redundant nodes. Let us also say that a chain of k >= 2 edges,

    (v_0, v_1), (v_1, v_2), ..., (v_{k-1}, v_k),

is redundant if:
- v_i is redundant for i = 1, ..., k - 1.
- v_0 and v_k are not redundant.

We can transform T into a compressed trie by replacing each redundant chain (v_0, v_1), ..., (v_{k-1}, v_k) of k >= 2 edges with a single edge (v_0, v_k), relabeling v_k with the concatenation of the labels of nodes v_1, ..., v_k.

[Figure: Compressed trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}. (Compare this with the standard trie shown in the earlier figure.) In addition to compression at the leaves, notice the internal node with label "to" shared by the words stock and stop.]

Thus, nodes in a compressed trie are labeled with strings, which are substrings of strings in the collection, rather than with individual characters. The advantage of a compressed trie over a standard trie is that the number of nodes of the compressed trie is proportional to the number of strings and not to their total length, as shown in the following proposition (compare with the proposition for standard tries).

Proposition: A compressed trie storing a collection S of s strings from an alphabet of size d has the following properties:
- Every internal node of T has at least two children and at most d children.
- T has s leaves.
- The number of nodes of T is O(s).
The attentive reader may wonder whether the compression of paths provides any significant advantage, since it is offset by a corresponding expansion of the node labels. Indeed, a compressed trie is truly advantageous only when it is used as an auxiliary index structure over a collection of strings already stored in a primary structure, and is not required to actually store all the characters of the strings in the collection.

Suppose, for example, that the collection S of strings is an array of strings S[0], S[1], ..., S[s-1]. Instead of storing the label X of a node explicitly, we represent it implicitly by a combination of three integers (i, j, k), such that X = S[i][j:k]; that is, X is the slice of S[i] consisting of the characters from the jth up to but not including the kth. (See the example in the figure below. Also compare with the standard trie of the earlier figure.)

[Figure: (a) A collection S of strings stored in an array. (b) A compact representation of the compressed trie for S.]

This additional compression scheme allows us to reduce the total space for the trie itself from O(n) for the standard trie to O(s) for the compressed trie, where n is the total length of the strings in S and s is the number of strings in S. We must still store the different strings in S, of course, but we nevertheless reduce the space for the trie.

Searching in a compressed trie is not necessarily faster than in a standard trie, since there is still a need to compare every character of the desired pattern with the potentially multi-character labels while traversing paths in the trie.
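A minimal sketch of the implicit-label idea, assuming the collection is an ordinary Python list of strings: a node label (i, j, k) is resolved to a slice on demand, so the trie itself stores only integers. The helper name and the sample strings here are our own illustrative choices, not from the text.

  # Resolve an implicit node label (i, j, k) against the string array S.
  def label(S, triple):
      i, j, k = triple
      return S[i][j:k]        # characters j up to (but not including) k of S[i]

  S = ['bear', 'bell', 'bid', 'bull']   # illustrative collection
  print(label(S, (0, 1, 3)))            # 'ea' -- a multi-character edge label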
Suffix Tries

One of the primary applications for tries is for the case when the strings in the collection S are all the suffixes of a string X. Such a trie is called the suffix trie (also known as a suffix tree or position tree) of string X. For example, the figure below shows the suffix trie for the eight suffixes of the string "minimize."

For a suffix trie, the compact representation presented in the previous section can be further simplified. Namely, the label of each vertex is a pair (j, k) indicating the string X[j:k] (see the figure below). To satisfy the rule that no suffix of X is a prefix of another suffix, we can add a special character, denoted with $, that is not in the original alphabet Σ at the end of X (and thus to every suffix). That is, if string X has length n, we build a trie for the set of n strings X[j:n]$, for j = 0, ..., n - 1.

Saving Space

Using a suffix trie allows us to save space over a standard trie by using several space compression techniques, including those used for the compressed trie.

The advantage of the compact representation of tries now becomes apparent for suffix tries. Since the total length of the suffixes of a string X of length n is

    1 + 2 + ... + n = n(n+1)/2,

storing all the suffixes of X explicitly would take O(n^2) space. Even so, the suffix trie represents these strings implicitly in O(n) space, as formally stated in the following proposition.

Proposition: The compact representation of a suffix trie T for a string X of length n uses O(n) space.

Construction

We can construct the suffix trie for a string of length n with an incremental algorithm like the one given earlier for standard tries. This construction takes O(|Σ| n^2) time because the total length of the suffixes is quadratic in n. However, the (compact) suffix trie for a string of length n can be constructed in O(n) time with a specialized algorithm, different from the one for general tries. This linear-time construction algorithm is fairly complex, however, and is not reported here. Still, we can take advantage of the existence of this fast construction algorithm when we want to use a suffix trie to solve other problems.
[Figure: (a) Suffix trie T for the string X = "minimize"; (b) compact representation of T, where a pair (j, k) denotes the slice X[j:k] in the reference string.]

Using a Suffix Trie

The suffix trie T for a string X can be used to efficiently perform pattern-matching queries on text X. Namely, we can determine whether a pattern P is a substring of X by trying to trace a path associated with P in T. P is a substring of X if and only if such a path can be traced. The search down the trie assumes that nodes in T store some additional information, with respect to the compact representation of the suffix trie:

If node v has label (j, k) and Y is the string of length y associated with the path from the root to v (included), then X[k-y:k] = Y.

This property ensures that we can easily compute the start index of the pattern in the text when a match occurs.
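For illustration only, a brute-force construction in the spirit of the incremental algorithm above inserts every suffix (with a '$' terminator) into a standard trie of nested dicts; the substring test is then a simple path trace. This quadratic-space sketch is our own; the linear-time, linear-space construction mentioned above is beyond our scope.

  def suffix_trie(X):
      """Return a standard trie (nested dicts) storing all suffixes of X + '$'."""
      root = {}
      X = X + '$'                       # terminator: no suffix is a prefix of another
      for i in range(len(X)):
          node = root
          for c in X[i:]:
              node = node.setdefault(c, {})
      return root

  def is_substring(T, P):
      """P is a substring of X if and only if P traces a path from the root of T."""
      node = T
      for c in P:
          if c not in node:
              return False
          node = node[c]
      return True

  T = suffix_trie('minimize')
  print(is_substring(T, 'nimi'))   # True
  print(is_substring(T, 'mize'))   # True
  print(is_substring(T, 'zemi'))   # False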
Search Engine Indexing

The World Wide Web contains a huge collection of text documents (Web pages). Information about these pages is gathered by a program called a Web crawler, which then stores this information in a special dictionary database. A Web search engine allows users to retrieve relevant information from this database, thereby identifying relevant pages on the Web containing given keywords. In this section, we present a simplified model of a search engine.

Inverted Files

The core information stored by a search engine is a dictionary, called an inverted index or inverted file, storing key-value pairs (w, L), where w is a word and L is a collection of pages containing word w. The keys (words) in this dictionary are called index terms and should be a set of vocabulary entries and proper nouns as large as possible. The elements in this dictionary are called occurrence lists and should cover as many Web pages as possible.

We can efficiently implement an inverted index with a data structure consisting of the following:

1. An array storing the occurrence lists of the terms (in no particular order).
2. A compressed trie for the set of index terms, where each leaf stores the index of the occurrence list of the associated term.

The reason for storing the occurrence lists outside the trie is to keep the size of the trie data structure sufficiently small to fit in internal memory. Instead, because of their large total size, the occurrence lists have to be stored on disk.

With our data structure, a query for a single keyword is similar to a word-matching query (described earlier). Namely, we find the keyword in the trie and we return the associated occurrence list.

When multiple keywords are given and the desired output is the pages containing all the given keywords, we retrieve the occurrence list of each keyword using the trie and return their intersection. To facilitate the intersection computation, each occurrence list should be implemented with a sequence sorted by address or with a map, to allow efficient set operations.

In addition to the basic task of returning a list of pages containing given keywords, search engines provide an important additional service by ranking the pages returned by relevance. Devising fast and accurate ranking algorithms for search engines is a major challenge for computer researchers and electronic commerce companies.
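A toy version of the inverted index, as a sketch under our own simplifying assumptions: pages arrive as (identifier, text) pairs, a plain dict stands in for the compressed trie (the trie matters once the term set is large), and a multi-keyword query intersects the occurrence sets, as described above.

  def build_inverted_index(pages):
      """Map each index term to the set of page ids containing it.

      pages: iterable of (page_id, text) pairs.
      """
      index = {}
      for page_id, text in pages:
          for word in text.lower().split():
              index.setdefault(word, set()).add(page_id)
      return index

  def query(index, *keywords):
      """Return the ids of pages containing all the given keywords."""
      sets = [index.get(w.lower(), set()) for w in keywords]
      return set.intersection(*sets) if sets else set()

  pages = [(1, 'sell the stock'), (2, 'buy stock'), (3, 'sell the bull')]
  index = build_inverted_index(pages)
  print(query(index, 'stock', 'sell'))   # {1}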
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

- List the prefixes of the string P = "aaabbaaa" that are also suffixes of P.
- What is the longest (proper) prefix of the string "cgtacgttcgtac" that is also a suffix of this string?
- Draw a figure illustrating the comparisons done by brute-force pattern matching for the text "aaabaadaabaaa" and pattern "aabaaa".
- Repeat the previous problem for the Boyer-Moore algorithm, not counting the comparisons made to compute the last(c) function.
- Repeat the brute-force comparison exercise above for the Knuth-Morris-Pratt algorithm, not counting the comparisons made to compute the failure function.
- Compute a map representing the last function used in the Boyer-Moore pattern-matching algorithm for characters in the pattern string: "the quick brown fox jumped over a lazy cat".
- Compute a table representing the Knuth-Morris-Pratt failure function for the pattern string "cgtacgttcgtac".
- What is the best way to multiply a chain of matrices with dimensions that are 10 x 5, 5 x 2, 2 x 20, 20 x 12, 12 x 4, and 4 x 60? Show your work.
- The earlier LCS figure illustrates that GTTTAA is a longest common subsequence for the given strings X and Y. However, that answer is not unique. Give another common subsequence of X and Y having length six.
- Show the longest common subsequence array L for the two strings X = "skullandbones" and Y = "lullabybabies". What is a longest common subsequence between these strings?
- Draw the frequency array and Huffman tree for the following string: "dogs do not spot hot pots or cats".
- Draw a standard trie for the following set of strings: {abab, baba, ccccc, bbaaaa, caa, bbaacc, cbcc, cbca}.
- Draw a compressed trie for the strings given in the previous problem.
- Draw the compact representation of the suffix trie for the string "minimize minime".
Creativity

- Describe an example of a text T of length n and a pattern P of length m such that the brute-force pattern-matching algorithm achieves a running time that is Ω(nm).
- Adapt the brute-force pattern-matching algorithm in order to implement a function, rfind_brute(T, P), that returns the index at which the rightmost occurrence of pattern P within text T begins, if any.
- Redo the previous problem, adapting the Boyer-Moore pattern-matching algorithm appropriately to implement a function rfind_boyer_moore(T, P).
- Redo the rfind_brute exercise above, adapting the Knuth-Morris-Pratt pattern-matching algorithm appropriately to implement a function rfind_kmp(T, P).
- The count method of Python's str class reports the maximum number of nonoverlapping occurrences of a pattern within a string. For example, the call 'abababa'.count('aba') returns 2 (not 3). Adapt the brute-force pattern-matching algorithm to implement a function, count_brute(T, P), with similar outcome.
- Redo the previous problem, adapting the Boyer-Moore pattern-matching algorithm in order to implement a function count_boyer_moore(T, P).
- Redo the count_brute exercise above, adapting the Knuth-Morris-Pratt pattern-matching algorithm appropriately to implement a function count_kmp(T, P).
- Give a justification of why the compute_kmp_fail function (in the KMP code fragment earlier in this chapter) runs in O(m) time on a pattern of length m.
- Let T be a text of length n, and let P be a pattern of length m. Describe an O(n+m)-time method for finding the longest prefix of P that is a substring of T.
- Say that a pattern P of length m is a circular substring of a text T of length n if P is a (normal) substring of T, or if P is equal to the concatenation of a suffix of T and a prefix of T, that is, if there is an index 0 <= k < m, such that P = T[n-m+k : n] + T[0 : k]. Give an O(n+m)-time algorithm for determining whether P is a circular substring of T.
- The Knuth-Morris-Pratt pattern-matching algorithm can be modified to run faster on binary strings by redefining the failure function as: f(k) = the largest j < k + 1 such that the string P[0:j] + comp(p_j) is a suffix of P[1:k+1], where comp(p_j) denotes the complement of the jth bit of P. Describe how to modify the KMP algorithm to be able to take advantage of this new failure function and also give a method for computing this failure function. Show that this method makes at most n comparisons between the text and the pattern (as opposed to the 2n comparisons needed by the standard KMP algorithm given earlier in this chapter).
- Modify the simplified Boyer-Moore algorithm presented in this chapter using ideas from the KMP algorithm so that it runs in O(n+m) time.
- Design an efficient algorithm for the matrix chain multiplication problem that outputs a fully parenthesized expression for how to multiply the matrices in the chain using the minimum number of operations.
- A native Australian named Anatjari wishes to cross a desert carrying only a single water bottle. He has a map that marks all the watering holes along the way. Assuming he can walk k miles on one bottle of water, design an efficient algorithm for determining where Anatjari should refill his bottle in order to make as few stops as possible. Argue why your algorithm is correct.
- Describe an efficient greedy algorithm for making change for a specified value using a minimum number of coins, assuming there are four denominations of coins (called quarters, dimes, nickels, and pennies), with values 25, 10, 5, and 1, respectively. Argue why your algorithm is correct.
- Give an example set of denominations of coins so that a greedy change-making algorithm will not use the minimum number of coins.
- In the art gallery guarding problem we are given a line L that represents a long hallway in an art gallery. We are also given a set X = {x_0, x_1, ..., x_{n-1}} of real numbers that specify the positions of paintings in this hallway. Suppose that a single guard can protect all the paintings within distance at most 1 of his or her position (on both sides). Design an algorithm for finding a placement of guards that uses the minimum number of guards to guard all the paintings with positions in X.
- Let P be a convex polygon. A triangulation of P is an addition of diagonals connecting the vertices of P so that each interior face is a triangle. The weight of a triangulation is the sum of the lengths of the diagonals. Assuming that we can compute lengths and add and compare them in constant time, give an efficient algorithm for computing a minimum-weight triangulation of P.
- Let T be a text string of length n. Describe an O(n)-time method for finding the longest prefix of T that is a substring of the reversal of T.
- Describe an efficient algorithm to find the longest palindrome that is a suffix of a string T of length n. Recall that a palindrome is a string that is equal to its reversal. What is the running time of your method?
- Given a sequence S = (x_0, x_1, ..., x_{n-1}) of numbers, describe an O(n^2)-time algorithm for finding a longest subsequence T = (x_{i_0}, x_{i_1}, ..., x_{i_{k-1}}) of numbers such that i_j < i_{j+1} and x_{i_j} > x_{i_{j+1}}. That is, T is a longest decreasing subsequence of S.
- Give an efficient algorithm for determining if a pattern P is a subsequence (not substring) of a text T. What is the running time of your algorithm?
- Define the edit distance between two strings X and Y of length n and m, respectively, to be the number of edits that it takes to change X into Y. An edit consists of a character insertion, a character deletion, or a character replacement. For example, the strings "algorithm" and "rhythm" have edit distance 6. Design an O(nm)-time algorithm for computing the edit distance between X and Y.
- Let X and Y be strings of length n and m, respectively. Define B(j, k) to be the length of the longest common substring of the suffix X[n-j:n] and the suffix Y[m-k:m]. Design an O(nm)-time algorithm for computing all the values of B(j, k) for j = 1, ..., n and k = 1, ..., m.
- Anna has just won a contest that allows her to take n pieces of candy out of a candy store for free. Anna is old enough to realize that some candy is expensive, while other candy is relatively cheap, costing much less. The jars of candy are numbered 0, 1, ..., m-1, so that jar j has n_j pieces in it, with a price of c_j per piece. Design an O(n+m)-time algorithm that allows Anna to maximize the value of the pieces of candy she takes for her winnings. Show that your algorithm produces the maximum value for Anna.
- Let three integer arrays, A, B, and C, be given, each of size n. Given an arbitrary integer k, design an O(n^2 log n)-time algorithm to determine if there exist numbers, a in A, b in B, and c in C, such that k = a + b + c.
- Give an O(n^2)-time algorithm for the previous problem.
- Given a string X of length n and a string Y of length m, describe an O(n+m)-time algorithm for finding the longest prefix of X that is a suffix of Y.
- Give an efficient algorithm for deleting a string from a standard trie and analyze its running time.
- Give an efficient algorithm for deleting a string from a compressed trie and analyze its running time.
- Describe an algorithm for constructing the compact representation of a suffix trie, given its noncompact representation, and analyze its running time.

Projects

- Use the LCS algorithm to compute the best sequence alignment between some DNA strings, which you can get online from GenBank.
- Write a program that takes two character strings (which could be, for example, representations of DNA strands) and computes their edit distance, showing the corresponding pieces (see the edit-distance exercise above).
- Perform an experimental analysis of the efficiency (number of character comparisons performed) of the brute-force and KMP pattern-matching algorithms for varying-length patterns.
- Perform an experimental analysis of the efficiency (number of character comparisons performed) of the brute-force and Boyer-Moore pattern-matching algorithms for varying-length patterns.
- Perform an experimental comparison of the relative speeds of the brute-force, KMP, and Boyer-Moore pattern-matching algorithms. Document the relative running times on large text documents that are then searched using varying-length patterns.
- Experiment with the efficiency of the find method of Python's str class and develop a hypothesis about which pattern-matching algorithm it uses. Try using inputs that are likely to cause both best-case and worst-case running times for various algorithms. Describe your experiments and your conclusions.
- Implement a compression and decompression scheme that is based on Huffman coding.
- Create a class that implements a standard trie for a set of ASCII strings. The class should have a constructor that takes a list of strings as an argument, and the class should have a method that tests whether a given string is stored in the trie.
- Create a class that implements a compressed trie for a set of ASCII strings. The class should have a constructor that takes a list of strings as an argument, and the class should have a method that tests whether a given string is stored in the trie.
- Create a class that implements a prefix trie for an ASCII string. The class should have a constructor that takes a string as an argument, and a method for pattern matching on the string.
- Implement the simplified search engine described earlier for the pages of a small Web site. Use all the words in the pages of the site as index terms, excluding stop words such as articles, prepositions, and pronouns.
- Implement a search engine for the pages of a small Web site by adding a page-ranking feature to the simplified search engine described earlier. Your page-ranking feature should return the most relevant pages first. Use all the words in the pages of the site as index terms, excluding stop words such as articles, prepositions, and pronouns.
Chapter Notes

The KMP algorithm is described by Knuth, Morris, and Pratt in their journal article, and Boyer and Moore describe their algorithm in a journal article published the same year. In their article, however, Knuth et al. also prove that the Boyer-Moore algorithm runs in linear time. More recently, Cole shows that the Boyer-Moore algorithm makes at most 3n character comparisons in the worst case, and this bound is tight. All of the algorithms discussed above are also discussed in the book by Aho, albeit in a more theoretical framework, including the methods for regular-expression pattern matching. The reader interested in further study of string pattern-matching algorithms is referred to the books by Stephen and by Aho, and to Crochemore and Lecroq.

Dynamic programming was developed in the operations research community and formalized by Bellman. The trie was invented by Morrison and is discussed extensively in the classic Sorting and Searching book by Knuth. The name "Patricia" is short for "Practical Algorithm to Retrieve Information Coded in Alphanumeric." McCreight shows how to construct suffix tries in linear time. An introduction to the field of information retrieval, which includes a discussion of search engines for the Web, is provided in the book by Baeza-Yates and Ribeiro-Neto.
Graph Algorithms

Contents

Graphs
  The Graph ADT
Data Structures for Graphs
  Edge List Structure
  Adjacency List Structure
  Adjacency Map Structure
  Adjacency Matrix Structure
  Python Implementation
Graph Traversals
  Depth-First Search
  DFS Implementation and Extensions
  Breadth-First Search
Transitive Closure
Directed Acyclic Graphs
  Topological Ordering
Shortest Paths
  Weighted Graphs
  Dijkstra's Algorithm
Minimum Spanning Trees
  Prim-Jarnik Algorithm
  Kruskal's Algorithm
  Disjoint Partitions and Union-Find Structures
Exercises
Graphs

A graph is a way of representing relationships that exist between pairs of objects. That is, a graph is a set of objects, called vertices, together with a collection of pairwise connections between them, called edges. Graphs have applications in modeling many domains, including mapping, transportation, computer networks, and electrical engineering. By the way, this notion of a "graph" should not be confused with bar charts and function plots, as these kinds of "graphs" are unrelated to the topic of this chapter.

Viewed abstractly, a graph G is simply a set V of vertices and a collection E of pairs of vertices from V, called edges. Thus, a graph is a way of representing connections or relationships between pairs of objects from some set V. Incidentally, some books use different terminology for graphs and refer to what we call vertices as nodes and what we call edges as arcs. We use the terms "vertices" and "edges."

Edges in a graph are either directed or undirected. An edge (u, v) is said to be directed from u to v if the pair (u, v) is ordered, with u preceding v. An edge (u, v) is said to be undirected if the pair (u, v) is not ordered. Undirected edges are sometimes denoted with set notation, as {u, v}, but for simplicity we use the pair notation (u, v), noting that in the undirected case (u, v) is the same as (v, u). Graphs are typically visualized by drawing the vertices as ovals or rectangles and the edges as segments or curves connecting pairs of ovals and rectangles. The following are some examples of directed and undirected graphs.

Example: We can visualize collaborations among the researchers of a certain discipline by constructing a graph whose vertices are associated with the researchers themselves, and whose edges connect pairs of vertices associated with researchers who have coauthored a paper or book (see the figure below). Such edges are undirected because coauthorship is a symmetric relation; that is, if A has coauthored something with B, then B necessarily has coauthored something with A.

[Figure: Graph of coauthorship among some authors: Chiang, Garg, Goldwasser, Goodrich, Preparata, Snoeyink, Tamassia, Tollis, and Vitter.]
Example: We can associate with an object-oriented program a graph whose vertices represent the classes defined in the program, and whose edges indicate inheritance between classes. There is an edge from a vertex v to a vertex u if the class for v inherits from the class for u. Such edges are directed because the inheritance relation only goes in one direction (that is, it is asymmetric).

If all the edges in a graph are undirected, then we say the graph is an undirected graph. Likewise, a directed graph, also called a digraph, is a graph whose edges are all directed. A graph that has both directed and undirected edges is often called a mixed graph. Note that an undirected or mixed graph can be converted into a directed graph by replacing every undirected edge (u, v) by the pair of directed edges (u, v) and (v, u). It is often useful, however, to keep undirected and mixed graphs represented as they are, for such graphs have several applications, as in the following example.

Example: A city map can be modeled as a graph whose vertices are intersections or dead ends, and whose edges are stretches of streets without intersections. This graph has both undirected edges, which correspond to stretches of two-way streets, and directed edges, which correspond to stretches of one-way streets. Thus, in this way, a graph modeling a city map is a mixed graph.

Example: Physical examples of graphs are present in the electrical wiring and plumbing networks of a building. Such networks can be modeled as graphs, where each connector, fixture, or outlet is viewed as a vertex, and each uninterrupted stretch of wire or pipe is viewed as an edge. Such graphs are actually components of much larger graphs, namely the local power and water distribution networks. Depending on the specific aspects of these graphs that we are interested in, we may consider their edges as undirected or directed, for, in principle, water can flow in a pipe and current can flow in a wire in either direction.

The two vertices joined by an edge are called the end vertices (or endpoints) of the edge. If an edge is directed, its first endpoint is its origin and the other is the destination of the edge. Two vertices u and v are said to be adjacent if there is an edge whose end vertices are u and v. An edge is said to be incident to a vertex if the vertex is one of the edge's endpoints. The outgoing edges of a vertex are the directed edges whose origin is that vertex. The incoming edges of a vertex are the directed edges whose destination is that vertex. The degree of a vertex v, denoted deg(v), is the number of incident edges of v. The in-degree and out-degree of a vertex v are the number of the incoming and outgoing edges of v, and are denoted indeg(v) and outdeg(v), respectively.
Example: We can study air transportation by constructing a graph G, called a flight network, whose vertices are associated with airports, and whose edges are associated with flights (see the figure below). In graph G, the edges are directed because a given flight has a specific travel direction. The endpoints of an edge e in G correspond respectively to the origin and destination of the flight corresponding to e. Two airports are adjacent in G if there is a flight that flies between them, and an edge e is incident to a vertex v in G if the flight for e flies to or from the airport for v. The outgoing edges of a vertex v correspond to the outbound flights from v's airport, and the incoming edges correspond to the inbound flights to v's airport. Finally, the in-degree of a vertex v of G corresponds to the number of inbound flights to v's airport, and the out-degree of a vertex v in G corresponds to the number of outbound flights.

[Figure: Example of a directed graph representing a flight network, with vertices for the airports LAX, DFW, ORD, SFO, BOS, JFK, and MIA, and directed edges for AA, DL, UA, NW, and SW flights. The endpoints of edge UA 120 are LAX and ORD; hence, LAX and ORD are adjacent. The in-degree of DFW is 3, and the out-degree of DFW is 2.]

The definition of a graph refers to the group of edges as a collection, not a set, thus allowing two undirected edges to have the same end vertices, and for two directed edges to have the same origin and the same destination. Such edges are called parallel edges or multiple edges. A flight network can contain parallel edges (as in the example above), such that multiple edges between the same pair of vertices could indicate different flights operating on the same route at different times of the day. Another special type of edge is one that connects a vertex to itself. Namely, we say that an edge (undirected or directed) is a self-loop if its two endpoints coincide. A self-loop may occur in a graph associated with a city map (as in the earlier example), where it would correspond to a "circle" (a curving street that returns to its starting point).

With few exceptions, graphs do not have parallel edges or self-loops. Such graphs are said to be simple. Thus, we can usually say that the edges of a simple graph are a set of vertex pairs (and not just a collection). Throughout this chapter, we assume that a graph is simple unless otherwise specified.
A path is a sequence of alternating vertices and edges that starts at a vertex and ends at a vertex such that each edge is incident to its predecessor and successor vertex. A cycle is a path that starts and ends at the same vertex, and that includes at least one edge. We say that a path is simple if each vertex in the path is distinct, and we say that a cycle is simple if each vertex in the cycle is distinct, except for the first and last one. A directed path is a path such that all edges are directed and are traversed along their direction. A directed cycle is similarly defined. For example, in the flight network of the figure above, (BOS, NW 35, JFK, AA 1387, DFW) is a directed simple path, and (LAX, UA 120, ORD, UA 877, DFW, AA 49, LAX) is a directed simple cycle. Note that a directed graph may have a cycle consisting of two edges with opposite direction between the same pair of vertices, for example (ORD, UA 877, DFW, DL 335, ORD) in that figure. A directed graph is acyclic if it has no directed cycles. For example, if we were to remove the edge UA 877 from the graph in the figure, the remaining graph is acyclic. If a graph is simple, we may omit the edges when describing path P or cycle C, as these are well defined, in which case P is a list of adjacent vertices and C is a cycle of adjacent vertices.

Example: Given a graph G representing a city map (see the earlier example), we can model a couple driving to dinner at a recommended restaurant as traversing a path through G. If they know the way, and do not accidentally go through the same intersection twice, then they traverse a simple path in G. Likewise, we can model the entire trip the couple takes, from their home to the restaurant and back, as a cycle. If they go home from the restaurant in a completely different way than how they went, not even going through the same intersection twice, then their entire round trip is a simple cycle. Finally, if they travel along one-way streets for their entire trip, we can model their night out as a directed cycle.

Given vertices u and v of a (directed) graph G, we say that u reaches v, and that v is reachable from u, if G has a (directed) path from u to v. In an undirected graph, the notion of reachability is symmetric, that is to say, u reaches v if and only if v reaches u. However, in a directed graph, it is possible that u reaches v but v does not reach u, because a directed path must be traversed according to the respective directions of the edges. A graph is connected if, for any two vertices, there is a path between them. A directed graph G is strongly connected if for any two vertices u and v of G, u reaches v and v reaches u. (See the figure below for some examples.)

A subgraph of a graph G is a graph H whose vertices and edges are subsets of the vertices and edges of G, respectively. A spanning subgraph of G is a subgraph of G that contains all the vertices of the graph G. If a graph G is not connected, its maximal connected subgraphs are called the connected components of G. A forest is a graph without cycles. A tree is a connected forest, that is, a connected graph without cycles. A spanning tree of a graph is a spanning subgraph that is a tree. (Note that this definition of a tree is somewhat different from the one given earlier in this book, as there is not necessarily a designated root.)
[Figure: Examples of reachability in a directed graph: (a) a directed path from BOS to LAX is highlighted; (b) a directed cycle (ORD, MIA, DFW, LAX, ORD) is highlighted; its vertices induce a strongly connected subgraph; (c) the subgraph of the vertices and edges reachable from ORD is highlighted; (d) the removal of the dashed edges results in an acyclic directed graph.]

Example: Perhaps the most talked about graph today is the Internet, which can be viewed as a graph whose vertices are computers and whose (undirected) edges are communication connections between pairs of computers on the Internet. The computers and the connections between them in a single domain, like wiley.com, form a subgraph of the Internet. If this subgraph is connected, then two users on computers in this domain can send email to one another without having their information packets ever leave their domain. Suppose the edges of this subgraph form a spanning tree. This implies that, if even a single connection goes down (for example, because someone pulls a communication cable out of the back of a computer in this domain), then this subgraph will no longer be connected.
In the propositions that follow, we explore a few important properties of graphs.

Proposition: If G is a graph with m edges and vertex set V, then

    Σ_{v in V} deg(v) = 2m.

Justification: An edge (u, v) is counted twice in the summation above: once by its endpoint u and once by its endpoint v. Thus, the total contribution of the edges to the degrees of the vertices is twice the number of edges.

Proposition: If G is a directed graph with m edges and vertex set V, then

    Σ_{v in V} indeg(v) = Σ_{v in V} outdeg(v) = m.

Justification: In a directed graph, an edge (u, v) contributes one unit to the out-degree of its origin u and one unit to the in-degree of its destination v. Thus, the total contribution of the edges to the out-degrees of the vertices is equal to the number of edges, and similarly for the in-degrees.

We next show that a simple graph with n vertices has O(n^2) edges.

Proposition: Let G be a simple graph with n vertices and m edges. If G is undirected, then m <= n(n-1)/2, and if G is directed, then m <= n(n-1).

Justification: Suppose that G is undirected. Since no two edges can have the same endpoints and there are no self-loops, the maximum degree of a vertex in G is n-1 in this case. Thus, by the first proposition above, m <= n(n-1)/2. Now suppose that G is directed. Since no two edges can have the same origin and destination, and there are no self-loops, the maximum in-degree of a vertex in G is n-1 in this case. Thus, by the second proposition above, m <= n(n-1).

There are a number of simple properties of trees, forests, and connected graphs.

Proposition: Let G be an undirected graph with n vertices and m edges.
- If G is connected, then m >= n - 1.
- If G is a tree, then m = n - 1.
- If G is a forest, then m <= n - 1.
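As a quick empirical check of the first proposition, the following sketch (our own, using the standard library's random and itertools modules) samples a random simple undirected graph as a set of vertex pairs and confirms that the degree sum is exactly twice the edge count.

  import itertools, random

  n = 8
  vertices = range(n)
  # Sample a random simple undirected graph on n vertices with 12 edges.
  edges = random.sample(list(itertools.combinations(vertices, 2)), 12)

  degree = {v: 0 for v in vertices}
  for u, v in edges:                  # each edge contributes to two degrees
      degree[u] += 1
      degree[v] += 1

  assert sum(degree.values()) == 2 * len(edges)   # sum of degrees equals 2m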
The Graph ADT

A graph is a collection of vertices and edges. We model the abstraction as a combination of three data types: Vertex, Edge, and Graph. A Vertex is a lightweight object that stores an arbitrary element provided by the user (e.g., an airport code); we assume it supports a method, element(), to retrieve the stored element. An Edge also stores an associated object (e.g., a flight number, travel distance, cost), retrieved with the element() method. In addition, we assume that an Edge supports the following methods:

endpoints(): Return a tuple (u, v) such that vertex u is the origin of the edge and vertex v is the destination; for an undirected graph, the orientation is arbitrary.

opposite(v): Assuming vertex v is one endpoint of the edge (either origin or destination), return the other endpoint.

The primary abstraction for a graph is the Graph ADT. We presume that a graph can be either undirected or directed, with the designation declared upon construction; recall that a mixed graph can be represented as a directed graph, modeling edge {u, v} as a pair of directed edges (u, v) and (v, u). The Graph ADT includes the following methods:

vertex_count(): Return the number of vertices of the graph.

vertices(): Return an iteration of all the vertices of the graph.

edge_count(): Return the number of edges of the graph.

edges(): Return an iteration of all the edges of the graph.

get_edge(u, v): Return the edge from vertex u to vertex v, if one exists; otherwise return None. For an undirected graph, there is no difference between get_edge(u, v) and get_edge(v, u).

degree(v, out=True): For an undirected graph, return the number of edges incident to vertex v. For a directed graph, return the number of outgoing (resp. incoming) edges incident to vertex v, as designated by the optional parameter.

incident_edges(v, out=True): Return an iteration of all edges incident to vertex v. In the case of a directed graph, report outgoing edges by default; report incoming edges if the optional parameter is set to False.

insert_vertex(x=None): Create and return a new Vertex storing element x.

insert_edge(u, v, x=None): Create and return a new Edge from vertex u to vertex v, storing element x (None by default).

remove_vertex(v): Remove vertex v and all its incident edges from the graph.

remove_edge(e): Remove edge e from the graph.
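To make the lightweight vertex and edge abstractions concrete, here is a minimal sketch of the two classes, written in the spirit of (but not identical to) the Python implementation presented later in the chapter; the __hash__ methods are our own addition so that instances can serve as keys in the dict-based structures discussed next.

  class Vertex:
      """Lightweight vertex structure for a graph."""
      __slots__ = '_element'

      def __init__(self, x):
          self._element = x

      def element(self):
          """Return the element associated with this vertex."""
          return self._element

      def __hash__(self):              # allow a vertex to be a map/set key
          return hash(id(self))

  class Edge:
      """Lightweight edge structure for a graph."""
      __slots__ = '_origin', '_destination', '_element'

      def __init__(self, u, v, x):
          self._origin = u
          self._destination = v
          self._element = x

      def endpoints(self):
          """Return (u, v) tuple for vertices u and v."""
          return (self._origin, self._destination)

      def opposite(self, v):
          """Return the vertex that is opposite v on this edge."""
          return self._destination if v is self._origin else self._origin

      def element(self):
          """Return the element associated with this edge."""
          return self._element

      def __hash__(self):              # allow an edge to be a map/set key
          return hash((id(self._origin), id(self._destination)))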
Data Structures for Graphs

In this section, we introduce four data structures for representing a graph. In each representation, we maintain a collection to store the vertices of a graph. However, the four representations differ greatly in the way they organize the edges.

- In an edge list, we maintain an unordered list of all edges. This minimally suffices, but there is no efficient way to locate a particular edge (u, v), or the set of all edges incident to a vertex v.
- In an adjacency list, we maintain, for each vertex, a separate list containing those edges that are incident to the vertex. The complete set of edges can be determined by taking the union of the smaller sets, while the organization allows us to more efficiently find all edges incident to a given vertex.
- An adjacency map is very similar to an adjacency list, but the secondary container of all edges incident to a vertex is organized as a map, rather than as a list, with the adjacent vertex serving as a key. This allows for access to a specific edge (u, v) in O(1) expected time.
- An adjacency matrix provides worst-case O(1) access to a specific edge (u, v) by maintaining an n x n matrix, for a graph with n vertices. Each entry is dedicated to storing a reference to the edge (u, v) for a particular pair of vertices u and v; if no such edge exists, the entry will be None.

A summary of the performance of these structures is given in the table below. We give further explanation of the structures in the remainder of this section.

  Operation             | Edge List | Adj. List        | Adj. Map   | Adj. Matrix
  ----------------------+-----------+------------------+------------+------------
  vertex_count()        | O(1)      | O(1)             | O(1)       | O(1)
  edge_count()          | O(1)      | O(1)             | O(1)       | O(1)
  vertices()            | O(n)      | O(n)             | O(n)       | O(n)
  edges()               | O(m)      | O(m)             | O(m)       | O(m)
  get_edge(u,v)         | O(m)      | O(min(d_u, d_v)) | O(1) exp.  | O(1)
  degree(v)             | O(m)      | O(1)             | O(1)       | O(n)
  incident_edges(v)     | O(m)      | O(d_v)           | O(d_v)     | O(n)
  insert_vertex(x)      | O(1)      | O(1)             | O(1)       | O(n^2)
  remove_vertex(v)      | O(m)      | O(d_v)           | O(d_v)     | O(n^2)
  insert_edge(u,v,x)    | O(1)      | O(1)             | O(1) exp.  | O(1)
  remove_edge(e)        | O(1)      | O(1)             | O(1) exp.  | O(1)

Table: A summary of the running times for the methods of the graph ADT, using the graph representations discussed in this section. We let n denote the number of vertices, m the number of edges, and d_v the degree of vertex v. Note that the adjacency matrix uses O(n^2) space, while all other structures use O(n + m) space.
Edge List Structure

The edge list structure is possibly the simplest, though not the most efficient, representation of a graph G. All vertex objects are stored in an unordered list V, and all edge objects are stored in an unordered list E. We illustrate an example of the edge list structure for a graph in the figure below.

[Figure: (a) A graph G; (b) schematic representation of the edge list structure for G. Notice that an edge object refers to the two vertex objects that correspond to its endpoints, but that vertices do not refer to incident edges.]

To support the many methods of the graph ADT (described in the previous section), we assume the following additional features of an edge list representation. Collections V and E are represented with doubly linked lists using our PositionalList class from earlier in this book.

Vertex Objects
The vertex object for a vertex v storing element x has instance variables for:
- A reference to element x, to support the element() method.
- A reference to the position of the vertex instance in the list V, thereby allowing v to be efficiently removed from V if it were removed from the graph.

Edge Objects
The edge object for an edge e storing element x has instance variables for:
- A reference to element x, to support the element() method.
- References to the vertex objects associated with the endpoint vertices of e. These allow the edge instance to provide constant-time support for methods endpoints() and opposite(v).
- A reference to the position of the edge instance in list E, thereby allowing e to be efficiently removed from E if it were removed from the graph.
Performance of the Edge List Structure

The performance of an edge list structure in fulfilling the graph ADT is summarized in the table below. We begin by discussing the space usage, which is O(n + m) for representing a graph with n vertices and m edges. Each individual vertex or edge instance uses O(1) space, and the additional lists V and E use space proportional to their number of entries.

In terms of running time, the edge list structure does as well as one could hope in terms of reporting the number of vertices or edges, or in producing an iteration of those vertices or edges. By querying the respective list V or E, the vertex_count and edge_count methods run in O(1) time, and by iterating through the appropriate list, the methods vertices and edges run respectively in O(n) and O(m) time.

The most significant limitations of an edge list structure, especially when compared to the other graph representations, are the O(m) running times of methods get_edge(u,v), degree(v), and incident_edges(v). The problem is that with all edges of the graph in an unordered list E, the only way to answer those queries is through an exhaustive inspection of all edges. The other data structures introduced in this section will implement these methods more efficiently.

Finally, we consider the methods that update the graph. It is easy to add a new vertex or a new edge to the graph in O(1) time. For example, a new edge can be added to the graph by creating an Edge instance storing the given element as data, adding that instance to the positional list E, and recording its resulting position within E as an attribute of the edge. That stored position can later be used to locate and remove this edge from E in O(1) time, and thus implement the method remove_edge(e).

It is worth discussing why the remove_vertex(v) method has a running time of O(m). As stated in the graph ADT, when a vertex v is removed from the graph, all edges incident to v must also be removed (otherwise, we would have a contradiction of edges that refer to vertices that are not part of the graph). To locate the incident edges to the vertex, we must examine all edges of E.

Operation                                            | Running Time
vertex_count(), edge_count()                         | O(1)
vertices()                                           | O(n)
edges()                                              | O(m)
get_edge(u,v), degree(v), incident_edges(v)          | O(m)
insert_vertex(x), insert_edge(u,v,x), remove_edge(e) | O(1)
remove_vertex(v)                                     | O(m)

Running times of the methods of a graph implemented with the edge list structure. The space used is O(n + m), where n is the number of vertices and m is the number of edges.
Adjacency List Structure

In contrast to the edge list representation of a graph, the adjacency list structure groups the edges of a graph by storing them in smaller, secondary containers that are associated with each individual vertex. Specifically, for each vertex v, we maintain a collection I(v), called the incidence collection of v, whose entries are edges incident to v. (In the case of a directed graph, outgoing and incoming edges can be respectively stored in two separate collections, I_out(v) and I_in(v).) Traditionally, the incidence collection I(v) for a vertex v is a list, which is why we call this way of representing a graph the adjacency list structure.

We require that the primary structure for an adjacency list maintain the collection V of vertices in a way so that we can locate the secondary structure I(v) for a given vertex v in O(1) time. This could be done by using a positional list to represent V, with each Vertex instance maintaining a direct reference to its I(v) incidence collection; we illustrate such an adjacency list structure of a graph in the figure below. If vertices can be uniquely numbered from 0 to n-1, we could instead use a primary array-based structure to access the appropriate secondary lists.

The primary benefit of an adjacency list is that the collection I(v) contains exactly those edges that should be reported by the method incident_edges(v). Therefore, we can implement this method by iterating the edges of I(v) in O(deg(v)) time, where deg(v) is the degree of vertex v. This is the best possible outcome for any graph representation, because there are deg(v) edges to be reported.

Figure: (a) An undirected graph G; (b) a schematic representation of the adjacency list structure for G. Collection V is the primary list of vertices, and each vertex has an associated list of incident edges. Although not diagrammed as such, we presume that each edge of the graph is represented with a unique Edge instance that maintains references to its endpoint vertices.
Performance of the Adjacency List Structure

The table below summarizes the performance of the adjacency list structure implementation of a graph, assuming that the primary collection V and all secondary collections I(v) are implemented with doubly linked lists.

Asymptotically, the space requirements for an adjacency list are the same as an edge list structure, using O(n + m) space for a graph with n vertices and m edges. The primary list of vertices uses O(n) space. The sum of the lengths of all secondary lists is O(m), for reasons formalized earlier in this chapter. In short, an undirected edge (u,v) is referenced in both I(u) and I(v), but its presence in the graph results in only a constant amount of additional space.

We have already noted that the incident_edges(v) method can be achieved in O(deg(v)) time based on use of I(v). The degree(v) method of the graph ADT can be implemented in O(1) time, assuming collection I(v) can report its size in similar time. To locate a specific edge for implementing get_edge(u,v), we can search through either I(u) or I(v); by choosing the smaller of the two, we get O(min(deg(u), deg(v))) running time.

The rest of the bounds in the table can be achieved with additional care. To efficiently support deletions of edges, an edge (u,v) would need to maintain a reference to its positions within both I(u) and I(v), so that it could be deleted from those collections in O(1) time. To remove a vertex v, we must also remove any incident edges, but at least we can locate those edges in O(deg(v)) time. The easiest way to support edges() in O(m) time and edge_count() in O(1) time is to maintain an auxiliary list E of edges, as in the edge list representation. Otherwise, we can implement the edges method in O(n + m) time by accessing each secondary list and reporting its edges, taking care not to report an undirected edge (u,v) twice.

Operation                                            | Running Time
vertex_count(), edge_count()                         | O(1)
vertices()                                           | O(n)
edges()                                              | O(m)
get_edge(u,v)                                        | O(min(deg(u), deg(v)))
degree(v)                                            | O(1)
incident_edges(v)                                    | O(deg(v))
insert_vertex(x), insert_edge(u,v,x), remove_edge(e) | O(1)
remove_vertex(v)                                     | O(deg(v))

Running times of the methods of a graph implemented with the adjacency list structure. The space used is O(n + m), where n is the number of vertices and m is the number of edges.
Adjacency Map Structure

In the adjacency list structure, we assume that the secondary incidence collections are implemented as unordered linked lists. Such a collection I(v) uses space proportional to O(deg(v)), allows an edge to be added or removed in O(1) time, and allows an iteration of all edges incident to vertex v in O(deg(v)) time. However, the best implementation of get_edge(u,v) requires O(min(deg(u), deg(v))) time, because we must search through either I(u) or I(v).

We can improve the performance by using a hash-based map to implement I(v) for each vertex v. Specifically, we let the opposite endpoint of each incident edge serve as a key in the map, with the edge structure serving as the value. We call such a graph representation an adjacency map (see the figure below). The space usage for an adjacency map remains O(n + m), because I(v) uses O(deg(v)) space for each vertex v, as with the adjacency list.

The advantage of the adjacency map, relative to an adjacency list, is that the get_edge(u,v) method can be implemented in expected O(1) time by searching for vertex u as a key in I(v), or vice versa. This provides a likely improvement over the adjacency list, while retaining the worst-case bound of O(min(deg(u), deg(v))).

In comparing the performance of the adjacency map to the other representations (see the summary table earlier in this section), we find that it essentially achieves optimal running times for all methods, making it an excellent all-purpose choice as a graph representation.

Figure: (a) An undirected graph G; (b) a schematic representation of the adjacency map structure for G. Each vertex maintains a secondary map in which neighboring vertices serve as keys, with the connecting edges as associated values. Although not diagrammed as such, we presume that there is a unique Edge instance for each edge of the graph, and that it maintains references to its endpoint vertices.
Adjacency Matrix Structure

The adjacency matrix structure for a graph G augments the edge list structure with a matrix A (that is, a two-dimensional array), which allows us to locate an edge between a given pair of vertices in worst-case constant time. In the adjacency matrix representation, we think of the vertices as being the integers in the set {0, 1, ..., n-1} and the edges as being pairs of such integers. This allows us to store references to edges in the cells of a two-dimensional n x n array A. Specifically, the cell A[i,j] holds a reference to the edge (u,v), if it exists, where u is the vertex with index i and v is the vertex with index j. If there is no such edge, then A[i,j] = None. We note that array A is symmetric if graph G is undirected, as A[i,j] = A[j,i] for all pairs i and j.

The most significant advantage of an adjacency matrix is that any edge (u,v) can be accessed in worst-case O(1) time; recall that the adjacency map supports that operation in O(1) expected time. However, several operations are less efficient with an adjacency matrix. For example, to find the edges incident to vertex v, we must presumably examine all n entries in the row associated with v; recall that an adjacency list or map can locate those edges in optimal O(deg(v)) time. Adding or removing vertices from a graph is problematic, as the matrix must be resized.

Furthermore, the O(n^2) space usage of an adjacency matrix is typically far worse than the O(n + m) space required of the other representations. Although, in the worst case, the number of edges in a dense graph will be proportional to n^2, most real-world graphs are sparse. In such cases, use of an adjacency matrix is inefficient. However, if a graph is dense, the constants of proportionality of an adjacency matrix can be smaller than those of an adjacency list or map. In fact, if edges do not have auxiliary data, a Boolean adjacency matrix can use one bit per edge slot, such that A[i,j] = True if and only if the associated (u,v) is an edge.

Figure: (a) An undirected graph G; (b) a schematic representation of the auxiliary adjacency matrix structure for G, in which the n vertices are mapped to integer indices. Although not diagrammed as such, we presume that there is a unique Edge instance for each edge, and that it maintains references to its endpoint vertices. We also assume that there is a secondary edge list (not pictured), to allow the edges() method to run in O(m) time, for a graph with m edges.
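Since the Python implementation that follows is based on the adjacency map, the matrix idea deserves a small sketch of its own. The following minimal class is an assumption for illustration (the name AdjacencyMatrixGraph and its fixed-size design are not part of the book's code); it shows the O(1) edge lookup and the O(n) row scan for incident edges.

class AdjacencyMatrixGraph:
    """Minimal undirected graph with a fixed vertex count n (a sketch).

    Vertices are the integers 0..n-1; cell [u][v] stores the edge element.
    """
    def __init__(self, n):
        self._n = n
        # n x n matrix of edge references; None means "no edge"
        self._matrix = [[None] * n for _ in range(n)]

    def insert_edge(self, u, v, x=True):
        self._matrix[u][v] = x
        self._matrix[v][u] = x            # symmetric, since undirected

    def get_edge(self, u, v):
        return self._matrix[u][v]         # worst-case O(1) lookup

    def incident_edges(self, v):
        # O(n) scan of row v, even when deg(v) is small
        for u in range(self._n):
            if self._matrix[v][u] is not None:
                yield (v, u, self._matrix[v][u])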
Python Implementation

In this section, we provide an implementation of the graph ADT. Our implementation will support directed or undirected graphs, but for ease of explanation, we first describe it in the context of an undirected graph.

We use a variant of the adjacency map representation. For each vertex v, we use a Python dictionary to represent the secondary incidence map I(v). However, we do not explicitly maintain lists V and E, as originally described in the edge list representation. The list V is replaced by a top-level dictionary D that maps each vertex v to its incidence map I(v); note that we can iterate through all vertices by generating the set of keys for dictionary D. By using such a dictionary D to map vertices to the secondary incidence maps, we need not maintain references to those incidence maps as part of the vertex structures. Also, a vertex does not need to explicitly maintain a reference to its position in D, because it can be determined in O(1) expected time. This greatly simplifies our implementation. However, a consequence of our design is that some of the worst-case running time bounds for the graph ADT operations, given in the earlier summary table, become expected bounds. Rather than maintain list E, we are content with taking the union of the edges found in the various incidence maps; technically, this runs in O(n + m) time rather than strictly O(m) time, as the dictionary D has n keys, even if some incidence maps are empty.

Our implementation of the graph ADT is given in the code fragments that follow. Classes Vertex and Edge are rather simple, and can be nested within the more complex Graph class. Note that we define the __hash__ method for both Vertex and Edge so that those instances can be used as keys in Python's hash-based sets and dictionaries.

Graphs are undirected by default, but can be declared as directed with an optional parameter to the constructor. Internally, we manage the directed case by having two different top-level dictionary instances, _outgoing and _incoming, such that _outgoing[v] maps to another dictionary representing I_out(v), and _incoming[v] maps to a representation of I_in(v). In order to unify our treatment of directed and undirected graphs, we continue to use the _outgoing and _incoming identifiers in the undirected case, yet as aliases to the same dictionary. For convenience, we define a utility named is_directed to allow us to distinguish between the two cases.

For methods degree and incident_edges, which each accept an optional parameter to differentiate between the outgoing and incoming orientations, we choose the appropriate map before proceeding. For method insert_vertex, we always initialize _outgoing[v] to an empty dictionary for a new vertex v. In the directed case, we independently initialize _incoming[v] as well. For the undirected case, that step is unnecessary, as _outgoing and _incoming are aliases. We leave the implementations of methods remove_vertex and remove_edge as exercises.
#------------------------- nested Vertex class -------------------------
class Vertex:
    """Lightweight vertex structure for a graph."""
    __slots__ = '_element'

    def __init__(self, x):
        """Do not call constructor directly. Use Graph's insert_vertex(x)."""
        self._element = x

    def element(self):
        """Return element associated with this vertex."""
        return self._element

    def __hash__(self):              # will allow vertex to be a map/set key
        return hash(id(self))

#------------------------- nested Edge class ---------------------------
class Edge:
    """Lightweight edge structure for a graph."""
    __slots__ = '_origin', '_destination', '_element'

    def __init__(self, u, v, x):
        """Do not call constructor directly. Use Graph's insert_edge(u,v,x)."""
        self._origin = u
        self._destination = v
        self._element = x

    def endpoints(self):
        """Return (u,v) tuple for vertices u and v."""
        return (self._origin, self._destination)

    def opposite(self, v):
        """Return the vertex that is opposite v on this edge."""
        return self._destination if v is self._origin else self._origin

    def element(self):
        """Return element associated with this edge."""
        return self._element

    def __hash__(self):              # will allow edge to be a map/set key
        return hash((self._origin, self._destination))

Code Fragment: Vertex and Edge classes (to be nested within the Graph class).
class Graph:
    """Representation of a simple graph using an adjacency map."""

    def __init__(self, directed=False):
        """Create an empty graph (undirected, by default).

        Graph is directed if optional parameter is set to True.
        """
        self._outgoing = {}
        # only create second map for directed graph; use alias for undirected
        self._incoming = {} if directed else self._outgoing

    def is_directed(self):
        """Return True if this is a directed graph; False if undirected.

        Property is based on the original declaration of the graph, not its
        contents.
        """
        return self._incoming is not self._outgoing   # directed if maps are distinct

    def vertex_count(self):
        """Return the number of vertices in the graph."""
        return len(self._outgoing)

    def vertices(self):
        """Return an iteration of all vertices of the graph."""
        return self._outgoing.keys()

    def edge_count(self):
        """Return the number of edges in the graph."""
        total = sum(len(self._outgoing[v]) for v in self._outgoing)
        # for undirected graphs, make sure not to double-count edges
        return total if self.is_directed() else total // 2

    def edges(self):
        """Return a set of all edges of the graph."""
        result = set()        # avoid double-reporting edges of undirected graph
        for secondary_map in self._outgoing.values():
            result.update(secondary_map.values())     # add edges to resulting set
        return result

Code Fragment: Graph class definition (continued in the next code fragment).
    def get_edge(self, u, v):
        """Return the edge from u to v, or None if not adjacent."""
        return self._outgoing[u].get(v)       # returns None if v not adjacent

    def degree(self, v, outgoing=True):
        """Return number of (outgoing) edges incident to vertex v in the graph.

        If graph is directed, optional parameter used to count incoming edges.
        """
        adj = self._outgoing if outgoing else self._incoming
        return len(adj[v])

    def incident_edges(self, v, outgoing=True):
        """Return all (outgoing) edges incident to vertex v in the graph.

        If graph is directed, optional parameter used to request incoming edges.
        """
        adj = self._outgoing if outgoing else self._incoming
        for edge in adj[v].values():
            yield edge

    def insert_vertex(self, x=None):
        """Insert and return a new Vertex with element x."""
        v = self.Vertex(x)
        self._outgoing[v] = {}
        if self.is_directed():
            self._incoming[v] = {}            # need distinct map for incoming edges
        return v

    def insert_edge(self, u, v, x=None):
        """Insert and return a new Edge from u to v with auxiliary element x."""
        e = self.Edge(u, v, x)
        self._outgoing[u][v] = e
        self._incoming[v][u] = e
        return e

Code Fragment: Graph class definition (continued from the previous code fragment). We omit error-checking of parameters for brevity.
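As a quick sanity check of the class, a hypothetical session might build a small undirected graph and query it. This assumes the Vertex and Edge classes have been nested within Graph as described; the vertex labels are arbitrary.

g = Graph()                                # undirected by default
u = g.insert_vertex('u')
v = g.insert_vertex('v')
w = g.insert_vertex('w')
e = g.insert_edge(u, v, 'uv')
g.insert_edge(v, w, 'vw')
print(g.vertex_count(), g.edge_count())    # 3 2
print(g.degree(v))                         # 2
print(g.get_edge(u, w))                    # None: u and w are not adjacent
print(g.get_edge(v, u) is e)               # True: order is irrelevant when undirected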
Graph Traversals

Greek mythology tells of an elaborate labyrinth that was built to house the monstrous Minotaur, which was part bull and part man. This labyrinth was so complex that neither beast nor human could escape it. No human, that is, until the Greek hero, Theseus, with the help of the king's daughter, Ariadne, decided to implement a graph traversal algorithm. Theseus fastened a ball of thread to the door of the labyrinth and unwound it as he traversed the twisting passages in search of the monster. Theseus obviously knew about good algorithm design, for, after finding and defeating the beast, Theseus easily followed the string back out of the labyrinth to the loving arms of Ariadne.

Formally, a traversal is a systematic procedure for exploring a graph by examining all of its vertices and edges. A traversal is efficient if it visits all the vertices and edges in time proportional to their number, that is, in linear time.

Graph traversal algorithms are key to answering many fundamental questions about graphs involving the notion of reachability, that is, in determining how to travel from one vertex to another while following paths of a graph. Interesting problems that deal with reachability in an undirected graph G include the following:
- Computing a path from vertex u to vertex v, or reporting that no such path exists.
- Given a start vertex s of G, computing, for every vertex v of G, a path with the minimum number of edges between s and v, or reporting that no such path exists.
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Computing a cycle in G, or reporting that G has no cycles.

Interesting problems that deal with reachability in a directed graph G include the following:
- Computing a directed path from vertex u to vertex v, or reporting that no such path exists.
- Finding all the vertices of G that are reachable from a given vertex s.
- Determining whether G is acyclic.
- Determining whether G is strongly connected.

In the remainder of this section, we present two efficient graph traversal algorithms, called depth-first search and breadth-first search, respectively.
Depth-First Search

The first traversal algorithm we consider in this section is depth-first search (DFS). Depth-first search is useful for testing a number of properties of graphs, including whether there is a path from one vertex to another and whether or not a graph is connected.

Depth-first search in a graph G is analogous to wandering in a labyrinth with a string and a can of paint without getting lost. We begin at a specific starting vertex s in G, which we initialize by fixing one end of our string to s and painting s as "visited." The vertex s is now our "current" vertex; call our current vertex u. We then traverse G by considering an (arbitrary) edge (u,v) incident to the current vertex u. If the edge (u,v) leads us to a vertex v that is already visited (that is, painted), we ignore that edge. If, on the other hand, (u,v) leads to an unvisited vertex v, then we unroll our string, and go to v. We then paint v as "visited," and make it the current vertex, repeating the computation above. Eventually, we will get to a "dead end," that is, a current vertex v such that all the edges incident to v lead to vertices already visited. To get out of this impasse, we roll our string back up, backtracking along the edge that brought us to v, going back to a previously visited vertex u. We then make u our current vertex and repeat the computation above for any edges incident to u that we have not yet considered. If all of u's incident edges lead to visited vertices, then we again roll up our string and backtrack to the vertex we came from to get to u, and repeat the procedure at that vertex. Thus, we continue to backtrack along the path that we have traced so far until we find a vertex that has yet unexplored edges, take one such edge, and continue the traversal. The process terminates when our backtracking leads us back to the start vertex s, and there are no more unexplored edges incident to s.

The pseudo-code for a depth-first search traversal starting at a vertex u (see below) follows our analogy with string and paint. We use recursion to implement the string analogy, and we assume that we have a mechanism (the paint analogy) to determine whether a vertex or edge has been previously explored.

Algorithm DFS(G, u):        {We assume u has already been marked as visited}
    Input: A graph G and a vertex u of G
    Output: A collection of vertices reachable from u, with their discovery edges
    for each outgoing edge e = (u,v) of u do
        if vertex v has not been visited then
            Mark vertex v as visited (via edge e).
            Recursively call DFS(G, v).

Code Fragment: The DFS algorithm.
Classifying Graph Edges with DFS

An execution of depth-first search can be used to analyze the structure of a graph, based upon the way in which edges are explored during the traversal. The DFS process naturally identifies what is known as the depth-first search tree rooted at a starting vertex s. Whenever an edge e = (u,v) is used to discover a new vertex v during the DFS algorithm, that edge is known as a discovery edge or tree edge, as oriented from u to v. All other edges that are considered during the execution of a DFS are known as nontree edges, which take us to a previously visited vertex. In the case of an undirected graph, we will find that all nontree edges that are explored connect the current vertex to one that is an ancestor of it in the DFS tree. We will call such an edge a back edge. When performing a DFS on a directed graph, there are three possible kinds of nontree edges:
- back edges, which connect a vertex to an ancestor in the DFS tree;
- forward edges, which connect a vertex to a descendant in the DFS tree;
- cross edges, which connect a vertex to a vertex that is neither its ancestor nor its descendant.

An example application of the DFS algorithm on a directed graph is shown in the figure below, demonstrating each type of nontree edge. An example application of the DFS algorithm on an undirected graph follows in the subsequent figure.

Figure: An example of DFS in a directed graph, starting at vertex BOS: (a) intermediate step, where, for the first time, a considered edge leads to an already visited vertex (DFW); (b) the completed DFS. The tree edges are shown with thick lines, the back edges are shown with dashed lines, and the forward and cross edges are shown with dotted lines. The order in which the vertices are visited is indicated by a label next to each vertex. The edge (ORD,DFW) is a back edge, but (DFW,ORD) is a forward edge. Edge (BOS,SFO) is a forward edge, and (SFO,LAX) is a cross edge.
Figure: Example of depth-first search traversal on an undirected graph starting at vertex A. We assume that a vertex's adjacencies are considered in alphabetical order. Visited vertices and explored edges are highlighted, with discovery edges drawn as solid lines and nontree (back) edges as dashed lines: (a) input graph; (b) path of tree edges, traced from A until a back edge (ending at C) is examined; (c) reaching F, which is a dead end; (d) after backtracking to I, resuming with another edge, and hitting another dead end; (e) after backtracking to G, continuing with another edge, and hitting another dead end; (f) final result.
Properties of Depth-First Search

There are a number of observations that we can make about the depth-first search algorithm, many of which derive from the way the DFS algorithm partitions the edges of a graph G into groups. We begin with the most significant property.

Proposition: Let G be an undirected graph on which a DFS traversal starting at a vertex s has been performed. Then the traversal visits all vertices in the connected component of s, and the discovery edges form a spanning tree of the connected component of s.

Justification: Suppose there is at least one vertex w in s's connected component not visited, and let v be the first unvisited vertex on some path from s to w (we may have v = w). Since v is the first unvisited vertex on this path, it has a neighbor u that was visited. But when we visited u, we must have considered the edge (u,v); hence, it cannot be correct that v is unvisited. Therefore, there are no unvisited vertices in s's connected component.

Since we only follow a discovery edge when we go to an unvisited vertex, we will never form a cycle with such edges. Therefore, the discovery edges form a connected subgraph without cycles, hence a tree. Moreover, this is a spanning tree because, as we have just seen, the depth-first search visits each vertex in the connected component of s.

Proposition: Let G be a directed graph. A depth-first search on G starting at a vertex s visits all the vertices of G that are reachable from s. Also, the DFS tree contains directed paths from s to every vertex reachable from s.

Justification: Let Vs be the subset of vertices of G visited by a DFS starting at vertex s. We want to show that Vs contains s and every vertex reachable from s belongs to Vs. Suppose now, for the sake of a contradiction, that there is a vertex w reachable from s that is not in Vs. Consider a directed path from s to w, and let (u,v) be the first edge on such a path taking us out of Vs; that is, u is in Vs but v is not in Vs. When DFS reaches u, it explores all the outgoing edges of u, and thus must also reach vertex v via edge (u,v). Hence, v should be in Vs, and we have obtained a contradiction. Therefore, Vs must contain every vertex reachable from s.

We prove the second fact by induction on the steps of the algorithm. We claim that each time a discovery edge (u,v) is identified, there exists a directed path from s to v in the DFS tree. Since u must have previously been discovered, there exists a path from s to u, so by appending the edge (u,v) to that path, we have a directed path from s to v.

Note that since back edges always connect a vertex v to a previously visited vertex u, each back edge implies a cycle in G, consisting of the discovery edges from u to v plus the back edge (u,v).
Running Time of Depth-First Search

In terms of its running time, depth-first search is an efficient method for traversing a graph. Note that DFS is called at most once on each vertex (since it gets marked as visited), and therefore every edge is examined at most twice for an undirected graph, once from each of its end vertices, and at most once in a directed graph, from its origin vertex. If we let n_s <= n be the number of vertices reachable from a vertex s, and m_s <= m be the number of incident edges to those vertices, a DFS starting at s runs in O(n_s + m_s) time, provided the following conditions are satisfied:
- The graph is represented by a data structure such that creating and iterating through the incident_edges(v) collection takes O(deg(v)) time, and the e.opposite(v) method takes O(1) time. The adjacency list structure is one such structure, but the adjacency matrix structure is not.
- We have a way to "mark" a vertex or edge as explored, and to test if a vertex or edge has been explored in O(1) time. We discuss ways of implementing DFS to achieve this goal in the next section.

Given the assumptions above, we can solve a number of interesting problems.

Proposition: Let G be an undirected graph with n vertices and m edges. A DFS traversal of G can be performed in O(n + m) time, and can be used to solve the following problems in O(n + m) time:
- Computing a path between two given vertices of G, if one exists.
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Computing a cycle in G, or reporting that G has no cycles.

Proposition: Let G be a directed graph with n vertices and m edges. A DFS traversal of G can be performed in O(n + m) time, and can be used to solve the following problems in O(n + m) time:
- Computing a directed path between two given vertices of G, if one exists.
- Computing the set of vertices of G that are reachable from a given vertex s.
- Testing whether G is strongly connected.
- Computing a directed cycle in G, or reporting that G is acyclic.
- Computing the transitive closure of G (discussed later in this chapter).

The justification of these propositions is based on algorithms that use slightly modified versions of the DFS algorithm as subroutines. We will explore some of those extensions in the remainder of this section.
DFS Implementation and Extensions

We begin by providing a Python implementation of the basic depth-first search algorithm, originally described with pseudo-code above. Our DFS function is presented below.

def DFS(g, u, discovered):
    """Perform DFS of the undiscovered portion of Graph g starting at Vertex u.

    discovered is a dictionary mapping each vertex to the edge that was used to
    discover it during the DFS. (u should be "discovered" prior to the call.)
    Newly discovered vertices will be added to the dictionary as a result.
    """
    for e in g.incident_edges(u):       # for every outgoing edge from u
        v = e.opposite(u)
        if v not in discovered:         # v is an unvisited vertex
            discovered[v] = e           # e is the tree edge that discovered v
            DFS(g, v, discovered)       # recursively explore from v

Code Fragment: Recursive implementation of depth-first search on a graph, starting at a designated vertex u.

In order to track which vertices have been visited, and to build a representation of the resulting DFS tree, our implementation introduces a third parameter, named discovered. This parameter should be a Python dictionary that maps a vertex of the graph to the tree edge that was used to discover that vertex. As a technicality, we assume that the source vertex u occurs as a key of the dictionary, with None as its value. Thus, a caller might start the traversal as follows:

result = {u: None}       # a new dictionary, with u trivially discovered
DFS(g, u, result)

The dictionary serves two purposes. Internally, the dictionary provides a mechanism for recognizing visited vertices, as they will appear as keys in the dictionary. Externally, the DFS function augments this dictionary as it proceeds, and thus the values within the dictionary are the DFS tree edges at the conclusion of the process.

Because the dictionary is hash-based, the test "if v not in discovered" and the record-keeping step "discovered[v] = e" run in O(1) expected time, rather than worst-case time. In practice, this is a compromise we are willing to accept, but it does violate the formal analysis of the algorithm given earlier. If we could assume that vertices could be numbered from 0 to n-1, then those numbers could be used as indices into an array-based lookup table rather than a hash-based map. Alternatively, we could store each vertex's discovery status and associated tree edge directly as part of the vertex instance.
Reconstructing a Path from u to v

We can use the basic DFS function as a tool to identify the (directed) path leading from vertex u to v, if v is reachable from u. This path can easily be reconstructed from the information that was recorded in the discovery dictionary during the traversal. The code below provides an implementation of a secondary function that produces an ordered list of vertices on the path from u to v.

To reconstruct the path, we begin at the end of the path, examining the discovery dictionary to determine what edge was used to reach vertex v, and then what the other endpoint of that edge is. We add that vertex to a list, and then repeat the process to determine what edge was used to discover it. Once we have traced the path all the way back to the starting vertex u, we can reverse the list so that it is properly oriented from u to v, and return it to the caller. This process takes time proportional to the length of the path, and therefore it runs in O(n) time (in addition to the time originally spent calling DFS).

def construct_path(u, v, discovered):
    path = []                          # empty path by default
    if v in discovered:
        # we build a list from v to u and then reverse it at the end
        path.append(v)
        walk = v
        while walk is not u:
            e = discovered[walk]       # find edge leading to walk
            parent = e.opposite(walk)
            path.append(parent)
            walk = parent
        path.reverse()                 # reorient path from u to v
    return path

Code Fragment: Function to reconstruct a directed path from u to v, given the trace of discovery from a DFS started at u. The function returns an ordered list of vertices on the path.

Testing for Connectivity

We can use the basic DFS function to determine whether a graph is connected. In the case of an undirected graph, we simply start a depth-first search at an arbitrary vertex and then test whether len(discovered) equals n at the conclusion. If the graph is connected, then all vertices will have been discovered; conversely, if the graph is not connected, there must be at least one vertex v that is not reachable from u, and that will not be discovered.
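Putting the two functions together, a path query might be sketched as follows. This assumes the DFS and construct_path functions above, and a graph g with vertices u and v already in hand.

discovered = {u: None}         # u is trivially discovered, with no discovery edge
DFS(g, u, discovered)
path = construct_path(u, v, discovered)
if len(path) == 0:
    print('v is not reachable from u')
else:
    print('path uses', len(path) - 1, 'edges')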
For a directed graph G, we may wish to test whether it is strongly connected, that is, whether for every pair of vertices u and v, u reaches v and v reaches u. If we start an independent call to DFS from each vertex, we could determine whether this was the case, but those n calls, when combined, would run in O(n(n + m)) time. However, we can determine if G is strongly connected much faster than this, requiring only two depth-first searches.

We begin by performing a depth-first search of our directed graph G starting at an arbitrary vertex s. If there is any vertex of G that is not visited by this traversal, and is not reachable from s, then the graph is not strongly connected. If this first depth-first search visits each vertex of G, we then need to check whether s is reachable from all other vertices. Conceptually, we can accomplish this by making a copy of graph G, but with the orientation of all edges reversed. A depth-first search starting at s in the reversed graph will reach every vertex that could reach s in the original. In practice, a better approach than making a new graph is to reimplement a version of the DFS method that loops through all incoming edges to the current vertex, rather than all outgoing edges. Since this algorithm makes just two DFS traversals of G, it runs in O(n + m) time. (A sketch of this two-pass test appears after the code fragment below.)

Computing All Connected Components

When a graph is not connected, the next goal we may have is to identify all of the connected components of an undirected graph, or the strongly connected components of a directed graph. We begin by discussing the undirected case. If an initial call to DFS fails to reach all vertices of a graph, we can restart a new call to DFS at one of those unvisited vertices. An implementation of such a comprehensive DFS_all method is given below.

def DFS_complete(g):
    """Perform DFS for entire graph and return forest as a dictionary.

    Result maps each vertex v to the edge that was used to discover it.
    (Vertices that are roots of a DFS tree are mapped to None.)
    """
    forest = {}
    for u in g.vertices():
        if u not in forest:
            forest[u] = None           # u will be the root of a tree
            DFS(g, u, forest)
    return forest

Code Fragment: Top-level function that returns a DFS forest for an entire graph.
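Returning briefly to strong connectivity, the two-pass test described above might be sketched as follows. The helper DFS_incoming, which mirrors DFS but follows incoming edges, and the name is_strongly_connected are assumptions for illustration, not code given in the text; the sketch assumes a nonempty directed graph g.

def DFS_incoming(g, u, discovered):
    """Variant of DFS that traverses incoming edges (an assumed helper)."""
    for e in g.incident_edges(u, False):      # incoming edges of u
        v = e.opposite(u)
        if v not in discovered:
            discovered[v] = e
            DFS_incoming(g, v, discovered)

def is_strongly_connected(g):
    """Return True if nonempty directed graph g is strongly connected (a sketch)."""
    s = next(iter(g.vertices()))              # arbitrary start vertex
    reached = {s: None}
    DFS(g, s, reached)                        # can s reach every vertex?
    if len(reached) != g.vertex_count():
        return False
    reaches = {s: None}
    DFS_incoming(g, s, reaches)               # can every vertex reach s?
    return len(reaches) == g.vertex_count()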
As for the running time of DFS_complete: although it makes multiple calls to the original DFS function, the total time spent by a call to DFS_complete is O(n + m). For an undirected graph, recall from our original analysis that a single call to DFS starting at vertex s runs in time O(n_s + m_s), where n_s is the number of vertices reachable from s, and m_s is the number of incident edges to those vertices. Because each call to DFS explores a different component, the sum of the n_s + m_s terms is n + m. The O(n + m) total bound applies to the directed case as well, even though the sets of reachable vertices are not necessarily disjoint. However, because the same discovery dictionary is passed as a parameter to all DFS calls, we know that the DFS subroutine is called once on each vertex, and then each outgoing edge is explored only once during the process.

The DFS_complete function can be used to analyze the connected components of an undirected graph. The discovery dictionary it returns represents a DFS forest for the entire graph. We say this is a forest rather than a tree, because the graph may not be connected. The number of connected components can be determined by the number of vertices in the discovery dictionary that have None as their discovery edge (those are roots of DFS trees). A minor modification to the core DFS method could be used to tag each vertex with a component number when it is discovered (see the exercises).

The situation is more complex for finding strongly connected components of a directed graph. There exists an approach for computing those components in O(n + m) time, making use of two separate depth-first search traversals, but the details are beyond the scope of this book.

Detecting Cycles with DFS

For both undirected and directed graphs, a cycle exists if and only if a back edge exists relative to the DFS traversal of that graph. It is easy to see that if a back edge exists, a cycle exists, by taking the back edge from the descendant to its ancestor and then following the tree edges back to the descendant. Conversely, if a cycle exists in the graph, there must be a back edge relative to a DFS (although we do not prove this fact here).

Algorithmically, detecting a back edge in the undirected case is easy, because all edges are either tree edges or back edges. In the case of a directed graph, additional modifications to the core DFS implementation are needed to properly categorize a nontree edge as a back edge. When a directed edge is explored leading to a previously visited vertex, we must recognize whether that vertex is an ancestor of the current vertex. This requires some additional bookkeeping, for example, by tagging vertices upon which a recursive call to DFS is still active. We leave the details as an exercise, but offer one possible sketch below.
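One possible realization of that bookkeeping, offered as a hedged sketch rather than the book's own solution (the name has_directed_cycle is hypothetical): maintain a set of vertices whose recursive DFS call is still active; reaching an active vertex means a back edge, and hence a directed cycle, has been found.

def has_directed_cycle(g):
    """Return True if directed graph g contains a directed cycle (a sketch)."""
    discovered = set()
    active = set()                     # vertices with a DFS call in progress

    def dfs(u):
        discovered.add(u)
        active.add(u)
        for e in g.incident_edges(u):
            v = e.opposite(u)
            if v in active:            # back edge to an ancestor: cycle found
                return True
            if v not in discovered and dfs(v):
                return True
        active.remove(u)               # u's recursive call is complete
        return False

    return any(dfs(u) for u in g.vertices() if u not in discovered)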
Breadth-First Search

The advancing and backtracking of depth-first search, as described in the previous section, defines a traversal that could be physically traced by a single person exploring a graph. In this section, we consider another algorithm for traversing a connected component of a graph, known as breadth-first search (BFS). The BFS algorithm is more akin to sending out, in all directions, many explorers who collectively traverse a graph in coordinated fashion.

A BFS proceeds in rounds and subdivides the vertices into levels. BFS starts at vertex s, which is at level 0. In the first round, we paint as "visited" all vertices adjacent to the start vertex s; these vertices are one step away from the beginning and are placed into level 1. In the second round, we allow all explorers to go two steps (i.e., edges) away from the starting vertex. These new vertices, which are adjacent to level 1 vertices and not previously assigned to a level, are placed into level 2 and marked as "visited." This process continues in similar fashion, terminating when no new vertices are found in a level.

A Python implementation of BFS is given below. We follow a convention similar to that of DFS, using a discovered dictionary both to recognize visited vertices, and to record the discovery edges of the BFS tree. We illustrate a BFS traversal in the figure that follows.

def BFS(g, s, discovered):
    """Perform BFS of the undiscovered portion of Graph g starting at Vertex s.

    discovered is a dictionary mapping each vertex to the edge that was used to
    discover it during the BFS (s should be mapped to None prior to the call).
    Newly discovered vertices will be added to the dictionary as a result.
    """
    level = [s]                             # first level includes only s
    while len(level) > 0:
        next_level = []                     # prepare to gather newly found vertices
        for u in level:
            for e in g.incident_edges(u):   # for every outgoing edge from u
                v = e.opposite(u)
                if v not in discovered:     # v is an unvisited vertex
                    discovered[v] = e       # e is the tree edge that discovered v
                    next_level.append(v)    # v will be considered in the next pass
        level = next_level                  # relabel 'next' level to become current

Code Fragment: Implementation of breadth-first search on a graph, starting at a designated vertex s.
Figure: Example of breadth-first search traversal, where the edges incident to a vertex are considered in alphabetical order of the adjacent vertices. The discovery edges are shown with solid lines and the nontree (cross) edges are shown with dashed lines: (a) starting the search at s; (b) discovery of level 1; (c) discovery of level 2; (d) discovery of level 3; (e) discovery of level 4; (f) discovery of level 5.
When discussing DFS, we described a classification of nontree edges as being either back edges, which connect a vertex to one of its ancestors; forward edges, which connect a vertex to one of its descendants; or cross edges, which connect a vertex to another vertex that is neither its ancestor nor its descendant. For BFS on an undirected graph, all nontree edges are cross edges, and for BFS on a directed graph, all nontree edges are either back edges or cross edges (see the exercises).

The BFS traversal algorithm has a number of interesting properties, some of which we explore in the proposition that follows. Most notably, a path in a breadth-first search tree rooted at vertex s to any other vertex v is guaranteed to be the shortest such path from s to v in terms of the number of edges.

Proposition: Let G be an undirected or directed graph on which a BFS traversal starting at vertex s has been performed. Then:
- The traversal visits all vertices of G that are reachable from s.
- For each vertex v at level i, the path of the BFS tree T between s and v has i edges, and any other path of G from s to v has at least i edges.
- If (u,v) is an edge that is not in the BFS tree, then the level number of v can be at most 1 greater than the level number of u.

We leave the justification of this proposition as an exercise.

The analysis of the running time of BFS is similar to that of DFS, with the algorithm running in O(n + m) time, or more specifically, in O(n_s + m_s) time, where n_s is the number of vertices reachable from vertex s, and m_s <= m is the number of incident edges to those vertices. To explore the entire graph, the process can be restarted at another vertex, akin to the DFS_complete function given earlier. Also, the actual path from vertex s to vertex v can be reconstructed using the construct_path function given earlier.

Proposition: Let G be a graph with n vertices and m edges represented with the adjacency list structure. A BFS traversal of G takes O(n + m) time.

Although our implementation of BFS progresses level by level, the BFS algorithm can also be implemented using a single FIFO queue to represent the current fringe of the search. Starting with the source vertex in the queue, we repeatedly remove the vertex from the front of the queue and insert any of its unvisited neighbors to the back of the queue (a sketch of this variant appears at the end of this section).

In comparing the capabilities of DFS and BFS, both can be used to efficiently find the set of vertices that are reachable from a given source, and to determine paths to those vertices. However, BFS guarantees that those paths use as few edges as possible. For an undirected graph, both algorithms can be used to test connectivity, to identify connected components, or to locate a cycle. For directed graphs, the DFS algorithm is better suited for certain tasks, such as finding a directed cycle in the graph, or identifying the strongly connected components.
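The queue-based variant mentioned above might be sketched as follows, using collections.deque as the FIFO queue; the name BFS_queue is hypothetical. It produces the same discovery dictionary as the level-by-level version.

from collections import deque

def BFS_queue(g, s, discovered):
    """BFS using a FIFO queue (a sketch; s must already be in discovered)."""
    fringe = deque([s])
    while fringe:
        u = fringe.popleft()                 # remove vertex from front of queue
        for e in g.incident_edges(u):
            v = e.opposite(u)
            if v not in discovered:
                discovered[v] = e            # e is the tree edge that discovered v
                fringe.append(v)             # unvisited neighbor joins the back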
Transitive Closure

We have seen that graph traversals can be used to answer basic questions of reachability in a directed graph. In particular, if we are interested in knowing whether there is a path from vertex u to vertex v in a graph, we can perform a DFS or BFS traversal starting at u and observe whether v is discovered. If representing a graph with an adjacency list or adjacency map, we can answer the question of reachability for u and v in O(n + m) time.

In certain applications, we may wish to answer many reachability queries more efficiently, in which case it may be worthwhile to precompute a more convenient representation of a graph. For example, the first step for a service that computes driving directions from an origin to a destination might be to assess whether the destination is reachable. Similarly, in an electricity network, we may wish to be able to quickly determine whether current flows from one particular vertex to another. Motivated by such applications, we introduce the following definition. The transitive closure of a directed graph G is itself a directed graph G* such that the vertices of G* are the same as the vertices of G, and G* has an edge (u,v) whenever G has a directed path from u to v (including the case where (u,v) is an edge of the original G).

If a graph is represented as an adjacency list or adjacency map, we can compute its transitive closure in O(n(n + m)) time by making use of n graph traversals, one from each starting vertex. For example, a DFS starting at vertex u can be used to determine all vertices reachable from u, and thus the collection of edges originating with u in the transitive closure.

In the remainder of this section, we explore an alternative technique for computing the transitive closure of a directed graph that is particularly well suited for when a directed graph is represented by a data structure that supports O(1)-time lookup for the get_edge(u,v) method (for example, the adjacency-matrix structure). Let G be a directed graph with n vertices and m edges. We compute the transitive closure of G in a series of rounds. We initialize G0 = G. We also arbitrarily number the vertices of G as v1, v2, ..., vn. We then begin the computation of the rounds, beginning with round 1. In a generic round k, we construct directed graph Gk starting with Gk = Gk-1 and adding to Gk the directed edge (vi, vj) if directed graph Gk-1 contains both the edges (vi, vk) and (vk, vj). In this way, we will enforce a simple rule embodied in the proposition that follows.

Proposition: For k = 1, ..., n, directed graph Gk has an edge (vi, vj) if and only if directed graph G has a directed path from vi to vj whose intermediate vertices (if any) are in the set {v1, ..., vk}. In particular, Gn is equal to G*, the transitive closure of G.
This proposition suggests a simple algorithm for computing the transitive closure of G that is based on the series of rounds computing each Gk. This algorithm is known as the Floyd-Warshall algorithm, and its pseudo-code is given below. We illustrate an example run of the Floyd-Warshall algorithm in the figure that follows.

Algorithm FloydWarshall(G):
    Input: A directed graph G with n vertices
    Output: The transitive closure G* of G
    let v1, v2, ..., vn be an arbitrary numbering of the vertices of G
    G0 = G
    for k = 1 to n do
        Gk = Gk-1
        for all i, j in {1, ..., n} with i != j and i, j != k do
            if both edges (vi, vk) and (vk, vj) are in Gk-1 then
                add edge (vi, vj) to Gk (if it is not already present)
    return Gn

Code Fragment: Pseudo-code for the Floyd-Warshall algorithm. This algorithm computes the transitive closure G* of G by incrementally computing a series of directed graphs G0, G1, ..., Gn.

From this pseudo-code, we can easily analyze the running time of the Floyd-Warshall algorithm, assuming that the data structure representing G supports methods get_edge and insert_edge in O(1) time. The main loop is executed n times, and the inner loop considers each of O(n^2) pairs of vertices, performing a constant-time computation for each one. Thus, the total running time of the Floyd-Warshall algorithm is O(n^3). From the description and analysis above, we may immediately derive the following proposition.

Proposition: Let G be a directed graph with n vertices, and let G be represented by a data structure that supports lookup and update of adjacency information in O(1) time. Then the Floyd-Warshall algorithm computes the transitive closure G* of G in O(n^3) time.

Performance of the Floyd-Warshall Algorithm

Asymptotically, the O(n^3) running time of the Floyd-Warshall algorithm is no better than that achieved by repeatedly running DFS, once from each vertex, to compute the reachability. However, the Floyd-Warshall algorithm matches the asymptotic bounds of the repeated DFS when a graph is dense, or when a graph is sparse but represented as an adjacency matrix (see the exercises).
Figure: Sequence of directed graphs computed by the Floyd-Warshall algorithm: (a) initial directed graph G = G0 and numbering of the vertices; (b) directed graph G1; (c) G2; (d) G3; (e) G4; (f) G5. If directed graph Gk-1 has the edges (vi, vk) and (vk, vj), but not the edge (vi, vj), in the drawing of directed graph Gk we show edges (vi, vk) and (vk, vj) with dashed lines, and edge (vi, vj) with a thick line. For example, in (b) existing edges (MIA,LAX) and (LAX,ORD) result in the new edge (MIA,ORD).
The importance of the Floyd-Warshall algorithm is that it is much easier to implement than DFS, and much faster in practice, because there are relatively few low-level operations hidden within the asymptotic notation. The algorithm is particularly well suited for the use of an adjacency matrix, as a single bit can be used to designate the reachability modeled as an edge (u,v) in the transitive closure. However, note that repeated calls to DFS result in better asymptotic performance when the graph is sparse and represented using an adjacency list or adjacency map. In that case, a single DFS runs in O(n + m) time, and so the transitive closure can be computed in O(n^2 + nm) time, which is preferable to O(n^3).

Python Implementation

We conclude with a Python implementation of the Floyd-Warshall algorithm, as presented below. Although the original algorithm is described using a series of directed graphs G0, G1, ..., Gn, we create a single copy of the original graph (using the deepcopy method of Python's copy module) and then repeatedly add new edges to the closure as we progress through rounds of the Floyd-Warshall algorithm. The algorithm requires a canonical numbering of the graph's vertices; therefore, we create a list of the vertices in the closure graph, and subsequently index that list for our order. Within the outermost loop, we must consider all pairs i and j. Finally, we optimize by only iterating through all values of j after we have verified that i has been chosen such that (vi, vk) exists in the current version of our closure.

from copy import deepcopy

def floyd_warshall(g):
    """Return a new graph that is the transitive closure of g."""
    closure = deepcopy(g)                    # imported from copy module
    verts = list(closure.vertices())         # make indexable list
    n = len(verts)
    for k in range(n):
        for i in range(n):
            # verify that edge (i,k) exists in the partial closure
            if i != k and closure.get_edge(verts[i], verts[k]) is not None:
                for j in range(n):
                    # verify that edge (k,j) exists in the partial closure
                    if i != j != k and closure.get_edge(verts[k], verts[j]) is not None:
                        # if (i,j) not yet included, add it to the closure
                        if closure.get_edge(verts[i], verts[j]) is None:
                            closure.insert_edge(verts[i], verts[j])
    return closure

Code Fragment: Python implementation of the Floyd-Warshall algorithm.
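As a small, hypothetical sanity check, consider a three-vertex directed graph with edges (a,b) and (b,c); the closure should gain the edge (a,c). One subtlety worth noting: because deepcopy gives the closure its own Vertex instances, and our vertices hash by identity, queries against the closure must use its vertices, which we look up again by element here.

g = Graph(directed=True)
a = g.insert_vertex('a')
b = g.insert_vertex('b')
c = g.insert_vertex('c')
g.insert_edge(a, b)
g.insert_edge(b, c)
t = floyd_warshall(g)
lookup = {v.element(): v for v in t.vertices()}     # map back to t's own vertices
ta, tc = lookup['a'], lookup['c']
print(t.get_edge(ta, tc) is not None)    # True: c is reachable from a
print(t.get_edge(tc, ta) is None)        # True: a is not reachable from c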
Directed Acyclic Graphs

Directed graphs without directed cycles are encountered in many applications. Such a directed graph is often referred to as a directed acyclic graph, or DAG, for short. Applications of such graphs include the following:
- Prerequisites between courses of a degree program.
- Inheritance between classes of an object-oriented program.
- Scheduling constraints between the tasks of a project.

We explore this latter application further in the following example.

Example: In order to manage a large project, it is convenient to break it up into a collection of smaller tasks. The tasks, however, are rarely independent, because scheduling constraints exist between them. (For example, in a house building project, the task of ordering nails obviously precedes the task of nailing shingles to the roof deck.) Clearly, scheduling constraints cannot have circularities, because they would make the project impossible. (For example, in order to get a job you need to have work experience, but in order to get work experience you need to have a job.) The scheduling constraints impose restrictions on the order in which the tasks can be executed. Namely, if a constraint says that task a must be completed before task b is started, then a must precede b in the order of execution of the tasks. Thus, if we model a feasible set of tasks as vertices of a directed graph, and we place a directed edge from u to v whenever the task for u must be executed before the task for v, then we define a directed acyclic graph.

Topological Ordering

The example above motivates the following definition. Let G be a directed graph with n vertices. A topological ordering of G is an ordering v1, ..., vn of the vertices of G such that for every edge (vi, vj) of G, it is the case that i < j. That is, a topological ordering is an ordering such that any directed path in G traverses vertices in increasing order. Note that a directed graph may have more than one topological ordering (see the figure below).

Proposition: G has a topological ordering if and only if it is acyclic.

Justification: The necessity (the "only if" part of the statement) is easy to demonstrate. Suppose G is topologically ordered. Assume, for the sake of contradiction, that G has a cycle. Because of the topological ordering, traversing the cycle would have to visit vertices with strictly increasing indices, yet the cycle returns to its starting vertex, which is clearly impossible. Thus, G must be acyclic.
Figure: Two topological orderings of the same acyclic directed graph.

We now argue the sufficiency of the condition (the "if" part). Suppose G is acyclic. We will give an algorithmic description of how to build a topological ordering for G. Since G is acyclic, G must have a vertex with no incoming edges (that is, with in-degree 0). Let v1 be such a vertex. Indeed, if v1 did not exist, then in tracing a directed path from an arbitrary start vertex, we would eventually encounter a previously visited vertex, thus contradicting the acyclicity of G. If we remove v1 from G, together with its outgoing edges, the resulting directed graph is still acyclic. Hence, the resulting directed graph also has a vertex with no incoming edges, and we let v2 be such a vertex. By repeating this process until the directed graph becomes empty, we obtain an ordering v1, ..., vn of the vertices of G. Because of the construction above, if (vi, vj) is an edge of G, then vi must be deleted before vj can be deleted, and thus i < j. Therefore, v1, ..., vn is a topological ordering.

The proposition's justification suggests an algorithm for computing a topological ordering of a directed graph, which we call topological sorting. We present a Python implementation of the technique below, and an example execution of the algorithm in the figure that follows it. Our implementation uses a dictionary, named incount, to map each vertex v to a counter that represents the current number of incoming edges to v, excluding those coming from vertices that have previously been added to the topological order. Technically, a Python dictionary provides O(1) expected time access to entries, rather than worst-case time; as was the case with our graph traversals, this could be converted to worst-case time if vertices could be indexed from 0 to n-1, or if we store the counter as an element of each vertex.

As a side effect, the topological sorting algorithm also tests whether the given directed graph G is acyclic. Indeed, if the algorithm terminates without ordering all the vertices, then the subgraph of the vertices that have not been ordered must contain a directed cycle.
def topological_sort(g):
    """Return a list of vertices of directed acyclic graph g in topological order.

    If graph g has a cycle, the result will be incomplete.
    """
    topo = []        # a list of vertices placed in topological order
    ready = []       # list of vertices that have no remaining constraints
    incount = {}     # keep track of in-degree for each vertex
    for u in g.vertices():
        incount[u] = g.degree(u, False)      # parameter requests incoming degree
        if incount[u] == 0:                  # if u has no incoming edges,
            ready.append(u)                  # it is free of constraints
    while len(ready) > 0:
        u = ready.pop()                      # u is free of constraints
        topo.append(u)                       # add u to the topological order
        for e in g.incident_edges(u):        # consider all outgoing neighbors of u
            v = e.opposite(u)
            incount[v] -= 1                  # v has one less constraint without u
            if incount[v] == 0:
                ready.append(v)
    return topo

Code Fragment: Python implementation of the topological sorting algorithm. (We show an example execution of this algorithm in the figure below.)

Performance of Topological Sorting

Proposition: Let G be a directed graph with n vertices and m edges, using an adjacency list representation. The topological sorting algorithm runs in O(n + m) time using O(n) auxiliary space, and either computes a topological ordering of G or fails to include some vertices, which indicates that G has a directed cycle.

Justification: The initial recording of the n in-degrees uses O(n) time, based on the degree method. Say that a vertex u is visited by the topological sorting algorithm when u is removed from the ready list. A vertex u can be visited only when incount(u) is 0, which implies that all its predecessors (vertices with outgoing edges into u) were previously visited. As a consequence, any vertex that is on a directed cycle will never be visited, and any other vertex will be visited exactly once. The algorithm traverses all the outgoing edges of each visited vertex once, so its running time is proportional to the number of outgoing edges of the visited vertices. Therefore, the running time is O(n + m). Regarding the space usage, observe that containers topo, ready, and incount have at most one entry per vertex, and therefore use O(n) space.
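The side effect noted above, that the sort is incomplete precisely when a cycle exists, yields a one-line acyclicity test; the helper name is_acyclic is an assumption for illustration.

def is_acyclic(g):
    """Return True if directed graph g has no directed cycle (a sketch)."""
    return len(topological_sort(g)) == g.vertex_count()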
Figure: Example of a run of algorithm topological_sort. The label near a vertex shows its current incount value, and its eventual rank in the resulting topological order. The highlighted vertex is one with incount equal to zero that will become the next vertex in the topological order. Dashed lines denote edges that have already been examined and which are no longer reflected in the incount values.
Shortest Paths

As we saw earlier in this chapter, the breadth-first search strategy can be used to find a shortest path from some starting vertex to every other vertex in a connected graph. This approach makes sense in cases where each edge is as good as any other, but there are many situations where this approach is not appropriate.

For example, we might want to use a graph to represent the roads between cities, and we might be interested in finding the fastest way to travel cross-country. In this case, it is probably not appropriate for all the edges to be equal to each other, for some inter-city distances will likely be much larger than others. Likewise, we might be using a graph to represent a computer network (such as the Internet), and we might be interested in finding the fastest way to route a data packet between two computers. In this case, it again may not be appropriate for all the edges to be equal to each other, for some connections in a computer network are typically much faster than others (for example, some edges might represent low-bandwidth connections, while others might represent high-speed, fiber-optic connections). It is natural, therefore, to consider graphs whose edges are not weighted equally.

Weighted Graphs

A weighted graph is a graph that has a numeric (for example, integer) label w(e) associated with each edge e, called the weight of edge e. For e = (u,v), we let notation w(u,v) = w(e).

Figure: A weighted graph whose vertices represent major airports and whose edge weights represent distances in miles. This graph has a path from JFK to LAX of total weight 2,777 (going through ORD and DFW). This is the minimum-weight path in the graph from JFK to LAX.
Defining Shortest Paths in a Weighted Graph

Let G be a weighted graph. The length (or weight) of a path P is the sum of the weights of the edges of P. That is, if P = ((v_0, v_1), (v_1, v_2), ..., (v_{k-1}, v_k)), then the length of P, denoted w(P), is defined as

    w(P) = Σ_{i=0}^{k-1} w(v_i, v_{i+1}).

The distance from a vertex u to a vertex v in G, denoted d(u,v), is the length of a minimum-length path (also called a shortest path) from u to v, if such a path exists.

People often use the convention that d(u,v) = ∞ if there is no path at all from u to v in G. Even if there is a path from u to v in G, however, if there is a cycle in G whose total weight is negative, the distance from u to v may not be defined. For example, suppose vertices in G represent cities, and the weights of edges in G represent how much money it costs to go from one city to another. If someone were willing to actually pay us to go from, say, JFK to ORD, then the "cost" of the edge (JFK, ORD) would be negative. If someone else were willing to pay us to go from ORD to JFK, then there would be a negative-weight cycle in G and distances would no longer be defined. That is, anyone could now build a path (with cycles) in G from any city A to another city B that first goes to JFK and then cycles as many times as he or she likes from JFK to ORD and back, before going on to B. The existence of such paths would allow us to build arbitrarily low negative-cost paths (and, in this case, make a fortune in the process). But distances cannot be arbitrarily low negative numbers. Thus, any time we use edge weights to represent distances, we must be careful not to introduce any negative-weight cycles.

Suppose we are given a weighted graph G, and we are asked to find a shortest path from some vertex s to each other vertex in G, viewing the weights on the edges as distances. In this section, we explore efficient ways of finding all such shortest paths, if they exist. The first algorithm we discuss is for the simple, yet common, case when all the edge weights in G are nonnegative (that is, w(e) ≥ 0 for each edge e of G); hence, we know in advance that there are no negative-weight cycles in G. Recall that the special case of computing a shortest path when all weights are equal to one was solved with the BFS traversal algorithm presented earlier.

There is an interesting approach for solving this single-source problem based on the greedy method design pattern. Recall that in this pattern we solve the problem at hand by repeatedly selecting the best choice from among those available in each iteration. This paradigm can often be used in situations where we are trying to optimize some cost function over a collection of objects. We can add objects to our collection, one at a time, always picking the next one that optimizes the function from among those yet to be chosen.
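To make the path-length definition concrete, here is a minimal sketch; the dictionary-based weight mapping, the path representation as a vertex sequence, and the mileage values are all illustrative assumptions.

# A minimal sketch of the path-length definition w(P); the weight
# mapping and the mileage values below are hypothetical.
def path_length(path, w):
    """Return w(P), the sum of w(v_i, v_{i+1}) over consecutive vertices."""
    return sum(w[(u, v)] for u, v in zip(path, path[1:]))

w = {('JFK', 'ORD'): 740, ('ORD', 'DFW'): 802, ('DFW', 'LAX'): 1235}
print(path_length(['JFK', 'ORD', 'DFW', 'LAX'], w))    # 2777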
Dijkstra's Algorithm

The main idea in applying the greedy method pattern to the single-source shortest-path problem is to perform a "weighted" breadth-first search starting at the source vertex s. In particular, we can use the greedy method to develop an algorithm that iteratively grows a "cloud" of vertices out of s, with the vertices entering the cloud in order of their distances from s. Thus, in each iteration, the next vertex chosen is the vertex outside the cloud that is closest to s. The algorithm terminates when no more vertices are outside the cloud (or when those outside the cloud are not connected to those within the cloud), at which point we have a shortest path from s to every vertex of G that is reachable from s. This approach is a simple, but nevertheless powerful, example of the greedy method design pattern. Applying the greedy method to the single-source, shortest-path problem results in an algorithm known as Dijkstra's algorithm.

Edge Relaxation

Let us define a label D[v] for each vertex v in V, which we use to approximate the distance in G from s to v. The meaning of these labels is that D[v] will always store the length of the best path we have found so far from s to v. Initially, D[s] = 0 and D[v] = ∞ for each v ≠ s, and we define the set C, which is our "cloud" of vertices, to initially be the empty set. At each iteration of the algorithm, we select a vertex u not in C with smallest D[u] label, and we pull u into C. (In general, we will use a priority queue Q to select among the vertices outside the cloud. In the very first iteration we will, of course, pull s into C.) Once a new vertex u is pulled into C, we then update the label D[v] of each vertex v that is adjacent to u and is outside of C, to reflect the fact that there may be a new and better way to get to v via u. This update operation is known as a relaxation procedure, for it takes an old estimate and checks if it can be improved to get closer to its true value. The specific edge relaxation operation is as follows:

Edge Relaxation:
    if D[u] + w(u,v) < D[v] then
        D[v] = D[u] + w(u,v)
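In isolation, the relaxation step can be sketched in Python as follows; the D dictionary and weight mapping here are hypothetical stand-ins for the bookkeeping in the full algorithm.

# A minimal sketch of the edge relaxation step; D and w are hypothetical
# stand-ins for the algorithm's distance labels and edge weights.
def relax(u, v, w, D):
    """Improve the distance estimate D[v] via edge (u,v) if possible."""
    if D[u] + w[(u, v)] < D[v]:
        D[v] = D[u] + w[(u, v)]

D = {'s': 0, 'v': float('inf')}
w = {('s', 'v'): 5}
relax('s', 'v', w, D)
print(D['v'])    # 5, since the old estimate of infinity was improved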
Algorithm Description and Example

We give the pseudo-code for Dijkstra's algorithm in the code fragment below, and illustrate several iterations of Dijkstra's algorithm in the figures that follow.

Algorithm ShortestPath(G, s):
  Input: A weighted graph G with nonnegative edge weights, and a distinguished vertex s of G.
  Output: The length of a shortest path from s to v for each vertex v of G.

  Initialize D[s] = 0 and D[v] = ∞ for each vertex v ≠ s.
  Let a priority queue Q contain all the vertices of G using the D labels as keys.
  while Q is not empty do
    {pull a new vertex u into the cloud}
    u = value returned by Q.remove_min()
    for each vertex v adjacent to u such that v is in Q do
      {perform the relaxation procedure on edge (u,v)}
      if D[u] + w(u,v) < D[v] then
        D[v] = D[u] + w(u,v)
        Change to D[v] the key of vertex v in Q.
  return the label D[v] of each vertex v

Code Fragment: Pseudo-code for Dijkstra's algorithm, solving the single-source shortest-path problem.

Figure: An execution of Dijkstra's algorithm on a weighted graph, panels (a) and (b). The start vertex is BWI. A box next to each vertex v stores the label D[v]. The edges of the shortest-path tree are drawn as thick arrows, and for each vertex v outside the "cloud" we show the current best edge for pulling in v with a thick line. (Continues in the next figure.)
Figure: An example execution of Dijkstra's algorithm, panels (c) through (h). (Continued from the previous figure; continues in the next figure.)
Figure: An example execution of Dijkstra's algorithm, panels (i) and (j). (Continued from the previous figure.)

Why It Works

The interesting aspect of Dijkstra's algorithm is that, at the moment a vertex u is pulled into C, its label D[u] stores the correct length of a shortest path from s to u. Thus, when the algorithm terminates, it will have computed the shortest-path distance from s to every vertex of G. That is, it will have solved the single-source shortest-path problem.

It is probably not immediately clear why Dijkstra's algorithm correctly finds the shortest path from the start vertex s to each other vertex u in the graph. Why is it that the distance from s to u is equal to the value of the label D[u] at the time vertex u is removed from the priority queue Q and added to the cloud C? The answer to this question depends on there being no negative-weight edges in the graph, for it allows the greedy method to work correctly, as we show in the proposition that follows.

Proposition: In Dijkstra's algorithm, whenever a vertex v is pulled into the cloud, the label D[v] is equal to d(s,v), the length of a shortest path from s to v.

Justification: Suppose that D[v] > d(s,v) for some vertex v in V, and let z be the first vertex the algorithm pulled into the cloud C (that is, removed from Q) such that D[z] > d(s,z). There is a shortest path P from s to z (for otherwise d(s,z) = ∞ = D[z]). Let us therefore consider the moment when z is pulled into C, and let y be the first vertex of P (when going from s to z) that is not in C at this moment. Let x be the predecessor of y in path P (note that we could have x = s). (See the figure below.) We know, by our choice of y, that x is already in C at this point.
the first "wrongvertex picked picked implies that [ < [yc [zd(szs [xd(sxx [yd(syfigure schematic illustration for the justification of proposition moreoverd[xd(sx)since is the first incorrect vertex when was pulled into cwe tested (and possibly updatedd[yso that we had at that point [ < [xw(xyd(sxw(xybut since is the next vertex on the shortest path from to zthis implies that [yd(sybut we are now at the moment when we are picking znot yto join chenced[ < [yit should be clear that subpath of shortest path is itself shortest path hencesince is on the shortest path from to zd(syd(yzd(szmoreoverd(yz> because there are no negative-weight edges therefored[ < [yd(sy< (syd(yzd(szbut this contradicts the definition of zhencethere can be no such vertex the running time of dijkstra' algorithm in this sectionwe analyze the time complexity of dijkstra' algorithm we denote with and the number of vertices and edges of the input graph grespectively we assume that the edge weights can be added and compared in constant time because of the high level of the description we gave for dijkstra' algorithm in code fragment analyzing its running time requires that we give more details on its implementation specificallywe should indicate the data structures used and how they are implemented
Let us first assume that we are representing the graph G using an adjacency list or adjacency map structure. This data structure allows us to step through the vertices adjacent to u during the relaxation step in time proportional to their number. Therefore, the time spent in the management of the nested for loop, and the number of iterations of that loop, is

    Σ_{u in V} outdeg(u),

which is O(m) by the earlier proposition that the out-degrees of the vertices of a directed graph sum to m. The outer while loop executes O(n) times, since a new vertex is added to the cloud during each iteration. This still does not settle all the details for the algorithm analysis, however, for we must say more about how to implement the other principal data structure in the algorithm: the priority queue Q.

Referring back to the pseudo-code in search of priority queue operations, we find that n vertices are originally inserted into the priority queue; since these are the only insertions, the maximum size of the queue is n. In each of n iterations of the while loop, a call to remove_min is made to extract the vertex u with smallest D label from Q. Then, for each neighbor v of u, we perform an edge relaxation, and may potentially update the key of v in the queue. Thus, we actually need an implementation of an adaptable priority queue, in which case the key of a vertex v is changed using the method update(l, k), where l is the locator for the priority queue entry associated with vertex v. In the worst case, there could be one such update for each edge of the graph. Overall, the running time of Dijkstra's algorithm is bounded by the sum of the following:

- n insertions into Q.
- n calls to the remove_min method on Q.
- m calls to the update method on Q.

If Q is an adaptable priority queue implemented as a heap, then each of the above operations runs in O(log n) time, and so the overall running time for Dijkstra's algorithm is O((n + m) log n). Note that if we wish to express the running time as a function of n only, then it is O(n² log n) in the worst case.

Let us now consider an alternative implementation for the adaptable priority queue Q using an unsorted sequence. This, of course, requires that we spend O(n) time to extract the minimum element, but it affords very fast key updates, provided Q supports location-aware entries. Specifically, we can implement each key update done in a relaxation step in O(1) time; we simply change the key value once we locate the entry in Q to update. Hence, this implementation results in a running time that is O(n² + m), which can be simplified to O(n²) since G is simple.
Comparing the Two Implementations

We have two choices for implementing the adaptable priority queue with location-aware entries in Dijkstra's algorithm: a heap implementation, which yields a running time of O((n + m) log n), and an unsorted sequence implementation, which yields a running time of O(n²). Since both implementations would be fairly simple to code, they are about equal in terms of the programming sophistication needed. These two implementations are also about equal in terms of the constant factors in their worst-case running times. Looking only at these worst-case times, we prefer the heap implementation when the number of edges in the graph is small (that is, when m < n²/log n), and we prefer the sequence implementation when the number of edges is large (that is, when m > n²/log n).

Proposition: Given a weighted graph G with n vertices and m edges, such that the weight of each edge is nonnegative, and a vertex s of G, Dijkstra's algorithm can compute the distance from s to all other vertices of G in the better of O(n²) or O((n + m) log n) time.

We note that an advanced priority queue implementation, known as a Fibonacci heap, can be used to implement Dijkstra's algorithm in O(m + n log n) time.

Programming Dijkstra's Algorithm in Python

Having given a pseudo-code description of Dijkstra's algorithm, let us now present Python code for performing it, assuming we are given a graph whose edge elements are nonnegative integer weights. Our implementation of the algorithm is in the form of a function, shortest_path_lengths, that takes a graph and a designated source vertex as parameters. (See the code fragment below.) It returns a dictionary, named cloud, mapping each vertex v that is reachable from the source to its shortest-path distance d(s,v). We rely on the AdaptableHeapPriorityQueue developed earlier as an adaptable priority queue.

As we have done with other algorithms in this chapter, we rely on dictionaries to map vertices to associated data (in this case, mapping v to its distance bound D[v] and its adaptable priority queue locator). The expected O(1)-time access to elements of these dictionaries could be converted to worst-case bounds, either by numbering vertices from 0 to n-1 to use as indices into a list, or by storing the information within each vertex's element.

The pseudo-code for Dijkstra's algorithm begins by assigning D[v] = ∞ for each vertex v other than the source. We rely on the special value float('inf') in Python to provide a numeric value that represents positive infinity. However, we avoid including vertices with this "infinite" distance in the resulting cloud that is returned by the function. The use of this numeric limit could be avoided altogether by waiting to add a vertex to the priority queue until after an edge that reaches it is relaxed (see the exercises).
def shortest_path_lengths(g, src):
  """Compute shortest-path distances from src to reachable vertices of g.

  Graph g can be undirected or directed, but must be weighted such that
  e.element() returns a numeric weight for each edge e.

  Return dictionary mapping each reachable vertex to its distance from src.
  """
  d = { }                               # d[v] is upper bound from s to v
  cloud = { }                           # map reachable v to its d[v] value
  pq = AdaptableHeapPriorityQueue()     # vertex v will have key d[v]
  pqlocator = { }                       # map from vertex to its pq locator

  # for each vertex v of the graph, add an entry to the priority queue, with
  # the source having distance 0 and all others having infinite distance
  for v in g.vertices():
    if v is src:
      d[v] = 0
    else:
      d[v] = float('inf')               # syntax for positive infinity
    pqlocator[v] = pq.add(d[v], v)      # save locator for future updates

  while not pq.is_empty():
    key, u = pq.remove_min()
    cloud[u] = key                      # its correct d[u] value
    del pqlocator[u]                    # u is no longer in pq
    for e in g.incident_edges(u):       # outgoing edges (u,v)
      v = e.opposite(u)
      if v not in cloud:
        # perform relaxation step on edge (u,v)
        wgt = e.element()
        if d[u] + wgt < d[v]:                 # better path to v?
          d[v] = d[u] + wgt                   # update the distance
          pq.update(pqlocator[v], d[v], v)    # update the pq entry

  return cloud                          # only includes reachable vertices

Code Fragment: Python implementation of Dijkstra's algorithm for computing the shortest-path distances from a single source. We assume that e.element() for edge e represents the weight of that edge.
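As a point of comparison, here is a minimal, self-contained sketch of the same algorithm using Python's standard-library heapq module, which is not adaptable. Instead of updating keys in place, it pushes a fresh entry on each improvement and skips stale entries when they are popped ("lazy deletion"). The function name, the adjacency-dictionary graph format, and the airport weights below are illustrative assumptions, not this chapter's Graph class or figure.

# A self-contained sketch of Dijkstra's algorithm using heapq with lazy
# deletion; the graph format and weights below are hypothetical.
import heapq

def dijkstra(adj, src):
    """adj maps each vertex to a dict of {neighbor: nonnegative weight}.
    Return a dict mapping each vertex reachable from src to its distance."""
    dist = {src: 0}              # best distance bound found so far
    cloud = {}                   # finalized shortest-path distances
    heap = [(0, src)]            # entries are (distance bound, vertex)
    while heap:
        d_u, u = heapq.heappop(heap)
        if u in cloud:           # stale entry; u was already finalized
            continue
        cloud[u] = d_u
        for v, wgt in adj[u].items():
            if v not in cloud and d_u + wgt < dist.get(v, float('inf')):
                dist[v] = d_u + wgt
                heapq.heappush(heap, (dist[v], v))    # lazy "update"
    return cloud

adj = {
    'JFK': {'ORD': 740, 'MIA': 1090},
    'ORD': {'JFK': 740, 'DFW': 802},
    'DFW': {'ORD': 802, 'LAX': 1235, 'MIA': 1121},
    'MIA': {'JFK': 1090, 'DFW': 1121, 'LAX': 2342},
    'LAX': {'DFW': 1235, 'MIA': 2342},
}
print(dijkstra(adj, 'JFK')['LAX'])    # 2777, via ORD and DFW

Unlike the adaptable priority queue, whose heap never holds more than n entries, this lazy variant may hold up to m entries; its running time remains O((n + m) log n), since log m is O(log n) for a simple graph.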