Consider the following list of integers. We shall partition this list using the partition function below:

    def partition(unsorted_array, first_index, last_index):
        pivot = unsorted_array[first_index]
        pivot_index = first_index
        index_of_last_element = last_index

        less_than_pivot_index = index_of_last_element
        greater_than_pivot_index = first_index + 1

The partition function receives, as its parameters, the array that we need to partition, the index of its first element, and the index of its last element. The value of the pivot is stored in the pivot variable, while its index is stored in pivot_index. We are not using unsorted_array[0] because, when the unsorted array parameter is called with a segment of an array, index 0 will not necessarily point to the first element in that segment.

The index of the element next to the pivot, first_index + 1, marks the position where we begin to look for the element in the array that is greater than the pivot: greater_than_pivot_index = first_index + 1. less_than_pivot_index = index_of_last_element marks the position of the last element in the list, which is where we begin the search for the element that is less than the pivot:

    while True:
        while unsorted_array[greater_than_pivot_index] < pivot and greater_than_pivot_index < last_index:
            greater_than_pivot_index += 1
        while unsorted_array[less_than_pivot_index] > pivot and less_than_pivot_index >= first_index:
            less_than_pivot_index -= 1
At the beginning of the execution of the main while loop, the array looks like this. The first inner while loop moves one index to the right until it lands on an index whose value is greater than the pivot. At this point, the first while loop breaks and does not continue. At each test of the condition in the first while loop, greater_than_pivot_index += 1 is evaluated only if the while loop's test condition evaluates to True. This makes the search for the element greater than the pivot progress to the next element on the right.

The second inner while loop moves one index at a time to the left, until it lands on an index whose value is less than the pivot. At this point, neither inner while loop can be executed any further:

    if greater_than_pivot_index < less_than_pivot_index:
        temp = unsorted_array[greater_than_pivot_index]
        unsorted_array[greater_than_pivot_index] = unsorted_array[less_than_pivot_index]
        unsorted_array[less_than_pivot_index] = temp
    else:
        break
Since greater_than_pivot_index < less_than_pivot_index, the body of the if statement swaps the elements at those indexes. The else condition breaks the infinite loop any time greater_than_pivot_index becomes greater than less_than_pivot_index. In such a condition, it means that greater_than_pivot_index and less_than_pivot_index have crossed over each other.

Our array now looks like this. The break statement is executed once the two markers have crossed. As soon as we exit the while loop, we interchange the element at unsorted_array[pivot_index] with the element at less_than_pivot_index, which is returned as the index of the pivot:

    unsorted_array[pivot_index] = unsorted_array[less_than_pivot_index]
    unsorted_array[less_than_pivot_index] = pivot
    return less_than_pivot_index

The image below shows how the code interchanges the pivot with the element at less_than_pivot_index as the last step in the partitioning process.
To recap, the first time the quick sort function was called, the list was partitioned about the element at its first index. After the return of the partitioning function, we obtain a partially ordered array: all elements to the right of the pivot are greater, while those to the left are smaller. The partitioning is complete. Using the split point, we will recursively sort the two sub-arrays on either side of it using the same process we just went through. The body of the main quick sort function is as follows:

    def quick_sort(unsorted_array, first, last):
        if last - first <= 0:
            return
        else:
            partition_point = partition(unsorted_array, first, last)
            quick_sort(unsorted_array, first, partition_point - 1)
            quick_sort(unsorted_array, partition_point + 1, last)

The quick sort function is a very simple method, only a few lines of code. The heavy lifting is done by the partition function. When the partition method is called, it returns the partition point. This is the point in unsorted_array where all elements to the left are less than the pivot and all elements to its right are greater than it. When we print the state of unsorted_array immediately after each partition step, we see clearly how the partitioning is happening.

Taking a step back, let's sort the first sub-array after the first partition has happened. The partitioning of that sub-array will stop when greater_than_pivot_index and less_than_pivot_index cross. At that point, the two markers are said to have crossed. Because greater_than_pivot_index is greater than less_than_pivot_index, further execution of the while loop will cease. The pivot will be exchanged with the element at less_than_pivot_index, whose index is returned as the partition point.

The quick sort algorithm has O(n^2) worst case complexity, but it is efficient when sorting large amounts of data.
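For reference, here is a minimal, self-contained sketch that puts the partition and quick_sort functions together with a small driver. The input values are arbitrary examples (not the list used in the chapter's figures), and the sketch assumes distinct values, as the chapter's examples do:

    def partition(unsorted_array, first_index, last_index):
        pivot = unsorted_array[first_index]
        pivot_index = first_index
        less_than_pivot_index = last_index
        greater_than_pivot_index = first_index + 1

        while True:
            # Move right while elements are smaller than the pivot.
            while (unsorted_array[greater_than_pivot_index] < pivot and
                   greater_than_pivot_index < last_index):
                greater_than_pivot_index += 1
            # Move left while elements are larger than the pivot.
            while (unsorted_array[less_than_pivot_index] > pivot and
                   less_than_pivot_index >= first_index):
                less_than_pivot_index -= 1

            if greater_than_pivot_index < less_than_pivot_index:
                # Markers have not crossed yet: swap the out-of-place pair.
                unsorted_array[greater_than_pivot_index], unsorted_array[less_than_pivot_index] = \
                    unsorted_array[less_than_pivot_index], unsorted_array[greater_than_pivot_index]
            else:
                break

        # Put the pivot into its final position and report that position.
        unsorted_array[pivot_index] = unsorted_array[less_than_pivot_index]
        unsorted_array[less_than_pivot_index] = pivot
        return less_than_pivot_index

    def quick_sort(unsorted_array, first, last):
        if last - first <= 0:
            return
        partition_point = partition(unsorted_array, first, last)
        quick_sort(unsorted_array, first, partition_point - 1)
        quick_sort(unsorted_array, partition_point + 1, last)

    if __name__ == "__main__":
        data = [43, 3, 20, 4, 89, 77]      # arbitrary example values
        quick_sort(data, 0, len(data) - 1)
        print(data)                         # [3, 4, 20, 43, 77, 89]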
Heap sort

In the chapter on graphs and other algorithms, we implemented the (binary) heap data structure. Our implementation always made sure that, after an element has been removed from or added to the heap, the heap order property is maintained by using the sink and float helper methods.

The heap data structure can be used to implement the sorting algorithm called the heap sort. As a recap, let's create a simple heap with the following items:

    h = Heap()
    unsorted_list = [...]
    for i in unsorted_list:
        h.insert(i)
    print("Unsorted list: {}".format(unsorted_list))

The heap, h, is created and the elements in unsorted_list are inserted. After each method call to insert, the heap order property is restored by the subsequent call to the float method. After the loop has terminated, the smallest element will be at the top of our heap.

The number of elements in our heap is n. If we call the pop method on the heap object n times and store the actual elements being popped, we end up with a sorted list. After each pop operation, the heap is readjusted to maintain the heap order property.

The heap_sort method is as follows:

    class Heap:
        ...
        def heap_sort(self):
            sorted_list = []
            for node in range(self.size):
                n = self.pop()
                sorted_list.append(n)
            return sorted_list

The for loop simply calls the pop method self.size number of times. sorted_list will contain a sorted list of items after the loop terminates. The insert method is called n number of times and, together with the float method, the insert operation takes a worst case runtime of O(log n), as does the pop method. As such, this sorting algorithm incurs a worst case runtime of O(n log n).
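Because the chapter's Heap class is defined elsewhere in the book, here is a minimal sketch of the same pop-n-times idea using Python's built-in heapq module instead; it illustrates the technique, it is not the book's implementation, and the input values are arbitrary:

    import heapq

    def heap_sort(iterable):
        # Build a min-heap, then pop the smallest element once per item.
        heap = list(iterable)
        heapq.heapify(heap)                      # O(n) heap construction
        return [heapq.heappop(heap) for _ in range(len(heap))]   # n pops, O(log n) each

    print(heap_sort([4, 8, 7, 2, 9, 10, 5, 1, 3, 6]))
    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]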
Summary

In this chapter, we have explored a number of sorting algorithms. Quick sort performs much better than the other sorting algorithms discussed. Of all the algorithms discussed, quick sort preserves the index of the list that it sorts. We'll use this property in the next chapter as we explore the selection algorithms.
Selection Algorithms

One interesting set of algorithms related to finding elements in an unordered list of items is the selection algorithms. In studying them, we shall be answering questions that have to do with selecting the median of a set of numbers and selecting the ith-smallest or ith-largest element in a list, among other things.

In this chapter, we will cover the following topics: selection by sorting, randomized selection, and deterministic selection.

Selection by sorting

Items in a list may undergo statistical enquiries, such as finding the mean, median, and mode values. Finding the mean and mode values does not require the list to be ordered. However, to find the median in a list of numbers, the list must first be ordered. Finding the median requires one to find the element in the middle position of the ordered list. But what if we want to find the last-smallest item in the list, or the first-smallest item in the list?

To find the ith-smallest number in an unordered list of items, the index of where that item occurs is important to obtain. But because the elements have not been sorted, it is difficult to know whether the element at index 0 in the list is really the first-smallest number.

A pragmatic and obvious thing to do when dealing with unordered lists is to first sort the list. Once the list is sorted, one is assured that the zeroth element in the list will house the first-smallest element. Likewise, the last element in the list will house the last-smallest element.
Assume that perhaps the luxury of sorting before performing the search cannot be afforded. Is it possible to find the ith-smallest element without having to sort the list in the first place?

Randomized selection

In the previous chapter, we examined the quick sort algorithm. The quick sort algorithm allows us to sort an unordered list of items, but has a way of preserving the index of elements as the sorting algorithm runs. Generally speaking, the quick sort algorithm does the following:

1. Selects a pivot.
2. Partitions the unsorted list around the pivot.
3. Recursively sorts the two halves of the partitioned list using step 1 and step 2.

One interesting and important fact is that, after every partitioning step, the index of the pivot will not change, even after the list has become sorted. It is this property that enables us to work with a not-so-fully sorted list to obtain the ith-smallest number. Because randomized selection is based on the quick sort algorithm, it is generally referred to as quick select.

Quick select

The quick select algorithm is used to obtain the ith-smallest element in an unordered list of items, in this case numbers. We declare the main method of the algorithm as follows:

    def quick_select(array_list, left, right, k):
        split = partition(array_list, left, right)

        if split == k:
            return array_list[split]
        elif split < k:
            return quick_select(array_list, split + 1, right, k)
        else:
            return quick_select(array_list, left, split - 1, k)
The quick_select function takes as parameters the index of the first element in the list as well as that of the last. The ith element is specified by the third parameter, k. Values greater than or equal to zero are allowed, in such a way that when k is 0, we know to search for the first-smallest item in the list. Others prefer to have the k parameter map directly to the position the user is searching for, so that the first-smallest number corresponds to k = 1. It's all a matter of preference.

A method call to the partition function, split = partition(array_list, left, right), returns the split index. This index of the split array is the position in the unordered list where all elements from left up to split-1 are less than the element at array_list[split], while all elements from split+1 up to right are greater than it. When the partition function returns the split value, we compare it with k to find out whether the split corresponds to the kth item.

If split is less than k, then it means that the kth-smallest item should exist, or be found, between split+1 and right. In the preceding example, a split within an imaginary unordered list occurs at an index greater than k while we are searching for the second-smallest number. Since split < k yields False, a recursive call to return quick_select(array_list, left, split-1, k) is made so that the other half of the list is searched.
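As a usage sketch, here is a small driver; it assumes the guarded partition function presented in the next section is in scope, and the input values are arbitrary examples, not the chapter's:

    numbers = [43, 3, 20, 4, 89, 77]       # arbitrary example values
    k = 2                                   # zero-based: the third-smallest element
    print(quick_select(numbers, 0, len(numbers) - 1, k))   # 20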
If the split index was less than k, then we would make the call quick_select(array_list, split + 1, right, k).

Partition step

The partition step is exactly like we had in the quick sort algorithm. There are a couple of things worthy of note:

    def partition(unsorted_array, first_index, last_index):
        if first_index == last_index:
            return first_index

        pivot = unsorted_array[first_index]
        pivot_index = first_index
        index_of_last_element = last_index

        less_than_pivot_index = index_of_last_element
        greater_than_pivot_index = first_index + 1

        while True:
            while unsorted_array[greater_than_pivot_index] < pivot and greater_than_pivot_index < last_index:
                greater_than_pivot_index += 1
            while unsorted_array[less_than_pivot_index] > pivot and less_than_pivot_index >= first_index:
                less_than_pivot_index -= 1

            if greater_than_pivot_index < less_than_pivot_index:
                temp = unsorted_array[greater_than_pivot_index]
                unsorted_array[greater_than_pivot_index] = unsorted_array[less_than_pivot_index]
                unsorted_array[less_than_pivot_index] = temp
            else:
                break
        unsorted_array[pivot_index] = unsorted_array[less_than_pivot_index]
        unsorted_array[less_than_pivot_index] = pivot
        return less_than_pivot_index

An if statement has been inserted at the beginning of the function definition to cater for situations where first_index is equal to last_index. In such cases, it means there is only one element in our sublist. We therefore simply return one of the function parameters, in this case first_index.

The first element is always chosen as the pivot. This choice to make the first element the pivot is a random decision. It often does not yield a good split and, subsequently, a good partition. However, the ith element will eventually be found, even though the pivot is chosen at random.

The partition function returns the pivot index pointed to by less_than_pivot_index, as we saw in the preceding chapter.

From this point on, you will need to follow the program execution with pencil and paper to get a better feel for how the split variable is being used to determine the section of the list to search for the ith-smallest item.

Deterministic selection

The worst-case performance of a randomized selection algorithm is O(n^2). It is possible to improve on a section of the randomized selection algorithm to obtain a worst-case performance of O(n). This kind of algorithm is called deterministic selection.

The general approach to the deterministic algorithm is listed here:

1. Select a pivot: split the list of unordered items into groups of five elements each, sort and find the median of each group, and repeat these two steps recursively to obtain the true median of the list.
2. Use the true median to partition the list of unordered items.
3. Recurse into the part of the partitioned list that may contain the ith-smallest element.
Pivot selection

Previously, in the random selection algorithm, we selected the first element as the pivot. We shall replace that step with a sequence of steps that enables us to obtain the true or approximate median. This will improve the partitioning of the list about the pivot:

    def partition(unsorted_array, first_index, last_index):
        if first_index == last_index:
            return first_index
        else:
            nearest_median = median_of_medians(unsorted_array[first_index:last_index])
            index_of_nearest_median = get_index_of_nearest_median(unsorted_array, first_index, last_index, nearest_median)
            swap(unsorted_array, first_index, index_of_nearest_median)

        pivot = unsorted_array[first_index]
        pivot_index = first_index
        index_of_last_element = last_index

        less_than_pivot_index = index_of_last_element
        greater_than_pivot_index = first_index + 1

Let's now study the code for the partition function. The nearest_median variable stores the true or approximate median of a given list:

    def partition(unsorted_array, first_index, last_index):
        if first_index == last_index:
            return first_index
        else:
            nearest_median = median_of_medians(unsorted_array[first_index:last_index])

If the unsorted_array parameter has only one element, first_index and last_index will be equal. first_index is therefore returned anyway. However, if the list size is greater than one, we call the median_of_medians function with the section of the array demarcated by first_index and last_index. The return value is, yet again, stored in nearest_median.
Median of medians

The median_of_medians function is responsible for finding the approximate median of any given list of items. The function uses recursion to return the true median:

    def median_of_medians(elems):
        sublists = [elems[j:j+5] for j in range(0, len(elems), 5)]
        medians = []
        for sublist in sublists:
            medians.append(sorted(sublist)[len(sublist)//2])
        if len(medians) <= 5:
            return sorted(medians)[len(medians)//2]
        else:
            return median_of_medians(medians)

The function begins by splitting the list, elems, into groups of five elements each. This means that if elems contains n items, there will be roughly n/5 groups created by the statement sublists = [elems[j:j+5] for j in range(0, len(elems), 5)], with each containing exactly five elements or fewer:

    medians = []
    for sublist in sublists:
        medians.append(sorted(sublist)[len(sublist)//2])

An empty array is created and assigned to medians, which stores the median of each of the five-element arrays assigned to sublists.

The for loop iterates over the list of lists inside sublists. Each sublist is sorted, its median found and stored in the medians list.

The medians.append(sorted(sublist)[len(sublist)//2]) statement will sort the sublist and obtain the element stored in its middle index. This becomes the median of the five-element list. The use of an existing sorting function will not impact the performance of the algorithm, due to the list's small size.

We understood from the outset that we would not sort the list in order to find the ith-smallest element, so why employ Python's sorted method? Well, since we are sorting a very small list of five elements or fewer, the impact of that operation on the overall performance of the algorithm is considered negligible.
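A quick sanity check of the helper, using arbitrary values rather than the chapter's example list:

    values = list(range(1, 26))           # 1..25, arbitrary example input
    print(median_of_medians(values))      # 13 -- also the true median of this input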
Thereafter, if the medians list now contains five or fewer elements, we shall sort it and return the element located in its middle index:

    if len(medians) <= 5:
        return sorted(medians)[len(medians)//2]

If, on the other hand, the size of the list is greater than five, we recursively call the median_of_medians function again, supplying it with the list of medians stored in medians.

Take, for instance, a list of numbers. We can break this list into groups of five elements each with the code statement sublists = [elems[j:j+5] for j in range(0, len(elems), 5)] to obtain a list of five-element sublists. Sorting each of the five-element lists and obtaining their medians produces a new, much shorter list. If that list of medians is no more than five elements in size, we simply return the median of the sorted list; otherwise, we would have made another call to the median_of_medians function.

Partitioning step

Now that we have obtained the approximate median, the get_index_of_nearest_median function takes the bounds of the list indicated by the first and second parameters:

    def get_index_of_nearest_median(array_list, first, second, median):
        if first == second:
            return first
        else:
            return first + array_list[first:second].index(median)
Once again, we only return the first index if there is only one element in the list. The expression array_list[first:second] returns a slice indexed from 0 up to the size of the slice minus one. When we find the index of the median within that slice, we lose the portion of the list before first because of the new range indexing that the [first:second] slice introduces. Therefore, we must add whatever index is returned by array_list[first:second].index(median) to first in order to obtain the true index where the median was found:

    swap(unsorted_array, first_index, index_of_nearest_median)

We then swap the first element in unsorted_array with the element at index_of_nearest_median, using the swap function. The utility function to swap two array elements is shown here:

    def swap(array_list, first, second):
        temp = array_list[first]
        array_list[first] = array_list[second]
        array_list[second] = temp

Our approximate median is now stored at first_index of the unsorted list. The partition function continues as it would in the code of the quick select algorithm. After the partitioning step, the array is arranged about the approximate median. The main function of the deterministic selection algorithm is as follows:

    def deterministic_select(array_list, left, right, k):
        split = partition(array_list, left, right)

        if split == k:
            return array_list[split]
        elif split < k:
            return deterministic_select(array_list, split + 1, right, k)
        else:
            return deterministic_select(array_list, left, split - 1, k)
As you will have already observed, the main function of the deterministic selection algorithm looks exactly the same as its random selection counterpart. After the initial array_list has been partitioned about the approximate median, a comparison with the kth element is made. If split is less than k, then a recursive call to deterministic_select(array_list, split + 1, right, k) is made. This will look for the kth element in that half of the array. Otherwise, the call deterministic_select(array_list, left, split - 1, k) is made.

Summary

This chapter has examined ways to answer the question of how to find the ith-smallest element in a list. The trivial solution of simply sorting the list before performing the operation of finding the ith-smallest element has been explored.

There is also the possibility of not necessarily sorting the list before we can determine the ith-smallest element. The random selection algorithm allows us to modify the quick sort algorithm to determine the ith-smallest element.

To further improve upon the random selection algorithm, so that we can obtain a time complexity of O(n), we embark on finding the median of medians to enable us to find a good split during partitioning.

In the next chapter, we will explore the world of strings. We will learn how to efficiently store and manipulate large amounts of text. Data structures and common string operations will be covered too.
Design Techniques and Strategies

In this chapter, we will take a step back and look into the broader topics in computer algorithm design.

As your experience with programming grows, certain patterns begin to become apparent to you. And, just as with any other skilled trade, you cannot do without some techniques and principles to achieve your ends. In the world of algorithms, there is a plethora of these techniques and design principles. A working knowledge and mastery of these techniques is required to tackle harder problems in the field.

We will look at the ways in which algorithms are generally classified. Other design techniques will be treated alongside the implementation of some of the algorithms.

The aim of this chapter is not to make you a pro at algorithm design and strategy, but to unveil the large expanse of algorithms in a few pages.

Classification of algorithms

There exist a number of classification schemes that are based on the goal that an algorithm has to achieve. In the previous chapters, we implemented a number of algorithms. One question that may arise is: do these algorithms share the same form? If yes, what are the similarities and characteristics being used as the basis? If no, can the algorithms be grouped into classes?

These are the questions we will examine as we tackle the major modes of classifying algorithms.
Classification by implementation

When translating a series of steps or processes into a working algorithm, there are a number of forms that it may take. The heart of the algorithm may employ some of the assets described further in this section.

Recursion

Recursive algorithms are the ones that make calls to themselves until a certain condition is satisfied. Some problems are more easily expressed by implementing their solution through recursion. One classic example is the Towers of Hanoi. There are also different types of recursive algorithms, some of which include single and multiple recursion, indirect recursion, anonymous recursion, and generative recursion.

An iterative algorithm, on the other hand, uses a series of steps or a repetitive construct to formulate a solution. This repetitive construct could be a simple while loop or any other kind of loop. Iterative solutions also come to mind more easily than their recursive counterparts.

Logical

One implementation of an algorithm is to express it as controlled logical deduction. This logic component is comprised of the axioms that will be used in the computation. The control component determines the manner in which deduction is applied to the axioms. This is expressed in the form: algorithm = logic + control. It forms the basis of the logic programming paradigm.

The logic component determines the meaning of the algorithm. The control component only affects its efficiency. Without modifying the logic, the efficiency can be improved by improving the control component.

Serial or parallel

The RAM model of most computers allows for the assumption that computing is done one instruction at a time.

Serial algorithms, also known as sequential algorithms, are algorithms that are executed sequentially. Execution commences from start to finish without any other execution procedure.
To be able to process several instructions at once, a different model or computing technique is required. Parallel algorithms perform more than one operation at a time. In the PRAM model, there are serial processors that share a global memory. The processors can also perform various arithmetic and logical operations in parallel. This enables the execution of several instructions at one time.

Parallel and distributed algorithms divide a problem into subproblems among their processors and then collect the results. Some sorting algorithms can be efficiently parallelized, while iterative algorithms are generally parallelizable.

Deterministic versus nondeterministic algorithms

Deterministic algorithms will produce the same output, without fail, every time the algorithm is run with the same input. There are some sets of problems that are so complex in the design of their solutions that expressing their solution in a deterministic way can be a challenge.

Nondeterministic algorithms can change the order of execution, or some internal subprocess, in a way that leads to a change in the final result any time the algorithm is run. As such, with every run of a nondeterministic algorithm, the output of the algorithm can be different. For instance, an algorithm that makes use of a probabilistic value will yield different outputs on successive executions, depending on the value of the random number generated.

Classification by complexity

To determine the complexity of an algorithm is to try to estimate how much space (memory) and time is used overall during the computation or program execution. The chapter Principles of Algorithm Design presents a more comprehensive coverage of the subject matter on complexity. We will summarize what we learned there here.

Complexity curves

Now consider a problem of magnitude n. To determine the time complexity of an algorithm, we denote it with T(n). The value may fall under O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3), or O(2^n). Depending on the steps an algorithm performs, the time complexity may or may not be affected. The notation O(n) captures the growth rate of an algorithm.
Let's now examine a practical scenario. By which means do we arrive at the conclusion that the bubble sort algorithm is slower than the quick sort algorithm? Or, in general, how do we measure the efficiency of one algorithm against another? Well, we can compare the Big O of any number of algorithms to determine their efficiency. It is this approach that gives us a time measure, or the growth rate, which charts the behavior of the algorithm as n gets bigger.

Here is a graph of the different runtimes that an algorithm's performance may fall under. In ascending order, the list of runtimes from better to worse is given as O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3), and O(2^n).

Classification by design

In this section, we will present the categories of algorithms based on the design of the various algorithms used in solving problems.

A given problem may have a number of solutions. When the algorithms of these solutions are analyzed, it becomes evident that some implement a certain technique or pattern. It is these techniques that we will discuss here, and in a later section, in greater detail.
Divide and conquer

This approach to problem-solving is just as its name suggests. To solve (conquer) certain problems, the algorithm divides the problem into subproblems identical to the original problem that can easily be solved. Solutions to the subproblems are combined in such a way that the final solution is the solution to the original problem.

The way in which the problems are broken down into smaller chunks is mostly by recursion. We will examine this technique in detail in the upcoming sections. Some algorithms that use this technique include merge sort, quick sort, and binary search.

Dynamic programming

This technique is similar to divide and conquer, in that a problem is broken down into smaller problems. In divide and conquer, each subproblem has to be solved before its results can be used to solve bigger problems. By contrast, dynamic programming does not recompute the solution to an already encountered subproblem. Rather, it uses a remembering technique to avoid the recomputation.

Dynamic programming problems have two characteristics: optimal substructure and overlapping subproblems. We will talk more on this in the next section.

Greedy algorithms

For a certain category of problems, determining the best solution is really difficult. To make up for the lack of an optimal solution, we resort to an approach where we select, out of a bunch of options or choices, the closest solution that is the most promising in obtaining a solution.

Greedy algorithms are much easier to conceive because the guiding rule is for one to always select the solution that yields the most benefit and to continue doing that, hoping to reach a perfect solution. This technique aims to find a globally optimal final solution by making a series of locally optimal choices. The locally optimal choice seems to lead to the solution. In real life, however, most of those locally optimal choices are suboptimal. As such, most greedy algorithms have a poor asymptotic time complexity.
Technical implementation

Let's dig into the implementation of some of the theoretical programming techniques that we discussed previously. We will start with dynamic programming.

Dynamic programming

As we have already described, in this approach we divide a problem into smaller subproblems. In finding the solutions to the subproblems, care is taken not to recompute any of the previously encountered subproblems.

This sounds a bit like recursion, but things are a little broader here. A problem may lend itself to being solved by using dynamic programming, but will not necessarily take the form of making recursive calls.

A property of a problem that makes it an ideal candidate for being solved with dynamic programming is that it should have an overlapping set of subproblems.

Once we realize that the form of a subproblem has repeated itself during computation, we need not compute it again. Instead, we return the pre-computed result of that previously encountered subproblem.

To avoid a situation where we have to re-evaluate a subproblem, we need an efficient way in which we can store the results of each subproblem. The following two techniques are readily available.

Memoization

This technique starts from the initial problem set and divides it into small subproblems. After the solution to a subproblem has been determined, we store the result of that particular subproblem. In the future, when this subproblem is encountered, we only return its pre-computed result.

Tabulation

In tabulation, we settle on an approach where we fill a table of solutions to subproblems and then combine them to solve bigger problems.
The Fibonacci series

We will use the Fibonacci series to illustrate both the memoization and tabulation techniques of generating the series.

The memoization technique

Let's generate the Fibonacci series up to the fifth term. A recursive style of program to generate the sequence is as follows:

    def fib(n):
        if n <= 2:
            return 1
        else:
            return fib(n-1) + fib(n-2)

The code is very simple, but a little tricky to read because of the recursive calls being made that end up solving the problem.

When the base case is met, the fib() function returns 1. If n is equal to or less than 2, the base case is met.

If the base case is not met, we will call the fib() function again, this time supplying the first call with n-1 and the second with n-2:

    return fib(n-1) + fib(n-2)

The layout of the strategy to solve the ith term in the Fibonacci sequence is as follows.
Careful observation of the preceding tree shows some interesting patterns. The call to fib(1) happens twice. The call to fib(2) happens thrice. Also, the call to fib(3) happens twice.

The return values of the function calls, for all the times that fib(1) was called, never change. The same goes for fib(2) and fib(3). Computational time is therefore wasted, since the same result is returned for function calls with the same parameters.

These repeated calls to a function with the same parameters and output suggest that there is an overlap. Certain computations are reoccurring down in the smaller subproblems.

A better approach would be to store the result of the computation of fib(1) the first time it is encountered. This also applies to fib(2) and fib(3). Later, any time we encounter a call to fib(1), fib(2), or fib(3), we simply return their respective results.

The diagram of our fib calls will now look like this. We have now completely eliminated the need to recompute fib(1), fib(2), and fib(3). This typifies the memoization technique, wherein, in breaking a problem into its subproblems, there is no recomputation of overlapping calls to functions.

The overlapping function calls in our Fibonacci example are fib(1), fib(2), and fib(3):

    def dyna_fib(n, lookup):
        if n <= 2:
            lookup[n] = 1

        if lookup[n] is None:
            lookup[n] = dyna_fib(n-1, lookup) + dyna_fib(n-2, lookup)

        return lookup[n]
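As a brief aside, Python's standard library offers the same memoization idea through functools.lru_cache; this is a complementary sketch, not part of the book's dyna_fib implementation, which is explained next:

    from functools import lru_cache

    @lru_cache(maxsize=None)     # cache the result for every distinct argument
    def fib_cached(n):
        if n <= 2:
            return 1
        return fib_cached(n - 1) + fib_cached(n - 2)

    print(fib_cached(35))        # 9227465, returned almost instantly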
To create a list of 1,000 elements, we do the following and pass it to the lookup parameter of the dyna_fib function:

    map_set = [None] * 1000

This list will store the values of the computations of the various calls to the dyna_fib() function:

    if n <= 2:
        lookup[n] = 1

Any call to dyna_fib() with n less than or equal to 2 will return 1. When dyna_fib(1) is evaluated, we store the value at index 1 of map_set.

Write the condition for lookup[n] as the following:

    if lookup[n] is None:
        lookup[n] = dyna_fib(n-1, lookup) + dyna_fib(n-2, lookup)

We pass lookup so that it can be referenced when evaluating the subproblems. The results of the calls to dyna_fib(n-1, lookup) and dyna_fib(n-2, lookup) are stored in lookup[n].

When we run our updated implementation of the function to find the nth term of the Fibonacci series, we realize that there is a considerable improvement. This implementation runs much faster than our initial implementation. Supply the same large value of n to both implementations and witness the difference in the execution speed.

The algorithm sacrificed space complexity for time, because of the use of memory in storing the results of the function calls.

The tabulation technique

There is a second technique in dynamic programming, which involves the use of a table of results, or a matrix in some cases, to store the results of computations for later use.

This approach solves the bigger problem by first working out a route to the final solution. In the case of the fib() function, we develop a table with the values of fib(1) and fib(2) predetermined. Based on these two values, we will work our way up to fib(n):

    def fib(n):
        results = [1, 1]
        for i in range(2, n):
            results.append(results[i-1] + results[i-2])
        return results[-1]
The results variable starts with the values 1 and 1 at indexes 0 and 1. These represent the return values of fib(1) and fib(2). To calculate the values of the fib() function for n higher than 2, we simply run the for loop, which appends the sum results[i-1] + results[i-2] to the list of results.

Divide and conquer

This programming approach to problem-solving emphasizes the need to break down a problem into smaller problems of the same type or form as the original problem. These subproblems are solved and combined to solve the original problem.

The following three steps are associated with this kind of programming.

Divide

To divide means to break down an entity or problem. Here, we devise the means to break down the original problem into subproblems. We can achieve this through iterative or recursive calls.

Conquer

It is impossible to continue to break the problems into subproblems indefinitely. At some point, the smallest indivisible problem will return a solution. Once this happens, we can reverse our thought process and say that, if we know the solution to the smallest problem possible, we can obtain the final solution to the original problem.

Merge

To obtain the final solution, we need to combine the smaller solutions to the smaller problems in order to solve the bigger problem.

There are other variants of the divide and conquer approach, such as merge and combine, and conquer and solve. Algorithms that make use of the divide and conquer principle include merge sort, quick sort, and Strassen's matrix multiplication.

We will go through an implementation of the merge sort, as we started earlier in Principles of Algorithm Design.
Merge sort

The merge sort algorithm is based on the divide and conquer rule. Given a list of unsorted elements, we split the list into approximately two halves. We continue to divide the two halves recursively. After a while, the sublists created as a result of the recursive calls will contain only one element each. At that point, we begin to merge the solutions in the conquer, or merge, step:

    def merge_sort(unsorted_list):
        if len(unsorted_list) == 1:
            return unsorted_list

        mid_point = int(len(unsorted_list) / 2)
        first_half = unsorted_list[:mid_point]
        second_half = unsorted_list[mid_point:]

        half_a = merge_sort(first_half)
        half_b = merge_sort(second_half)

        return merge(half_a, half_b)

Our implementation starts by accepting the list of unsorted elements into the merge_sort function. The if statement is used to establish the base case: if there is only one element in unsorted_list, we simply return that list. If there is more than one element in the list, we find the approximate middle using mid_point = int(len(unsorted_list) / 2).

Using this mid_point, we divide the list into two sublists, namely first_half and second_half:

    first_half = unsorted_list[:mid_point]
    second_half = unsorted_list[mid_point:]

A recursive call is made by passing the two sublists to the merge_sort function again:

    half_a = merge_sort(first_half)
    half_b = merge_sort(second_half)

Now enters the merge step. When half_a and half_b have been passed their values, we call the merge function that will merge, or combine, the two solutions stored in half_a and half_b, which are lists:

    def merge(first_sublist, second_sublist):
        i = j = 0
        merged_list = []
        while i < len(first_sublist) and j < len(second_sublist):
            if first_sublist[i] < second_sublist[j]:
                merged_list.append(first_sublist[i])
                i += 1
            else:
                merged_list.append(second_sublist[j])
                j += 1

        while i < len(first_sublist):
            merged_list.append(first_sublist[i])
            i += 1

        while j < len(second_sublist):
            merged_list.append(second_sublist[j])
            j += 1

        return merged_list

The merge function takes the two lists we want to merge together, first_sublist and second_sublist. The i and j variables are initialized to 0 and are used as pointers to tell us where in the two lists we are with respect to the merging process. The final merged_list will contain the merged list:

    while i < len(first_sublist) and j < len(second_sublist):
        if first_sublist[i] < second_sublist[j]:
            merged_list.append(first_sublist[i])
            i += 1
        else:
            merged_list.append(second_sublist[j])
            j += 1

The while loop starts comparing the elements in first_sublist and second_sublist. The if statement selects the smaller of the two, first_sublist[i] or second_sublist[j], and appends it to merged_list. The i or j index is incremented to reflect the point we are at with the merging step. The while loop stops as soon as either sublist is exhausted.

There may be elements left behind in either first_sublist or second_sublist. The last two while loops make sure that those elements are added to merged_list before it is returned.

The last call to merge(half_a, half_b) will return the sorted list.
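A quick usage sketch, with arbitrary example values:

    print(merge_sort([4, 6, 8, 5, 7, 11, 40]))
    # [4, 5, 6, 7, 8, 11, 40]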
Let's give the algorithm a dry run by playing out the last step of merging two sorted sublists. The step-by-step table in the original text traces first_sublist (which uses the index i), second_sublist (which uses the index j), and merged_list as elements are compared and appended; the text in bold represents the current item referenced in the loops. At the point where one sublist is exhausted, the third while loop in the merge function kicks in to move the remaining elements into merged_list. The returned merged_list will contain the fully sorted list.

Greedy algorithms

As we said earlier, greedy algorithms make decisions that yield the largest benefit in the interim. It is the hope of this technique that, by making these high-yielding choices, the total path will lead to an overall good solution or end.

Examples of greedy algorithms include Prim's algorithm for finding the minimum spanning tree, the knapsack problem, and the travelling salesman problem, just to mention a few.

Coin-counting problem

Let's examine a very simple use of the greedy technique. In some arbitrary country, we have three denominations of the local currency (GHC). Given an amount of change to provide, we may want to find the least possible number of notes needed to make up that amount. Using the greedy approach, we pick the largest denomination first to divide the amount, because it yields the best possible means by which we can reduce the amount into lower denominations.
The remainder cannot be divided by the next denomination, so we try the smallest denomination and realize that we can multiply it up to cover the remainder. At the end of the day, the least possible number of notes our greedy rule finds is one note of the largest denomination plus a few of the smallest. So far, our greedy algorithm seems to be doing pretty well. A function that returns the respective denominations is as follows:

    def basic_small_change(denom, total_amount):
        sorted_denominations = sorted(denom, reverse=True)
        number_of_denoms = []
        for i in sorted_denominations:
            div = total_amount // i
            total_amount = total_amount % i
            if div > 0:
                number_of_denoms.append((i, div))
        return number_of_denoms

This greedy algorithm always starts by using the largest denomination possible. denom is a list of denominations. sorted(denom, reverse=True) will sort the list in reverse order so that we can obtain the largest denomination at index 0. Now, starting from index 0 of the sorted list of denominations, sorted_denominations, we iterate and apply the greedy technique:

    for i in sorted_denominations:
        div = total_amount // i
        total_amount = total_amount % i
        if div > 0:
            number_of_denoms.append((i, div))

The loop runs through the list of denominations. Each time the loop runs, it obtains the quotient, div, by dividing total_amount by the current denomination, i. total_amount is updated to store the remainder for further processing. If the quotient is greater than 0, we store it, together with the denomination, in number_of_denoms.

Unfortunately, there are instances where our algorithm fails. For certain amounts, our algorithm returns one note of the largest denomination plus several of the smallest, whereas a combination of the middle denominations would have used fewer notes, which is the optimal solution.
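To see both the normal case and the failure case concretely, here is a small driver for basic_small_change; the denominations and amounts are hypothetical values chosen for illustration, not necessarily the ones used in the chapter's figures:

    # Hypothetical denominations and amounts, for illustration only.
    denominations = [1, 5, 8]

    print(basic_small_change(denominations, 17))
    # [(8, 2), (1, 1)] -- two 8s and one 1: optimal for this amount

    print(basic_small_change(denominations, 12))
    # [(8, 1), (1, 4)] -- five notes, although 5 + 5 + 1 + 1 needs only four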
A better greedy algorithm is presented here. This time, the function returns a list of tuples that allows us to investigate the better results:

    def optimal_small_change(denom, total_amount):
        sorted_denominations = sorted(denom, reverse=True)
        series = []
        for j in range(len(sorted_denominations)):
            term = sorted_denominations[j:]
            number_of_denoms = []
            local_total = total_amount
            for i in term:
                div = local_total // i
                local_total = local_total % i
                if div > 0:
                    number_of_denoms.append((i, div))
            series.append(number_of_denoms)
        return series

The outer for loop enables us to limit the denominations from which we find our solution:

    for j in range(len(sorted_denominations)):
        term = sorted_denominations[j:]

Assuming that sorted_denominations holds three denominations, slicing it with [j:] gives us successive sublists, each dropping the largest remaining denomination, from which we try to get the right combination to create the change.

Dijkstra's shortest path algorithm

We now introduce and study Dijkstra's algorithm. This algorithm is an example of a greedy algorithm. It finds the shortest distance from a source to all other nodes, or vertices, in a graph. By the end of this section, you will come to understand why it is classified as a greedy algorithm.
Consider the following graph. By inspection, the first answer that comes to mind to the question of finding the shortest path between two nodes is the direct edge connecting them. From the diagram, it would seem that the straight path between the two nodes would also yield the shortest route. But the assumption that the edge directly connecting two nodes is the shortest route does not always hold true.

It is this shortsighted approach of selecting the first option when solving a problem that gives the algorithm its name and class. Having found the supposed shortest route or distance, the algorithm continues to refine and improve its result: other paths between the two nodes may prove to be shorter than the initial pick. For instance, travelling through one intermediate node incurs a smaller total distance than the direct edge, but the route through nodes E and F is shorter still.

We will implement the shortest path algorithm with a single source. Our result should help us determine the shortest path from the origin, which in this case is A, to any other node in the graph.

In order to come up with an algorithm to help us find the shortest path in a graph, let's first solve the problem by hand. Thereafter, we will present the working solution in Python.

In the chapter on graphs, we saw how we could represent a graph with an adjacency list. We will use it, with a slight modification, to enable us to capture the distance on every edge. A table will also be used to keep track of the shortest distance from the source in the graph to any other node. A Python dictionary will be used to implement this table. Here is one such table.
The table has one row per node and two columns: the shortest distance from the source, and the previous node. Initially, the previous node entry of every node is None.

The adjacency list for the diagram and table is built as a dictionary of dictionaries: graph = dict(), after which each of graph['A'] through graph['F'] is assigned a nested dictionary that maps every adjacent node to the distance of the connecting edge (the actual edge weights are shown in the diagram).

The nested dictionary holds the distances and adjacent nodes. This table forms the basis for our effort as we try to solve the problem at hand. When the algorithm starts, we have no idea what the shortest distance from the source (A) to any of the nodes is. To play it safe, we set the values in that column to infinity, with the exception of node A. From the starting node, the distance covered from node A to node A is 0, so we can safely use this value as the shortest distance from node A to itself. No prior nodes have been visited when the algorithm begins. We therefore mark the previous node column of node A as None.

In step 1 of the algorithm, we start by examining the adjacent nodes of node A. To find the shortest distance from node A to node B, we need to find the distance from the start node to the previous node of node B, which happens to be node A, and add it to the distance from node A to node B. We do this for the other adjacent nodes of A, which are B, E, and D. Using the adjacent node B as an example, the distance from the start node to the previous node is 0. The distance from the previous node to the current node (B) is the weight of the edge between A and B. This sum is compared with the data in the shortest distance column of node B. Since the sum is less than infinity, we replace infinity with the smaller of the two.
Any time the shortest distance of a node is replaced by a lesser value, we need to update the previous node column too. At the end of the first step, our table records the tentative distances from A to its neighbours, each with A as the previous node, while the remaining nodes stay at infinity.

At this point, node A is considered visited. As such, we add node A to the list of visited nodes. In the table, we show that node A has been visited by making the text bold and appending an asterisk to it.

In the second step, we find the node with the shortest distance, using our table as a guide. Node E, with its small value, has the shortest distance. This is what we can infer from the table about node E: to get to node E, we must visit node A and cover the distance of the edge A-E; from node A, we cover a distance of 0, since it is the starting node itself.

The adjacent nodes of node E are A and F. But node A has already been visited, so we only consider node F. To find the shortest route or distance to node F, we must find the distance from the starting node to node E and add it to the distance between nodes E and F. We can find the distance from the starting node to node E by looking at the shortest distance column of node E. The distance from node E to node F can be obtained from the adjacency list we developed in Python earlier in this section. These two values sum to something less than infinity. Remember, we are only examining the adjacent nodes of E. Since there are no more adjacent nodes of node E, we mark node E as visited. Our updated table will have the following values.
At this point, we initiate another step. Among the unvisited nodes, the smallest value in the shortest distance column belongs to node B. We choose B instead of F purely on an alphabetical basis, because both nodes have the same shortest distance from the starting node. The adjacent nodes of B are A and C, but node A has already been visited. Using the rule we established earlier, the shortest distance from A to C is the distance from the starting node to node B plus the distance from node B to node C. Since this sum is less than infinity, we update the shortest distance of node C and update its previous node column with node B.

Now B is also marked as visited. The new state of the table reflects these updates.
The node with the shortest distance that is yet unvisited is examined next. One of its adjacent nodes has already been visited, so we focus on finding the shortest distance from the starting node to its other neighbour. We calculate this distance by adding the distance from the starting node to the current node to the distance between the current node and that neighbour. This sum is less than the value currently recorded, so we update the shortest distance column with it and replace the entry in the previous node column with the current node. The node is then marked as visited, and the table is updated accordingly.

Now only two unvisited nodes remain. In alphabetical order, we choose the first one to examine, because both nodes have the same shortest distance from the starting node. However, all of its adjacent nodes have been visited. Thus, we have nothing to do but mark it as visited; the table remains unchanged at this point.

Lastly, we take the final node and find that all its adjacent nodes have been visited too. We only mark it as visited, and the table remains unchanged.
Let's verify this table with our graph. From the graph, we know the shortest distance from A to F, and the shortest distance from source column for node F agrees with it. The table also tells us that, to get to node F, we need to visit node E, and from E, node A, which is our starting node. This is actually the shortest path.

We begin the program for finding the shortest distance by representing the table that enables us to track the changes in our graph. For the diagram we used, here is a dictionary representation of the table:

    table = dict()
    table = {
        'A': [0, None],
        'B': [float("inf"), None],
        'C': [float("inf"), None],
        'D': [float("inf"), None],
        'E': [float("inf"), None],
        'F': [float("inf"), None],
    }

The initial state of the table uses float("inf") to represent infinity. Each key in the dictionary maps to a list. At the first index of the list is stored the shortest distance from the source. At the second index is stored the previous node:

    DISTANCE = 0
    PREVIOUS_NODE = 1
    INFINITY = float('inf')

To avoid the use of magic numbers, we use the preceding constants. The shortest path column's index is referenced by DISTANCE. The previous node column's index is referenced by PREVIOUS_NODE.

Now all is set for the main function. It will take the graph, represented by the adjacency list, the table, and the starting node as parameters:

    def find_shortest_path(graph, table, origin):
        visited_nodes = []
        current_node = origin
        starting_node = origin

We keep the list of visited nodes in the list visited_nodes. The current_node and starting_node variables will both point to the node in the graph we choose to make our starting node. The origin value is the reference point for all other nodes with respect to finding the shortest path.
The heavy lifting of the whole process is accomplished by the use of a while loop:

    while True:
        adjacent_nodes = graph[current_node]
        if set(adjacent_nodes).issubset(set(visited_nodes)):
            # Nothing here to do. All adjacent nodes have been visited.
            pass
        else:
            unvisited_nodes = set(adjacent_nodes).difference(set(visited_nodes))
            for vertex in unvisited_nodes:
                distance_from_starting_node = get_shortest_distance(table, vertex)
                if distance_from_starting_node == INFINITY and current_node == starting_node:
                    total_distance = get_distance(graph, vertex, current_node)
                else:
                    total_distance = get_shortest_distance(table, current_node) + \
                                     get_distance(graph, current_node, vertex)

                if total_distance < distance_from_starting_node:
                    set_shortest_distance(table, vertex, total_distance)
                    set_previous_node(table, vertex, current_node)

        visited_nodes.append(current_node)
        if len(visited_nodes) == len(table.keys()):
            break
        current_node = get_next_node(table, visited_nodes)

Let's break down what the while loop is doing. In the body of the while loop, we obtain the adjacent nodes of the current node we want to investigate with the line adjacent_nodes = graph[current_node]. current_node should have been set prior to this. The if statement is used to find out whether all the adjacent nodes of current_node have been visited. When the while loop is executed the first time, current_node will contain A, adjacent_nodes will contain nodes B, D, and E, and visited_nodes will be empty.

If all adjacent nodes have been visited, we only move on to the statements further down the program. Else, we begin a whole other step.
The statement set(adjacent_nodes).difference(set(visited_nodes)) returns the nodes that have not been visited. The loop iterates over this list of unvisited nodes:

    distance_from_starting_node = get_shortest_distance(table, vertex)

The helper method get_shortest_distance(table, vertex) will return the value stored in the shortest distance column of our table, for one of the unvisited nodes referenced by vertex:

    if distance_from_starting_node == INFINITY and current_node == starting_node:
        total_distance = get_distance(graph, vertex, current_node)

When we are examining the adjacent nodes of the starting node, distance_from_starting_node == INFINITY and current_node == starting_node will evaluate to True, in which case we only have to get the distance between the starting node and vertex by referencing the graph:

    total_distance = get_distance(graph, vertex, current_node)

The get_distance method is another helper method we use to obtain the value (distance) of the edge between vertex and current_node.

If the condition fails, then we assign to total_distance the sum of the distance from the starting node to current_node and the distance between current_node and vertex.

Once we have our total distance, we need to check whether this total_distance is less than the existing data in the shortest distance column in our table. If it is less, then we use the two helper methods to update that row:

    if total_distance < distance_from_starting_node:
        set_shortest_distance(table, vertex, total_distance)
        set_previous_node(table, vertex, current_node)

At this point, we add current_node to the list of visited nodes:

    visited_nodes.append(current_node)

If all nodes have been visited, then we must exit the while loop. To check whether all the nodes have been visited, we compare the length of the visited_nodes list to the number of keys in our table. If they have become equal, we simply exit the while loop.

The helper method get_next_node is used to fetch the next node to visit. It is this method that helps us find the node with the minimum value in the shortest distance from the starting node column, using our table.
The whole method ends by returning the updated table. To print the table, we use the following statements:

    shortest_distance_table = find_shortest_path(graph, table, 'A')
    for k in sorted(shortest_distance_table):
        print("{} - {}".format(k, shortest_distance_table[k]))

The output of the preceding statements lists each node together with its shortest distance from A and its previous node on that shortest path; node A maps to [0, None], while every other node maps to its computed distance and the node it was reached from.

For the sake of completeness, let's find out what the helper methods are doing:

    def get_shortest_distance(table, vertex):
        shortest_distance = table[vertex][DISTANCE]
        return shortest_distance

The get_shortest_distance function returns the value stored in the zeroth index of our table. At that index, we always store the shortest distance from the starting node up to vertex. The set_shortest_distance function only sets this value, as follows:

    def set_shortest_distance(table, vertex, new_distance):
        table[vertex][DISTANCE] = new_distance

When we update the shortest distance of a node, we update its previous node using the following method:

    def set_previous_node(table, vertex, previous_node):
        table[vertex][PREVIOUS_NODE] = previous_node

Remember that the constant PREVIOUS_NODE equals 1. In the table, we store the value of previous_node at table[vertex][PREVIOUS_NODE].

To find the distance between any two nodes, we use the get_distance function:

    def get_distance(graph, first_vertex, second_vertex):
        return graph[first_vertex][second_vertex]
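For readers who want something runnable in one piece, here is a compact sketch of the same single-source shortest path idea using Python's heapq priority queue. It is an alternative formulation, not the book's find_shortest_path, and the graph below uses illustrative edge weights that are not necessarily those of the chapter's figure. The final helper used by the book's version, get_next_node, follows after this sketch:

    import heapq

    def dijkstra(graph, source):
        # table: node -> [shortest distance from source, previous node]
        table = {node: [float('inf'), None] for node in graph}
        table[source][0] = 0
        queue = [(0, source)]
        visited = set()

        while queue:
            distance, node = heapq.heappop(queue)
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph[node].items():
                new_distance = distance + weight
                if new_distance < table[neighbour][0]:
                    table[neighbour] = [new_distance, node]
                    heapq.heappush(queue, (new_distance, neighbour))
        return table

    # Illustrative graph; the weights are example values only.
    graph = {
        'A': {'B': 5, 'D': 9, 'E': 2},
        'B': {'A': 5, 'C': 2},
        'C': {'B': 2, 'D': 3},
        'D': {'A': 9, 'C': 3, 'F': 2},
        'E': {'A': 2, 'F': 3},
        'F': {'E': 3, 'D': 2},
    }
    for node, (dist, prev) in sorted(dijkstra(graph, 'A').items()):
        print(node, dist, prev)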
The last helper method is the get_next_node function:

    def get_next_node(table, visited_nodes):
        unvisited_nodes = list(set(table.keys()).difference(set(visited_nodes)))
        assumed_min = table[unvisited_nodes[0]][DISTANCE]
        min_vertex = unvisited_nodes[0]
        for node in unvisited_nodes:
            if table[node][DISTANCE] < assumed_min:
                assumed_min = table[node][DISTANCE]
                min_vertex = node
        return min_vertex

The get_next_node function resembles a function to find the smallest item in a list. The function starts off by finding the unvisited nodes in our table, using visited_nodes to obtain the difference between the two sets. The very first item in the list of unvisited_nodes is assumed to have the smallest value in the shortest distance column of table. If a lesser value is found while the for loop runs, min_vertex is updated. The function then returns min_vertex as the unvisited vertex, or node, with the smallest shortest distance from the source.

The worst-case running time of Dijkstra's algorithm is O(|E| + |V| log |V|), where |V| is the number of vertices and |E| is the number of edges.

Complexity classes

The problems that computer algorithms try to solve fall within a range of difficulty by which their solutions are arrived at. In this section, we will discuss the complexity classes P, NP, NP-complete, and NP-hard.

P versus NP

The advent of computers has sped up the rate at which certain tasks are performed. In general, computers are good at perfecting the art of calculation and all problems that can be reduced to a set of mathematical computations.

However, this assertion is not entirely true. There are some classes of problems that just take an enormous amount of time for the computer to make a sound guess, let alone find the right solution.
In computer science, the class of problems that computers can solve within polynomial time using a step-wise process of logical steps is called the P-type problems, where P stands for polynomial. These are relatively easy to solve.

Then there is another class of problems that is considered very hard to solve. The word "hard problem" is used to refer to the way in which problems increase in difficulty when trying to find a solution. However, the good thing is that, despite the fact that these problems have a high growth rate of difficulty, it is possible to determine whether a proposed solution solves the problem in polynomial time. These are the NP-type problems. NP here stands for nondeterministic polynomial time.

Now the million dollar question is: does P = NP?

The proof for P = NP is one of the Millennium Prize Problems from the Clay Mathematics Institute, which attract a $1,000,000 prize for a correct solution.

The travelling salesman problem is an example of an NP-type problem. The problem statement says: given that there are n cities in some country, find the shortest route between all the cities, thus making the trip cost-effective. When the number of cities is small, this problem can be solved in a reasonable amount of time. However, when the number of cities rises, the time taken by the computer becomes remarkably long.

A lot of computer systems and cybersecurity is based on the RSA encryption algorithm. The strength of the algorithm and its security is due to the fact that it is based on the integer factorization problem, which is an NP-type problem.

Finding the prime factors of a number composed of many digits is very difficult. When two large prime numbers are multiplied, a large non-prime number is obtained with only two large prime factors. The factorization of this number is where many cryptographic algorithms borrow their strength.
all p-type problems are subsets of np problems. this means that any problem that can be solved in polynomial time can also be verified in polynomial time. but the question, is p = np?, investigates whether problems that can be verified in polynomial time can also be solved in polynomial time. in particular, if they are equal, it would mean that problems that are solved by trying a number of possible solutions can be solved without the need to actually try all the possible solutions, invariably creating some sort of shortcut proof. such a proof, when finally discovered, will definitely have serious consequences in cryptography, game theory, mathematics, and many other fields.
np-hard a problem is np-hard if all other problems in np are polynomial-time reducible, or mappable, to it. np-complete a problem is considered np-complete if it is first of all np-hard and is also found in the np class. summary in this last chapter, we looked at the theories that support the computer science field. without the use of too much mathematical rigor, we saw some of the main categories into which algorithms are classified. other design techniques in the field, such as divide and conquer, dynamic programming, and greedy algorithms, were also discussed, along with sample implementations. lastly, one of the outstanding problems yet to be solved in the field of mathematics was tackled. we saw how a proof for p = np, if such a proof is ever discovered, will definitely be a game-changer in a number of fields.
implementations, applications, and tools learning about algorithms without any real-life application remains a purely academic pursuit. in this chapter, we will explore data structures and algorithms that are shaping our world. one of the golden nuggets of this age is the abundance of data. e-mails, phone numbers, text, and image documents contain large amounts of data. in this data is found valuable information that makes the data become more important. but to extract this information from the raw data, we will have to use data structures, processes, and algorithms specialized for this task. machine learning employs a significant number of algorithms to analyze and predict the occurrence of certain variables. analyzing data on a purely numerical basis still leaves much of the latent information buried in the raw data; presenting data visually thus enables one to understand and gain valuable insights too. by the end of this chapter, you should be able to do the following: prune and present data accurately; use both supervised and unsupervised learning algorithms for the purposes of prediction; visually represent data in order to gain more insight.
tools of the trade in order to proceed with this chapter, you will need to install the following packages. these packages will be used to preprocess and visually represent the data being processed. some of the packages also contain well-written and perfected algorithms that will operate on our data. preferably, these modules should be installed within a virtual environment; they can be installed with pip:

pip install numpy
pip install scikit-learn
pip install matplotlib
pip install pandas
pip install textblob

these packages may require other platform-specific modules to be installed first. take note and install all dependencies. numpy: a library with functions to operate on n-dimensional arrays and matrices. scikit-learn: a highly advanced module for machine learning. it contains a good number of algorithms for classification, regression, and clustering, among others. matplotlib: this is a plotting library that makes use of numpy to graph a good variety of charts, including line plots, histograms, scatter plots, and more. pandas: this library deals with data manipulation and analysis. data preprocessing the collection of data from the real world is fraught with massive challenges. the raw data collected is plagued with a lot of issues, so much so that we need to adopt ways to sanitize the data and make it suitable for use in further studies. why process raw data? raw data as collected from the field is riddled with human error. data entry is a major source of error when collecting data. even technological methods of collecting data are not spared: inaccurate reading of devices, faulty gadgetry, and changes in environmental factors can introduce significant margins of error as data is collected.
the data collected may also be inconsistent with other records collected over time. the existence of duplicate entries and incomplete records warrants that we treat the data in such a way as to bring out the hidden and buried treasure. the raw data may also be shrouded in a sea of irrelevant data. to clean the data up, we can totally discard irrelevant data, better known as noise. data with missing parts or attributes can be replaced with sensible estimates. also, where the raw data suffers from inconsistency, detecting and correcting it becomes necessary. let us explore how we can use numpy and pandas for data preprocessing techniques. missing data data collection is tedious and, as such, once data is collected, it should not be easily discarded. just because a dataset has missing fields or attributes does not mean it is not useful. several methods can be used to fill up the nonexistent parts: using a global constant, using the mean value in the dataset, or supplying the data manually. the choice is based on the context and sensitivity of what the data is going to be used for. take, for instance, a small pandas data frame in which some cells hold np.nan, representing the fact that they have no value. if the np.nan values are undesired in a given dataset, they can be set to some constant figure with fillna(); calling, say, data.fillna(0) replaces every np.nan element with 0 and gives the new state of the data.
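the exact values used in the original example were not preserved, so here is a minimal, self-contained sketch with made-up numbers that shows both the constant fill and the mean fill discussed next:

import numpy as np
import pandas as pd

# a small frame with two missing cells (the values are made up for illustration)
data = pd.DataFrame([[1.0, 2.0, 3.0],
                     [4.0, np.nan, 6.0],
                     [7.0, 8.0, np.nan]])

print(data.fillna(0))            # every np.nan becomes the constant 0
print(data.fillna(data.mean()))  # every np.nan becomes its column's mean,
                                 # e.g. column 1 mean = (2.0 + 8.0) / 2 = 5.0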
to apply the mean values instead, we call print(data.fillna(data.mean())). the mean value for each column is calculated from its non-missing entries and inserted into those data areas that hold the np.nan value; a similar operation is carried out for every column. feature scaling the columns in a data frame are known as its features. the rows are known as records or observations. now examine the data matrix defined in the sketch below; this data will be referenced in the following subsections, so please do take note. each feature has its values lying in a different range, and inconsistent results will be produced if we supply this data to any machine learning algorithm. ideally, we will need to scale the data to a certain range in order to get consistent results. once again, a closer inspection reveals that each feature (or column) lies around a different mean value. therefore, what we would want to do is to align the features around similar means. one benefit of feature scaling is that it boosts the learning parts of machine learning. the scikit module has a considerable number of scaling algorithms that we shall apply to our data.
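the matrix printed in the original was not recoverable, so the scaling snippets that follow assume a small hypothetical matrix like the one below, in which each column spans a clearly different range:

import numpy as np

# hypothetical feature matrix: three records, three features,
# with each column on a very different scale
data = np.array([[60.,   1000., 0.002],
                 [30.,   4000., 0.005],
                 [90.,   2000., 0.001]])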
min-max scalar the min-max scalar form of normalization uses the minimum and maximum values of each feature to box all the data into a range lying between a certain min and max value. for most purposes, the range is set between 0 and 1. at other times, other ranges may be applied, but the 0 to 1 range remains the default:

from sklearn import preprocessing

scaled_values = preprocessing.MinMaxScaler(feature_range=(0, 1))
results = scaled_values.fit(data).transform(data)
print(results)

an instance of the MinMaxScaler class is created with the range (0, 1) and passed to the variable scaled_values. the fit function is called to make the necessary calculations that will be used internally to change the dataset. the transform function effects the actual operation on the dataset, returning the value to results. we can see from the printed output that all the data is normalized and lies between 0 and 1. this kind of output can now be supplied to a machine learning algorithm. standard scalar the respective features in our initial dataset or table lie around different mean values. to make all the data have a similar mean, that is, a zero mean and unit variance across the data, we shall apply the standard scalar algorithm:

stand_scalar = preprocessing.StandardScaler().fit(data)
results = stand_scalar.transform(data)
print(results)

data is passed to the fit method of the object returned from instantiating the StandardScaler class. the transform method acts on the data elements and returns the output to results. examining the results, we observe that all our features are now evenly distributed around a zero mean.
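as a quick sanity check (not part of the original text), the standardized results array produced above can be verified to have zero mean and unit variance per column:

import numpy as np

# each column of the standardized results should have mean ~0 and std ~1
print(np.allclose(results.mean(axis=0), 0))   # -> True
print(np.allclose(results.std(axis=0), 1))    # -> True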
binarizing data to binarize a given feature set, we make use of a threshold. if any value within the given dataset is greater than the threshold, the value is replaced by 1; if the value is less than or equal to the threshold, we replace it with 0:

results = preprocessing.Binarizer(threshold=50.0).fit(data).transform(data)
print(results)   # the threshold value here is illustrative; the original was lost

an instance of Binarizer is created; the argument passed to it is the threshold that will be used in the binarizing algorithm. all values in the data that do not exceed the threshold will have 0 in their stead; the opposite also holds true. machine learning machine learning is a subfield of artificial intelligence. we know that we can never truly create machines that actually "think", but we can supply machines with enough data and models by which sound judgment can be reached. machine learning focuses on creating autonomous systems that can continue the process of decision making, with little or no human intervention. in order to teach the machine, we need data drawn from the real world. for instance, to sift through which e-mails constitute spam and which ones don't, we need to feed the machine with samples of each. after obtaining this data, we have to run the data through models (algorithms) that will use probability and statistics to unearth patterns and structure from the data. if this is properly done, the algorithm by itself will be able to analyze e-mails and properly categorize them. sorting e-mails is just one example of what machines can do if they are "trained".
types of machine learning there are three broad categories of machine learning, as follows: supervised learning: here, an algorithm is fed a set of inputs and their corresponding outputs. the algorithm then has to figure out what the output will be for an unfamiliar input. examples of such algorithms include naive bayes, linear regression, and decision tree algorithms. unsupervised learning: without using the relationship that exists between a set of input and output variables, the unsupervised learning algorithm uses only the inputs to unearth groups, patterns, and clusters within the data. examples of such algorithms include hierarchical clustering and k-means clustering. reinforcement learning: the computer in this kind of learning dynamically interacts with its environment in such a way as to improve its performance. hello classifier to invoke the blessing of the programming gods in our quest to understand machine learning, we begin with a hello world example of a text classifier. this is meant to be a gentle introduction to machine learning. this example will predict whether a given text carries a negative or positive connotation. before this can be done, we need to train our algorithm (model) with some data. the naive bayes model is suited for text classification purposes. algorithms based on naive bayes models are generally fast and produce accurate results. the whole model is based on the assumption that features are independent of each other. to accurately predict the occurrence of rainfall, three conditions need to be considered: wind speed, temperature, and the amount of humidity in the air. in reality, these factors do have an influence on each other when telling the likelihood of rainfall, but the abstraction in naive bayes is to assume that these features are unrelated in any way and thus independently contribute to the chances of rainfall. naive bayes is useful in predicting the class of an unknown dataset, as we will see soon. now back to our hello classifier. after we have trained our model, its prediction will fall into either the positive or negative category:

from textblob.classifiers import NaiveBayesClassifier

train = [
    ('i love this sandwich.', 'pos'),
    ('this is an amazing shop!', 'pos'),
    ('we feel very good about these beers.', 'pos'),
    ('that is my best sword.', 'pos'),
    ('this is an awesome post', 'pos'),
    ('i do not like this cafe', 'neg'),
    ('i am tired of this bed.', 'neg'),
    ("i can't deal with this", 'neg'),
    ('she is my sworn enemy!', 'neg'),
    ('i never had a caring mom.', 'neg'),
]

first, we import the NaiveBayesClassifier class from the textblob package. this classifier is very easy to work with and is based on the bayes theorem. the train variable consists of tuples that each hold the actual training data. each tuple contains the sentence and the group it is associated with. now, to train our model, we will instantiate a NaiveBayesClassifier object by passing train to it:

cl = NaiveBayesClassifier(train)

the updated naive bayesian model cl will predict the category that an unknown sentence belongs to. up to this point, our model knows of only two categories that a phrase can belong to: neg and pos. the following code runs tests using our model:

print(cl.classify("i just love breakfast"))
print(cl.classify("yesterday was sunday"))
print(cl.classify("why can't he pay my bills"))
print(cl.classify("they want to kill the president of bantu"))

the output of our test is as follows:

pos
pos
neg
neg

we can see that the algorithm has had some degree of success in classifying the input phrases into their categories. this contrived example is overly simplistic, but it does show promise that, given the right amounts of data and a suitable algorithm or model, it is possible for a machine to carry out tasks without any human help.
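as a small extension (not in the original text), textblob classifiers can also report how confident they are in a label via prob_classify:

# probability distribution over the known labels for a new sentence
prob_dist = cl.prob_classify("i just love breakfast")
print(prob_dist.max())                    # the most likely label, e.g. 'pos'
print(round(prob_dist.prob("pos"), 2))    # probability assigned to 'pos'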
the specialized class NaiveBayesClassifier also did some heavy lifting for us in the background, so we could not appreciate the innards by which the algorithm arrived at the various predictions. our next example will use the scikit module to predict the category that a phrase may belong to. supervised learning example assume that we have a set of posts to categorize. as with supervised learning, we need to first train the model in order for it to accurately predict the category of an unknown post. gathering data the scikit module comes with a number of sample datasets we can use for training our model. in this case, we will use the 20 newsgroups posts. after we have trained our model, the results of a prediction must belong to one of the following categories, which we declare up front:

categories = ['alt.atheism', 'soc.religion.christian',
              'comp.graphics', 'sci.med']

to load the posts, we will use the following lines of code:

from sklearn.datasets import fetch_20newsgroups

training_data = fetch_20newsgroups(subset='train', categories=categories,
                                   shuffle=True, random_state=42)  # any fixed seed gives a reproducible shuffle

the number of records we are going to use as training data is obtained by the following:

print(len(training_data.data))

machine learning algorithms do not mix well with textual attributes, so the categories that each post belongs to are presented as numbers:

print(set(training_data.target))

the categories have integer values that we can map back to the categories themselves with print(training_data.target_names[i]), where i is any of the integer values found in set(training_data.target).
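as a quick illustration (not from the original text), the integer labels can be mapped back to their category names like this:

# print the category name of the first few training posts
for label in training_data.target[:5]:
    print(training_data.target_names[label])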
now that the training data has been obtained, we must feed the data to a machine learning algorithm. the bag of words model will break down the training data in order to make it ready for the learning algorithm or model. bag of words the bag of words is a model that is used for representing text data in such a way that it does not take into consideration the order of words, but rather uses word counts to segment words into regions. take the following sentences:

sentence_1 = "as fit as a fiddle"
sentence_2 = "as you like it"

the bag of words enables us to decompose text into numerical feature vectors represented by a matrix. to reduce our two sentences into the bag of words model, we need to obtain a unique list of all the words:

set((sentence_1 + " " + sentence_2).split(" "))

this set will become our columns in the matrix. the rows in the matrix will represent the documents that are being used in training. the intersection of a row and a column will store the number of times that word occurs in the document. using our two sentences as examples, we obtain the following matrix:

             as   fit   a   fiddle   you   like   it
sentence 1    2    1    1     1       0     0      0
sentence 2    1    0    0     0       1     1      1

the preceding data alone will not enable us to predict accurately the category that new documents or articles will belong to. the table has some inherent flaws: there may be situations where longer documents, or words that occur in many of the posts, reduce the precision of the algorithm. stop words can be removed to make sure only relevant data is analyzed. stop words include is, are, was, and so on. since the bag of words model does not factor grammar into its analysis, the stop words can safely be dropped. it is also possible to add to the list of stop words that one feels should be exempted from the final analysis.
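scikit-learn's CountVectorizer builds exactly this kind of matrix; here is a small sketch (not from the original text, and assuming a recent scikit-learn, since get_feature_names_out was added in version 1.0) applied to the two sentences. the default token pattern drops one-letter words, so a custom pattern is passed to keep the word "a" and match the hand-built table:

from sklearn.feature_extraction.text import CountVectorizer

sentences = ["as fit as a fiddle", "as you like it"]

# keep single-character tokens such as "a" so the columns match the table above
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
matrix = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())   # the unique words (columns)
print(matrix.toarray())                     # per-sentence word counts (rows)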
to generate the values that go into the columns of our matrix, we have to tokenize our training data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

count_vect = CountVectorizer()
training_matrix = count_vect.fit_transform(training_data.data)

the training_matrix has one row per post in the dataset and one column for each word in the unique set of words found across all posts. we instantiate the CountVectorizer class and pass training_data.data to the fit_transform method of the count_vect object. the result is stored in training_matrix. the training_matrix holds all the unique words and their respective frequencies. to mitigate the problem of basing prediction on frequency counts alone, we will import the TfidfTransformer, which helps to smooth out the inaccuracies in our data:

matrix_transformer = TfidfTransformer()
tfidf_data = matrix_transformer.fit_transform(training_matrix)
print(tfidf_data[:3].todense())

tfidf_data[:3].todense() only shows a truncated slice of three rows of the matrix. the values seen are the term frequency-inverse document frequency values that reduce the inaccuracy resulting from using raw frequency counts:

model = MultinomialNB().fit(tfidf_data, training_data.target)

MultinomialNB is a variant of the naive bayes model. we pass the rationalized data matrix, tfidf_data, and the categories, training_data.target, to its fit method. prediction to test whether our model has learned enough to predict the category that an unknown post is likely to belong to, we have the following sample data:

test_data = ["my god is good", "arm chip set will rival intel"]
test_counts = count_vect.transform(test_data)
new_tfidf = matrix_transformer.transform(test_counts)
the list test_data is passed to the count_vect.transform function to obtain the vectorized form of the test data. to obtain the term frequency-inverse document frequency representation of the test dataset, we call the transform method of the matrix_transformer object. to predict which category the docs may belong to, we do the following:

prediction = model.predict(new_tfidf)

the loop is used to iterate over the predictions, showing the categories they are predicted to belong to:

for doc, category in zip(test_data, prediction):
    print('%r => %s' % (doc, training_data.target_names[category]))

when the loop has run to completion, the phrase, together with the category that it may belong to, is displayed. sample output is as follows:

'my god is good' => soc.religion.christian
'arm chip set will rival intel' => comp.graphics

all that we have seen up to this point is a prime example of supervised learning. we started by loading posts whose categories are already known. these posts were then fed into the machine learning algorithm most suited for text processing, based on the naive bayes theorem. a set of test post fragments was supplied to the model and the categories were predicted. to explore an example of an unsupervised learning algorithm, we shall study the k-means algorithm for clustering some data. an unsupervised learning example a category of learning algorithms is able to discover inherent groups that may exist in a set of data. an example of these algorithms is the k-means algorithm. k-means algorithm the k-means algorithm uses the mean points in a given dataset to cluster and discover groups within the dataset. k is the number of clusters that we want and are hoping to discover. after the k-means algorithm has generated the groupings, we can pass it additional but unknown data for it to predict which group it will belong to.
note that in this kind of algorithm, only the raw, uncategorized data is fed to the algorithm. it is up to the algorithm to find out if the data has inherent groups within it. to understand how this algorithm works, we will examine 100 data points consisting of x and y values. we will feed these values to the learning algorithm and expect that the algorithm will cluster the data into two sets. we will color the two sets so that the clusters are visible. let's create a sample dataset of 100 records of x and y pairs:

import numpy as np
import matplotlib.pyplot as plt

original_set = -2 * np.random.rand(100, 2)
second_set = 1 + 2 * np.random.rand(50, 2)
original_set[50:] = second_set

first, we create 100 records with -2 * np.random.rand(100, 2). in each of the records, we will use the data in it to represent x and y values that will eventually be plotted. the last 50 rows in original_set are then replaced by second_set, whose values lie in the positive range. in effect, what we have done is to create two subsets of data, where one set has numbers in the negative while the other set has numbers in the positive. it is now the responsibility of the algorithm to discover these segments appropriately. we instantiate the KMeans algorithm class and pass it n_clusters=2. that makes the algorithm cluster all its data under only two groups. it is through a series of trial and error that this figure is obtained, but for academic purposes we already know this number; it is not at all obvious when working with unfamiliar datasets from the real world:

from sklearn.cluster import KMeans
kmean = KMeans(n_clusters=2)
kmean.fit(original_set)

print(kmean.cluster_centers_)
print(kmean.labels_)

the dataset is passed to the fit function of kmean with kmean.fit(original_set). the clusters generated by the algorithm will revolve around certain mean points; the points that define these two mean points are obtained by kmean.cluster_centers_.
when printed, the two mean points appear as a pair of coordinates, one lying in the negative region and the other in the positive region. each data point in original_set will belong to a cluster after our k-means algorithm has finished its training. the k-means algorithm represents the two clusters it discovers as 0 and 1. if we had asked the algorithm to cluster the data into four, the internal representation of these clusters would have been 0, 1, 2, and 3. to print out the various clusters that each data point belongs to, we do the following:

print(kmean.labels_)

this gives an output of 100 labels, all of them either 0 or 1; each shows the cluster that the corresponding data point falls under. by using matplotlib.pyplot, we can chart the points of each group and color them appropriately to show the clusters:

import matplotlib.pyplot as plt

for i in set(kmean.labels_):
    index = kmean.labels_ == i
    plt.plot(original_set[index, 0], original_set[index, 1], 'o')

index = kmean.labels_ == i is a nifty way by which we select all points that correspond to group i. when i = 0, all points belonging to group 0 are returned to index; it's the same for i = 1, and so on. plt.plot(original_set[index, 0], original_set[index, 1], 'o') then plots these data points using 'o' as the character for drawing each point. next, we will plot the centroids, or mean values, around which the clusters have formed:

plt.plot(kmean.cluster_centers_[0][0], kmean.cluster_centers_[0][1], '*', c='r', ms=10)
plt.plot(kmean.cluster_centers_[1][0], kmean.cluster_centers_[1][1], '*', c='r', ms=10)

lastly, we show the whole graph, with the two means illustrated by a star:

plt.show()
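a side note (not from the original text): the same colored chart can be produced in a single call by passing the cluster labels as colors to plt.scatter:

# color each point by its cluster label in one call
plt.scatter(original_set[:, 0], original_set[:, 1], c=kmean.labels_)
plt.scatter(kmean.cluster_centers_[:, 0], kmean.cluster_centers_[:, 1],
            marker='*', c='r', s=100)
plt.show()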
the algorithm discovers two distinct clusters in our sample data, and the two mean points of the two clusters are denoted with the red star symbol. prediction with the two clusters that we have obtained, we can predict the group that a new set of data might belong to. let's predict which groups a point with negative x and y values and a point with positive x and y values will belong to:

# illustrative coordinates: one point in the negative region, one in the positive
sample = np.array([[-1.4, -1.4]])
print(kmean.predict(sample))

another_sample = np.array([[2.5, 2.5]])
print(kmean.predict(another_sample))

the output is seen as follows (the two samples receive different cluster labels):

[0]
[1]
at the barest minimum, we can expect the two test samples to belong to different clusters. our expectation is proved right when the print statements output two different labels, thus confirming that our test data does indeed fall under two different clusters. data visualization numerical analysis does not always lend itself to easy understanding. indeed, a single image is worth 1,000 words, and in this section, an image may be worth 1,000 tables comprised of numbers only. images present a quick way to analyze data; differences in size and length are quick markers in an image upon which conclusions can be drawn. in this section, we will take a tour of the different ways to represent data. besides the graphs listed here, there is more that can be achieved when charting data. bar chart to chart a list of values as a bar graph, we store the values in an array and pass it to the bar function. the bars in the graph represent the magnitude along the y-axis:

import matplotlib.pyplot as plt

data = [25, 5, 150, 100]   # sample values to chart (illustrative; the originals were lost)
x_values = range(len(data))
plt.bar(x_values, data)
plt.show()

x_values stores an array of values generated by range(len(data)). also, x_values will determine the points on the x-axis where the bars will be drawn. the first bar will be drawn on the x-axis where x is 0; the second bar will be drawn where x is 1, and so on.
the width of each bar can be changed by modifying the following line:

plt.bar(x_values, data, width=1)

this should produce the following graph:
however, this is not visually appealing, because there is no space anymore between the bars, which makes it look clumsy. each bar now occupies one unit on the x-axis. multiple bar charts in trying to visualize data, stacking a number of bars enables one to further understand how one piece of data or variable varies with another:

import numpy as np

# the original sample values were lost; these are illustrative
data = [[8, 9, 4], [5, 12, 3]]

x_values = np.arange(len(data[0]))
plt.bar(x_values + 0.00, data[0], color='b', width=0.25)
plt.bar(x_values + 0.25, data[1], color='r', width=0.25)
plt.show()

the first batch of values is held in data[0] and the second batch in data[1]. when the bars are plotted, the two batches occupy the same x positions, side by side. x_values = np.arange(len(data[0])) generates the array of x positions [0, 1, 2]. the first set of bars is drawn at x_values + 0.00; thus, the first batch is plotted at 0, 1, and 2, while the second batch is plotted at 0.25, 1.25, and 2.25, offset by the width of a bar.
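a small usage note (not from the original text): giving each batch a label and calling plt.legend() makes the grouped bars easier to tell apart:

plt.bar(x_values + 0.00, data[0], color='b', width=0.25, label='first batch')
plt.bar(x_values + 0.25, data[1], color='r', width=0.25, label='second batch')
plt.legend()
plt.show()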
box plot the box plot is used to visualize the median value and the low and high ranges of a distribution. it is also referred to as a box and whisker plot. let's chart a simple box plot. we begin by generating numbers from a normal distribution; these are then passed to plt.boxplot(data) to be charted:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(50)   # the sample size is illustrative; the original was lost
plt.boxplot(data)
plt.show()
the following figure is what is produced. a few comments on the preceding figure: the features of the box plot include a box spanning the interquartile range, which measures the dispersion; the outer fringes of the data are denoted by the whiskers attached to the central box; and the red line represents the median. the box plot is useful for easily identifying the outliers in a dataset, as well as for determining in which direction a dataset may be skewed. pie chart the pie chart interprets and visually presents data as if to fit into a circle. the individual data points are expressed as sectors of a circle that add up to 360 degrees. this chart is good for displaying categorical data and summaries too:

import matplotlib.pyplot as plt

data = [500, 200, 250]   # illustrative values; the originals were lost
labels = ["agriculture", "aide", "news"]

plt.pie(data, labels=labels, autopct='%1.1f%%')
plt.show()
the sectors in the graph are labeled with the strings in the labels array. bubble chart another variant of the scatter plot is the bubble chart. in a scatter plot, we only plot the x, y points of the data. bubble charts add another dimension by illustrating the size of the points. this third dimension may represent the sizes of markets or even profits:

import numpy as np
import matplotlib.pyplot as plt

n = 10   # the exact constants used originally were lost; these are illustrative
x = np.random.rand(n)
y = np.random.rand(n)
colors = np.random.rand(n)
area = np.pi * (60 * np.random.rand(n)) ** 2

plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()

with the variable n, we specify the number of randomly generated x and y values. this same number is used to determine the random colors for our x and y coordinates. random bubble sizes are determined by area = np.pi * (60 * np.random.rand(n)) ** 2.
the following figure shows this bubble chart. summary in this chapter, we have explored how data and algorithms come together to aid machine learning. making sense of huge amounts of data is made possible by first pruning our data through normalization processes. by feeding this data to specialized algorithms, we are able to predict the categories and sets that our data will fall into. lastly, charting and plotting the condensed data helps us to better understand it and make insightful discoveries.
abstract data types (adts adjacency adjacency list adjacency matrix algorithm analysisapproaches average case analysis benchmarking algorithm design divide and conquer paradigm dynamic programming approach algorithmsclassifying by complexity about complexity curves algorithmsclassifying by design about divide and conquer dynamic programming greedy algorithms algorithmsclassifying by implementation about determinstic logical nondeterminstic parallel recursion serial algorithms about classifying elements reasonsfor studying amortized analysis anaconda reference append method arithmetic operators array module arrays asymptotic analysis average case analysis backtracking about example bar chart benchmarking big notation binary search tree (bstabout benefits example implementation maximum nodesfinding minimum nodesfinding nodesdeleting nodesinserting operations treesearching binary search about example binary trees boolean operation box plot bracket-matching applicationstack breadth first traversal breadth-first search about code bubble chart bubble sort
implementing built-in data types about none type numeric types sequences canopy reference chaining chainmaps about advantage child circular buffer circular list about elementsappending elementsdeleting iterating through class methods classes co-routines coin counting problem collections module about data type comparison operators complexity classes about composing np-complete np-hard pversus np complexity curves conceptsgraphs adjacency degree of vertices edge loop node path vertex conceptstrees child degree depth edge height leaf node level node parent root node sibling sub tree counter objects data encapsulation data preprocessing data structures data visualization about bar chart box plot bubble chart multiple bar charts pie chart defaultdict object degree degree of vertex depth first traversal about in-order traversaland infix notation post-order traversaland postfix notation pre-order traversaland prefix notation depth-first search deques about advantages deterministic algorithms versus nondeterminstic algorithms deterministic selection about median of medians partitioning step pivot selection
preface xiii abstract data types introduction abstractions abstract data types data structures general definitions the date abstract data type defining the adt using the adt preconditions and postconditions implementing the adt bags the bag abstract data type selecting data structure list-based implementation iterators designing an iterator using iterators applicationstudent records designing solution implementation exercises programming projects arrays the array structure why study arraysthe array abstract data type implementing the array the python list
contents creating python list appending items extending list inserting items list slice two-dimensional arrays the array abstract data type implementing the - array the matrix abstract data type matrix operations implementing the matrix applicationthe game of life rules of the game designing solution implementation exercises programming projects sets and maps sets the set abstract data type selecting data structure list-based implementation maps the map abstract data type list-based implementation multi-dimensional arrays the multiarray abstract data type data organization variable-length arguments implementing the multiarray applicationsales reports exercises programming projects algorithm analysis complexity analysis big- notation evaluating python code evaluating the python list amortized cost evaluating the set adt
applicationthe sparse matrix list-based implementation efficiency analysis exercises programming projects searching and sorting searching the linear search the binary search sorting bubble sort selection sort insertion sort working with sorted lists maintaining sorted list merging sorted lists the set adt revisited sorted list implementation comparing the implementations exercises programming projects linked structures introduction the singly linked list traversing the nodes searching for node prepending nodes removing nodes the bag adt revisited linked list implementation comparing implementations linked list iterators more ways to build linked list using tail reference the sorted linked list the sparse matrix revisited an array of linked lists implementation comparing the implementations applicationpolynomials polynomial operations vii
contents the polynomial adt implementation exercises programming projects stacks the stack adt implementing the stack using python list using linked list stack applications balanced delimiters evaluating postfix expressions applicationsolving maze backtracking designing solution the maze adt implementation exercises programming projects queues the queue adt implementing the queue using python list using circular array using linked list priority queues the priority queue adt implementationunbounded priority queue implementationbounded priority queue applicationcomputer simulations airline ticket counter implementation exercises programming projects advanced linked lists the doubly linked list organization list operations the circular linked list
organization list operations multi-linked lists multiple chains the sparse matrix complex iterators applicationtext editor typical editor operations the edit buffer adt implementation exercises programming projects recursion recursive functions properties of recursion factorials recursive call trees the fibonacci sequence how recursion works the run time stack using software stack tail recursion recursive applications recursive binary search towers of hanoi exponential operation playing tic-tac-toe applicationthe eight-queens problem solving for four-queens designing solution exercises programming projects hash tables introduction hashing linear probing clustering rehashing efficiency analysis separate chaining ix
contents hash functions the hashmap abstract data type applicationhistograms the histogram abstract data type the color histogram exercises programming projects advanced sorting merge sort algorithm description basic implementation improved implementation efficiency analysis quick sort algorithm description implementation efficiency analysis how fast can we sort radix sort algorithm description basic implementation efficiency analysis sorting linked lists insertion sort merge sort exercises programming projects binary trees the tree structure the binary tree properties implementation tree traversals expression trees expression tree abstract data type string representation tree evaluation tree construction heaps definition
implementation the priority queue revisited heapsort simple implementation sorting in place applicationmorse code decision trees the adt definition exercises programming projects search trees the binary search tree searching min and max values insertions deletions efficiency of binary search trees search tree iterators avl trees insertions deletions implementation the - tree searching insertions efficiency of the - tree exercises programming projects appendix apython review the python interpreter the basics of python primitive types statements variables arithmetic operators logical expressions using functions and methods standard library user interaction standard input xi
contents standard output control structures selection constructs repetition constructs collections strings lists tuples dictionaries text files file access writing to files reading from files user-defined functions the function definition variable scope main routine appendix buser-defined modules structured programs namespaces appendix cexceptions catching exceptions raising exceptions standard exceptions assertions appendix dclasses the class definition constructors operations using modules hiding attributes overloading operators inheritance deriving child classes creating class instances invoking methods polymorphism
the standard second course in computer science has traditionally covered the fundamental data structures and algorithmsbut more recently these topics have been included in the broader topic of abstract data types this book is no exceptionwith the main focus on the designuseand implementation of abstract data types the importance of designing and using abstract data types for easier modular programming is emphasized throughout the text the traditional data structures are also presented throughout the text in terms of implementing the various abstract data types multiple implementations using different data structures are used throughout the text to reinforce the abstraction concept common algorithms are also presented throughout the text as appropriate to provide complete coverage of the typical data structures course overview the typical data structures coursewhich introduces collection of fundamental data structures and algorithmscan be taught using any of the different programming languages available today in recent yearsmore colleges have begun to adopt the python language for introducing students to programming and problem solving python provides several benefits over other languages such as +and javathe most important of which is that python has simple syntax that is easier to learn this book expands upon that use of python by providing python-centric text for the data structures course the clean syntax and powerful features of the language are used throughoutbut the underlying mechanisms of these features are fully explored not only to expose the "magicbut also to study their overall efficiency for number of yearsmany data structures textbooks have been written to serve dual role of introducing data structures and providing an in-depth study of object-oriented programming (oopin some instancesthis dual role may compromise the original purpose of the data structures course by placing more focus on oop and less on the abstract data types and their underlying data structures to stress the importance of abstract data typesdata structuresand algorithmswe limit the discussion of oop to the use of base classes for implementing the various abstract data types we do not use class inheritance or polymorphism in the main part of the text but instead provide basic introduction as an appendix this choice was made for several reasons firstour objective is to provide "back to xiii
preface basicsapproach to learning data structures and algorithms without overwhelming the reader with all of the oop terminology and conceptswhich is especially important when the instructor has no plans to cover such topics seconddifferent instructors take different approaches with python in their first course our aim is to provide an excellent text to the widest possible audience we do this by placing the focus on the data structures and algorithmswhile designing the examples to allow the introduction of object-oriented programming if so desired the text also introduces the concept of algorithm analysis and explores the efficiency of algorithms and data structures throughout the text the major presentation of complexity analysis is contained in single which allows it to be omitted by instructors who do not normally cover such material in their data structures course additional evaluations are provided throughout the text as new algorithms and data structures are introducedwith the major details contained in individual sections when algorithm analysis is coveredexamples of the various complexity functions are introducedincluding amortized cost the latter is important when using python since many of the list operations have very efficient amortized cost prerequisites this book assumes that the student has completed the standard introduction to programming and problem-solving course using the python language since the contents of the first course can differ from college to college and instructor to instructorwe assume the students are familiar with or can do the followingdesign and implement complete programs in pythonincluding the use of modules and namespaces apply the basic data types and constructsincluding loopsselection statementsand subprograms (functionscreate and use the built-in list and dictionary structures design and implement basics classesincluding the use of helper methods and private attributes contents and organization the text is organized into fourteen and four appendices the basic concepts related to abstract data typesdata structuresand algorithms are presented in the first four later build on these earlier concepts to present more advanced topics and introduce the student to additional abstract data types and more advanced data structures the book contains several topic threads that run throughout the textin which the topics are revisited in various as appropriate the layout of the text does not force rigid outlinebut allows for the
reordering of some topics for examplethe on recursion and hashing can be presented at any time after the discussion of algorithm analysis in abstract data types introduces the concept of abstract data types (adtsfor both simple typesthose containing individual data fieldsand the more complex typesthose containing data structures adts are presented in terms of their definitionuseand implementation after discussing the importance of abstractionwe define several adts and then show how well-defined adt can be used without knowing how its actually implemented the focus then turns to the implementation of the adts with an emphasis placed on the importance of selecting an appropriate data structure the includes an introduction to the python iterator mechanism and provides an example of user-defined iterator for use with container type adt arrays introduces the student to the array structurewhich is important since python only provides the list structure and students are unlikely to have seen the concept of the array as fixed-sized structure in first course using python we define an adt for one-dimensional array and implement it using hardware array provided through special mechanism of the -implemented version of python the two-dimensional array is also introduced and implemented using - array of arrays the array structures will be used throughout the text in place of the python' list when it is the appropriate choice the implementation of the list structure provided by python is presented to show how the various operations are implemented using - array the matrix adt is introduced and includes an implementation using two-dimensional array that exposes the students to an example of an adt that is best implemented using structure other than the list or dictionary sets and maps this reintroduces the students to both the set and map (or dictionaryadts with which they are likely to be familiar from their first programming course using python even though python provides these adtsthey both provide great examples of abstract data types that can be implemented in many different ways the also continues the discussion of arrays from the previous by introducing multi-dimensional arrays (those of two or more dimensionsalong with the concept of physically storing these using one-dimensional array in either row-major or column-major order the concludes with an example application that can benefit from the use of three-dimensional array algorithm analysis introduces the basic concept and importance of complexity analysis by evaluating the operations of python' list structure and the set adt as implemented in the previous this information will be used to provide more efficient implementation of the set adt in the following the concludes by introducing the sparse matrix adt and providing more efficient implementation with the use of list in place of two-dimensional array xv
preface searching and sorting introduces the concepts of searching and sorting and illustrates how the efficiency of some adts can be improved when working with sorted sequences search operations for an unsorted sequence are discussed and the binary search algorithm is introduced as way of improving this operation three of the basic sorting algorithms are also introduced to further illustrate the use of algorithm analysis new implementation of the set adt is provided to show how different data structures or data organizations can change the efficiency of an adt linked structures provides an introduction to dynamic structures by illustrating the construction and use of the singly linked list using dynamic storage allocation the common operations -traversalsearchinginsertionand deletion -are presented as is the use of tail reference when appropriate several of the adts presented in earlier are reimplemented using the singly linked listand the run times of their operations are compared to the earlier versions new implementation of the sparse matrix is especially eye-opening to many students as it uses an array of sorted linked lists instead of single python list as was done in an earlier stacks introduces the stack adt and includes implementations using both python list and linked list several common stack applications are then presentedincluding balanced delimiter verification and the evaluation of postfix expressions the concept of backtracking is also introduced as part of the application for solving maze detailed discussion is provided in designing solution and partial implementation queues introduces the queue adt and includes three different implementationspython listcircular arrayand linked list the priority queue is introduced to provide an opportunity to discuss different structures and data organization for an efficient implementation the application of the queue presents the concept of discrete event computer simulations using an airline ticket counter as the example advanced linked lists continues the discussion of dynamic structures by introducing collection of more advanced linked lists these include the doubly linkedcircularly linkedand multi linked lists the latter provides an example of linked structure containing multiple chains and is applied by reimplementing the sparse matrix to use two arrays of linked listsone for the rows and one for the columns the doubly linked list is applied to the problem of designing and implementing an edit buffer adt for use with basic text editor recursion introduces the use of recursion to solve various programming problems the properties of creating recursive functions are presented along with common examplesincluding factorialgreatest common divisorand the towers of hanoi the concept of backtracking is revisited to use recursion for solving the eight-queens problem
hash tables introduces the concept of hashing and the use of hash tables for performing fast searches different addressing techniques are presentedincluding those for both closed and open addressing collision resolution techniques and hash function design are also discussed the magic behind python' dictionary structurewhich uses hash tableis exposed and its efficiency evaluated advanced sorting continues the discussion of the sorting problem by introducing the recursive sorting algorithms--merge sort and quick sort--along with the radix distribution sort algorithmall of which can be used to sort sequences some of the common techniques for sorting linked lists are also presented binary trees presents the tree structure and the general binary tree specifically the construction and use of the binary tree is presented along with various properties and the various traversal operations the binary tree is used to build and evaluate arithmetic expressions and in decoding morse code sequences the tree-based heap structure is also introduced along with its use in implementing priority queue and the heapsort algorithm search trees continues the discussion from the previous by using the tree structure to solve the search problem the basic binary search tree and the balanced binary search tree (avlare both introduced along with new implementations of the map adt finallya brief introduction to the - multi-way tree is also providedwhich shows an alternative to both the binary search and avl trees appendix apython review provides review of the python language and concepts learned in the traditional first course the review includes presentation of the basic constructs and built-in data structures appendix buser-defined modules describes the use of modules in creating well structured programs the different approaches for importing modules is also discussed along with the use of namespaces appendix cexceptions provides basic introduction to the use of exceptions for handling and raising errors during program execution appendix dclasses introduces the basic concepts of object-oriented programmingincluding encapsulationinheritanceand polymorphism the presentation is divided into two main parts the first part presents the basic design and use of classes for those instructors who use "back to basicsapproach in teaching data structures the second part briefly explores the more advanced features of inheritance and polymorphism for those instructors who typically include these topics in their course xvii
preface acknowledgments there are number of individuals would like to thank for helping to make this book possible firsti must acknowledge two individuals who served as mentors in the early part of my career mary dayne gregg (university of southern mississippi)who was the best computer science teacher have ever knownshared her love of teaching and provided great role model in academia richard prosl (professor emerituscollege of william and maryserved not only as my graduate advisor but also shared great insight into teaching and helped me to become good teacher special thanks to the many students have taught over the yearsespecially those at washington and lee universitywho during the past five years used draft versions of the manuscript and provided helpful suggestions would also like to thank some of my colleagues who provided great advice and the encouragement to complete the projectsara sprenkle (washington and lee university)debbie noonan (college of william and mary)and robert noonan (college of william and maryi am also grateful to the following individuals who served as outside reviewers and provided valuable feedback and helpful suggestionsesmail bonakdarian (franklin university)david dubin (university of illinois at urbana-champaignmark fenner (norwich university)robert franks (central college)charles leska (randolph-macon college)fernando martincic (wayne state university)joseph sloan (wofford college)david sykes (wofford college)and stan thomas (wake forest universityfinallyi would like to thank everyone at john wiley sons who helped make this book possible would especially like to thank beth golubmike berlinand amy weintraubwith whom worked closely throughout the process and who helped to make this first book an enjoyable experience rance necaise
abstract data types the foundation of computer science is based on the study of algorithms an algorithm is sequence of clear and precise step-by-step instructions for solving problem in finite amount of time algorithms are implemented by translating the step-by-step instructions into computer program that can be executed by computer this translation process is called computer programming or simply programming computer programs are constructed using programming language appropriate to the problem while programming is an important part of computer sciencecomputer science is not the study of programming nor is it about learning particular programming language insteadprogramming and programming languages are tools used by computer scientists to solve problems introduction data items are represented within computer as sequence of binary digits these sequences can appear very similar but have different meanings since computers can store and manipulate different types of data for examplethe binary sequence could be string of charactersan integer valueor real value to distinguish between the different types of datathe term type is often used to refer to collection of values and the term data type to refer to given type along with collection of operations for manipulating values of the given type programming languages commonly provide data types as part of the language itself these data typesknown as primitivescome in two categoriessimple and complex the simple data types consist of values that are in the most basic form and cannot be decomposed into smaller parts integer and real typesfor exampleconsist of single numeric values the complex data typeson the other handare constructed of multiple components consisting of simple types or other complex types in pythonobjectsstringslistsand dictionarieswhich can
abstract data types contain multiple valuesare all examples of complex types the primitive types provided by language may not be sufficient for solving large complex problems thusmost languages allow for the construction of additional data typesknown as user-defined types since they are defined by the programmer and not the language some of these data types can themselves be very complex abstractions to help manage complex problems and complex data typescomputer scientists typically work with abstractions an abstraction is mechanism for separating the properties of an object and restricting the focus to those relevant in the current context the user of the abstraction does not have to understand all of the details in order to utilize the objectbut only those relevant to the current task or problem two common types of abstractions encountered in computer science are proceduralor functionalabstraction and data abstraction procedural abstraction is the use of function or method knowing what it does but ignoring how it' accomplished consider the mathematical square root function which you have probably used at some point you know the function will compute the square root of given numberbut do you know how the square root is computeddoes it matter if you know how it is computedor is simply knowing how to correctly use the function sufficientdata abstraction is the separation of the properties of data type (its values and operationsfrom the implementation of that data type you have used strings in python many times but do you know how they are implementedthat isdo you know how the data is structured internally or how the various operations are implementedtypicallyabstractions of complex problems occur in layerswith each higher layer adding more abstraction than the previous consider the problem of representing integer values on computers and performing arithmetic operations on those values figure illustrates the common levels of abstractions used with integer arithmetic at the lowest level is the hardware with little to no abstraction since it includes binary representations of the values and logic circuits for performing the arithmetic hardware designers would deal with integer arithmetic at this level and be concerned with its correct implementation higher level of abstraction for integer values and arithmetic is provided through assembly languagewhich involves working with binary values and individual instructions corresponding to the underlying hardware compiler writers and assembly language programmers would work with integer arithmetic at this level and must ensure the proper selection of assembly language instructions to compute given mathematical expression for examplesuppose we wish to compute at the assembly language levelthis expression must be split into multiple instructions for loading the values from memorystoring them into registersand then performing each arithmetic operation separatelyas shown in the following psuedocodeloadfrommemr 'aloadfrommemr '
add sub storetomemr 'xto avoid this level of complexityhigh-level programming languages add another layer of abstraction above the assembly language level this abstraction is provided through primitive data type for storing integer values and set of well-defined operations that can be performed on those values by providing this level of abstractionprogrammers can work with variables storing decimal values and specify mathematical expressions in more familiar notation ( than is possible with assembly language instructions thusa programmer does not need to know the assembly language instructions required to evaluate mathematical expression or understand the hardware implementation in order to use integer arithmetic in computer program software-implemented software-implemented big big integers integers higher level high-level high-level language language instructions instructions assembly assembly language language instructions instructions hardware hardware implementation implementation lower level figure levels of abstraction used with integer arithmetic one problem with the integer arithmetic provided by most high-level languages and in computer hardware is that it works with values of limited size on -bit architecture computersfor examplesigned integer values are limited to the range - ( what if we need larger valuesin this casewe can provide long or "big integersimplemented in software to allow values of unlimited size this would involve storing the individual digits and implementing functions or methods for performing the various arithmetic operations the implementation of the operations would use the primitive data types and instructions provided by the high-level language software libraries that provide big integer implementations are available for most common programming languages pythonhoweveractually provides software-implemented big integers as part of the language itself abstract data types an abstract data type (or adt is programmer-defined data type that specifies set of data values and collection of well-defined operations that can be performed on those values abstract data types are defined independent of their
abstract data types implementationallowing us to focus on the use of the new data type instead of how it' implemented this separation is typically enforced by requiring interaction with the abstract data type through an interface or defined set of operations this is known as information hiding by hiding the implementation details and requiring adts to be accessed through an interfacewe can work with an abstraction and focus on what functionality the adt provides instead of how that functionality is implemented abstract data types can be viewed like black boxes as illustrated in figure user programs interact with instances of the adt by invoking one of the several operations defined by its interface the set of operations can be grouped into four categoriesconstructorscreate and initialize new instances of the adt accessorsreturn data contained in an instance without modifying it mutatorsmodify the contents of an adt instance iteratorsprocess individual data components sequentially user programs interact with adts through their interface or set of operations user user program program string adt str(upper(the implementation details are hidden as if inside black box lower(figure separating the adt definition from its implementation the implementation of the various operations are hidden inside the black boxthe contents of which we do not have to know in order to utilize the adt there are several advantages of working with abstract data types and focusing on the "whatinstead of the "how we can focus on solving the problem at hand instead of getting bogged down in the implementation details for examplesuppose we need to extract collection of values from file on disk and store them for later use in our program if we focus on the implementation detailsthen we have to worry about what type of storage structure to usehow it should be usedand whether it is the most efficient choice we can reduce logical errors that can occur from accidental misuse of storage structures and data types by preventing direct access to the implementation if we used list to store the collection of values in the previous examplethere is the opportunity to accidentally modify its contents in part of our code
where it was not intended this type of logical error can be difficult to track down by using adts and requiring access via the interfacewe have fewer access points to debug the implementation of the abstract data type can be changed without having to modify the program code that uses the adt there are many times when we discover the initial implementation of an adt is not the most efficient or we need the data organized in different way suppose our initial approach to the previous problem of storing collection of values is to simply append new values to the end of the list what happens if we later decide the items should be arranged in different order than simply appending them to the endif we are accessing the list directlythen we will have to modify our code at every point where values are added and make sure they are not rearranged in other places by requiring access via the interfacewe can easily "swap outthe black box with new implementation with no impact on code segments that use the adt it' easier to manage and divide larger programs into smaller modulesallowing different members of team to work on the separate modules large programming projects are commonly developed by teams of programmers in which the workload is divided among the members by working with adts and agreeing on their definitionthe team can better ensure the individual modules will work together when all the pieces are combined using our previous exampleif each member of the team directly accessed the list storing the collection of valuesthey may inadvertently organize the data in different ways or modify the list in some unexpected way when the various modules are combinedthe results may be unpredictable data structures working with abstract data typeswhich separate the definition from the implementationis advantageous in solving problems and writing programs at some pointhoweverwe must provide concrete implementation in order for the program to execute adts provided in language librarieslike pythonare implemented by the maintainers of the library when you define and create your own abstract data typesyou must eventually provide an implementation the choices you make in implementing your adt can affect its functionality and efficiency abstract data types can be simple or complex simple adt is composed of single or several individually named data fields such as those used to represent date or rational number the complex adts are composed of collection of data values such as the python list or dictionary complex abstract data types are implemented using particular data structurewhich is the physical representation of how data is organized and manipulated data structures can be characterized by how they store and organize the individual data elements and what operations are available for accessing and manipulating the data
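To make the separation between an ADT's interface and the data structure behind it concrete, here is a minimal sketch, not taken from the text, in which two hypothetical classes (AppendStore and SortedStore are names chosen for illustration only) expose the same small interface while organizing their elements differently; client code written against the interface runs unchanged with either one.

# A minimal sketch of two interchangeable implementations behind one interface.
class AppendStore:
    # Stores values in the order they are added.
    def __init__(self):
        self._items = []

    def add(self, value):
        self._items.append(value)

    def values(self):
        return list(self._items)

class SortedStore:
    # Same interface, but keeps the values in ascending order internally.
    def __init__(self):
        self._items = []

    def add(self, value):
        # Insert the value so the internal list stays sorted.
        i = 0
        while i < len(self._items) and self._items[i] < value:
            i += 1
        self._items.insert(i, value)

    def values(self):
        return list(self._items)

def client_code(store):
    # The client depends only on add() and values(), not on the organization.
    for v in (42, 7, 19):
        store.add(v)
    return store.values()

print(client_code(AppendStore()))   # [42, 7, 19]
print(client_code(SortedStore()))   # [7, 19, 42]

Because the client only calls the agreed-upon operations, either class could later be "swapped out" for the other without touching client_code().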
abstract data types there are many common data structuresincluding arrayslinked listsstacksqueuesand treesto name few all data structures store collection of valuesbut differ in how they organize the individual data items and by what operations can be applied to manage the collection the choice of particular data structure depends on the adt and the problem at hand some data structures are better suited to particular problems for examplethe queue structure is perfect for implementing printer queuewhile the -tree is the better choice for database index no matter which data structure we use to implement an adtby keeping the implementation separate from the definitionwe can use an abstract data type within our program and later change to different implementationas neededwithout having to modify our existing code general definitions there are many different terms used in computer science some of these can have different meanings among the various textbooks and programming languages to aide the reader and to avoid confusionwe define some of the common terms we will be using throughout the text collection is group of values with no implied organization or relationship between the individual values sometimes we may restrict the elements to specific data type such as collection of integers or floating-point values container is any data structure or abstract data type that stores and organizes collection the individual values of the collection are known as elements of the container and container with no elements is said to be empty the organization or arrangement of the elements can vary from one container to the next as can the operations available for accessing the elements python provides number of built-in containerswhich include stringstupleslistsdictionariesand sets sequence is container in which the elements are arranged in linear order from front to backwith each element accessible by position throughout the textwe assume that access to the individual elements based on their position within the linear order is provided using the subscript operator python provides two immutable sequencesstrings and tuplesand one mutable sequencethe list in the next we introduce the array structurewhich is also commonly used mutable sequence sorted sequence is one in which the position of the elements is based on prescribed relationship between each element and its successor for examplewe can create sorted sequence of integers in which the elements are arranged in ascending or increasing order from smallest to largest value in computer sciencethe term list is commonly used to refer to any collection with linear ordering the ordering is such that every element in the collectionexcept the first onehas unique predecessor and every elementexcept the last onehas unique successor by this definitiona sequence is listbut list is not necessarily sequence since there is no requirement that list provide access to the elements by position pythonunfortunatelyuses the same name for its built-in mutable sequence typewhich in other languages would be called an array
list or vector abstract data type to avoid confusionwe will use the term list to refer to the data type provided by python and use the terms general list or list structure when referring to the more general list structure as defined earlier the date abstract data type an abstract data type is defined by specifying the domain of the data elements that compose the adt and the set of operations that can be performed on that domain the definition should provide clear description of the adt including both its domain and each of its operations as only those operations specified can be performed on an instance of the adt nextwe provide the definition of simple abstract data type for representing date in the proleptic gregorian calendar defining the adt the gregorian calendar was introduced in the year by pope gregory xiii to replace the julian calendar the new calendar corrected for the miscalculation of the lunar year and introduced the leap year the official first date of the gregorian calendar is fridayoctober the proleptic gregorian calendar is an extension for accommodating earlier dates with the first date on november bc this extension simplifies the handling of dates across older calendars and its use can be found in many software applications define date adt date represents single day in the proleptic gregorian calendar in which the first day starts on november bc datemonthdayyear )creates new date instance initialized to the given gregorian date which must be valid year bc and earlier are indicated by negative year components day()returns the gregorian day number of this date month()returns the gregorian month number of this date year()returns the gregorian year of this date monthname()returns the gregorian month name of this date dayofweek()returns the day of the week as number between and with representing monday and representing sunday numdaysotherdate )returns the number of days as positive integer between this date and the otherdate isleapyear()determines if this date falls in leap year and returns the appropriate boolean value
abstract data types advancebydays )advances the date by the given number of days the date is incremented if days is positive and decremented if days is negative the date is capped to november bcif necessary comparable otherdate )compares this date to the otherdate to determine their logical ordering this comparison can be done using any of the logical operators >===!tostring ()returns string representing the gregorian date in the format mm/dd/yyyy implemented as the python operator that is automatically called via the str(constructor the abstract data types defined in the text will be implemented as python classes when defining an adtwe specify the adt operations as method prototypes the class constructorwhich is used to create an instance of the adtis indicated by the name of the class used in the implementation python allows classes to define or overload various operators that can be used more naturally in program without having to call method by name we define all adt operations as named methodsbut implement some of them as operators when appropriate instead of using the named method the adt operations that will be implemented as python operators are indicated in italicized text and brief comment is provided in the adt definition indicating the corresponding operator this approach allows us to focus on the general adt specification that can be easily translated to other languages if the need arises but also allows us to take advantage of python' simple syntax in various sample programs using the adt to illustrate the use of the date adtconsider the program in listing which processes collection of birth dates the dates are extracted from standard input and examined those dates that indicate the individual is at least years of age based on target date are printed to standard output the user is continuously prompted to enter birth date until zero is entered for the month this simple example illustrates an advantage of working with an abstraction by focusing on what functionality the adt provides instead of how that functionality is implemented by hiding the implementation detailswe can use an adt independent of its implementation in factthe choice of implementation for the date adt will have no effect on the instructions in our example program note class definitions classes are the foundation of object-oriented programing languages and they provide convenient mechanism for defining and implementing abstract data types review of python classes is provided in appendix
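Assuming the Date ADT above has been implemented in a date.py module with the indicated operator methods, the following sketch shows how the named operations and the italicized operator-based operations are invoked in a program. The specific date values here are arbitrary examples introduced for illustration; they are not taken from the text.

# Illustrative use of the Date ADT; the literal dates are arbitrary.
from date import Date

d1 = Date(11, 5, 2007)     # constructor: Date(month, day, year)
d2 = Date(3, 21, 2012)

print(d1.dayOfWeek())      # named accessor method
print(str(d1))             # toString(), invoked through __str__ via str()
print(d1 == d2)            # comparable(), invoked through __eq__
print(d1 < d2)             # comparable(), invoked through __lt__
print(d1.numDays(d2))      # named method: number of days between the dates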
listing the checkdates py program extracts collection of birth dates from the user and determines if each individual is at least years of age from date import date def main()date before which person must have been born to be or older bornbefore date( extract birth dates from the user and determine if or older date promptandextractdate(while date is not none if date <bornbefore print"is at least years of age"date date promptandextractdate(prompts for and extracts the gregorian date components returns date object or none when the user has finished entering dates def promptandextractdate()print"enter birth date month intinput("month ( to quit)"if month = return none else day intinput("day"year intinput("year"return datemonthdayyear call the main routine main(preconditions and postconditions in defining the operationswe must include specification of required inputs and the resulting outputif any in additionwe must specify the preconditions and postconditions for each operation precondition indicates the condition or state of the adt instance and inputs before the operation can be performed postcondition indicates the result or ending state of the adt instance after the operation is performed the precondition is assumed to be true while the postcondition is guarantee as long as the preconditions are met attempting to perform an operation in which the precondition is not satisfied should be flagged as an error consider the use of the pop(imethod for removing value from list when this method is calledthe precondition states the supplied index must be within the legal range upon successful completion of the operationthe postcondition guarantees the item has been removed from the list if an invalid indexone that is out of the legal rangeis passed to the pop(methodan exception is raised all operations have at least one preconditionwhich is that the adt instance has to have been previously initialized in an object-oriented languagethis precondition is automatically verified since an object must be created and initialized
abstract data types via the constructor before any operation can be used other than the initialization requirementan operation may not have any other preconditions it all depends on the type of adt and the respective operation likewisesome operations may not have postconditionas is the case for simple access methodswhich simply return value without modifying the adt instance itself throughout the textwe do not explicitly state the precondition and postcondition as suchbut they are easily identified from the description of the adt operations when implementing abstract data typesit' important that we ensure the proper execution of the various operations by verifying any stated preconditions the appropriate mechanism when testing preconditions for abstract data types is to test the precondition and raise an exception when the precondition fails you then allow the user of the adt to decide how they wish to handle the erroreither catch it or allow the program to abort pythonlike many other object-oriented programming languagesraises an exception when an error occurs an exception is an event that can be triggered and optionally handled during program execution when an exception is raised indicating an errorthe program can contain code to catch and gracefully handle the exceptionotherwisethe program will abort python also provides the assert statementwhich can be used to raise an assertionerror exception the assert statement is used to state what we assume to be true at given point in the program if the assertion failspython automatically raises an assertionerror and aborts the programunless the exception is caught throughout the textwe use the assert statement to test the preconditions when implementing abstract data types this allows us to focus on the implementation of the adts instead of having to spend time selecting the proper exception to raise or creating new exceptions for use with our adts for more information on exceptions and assertionsrefer to appendix implementing the adt after defining the adtwe need to provide an implementation in an appropriate language in our casewe will always use python and class definitionsbut any programming language could be used partial implementation of the date class is provided in listing with the implementation of some methods left as exercises date representations there are two common approaches to storing date in an object one approach stores the three components--monthdayand year--as three separate fields with this formatit is easy to access the individual componentsbut it' difficult to compare two dates or to compute the number of days between two dates since the number of days in month varies from month to month the second approach stores the date as an integer value representing the julian daywhich is the number of days elapsed since the initial date of november bc (using the gregorian calendar notationgiven julian day numberwe can compute any of the three gregorian components and simply subtract the two integer values to determine
which occurs first or how many days separate the two dates. We are going to use the latter approach as it is very common for storing dates in computer applications and provides for an easy implementation.

Listing: A partial implementation of the date.py module.

# Implements a proleptic Gregorian calendar date as a Julian day number.

class Date :
    # Creates an object instance for the specified Gregorian date.
    def __init__( self, month, day, year ):
        self._julianDay = 0
        assert self._isValidGregorian( month, day, year ), \
               "Invalid Gregorian date."

        # The first line of the equation, T = (M - 14) / 12, has to be changed
        # since Python's implementation of integer division is not the same as
        # the mathematical definition.
        tmp = 0
        if month < 3 :
            tmp = -1
        self._julianDay = day - 32075 + \
                          (1461 * (year + 4800 + tmp) // 4) + \
                          (367 * (month - 2 - tmp * 12) // 12) - \
                          (3 * ((year + 4900 + tmp) // 100) // 4)

    # Extracts the appropriate Gregorian date component.
    def month( self ):
        return (self._toGregorian())[0]    # returning M from (M, d, y)

    def day( self ):
        return (self._toGregorian())[1]    # returning D from (m, D, y)

    def year( self ):
        return (self._toGregorian())[2]    # returning Y from (m, d, Y)

    # Returns day of the week as an int between 0 (Mon) and 6 (Sun).
    def dayOfWeek( self ):
        month, day, year = self._toGregorian()
        if month < 3 :
            month = month + 12
            year = year - 1
        return ((13 * month + 3) // 5 + day + year +
                year // 4 - year // 100 + year // 400) % 7

    # Returns the date as a string in Gregorian format.
    def __str__( self ):
        month, day, year = self._toGregorian()
        return "%02d/%02d/%04d" % (month, day, year)

    # Logically compares the two dates.
    def __eq__( self, otherDate ):
        return self._julianDay == otherDate._julianDay

(Listing continued)
Listing (continued)

    def __lt__( self, otherDate ):
        return self._julianDay < otherDate._julianDay

    def __le__( self, otherDate ):
        return self._julianDay <= otherDate._julianDay

    # ... The remaining methods are to be included at this point. ...

    # Returns the Gregorian date as a tuple: (month, day, year).
    def _toGregorian( self ):
        A = self._julianDay + 68569
        B = 4 * A // 146097
        A = A - (146097 * B + 3) // 4
        year = 4000 * (A + 1) // 1461001
        A = A - (1461 * year // 4) + 31
        month = 80 * A // 2447
        day = A - (2447 * month // 80)
        A = month // 11
        month = month + 2 - (12 * A)
        year = 100 * (B - 49) + year + A
        return month, day, year

Constructing the Date

We begin our discussion of the implementation with the constructor, which is shown at the beginning of the listing. The Date ADT will need only a single attribute to store the Julian day representing the given Gregorian date. To convert a Gregorian date to a Julian day number, we use the following formula, where day 0 corresponds to November 24, 4714 BC and all operations involve integer arithmetic:

T = (M - 14) / 12
jday = D - 32075 + (1461 * (Y + 4800 + T) / 4)
                 + (367 * (M - 2 - T * 12) / 12)
                 - (3 * ((Y + 4900 + T) / 100) / 4)

Before attempting to convert the Gregorian date to a Julian day, we need to verify it's a valid date. This is necessary since the precondition states the supplied Gregorian date must be valid. The _isValidGregorian() helper method is used to verify the validity of the given Gregorian date. This helper method, the implementation of which is left as an exercise, tests the supplied Gregorian date components and returns the appropriate Boolean value. If a valid date is supplied to the constructor, it is converted to the equivalent Julian day using the equation provided earlier. Note the statements in the constructor that compute the Julian day. The equation for converting a Gregorian date to a Julian day number uses integer arithmetic, but

Seidelmann, P. Kenneth (ed.), Explanatory Supplement to the Astronomical Almanac, University Science Books.
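As a quick, self-contained check of these conversion formulas, separate from the Date class itself, the following sketch implements both directions as plain functions (the function names are mine, not part of the date.py module) and verifies a known value: the Julian day number of January 1, 2000 is 2451545, and converting that number back recovers the Gregorian components.

# Standalone sanity check of the Gregorian <-> Julian day formulas used above.
def to_julian_day(month, day, year):
    # tmp replaces T = (M - 14) / 12 truncated toward zero.
    tmp = 0
    if month < 3:
        tmp = -1
    return day - 32075 + (1461 * (year + 4800 + tmp) // 4) \
                       + (367 * (month - 2 - tmp * 12) // 12) \
                       - (3 * ((year + 4900 + tmp) // 100) // 4)

def to_gregorian(julian_day):
    A = julian_day + 68569
    B = 4 * A // 146097
    A = A - (146097 * B + 3) // 4
    year = 4000 * (A + 1) // 1461001
    A = A - (1461 * year // 4) + 31
    month = 80 * A // 2447
    day = A - (2447 * month // 80)
    A = month // 11
    month = month + 2 - (12 * A)
    year = 100 * (B - 49) + year + A
    return month, day, year

assert to_julian_day(1, 1, 2000) == 2451545   # known Julian day number
assert to_gregorian(2451545) == (1, 1, 2000)  # round trip recovers the date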
the date abstract data type comments class definitions and methods should be properly commented to aide the user in knowing what the class and/or methods do to conserve spacehoweverclasses and methods presented in this book do not routinely include these comments since the surrounding text provides full explanation caution the equation line ( produces an incorrect result in python due to its implementation of integer divisionwhich is not the same as the mathematical definition by definitionthe result of the integer division - / is but python computes this as - resulting in - thuswe had to modify the first line of the equation to produce the correct julian day when the month component is greater than protected attributes and methods python does not provide technique to protect attributes and helper methods in order to prevent their use outside the class definition in this textwe use identifier nameswhich begin with single underscore to flag those attributes and methods that should be considered protected and rely on the user of the class to not attempt direct access the gregorian date to access the gregorian date components the julian day must be converted back to gregorian this conversion is needed in several of the adt operations instead of duplicating the formula each time it' neededwe create helper method to handle the conversion as illustrated in lines - of listing the togregorian(method returns tuple containing the daymonthand year components as with the conversion from gregorian to julianinteger arithmetic operations are used throughout the conversion formula by returning tuplewe can call the helper method and use the appropriate component from the tuple for the given gregorian component access methodas illustrated in lines - the dayofweek(methodshown in lines - also uses the togregorian(conversion helper method we determine the day of the week based on the gregorian components using simple formula that returns an integer value between and where represents monday represents tuesdayand so on the tostring operation defined by the adt is implemented in lines - by overloading python' str method it creates string representation of date in gregorian format this can be done using the string format operator and supplying the values returned from the conversion helper method by using python' str methodpython automatically calls this method on the object when you attempt to print or convert an object to string as in the following example
abstract data types firstday date printfirstday comparing date objects we can logically compare two date instances to determine their calendar order when using julian day to represent the datesthe date comparison is as simple as comparing the two integer values and returning the appropriate boolean value based on the result of that comparison the "comparableadt operation is implemented using python' logical comparison operators as shown in lines - of listing by implementing the methods for the logical comparison operatorsinstances of the class become comparable objects that isthe objects can be compared against each other to produce logical ordering you will notice that we implemented only three of the logical comparison operators the reason for this is that starting with python version python will automatically swap the operands and call the appropriate reflective method when necessary for exampleif we use the expression with date objects in our programpython will automatically swap the operands and call instead since the lt method is defined but not gt it will do the same for > and < when testing for equalitypython will automatically invert the result when only one of the equality operators (=or !=is defined thuswe need only define one operator from each of the following pairs to achieve the full range of logical comparisons=and =or !for more information on overloading operatorsrefer to appendix tip overloading operators user-defined classes can implement methods to define many of the standard python operators such as +*%and ==as well as the standard named operators such as in and not in this allows for more natural use of the objects instead of having to call specific named methods it can be tempting to define operators for every class you createbut you should limit the definition of operator methods for classes where the specific operator has meaningful purpose bags the date adt provided an example of simple abstract data type to illustrate the design and implementation of complex abstract data typewe define the bag adt bag is simple container like shopping bag that can be used to store collection of items the bag container restricts access to the individual items by only defining operations for adding and removing individual itemsfor determining if an item is in the bagand for traversing over the collection of items
the bag abstract data type there are several variations of the bag adt with the one described here being simple bag grab bag is similar to the simple bag but the items are removed from the bag at random another common variation is the counting bagwhich includes an operation that returns the number of occurrences in the bag of given item implementations of the grab bag and counting bag are left as exercises define bag adt bag is container that stores collection in which duplicate values are allowed the itemseach of which is individually storedhave no particular order but they must be comparable bag()creates bag that is initially empty length ()returns the number of items stored in the bag accessed using the len(function contains item )determines if the given target item is stored in the bag and returns the appropriate boolean value accessed using the in operator additem )adds the given item to the bag removeitem )removes and returns an occurrence of item from the bag an exception is raised if the element is not in the bag iterator ()creates and returns an iterator that can be used to iterate over the collection of items you may have noticed our definition of the bag adt does not include an operation to convert the container to string we could include such an operationbut creating string for large collection is time consuming and requires large amount of memory such an operation can be beneficial when debugging program that uses an instance of the bag adt thusit' not uncommon to include the str operator method for debugging purposesbut it would not typically be used in production software we will usually omit the inclusion of str operator method in the definition of our abstract data typesexcept in those cases where it' meaningfulbut you may want to include one temporarily for debugging purposes examples given the abstract definition of the bag adtwe can create and use bag without knowing how it is actually implemented consider the following simple examplewhich creates bag and asks the user to guess one of the values it contains
abstract data types mybag bag(mybag add mybag add mybag add mybag add mybag add value intinput("guess value contained in the bag "if value in mybagprint"the bag contains the value"value else print"the bag does not contain the value"value nextconsider the checkdates py sample program from the previous section where we extracted birth dates from the user and determined which ones were for individuals who were at least years of age suppose we want to keep the collection of birth dates for later use it wouldn' make sense to require the user to re-enter the dates multiple times insteadwe can store the birth dates in bag as they are entered and access them lateras many times as needed the bag adt is perfect container for storing objects when the position or order of specific item does not matter the following is new version of the main routine for our birth date checking program from listing #pgmcheckdates py (modified main(from checkdates pyfrom linearbag import bag from date import date def main()bornbefore date bag bag(extract dates from the user and place them in the bag date promptandextractdate(while date is not none bag adddate date promptandextractdate(iterate over the bag and check the age for date in bag if date <bornbefore print"is at least years of age"date why bag adtyou may be wonderingwhy do we need the bag adt when we could simply use the list to store the itemsfor small program and small collection of datausing list would be appropriate when working with large programs and multiple team membershoweverabstract data types provide several advantages as described earlier in section by working with the abstraction of bagwe canafocus on solving the problem at hand instead of worrying about the
implementation of the containerbreduce the chance of introducing errors from misuse of the list since it provides additional operations that are not appropriate for bagcprovide better coordination between different modules and designersand deasily swap out our current implementation of the bag adt for differentpossibly more efficientversion later selecting data structure the implementation of complex abstract data type typically requires the use of data structure for organizing and managing the collection of data items there are many different structures from which to choose so how do we know which to usewe have to evaluate the suitability of data structure for implementing given abstract data typewhich we base on the following criteria does the data structure provide for the storage requirements as specified by the domain of the adtabstract data types are defined to work with specific domain of data values the data structure we choose must be capable of storing all possible values in that domaintaking into consideration any restrictions or limitations placed on the individual items does the data structure provide the necessary data access and manipulation functionality to fully implement the adtthe functionality of an abstract data type is provided through its defined set of operations the data structure must allow for full and correct implementation of the adt without having to violate the abstraction principle by exposing the implementation details to the user does the data structure lend itself to an efficient implementation of the operationsan important goal in the implementation of an abstract data type is to provide an efficient solution some data structures allow for more efficient implementation than othersbut not every data structure is suitable for implementing every adt efficiency considerations can help to select the best structure from among multiple candidates there may be multiple data structures suitable for implementing given abstract data typebut we attempt to select the best possible based on the context in which the adt will be used to accommodate different contextslanguage libraries will commonly provide several implementations of some adtsallowing the programmer to choose the most appropriate following this approachwe introduce number of abstract data types throughout the text and present multiple implementations as new data structures are introduced the efficiency of an implementation is based on complexity analysiswhich is not introduced until later in thuswe postpone consideration of the efficiency of an implementation in selecting data structure until that time in the meantimewe only consider the suitability of data structure based on the storage and functional requirements of the abstract data type
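To make these selection criteria concrete, here is a minimal sketch of one possible Bag implementation that simply stores the items in a Python list. It is only an illustration of backing an ADT with a data structure under the assumption that a list meets the storage and functional requirements; it is not necessarily the implementation developed later in the text.

# A minimal sketch of the Bag ADT backed by a Python list.
class Bag:
    def __init__(self):
        self._theItems = []              # the list is the underlying structure

    def __len__(self):
        return len(self._theItems)       # length() operation, used via len()

    def __contains__(self, item):
        return item in self._theItems    # contains() operation, used via 'in'

    def add(self, item):
        self._theItems.append(item)

    def remove(self, item):
        # Precondition: the item must be in the bag.
        assert item in self._theItems, "The item must be in the bag."
        ndx = self._theItems.index(item)
        return self._theItems.pop(ndx)

    def __iter__(self):
        # Iterate directly over the underlying list.
        return iter(self._theItems)

With this sketch, the earlier bag examples behave as the ADT definition describes: len(myBag) reports the number of items, value in myBag tests membership, and a for loop traverses the collection, all without the client code knowing that a list sits inside the black box.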