def isSubset(L1, L2):
    """Assumes L1 and L2 are lists.
       Returns True if each element in L1 is also in L2
       and False otherwise."""
    for e1 in L1:
        matched = False
        for e2 in L2:
            if e1 == e2:
                matched = True
                break
        if not matched:
            return False
    return True

Figure: Implementation of a subset test

Each time the inner loop is reached it is executed O(len(L2)) times. The function will execute the outer loop O(len(L1)) times, so the inner loop will be reached O(len(L1)) times. Therefore, the complexity of isSubset is O(len(L1)*len(L2)).

Now consider the function intersect in the next figure.

def intersect(L1, L2):
    """Assumes L1 and L2 are lists.
       Returns a list that is the intersection of L1 and L2"""
    #Build a list containing common elements
    tmp = []
    for e1 in L1:
        for e2 in L2:
            if e1 == e2:
                tmp.append(e1)
    #Build a list without duplicates
    result = []
    for e in tmp:
        if e not in result:
            result.append(e)
    return result

Figure: Implementation of list intersection

The running time for the part building the list that might contain duplicates is clearly O(len(L1)*len(L2)). At first glance, it appears that the part of the code that builds the duplicate-free list is linear in the length of tmp, but it is not. The test e not in result potentially involves looking at each element in result, and is therefore O(len(result)); consequently the second part of the implementation is O(len(tmp)*len(result)). Since the lengths of result and tmp are bounded by the length of the smaller of L1 and L2, and since we ignore additive terms, the complexity of intersect is O(len(L1)*len(L2)).
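If the elements of the lists are hashable, the same result can be computed much faster using Python's built-in sets. The sketch below is not the book's implementation, just an illustration of how the quadratic membership tests can be replaced by expected constant-time set lookups, making the whole computation roughly linear in the combined length of the two lists.

def fastIntersect(L1, L2):
    """A sketch, assuming the elements of L1 and L2 are hashable.
       Building the set is O(len(L1)); each membership test is expected
       O(1), so the function is roughly O(len(L1) + len(L2)) on average."""
    elems1 = set(L1)
    seen = set()
    result = []
    for e in L2:
        if e in elems1 and e not in seen:
            result.append(e)
            seen.add(e)
    return result

print(fastIntersect([1, 2, 3, 4], [3, 4, 5]))  #prints [3, 4]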
Exponential Complexity

As we will see later in this book, many important problems are inherently exponential, i.e., solving them completely can require time that is exponential in the size of the input. This is unfortunate, since it rarely pays to write a program that has a reasonably high probability of taking exponential time to run. Consider, for example, the code in the figure below.

def getBinaryRep(n, numDigits):
    """Assumes n and numDigits are non-negative ints
       Returns a numDigits str that is a binary
       representation of n"""
    result = ''
    while n > 0:
        result = str(n%2) + result
        n = n//2
    if len(result) > numDigits:
        raise ValueError('not enough digits')
    for i in range(numDigits - len(result)):
        result = '0' + result
    return result

def genPowerset(L):
    """Assumes L is a list
       Returns a list of lists that contains all possible
       combinations of the elements of L.  E.g., if
       L is [1, 2] it will return a list with elements
       [], [1], [2], and [1, 2]."""
    powerset = []
    for i in range(0, 2**len(L)):
        binStr = getBinaryRep(i, len(L))
        subset = []
        for j in range(len(L)):
            if binStr[j] == '1':
                subset.append(L[j])
        powerset.append(subset)
    return powerset

Figure: Generating the power set

The function genPowerset(L) returns a list of lists that contains all possible combinations of the elements of L. For example, if L is ['x', 'y'], the powerset of L will be a list containing the lists [], ['x'], ['y'], and ['x', 'y'].

The algorithm is a bit subtle. Consider a list of n elements. We can represent any combination of elements by a string of n 0's and 1's, where a 1 represents the presence of an element and a 0 its absence. The combination containing no items is represented by a string of all 0's; the combination containing all of the items is represented by a string of all 1's; the combination containing only the first and last elements is represented by a string with a 1 at each end and 0's in between; etc. Therefore generating all sublists of a list L of length n can be done as follows:

1. Generate all n-bit binary numbers. These are the numbers from 0 to 2**n - 1.
2. For each of these 2**n binary numbers, b, generate a list by selecting those elements of L that have an index corresponding to a 1 in b. For example, if L is ['x', 'y'] and b is 01, generate the list ['y'].
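A quick way to see the exponential growth, assuming getBinaryRep and genPowerset are defined as above, is to call genPowerset on lists of increasing length and look at the size of the result; it doubles every time one element is added.

#Assumes genPowerset and getBinaryRep are defined as above
for n in range(1, 6):
    L = list(range(n))
    print(len(genPowerset(L)))   #prints 2, 4, 8, 16, 32, one per line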
Try running genPowerset on a list containing the first ten letters of the alphabet. It will finish quite quickly and produce a list with 1024 elements. Next, try running genPowerset on the first twenty letters of the alphabet. It will take more than a bit of time to run, and return a list with about a million elements. If you try running genPowerset on all twenty-six letters, you will probably get tired of waiting for it to complete, unless your computer runs out of memory trying to build a list with tens of millions of elements. Don't even think about trying to run genPowerset on a list containing all uppercase and lowercase letters. Step 1 of the algorithm generates O(2**len(L)) binary numbers, so the algorithm is exponential in len(L).

Does this mean that we cannot use computation to tackle exponentially hard problems? Absolutely not. It means that we have to find algorithms that provide approximate solutions to these problems or that find perfect solutions on some instances of the problem. But that is a subject for later chapters.

Comparisons of Complexity Classes

The following plots are intended to convey an impression of the implications of an algorithm being in one or another of these complexity classes.

The plot on the right compares the growth of a constant-time algorithm to that of a logarithmic algorithm. Note that the size of the input has to reach about a million for the two of them to cross, even for the very small constant of twenty. When the size of the input is five million, the time required by the logarithmic algorithm is still quite small. The moral is that logarithmic algorithms are almost as good as constant-time ones.

The plot on the left illustrates the dramatic difference between logarithmic algorithms and linear algorithms. Notice how small the range of the y-axis is. While we needed to look at large inputs to appreciate the difference between constant-time and logarithmic-time algorithms, the difference between logarithmic-time and linear-time algorithms is apparent even on small inputs. The dramatic difference in the relative performance of logarithmic and linear algorithms does not mean that linear algorithms are bad. In fact, most of the time a linear algorithm is acceptably efficient.

The plot below and on the left shows that there is a significant difference between O(n) and O(n*log(n)). Given how slowly log(n) grows, this may seem a bit surprising, but keep in mind that it is a multiplicative factor. Also keep in mind that in most practical situations O(n*log(n)) is fast enough to be useful.
On the other hand, as the plot below and on the right suggests, there are many situations in which a quadratic rate of growth is prohibitive. The quadratic curve is rising so quickly that it is hard to see that the log-linear curve is even on the plot.

The final two plots are about exponential complexity. In the plot on the left, the numbers printed next to the y-axis appear small. However, the scale-factor notation on the top left means that each tick on the y-axis should be multiplied by an enormous power of ten, so the plotted y-values are actually huge. It looks, however, almost as if there are no curves in the plot on the left. That's because an exponential function grows so quickly that relative to the y value of the highest point (which determines the scale of the y-axis), the y values of earlier points on the exponential curve (and all points on the quadratic curve) are almost indistinguishable from 0.

The plot on the right addresses this issue by using a logarithmic scale on the y-axis. One can readily see that exponential algorithms are impractical for all but the smallest of inputs. Notice, by the way, that when plotted on a logarithmic scale, an exponential curve appears as a straight line. We will have more to say about this in a later chapter.
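A sketch (my own code, not the book's) of how comparison plots like these can be generated, using pylab, which is introduced later in the book. Plotting on a logarithmic y-axis with semilogy makes the exponential curve appear as a straight line, as described above.

import pylab

ns = list(range(2, 50))
pylab.semilogy(ns, [n for n in ns], label = 'linear')
pylab.semilogy(ns, [n*pylab.log2(n) for n in ns], label = 'log-linear')
pylab.semilogy(ns, [n**2 for n in ns], label = 'quadratic')
pylab.semilogy(ns, [2**n for n in ns], label = 'exponential')
pylab.legend(loc = 'best')
pylab.xlabel('Input size')
pylab.ylabel('Steps (log scale)')
pylab.show()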
SOME SIMPLE ALGORITHMS AND DATA STRUCTURES

Though we expend a fair number of pages in this book talking about efficiency, the goal is not to make you an expert in designing efficient programs. There are many long books (and even some good long books) devoted exclusively to that topic. (Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein, is an excellent source for those of you not intimidated by a fair amount of mathematics.) In the previous chapter we introduced some of the basic concepts underlying complexity analysis. In this chapter we use those concepts to look at the complexity of a few classic algorithms. The goal of this chapter is to help you develop some general intuitions about how to approach questions of efficiency. By the time you get through it you should understand why some programs complete in the blink of an eye, why some need to run overnight, and why some wouldn't complete in your lifetime.

The first algorithms we looked at in this book were based on brute-force exhaustive enumeration. We argued that modern computers are so fast that it is often the case that employing clever algorithms is a waste of time. Programming something that is simple and obviously correct, and letting it rip, is often the right way to go.

We then looked at some problems (e.g., finding an approximation to the roots of a polynomial) where the search space was too large to make brute force practical. This led us to consider more efficient algorithms such as bisection search and Newton-Raphson. The major point was that the key to efficiency is a good algorithm, not clever coding tricks.

In the sciences (physical, life, and social), programmers often start by quickly coding up a simple algorithm to test the plausibility of a hypothesis about a data set, and then run it on a small amount of data. If this yields encouraging results, the hard work of producing an implementation that can be run (perhaps over and over again) on large data sets begins. Such implementations need to be based on efficient algorithms.

Efficient algorithms are hard to invent. Successful professional computer scientists might invent maybe one algorithm during their whole career--if they are lucky. Most of us never invent a novel algorithm. What we do instead is learn to reduce the most complex aspects of the problems with which we are faced to previously solved problems. More specifically, we

1. Develop an understanding of the inherent complexity of the problem with which we are faced,
2. Think about how to break that problem up into subproblems, and
3. Relate those subproblems to other problems for which efficient algorithms already exist.
This chapter contains a few examples intended to give you some intuition about algorithm design. Many other algorithms appear elsewhere in the book.

Keep in mind that the most efficient algorithm is not always the algorithm of choice. A program that does everything in the most efficient possible way is often needlessly difficult to understand. It is often a good strategy to start by solving the problem at hand in the most straightforward manner possible, instrument it to find any computational bottlenecks, and then look for ways to improve the computational complexity of those parts of the program contributing to the bottlenecks.

Search Algorithms

A search algorithm is a method for finding an item or group of items with specific properties within a collection of items. We refer to the collection of items as a search space. The search space might be something concrete, such as a set of electronic medical records, or something abstract, such as the set of all integers. A large number of problems that occur in practice can be formulated as search problems.

Many of the algorithms presented earlier in this book can be viewed as search algorithms. Earlier, we formulated finding an approximation to the roots of a polynomial as a search problem, and looked at three algorithms--exhaustive enumeration, bisection search, and Newton-Raphson--for searching the space of possible answers.

In this section, we will examine two algorithms for searching a list. Each meets the specification

def search(L, e):
    """Assumes L is a list.
       Returns True if e is in L and False otherwise"""

The astute reader might wonder if this is not semantically equivalent to the Python expression e in L. The answer is yes, it is. And if one is unconcerned about the efficiency of discovering whether e is in L, one should simply write that expression.

Linear Search and Using Indirection to Access Elements

Python uses the following algorithm to determine if an element is in a list:

def search(L, e):
    for i in range(len(L)):
        if L[i] == e:
            return True
    return False

If the element e is not in the list the algorithm will perform O(len(L)) tests, i.e., the complexity is at best linear in the length of L. Why "at best" linear? It will be linear only if each operation inside the loop can be done in constant time. That raises the question of whether Python retrieves the ith element of a list in constant time.
Since our model of computation assumes that fetching the contents of an address is a constant-time operation, the question becomes whether we can compute the address of the ith element of a list in constant time.

Let's start by considering the simple case where each element of the list is an integer. This implies that each element of the list is the same size, e.g., four units of memory (four eight-bit bytes). In that case the address in memory of the ith element of the list is simply start + 4*i, where start is the address of the start of the list. Therefore we can assume that Python could compute the address of the ith element of a list of integers in constant time.

Of course, we know that Python lists can contain objects of types other than int, and that the same list can contain objects of many different types and sizes. You might think that this would present a problem, but it does not.

In Python, a list is represented as a length (the number of objects in the list) and a sequence of fixed-size pointers to objects. The figure below illustrates the use of these pointers. The shaded region represents a list containing four elements. The leftmost shaded box contains a pointer to an integer indicating the length of the list. Each of the other shaded boxes contains a pointer to an object in the list.

Figure: Implementing lists

If the length field occupies four units of memory, and each pointer (address) occupies four units of memory, the address of the ith element of the list is stored at the address start + 4 + 4*i. Again, this address can be found in constant time, and then the value stored at that address can be used to access the ith element. This access too is a constant-time operation.

This example illustrates one of the most important implementation techniques used in computing: indirection. Generally speaking, indirection involves accessing something by first accessing something else that contains a reference to the thing initially sought.

(The number of bits used to store an integer, often called the word size, is typically dictated by the hardware of the computer. The pointers are of a fixed size, 32 bits in some implementations and 64 bits in others. My dictionary defines "indirection" as "lack of straightforwardness and openness: deceitfulness." In fact, the word generally had a pejorative implication until computer scientists realized that it was the solution to many problems.)
This is what happens each time we use a variable to refer to the object to which that variable is bound. When we use a variable to access a list and then a reference stored in that list to access another object, we are going through two levels of indirection. (It has often been said that "any problem in computing can be solved by adding another level of indirection." Following three levels of indirection, we attribute this observation to David Wheeler. The paper "Authentication in Distributed Systems: Theory and Practice," by Butler Lampson et al., contains the observation. It also contains a footnote saying that "Roger Needham attributes this observation to David Wheeler of Cambridge University.")

Binary Search and Exploiting Assumptions

Getting back to the problem of implementing search(L, e), is O(len(L)) the best we can do? Yes, if we know nothing about the relationship of the values of the elements in the list and the order in which they are stored. In the worst case, we have to look at each element in L to determine whether L contains e.

But suppose we know something about the order in which elements are stored, e.g., suppose we know that we have a list of integers stored in ascending order. We could change the implementation so that the search stops when it reaches a number larger than the number for which it is searching:

def search(L, e):
    """Assumes L is a list, the elements of which are in
       ascending order.
       Returns True if e is in L and False otherwise"""
    for i in range(len(L)):
        if L[i] == e:
            return True
        if L[i] > e:
            return False
    return False

This would improve the average running time. However, it would not change the worst-case complexity of the algorithm, since in the worst case each element of L is examined.

We can, however, get a considerable improvement in the worst-case complexity by using an algorithm, binary search, that is similar to the bisection search algorithm used earlier to find an approximation to the square root of a floating point number. There we relied upon the fact that there is an intrinsic total ordering on floating point numbers. Here we rely on the assumption that the list is ordered.

The idea is simple:

1. Pick an index, i, that divides the list L roughly in half.
2. Ask if L[i] == e.
3. If not, ask whether L[i] is larger or smaller than e.
4. Depending upon the answer, search either the left or the right half of L for e.
Given the structure of this algorithm, it is not surprising that the most straightforward implementation of binary search uses recursion, as shown in the figure below.

def search(L, e):
    """Assumes L is a list, the elements of which are in
       ascending order.
       Returns True if e is in L and False otherwise"""

    def bSearch(L, e, low, high):
        #Decrements high - low
        if high == low:
            return L[low] == e
        mid = (low + high)//2
        if L[mid] == e:
            return True
        elif L[mid] > e:
            if low == mid: #nothing left to search
                return False
            else:
                return bSearch(L, e, low, mid - 1)
        else:
            return bSearch(L, e, mid + 1, high)

    if len(L) == 0:
        return False
    else:
        return bSearch(L, e, 0, len(L) - 1)

Figure: Recursive binary search

The outer function, search(L, e), has the same arguments as the function specified above, but a slightly different specification. The specification says that the implementation may assume that L is sorted in ascending order. The burden of making sure that this assumption is satisfied lies with the caller of search. If the assumption is not satisfied, the implementation has no obligation to behave well. It could work, but it could also crash or return an incorrect answer. Should search be modified to check that the assumption is satisfied? This might eliminate a source of errors, but it would defeat the purpose of using binary search, since checking the assumption would itself take O(len(L)) time.

Functions such as search are often called wrapper functions. The function provides a nice interface for client code, but is essentially a pass-through that does no serious computation. Instead, it calls the helper function bSearch with appropriate arguments. This raises the question of why not eliminate search and have clients call bSearch directly? The reason is that the parameters low and high have nothing to do with the abstraction of searching a list for an element. They are implementation details that should be hidden from those writing programs that call search.
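A quick check of the implementation, assuming search and bSearch are defined as in the figure above (remember that the caller is responsible for passing a sorted list):

L = [1, 3, 4, 7, 9, 12, 15]
print(search(L, 7))    #prints True
print(search(L, 8))    #prints False
print(search([], 8))   #prints False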
Let us now analyze the complexity of bSearch. We showed in the last section that list access takes constant time. Therefore, we can see that excluding the recursive call, each instance of bSearch is O(1). Therefore, the complexity of bSearch depends only upon the number of recursive calls.

If this were a book about algorithms, we would now dive into a careful analysis using something called a recurrence relation. But since it isn't, we will take a much less formal approach that starts with the question "How do we know that the program terminates?" Recall that we asked the same question about a while loop in an earlier chapter. We answered the question by providing a decrementing function for the loop. We do the same thing here. In this context, the decrementing function has the properties:

1. It maps the values to which the formal parameters are bound to a nonnegative integer.
2. When its value is 0, the recursion terminates.
3. For each recursive call, the value of the decrementing function is less than the value of the decrementing function on entry to the instance of the function making the call.

The decrementing function for bSearch is high - low. The if statement in search ensures that the value of this decrementing function is at least 0 the first time bSearch is called (decrementing function property 1).

When bSearch is entered, if high - low is exactly 0, the function makes no recursive call, simply returning the value L[low] == e (satisfying decrementing function property 2).

The function bSearch contains two recursive calls. One call uses arguments that cover all of the elements to the left of mid, and the other call uses arguments that cover all of the elements to the right of mid. In either case, the value of high - low is cut roughly in half (satisfying decrementing function property 3).

We now understand why the recursion terminates. The next question is: how many times can the value of high - low be cut in half before high - low == 0?

Recall that log_y(x) is the number of times that y has to be multiplied by itself to reach x. Conversely, if x is divided by y log_y(x) times, the result is 1. This implies that high - low can be cut in half at most log2(high - low) times before it reaches 0.

Finally, we can answer the question: what is the algorithmic complexity of binary search? Since when search calls bSearch the value of high - low is equal to len(L) - 1, the complexity of search is O(log(len(L))). (Recall that when looking at orders of growth the base of the logarithm is irrelevant.)

Finger exercise: Why does the code use mid+1 rather than mid in the second recursive call?
Sorting Algorithms

We have just seen that if we happen to know that a list is sorted, we can exploit that information to greatly reduce the time needed to search a list. Does this mean that when asked to search a list one should first sort it and then perform the search?

Let O(sortComplexity(L)) be the complexity of sorting a list. Since we know that we can always search a list in O(len(L)) time, the question of whether we should first sort and then search boils down to the question: is sortComplexity(L) + log(len(L)) less than len(L)? The answer, sadly, is no. One cannot sort a list without looking at each element in the list at least once, so it is not possible to sort a list in sub-linear time.

Does this mean that binary search is an intellectual curiosity of no practical import? Happily, no. Suppose that one expects to search the same list many times. It might well make sense to pay the overhead of sorting the list once, and then amortize the cost of the sort over many searches. If we expect to search the list k times, the relevant question becomes: is sortComplexity(L) + k*log(len(L)) less than k*len(L)?

As k becomes large, the time required to sort the list becomes increasingly irrelevant. How big k needs to be depends upon how long it takes to sort a list. If, for example, sorting were exponential in the size of the list, k would have to be quite large.

Fortunately, sorting can be done rather efficiently. For example, the standard implementation of sorting in most Python implementations runs in roughly O(n*log(n)) time, where n is the length of the list. In practice, you will rarely need to implement your own sort function. In most cases, the right thing to do is to use either Python's built-in sort method (L.sort() sorts the list L) or its built-in function sorted (sorted(L) returns a list with the same elements as L, but does not mutate L). We present sorting algorithms here primarily to provide some practice in thinking about algorithm design and complexity analysis.

We begin with a simple but inefficient algorithm, selection sort. Selection sort, shown in the figure on the next page, works by maintaining the loop invariant that, given a partitioning of the list into a prefix (L[0:i]) and a suffix (L[i+1:len(L)]), the prefix is sorted and no element in the prefix is larger than the smallest element in the suffix.
We use induction to reason about loop invariants.

Base case: At the start of the first iteration, the prefix is empty, i.e., the suffix is the entire list. The invariant is (trivially) true.

Induction step: At each step of the algorithm, we move one element from the suffix to the prefix. We do this by appending a minimum element of the suffix to the end of the prefix. Because the invariant held before we moved the element, we know that after we append the element the prefix is still sorted. We also know that since we removed the smallest element in the suffix, no element in the prefix is larger than the smallest element in the suffix.

When the loop is exited, the prefix includes the entire list, and the suffix is empty. Therefore, the entire list is now sorted in ascending order.

def selSort(L):
    """Assumes that L is a list of elements that can be
       compared using >.
       Sorts L in ascending order"""
    suffixStart = 0
    while suffixStart != len(L):
        #look at each element in suffix
        for i in range(suffixStart, len(L)):
            if L[i] < L[suffixStart]:
                #swap position of elements
                L[suffixStart], L[i] = L[i], L[suffixStart]
        suffixStart += 1

Figure: Selection sort

It's hard to imagine a simpler or more obviously correct sorting algorithm. Unfortunately, it is rather inefficient. (Though not the most inefficient of sorting algorithms, as suggested by a successful candidate for the presidency.) The complexity of the inner loop is O(len(L)). The complexity of the outer loop is also O(len(L)). So, the complexity of the entire function is O(len(L)**2), i.e., it is quadratic in the length of L.
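A quick check, assuming selSort is defined as in the figure. Because selSort sorts in place, it mutates its argument and returns None.

L = [4, 1, 3, 9, 2]
selSort(L)
print(L)   #prints [1, 2, 3, 4, 9]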
Merge Sort

Fortunately, we can do a lot better than quadratic time using a divide-and-conquer algorithm. The basic idea is to combine solutions of simpler instances of the original problem. In general, a divide-and-conquer algorithm is characterized by

1. A threshold input size, below which the problem is not subdivided,
2. The size and number of sub-instances into which an instance is split, and
3. The algorithm used to combine sub-solutions.

The threshold is sometimes called the recursive base. For item 2 it is usual to consider the ratio of initial problem size to sub-instance size. In most of the examples we've seen so far, the ratio was 2.

Merge sort is a prototypical divide-and-conquer algorithm. It was invented in 1945 by John von Neumann, and is still widely used. Like many divide-and-conquer algorithms it is most easily described recursively:

1. If the list is of length 0 or 1, it is already sorted.
2. If the list has more than one element, split the list into two lists, and use merge sort to sort each of them.
3. Merge the results.

The key observation made by von Neumann is that two sorted lists can be efficiently merged into a single sorted list. The idea is to look at the first element of each list, and move the smaller of the two to the end of the result list. When one of the lists is empty, all that remains is to copy the remaining items from the other list. Consider, for example, merging two short sorted lists step by step: at each step the smaller of the two front elements moves to the result, and once one list is exhausted the remainder of the other is copied over.

What is the complexity of the merge process? It involves two constant-time operations, comparing the values of elements and copying elements from one list to another. The number of comparisons is O(len(L)), where L is the longer of the two lists. The number of copy operations is O(len(L1) + len(L2)), because each element gets copied exactly once. Therefore, merging two sorted lists is linear in the length of the lists.

The figure below contains an implementation of the merge sort algorithm. Notice that we have made the comparison operator a parameter of the mergeSort function. The parameter's default value is the lt operator defined in the standard Python module named operator. This module defines a set of functions corresponding to the built-in operators of Python (for example, < for numbers). Later in this chapter we will exploit this flexibility.
def merge(left, right, compare):
    """Assumes left and right are sorted lists and
       compare defines an ordering on the elements.
       Returns a new sorted (by compare) list containing the
       same elements as (left + right) would contain."""
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if compare(left[i], right[j]):
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    while (i < len(left)):
        result.append(left[i])
        i += 1
    while (j < len(right)):
        result.append(right[j])
        j += 1
    return result

import operator

def mergeSort(L, compare = operator.lt):
    """Assumes L is a list, compare defines an ordering
       on elements of L.
       Returns a new sorted list containing the same elements as L"""
    if len(L) < 2:
        return L[:]
    else:
        middle = len(L)//2
        left = mergeSort(L[:middle], compare)
        right = mergeSort(L[middle:], compare)
        return merge(left, right, compare)

Figure: Merge sort

Let's analyze the complexity of mergeSort. We already know that the time complexity of merge is O(len(L)). At each level of recursion the total number of elements to be merged is len(L). Therefore, the time complexity of mergeSort is O(len(L)) multiplied by the number of levels of recursion. Since mergeSort divides the list in half each time, we know that the number of levels of recursion is O(log(len(L))). Therefore, the time complexity of mergeSort is O(n*log(n)), where n is len(L).

This is a lot better than selection sort's O(len(L)**2). For example, if L has 10,000 elements, len(L)**2 is a hundred million but len(L)*log2(len(L)) is about 130,000.
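A quick check, assuming merge and mergeSort are defined as in the figure. Unlike selSort, mergeSort returns a new list and leaves its argument unchanged; passing operator.gt instead of the default operator.lt sorts in descending order.

L = [35, 4, 18, 4, 9]
print(mergeSort(L))               #prints [4, 4, 9, 18, 35]
print(mergeSort(L, operator.gt))  #prints [35, 18, 9, 4, 4]
print(L)                          #prints [35, 4, 18, 4, 9]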
This improvement in time complexity comes with a price. Selection sort is an example of an in-place sorting algorithm. Because it works by swapping the place of elements within the list, it uses only a constant amount of extra storage (one element in our implementation). In contrast, the merge sort algorithm involves making copies of the list. This means that its space complexity is O(len(L)). This can be an issue for large lists. (Quicksort, invented by C.A.R. Hoare, is conceptually similar to merge sort, but considerably more complex. It has the advantage of needing only log(n) additional space. Unlike merge sort, its running time depends upon the way the elements in the list to be sorted are ordered relative to each other. Though its worst-case running time is O(n**2), its expected running time is only O(n*log(n)).)

Exploiting Functions as Parameters

Suppose we want to sort a list of names written as firstName lastName, e.g., the list ['Chris Terman', 'Tom Brady', 'Eric Grimson', 'Gisele Bundchen']. The figure below defines two ordering functions, and then uses these to sort a list in two different ways. Each function imports the standard Python module string, and uses the split function from that module. The two arguments to split are strings. The second argument specifies a separator (a blank space in the code below) that is used to split the first argument into a sequence of substrings. The second argument is optional. If that argument is omitted the first string is split using arbitrary strings of whitespace characters (space, tab, newline, return, and formfeed).

def lastNameFirstName(name1, name2):
    import string
    name1 = string.split(name1, ' ')
    name2 = string.split(name2, ' ')
    if name1[1] != name2[1]:
        return name1[1] < name2[1]
    else: #last names the same, sort by first name
        return name1[0] < name2[0]

def firstNameLastName(name1, name2):
    import string
    name1 = string.split(name1, ' ')
    name2 = string.split(name2, ' ')
    if name1[0] != name2[0]:
        return name1[0] < name2[0]
    else: #first names the same, sort by last name
        return name1[1] < name2[1]

L = ['Chris Terman', 'Tom Brady', 'Eric Grimson', 'Gisele Bundchen']
newL = mergeSort(L, lastNameFirstName)
print 'Sorted by last name =', newL
newL = mergeSort(L, firstNameLastName)
print 'Sorted by first name =', newL

Figure: Sorting a list of names
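In more recent versions of Python, the string module no longer provides split (one writes name.split(' ') instead), and sorted and list.sort take a key function rather than a two-argument comparison. A sketch of how the same ordering might be expressed in that style, using the same list of names:

L = ['Chris Terman', 'Tom Brady', 'Eric Grimson', 'Gisele Bundchen']

def lastFirstKey(name):
    #Split 'first last' and order by (last, first)
    first, last = name.split(' ')
    return (last, first)

print(sorted(L, key = lastFirstKey))  #sorted by last name
print(sorted(L))                      #default string order: by first name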
Sorting in Python

The sorting algorithm used in most Python implementations is called timsort. (Timsort was invented by Tim Peters in 2002 because he was unhappy with the previous algorithm used in Python.) The key idea is to take advantage of the fact that in a lot of data sets the data is already partially sorted. Timsort's worst-case performance is the same as merge sort's, but on average it performs considerably better.

As mentioned earlier, the Python method list.sort takes a list as its first argument and modifies that list. In contrast, the Python function sorted takes an iterable object (e.g., a list or a dictionary) as its first argument and returns a new sorted list. For example, the code

L = [3, 5, 2]
D = {'a':12, 'c':5, 'b':'dog'}
print sorted(L)
print L
L.sort()
print L
print sorted(D)
D.sort()

will print

[2, 3, 5]
[3, 5, 2]
[2, 3, 5]
['a', 'b', 'c']
Traceback (most recent call last):
  ...
AttributeError: 'dict' object has no attribute 'sort'

Notice that when the sorted function is applied to a dictionary, it returns a sorted list of the keys of the dictionary. In contrast, when the sort method is applied to a dictionary, it causes an exception to be raised since there is no method dict.sort.

Both the list.sort method and the sorted function can have two additional parameters. The key parameter plays the same role as compare in our implementation of merge sort: it is used to supply the comparison function to be used. The reverse parameter specifies whether the list is to be sorted in ascending or descending order. For example, the code

L = [[1, 2, 3], (3, 2, 1, 0), 'abc']
print sorted(L, key = len, reverse = True)

sorts the elements of L in reverse order of length and prints

[(3, 2, 1, 0), [1, 2, 3], 'abc']
Both the list.sort method and the sorted function provide stable sorts. This means that if two elements are equal with respect to the comparison used in the sort, their relative ordering in the original list (or other iterable object) is preserved in the final list.

Hash Tables

If we put merge sort together with binary search, we have a nice way to search lists. We use merge sort to preprocess the list in O(n*log(n)) time, and then we use binary search to test whether elements are in the list in O(log(n)) time. If we search the list k times, the overall time complexity is O(n*log(n) + k*log(n)).

This is good, but we can still ask: is logarithmic the best that we can do for search when we are willing to do some preprocessing?

When we introduced the type dict earlier, we said that dictionaries use a technique called hashing to do the lookup in time that is nearly independent of the size of the dictionary. The basic idea behind a hash table is simple. We convert the key to an integer, and then use that integer to index into a list, which can be done in constant time. In principle, values of any immutable type can be easily converted to an integer. After all, we know that the internal representation of each object is a sequence of bits, and any sequence of bits can be viewed as representing an integer. For example, the internal representation of 'abc' is the string of bits 011000010110001001100011, which can be viewed as a representation of the decimal integer 6,382,179. Of course, if we want to use the internal representation of strings as indices into a list, the list is going to have to be pretty darn long.

What about situations where the keys are already integers? Imagine, for the moment, that we are implementing a dictionary all of whose keys are U.S. Social Security numbers. (A United States Social Security number is a nine-digit integer.) If we represented the dictionary by a list with 10**9 elements and used Social Security numbers to index into the list, we could do lookups in constant time. Of course, if the dictionary contained entries for only ten thousand (10**4) people, this would waste quite a lot of space.

Which gets us to the subject of hash functions. A hash function maps a large space of inputs (e.g., all natural numbers) to a smaller space of outputs (e.g., the natural numbers below some fixed bound). Hash functions can be used to convert a large space of keys to a smaller space of integer indices.

Since the space of possible outputs is smaller than the space of possible inputs, a hash function is a many-to-one mapping, i.e., multiple different inputs may be mapped to the same output. When two inputs are mapped to the same output, it is called a collision, a topic to which we will return shortly. A good hash function produces a uniform distribution, i.e., every output in the range is equally probable, which minimizes the probability of collisions.
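A tiny illustration (my example, not the book's) of a hash function of the kind used below: it maps integer keys into a small number of buckets with the % operator, and distinct keys can land in the same bucket, which is exactly a collision.

numBuckets = 5
for key in [7, 12, 23, 40, 98]:
    print(str(key) + ' hashes to bucket ' + str(key%numBuckets))
#7 and 12 collide in bucket 2, and 23 and 98 collide in bucket 3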
Designing good hash functions is surprisingly challenging. The problem is that one wants the outputs to be uniformly distributed given the expected distribution of inputs. Suppose, for example, that one hashed surnames by performing some calculation on the first three letters. In the Netherlands, where a substantial fraction of surnames begin with "van" and many others with "de," the distribution would be far from uniform.

The figure below uses a simple hash function (recall that i%j returns the remainder when the integer i is divided by the integer j) to implement a dictionary with integers as keys.

The basic idea is to represent an instance of class intDict by a list of hash buckets, where each bucket is a list of key/value pairs. By making each bucket a list, we handle collisions by storing all of the values that hash to the same bucket in the list.

The hash table works as follows: The instance variable buckets is initialized to a list of numBuckets empty lists. To store or look up an entry with key dictKey, we use the hash function % to convert dictKey into an integer, and use that integer to index into buckets to find the hash bucket associated with dictKey. We then search that bucket (which is a list) linearly to see if there is an entry with the key dictKey. If we are doing a lookup and there is an entry with the key, we simply return the value stored with that key. If there is no entry with that key, we return None. If a value is to be stored, then we either replace the value in the existing entry, if one was found, or append a new entry to the bucket if none was found.

There are many other ways to handle collisions, some considerably more efficient than using lists. But this is probably the simplest mechanism, and it works fine if the hash table is big enough and the hash function provides a good enough approximation to a uniform distribution.

Notice that the __str__ method produces a representation of a dictionary that is unrelated to the order in which elements were added to it, but is instead ordered by the values to which the keys happen to hash. This explains why we can't predict the order of the keys in an object of type dict.
class intDict(object):
    """A dictionary with integer keys"""

    def __init__(self, numBuckets):
        """Create an empty dictionary"""
        self.buckets = []
        self.numBuckets = numBuckets
        for i in range(numBuckets):
            self.buckets.append([])

    def addEntry(self, dictKey, dictVal):
        """Assumes dictKey an int.  Adds an entry."""
        hashBucket = self.buckets[dictKey%self.numBuckets]
        for i in range(len(hashBucket)):
            if hashBucket[i][0] == dictKey:
                hashBucket[i] = (dictKey, dictVal)
                return
        hashBucket.append((dictKey, dictVal))

    def getValue(self, dictKey):
        """Assumes dictKey an int.
           Returns entry associated with the key dictKey"""
        hashBucket = self.buckets[dictKey%self.numBuckets]
        for e in hashBucket:
            if e[0] == dictKey:
                return e[1]
        return None

    def __str__(self):
        result = '{'
        for b in self.buckets:
            for e in b:
                result = result + str(e[0]) + ':' + str(e[1]) + ','
        return result[:-1] + '}' #result[:-1] omits the last comma

Figure: Implementing dictionaries using hashing

The following code first constructs an intDict with twenty entries. The values of the entries are the integers 0 to 19. The keys are chosen at random from integers in the range 0 to 10**5. (We discuss the random module in the next chapter.) The code then goes on to print the intDict using the __str__ method defined in the class. Finally it prints the individual hash buckets by iterating over D.buckets. (This is a terrible violation of information hiding, but pedagogically useful.)

import random #a standard library module

D = intDict(29)
for i in range(20):
    #choose a random int between 0 and 10**5
    key = random.randint(0, 10**5)
    D.addEntry(key, i)
print 'The value of the intDict is:'
print D
print '\n', 'The buckets are:'
for hashBucket in D.buckets: #violates abstraction barrier
    print '  ', hashBucket
When we ran this code it printed the value of the intDict, i.e., the twenty key:value pairs in an order determined by the values to which the keys happened to hash, followed by the contents of each of the hash buckets. (The exact output is not reproduced here; since the integers were chosen at random, you will probably get different results if you run it.) When we violate the abstraction barrier and peek at the representation of the intDict, we see that many of the hash buckets are empty. Others contain one, two, or three tuples, depending upon the number of collisions that occurred.

What is the complexity of getValue? If there were no collisions it would be O(1), because each hash bucket would be of length 0 or 1. But, of course, there might be collisions. If everything hashed to the same bucket, it would be O(n) where n is the number of entries in the dictionary, because the code would perform a linear search on that hash bucket. By making the hash table large enough, we can reduce the number of collisions sufficiently to allow us to treat the complexity as O(1). That is, we can trade space for time. But what is the tradeoff? To answer this question, one needs to know a tiny bit of probability, so we defer the answer to a later chapter.
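A sketch (my code, assuming class intDict is defined as above) of the space-for-time tradeoff: as the number of buckets grows, the longest bucket shrinks, so the linear search inside getValue does less work.

import random

for numBuckets in (10, 100, 1000):
    d = intDict(numBuckets)
    for i in range(1000):
        d.addEntry(random.randint(0, 10**6), i)
    longest = max(len(b) for b in d.buckets) #violates abstraction barrier
    print(str(numBuckets) + ' buckets, longest bucket: ' + str(longest))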
PLOTTING AND MORE ABOUT CLASSES

Often text is the best way to communicate information, but sometimes there is a lot of truth to the Chinese proverb "A picture's meaning can express ten thousand words." Yet most programs rely on textual output to communicate with their users. Why? Because in many programming languages presenting visual data is too hard. Fortunately, it is simple to do in Python.

Plotting Using PyLab

PyLab is a Python standard library module that provides many of the facilities of MATLAB, "a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation." Later in the book, we will look at some of the more advanced features of PyLab, but in this chapter we focus on some of its facilities for plotting data. A complete user's guide to PyLab is at the Web site matplotlib.sourceforge.net/users/index.html. There are also a number of Web sites that provide excellent tutorials. We will not try to provide a user's guide or a complete tutorial here. Instead, in this chapter we will merely provide a few example plots and explain the code that generated them. Other examples appear in later chapters.

Let's start with a simple example that uses pylab.plot to produce two plots. Executing

import pylab
pylab.figure(1)                  #create figure 1
pylab.plot([1,2,3,4], [1,7,3,5]) #draw on figure 1
pylab.show()                     #show figure on screen

will cause a window to appear on your computer monitor. Its exact appearance may depend on the operating system on your machine, but it will look similar to the following.
The bar at the top contains the name of the window, in this case "Figure 1."

The middle section of the window contains the plot generated by the invocation of pylab.plot. The two parameters of pylab.plot must be sequences of the same length. The first specifies the x-coordinates of the points to be plotted, and the second specifies the y-coordinates. Together, they provide a sequence of four coordinate pairs, [(1,1), (2,7), (3,3), (4,5)]. These are plotted in order. As each point is plotted, a line is drawn connecting it to the previous point.

The final line of code, pylab.show(), causes the window to appear on the computer screen. (In some operating systems, pylab.show() causes the process running Python to be suspended until the figure is closed by clicking on the round red button at the upper left-hand corner of the window. This is unfortunate. The usual workaround is to ensure that pylab.show() is the last line of code to be executed.) If that line were not present, the figure would still have been produced, but it would not have been displayed. This is not as silly as it at first sounds, since one might well choose to write a figure directly to a file, as we will do later, rather than display it on the screen.

The bar at the bottom of the window contains a number of push buttons. The rightmost button is used to write the plot to a file. (For those of you too young to know, the icon represents a "floppy disk." Floppy disks were first introduced by IBM in 1971. They were 8 inches in diameter and held all of 80,000 bytes. Unlike later floppy disks, they actually were floppy. The original IBM PC had a single 160-kilobyte 5.25-inch floppy disk drive. For most of the 1970s and 1980s, floppy disks were the primary storage device for personal computers. The transition to rigid enclosures, as represented in the icon that launched this digression, started in the mid-1980s with the Macintosh, which didn't stop people from continuing to call them floppy disks.) The next button to the left is used to adjust the appearance of the plot in the window. The next four buttons are used for panning and zooming. And the button on the left is used to restore the figure to its original appearance after you are done playing with pan and zoom.

It is possible to produce multiple figures and to write them to files. These files can have any name you like, but they will all have the file extension .png. The file extension .png indicates that the file is in the Portable Networks Graphics format. This is a public domain standard for representing images.
The code

pylab.figure(1)                  #create figure 1
pylab.plot([1,2,3,4], [1,2,3,4]) #draw on figure 1
pylab.figure(2)                  #create figure 2
pylab.plot([1,4,2,3], [5,6,7,8]) #draw on figure 2
pylab.savefig('Figure-Addie')    #save figure 2
pylab.figure(1)                  #go back to working on figure 1
pylab.plot([5,6,10,3])           #draw again on figure 1
pylab.savefig('Figure-Jane')     #save figure 1

produces and saves to files named Figure-Jane.png and Figure-Addie.png the two plots below.

Contents of Figure-Jane.png / Contents of Figure-Addie.png

Observe that the last call to pylab.plot is passed only one argument. This argument supplies the y values. The corresponding x values default to range(len([5,6,10,3])), which is why they range from 0 to 3 in this case.

PyLab has a notion of "current figure." Executing pylab.figure(x) sets the current figure to the figure numbered x. Subsequently executed calls of plotting functions implicitly refer to that figure until another invocation of pylab.figure occurs. This explains why the figure written to the file Figure-Addie.png was the second figure created.

Let's look at another example. The code

principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
    values.append(principal)
    principal += principal*interestRate
pylab.plot(values)

produces the plot on the left below.
If we look at the code, we can deduce that this is a plot showing the growth of an initial investment of $10,000 at an annually compounded interest rate of 5%. However, this cannot be easily inferred by looking only at the plot itself. That's a bad thing. All plots should have informative titles, and all axes should be labeled. If we add to the end of our code the lines

pylab.title('5% Growth, Compounded Annually')
pylab.xlabel('Years of Compounding')
pylab.ylabel('Value of Principal ($)')

we get the plot above and on the right.

For every plotted curve, there is an optional argument that is a format string indicating the color and line type of the plot. (In order to keep the price down, we chose to publish this book in black and white. That posed a dilemma: should we discuss how to use color in plots or not? We concluded that color is too important to ignore. If you want to see what the plots look like in color, run the code.) The letters and symbols of the format string are derived from those used in MATLAB, and are composed of a color indicator followed by a line-style indicator. The default format string is 'b-', which produces a solid blue line. To plot the above with red circles, one would replace the call pylab.plot(values) by pylab.plot(values, 'ro'), which produces the plot on the right. For a complete list of color and line-style indicators, see the pylab.plot documentation on the matplotlib Web site.
It's also possible to change the type size and line width used in plots. This can be done using keyword arguments in individual calls to functions, e.g., the code

principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
    values.append(principal)
    principal += principal*interestRate
pylab.plot(values, linewidth = 30)
pylab.title('5% Growth, Compounded Annually',
            fontsize = 'xx-large')
pylab.xlabel('Years of Compounding', fontsize = 'x-small')
pylab.ylabel('Value of Principal ($)')

produces an intentionally bizarre-looking plot.

It is also possible to change the default values, which are known as "rc settings." (The name "rc" is derived from the .rc file extension used for runtime configuration files in Unix.) These values are stored in a dictionary-like variable that can be accessed via the name pylab.rcParams. So, for example, you can set the default line width to 6 points by executing the code

pylab.rcParams['lines.linewidth'] = 6

(The point is a measure used in typography. It is equal to 1/72 of an inch, which is about 0.35 mm.)
The default values used in most of the examples in this book were set with the code

#set line width
pylab.rcParams['lines.linewidth'] = 4
#set font size for titles
pylab.rcParams['axes.titlesize'] = 20
#set font size for labels on axes
pylab.rcParams['axes.labelsize'] = 20
#set size of numbers on x-axis
pylab.rcParams['xtick.labelsize'] = 16
#set size of numbers on y-axis
pylab.rcParams['ytick.labelsize'] = 16
#set size of ticks on x-axis
pylab.rcParams['xtick.major.size'] = 7
#set size of ticks on y-axis
pylab.rcParams['ytick.major.size'] = 7
#set size of markers
pylab.rcParams['lines.markersize'] = 10

If you are viewing plots on a color display, you will have little reason to customize these settings. We customized the settings we used so that it would be easier to read the plots when we shrank them and converted them to black and white. For a complete discussion of how to customize settings, see the customization page of the matplotlib Web site.

Plotting Mortgages, an Extended Example

Earlier we worked our way through a hierarchy of mortgages as a way of illustrating the use of subclassing. We concluded that discussion by observing that "our program should be producing plots designed to show how the mortgage behaves over time." The figure below enhances class Mortgage by adding methods that make it convenient to produce such plots. (The function findPayment, which is used in Mortgage, was defined in an earlier chapter.)

The methods plotPayments and plotBalance are simple one-liners, but they do use a form of pylab.plot that we have not yet seen. When a figure contains multiple plots, it is useful to produce a key that identifies what each plot is intended to represent. In the figure below, each invocation of pylab.plot uses the label keyword argument to associate a string with the plot produced by that invocation. (This and other keyword arguments must follow any format strings.) A key can then be added to the figure by calling the function pylab.legend, as shown later.

The nontrivial methods in class Mortgage are plotTotPd and plotNet. The method plotTotPd simply plots the cumulative total of the payments made. The method plotNet plots an approximation to the total cost of the mortgage over time by plotting the cash expended minus the equity acquired by paying off part of the loan. It is an approximation because it does not perform a net present value calculation to take into account the time value of cash.
class Mortgage(object):
    """Abstract class for building different kinds of mortgages"""
    def __init__(self, loan, annRate, months):
        """Create a new mortgage"""
        self.loan = loan
        self.rate = annRate/12.0
        self.months = months
        self.paid = [0.0]
        self.owed = [loan]
        self.payment = findPayment(loan, self.rate, months)
        self.legend = None #description of mortgage

    def makePayment(self):
        """Make a payment"""
        self.paid.append(self.payment)
        reduction = self.payment - self.owed[-1]*self.rate
        self.owed.append(self.owed[-1] - reduction)

    def getTotalPaid(self):
        """Return the total amount paid so far"""
        return sum(self.paid)

    def __str__(self):
        return self.legend

    def plotPayments(self, style):
        pylab.plot(self.paid[1:], style, label = self.legend)

    def plotBalance(self, style):
        pylab.plot(self.owed, style, label = self.legend)

    def plotTotPd(self, style):
        """Plot the cumulative total of the payments made"""
        totPd = [self.paid[0]]
        for i in range(1, len(self.paid)):
            totPd.append(totPd[-1] + self.paid[i])
        pylab.plot(totPd, style, label = self.legend)

    def plotNet(self, style):
        """Plot an approximation to the total cost of the mortgage
           over time by plotting the cash expended minus the equity
           acquired by paying off part of the loan"""
        totPd = [self.paid[0]]
        for i in range(1, len(self.paid)):
            totPd.append(totPd[-1] + self.paid[i])
        #Equity acquired through payments is amount of original loan
        #  paid to date, which is amount of loan minus what is still owed
        equityAcquired = pylab.array([self.loan]*len(self.owed))
        equityAcquired = equityAcquired - pylab.array(self.owed)
        net = pylab.array(totPd) - equityAcquired
        pylab.plot(net, style, label = self.legend)

Figure: Class Mortgage with plotting methods
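The class above relies on findPayment, which is defined in an earlier chapter and not reproduced on this page. For reference, a minimal sketch of what such a function computes, based on the standard fixed-payment (annuity) formula; this is my reconstruction, not necessarily the book's exact code:

def findPayment(loan, r, m):
    """Assumes: loan and r are floats, m an int.
       Returns the monthly payment for a mortgage of size
       loan at a monthly rate of r for m months"""
    return loan*((r*(1 + r)**m)/((1 + r)**m - 1))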
The expression pylab.array(self.owed) in plotNet performs a type conversion. Thus far, we have been calling the plotting functions of PyLab with arguments of type list. Under the covers, PyLab has been converting these lists to a different type, array, which PyLab inherits from numpy. (numpy is a Python module that provides tools for scientific computing. In addition to providing multi-dimensional arrays it provides a variety of linear algebra tools.) The invocation pylab.array makes this explicit.

There are a number of convenient ways to manipulate arrays that are not readily available for lists. In particular, expressions can be formed using arrays and arithmetic operators. Consider, for example, the code

a1 = pylab.array([1, 2, 4])
print 'a1 =', a1
a2 = a1*2
print 'a2 =', a2
print 'a1 + 3 =', a1 + 3
print '3 - a1 =', 3 - a1
print 'a1 - a2 =', a1 - a2
print 'a1*a2 =', a1*a2

The expression a1*2 multiplies each element of a1 by the constant 2. The expression a1 + 3 adds the integer 3 to each element of a1. The expression 3 - a1 subtracts each element of a1 from the constant 3. The expression a1 - a2 subtracts each element of a2 from the corresponding element of a1 (if the arrays had been of different length, an error would have occurred). The expression a1*a2 multiplies each element of a1 by the corresponding element of a2. When the above code is run it prints

a1 = [1 2 4]
a2 = [2 4 8]
a1 + 3 = [4 5 7]
3 - a1 = [ 2  1 -1]
a1 - a2 = [-1 -2 -4]
a1*a2 = [ 2  8 32]

There are a number of ways to create arrays in PyLab, but the most common way is to first create a list, and then convert it.

The figure below repeats the three subclasses of Mortgage from the earlier chapter. Each has a distinct __init__ that overrides the __init__ in Mortgage. The subclass TwoRate also overrides the makePayment method of Mortgage.
class Fixed(Mortgage):
    def __init__(self, loan, r, months):
        Mortgage.__init__(self, loan, r, months)
        self.legend = 'Fixed, ' + str(r*100) + '%'

class FixedWithPts(Mortgage):
    def __init__(self, loan, r, months, pts):
        Mortgage.__init__(self, loan, r, months)
        self.pts = pts
        self.paid = [loan*(pts/100.0)]
        self.legend = 'Fixed, ' + str(r*100) + '%, '\
                      + str(pts) + ' points'

class TwoRate(Mortgage):
    def __init__(self, loan, r, months, teaserRate, teaserMonths):
        Mortgage.__init__(self, loan, teaserRate, months)
        self.teaserMonths = teaserMonths
        self.teaserRate = teaserRate
        self.nextRate = r/12.0
        self.legend = str(teaserRate*100)\
                      + '% for ' + str(self.teaserMonths)\
                      + ' months, then ' + str(r*100) + '%'

    def makePayment(self):
        if len(self.paid) == self.teaserMonths + 1:
            self.rate = self.nextRate
            self.payment = findPayment(self.owed[-1], self.rate,
                                       self.months - self.teaserMonths)
        Mortgage.makePayment(self)

Figure: Subclasses of Mortgage

The figures that follow contain functions that can be used to generate plots intended to provide insight about the different kinds of mortgages.

The function plotMortgages generates appropriate titles and axis labels for each plot, and then uses the plotting methods in Mortgage to produce the actual plots. It uses calls to pylab.figure to ensure that the appropriate plots appear in a given figure. It uses the index i to select elements from the lists morts and styles in a way that ensures that different kinds of mortgages are represented in a consistent way across figures. For example, since the third element in morts is a variable-rate mortgage and the third element in styles is 'b:', the variable-rate mortgage is always plotted using a blue dotted line.

The function compareMortgages generates a list of different mortgages, and simulates making a series of payments on each, as it did in the earlier chapter. It then calls plotMortgages to produce the plots.
def plotMortgages(morts, amt):
    styles = ['b-', 'b-.', 'b:']
    #Give names to figure numbers
    payments = 0
    cost = 1
    balance = 2
    netCost = 3
    pylab.figure(payments)
    pylab.title('Monthly Payments of Different $' + str(amt)
                + ' Mortgages')
    pylab.xlabel('Months')
    pylab.ylabel('Monthly Payments')
    pylab.figure(cost)
    pylab.title('Cash Outlay of Different $' + str(amt) + ' Mortgages')
    pylab.xlabel('Months')
    pylab.ylabel('Total Payments')
    pylab.figure(balance)
    pylab.title('Balance Remaining of $' + str(amt) + ' Mortgages')
    pylab.xlabel('Months')
    pylab.ylabel('Remaining Loan Balance of $')
    pylab.figure(netCost)
    pylab.title('Net Cost of $' + str(amt) + ' Mortgages')
    pylab.xlabel('Months')
    pylab.ylabel('Payments - Equity $')
    for i in range(len(morts)):
        pylab.figure(payments)
        morts[i].plotPayments(styles[i])
        pylab.figure(cost)
        morts[i].plotTotPd(styles[i])
        pylab.figure(balance)
        morts[i].plotBalance(styles[i])
        pylab.figure(netCost)
        morts[i].plotNet(styles[i])
    pylab.figure(payments)
    pylab.legend(loc = 'upper center')
    pylab.figure(cost)
    pylab.legend(loc = 'best')
    pylab.figure(balance)
    pylab.legend(loc = 'best')

def compareMortgages(amt, years, fixedRate, pts, ptsRate,
                     varRate1, varRate2, varMonths):
    totMonths = years*12
    fixed1 = Fixed(amt, fixedRate, totMonths)
    fixed2 = FixedWithPts(amt, ptsRate, totMonths, pts)
    twoRate = TwoRate(amt, varRate2, totMonths, varRate1, varMonths)
    morts = [fixed1, fixed2, twoRate]
    for m in range(totMonths):
        for mort in morts:
            mort.makePayment()
    plotMortgages(morts, amt)

Figure: Generate mortgage plots

The call

compareMortgages(amt=200000, years=30, fixedRate=0.07,
                 pts=3.25, ptsRate=0.05, varRate1=0.045,
                 varRate2=0.095, varMonths=48)
produces plots that shed some light on the mortgages discussed earlier.

The first plot, which was produced by invocations of plotPayments, simply plots each payment of each mortgage against time. The box containing the key appears where it does because of the value supplied to the keyword argument loc used in the call to pylab.legend. When loc is bound to 'best', the location is chosen automatically. This plot makes it clear how the monthly payments vary (or don't) over time, but doesn't shed much light on the relative costs of each kind of mortgage.

The next plot was produced by invocations of plotTotPd. It sheds some light on the cost of each kind of mortgage by plotting the cumulative costs that have been incurred at the start of each month. The entire plot is on the left, and an enlargement of the left part of the plot is on the right.

The next two plots show the remaining debt (on the left) and the total net cost of having the mortgage (on the right).
STOCHASTIC PROGRAMS, PROBABILITY, AND STATISTICS

There is something very comforting about Newtonian mechanics. You push down on one end of a lever, and the other end goes up. You throw a ball up in the air; it travels a parabolic path, and comes down. In short, everything happens for a reason. The physical world is a completely predictable place--all future states of a physical system can be derived from knowledge about its current state.

For centuries, this was the prevailing scientific wisdom; then along came quantum mechanics and the Copenhagen Doctrine. The doctrine's proponents, led by Bohr and Heisenberg, argued that at its most fundamental level the behavior of the physical world cannot be predicted. One can make probabilistic statements of the form "x is highly likely to occur," but not statements of the form "x is certain to occur." Other distinguished physicists, most notably Einstein and Schrödinger, vehemently disagreed.

This debate roiled the worlds of physics, philosophy, and even religion. The heart of the debate was the validity of causal nondeterminism, i.e., the belief that not every event is caused by previous events. Einstein and Schrödinger found this view philosophically unacceptable, as exemplified by Einstein's often-repeated comment, "God does not play dice." What they could accept was predictive nondeterminism, i.e., the concept that our inability to make accurate measurements about the physical world makes it impossible to make precise predictions about future states. This distinction was nicely summed up by Einstein, who said, "The essentially statistical character of contemporary theory is solely to be ascribed to the fact that this theory operates with an incomplete description of physical systems."

The question of causal nondeterminism is still unsettled. However, whether the reason we cannot predict events is because they are truly unpredictable or is because we don't have enough information to predict them is of no practical importance.

While the Bohr/Einstein debate was about how to understand the lowest levels of the physical world, the same issues arise at the macroscopic level. Perhaps the outcomes of horse races, spins of roulette wheels, and stock market investments are causally deterministic. However, there is ample evidence that it is perilous to treat them as predictably deterministic. (Of course this doesn't stop people from believing that they are, and losing a lot of money based on that belief.)

This book is about using computation to solve problems. Thus far, we have focused our attention on problems that can be solved by a predictably deterministic computation. Such computations are highly useful, but clearly not sufficient to tackle some kinds of problems. Many aspects of the world in which we live can be accurately modeled only as stochastic processes.
4,132 | which we live can be accurately modeled only as stochastic processes process is stochastic if its next state depends upon both previous states and some random element stochastic programs program is deterministic if whenever it is run on the same inputit produces the same output notice that this is not the same as saying that the output is completely defined by the specification of the problem considerfor examplethe specification of squarerootdef squareroot(xepsilon)"""assumes and epsilon are of type floatx > and epsilon returns float such that -epsilon < * < +epsilon""this specification admits many possible return values for the function call squareroot( howeverthe successive approximation algorithm we looked at in will always return the same value the specification doesn' require that the implementation be deterministicbut it does allow deterministic implementations not all interesting specifications can be met by deterministic implementations considerfor exampleimplementing program to play dice gamesay backgammon or craps somewhere in the program there may be function that simulates fair roll of single six-sided die suppose it had specification something like def rolldie()"""returns an int between and ""this would be problematicsince it allows the implementation to return the same number each time it is calledwhich would make for pretty boring game it would be better to specify that rolldie "returns randomly chosen int between and most programming languagesincluding pythoninclude simple ways to write programs that use randomness the code in figure uses one of several useful functions found in the imported python standard library module random the function random choice takes non-empty sequence as its argument and returns randomly chosen member of that sequence almost all of the functions in random are built using the function random randomwhich generates random floating point number between and the word stems from the greek word stokhastikoswhich means something like "capable of divining stochastic programas we shall seeis aimed at getting good resultbut the exact results are not guaranteed roll is fair if each of the six possible outcomes is equally likely in point of factthe function is not truly random it is what mathematicians call pseudorandom for almost all practical purposes outside of cryptographythis distinction is not relevant and we shall ignore it |
4,133 | import random

def rolldie():
    """returns a random int between 1 and 6"""
    return random.choice([1, 2, 3, 4, 5, 6])

def rolln(n):
    result = ''
    for i in range(n):
        result = result + str(rolldie())
    print result

Figure: Roll a die

Now, imagine running rolln(10). Would you be more surprised to see it print 1111111111 or 5442462412? Or, to put it another way, which of these two sequences is more random? It's a trick question. Each of these sequences is equally likely, because the value of each roll is independent of the values of earlier rolls. In a stochastic process two events are independent if the outcome of one event has no influence on the outcome of the other.

This is a bit easier to see if we simplify the situation by thinking about a two-sided die (also known as a coin) with the values 0 and 1. This allows us to think of the output of a call of rolln as a binary number. When we use a binary die, there are 2^10 possible sequences that rolln(10) might return. Each of these is equally likely; therefore each has a probability of occurring of (1/2)^10.

Let's go back to our six-sided die. How many different sequences are there of length 10? 6^10. So, the probability of rolling ten consecutive 1's is 1/6^10, less than one out of sixty million. Pretty low, but no lower than the probability of any other particular sequence, e.g., 5442462412, of ten rolls.

In general, when we talk about the probability of a result having some property (e.g., all 1's) we are asking what fraction of all possible results has that property. This is why probabilities range from 0 to 1. Suppose we want to know the probability of getting any sequence other than all 1's when rolling the die ten times; it is simply 1 - (1/6)^10, because the probability of something happening and the probability of the same thing not happening must add up to 1.

Suppose we want to know the probability of rolling the die ten times without getting a single 1. One way to answer this question is to transform it into the question of how many of the 6^10 possible sequences don't contain a 1.
4,134 | this can be computed as followsthe probability of not rolling on any single roll is / the probability of not rolling on either the first or the second roll is ( / )*( / )or ( / ) sothe probability of not rolling ten times in row is ( / ) slightly more than we will return to the subject of probability in bit more detail later inferential statistics and simulation the tiny program in figure is simulation model rather than asking some person to roll die multiple timeswe wrote program to simulate that activity we often use simulations to estimate the value of an unknown quantity by making use of the principles of inferential statistics in brief (since this is not book about statistics)the guiding principle of inferential statistics is that random sample tends to exhibit the same properties as the population from which it is drawn suppose harvey dent (also known as two-faceflipped coinand it came up heads you would not infer from this that the next flip would also come up heads suppose he flipped it twiceand it came up heads both time you might reason that the probability of this happening for fair coin ( coin where heads and tails are equally likelywas so there was still no reason to assume the next flip would be heads supposehowever out of flips came up heads / is pretty small numberso you might feel safe in inferring that the coin has head on both sides your belief in whether the coin is fair is based on the intuition that the behavior of sample of flips is similar to the behavior of the population of all flips of your coin this belief seems pretty sound when all flips are heads supposethat flips came up heads and tails would you feel comfortable in predicting that the next flips would have the same ratio of heads to tailsfor that matterhow comfortable would you feel about even predicting that there would be more heads than tails in the next flipstake few minutes to think about thisand then try the experiment using the code in figure the function flip in figure simulates flipping fair coin numflips timesand returns the fraction of flips that came up heads for each fliprandom random(returns random floating point number between and numbers less than or greater than are treated as heads or tails respectively the value is arbitrarily assigned the value tails given the vast number of floating point values between and it is highly unlikely that this will affect the result |
4,135 | stochastic programsprobabilityand statistics def flip(numflips)heads for in range(numflips)if random random( heads + return heads/numflips def flipsim(numflipspertrialnumtrials)fracheads [for in range(numtrials)fracheads append(flip(numflipspertrial)mean sum(fracheads)/len(fracheadsreturn mean figure flipping coin try executing the function flipsim( couple of times here' what we saw the first two times we tried itflipsim( flipsim( it seems that it would be inappropriate to assume much (other than that the coin has both heads and tailsfrom any one trial of flips that' why we typically structure our simulations to include multiple trials and compare the results let' try flipsim( )flipsim( flipsim( intuitivelywe can feel better about these results how about flipsim( )flipsim( flipsim( this looks really good (especially since we know that the answer should be but that' cheatingnow it seems we can safely conclude something about the next flipi that heads and tails are about equally likely but why do we think that we can conclude thatwhat we are depending upon is the law of large numbers (also known as bernoulli' theorem this law states that in repeated independent experiments ( flipping fair coin times and counting the fraction of headswith the same expected value ( in this case)the average value of the though the law of large numbers had been discussed in the th century by cardanothe first proof was published by jacob bernoulli in the early th century it is unrelated to the theorem about fluid dynamics called bernoulli' theoremwhich was proved by jacob' nephew daniel |
4,136 | experiments approaches the expected value as the number of experiments goes to infinity it is worth noting that the law of large numbers does not implyas too many seem to thinkthat if deviations from expected behavior occurthese deviations are likely to be evened out by opposite deviations in the future this misapplication of the law of large numbers is known as the gambler' fallacy note that "largeis relative concept for exampleif we were to flip fair coin on the order of , , timeswe should expect to encounter several sequences of at least million consecutive heads if we looked only at the subset of flips containing these headswe would inevitably jump to the wrong conclusion about the fairness of the coin in factif every subsequence of large sequence of events appears to be randomit is highly likely that the sequence itself is not truly random if your itunes shuffle mode doesn' play the same song first once in whileyou can assume that the shuffle is not really random finallynotice that in the case of coin flips the law of large numbers does not imply that the absolute difference between the number of heads and the number of tails decreases as the number of flips increases in factwe can expect that number to increase what decreases is the ratio of the absolute difference to the number of flips figure contains functionflipplotthat produces some plots intended to show the law of large numbers at work the line random seed( near the bottom ensures that the pseudo-random number generator used by random random will generate the same sequence of pseudorandom numbers each time this code is executed this is convenient for debugging "on august at the casino in monte carloblack came up record twenty-six times in succession [in roulette[therewas near-panicky rush to bet on redbeginning about the time black had come up phenomenal fifteen times in application of the maturity [of the chancesdoctrineplayers doubled and tripled their stakesthis doctrine leading them to believe after black came up the twentieth time that there was not chance in million of another repeat in the end the unusual run enriched the casino by some millions of francs huff and geishow to take chancepp - |
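Since it is easy to gloss over what random.seed buys us, here is a minimal sketch (not one of the book's figures; the seed values 0 and 1 and the helper name firstrolls are arbitrary choices) showing that seeding the generator makes the pseudorandom sequence repeatable:

import random

def firstrolls(seed, numrolls):
    """returns the first numrolls pseudorandom floats generated
       after seeding the generator with seed"""
    random.seed(seed)   # reset the generator to a known state
    return [random.random() for i in range(numrolls)]

print(firstrolls(0, 3) == firstrolls(0, 3))   # True: same seed, same sequence
print(firstrolls(0, 3) == firstrolls(1, 3))   # almost certainly False: different seeds

This repeatability is what the call to random.seed near the bottom of the figure below relies on.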
4,137 | stochastic programsprobabilityand statistics def flipplot(minexpmaxexp)"""assumes minexp and maxexp positive integersminexp maxexp plots results of **minexp to **maxexp coin flips""ratios [diffs [xaxis [for exp in range(minexpmaxexp )xaxis append( **expfor numflips in xaxisnumheads for in range(numflips)if random random( numheads + numtails numflips numheads ratios append(numheads/float(numtails)diffs append(abs(numheads numtails)pylab title('difference between heads and tails'pylab xlabel('number of flips'pylab ylabel('abs(#heads #tails)'pylab plot(xaxisdiffspylab figure(pylab title('heads/tails ratios'pylab xlabel('number of flips'pylab ylabel('#heads/#tails'pylab plot(xaxisratiosrandom seed( flipplot( figure plotting the results of coin flips the call flipplot( produces the two plotsthe plot on the left seems to suggest that the absolute difference between the number of heads and the number of tails fluctuates in the beginningcrashes downwardsand then moves rapidly upwards howeverwe need to keep in mind that we have only two data points to the right of , that pylab plot connected these points with lines may mislead us into seeing trends when all we have are isolated points this is not an uncommon phenomenonso you should always ask how many points plot actually contains before jumping to any conclusion about what it means |
4,138 | it' hard to see much of anything in the plot on the rightwhich is mostly flat line this too is deceptive even though there are sixteen data pointsmost of them are crowded into small amount of real estate on the left side of the plotso that the detail is impossible to see this occurs because values on the -axis range from to , , and unless instructed otherwise pylab will space these points evenly along the axis this is called linear scaling fortunatelythese visualization problems are easy to address in pylab as we saw in we can easily instruct our program to plot unconnected pointse by writing pylab plot(xaxisdiffs'bo'we can also instruct pylab to use logarithmic scale on either or both of the and axes by calling the functions pylab semilogx and pylab semilogy these functions are always applied to the current figure both plots use logarithmic scale on the -axis since the -values generated by flipplot are minexp minexp+ maxexpusing logarithmic -axis causes the points to be evenly spaced along the -axis--providing maximum separation between points the left-hand plot below also uses logarithmic scale on the yaxis the values on this plot range from nearly to nearly if the -axis were linearly scaledit would be difficult to see the relatively small differences in values on the left side of the plot on the other handon the plot on the right the values are fairly tightly groupedso we use linear -axis finger exercisemodify the code in figure so that it produces plots like those shown above these plots are easier to interpret than the earlier plots the plot on the right suggests pretty strongly that the ratio of heads to tails converges to as the number of flips gets large the meaning of the plot on the left is bit less clear it appears that the absolute difference grows with the number of flipsbut it is not completely convincing it is never possible to achieve perfect accuracy through sampling without sampling the entire population no matter how many samples we examinewe can never be sure that the sample set is typical until we examine every element |
4,139 | stochastic programsprobabilityand statistics of the population (and since we are usually dealing with infinite populationse all possible sequences of coin flipsthis is usually impossibleof coursethis is not to say that an estimate cannot be precisely correct we might flip coin twiceget one heads and one tailsand conclude that the true probability of each is we would have reached the right conclusionbut our reasoning would have been faulty how many samples do we need to look at before we can have justified confidence in our answerthis depends on the variance in the underlying distribution roughly speakingvariance is measure of how much spread there is in the possible different outcomes we can formalize this notion relatively simply by using the concept of standard deviation informallythe standard deviation tells us what fraction of the values are close to the mean if many values are relatively close to the meanthe standard deviation is relatively small if many values are relatively far from the meanthe standard deviation is relatively large if all values are the samethe standard deviation is zero more formallythe standard deviations (sigma)of collection of valuesis defined as |!!"#(!where |!is the size of the collection and (muits mean figure contains python implementation of standard deviation we apply the type conversion floatbecause if each of the elements of is an intthe type of the sum will be an int def stddev( )"""assumes that is list of numbers returns the standard deviation of ""mean float(sum( ))/len(xtot for in xtot +( mean)** return (tot/len( ))** #square root of mean difference figure standard deviation we can use the notion of standard deviation to think about the relationship between the number of samples we have looked at and how much confidence we should have in the answer we have computed figure contains modified version of flipplot it runs multiple trials of each number of coin flipsand plots the means for abs(heads tailsand the heads/tails ratio it also plots the standard deviation of each you'll probably never need to implement this yourself statistical libraries implement this and many other standard statistical functions howeverwe present the code here on the off chance that some readers prefer looking at code to looking at equations |
4,140 | the implementation of flipplot uses two helper functions the function makeplot contains the code used to produce the plots the function runtrial simulates one trial of numflips coins def makeplot(xvalsyvalstitlexlabelylabelstylelogx falselogy false)"""plots xvals vs yvals with supplied titles and labels ""pylab figure(pylab title(titlepylab xlabel(xlabelpylab ylabel(ylabelpylab plot(xvalsyvalsstyleif logxpylab semilogx(if logypylab semilogy(def runtrial(numflips)numheads for in range(numflips)if random random( numheads + numtails numflips numheads return (numheadsnumtailsdef flipplot (minexpmaxexpnumtrials)"""assumes minexp and maxexp positive intsminexp maxexp numtrials positive integer plots summaries of results of numtrials trials of **minexp to **maxexp coin flips""ratiosmeansdiffsmeansratiossdsdiffssds [][][][xaxis [for exp in range(minexpmaxexp )xaxis append( **expfor numflips in xaxisratios [diffs [for in range(numtrials)numheadsnumtails runtrial(numflipsratios append(numheads/float(numtails)diffs append(abs(numheads numtails)ratiosmeans append(sum(ratios)/float(numtrials)diffsmeans append(sum(diffs)/float(numtrials)ratiossds append(stddev(ratios)diffssds append(stddev(diffs)numtrialsstring (str(numtrialstrials)title 'mean heads/tails ratiosnumtrialsstring makeplot(xaxisratiosmeanstitle'number of flips''mean heads/tails''bo'logx truetitle 'sd heads/tails ratiosnumtrialsstring makeplot(xaxisratiossdstitle'number of flips''standard deviation''bo'logx truelogy truefigure coin-flipping simulation |
4,141 | stochastic programsprobabilityand statistics let' try flipplot ( it generates the plots this is encouraging the ratio heads/tails is converging towards and the log of the standard deviation is falling linearly with the log of the number of flips per trial by the time we get to about coin flips per trialthe standard deviation (about - is roughly three decimal orders of magnitude smaller than the mean (about )indicating that the variance across the trials was small we canthereforehave considerable confidence that the expected heads/tails ratio is quite close to as we flip more coinsnot only do we have more precise answerbut more importantwe also have reason to be more confident that it is close to the right answer what about the absolute difference between the number of heads and the number of tailswe can take look at that by adding to the end of flipplot the code in figure title 'mean abs(#heads #tails)numtrialsstring makeplot(xaxisdiffsmeanstitle'number of flips''mean abs(#heads #tails)''bo'logx truelogy truetitle 'sd abs(#heads #tails)numtrialsstring makeplot(xaxisdiffssdstitle'number of flips''standard deviation''bo'logx truelogy truefigure absolute differences |
4,142 | this produces the additional plots as expectedthe absolute difference between the numbers of heads and tails grows with the number of flips furthermoresince we are averaging the results over twenty trialsthe plot is considerably smoother than when we plotted the results of single trial but what' up with the last plotthe standard deviation is growing with the number of flips does this mean that as the number of flips increases we should have less rather than more confidence in the estimate of the expected value of the difference between heads and tailsnoit does not the standard deviation should always be viewed in the context of the mean if the mean were billion and the standard deviation we would view the dispersion of the data as small but if the mean were and the standard deviation we would view the dispersion as quite large the coefficient of variation is the standard deviation divided by the mean when comparing data sets with highly variable means (as here)the coefficient of variation is often more informative than the standard deviation as you can see from its implementation in figure the coefficient of variation is not defined when the mean is def cv( )mean sum( )/float(len( )tryreturn stddev( )/mean except zerodivisionerrorreturn float('nan'figure coefficient of variation |
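Before looking at that version, here is a small sketch (using the stddev and cv functions defined above; the data values are invented) of why the coefficient of variation can be more informative than the standard deviation when the means differ widely:

big = [999900.0, 1000000.0, 1000100.0]   # mean about one million
small = [0.0, 100.0, 200.0]              # mean 100, same absolute spread
print(stddev(big))     # about 81.6
print(stddev(small))   # about 81.6 -- the standard deviations are essentially identical
print(cv(big))         # about 0.00008 -- tiny spread relative to the mean
print(cv(small))       # about 0.82   -- large spread relative to the mean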
4,143 | stochastic programsprobabilityand statistics figure contains version of flipplot that plots coefficients of variation def flipplot (minexpmaxexpnumtrials)"""assumes minexp and maxexp positive intsminexp maxexp numtrials positive integer plots summaries of results of numtrials trials of **minexp to **maxexp coin flips""ratiosmeansdiffsmeansratiossdsdiffssds [][][][ratioscvsdiffscvs [][xaxis [for exp in range(minexpmaxexp )xaxis append( **expfor numflips in xaxisratios [diffs [for in range(numtrials)numheadsnumtails runtrial(numflipsratios append(numheads/float(numtails)diffs append(abs(numheads numtails)ratiosmeans append(sum(ratios)/float(numtrials)diffsmeans append(sum(diffs)/float(numtrials)ratiossds append(stddev(ratios)diffssds append(stddev(diffs)ratioscvs append(cv(ratios)diffscvs append(cv(diffs)numtrialsstring (str(numtrialstrials)title 'mean heads/tails ratiosnumtrialsstring makeplot(xaxisratiosmeanstitle'number of flips''mean heads/tails''bo'logx truetitle 'sd heads/tails ratiosnumtrialsstring makeplot(xaxisratiossdstitle'number of flips''standard deviation''bo'logx truelogy truetitle 'mean abs(#heads #tails)numtrialsstring makeplot(xaxisdiffsmeanstitle'number of flips''mean abs(#heads #tails)''bo'logx truelogy truetitle 'sd abs(#heads #tails)numtrialsstring makeplot(xaxisdiffssdstitle'number of flips''standard deviation''bo'logx truelogy truetitle 'coeff of var abs(#heads #tails)numtrialsstring makeplot(xaxisdiffscvstitle'number of flips''coeff of var ''bo'logx truetitle 'coeff of var heads/tails rationumtrialsstring makeplot(xaxisratioscvstitle'number of flips''coeff of var ''bo'logx truelogy truefigure final version of flipplot |
4,144 | stochastic programsprobabilityand statistics it produces the additional plots in this case we see that the plot of coefficient of variation for the heads/tails ratio is not much different from the plot of the standard deviation this is not surprisingsince the only difference between the two is the division by the meanand since the mean is close to that makes little difference on the other handthe plot of the coefficient of variation for the absolute difference between heads and tails is different story it would take brave person to argue that it is trending in any direction it seems to be fluctuating widely this suggests that dispersion in the values of abs(heads tailsis independent of the number of flips it' not growingas the standard deviation might have misled us to believebut it' not shrinking either perhaps trend would appear if we tried trials instead of let' see it looks as if once the number of flips reaches somewhere around the coefficient of variation settles in somewhere in the neighborhood of in generaldistributions with coefficient of variation of less than are considered low-variance beware that if the mean is near zerosmall changes in the mean lead to large (but not necessarily meaningfulchanges in the coefficient of variationand when the mean is zerothe coefficient of variation is undefined alsoas we shall see shortlythe standard deviation can be used to construct confidence intervalbut the coefficient of variation cannot |
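The warning above about means near zero is easy to see with the cv function from the earlier figure. In this sketch (the data is invented) the two lists differ in a single element, yet their coefficients of variation differ by more than an order of magnitude, and a mean of exactly zero yields nan:

a = [-10.0, 10.0, 10.5]    # mean about 3.5
b = [-10.0, 10.0, 0.5]     # nearly the same data, but the mean is now about 0.17
print(cv(a))               # roughly 2.7
print(cv(b))               # roughly 49 -- we are dividing by a mean close to zero
print(cv([-10.0, 10.0]))   # nan -- the mean is exactly zero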
4,145 | stochastic programsprobabilityand statistics distributions histogram is plot designed to show the distribution of values in set of data the values are first sortedand then divided into fixed number of equalwidth bins plot is then drawn that shows the number of elements in each bin considerfor examplethe code vals [ #guarantee that values will range from to for in range( )num random choice(range( )num random choice(range( )vals append(num +num pylab hist(valsbins the function call pylab hist(valsbins produces the histogramwith ten binson the left pylab has automatically chosen the width of each bin looking at the codewe know that the smallest number in vals will be and the largest number thereforethe possible values on the -axis range from to each bin represents an equal fraction of the values on the xaxisso the first bin will contain the elements - the next bin the elements - etc since the mean values chosen for num and num will be in the vicinity of it is not surprising that there are more elements in the middle bins than in the bins near the edges by now you must be getting awfully bored with flipping coins neverthelesswe are going to ask you to look at yet one more coin-flipping simulation the simulation in figure illustrates more of pylab' plotting capabilities and gives us an opportunity to get visual notion of what standard deviation means the simulation uses the function pylab xlim to control the extent of the -axis the function call pylab xlim(returns tuple composed of the minimal and maximal values of the -axis of the current figure the function call pylab xlim(xminxmaxsets the minimal and maximal values of the -axis of the current figure the function pylab ylim works the same way |
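Before looking at that simulation, here is a minimal, self-contained sketch (not one of the book's figures) of how pylab.hist, pylab.xlim, and pylab.ylim fit together; the data, the bin count, and the axis bounds are arbitrary illustrative choices:

import random, pylab

vals = []
for i in range(10000):
    # sum of two die-like choices, so middle values are most common
    vals.append(random.choice(range(1, 7)) + random.choice(range(1, 7)))

pylab.hist(vals, bins = 11)    # one bin per possible sum, 2 through 12
xmin, xmax = pylab.xlim()      # read back the bounds pylab chose
print('x-axis runs from ' + str(xmin) + ' to ' + str(xmax))
pylab.xlim(0, 14)              # then widen the x-axis
pylab.ylim(0, 3000)            # and fix the y-axis
pylab.title('Sum of Two Dice')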
4,146 | def flip(numflips)heads for in range(numflips)if random random( heads + return heads/numflips def flipsim(numflipspertrialnumtrials)fracheads [for in range(numtrials)fracheads append(flip(numflipspertrial)mean sum(fracheads)/len(fracheadssd stddev(fracheadsreturn (fracheadsmeansddef labelplot(numflipsnumtrialsmeansd)pylab title(str(numtrialstrials of str(numflipsflips each'pylab xlabel('fraction of heads'pylab ylabel('number of trials'xminxmax pylab xlim(yminymax pylab ylim(pylab text(xmin (xmax-xmin)* (ymax-ymin)/ 'mean str(round(mean )'\nsd str(round(sd ))size=' -large'def makeplots(numflips numflips numtrials)val mean sd flipsim(numflips numtrialspylab hist(val bins xmin,xmax pylab xlim(ymin,ymax pylab ylim(labelplot(numflips numtrialsmean sd pylab figure(val mean sd flipsim(numflips numtrialspylab hist(val bins pylab xlim(xminxmaxlabelplot(numflips numtrialsmean sd random seed( makeplots( , , figure plot histograms demonstrating normal distributions when the code in figure is runit produces the plots |
4,147 | Notice that while the means in both plots are about the same, the standard deviations are quite different. The spread of outcomes is much tighter when there are more flips per trial than when there are fewer. To make this clear, we have used pylab.xlim to force the bounds of the x-axis in the second plot to match those in the first plot, rather than letting pylab choose the bounds. We have also used pylab.xlim and pylab.ylim to choose a set of coordinates for displaying a text box with the mean and standard deviation.

Normal Distributions and Confidence Levels

The distribution of results in each of these plots is close to what is called a normal distribution. Technically speaking, a normal distribution is defined by the formula

$$P(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\mu$ is the mean, $\sigma$ the standard deviation, and $e$ Euler's number (roughly 2.718). If you don't feel like studying this equation, that's fine. Just remember that normal distributions peak at the mean, fall off symmetrically above and below the mean, and asymptotically approach 0. They have the nice mathematical property of being completely specified by two parameters: the mean and the standard deviation (the only two parameters in the equation). Knowing these is equivalent to knowing the entire distribution. The shape of the normal distribution resembles (in the eyes of some) that of a bell, so it sometimes is referred to as a bell curve.

As we can see by zooming in on the center of either plot, the distribution is not perfectly symmetrical, and therefore not quite normal. However, as we increase the number of trials, the distribution will converge towards normal.

Normal distributions are frequently used in constructing probabilistic models for three reasons: 1) they have nice mathematical properties, 2) many naturally occurring distributions are indeed close to normal, and 3) they can be used to produce confidence intervals.

Instead of estimating an unknown parameter by a single value (e.g., the mean of a set of trials), a confidence interval provides a range that is likely to contain the unknown value, and a degree of confidence that the unknown value lies within that range. For example, a political poll might indicate that a candidate is likely to get 52% of the vote ±4% (i.e., the confidence interval is of size 8%) with a confidence level of 95%. What this means is that the pollster believes that 95% of the time the candidate will receive between 48% and 56% of the vote. Together the confidence interval and the confidence level indicate the reliability of the estimate.
4,148 | Almost always, increasing the confidence level will widen the confidence interval.

The calculation of a confidence interval generally requires assumptions about the nature of the space being sampled. It assumes that the distribution of errors of estimation is normal and has a mean of zero. The empirical rule for normal distributions provides a handy way to estimate confidence intervals and levels given the mean and standard deviation:

~68% of the data will fall within one standard deviation of the mean,
~95% of the data will fall within two standard deviations of the mean, and
almost all (~99.7%) of the data will fall within three standard deviations of the mean.

Suppose that we run a large number of trials of some fixed number of coin flips each, and compute the mean fraction of heads and the standard deviation of that fraction across the trials. If we assume that the distribution of the means of the trials was normal, we can conclude that if we conducted more trials of the same length, ~95% of the time the fraction of heads would fall within two standard deviations of that mean, and >99% of the time within three standard deviations.

It is often useful to visualize confidence intervals using error bars. The code in the figure below calls the version of flipsim in the earlier figure, and then uses

pylab.errorbar(xvals, means, yerr = 2*pylab.array(sds))

to produce the plot on the right. The first two arguments give the x and y values to be plotted. The third argument says that twice the values in sds should be used to create vertical error bars. The call to showerrorbars produces the plot on the right. Unsurprisingly, the error bars shrink as the number of flips per trial grows.

These values are approximations. For example, ~95% of the data will fall within 1.96 standard deviations of the mean; two standard deviations is a convenient approximation.
4,149 | stochastic programsprobabilityand statistics def showerrorbars(minexpmaxexpnumtrials)"""assumes minexp and maxexp positive intsminexp maxexp numtrials positive integer plots mean fraction of heads with error bars""meanssds [][xvals [for exp in range(minexpmaxexp )xvals append( **expfracheadsmeansd flipsim( **expnumtrialsmeans append(meansds append(sdpylab errorbar(xvalsmeansyerr= *pylab array(sds)pylab semilogx(pylab title('mean fraction of heads (str(numtrialstrials)'pylab xlabel('number of flips per trial'pylab ylabel('fraction of heads confidence'figure produce plot with error bars of coursefinding mathematically nice model is of no use if it provides bad model of the actual data fortunatelymany random variables have an approximately normal distribution for examplephysical properties of plants and animals ( heightweightbody temperaturetypically have approximately normal distributions importantlymany experimental setups have normally distributed measurement errors this assumption was used in the early by the german mathematician and physicist karl gausswho assumed normal distribution of measurement errors in his analysis of astronomical data (which led to the normal distribution becoming known as the gaussian distribution in much of the scientific communitynormal distributions can be easily generated by calling random gauss(musigma)which returns randomly chosen floating point number from normal distribution with mean mu and standard deviation sigma it is importanthoweverto remember that not all distributions are normal uniform distributions consider rolling single die each of the six outcomes is equally probable if one were to roll single die million times and create histogram showing how often each number came upeach column would be almost the same height if one were to plot the probability of each possible lottery number being chosenit would be flat line (at divided by the range of the lottery numberssuch distributions are called uniform one can fully characterize uniform distribution with single parameterits range ( minimum and maximum valueswhile uniform distributions are quite common in games of chancethey rarely occur in naturenor are they usually useful for modeling complex manmade systems uniform distributions can easily be generated by calling random uniform(minmaxwhich returns randomly chosen floating point number between min and max |
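Returning briefly to normal distributions: the empirical rule stated earlier is easy to check using random.gauss, mentioned above. This sketch (the mean, standard deviation, and sample size are arbitrary choices) counts the fraction of samples that land within one, two, and three standard deviations of the mean:

import random

def fracwithin(mu, sigma, numstds, numsamples = 100000):
    """estimates the fraction of samples drawn from a normal distribution
       with mean mu and standard deviation sigma that fall within
       numstds standard deviations of the mean"""
    inside = 0
    for i in range(numsamples):
        if abs(random.gauss(mu, sigma) - mu) <= numstds*sigma:
            inside += 1
    return inside/float(numsamples)

print(fracwithin(0, 100, 1))   # about 0.68
print(fracwithin(0, 100, 2))   # about 0.95
print(fracwithin(0, 100, 3))   # about 0.997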
4,150 | Exponential and Geometric Distributions

Exponential distributions, unlike uniform distributions, occur quite commonly. They are often used to model inter-arrival times, e.g., of cars entering a highway or requests for a web page. They are especially important because they have the memoryless property.

Consider, for example, the concentration of a drug in the human body. Assume that at each time step each molecule has a probability p of being cleared (i.e., of no longer being in the body). The system is memoryless in the sense that at each time step the probability of a molecule being cleared is independent of what happened at previous times. At time t = 0, the probability of an individual molecule still being in the body is 1. At time t = 1, the probability of that molecule still being in the body is 1 - p. At time t = 2, the probability of that molecule still being in the body is (1 - p)^2. More generally, at time t the probability of an individual molecule having survived is (1 - p)^t.

Suppose that at time t = 0 there are n molecules of the drug. In general, at time t, the number of molecules remaining will be n multiplied by the probability that an individual molecule has survived to time t, i.e., n*(1 - p)^t. The function implemented in the figure below plots the expected number of remaining molecules versus time.

def clear(n, p, steps):
    """assumes n & steps positive ints, p a float
         n: the initial number of molecules
         p: the probability of a molecule being cleared
         steps: the length of the simulation"""
    numremaining = [n]
    for t in range(steps):
        numremaining.append(n*((1-p)**t))
    pylab.plot(numremaining)
    pylab.xlabel('Time')
    pylab.ylabel('Molecules Remaining')
    pylab.title('Clearance of Drug')

Figure: Exponential clearance of molecules

The call to clear produces the plot on the left.
4,151 | stochastic programsprobabilityand statistics this is an example of exponential decay in practiceexponential decay is often talked about in terms of half-lifei the expected time required for the initial value to decay by one can also talk about the half-life of single item for examplethe half-life of single radioactive atom is the time at which the probability of that atom having decayed is notice that as time increases the number of remaining molecules approaches zero but it will never quite get there this should not be interpreted as suggesting that fraction of molecule remains rather it should be interpreted as saying that since the system is probabilisticone can never guarantee that all of the molecules have been cleared what happens if we make the -axis logarithmic (by using pylab semilogy)we get the plot above and on the right the values on the -axis are changing exponentially quickly relative to the values on the -axis if we make the -axis itself change exponentially quicklywe get straight line the slope of that line is the rate of decay exponential growth is the inverse of exponential decay it too is quite commonly seen in nature compound interestthe growth of algae in swimming pooland the chain reaction in an atomic bomb are all examples of exponential growth exponential distributions can easily be generated by calling random expovariate the geometric distribution is the discrete analog of the exponential distribution it is usually thought of as describing the number of independent attempts required to achieve first success (or first failureimaginefor examplethat you have crummy car that starts only half of the time you turn the key geometric distribution could be used to characterize the expected number of times you would have to attempt to start the car before being successful this is illustrated by the histogram on the rightwhich was produced by the code in figure the histogram implies that most of the time you'll get the car going within few attempts on the other handthe long tail suggests that on occasion you may run the risk of draining your battery before the car gets going the name "geometric distributionarises from its similarity to "geometric progression geometric progression is any sequence of numbers in which each number other than the first is derived by multiplying the previous number by constant nonzero number euclid' elements proves number of interesting theorems about geometric progressions |
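As a brief aside before the geometric-distribution code below: the claim above that exponential distributions are easy to generate with random.expovariate can be checked in a few lines. random.expovariate takes a rate parameter, and the samples it returns have a mean of one over that rate (the rate 0.5 used here is an arbitrary choice):

import random

lambd = 0.5    # rate parameter; the expected value of a sample is 1/lambd = 2.0
samples = [random.expovariate(lambd) for i in range(100000)]
print(sum(samples)/len(samples))   # close to 2.0
print(min(samples) >= 0)           # True: exponential samples are never negative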
4,152 | def successfulstarts(eventprob, numtrials):
    """assumes eventprob is a float representing the probability of a
         single attempt being successful. numtrials a positive int
       returns a list of the number of attempts needed before a
         success for each trial"""
    triesbeforesuccess = []
    for t in range(numtrials):
        consecfailures = 0
        while random.random() > eventprob:
            consecfailures += 1
        triesbeforesuccess.append(consecfailures)
    return triesbeforesuccess

random.seed(0)
probofsuccess = 0.5
numtrials = 1000
distribution = successfulstarts(probofsuccess, numtrials)
pylab.hist(distribution, bins = 14)
pylab.xlabel('Tries Before Success')
pylab.ylabel('Number of Occurrences Out of ' + str(numtrials))
pylab.title('Probability of Starting Each Try = ' + str(probofsuccess))

Figure: Geometric distribution

Benford's Distribution

Benford's law defines a really strange distribution. Let S be a large set of decimal integers. How frequently would you expect each digit to appear as the first digit? Most of us would probably guess one ninth of the time. And when people are making up sets of numbers (e.g., faking experimental data or perpetrating financial fraud) this is typically true. It is not, however, typically true of many naturally occurring data sets. Instead, they follow a distribution predicted by Benford's law.

A set of decimal numbers is said to satisfy Benford's law if the probability of the first digit being d is consistent with P(d) = log10(1 + 1/d). For example, this law predicts that the probability of the first digit being 1 is about 30%! Shockingly, many actual data sets seem to observe this law. It is possible to show that the Fibonacci sequence, for example, satisfies it perfectly. That's kind of plausible, since the sequence is generated by a formula. It's less easy to understand why such diverse data sets as iPhone pass codes, the number of Twitter followers per user, the population of countries, or the distance of stars from the earth closely approximate Benford's law.

The law is named after the physicist Frank Benford, who published a paper in 1938 showing that the law held on over 20,000 observations drawn from twenty different domains. However, it was first postulated in 1881 by the astronomer Simon Newcomb.
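The claim that the Fibonacci sequence satisfies Benford's law can be checked directly. The sketch below (not part of the book's code; the choice of 1,000 Fibonacci numbers is arbitrary) tallies leading digits and compares the observed fractions with log10(1 + 1/d):

import math

def leadingdigit(n):
    """returns the first decimal digit of the positive integer n"""
    return int(str(n)[0])

def benfordcheck(numfibs):
    """compares the leading-digit distribution of the first numfibs
       Fibonacci numbers with the distribution predicted by Benford's law"""
    counts = [0]*10
    a, b = 1, 1
    for i in range(numfibs):
        counts[leadingdigit(a)] += 1
        a, b = b, a + b
    for d in range(1, 10):
        predicted = math.log10(1 + 1.0/d)
        observed = counts[d]/float(numfibs)
        print(str(d) + ': predicted ' + str(round(predicted, 3)) + ', observed ' + str(round(observed, 3)))

benfordcheck(1000)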
4,153 | stochastic programsprobabilityand statistics how often does the better team winthus far we have looked at using statistical methods to help understand possible outcomes of games in which skill is not intended to play role it is also common to apply these methods to situations in which there ispresumablysome skill involved setting odds on football matchchoosing political candidate with chance of winninginvesting in the stock marketand so on almost every october two teams from american major league baseball meet in something called the world series they play each other repeatedly until one of the teams has won four gamesand that team is called (not entirely appropriately"the world champion setting aside the question of whether there is reason to believe that one of the participants in the world series is indeed the best team in the worldhow likely is it that contest that can be at most seven games long will determine which of the two participants is betterclearlyeach year one team will emerge victorious so the question is whether we should attribute that victory to skill or to luck to address that question we can use something called -value -values are used to determine whether or not result is statistically significant to compute -value one needs two thingsa null hypothesis this hypothesis describes the result that one would get if the results were determined entirely by chance in this casethe null hypothesis would be that the teams are equally talentedso if the two teams were to play an infinite number of seven-game serieseach would win half the time an observation data gathered either by observing what happens or by running simulation that one believes provides an accurate model of what would happen the -value gives us the likelihood that the observation is consistent with the null hypothesis the smaller the -valuethe more likely it is that we should reject the hypothesis that the observation is due entirely to chance usuallywe insist that be no larger than before we consider result to be statistically significant we insist that there is no more than chance that the null hypothesis holds getting back to the world seriesshould we consider the results of those sevengame series to be statistically significantthat isshould we conclude that the better team did indeed winfigure contains code that can provide us with some insight into that question the function simseries has one argumentnumseriesa positive integer describing the number of seven-game series to be simulated it plots the probability of the better team winning the series against the probability of that team winning single game it varies the probability of the better team winning single game from to and produces plot |
4,154 | def playseries(numgamesteamprob)"""assumes numgames an odd integerteamprob float between and returns true if better team wins series""numwon for game in range(numgames)if random random(<teamprobnumwon + return (numwon numgames// def simseries(numseries)prob fracwon [probs [while prob < serieswon for in range(numseries)if playseries( prob)serieswon + fracwon append(serieswon/numseriesprobs append(probprob + pylab plot(probsfracwonlinewidth pylab xlabel('probability of winning game'pylab ylabel('probability of winning series'pylab axhline( pylab ylim( pylab title(str(numseriesseven-game series'simseries( figure world series simulation when simseries is used to simulate seven-game seriesit produces the plot on the right notice that for the better team to win of the time ( on the -axis)it needs to be more than three times better than its opponent that is to saythe better team needs to winon averagemore than three out of four games ( on the -axisfor comparisonin the two teams in the world series had regular season winning percentages of (new york yankeesand (philadelphia philliesthis suggests that new york should win about of the games between the two teams our plot tells us that even if they were to play each other in seven-game seriesthe yankees would win less than of the time suppose we assume that these winning percentages are accurate reflections of the relative strengths of these two teams how many games long should the |
4,155 | stochastic programsprobabilityand statistics world series be in order for us to get results that would allow us to reject the null hypothesisi the hypothesis that the teams are evenly matchedthe code in figure simulates instances of series of varying lengthsand plots an approximation of the probability of the better team winning def findserieslength(teamprob)numseries maxlen step def fracwon(teamprobnumseriesserieslen)won for series in range(numseries)if playseries(serieslenteamprob)won + return won/numseries winfrac [xvals [for serieslen in range( maxlenstep)xvals append(serieslenwinfrac append(fracwon(teamprobnumseriesserieslen)pylab plot(xvalswinfraclinewidth pylab xlabel('length of series'pylab ylabel('probability of winning series'pylab title(str(round(teamprob )probability of better team winning game'pylab axhline( #draw horizontal line at yanksprob philsprob findserieslength(yanksprob/(yanksprob philsprob)figure how long should the world series bethe output of findserieslength suggests that under these circumstances the world series would have to be approximately games long before we could reject the null hypothesis and confidently say that the better team had almost certainly won scheduling series of this length might present some practical problems |
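The simulations above can be cross-checked analytically. The probability that the better team wins a best-of-n series (n odd) is the probability that it wins a majority of the n games, which is a sum of binomial terms. The sketch below is not part of the book's code, and the example probability 0.55 and the series lengths are arbitrary illustrations:

def choose(n, k):
    """returns the binomial coefficient n choose k"""
    result = 1
    for i in range(k):
        result = result*(n - i)//(i + 1)
    return result

def probwinseries(teamprob, numgames):
    """assumes numgames is odd; returns the probability that a team
       that wins each game with probability teamprob wins a majority
       of numgames independent games"""
    needed = numgames//2 + 1
    total = 0.0
    for wins in range(needed, numgames + 1):
        total += choose(numgames, wins)*(teamprob**wins)*((1-teamprob)**(numgames - wins))
    return total

print(probwinseries(0.55, 7))     # about 0.61 -- a short series is a weak filter
print(probwinseries(0.55, 269))   # roughly 0.95 -- it takes a very long series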
4,156 | Hashing and Collisions

In an earlier section we pointed out that by using a larger hash table one could reduce the incidence of collisions, and thus reduce the expected time to retrieve a value. We now have the intellectual tools needed to examine that tradeoff more precisely.

First, let's get a precise formulation of the problem. Assume:
a) The range of the hash function is 1 to n,
b) The number of insertions is K, and
c) The hash function produces a perfectly uniform distribution of the keys used in insertions, i.e., for all keys, key, and for all integers, i, in the range 1 to n, the probability that hash(key) = i is 1/n.

What is the probability that at least one collision occurs? The question is exactly equivalent to asking "given K randomly generated integers in the range 1 to n, what is the probability that at least two of them are equal?" If K >= n, the probability is clearly 1. But what about when K < n? As is often the case, it is easiest to start by answering the inverse question, "given K randomly generated integers in the range 1 to n, what is the probability that none of them are equal?"

When we insert the first element, the probability of not having a collision is clearly 1. How about the second insertion? Since there are n-1 hash results left that are not equal to the result of the first hash, n-1 out of n choices will not yield a collision. So, the probability of not getting a collision on the second insertion is $\frac{n-1}{n}$, and the probability of not getting a collision on either of the first two insertions is $1 \times \frac{n-1}{n}$. We can multiply these probabilities because for each insertion the value produced by the hash function is independent of anything that has preceded it.

The probability of not having a collision after three insertions is $1 \times \frac{n-1}{n} \times \frac{n-2}{n}$. After K insertions it is $1 \times \frac{n-1}{n} \times \frac{n-2}{n} \times \cdots \times \frac{n-(K-1)}{n}$. To get the probability of having at least one collision, we subtract this value from 1. The probability is

$$1 - \left(\frac{n-1}{n} \times \frac{n-2}{n} \times \cdots \times \frac{n-(K-1)}{n}\right)$$

Given the size of the hash table and the number of expected insertions, we can use this formula to calculate the probability of at least one collision. If K were reasonably large, it would be a bit tedious to compute the probability with pencil and paper. That leaves two choices, mathematics and programming. Mathematicians have used some fairly advanced techniques to find a way to approximate the value of this series. But unless K is very large, it is easier to run some code to compute the exact value of the series.
4,157 | stochastic programsprobabilityand statistics def collisionprob(nk)prob for in range( )prob prob (( )/float( )return prob if we try collisionprob( we get probability of about of there being at least one collision if we consider insertionsthe probability of collision is nearly one does that seem bit high to youlet' write simulationfigure to estimate the probability of at least one collisionand see if we get similar results def siminsertions(numindicesnuminsertions)"""assumes numindices and numinsertions are positive ints returns if there is collision otherwise""choices range(numindices#list of possible indices used [for in range(numinsertions)hashval random choice(choicesif hashval in used#there is collision return elseused append(hashvalreturn def findprob(numindicesnuminsertionsnumtrials)collisions for in range(numtrials)collisions +siminsertions(numindicesnuminsertionsreturn collisions/numtrials figure simulating hash table if we run the code print 'actual probability of collision ='collisionprob( print 'est probability of collision ='findprob( print 'actual probability of collision ='collisionprob( print 'est probability of collision ='findprob( it prints actual probability of collision est probability of collision actual probability of collision est probability of collision the simulation results are comfortingly similar to what we derived analytically should the high probability of collision make us think that hash tables have to be enormous to be usefulno the probability of there being at least one collision tells us little about the expected lookup time the expected time to look up value depends upon the average length of the lists implementing the buckets that hold the values that collided this is simply the number of insertions divided by the number of buckets |
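The last point is worth making concrete: even when at least one collision is nearly certain, the buckets stay short. The sketch below (hypothetical parameters, not from the book) hashes 200 uniformly distributed keys into 1,000 buckets and reports the average and maximum bucket lengths:

import random

def bucketlengths(numbuckets, numinsertions):
    """simulates inserting numinsertions uniformly distributed keys
       into numbuckets buckets; returns the list of bucket lengths"""
    lengths = [0]*numbuckets
    for i in range(numinsertions):
        lengths[random.choice(range(numbuckets))] += 1
    return lengths

lengths = bucketlengths(1000, 200)
print(sum(lengths)/float(len(lengths)))   # 0.2 -- insertions divided by buckets
print(max(lengths))                       # the longest chain is typically just 2 or 3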
4,158 | visualization in the scottish botanist robert brown observed that pollen particles suspended in water seemed to float around at random he had no plausible explanation for what came to be known as brownian motionand made no attempt to model it mathematically clear mathematical model of the phenomenon was first presented in in louis bachelier' doctoral thesisthe theory of speculation howeversince this thesis dealt with the then disreputable problem of understanding financial marketsit was largely ignored by respectable academics five years latera young albert einstein brought this kind of stochastic thinking to the world of physics with mathematical model almost the same as bachelier' and description of how it could be used to confirm the existence of atoms for some reasonpeople seemed to think that understanding physics was more important than making moneyand the world started paying attention times were certainly different brownian motion is an example of random walk random walks are widely used to model physical processes ( diffusion)biological processes ( the kinetics of displacement of rna from heteroduplexes by dna)and social processes ( movements of the stock marketin this we look at random walks for three reasons random walks are intrinsically interesting it provides us with good example of how to use abstract data types and inheritance to structure programs in general and simulations in particular it provides an opportunity to introduce few more features of python and to demonstrate some additional techniques for producing plots the drunkard' walk let' look at random walk that actually involves walking drunken farmer is standing in the middle of fieldand every second the farmer takes one step in random direction what is her (or hisexpected distance from the origin in nor was he the first to observe it as early as bcthe roman titus lucretiusin his poem "on the nature of things,described similar phenomenonand even implied that it was caused by the random movement of atoms "on the movement of small particles suspended in stationary liquid demanded by the molecular-kinetic theory of heat,annalen der physikmay einstein would come to describe as his "annus mirabilis that yearin addition to his paper on brownian motionhe published papers on the production and transformation of light (pivotal to the development of quantum theory)on the electrodynamics of moving bodies (special relativity)and on the equivalence of matter and energy ( mc not bad year for newly minted phd |
4,159 | random walks and more about data vizualization secondsif she takes many stepsis she likely to move ever further from the originor is she more likely to wander back to the origin over and overand end up not far from where she startedlet' write simulation to find out before starting to design programit is always good idea to try to develop some intuition about the situation the program is intended to model let' start by sketching simple model of the situation using cartesian coordinates assume that the farmer is standing in field where the grass hasmysteriouslybeen cut to resemble piece of graph paper assume further that each step the farmer takes is of length one and is parallel to either the -axis or -axis the picture on the left depicts farmer standing in the middle of the field the smiley faces indicate all the places the farmer might be after one step notice that after one step she is always exactly one unit away from where she started let' assume that she wanders eastward from her initial location on her first step how far away might she be from her initial location after her second steplooking at the smiley faces in the picture on the rightwe see that with probability of she will be units awaywith probability of she will be units awayand with probability of she will be units away soon average she will be further away after two steps than after one step what about the third stepif the second step is to the top or bottom smiley facethe third step will bring the farmer closer to origin half the time and further half the time if the second step is to the left smiley face (the origin)the third step will be away from the origin if the second step is to the right smiley facethe third step will be closer to the origin quarter of the timeand further away three quarters of the time it seems like the more steps the drunk takesthe greater the expected distance from the origin we could continue this exhaustive enumeration of possibilities and perhaps develop pretty good intuition about how this distance grows with respect to the number of steps howeverit is getting pretty tediousso it seems like better idea to write program to do it for us let' begin the design process by thinking about some data abstractions that are likely to be useful in building this simulation and perhaps simulations of other kinds of random walks as usual we should try to invent types that correspond to be honestthe person pictured here is professional actor impersonating farmer why we are using the pythagorean theorem |
4,160 | to the kinds of things that appear in the situation we are attempting to model three obvious types are locationfieldand drunk as we look at the classes providing these typesit is worthwhile to think about what each might imply about the kinds of simulation models they will allow us to build let' start with location class location(object)def __init__(selfxy)""" and are floats""self self def move(selfdeltaxdeltay)"""deltax and deltay are floats""return location(self deltaxself deltaydef getx(self)return self def gety(self)return self def distfrom(selfother)ox other oy other xdist self ox ydist self oy return (xdist** ydist** )** def __str__(self)return 'figure location class this is simple classbut it does embody two important decisions it tells us that the simulation will involve at most two dimensions the simulation will not model changes in altitude this is consistent with the pictures above alsosince the values of deltax and deltay are floats rather than integersthere is no built-in assumption in this class about the set of directions in which drunk might move this is generalization of the informal model in which each step was of length one and was parallel to the -axis or -axis class field is also quite simplebut it too embodies notable decisions it simply maintains mapping of drunks to locations it places no constraints on locationsso presumably field is of unbounded size it allows multiple drunks to be added into field at random locations it says nothing about the patterns in which drunks movenor does it prohibit multiple drunks from occupying the same location or moving through spaces occupied by other drunks |
4,161 | random walks and more about data vizualization class field(object)def __init__(self)self drunks {def adddrunk(selfdrunkloc)if drunk in self drunksraise valueerror('duplicate drunk'elseself drunks[drunkloc def movedrunk(selfdrunk)if drunk not in self drunksraise valueerror('drunk not in field'xdistydist drunk takestep(currentlocation self drunks[drunk#use move method of location to get new location self drunks[drunkcurrentlocation move(xdistydistdef getloc(selfdrunk)if drunk not in self drunksraise valueerror('drunk not in field'return self drunks[drunkfigure field class the classes drunk and usualdrunk define the ways in which drunk might wander through the field in particular the value of stepchoices in usualdrunk restores the restriction that each step is of length one and is parallel to either the -axis or -axis it also captures the assumption that each kind of step is equally likely and not influenced by previous steps bit later we will look at subclasses of drunk with different kinds of behaviors class drunk(object)def __init__(selfname none)"""assumes name is str""self name name def __str__(self)if self !nonereturn self name return 'anonymousclass usualdrunk(drunk)def takestep(self)stepchoices [( , )( ,- )( )(- )return random choice(stepchoicesfigure drunk base class the next step is to use these classes to build simulation that answers the original question figure contains three functions used in this simulation the function walk simulates one walk of numsteps steps the function simwalks calls walk to simulate numtrials walks of numsteps steps each the function drunktest calls simwalks to simulate walks of varying lengths |
4,162 | the parameter dclass of simwalks is of type classand is used in the first line of code to create drunk of the appropriate subclass laterwhen drunk takestep is invoked from field movedrunkthe method from the appropriate subclass is automatically selected the function drunktest also has parameterdclassof type class it is used twiceonce in the call to simwalks and once in the first print statement in the print statementthe built-in class attribute __name__ is used to get string with the name of the class the function drunktest calculates the coefficient of variation of the distance from the origin using the cv function defined in figure def walk(fdnumsteps)"""assumesf fieldd drunk in fand numsteps an int > moves numsteps timesand returns the difference between the final location and the location at the start of the walk ""start getloc(dfor in range(numsteps) movedrunk(dreturn start distfrom( getloc( )def simwalks(numstepsnumtrialsdclass)"""assumes numsteps an int > numtrials an int dclass subclass of drunk simulates numtrials walks of numsteps steps each returns list of the final distances for each trial""homer dclass(origin location( distances [for in range(numtrials) field( adddrunk(homerorigindistances append(walk(fhomernumtrials)return distances def drunktest(walklengthsnumtrialsdclass)"""assumes walklengths sequence of ints > numtrials an int dclass subclass of drunk for each number of steps in walklengthsruns simwalks with numtrials walks and prints results""for numsteps in walklengthsdistances simwalks(numstepsnumtrialsdclassprint dclass __name__'random walk of'numsteps'stepsprint mean ='sum(distances)/len(distances),'cv ='cv(distancesprint max ='max(distances)'min ='min(distancesfigure the drunkard' walk (with bug |
4,163 | random walks and more about data vizualization when we executed drunktest(( ) usualdrunk)it printed usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min this is surprisinggiven the intuition we developed earlier that the mean distance should grow with the number of steps it could mean that our intuition is wrongor it could mean that our simulation is buggyor both the first thing to do at this point is to run the simulation on values for which we already think we know the answerand make sure that what the simulation produces matches the expected result let' try walks of zero steps (for which the meanminimum and maximum distances from the origin should all be and one step (for which the meanminimum and maximum distances from the origin should all be when we ran drunktest(( , ) usualdrunk)we got the highly suspect result usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min how on earth can the mean distance of walk of zero steps be over we must have at least one bug in our simulation after some investigationthe problem is clear in simwalksthe call walk(fhomernumtrialsshould have been walk(fhomernumstepsthe moral here is an important onealways bring some skepticism to bear when looking at the results of simulation ask if the results are plausibleand "smoke test" the simulation on parameters for which you have strong intuition about what the results should be in the th centuryit became standard practice for plumbers to test closed systems of pipes for leaks by filling the system with smoke laterelectronic engineers adopted the term to cover the very first test of piece of electronics--turning on the power and looking for smoke still latersoftware developers starting using the term for quick test to see if program did anything useful |
4,164 | random walks and more about data vizualization when the corrected version of the simulation is run on our two simple casesit yields exactly the expected answersusualdrunk random walk of steps mean cv nan max min usualdrunk random walk of steps mean cv max min when run on longer walks it printed usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min as anticipatedthe average distance from the origin grows with the number of steps now let' look at plot of the mean distances from the origin to give sense of how fast the distance is growingwe have placed on the plot line showing the square root of the number of steps (and increased the number of steps to , , does this plot provide any information about the expected final location of drunkit does tell us that on average the drunk will be somewhere on circle with its center at the origin and with radius equal to the expected distance from the origin howeverit tells us very little about where we might actually find the drunk at the end of any particular walk we return to this topic later in this since the mean was zerothe coefficient of variation is undefined hence our implementation of cv returned the special "not numberfloating point value the plot showing the square root of the number of steps versus the distance from the origin is straight line because we used logarithmic scale on both axes |
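The text does not reproduce the few lines of plotting code used to make that figure, but a plausible sketch is shown below. This is my own reconstruction, not the book's code; the argument names walkLengths and meanDistances are assumptions. It overlays a square-root reference curve and, as the footnote notes, switches both axes to a logarithmic scale, which is why both curves appear as straight lines.

import pylab

def plotMeanDistances(walkLengths, meanDistances):
    """walkLengths: a sequence of numbers of steps
       meanDistances: the observed mean final distance for each walk length"""
    pylab.plot(walkLengths, meanDistances, 'bo-',
               label = 'UsualDrunk, mean distance')
    pylab.plot(walkLengths, [n**0.5 for n in walkLengths], 'g--',
               label = 'Square root of steps')
    pylab.title('Mean Distance from Origin')
    pylab.xlabel('Number of Steps')
    pylab.ylabel('Distance from Origin')
    pylab.semilogx()   #logarithmic x-axis
    pylab.semilogy()   #logarithmic y-axis
    pylab.legend(loc = 'best')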
4,165 | random walks and more about data vizualization biased random walks now that we have working simulationwe can start modifying it to investigate other kinds of random walks supposefor examplethat we want to consider the behavior of drunken farmer in the northern hemisphere who hates the coldand even in his drunken stupor is able to move twice as fast when his random movements take him in southward direction or maybe phototropic drunk who always moves towards the sun (east in the morning and west in the afternoonthese are all examples of biased random walks the walk is still stochasticbut there is bias in the outcome figure defines two additional subclasses of drunk in each case the specialization involves choosing an appropriate value for stepchoices the function simall iterates over sequence of subclasses of drunk to generate information about how each kind behaves class colddrunk(drunk)def takestep(self)stepchoices [( , )( ,- )( )(- )return random choice(stepchoicesclass ewdrunk(drunk)def takestep(self)stepchoices [( )(- )return random choice(stepchoicesdef simall(drunkkindswalklengthsnumtrials)for dclass in drunkkindsdrunktest(walklengthsnumtrialsdclassfigure subclasses of drunk base class when we ran simall((usualdrunkcolddrunkewdrunk)( ) it printed usualdrunk random walk of steps mean cv max min usualdrunk random walk of steps mean cv max min colddrunk random walk of steps mean cv max min colddrunk random walk of steps mean cv max min ewdrunk random walk of steps mean cv max min ewdrunk random walk of steps mean cv max min |
4,166 | this is quite bit of output to digest it does appear that our heat-seeking drunk moves away from the origin faster than the other two kinds of drunk howeverit is not easy to digest all of the information in this output it is once again time to move away from textual output and start using plots since we are showing number of different kinds of drunks on the same plotwe will associate distinct style with each type of drunk so that it is easy to differentiate among them the style will have three aspectsthe color of the line and pointsthe shape of the marker used to indicate pointand the style of linee solid or dotted the class styleiteratorin figure rotates through sequence of styles defined by the argument to styleiterator __init__ class styleiterator(object)def __init__(selfstyles)self index self styles styles def nextstyle(self)result self styles[self indexif self index =len(self styles self index elseself index + return result figure iterating over styles the code in figure is similar in structure to that in figure the print statements in simdrunk and simall contribute nothing to the result of the simulation they are there because this simulation can take rather long time to completeand printing an occasional message indicating that progress is being made can be quite reassuring to user who might be wondering if the program is actually making progress (recall that stddev was defined in figure |
4,167 | random walks and more about data vizualization def simdrunk(numtrialsdclasswalklengths)meandistances [cvdistances [for numsteps in walklengthsprint 'starting simulation of'numsteps'stepstrials simwalks(numstepsnumtrialsdclassmean sum(trials)/float(len(trials)meandistances append(meancvdistances append(stddev(trials)/meanreturn (meandistancescvdistancesdef simall(drunkkindswalklengthsnumtrials)stylechoice styleiterator((' -'' :'' ')for dclass in drunkkindscurstyle stylechoice nextstyle(print 'starting simulation of'dclass __name__ meanscvs simdrunk(numtrialsdclasswalklengthscvmean sum(cvs)/float(len(cvs)pylab plot(walklengthsmeanscurstylelabel dclass __name__ '(cv str(round(cvmean )')'pylab title('mean distance from origin (str(numtrialstrials)'pylab xlabel('number of steps'pylab ylabel('distance from origin'pylab legend(loc 'best'pylab semilogx(pylab semilogy(simall((usualdrunkcolddrunkewdrunk)( , , , , ) figure plotting the walks of different drunks the code in figure produces the plot on the right the usual drunk and the phototropic drunk (ewdrunkseem to be moving away from the origin at approximately the same pacebut the heat-seeking drunk (colddrunkseems to be moving away orders of magnitude faster this is interesting given that on average he is only moving faster (he takeson averagefive steps for every four taken by the othersalsothe coefficients of variation show quite spreadbut the plot doesn' shed any light on why let' construct different plotthat may help us get more insight into the behavior of these three classes instead of plotting the change in distance over time for an increasing number of stepsthe code in figure plots the distribution of final locations for single number of steps |
4,168 | def getfinallocs(numstepsnumtrialsdclass)locs [ dclass(origin location( for in range(numtrials) field( adddrunk(doriginfor in range(numsteps) movedrunk(dlocs append( getloc( )return locs def plotlocs(drunkkindsnumstepsnumtrials)stylechoice styleiterator((' +'' ^''mo')for dclass in drunkkindslocs getfinallocs(numstepsnumtrialsdclassxvalsyvals [][for in locsxvals append( getx()yvals append( gety()meanx sum(xvals)/float(len(xvals)meany sum(yvals)/float(len(yvals)curstyle stylechoice nextstyle(pylab plot(xvalsyvalscurstylelabel dclass __name__ mean loc <str(meanx'str(meany'>'pylab title('location at end of walks (str(numstepssteps)'pylab xlabel('steps east/west of origin'pylab ylabel('steps north/south of origin'pylab legend(loc 'lower left'numpoints plotlocs((usualdrunkcolddrunkewdrunk) figure plotting final locations the first thing plotlocs does is initialize stylechoice with three different styles of markers it then uses pylab plot to place marker at location corresponding to the end of each trial the call to pylab plot sets the color and shape of the marker to be plotted using the values returned by the iterator styleiterator the call plotlocs((usualdrunkcolddrunkewdrunk) produces the plot on the right the first thing to say is that our drunks seem to be behaving as advertised the ewdrunk ends up on the -axisthe colddrunk seem to make progress southwardsand the usualdrunk seem to have wandered aimlessly |
4,169 | random walks and more about data vizualization but why do there appear to be far fewer circle markers than triangle or markersbecause many of the ewdrunk' walks ended up at the same place this is not surprisinggiven the small number of possible endpoints ( for the ewdrunk also the circle markers seem to be fairly uniformly spaced across the -axiswhich is consistent with the relatively high coefficient of variation that we noticed earlier it is still not obviousat least to uswhy the colddrunk manageson averageto get so much further from the origin than the other kinds of drunks perhaps it' time to look not at the average endpoint of many walksbut at the path followed by single walk the code in figure produces the plot on the right def tracewalk(drunkkindsnumsteps)stylechoice styleiterator((' +'' ^''mo') field(for dclass in drunkkindsd dclass( adddrunk(dlocation( )locs [for in range(numsteps) movedrunk(dlocs append( getloc( )xvals [yvals [for in locsxvals append( getx()yvals append( gety()curstyle stylechoice nextstyle(pylab plot(xvalsyvalscurstylelabel dclass __name__pylab title('spots visited on walk (str(numstepssteps)'pylab xlabel('steps east/west of origin'pylab ylabel('steps north/south of origin'pylab legend(loc 'best'tracewalk((usualdrunkcolddrunkewdrunk) figure tracing walks since the walk is steps long and the ewdrunk' walk visits fewer than different locationsit' clear that he is spending lot of time retracing his steps the same kind of observation holds for the usualdrunk in contrastwhile the colddrunk is not exactly making beeline for floridahe is managing to spend relatively less time visiting places he has already been |
4,170 | none of these simulations is interesting in its own right (in the next we will look at more intrinsically interesting simulations but there are some points worth taking awayinitially we divided our simulation code into four separate chunks three of them were classes (locationfieldand drunkcorresponding to abstract data types that appeared in the informal description of the problem the fourth chunk was group of functions that used these classes to perform simple simulation we then elaborated drunk into hierarchy of classes so that we could observe different kinds of biased random walks the code for location and field remained untouchedbut the simulation code was changed to iterate through the different subclasses of drunk in doing thiswe took advantage of the fact that class is itself an objectand therefore can be passed as an argument finallywe made series of incremental changes to the simulation that did not involve any changes to the classes representing the abstract types these changes mostly involved introducing plots designed to provide insight into the different walks this is very typical of the way in which simulations are developed one gets the basic simulation working firstand then starts adding features treacherous fields did you ever play the board game known as chutes and ladders in the and snakes and ladders in the ukthis children' game originated in india (perhaps in the nd century bc)where it was called moksha-patamu landing on square representing virtue ( generositysent player up ladder to higher tier of life landing on square representing evil ( lust)sent player back to lower tier of life we can easily add this kind of feature to our random walks by creating field with wormholes, as shown in figure and replacing the second line of code in the function tracewalk by the line of code oddfield( this kind of wormhole is hypothetical concept invented by theoretical physicists it provides shortcuts through the time/space continuum |
4,171 | random walks and more about data vizualization class oddfield(field)def __init__(selfnumholesxrangeyrange)field __init__(selfself wormholes {for in range(numholes) random randint(-xrangexrangey random randint(-yrangeyrangenewx random randint(-xrangexrangenewy random randint(-yrangeyrangenewloc location(newxnewyself wormholes[(xy)newloc def movedrunk(selfdrunk)field movedrunk(selfdrunkx self drunks[drunkgetx( self drunks[drunkgety(if (xyin self wormholesself drunks[drunkself wormholes[(xy)figure fields with strange properties when we ran tracewalk((usualdrunkcolddrunkewdrunk) )we got the rather odd-looking plot clearly changing the properties of the field has had dramatic effect howeverthat is not the point of this example the main points arebecause of the way we structured our codeit was easy to accommodate significant change to the situation being modeled just as we could add different kinds of drunks without touching fieldwe can add new kind of field without touching drunk or any of its subclasses (had we been sufficiently prescient to make the field parameter of tracewalkwe wouldn' have had to change tracewalk either while it would have been feasible to analytically derive different kinds of information about the expected behavior of the simple random walk and even the biased random walksit would have been challenging to do so once the wormholes were introduced yet it was exceedingly simple to change the simulation to model the new situation simulation models often enjoy this advantage relative to analytic models |
4,172 | in the previous two we looked at different ways of using randomness in computations many of the examples we presented fall into the class of computation known as monte carlo simulation stanislaw ulam and nicholas metropolis coined the term monte carlo simulation in in homage to the games of chance played in the casino in the principality of monaco ulamwho is best known for designing the hydrogen bomb with edward tellerdescribed the invention of the model as followsthe first thoughts and attempts made to practice [the monte carlo methodwere suggested by question which occurred to me in as was convalescing from an illness and playing solitaires the question was what are the chances that canfield solitaire laid out with cards will come out successfullyafter spending lot of time trying to estimate them by pure combinatorial calculationsi wondered whether more practical method than "abstract thinkingmight not be to lay it out say one hundred times and simply observe and count the number of successful plays this was already possible to envisage with the beginning of the new era of fast computers, and immediately thought of problems of neutron diffusion and other questions of mathematical physicsand more generally how to change processes described by certain differential equations into an equivalent form interpretable as succession of random operations later [in idescribed the idea to john von neumannand we began to plan actual calculations the technique was effectively used during the manhattan project to predict what would happen during nuclear fission reactionbut did not really take off until the when computers became both more common and more powerful ulam was not the first mathematician to think about using the tools of probability to understand game of chance the history of probability is intimately connected to the history of gambling it is the existence of uncertainty that makes gambling possible and the existence of gambling provoked the development of much of the mathematics needed to reason about uncertainty contributions to the foundations of probability theory by cardanopascalfermatbernoullide moivreand laplace were all motivated by desire to better understand (and perhaps profit fromgames of chance "fastis relative term ulam was probably referring to the eniacwhich could perform about additions second (and weighed more than tonstoday' computers perform about additions second (and weigh maybe - tons eckhardtroger ( "stan ulamjohn von neumannand the monte carlo method,los alamos sciencespecial issue ( ) - |
4,173 | monte carlo simulation pascal' problem most of the early work on probability theory revolved around games using dice reputedlypascal' interest in the field that came to be known as probability theory began when friend asked him whether or not it would be profitable to bet that within twenty-four rolls of pair of dice he would roll double six this was considered hard problem in the mid- th century pascal and fermattwo pretty smart guysexchanged number of letters about how to resolve the issuebut it now seems like an easy question to answeron the first roll the probability of rolling six on each die is / so the probability of rolling six with both dice is / thereforethe probability of not rolling double six on the first roll is / / therefore the probability of not rolling double six twenty-four consecutive times is ( / ) nearly and therefore the probability of rolling double six is ( / ) about in the long run it would not be profitable to bet on rolling double six within twenty-four rolls just to be safelet' write little program to simulate pascal' friend' game and confirm that we get the same answer as pascal def rolldie()return random choice([ , , , , , ]def checkpascal(numtrials)"""assumes numtrials an int prints an estimate of the probability of winning""numwins for in range(numtrials)for in range( ) rolldie( rolldie(if = and = numwins + break print 'probability of winning ='numwins/numtrials figure checking pascal' analysis archeological excavations suggest that dice are the human race' oldest gambling implement the oldest known "modernsix-sided die dates to about bcbut egyptian tombs dating to two millennia before the birth of christ contain artifacts resembling dice typicallythese early dice were made from animal bonesin gambling circles people still use the phrase "rolling the bones as with our earlier analysesthis is true only under the assumption that each die is fairi the outcome of roll is truly random and each of the six outcomes is equally probable this is not always to be taken for granted excavations of pompeii discovered "loadeddice in which small lead weights had been inserted to bias the outcome of roll more recentlyan online vendor' site said"are you unusually unlucky when it comes to rolling diceinvesting in pair of dice that' moreuhreliable might be just what you need |
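Before trusting the simulation, it is worth noting that the closed-form answer derived above is a one-liner to compute. This check is mine, not part of the figure:

#Analytic probability of rolling at least one double 6 in 24 rolls
probNoDoubleSix = (35.0/36.0)**24
print('Probability of winning = ' + str(round(1 - probNoDoubleSix, 4)))   #about 0.4914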
4,174 | when run the first timethe call checkpascal( printed probability of winning this is indeed quite close to ( / ) typing )** into the python shell produces pass or don' passnot all questions about games of chance are so easily answered in the game crapsthe shooter (the person who rolls the dicechooses between making "pass lineor "don' pass linebet pass lineshooter wins if the first roll (called coming outis "natural( or and loses if it is "craps( or if some other number is rolledthat number becomes the "pointand the shooter keeps rolling if the shooter rolls the point before rolling the shooter wins otherwise the shooter loses don' pass lineshooter loses if the first roll is or wins if it is or and ties ( "pushin gambling jargonif it is if some other number is rolledthat number becomes the point and shooter keeps rolling if the shooter rolls before rolling the pointthe shooter wins otherwise the shooter loses is one of these better bet than the otheris either good betit is possible to analytically derive the answer to these questionsbut it seems easier (at least to usto write program that simulates craps gameand see what happens figure contains the heart of such simulation the values of the instance variables of an instance of class crapsgame records the performance of the pass and don' pass lines since the start of the game the observer methods passresults and dpresults return these values the method playhand simulates one "hand" of game the bulk of the code in playhand is merely an algorithmic description of the rules stated above notice that there is loop in the else clause corresponding to what happens after point is established it is exited using break statement when either seven or the point is rolled hand starts when the shooter is "coming out,the term used in craps for roll before point is established hand ends when the shooter has won or lost his or her initial bet |
4,175 |

class crapsgame(object):
    def __init__(self):
        self.passwins, self.passlosses = (0, 0)
        self.dpwins, self.dplosses, self.dppushes = (0, 0, 0)

    def playhand(self):
        throw = rolldie() + rolldie()
        if throw == 7 or throw == 11:
            self.passwins += 1
            self.dplosses += 1
        elif throw == 2 or throw == 3 or throw == 12:
            self.passlosses += 1
            if throw == 12:
                self.dppushes += 1
            else:
                self.dpwins += 1
        else:
            point = throw
            while True:
                throw = rolldie() + rolldie()
                if throw == point:
                    self.passwins += 1
                    self.dplosses += 1
                    break
                elif throw == 7:
                    self.passlosses += 1
                    self.dpwins += 1
                    break

    def passresults(self):
        return (self.passwins, self.passlosses)

    def dpresults(self):
        return (self.dpwins, self.dplosses, self.dppushes)

Figure: crapsgame class

The next figure contains a function that uses class crapsgame. Its structure is typical of many simulation programs:
1. It runs multiple games (think of each game as analogous to a trial in our earlier simulations) and accumulates the results.
2. Each game includes multiple hands, so there is a nested loop.
3. It then produces and stores statistics for each game.
4. Finally, it produces and outputs summary statistics. In this case, it prints the expected return on investment (ROI) for each kind of betting line and the standard deviation of that ROI.

Return on investment is defined by the equation

    ROI = (gain from investment - cost of investment)/cost of investment

Since the pass and don't pass lines pay even money (if you bet $1 and win, you gain $1), the ROI is

    ROI = (number of wins - number of losses)/number of bets

as the small helper sketched below also illustrates.
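To make the even-money ROI formula concrete, here is a tiny helper of my own; the name evenMoneyROI does not appear in the text. Pushes count as bets that neither win nor lose:

def evenMoneyROI(wins, losses, pushes = 0):
    """For an even-money bet, gain - cost = wins - losses and
       cost = total number of bets placed"""
    numBets = wins + losses + pushes
    return (wins - losses)/float(numBets)

print(evenMoneyROI(50, 50))   #0.0: winning half of 100 even-money bets breaks even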
4,176 | monte carlo simulation for exampleif you made pass line bets and won half of themyour roi would be = if you bet the don' pass line times and had wins and pushes the roi would be - - note that in crapssim we use xrange rather than range in the for loops in anticipation of running large simulations recall that in python range(ncreates sequence with elements whereas xrange(ngenerates the values only as they are needed by the for loop def crapssim(handspergamenumgames)"""assumes handspergame and numgames are ints play numgames games of handspergame handsand print results""games [#play numgames games for in xrange(numgames) crapsgame(for in xrange(handspergame) playhand(games append( #produce statistics for each game proipergamedproipergame [][for in gameswinslosses passresults(proipergame append((wins losses)/float(handspergame)winslossespushes dpresults(dproipergame append((wins losses)/float(handspergame)#produce and print summary statistics meanroi str(round(( *sum(proipergame)/numgames) )'%sigma str(round( *stddev(proipergame) )'%print 'pass:''mean roi ='meanroi'std dev ='sigma meanroi str(round(( *sum(dproipergame)/numgames) )'%sigma str(round( *stddev(dproipergame) )'%print 'don\' pass:','mean roi ='meanroi'std dev ='sigma figure simulating craps game let' run our craps simulation and see what happens: crapssim( passmean roi - std dev don' passmean roi std dev keep in mind that since these programs incorporate randomnessyou should not expect to get identical results if you run the code yourself more importantlydo not make any bets until you have read the entire section |
4,177 | monte carlo simulation it looks as if it would be good idea to avoid the pass line--where the expected return on investment is loss but the don' pass line looks like pretty good bet or does itlooking at the standard deviationsit seems that perhaps the don' pass line is not such good bet after all recall that under the assumption that the distribution is normalthe confidence interval is encompassed by two standard deviations on either side of the mean for the don' pass linethe confidence interval is [ * * ]--roughly [- %+ %that certainly doesn' suggest that betting the don' pass line is sure thing time to put the law of large numbers to work crapssim( passmean roi - std dev don' passmean roi - std dev we can now be pretty safe in assuming that neither of these is good bet it looks as if the don' pass line may be slightly less bad howeverbecause the confidence interval [- - for the pass line overlaps with that for the don' pass line [- - ]we cannot say with confidence that the don' pass line is better bet suppose that instead of increasing the number of hands per gamewe increased the number of gamese by making the call crapssim( as shown belowthe mean of the estimated rois are close to the actual rois howeverthe standard deviations are still be high--indicating that the outcome of single game of hands is highly uncertain crapssim( passmean roi - std dev don' passmean roi - std dev one of the nice things about simulations is that they make it easy to perform "what ifexperiments for examplewhat if player could sneak in pair of cheater' dice that favored over ( and are on the opposite sides of die)to test this outall we have to do is replace the implementation of rolldie by something like def rolldie()return random choice([ , , , , , , , , , , , ]this relatively small change in the die makes dramatic difference in the oddscrapssim( passmean roi std dev don' passmean roi - std dev no wonder casinos go to lot of trouble to make sure that players don' introduce their own dice into the game |
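The exact weighting of the cheater's dice is not the important point; even a modest bias shifts the come-out odds noticeably. The sketch below is my own illustration, not code from the text, and the crooked distribution shown is hypothetical (the die used above favored one face over the one opposite it). It counts outcomes exhaustively rather than simulating them:

import itertools

def comeOutOdds(dieFaces):
    """dieFaces: the list a biased rolldie would choose from uniformly
       Returns (P(natural), P(craps)) for the come-out roll"""
    pairs = list(itertools.product(dieFaces, dieFaces))
    total = float(len(pairs))
    pNatural = sum(1 for d1, d2 in pairs if d1 + d2 in (7, 11))/total
    pCraps = sum(1 for d1, d2 in pairs if d1 + d2 in (2, 3, 12))/total
    return pNatural, pCraps

fairDie = [1, 2, 3, 4, 5, 6]
crookedDie = [1, 2, 3, 4, 5, 5, 6]   #hypothetical die on which 5 is twice as likely as any other face
for faces in (fairDie, crookedDie):
    pNatural, pCraps = comeOutOdds(faces)
    print('P(natural) = ' + str(round(pNatural, 3)) +
          ', P(craps) = ' + str(round(pCraps, 3)))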
4,178 | using table lookup to improve performance you might not want to try running crapssim( at home it takes long time to complete on most computers that raises the question of whether there is simple way to speed up the simulation the complexity of crapssim is (playhand)*handspergame*numgames the running time of playhand depends upon the number of times the loop in it is executed in principlethe loop could be executed an unbounded number of times since there is no bound on how long it could take to roll either seven or the point in practiceof coursewe have every reason to believe it will always terminate noticehoweverthat the result of call to playhand does not depend on how many times the loop is executedbut only on which exit condition is reached for each possible pointone can easily calculate the probability of rolling that point before rolling seven for exampleusing pair of dice one can roll in three different waysand and one can roll in six different waysand thereforeexiting the loop by rolling is twice as likely as exiting the loop by rolling figure contains an implementation of playhand that exploits this thinking we have pre-computed the probability of making the point before rolling for each possible value of the pointand stored those values in dictionary supposefor examplethat the point is the shooter continues to roll until he either rolls the point or rolls craps there are five ways of rolling an (and and six ways of rolling sothe value for the dictionary key is the value of the expression having this table allows us to replace the inner loopwhich contained an unbounded number of rollswith test against one call to random random the asymptotic complexity of this version of playhand is ( the idea of replacing computation by table lookup has broad applicability and is frequently used when speed is an issue table lookup is an example of the general idea of trading time for space we saw another example of this technique in our analysis of hashingthe larger the tablethe fewer the collisionsand the faster the average lookup in this casethe table is smallso the space cost is negligible |
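The probabilities stored in that dictionary need not be typed in by hand; they can be derived by counting. The sketch below is my own (the helper name makePointsDict is not from the text): since only the point or a 7 ends the inner loop, the chance of making the point is ways(point)/(ways(point) + ways(7)).

def makePointsDict():
    """Returns a dict mapping each possible point to the probability
       of rolling that point before rolling a 7"""
    ways = {}   #number of ways to roll each total with two fair dice
    for d1 in range(1, 7):
        for d2 in range(1, 7):
            ways[d1 + d2] = ways.get(d1 + d2, 0) + 1
    pointsDict = {}
    for point in (4, 5, 6, 8, 9, 10):
        pointsDict[point] = ways[point]/float(ways[point] + ways[7])
    return pointsDict

#e.g., makePointsDict()[8] is 5/11, since 8 can be rolled in five ways and 7 in six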
4,179 | monte carlo simulation def playhand(self)#an alternativefasterimplementation of playhand pointsdict { : : : : : :throw rolldie(rolldie(if throw = or throw = self passwins + self dplosses + elif throw = or throw = or throw = self passlosses + if throw = self dppushes + elseself dpwins + elseif random random(<pointsdict[throw]point before self passwins + self dplosses + else before point self passlosses + self dpwins + figure using table lookup to improve performance finding it is easy to see how monte carlo simulation is useful for tackling problems in which nondeterminism plays role interestinglyhowevermonte carlo simulation (and randomized algorithms in generalcan be used to solve problems that are not inherently stochastici for which there is no uncertainty about outcomes consider for thousands of yearspeople have known that there is constantcalled (pisince the th centurysuch that the circumference of circle is equal to *diameter and the area of the circle equal to *radius what they did not know was the value of this constant one of the earliest estimates *( / ) can found in the egyptian rhind papyruscirca bc more than thousand years laterthe old testament implied different value for when giving the specifications of one of king solomon' construction projectsand he made molten seaten cubits from the one brim to the otherit was round all aboutand his height was five cubitsand line of thirty cubits did compass it round about solving for so perhaps the bible is simply wrongor perhaps the molten sea wasn' perfectly circularor perhaps the circumference was king james bible kings |
4,180 | measured from the outside of the wall and the diameter from the inside, or perhaps it's just poetic license. We leave it to the reader to decide.

Archimedes of Syracuse (287-212 BC) derived upper and lower bounds on the value of π by using a high-degree polygon to approximate a circular shape. Using a polygon with 96 sides, he concluded that 223/71 < π < 22/7. Giving upper and lower bounds was a rather sophisticated approach for the time. Also, if we take his best estimate as the average of his two bounds, we obtain an error of less than 0.0003, not bad!

Long before computers were invented, the French mathematicians Buffon (1707-1788) and Laplace (1749-1827) proposed using a stochastic simulation to estimate the value of π. Think about inscribing a circle in a square with sides of length 2, so that the radius, r, of the circle is of length 1. By the definition of π, area = πr². Since r is 1, π = area. But what's the area of the circle?

Buffon suggested that he could estimate the area of a circle by dropping a large number of needles (which he argued would follow a random path as they fell) in the vicinity of the square. The ratio of the number of needles with tips lying within the square to the number of needles with tips lying within the circle could then be used to estimate the area of the circle. If the locations of the needles are truly random, we know that

    needles in circle/needles in square = area of circle/area of square

and solving for the area of the circle,

    area of circle = area of square * (needles in circle/needles in square)

Recall that the area of a 2 by 2 square is 4, so,

    area of circle = 4 * (needles in circle/needles in square)

In general, to estimate the area of some region R (the sketch following this list applies the same recipe to a different region):
1. Pick an enclosing region, E, such that the area of E is easy to calculate and R lies completely within E.
2. Pick a set of random points that lie within E.
3. Let F be the fraction of the points that fall within R.
4. Multiply the area of E by F.

[Footnote: Buffon proposed the idea first, but there was an error in his formulation that was later corrected by Laplace.]
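The recipe is not specific to circles. As an illustration of my own (not from the text), the sketch below uses it to estimate the area under the curve y = x² between 0 and 1; the enclosing region E is the unit square, whose area is 1, and the true answer is 1/3.

import random

def estimateAreaUnderParabola(numPoints):
    """Estimate the area of the region R under y = x**2 for 0 <= x <= 1,
       using the unit square as the enclosing region E"""
    numInR = 0
    for i in range(numPoints):
        x = random.random()
        y = random.random()
        if y <= x**2:
            numInR += 1
    return 1.0*numInR/numPoints   #area(E) * fraction of points in R

print('Estimate of area = ' + str(estimateAreaUnderParabola(1000000)))   #close to 1/3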
4,181 | monte carlo simulation if you try buffon' experimentyou'll soon realize that the places where the needles land are not truly random moreovereven if you could drop them randomlyit would take very large number of needles to get an approximation of as good as even the bible' fortunatelycomputers can randomly drop simulated needles at ferocious rate figure contains program that estimates using the buffon-laplace method for simplicityit considers only those needles that fall in the upper right-hand quadrant of the square the function throwneedles simulates dropping needle by first using random random to get pair of positive cartesian coordinates ( and valuesit then uses the pythagorean theorem to compute the hypotenuse of the right triangle with base and height this is the distance of the tip of the needle from the origin (the center of the squaresince the radius of the circle is we know that the needle lies within the circle if and only if the distance from the origin is no greater than we use this fact to count the number of needles in the circle the function getest uses throwneedles to find an estimate of by dropping numneedles needles and averaging the result over numtrials trials the function estpi calls getest with an ever-growing number of needles until getest returns an estimate thatwith confidence of %is within precision of the actual value it does this by calling throwneedles with an ever-larger number of needlesuntil the standard deviation of the results of numtrials trials is no larger than precision/ under the assumption that the errors are normally distributedthis ensures that of the values lie within precision of the mean |
4,182 | monte carlo simulation def throwneedles(numneedles)incircle for needles in xrange( numneedles ) random random( random random(if ( * * )** < incircle + #counting needles in one quadrant onlyso multiply by return *(incircle/float(numneedles)def getest(numneedlesnumtrials)estimates [for in range(numtrials)piguess throwneedles(numneedlesestimates append(piguesssdev stddev(estimatescurest sum(estimates)/len(estimatesprint 'est str(round(curest )+'std dev str(round(sdev ))'needles str(numneedlesreturn (curestsdevdef estpi(precisionnumtrials)numneedles sdev precision while sdev >precision/ curestsdev getest(numneedlesnumtrialsnumneedles * return curest figure estimating when we ran estpi( it printed est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles as one would expectthe standard deviations decreased monotonically as we increased the number of samples in the beginning the estimates of the value of also improved steadily some were above the true value and some belowbut each increase in numneedles led to an improved estimate with samples per trialthe simulation' estimate was already better than those of the bible and the rhind papyrus curiouslythe estimate got worse when the number of needles went from , to , since is further from the true value of than is howeverif we look at the ranges defined by one standard deviation around each of the meansboth ranges contain the true value of pand the range associated with the larger sample size is considerably smaller even though the estimate generated with , samples happens to be further from the actual value of pwe should have more confidence in its accuracy this is an extremely important |
4,183 | monte carlo simulation notion it is not sufficient to produce good answer we have to have valid reason to be confident that it is in fact good answer and when we drop large enough number of needlesthe small standard deviation gives us reason to be confident that we have correct answer rightnot exactly having small standard deviation is necessary condition for having confidence in the validity of the result it is not sufficient condition the notion of statistically valid conclusion should never be confused with the notion of correct conclusion each statistical analysis starts with set of assumptions the key assumption here is that our simulation is an accurate model of reality recall that the design of our buffon-laplace simulation started with little algebra demonstrating how we could use the ratio of two areas to find the value of we then translated this idea into code that depended upon little geometry and the randomness of random random let' see what happens if we get any of this wrong supposefor examplewe replace the in the last line of throwneedles by and again run estpi( this time it prints est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles est std dev needles the standard deviation for mere , needles suggests that we should have fair amount of confidence in the estimate but what does that really meanit means that we can be reasonably confident that if we were to draw more samples from the same distributionwe would get similar value it says nothing about whether or not this value is close to the actual value of statistically valid conclusion should not be confused with correct conclusion before believing the results of simulationwe need to have confidence both that our conceptual model is correct and that we have correctly implemented that model whenever possibleone should attempt to validate results against reality in this caseone could use some other means to compute an approximation to the area of circle ( physical measurementand check that the computed value of is at least in the right neighborhood some closing remarks about simulation models for most of the history of sciencetheorists used mathematical techniques to construct purely analytical models that could be used to predict the behavior of system from set of parameters and initial conditions this led to the development of important mathematical tools ranging from calculus to probability theory these tools helped scientists develop reasonably accurate understanding of the macroscopic physical world |
4,184 | as the th century progressedthe limitations of this approach became increasingly clear reasons for this includean increased interest in the social sciencese economicsled to desire to construct good models of systems that were not mathematically tractable as the systems to be modeled grew increasingly complexit seemed easier to successively refine series of simulation models than to construct accurate analytic models it is often easier to extract useful intermediate results from simulation than from an analytical modele to play "what ifgames the availability of computers made it feasible to run large-scale simulations until the advent of the modern computer in the middle of the th century the utility of simulation was limited by the time required to perform calculations by hand simulation attempts to build an experimental devicecalled modelthat will provide useful information about the possible behaviors of the system being modeled it is important to remember that these modelslike all modelsare only an approximation of reality one can never be sure that the actual system will behave in the way predicted by the model in factone can usually be pretty confident that the actual system will not behave exactly as predicted by the model it is commonly quoted truism that "all models are wrongbut some are useful " simulation models are descriptivenot prescriptive they tell how system works under given conditionsnot how to arrange the conditions to make the system work best simulation does not optimizeit merely describes that is not to say that simulation cannot be used as part of an optimization process for examplesimulation is often used as part of search process in finding an optimal set of parameter settings simulation models can be classified along three dimensionsdeterministic versus stochasticstatic versus dynamicand discrete versus continuous the behavior of deterministic simulation is completely defined by the model rerunning simulation will not change the outcome deterministic simulations are typically used when the system being modeled is too complex to analyze analyticallye the performance of processor chip stochastic simulations incorporate randomness in the model multiple runs of the same model may generate different values this random element forces us to generate many outcomes to see the range of possibilities the question of whether to generate or or , outcomes is statistical questionas discussed earlier usually attributed to the statistician george box |
4,185 | monte carlo simulation in static modeltime plays no essential role the needle-dropping simulation used to estimate in this is an example of static simulation in dynamic modeltimeor some analogplays an essential role in the series of random walks simulated in the number of steps taken was used as surrogate for time in discrete modelthe values of pertinent variables are enumerablee they are integers in continuous modelthe values of pertinent variables range over non-enumerable setse the real numbers imagine analyzing the flow of traffic along highway we might choose to model each individual carin which case we have discrete model alternativelywe might choose to treat traffic as flowwhere changes in the flow can be described by differential equations this leads to continuous model in this examplethe discrete model more closely resembles the physical situation (nobody drives half carthough some cars are half the size of others)but is more computationally complex than continuous one in practicemodels often have both discrete and continuous components for exampleone might choose to model the flow of blood through the human body using discrete model for blood ( modeling individual corpusclesand continuous model for blood pressure |
4,186 | this is all about understanding experimental data we will make extensive use of plotting to visualize the dataand will return to the topic of what is and what is not valid statistical conclusion we will also talk about the interplay between physical and computational experiments the behavior of springs springs are wonderful things when they are compressed or stretched by some forcethey store energy when that force is no longer applied they release the stored energy this property allows them to smooth the ride in carshelp mattresses conform to our bodiesretract seat beltsand launch projectiles in the british physicist robert hooke formulated hooke' law of elasticityut tensiosic visin englishf -kx in other wordsthe forcefstored in spring is linearly related to the distancexthe spring has been compressed (or stretched(the minus sign indicates that the force exerted by the spring is in the opposite direction of the displacement hooke' law holds for wide variety of materials and systemsincluding many biological systems of courseit does not hold for an arbitrarily large force all springs have an elastic limitbeyond which the law fails those of you who have stretched slinky too far know this all too well the constant of proportionalitykis called the spring constant if the spring is stiff (like the ones in the suspension of car or the limbs of an archer' bow) is large if the spring is weaklike the spring in ballpoint penk is small knowing the spring constant of particular spring can be matter of some import the calibrations of both simple scales and atomic force microscopes depend upon knowing the spring constants of components the mechanical behavior of strand of dna is related to the force required to compress it the force with which bow launches an arrow is determined by the spring constant of its limbs and so on |
4,187 | Generations of physics students have learned to estimate spring constants using an experimental apparatus similar to that pictured here. The basic idea is to estimate the force stored in the spring by measuring the displacement caused by exerting a known force on the spring.

We start with a spring with no weight on it, and measure the distance to the bottom of the spring from the top of the stand. We then hang a known mass on the spring, wait for it to stop moving, and again measure the distance from the bottom of the spring to the top of the stand. The difference between the two distances then becomes the value of x in Hooke's law.

We know that the force, F, being exerted on the spring is equal to the mass, m, multiplied by the acceleration due to gravity, g (9.81 m/s² is a pretty good approximation of g on this planet), so we substitute m*g for F. By simple algebra we know that

    k = -(m*g)/x

Suppose, for example, that m = 1kg and x = -0.1m (the end of the spring moves down a tenth of a meter). Then

    k = -(1kg * 9.81m/s²)/(-0.1m) = 98.1N/m

According to this calculation, it will take 98.1 newtons of force to stretch the spring one meter.

This would all be well and good if we had complete confidence in our ability to conduct this experiment perfectly. In that case, we could take one measurement, perform the calculation, and know that we had found k. Unfortunately, experimental science hardly ever works this way, nor could we be sure that we were operating below the elastic limit of the spring. A more robust experiment is to hang a series of increasingly heavier weights on the spring, measure the stretch of the spring each time, and plot the results.

[Footnotes: The newton, written N, is the standard international unit for measuring force. It is the amount of force needed to accelerate a mass of one kilogram at a rate of one meter per second per second. A Slinky, by the way, has a spring constant of approximately 1N/m.]
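The single-measurement calculation above is easy to package as a function. The sketch below is mine (the name springConstant is not from the text), and the example values simply repeat the illustration above.

def springConstant(mass, displacement, g = 9.81):
    """mass in kilograms; displacement is the change in position of the end
       of the spring in meters (negative when the spring stretches downward)
       Returns an estimate of k in newtons/meter, using k = -(m*g)/x"""
    return -(mass*g)/displacement

print('k = ' + str(round(springConstant(1.0, -0.1), 2)) + ' N/m')   #98.1 N/m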
4,188 | we ran such an experimentand typed the results into file named springdata txtdistance (mmass (kg the function in figure reads data from file such as the one we savedand returns lists containing the distances and masses def getdata(filename)datafile open(filename' 'distances [masses [discardheader datafile readline(for line in datafiledm line split('distances append(float( )masses append(float( )datafile close(return (massesdistancesfigure extracting the data from file the function in figure uses getdata to extract the experimental data from the file and then plots it def plotdata(inputfile)massesdistances getdata(inputfilemasses pylab array(massesdistances pylab array(distancesforces masses* pylab plot(forcesdistances'bo'label 'measured displacements'pylab title('measured displacement of spring'pylab xlabel('|force(newtons)'pylab ylabel('distance (meters)'figure plotting the data |
4,189 | When plotdata('springdata.txt') is run, it produces the plot on the left. This is not what Hooke's law predicts. Hooke's law tells us that the distance should increase linearly with the mass, i.e., the points should lie on a straight line the slope of which is determined by the spring constant. Of course, we know that when we take real measurements the experimental data are rarely a perfect match for the theory. Measurement error is to be expected, so we should expect the points to lie around a line rather than on it.

Still, it would be nice to see a line that represents our best guess of where the points would have been if we had no measurement error. The usual way to do this is to fit a line to the data.

Using linear regression to find a fit

Whenever we fit any curve (including a line) to data we need some way to decide which curve is the best fit for the data. This means that we need to define an objective function that provides a quantitative assessment of how well the curve fits the data. Once we have such a function, finding the best fit can be formulated as finding a curve that minimizes (or maximizes) the value of that function, i.e., as an optimization problem (see the chapters on optimization problems).

The most commonly used objective function is called least squares. Let observed and predicted be vectors of equal length, where observed contains the measured points and predicted the corresponding data points on the proposed fit. The objective function is then defined as

    Σ (observed[i] - predicted[i])², summed over i from 0 to len(observed) - 1

Squaring the difference between observed and predicted points makes large differences between observed and predicted points relatively more important than small differences. Squaring the difference also discards information about whether the difference is positive or negative.

How might we go about finding the best least-squares fit? One way to do this would be to use a successive approximation algorithm similar to the Newton-Raphson algorithm described earlier in this book. Alternatively, there is an analytic solution that is usually applicable. But we don't have to use either, because pylab provides a built-in function, polyfit, that finds the best least-squares fit.
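The least-squares objective itself takes only a couple of lines to express with pylab arrays. The function below is my own sketch; the text never needs to write it out, since polyfit does the minimization internally.

import pylab

def sumOfSquaredErrors(observed, predicted):
    """observed and predicted are sequences of equal length
       Returns the sum of the squared differences"""
    observed = pylab.array(observed)
    predicted = pylab.array(predicted)
    return ((observed - predicted)**2).sum()

print(sumOfSquaredErrors([1, 2, 3], [2, 3, 4]))   #3: each prediction is off by exactly 1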
4,190 | the call pylab polyfit(observedxvalsobservedyvalsnfinds the coefficients of polynomial of degree that provides best leastsquares fit for the set of points defined by the arrays observedxvals and observedyvals for examplethe call pylab polyfit(observedxvalsobservedyvals will find line described by the polynomial ax bwhere is the slope of the line and the -intercept in this casethe call returns an array with two floating point values similarlya parabola is described by the quadratic equation ax bx thereforethe call pylab polyfit(observedxvalsobservedyvals returns an array with three floating point values the algorithm used by polyfit is called linear regression this may seem bit confusingsince we can use it to fit curves other than lines the method is linear in the sense that the value of the dependent variable is linear function of the independent variables and the coefficients found by the regression for examplewhen we fit quadraticwe get model of the form ax bx in such modelthe value of the dependent variable is linear in the independent variables and and the coefficients aband the function fitdata in figure extends the plotdata function in figure by adding line that represents the best fit for the data it uses polyfit to find the coefficients and band then uses those coefficients to generate the predicted spring displacement for each force notice that there is an asymmetry in the way forces and distance are treated the values in forces (which are derived from the mass suspended from the springare treated as independentand used to produce the values in the dependent variable predicteddistances ( prediction of the displacements produced by suspending the massthe function also computes the spring constantk the slope of the lineais distance/force the spring constanton the other handis force/distance consequently is the inverse of function is linear if the variables appear only in the first degreeare multiplied by constantsand are combined by addition and subtraction |
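Here is a small, self-contained use of polyfit on synthetic data (my own example; the "true" slope, intercept, and noise level are arbitrary choices). With a degree of 1 it returns two coefficients, the slope and the intercept of the best least-squares line; with a degree of 2 it returns three.

import random, pylab

xVals = pylab.array(range(20))
#synthetic observations: y = 3x + 2 plus a little Gaussian noise
yVals = pylab.array([3*x + 2 + random.gauss(0, 0.5) for x in range(20)])

a, b = pylab.polyfit(xVals, yVals, 1)      #linear fit
print('slope = ' + str(round(a, 3)) + ', intercept = ' + str(round(b, 3)))
a, b, c = pylab.polyfit(xVals, yVals, 2)   #quadratic fit also works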
4,191 | understanding experimental data def fitdata(inputfile)massesdistances getdata(inputfiledistances pylab array(distancesmasses pylab array(massesforces masses* pylab plot(forcesdistances'bo'label 'measured displacements'pylab title('measured displacement of spring'pylab xlabel('|force(newtons)'pylab ylabel('distance (meters)'#find linear fit , pylab polyfit(forcesdistances predicteddistances *pylab array(forcesb / pylab plot(forcespredicteddistanceslabel 'displacements predicted by\nlinear fitk str(round( ))pylab legend(loc 'best'figure fitting curve to data the call fitdata('springdata txt'produces the plot on the right it is interesting to observe that very few points actually lie on the least-squares fit this is plausible because we are trying to minimize the sum of the squared errorsrather than maximize the number of points that lie on the line stillit doesn' look like great fit let' try cubic fit by adding to fitdata #find cubic fit , , , pylab polyfit(forcesdistances predicteddistances *(forces** *forces** *forces pylab plot(forcespredicteddistances' :'label 'cubic fit'this produces the plot on the left the cubic fit looks like much better model of the databut is itprobably not in the technical literatureone frequently sees plots like this that include both raw data and curve fit to the data all too oftenhoweverthe authors then |
4,192 | go on to assume that the fitted curve is the description of the real situationand the raw data merely an indication of experimental error this can be dangerous recall that we started with theory that there should be linear relationship between the and valuesnot cubic one let' see what happens if we use our cubic fit to predict where the point corresponding to kg would lie the result is shown in the plot on the left now the cubic fit doesn' look so good in particularit seems highly unlikely that by hanging large weight on the spring we can cause the spring to rise above (the -value is negativethe bar from which it is suspended what we have is an example of overfitting overfitting typically occurs when model is excessively complexe it has too many parameters relative to the amount of data when this happensthe fit can capture noise in the data rather than meaningful relationships model that has been overfit usually has poor predictive poweras seen in this example finger exercisemodify the code in figure so that it produces the above plot let' go back to the linear fit for the momentforget the line and study the raw data does anything about it seem oddif we were to fit line to the rightmost six points it would be nearly parallel to the -axis this seems to contradict hooke' law-until we recall that hooke' law holds only up to some elastic limit perhaps that limit is reached for this spring somewhere around (approximately kglet' see what happens if we eliminate the last six points by replacing the second and third lines of fitdata by distances pylab array(distances[:- ]masses pylab array(masses[:- ]eliminating those points certainly makes differencee has dropped dramatically and the linear and cubic fits are almost indistinguishable but how do we know which of the two linear fits is better representation of how our spring performs up to its elastic limitwe could use some statistical test to |
4,193 | understanding experimental data determine which line is better fit for the databut that would be beside the point this is not question that can be answered by statistics after all we could throw out all the data except any two points and know that polyfit would find line that would be perfect fit for those two points one should never throw out experimental results merely to get better fit here we justified throwing out the rightmost points by appealing to the theory underlying hooke' lawi that springs have an elastic limit that justification could not have been appropriately used to eliminate points elsewhere in the data the behavior of projectiles growing bored with merely stretching springswe decided to use one of our springs to build device capable of launching projectile we used the device four times to fire projectile at target yards ( inchesfrom the launching point each timewe measured the height of the projectile at various distances from the launch point the launching point and the target were at the same heightwhich we treated as in our measurements the data was stored in file with the contents distance trial trial trial trial the first column contains distances of the projectile from the target the other columns contain the height of the projectile at that distance for each of the four trials all of the measurements are in inches the code in figure was used to plot the mean altitude of the projectile against the distance from the point of launch it also plots the best linear and quadratic fits to the points (in case you have forgotten the meaning of multiplying list by an integerthe expression [ ]*len(distancesproduces list of len(distances ' which isn' to say that people never do projectile is an object that is propelled through space by the exertion of force that stops after the projectile is launched in the interest of public safetywe will not describe the launching device used in this experiment suffice it to say that it was awesome |
4,194 | def getTrajectoryData(fileName):
    dataFile = open(fileName, 'r')
    distances = []
    heights1, heights2, heights3, heights4 = [],[],[],[]
    discardHeader = dataFile.readline()
    for line in dataFile:
        d, h1, h2, h3, h4 = line.split()
        distances.append(float(d))
        heights1.append(float(h1))
        heights2.append(float(h2))
        heights3.append(float(h3))
        heights4.append(float(h4))
    dataFile.close()
    return (distances, [heights1, heights2, heights3, heights4])

def processTrajectories(fileName):
    distances, heights = getTrajectoryData(fileName)
    numTrials = len(heights)
    distances = pylab.array(distances)
    #Get array containing mean height at each distance
    totHeights = pylab.array([0]*len(distances))
    for h in heights:
        totHeights = totHeights + pylab.array(h)
    meanHeights = totHeights/len(heights)
    pylab.title('Trajectory of Projectile (Mean of '
                + str(numTrials) + ' Trials)')
    pylab.xlabel('Inches from Launch Point')
    pylab.ylabel('Inches Above Launch Point')
    pylab.plot(distances, meanHeights, 'bo')
    a,b = pylab.polyfit(distances, meanHeights, 1)
    altitudes = a*distances + b
    pylab.plot(distances, altitudes, 'b', label = 'Linear Fit')
    a,b,c = pylab.polyfit(distances, meanHeights, 2)
    altitudes = a*(distances**2) + b*distances + c
    pylab.plot(distances, altitudes, 'b:', label = 'Quadratic Fit')
    pylab.legend()

Figure: Plotting the trajectory of a projectile

A quick look at the plot on the right makes it quite clear that a quadratic fit is far better than a linear one. (The reason that the quadratic fit is not smooth is that we are only plotting the predicted heights that correspond to the measured heights.) But just how bad a fit is the line, and how good is the quadratic fit?

Footnote: Don't be misled by this plot into thinking that the projectile had a steep angle of ascent. It only looks that way because of the difference in scale between the vertical and horizontal axes on the plot.
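The excerpt does not show the call that actually drives this code. A hypothetical invocation (the file name launcherData.txt is an assumption for illustration, not taken from this excerpt) would be:

processTrajectories('launcherData.txt')

Note that getTrajectoryData discards the header line and expects every remaining line of the file to contain exactly five whitespace-separated numbers, matching the file format described above.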
4,195 | Coefficient of Determination

When we fit a curve to a set of data, we are finding a function that relates an independent variable (inches horizontally from the launch point in this example) to a predicted value of a dependent variable (inches above the launch point in this example). Asking about the goodness of a fit is equivalent to asking about the accuracy of these predictions. Recall that the fits were found by minimizing the mean square error. This suggests that one could evaluate the goodness of a fit by looking at the mean square error. The problem with that approach is that while there is a lower bound for the mean square error (zero), there is no upper bound. This means that while the mean square error is useful for comparing the relative goodness of two fits to the same data, it is not particularly useful for getting a sense of the absolute goodness of a fit.

We can calculate the absolute goodness of a fit using the coefficient of determination, often written as $R^2$. Let $y_i$ be the $i$th observed value, $p_i$ be the corresponding value predicted by the model, and $\mu$ be the mean of the observed values:

$$R^2 = 1 - \frac{\sum_i (y_i - p_i)^2}{\sum_i (y_i - \mu)^2}$$

By comparing the estimation errors (the numerator) with the variability of the original values (the denominator), $R^2$ is intended to capture the proportion of variability in a data set that is accounted for by the statistical model provided by the fit. When the model being evaluated is produced by a linear regression, the value of $R^2$ always lies between 0 and 1. If $R^2 = 1$, the model explains all of the variability in the data. If $R^2 = 0$, there is no relationship between the values predicted by the model and the actual data.

The code in the figure below provides a straightforward implementation of this statistical measure. Its compactness stems from the expressiveness of the operations on arrays. The expression (predicted - measured)**2 subtracts the elements of one array from the elements of another, and then squares each element in the result. The expression (measured - meanOfMeasured)**2 subtracts the scalar value meanOfMeasured from each element of the array measured, and then squares each element of the result.

def rSquared(measured, predicted):
    """Assumes measured a one-dimensional array of measured values
              predicted a one-dimensional array of predicted values
       Returns coefficient of determination"""
    estimateError = ((predicted - measured)**2).sum()
    meanOfMeasured = measured.sum()/float(len(measured))
    variability = ((measured - meanOfMeasured)**2).sum()
    return 1 - estimateError/variability

Figure: Computing R^2

Footnote: There are several different definitions of the coefficient of determination. The definition supplied here is used to evaluate the quality of a fit produced by a linear regression.
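As a quick sanity check (this snippet is our own, not from the book, and it assumes the rSquared function above is in scope), the two extreme cases behave as described: predicting every measured value exactly yields 1.0, while predicting the mean of the measured values everywhere yields 0.0.

import pylab

measured = pylab.array([1.0, 2.0, 3.0, 4.0])
print rSquared(measured, measured)              #perfect predictions -> 1.0
print rSquared(measured, pylab.array([2.5]*4))  #predicting the mean -> 0.0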
4,196 | When the lines of code

print 'RSquared of linear fit =', rSquared(meanHeights, altitudes)

and

print 'RSquared of quadratic fit =', rSquared(meanHeights, altitudes)

are inserted after the corresponding calls to pylab.plot in processTrajectories, they print the R^2 values for the linear and quadratic fits. Roughly speaking, these values tell us that only a small fraction of the variation in the measured data can be explained by the linear model, but that nearly all of the variation can be explained by the quadratic model.

Using a Computational Model

Now that we have what seems to be a good model of our data, we can use this model to help answer questions about our original data. One interesting question is the horizontal speed at which the projectile is traveling when it hits the target. We might use the following train of thought to design a computation that answers this question.

We know that the trajectory of the projectile is given by a formula of the form y = ax^2 + bx + c, i.e., it is a parabola. Since every parabola is symmetrical around its vertex, we know that its peak occurs halfway between the launch point and the target; call this value xMid. The peak height, yPeak, is therefore given by yPeak = a*xMid^2 + b*xMid + c.

If we ignore air resistance (remember that no model is perfect), we can compute the amount of time it takes for the projectile to fall from yPeak to the height of the target, because that is purely a function of gravity. It is given by the equation t = sqrt(2*yPeak/g). This is also the amount of time it takes for the projectile to travel the horizontal distance from xMid to the target, because once it reaches the target it stops moving.

Given the time to go from xMid to the target, we can easily compute the average horizontal speed of the projectile over that interval. If we assume that the projectile was neither accelerating nor decelerating in the horizontal direction during that interval, we can use the average horizontal speed as an estimate of the horizontal speed when the projectile hits the target.

The code in the figure below implements this technique for estimating the horizontal velocity of the projectile.

Footnotes: This equation can be derived from first principles, but it is easier to just look it up in a physics reference. The vertical component of the velocity is also easily estimated, since it is merely the product of the g and t in the figure below.
4,197 | understanding experimental data def gethorizontalspeed(abcminxmaxx)"""assumes minx and maxx are distances in inches returns horizontal speed in feet per second""inchesperfoot xmid (maxx minx)/ ypeak *xmid** *xmid *inchesperfoot #accel of gravity in inches/sec/sec ( *ypeak/ )** print 'horizontal speed ='int(xmid/( *inchesperfoot))'feet/secfigure computing the horizontal speed of projectile when the line gethorizontalspeed(abcdistances[- ]distances[ ]is inserted at the end of processtrajectoriesit prints horizontal speed feet/sec the sequence of steps we have just worked through follows common pattern we started by performing an experiment to get some data about the behavior of physical system we then used computation to find and evaluate the quality of model of the behavior of the system finallywe used some theory and analysis to design simple computation to derive an interesting consequence of the model fitting exponentially distributed data polyfit uses linear regression to find polynomial of given degree that is the best least-squares fit for some data it works well if the data can be directly approximated by polynomial but this is not always possible considerfor examplethe simple exponential growth function the code in figure fits th-degree polynomial to the first ten points and plots the results it uses the function call pylab arange( )which returns an array containing the integers - vals [for in range( )vals append( **ipylab plot(vals,'bo'label 'actual points'xvals pylab arange( , , , , pylab polyfit(xvalsvals yvals *(xvals** *(xvals** *(xvals** ) *xvals pylab plot(yvals'bx'label 'predicted points'markersize pylab title('fitting ** 'pylab legend(figure fitting polynomial curve to an exponential distribution |
4,198 | The code in the figure above produces a plot on which the fit appears to be a good one, at least for these data points. However, let's look at what the model predicts for 3**20. When we add the code

pred3to20 = a*(20**4) + b*(20**3) + c*(20**2) + d*20 + e
print 'Model predicts that 3**20 is roughly', round(pred3to20)
print 'Actual value of 3**20 is', 3**20

to the end of the figure above, it prints a predicted value for 3**20 that is nowhere near the actual value, 3486784401.

Oh dear! Despite fitting the data, the model produced by polyfit is apparently not a good one. Is it because four was not the right degree? No. It is because no polynomial is a good fit for an exponential distribution. Does this mean that we cannot use polyfit to build a model of an exponential distribution? Fortunately, it does not, because we can use polyfit to find a curve that fits the original independent values and the log of the dependent values.

Consider the sequence [1, 2, 4, 8, 16, 32, 64]. If we take the log base 2 of each value, we get the sequence [0, 1, 2, 3, 4, 5, 6], a sequence that grows linearly. In fact, if a function y = f(x) exhibits exponential growth, the log (to any base) of f(x) grows linearly. This can be visualized by plotting an exponential function with a logarithmic y-axis. The code

xVals, yVals = [], []
for i in range(10):
    xVals.append(i)
    yVals.append(3**i)
pylab.plot(xVals, yVals)
pylab.semilogy()

produces the plot on the right. The fact that taking the log of an exponential function produces a linear function can be used to construct a model for an exponentially distributed set of data points, as illustrated by the code in the figure below.
4,199 | We use polyfit to find a curve that fits the original x values and the log of the y values. Notice that we use yet another Python standard library module, math, which supplies a log function.

import math

#define an arbitrary exponential function
def f(x):
    return 3*(2**(1.2*x))

def createExpData(f, xVals):
    """Assumes f is an exponential function of one argument
              xVals is an array of suitable arguments for f
       Returns array containing results of applying f to the
               elements of xVals"""
    yVals = []
    for i in range(len(xVals)):
        yVals.append(f(xVals[i]))
    return pylab.array(xVals), pylab.array(yVals)

def fitExpData(xVals, yVals):
    """Assumes xVals and yVals arrays of numbers such that
         yVals[i] == f(xVals[i])
       Returns a, b, base such that log(f(x), base) == ax + b"""
    logVals = []
    for y in yVals:
        logVals.append(math.log(y, 2.0)) #get log base 2
    a,b = pylab.polyfit(xVals, logVals, 1)
    return a, b, 2.0

xVals, yVals = createExpData(f, range(10))
pylab.plot(xVals, yVals, 'ro', label = 'Actual values')
a, b, base = fitExpData(xVals, yVals)
predictedYVals = []
for x in xVals:
    predictedYVals.append(base**(a*x + b))
pylab.plot(xVals, predictedYVals, label = 'Predicted values')
pylab.title('Fitting an Exponential Function')
pylab.legend()

#Look at a value for x not in original data
print 'f(20) =', f(20)
print 'Predicted f(20) =', base**(a*20 + b)

Figure: Using polyfit to fit an exponential distribution

When run, this code produces the plot on the right, in which the actual values and the predicted values coincide. Moreover, when the model is tested on a value (20) that was not used to produce the fit, the value it prints for f(20) and the value it prints for Predicted f(20) agree to within rounding.
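As a small follow-on sketch (our own addition, not code from the book), the values returned by fitExpData can be packaged into an ordinary Python function, which makes it convenient to apply the fitted model to new x values:

def makeExpModel(a, b, base):
    """Returns a function that maps x to base**(a*x + b)"""
    def model(x):
        return base**(a*x + b)
    return model

#Assuming a, b, and base were produced by fitExpData as above
expModel = makeExpModel(a, b, base)
print 'Predicted f(20) =', expModel(20)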