This method of using polyfit to find a model for data works well when the relationship can be described by an equation of the form base**(ax+b). If used on data that cannot be described this way, it will yield erroneous results. To see this, let's try replacing the body of the function by return 3*(2**(1.5*x)). It now prints a predicted value that is wildly different from the actual value of the function.

When Theory Is Missing

In this chapter we have emphasized the interplay between theoretical, experimental, and computational science. Sometimes, however, we find ourselves with lots of interesting data, but little or no theory. In such cases, we often resort to using computational techniques to develop a theory by building a model that seems to fit the data.

In an ideal world, we would run a controlled experiment (e.g., hang weights from a spring), study the results, and retrospectively formulate a model consistent with those results. We would then run a different prospective experiment (e.g., hang different weights from the same spring) and compare the results of that experiment to what the model predicted.

Unfortunately, in many cases it is impossible to run even one controlled experiment. Imagine, for example, building a model designed to shed light on how interest rates affect stock prices. Very few of us are in a position to set interest rates and see what happens. On the other hand, there is no shortage of relevant historical data.

In such situations, one can simulate a set of experiments by dividing the existing data into a training set and a holdout set. Without looking at the holdout set, we build a model that seems to explain the training set. For example, we find a curve that has a reasonable R^2 for the training set. We then test that model on the holdout set. Most of the time the model will fit the training set more closely than it fits the holdout set. But if the model is a good one, it should fit the holdout set reasonably well. If it doesn't, the model should probably be discarded.

How does one choose the training set? We want it to be representative of the data set as a whole. One way to do this is to randomly choose the samples for the training set. If the data set is sufficiently large, this often works pretty well.

A related but slightly different way to check a model is to train on many randomly selected subsets of the original data, and see how similar the models are to one another. If they are quite similar, then we can feel pretty good. This approach is known as cross validation.
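To make the training/holdout idea concrete, here is a minimal sketch, assuming NumPy is available and that the observed data live in two NumPy arrays. The function names, the 50/50 split, and the use of a polynomial fit are illustrative choices, not something fixed by the text above.

import random
import numpy as np

def splitData(xVals, yVals):
    """Randomly partition the points into a training set
       and a holdout set of roughly equal size"""
    indices = list(range(len(xVals)))
    random.shuffle(indices)
    half = len(indices)//2
    trainIdx, holdIdx = indices[:half], indices[half:]
    return (xVals[trainIdx], yVals[trainIdx],
            xVals[holdIdx], yVals[holdIdx])

def rSquared(observed, predicted):
    """Return the coefficient of determination, R^2"""
    error = ((predicted - observed)**2).sum()
    meanError = ((observed - observed.mean())**2).sum()
    return 1 - error/meanError

def testFit(xVals, yVals, degree):
    """Fit a model to a random half of the data, then see
       how well it does on the other half"""
    trainX, trainY, holdX, holdY = splitData(xVals, yVals)
    model = np.polyfit(trainX, trainY, degree)
    print('R^2 on training set:',
          rSquared(trainY, np.polyval(model, trainX)))
    print('R^2 on holdout set:',
          rSquared(holdY, np.polyval(model, holdX)))

If the model is a good one, the two printed values should be reasonably close; a large drop on the holdout set is a sign of overfitting.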
"if you can' prove what you want to provedemonstrate something else and pretend they are the same thing in the daze that follows the collision of statistics with the human mindhardly anyone will notice the difference " statistical thinking is relatively new invention for most of recorded history things were assessed qualitatively rather than quantitatively people must have had an intuitive sense of some statistical facts ( that women are usually shorter than men)but they had no mathematical tools that would allow them to proceed from anecdotal evidence to statistical conclusions this started to change in the middle of the th centurymost notably with the publication of john graunt' natural and political observations made upon the bills of mortality this pioneering work used statistical analysis to estimate the population of london from death rollsand attempted to provide model that could be used to predict the spread of plague since that time people have used statistics as much to mislead as to inform some have willfully used statistics to misleadothers have merely been incompetent in this we discuss few ways in which people can be fooled into drawing inappropriate inferences from statistical data we trust that you will use this information only for goodi to become better consumer and more honest purveyor of statistical information garbage in garbage out (gigo"on two occasions have been asked [by members of parliament]'praymr babbageif you put into the machine wrong figureswill the right answers come out? am not able rightly to apprehend the kind of confusion of ideas that could provoke such question charles babbage the message here is simple one if the input data is seriously flawedno amount of statistical massaging will produce meaningful result the united states census showed that insanity among free blacks and mulattoes was roughly ten times more common than among enslaved blacks and mulattoes the conclusion was obvious as senator (and former vice president and future secretary of statejohn calhoun put it"the data on insanity revealed in this census is unimpeachable from it our nation must conclude that the abolition of slavery would be to the african curse never mind that it was soon clear that the census was riddled with errors as calhoun reportedly explained to john quincy adams"there were so many errors they darrell huffhow to lie with statistics
4,202
balanced one another, and led to the same conclusion as if they were all correct."

Calhoun's (perhaps willfully) spurious response to Adams was based on a classical error, the assumption of independence. Were he more sophisticated mathematically, he might have said something like, "I believe that the measurement errors are unbiased and independent of each other, and therefore evenly distributed on either side of the mean." In fact, later analysis showed that the errors were so heavily biased that no statistically valid conclusions could be drawn.

We should note that Calhoun was in office over 150 years ago. It goes without saying that no contemporary politician would find ways to abuse statistics to support a position.

Pictures Can Be Deceiving

There can be no doubt about the utility of graphics for quickly conveying information. However, when used carelessly (or maliciously) a plot can be highly misleading. Consider, for example, the following charts depicting housing prices in the U.S. Midwestern states.

Looking at the chart on the left, it seems as if housing prices were pretty stable from 2002 through 2008. But wait a minute! Wasn't there a collapse of residential real estate followed by a global financial crisis in late 2008? There was indeed, as shown in the chart on the right. These two charts show exactly the same data, but convey very different impressions.

The first chart was designed to give the impression that housing prices had been stable. On the y-axis, the designer used a logarithmic scale ranging from the absurdly low average price for a house of $10,000 to the improbably high average price of $1 million. This minimized the amount of space devoted to the area where prices are changing, giving the impression that the changes were relatively small. The chart above and on the right was designed to give the impression that housing prices moved erratically, and then crashed. The
designer used a linear scale and a narrow range of prices, so the sizes of the changes were exaggerated.

The code in the figure below produces the two plots we looked at above, and a plot intended to give an accurate impression of the movement of housing prices. It uses two plotting facilities that we have not yet seen. The call pylab.bar(quarters, prices, width) produces a bar chart with bars of the given width. The left edges of the bars are the values of the elements of quarters, and the heights of the bars are the values of the corresponding elements of prices. The call pylab.xticks(quarters+width/2.0, labels) describes the labels associated with the bars. The first argument specifies where each label is to be placed and the second argument the text of the labels. The function yticks behaves analogously.

import pylab

def plotHousing(impression):
    """Assumes impression a str. Must be one of 'flat',
         'volatile', and 'fair'
       Produce bar chart of housing prices over time"""
    f = open('midWestHousingPrices.txt', 'r')
    #Each line of file contains year quarter price
    #for Midwest region of U.S.
    labels, prices = ([], [])
    for line in f:
        year, quarter, price = line.split()
        label = year[2:4] + '\n Q' + quarter[1]
        labels.append(label)
        prices.append(float(price)/1000)
    quarters = pylab.arange(len(labels)) #x coords of bars
    width = 0.8 #width of bars
    if impression == 'flat':
        pylab.semilogy()
    pylab.bar(quarters, prices, width)
    pylab.xticks(quarters + width/2.0, labels)
    pylab.title('Housing Prices in U.S. Midwest')
    pylab.xlabel('Quarter')
    pylab.ylabel('Average Price ($1,000\'s)')
    if impression == 'flat':
        pylab.ylim(10, 10**3)
    elif impression == 'volatile':
        pylab.ylim(180, 220)
    elif impression == 'fair':
        pylab.ylim(150, 250)
    else:
        raise ValueError

plotHousing('flat')
pylab.figure()
plotHousing('volatile')
pylab.figure()
plotHousing('fair')

Figure: Plotting housing prices
The call plotHousing('fair') produces the plot below.

Cum Hoc Ergo Propter Hoc

It has been shown that college students who regularly attend class have higher average grades than students who attend class only sporadically. Those of us who teach these classes would like to believe that this is because the students learn something from the lectures. Of course, it is at least equally likely that those students get better grades because students who are more likely to attend classes are also more likely to study hard.

When two things are correlated, there is a temptation to assume that one has caused the other. Consider the incidence of flu in North America. The number of cases rises and falls in a predictable pattern. There are almost no cases in the summer; the number of cases starts to rise in the early fall, and then starts dropping as summer approaches. Now consider the number of children attending school. There are very few children in school in the summer; enrollment starts to rise in the early fall, and then drops as summer approaches.

The correlation between the opening of schools and the rise in the incidence of flu is inarguable. This has led many to conclude that going to school is an important causative factor in the spread of flu. That might be true, but one cannot conclude it based simply on the correlation. Correlation does not imply causation! After all, the correlation could be used just as easily to justify the belief that flu outbreaks cause schools to be in session. Or perhaps there is no causal relationship in either direction, and there is some lurking variable, one we have not considered, that causes each.

Statisticians, like attorneys and physicians, sometimes use Latin for no obvious reason other than to seem erudite. The phrase "cum hoc ergo propter hoc" means "with this, therefore because of this."

Correlation is a measure of the degree to which two variables move in the same direction. If when x goes up y goes up, the variables are positively correlated. If they move in opposite directions they are negatively correlated. If there is no relationship, the correlation is 0. People's heights are positively correlated with the heights of their parents. The correlation between hours spent playing video games and grade point average is negative.
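The definition of correlation just given is easy to check numerically. A minimal sketch, assuming NumPy is available; the data values are made up purely for illustration.

import numpy as np

#Hypothetical data: y1 moves with x, y2 moves against it,
#y3 has essentially no relationship to x.
x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y1 = np.array([2.1, 3.9, 6.2, 8.0, 9.8, 12.1])
y2 = np.array([11.8, 10.1, 8.2, 5.9, 4.0, 2.2])
y3 = np.array([5.0, 1.0, 4.0, 2.0, 5.0, 3.0])

for y in (y1, y2, y3):
    #np.corrcoef returns a 2x2 matrix; the off-diagonal entry
    #is the correlation between the two arguments
    print(round(np.corrcoef(x, y)[0, 1], 3))

This prints values near +1, near -1, and near 0, respectively.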
In fact, as it happens, the flu virus survives considerably longer in cool dry air than it does in warm wet air, and in North America both the flu season and school sessions are correlated with cooler and dryer weather.

Given enough retrospective data, it is always possible to find two variables that are correlated, as illustrated by the chart on the right. When such correlations are found, the first thing to do is to ask whether there is a plausible theory explaining the correlation.

Falling prey to the cum hoc ergo propter hoc fallacy can be quite dangerous. At the start of 2002, roughly six million American women were being prescribed hormone replacement therapy (HRT) in the belief that it would substantially lower their risk of cardiovascular disease. That belief was supported by several highly reputable published studies that demonstrated a reduced incidence of cardiovascular death among women using HRT.

Many women, and their physicians, were taken by surprise when the Journal of the American Medical Association published an article asserting that HRT in fact increased the risk of cardiovascular disease. How could this have happened? Re-analysis of some of the earlier studies showed that women undertaking HRT were likely to be from groups with better than average diet and exercise regimes. Perhaps the women undertaking HRT were on average more health conscious than the other women in the study, so that taking HRT and improved cardiac health were coincident effects of a common cause.

Statistical Measures Don't Tell the Whole Story

There are an enormous number of different statistics that can be extracted from a data set. By carefully choosing among these, it is possible to convey a variety of different impressions about the same data. A good antidote is to look at the data set itself. In 1973 the statistician F.J. Anscombe published a paper containing the table below. It contains the <x, y> coordinates of the points in each of four data sets.

Stephen Johnson, "The Trouble with QSAR (or How I Learned to Stop Worrying and Embrace Fallacy)," J. Chem. Inf. Model., 2008.

Nelson HD, Humphrey LL, Nygren P, Teutsch SM, Allan JD, "Postmenopausal hormone replacement therapy: scientific review," JAMA, 2002; 288:872-881.
These four data sets are statistically similar. They have the same mean value for x (9.0), the same mean value for y (7.5), the same variance for x (10.0), the same variance for y (3.75), and the same correlation between x and y (0.816). Furthermore, if we use linear regression to fit a line to each, we get the same result for each, y = 0.5x + 3.

Does this mean that there is no obvious way to distinguish these data sets from each other? No. One simply needs to plot the data to see that the data sets are not at all alike. The moral is simple: if possible, always take a look at some representation of the raw data.
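Those summary statistics are easy to verify. A minimal sketch using the first two of Anscombe's four data sets (the values are from his 1973 paper); it assumes NumPy is available.

import numpy as np

x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
               7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10,
               6.13, 3.10, 9.13, 7.26, 4.74])

for y in (y1, y2):
    print('mean x =', round(x.mean(), 2), ' mean y =', round(y.mean(), 2))
    print('var x  =', round(x.var(), 2),  ' var y  =', round(y.var(), 2))
    print('corr   =', round(np.corrcoef(x, y)[0, 1], 3))
    print('fit    =', np.polyfit(x, y, 1))  #slope ~0.5, intercept ~3

Both data sets print essentially identical statistics, yet a quick pylab.plot(x, y1, 'o') next to pylab.plot(x, y2, 'o') shows that one is roughly linear and the other clearly curved.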
Sampling Bias

During World War II, whenever an Allied plane would return from a mission over Europe, the plane would be inspected to see where the flak had impacted. Based upon this data, mechanics reinforced those areas of the planes that seemed most likely to be hit by flak.

What's wrong with this? They did not inspect the planes that failed to return from missions because they had been downed by flak. Perhaps these unexamined planes failed to return precisely because they were hit in the places where the flak would do the most damage. This particular error is called non-response bias. It is quite common in surveys. At many universities, for example, students are asked during one of the lectures late in the term to fill out a form rating the quality of the professor's lectures. Though the results of such surveys are often unflattering, they could be worse: those students who think that the lectures are so bad that they aren't worth attending are not included in the survey.

The move to online surveys, which allows students who do not attend class to participate in the survey, does not augur well for the egos of professors.

As we said earlier, all statistical techniques are based upon the assumption that by sampling a subset of a population we can infer things about the population as a whole. If random sampling is used, we can make precise mathematical statements about the expected relationship of the sample to the entire population. Unfortunately, many studies, particularly in the social sciences, are based on what has been called convenience (or accidental) sampling. This involves choosing samples based on how easy they are to procure. Why do so many psychological studies use populations of undergraduates? Because they are easy to find on college campuses. A convenience sample might be representative, but there is no way of knowing whether it actually is representative.

The Family Research Institute's web site contains a table titled "How Long Do Homosexuals Live?"
Pretty scary stuff if your sexual preference is other than heterosexual, until one looks at how the data was compiled. According to the web site, it was based on obituaries from homosexual journals, compared to obituaries from mainstream newspapers. This method produces a sample that could be non-representative of either the homosexual or non-homosexual population (or both) for a large number of reasons. For example, it seems to infer that someone is gay or lesbian if and only if their obituary appears in a "homosexual journal," and that someone is not gay if their obituary appears in a "mainstream newspaper." It also seems to assume that the deaths for which obituaries appear are representative of all deaths.

How does one go about evaluating such a sample? One technique is to compare data compiled from the sample against data compiled elsewhere. For example, one could compare the ratio of gay men to straight men in the obituary study to other studies reporting the relative sizes of those two populations.

Context Matters

It is easy to read more into the data than it actually implies, especially when viewing the data out of context. In April 2009, CNN reported that "Mexican health officials suspect that the swine flu outbreak has caused more than 159 deaths and roughly 2,500 illnesses." Pretty scary stuff, until one compares it to the roughly 36,000 deaths attributable annually to the seasonal flu in the U.S.

An often quoted, and accurate, statistic is that most auto accidents happen within 10 miles of home. So what? Most driving is done within 10 miles of home! And besides, what does "home" mean in this context? The statistic is computed using the address at which the automobile is registered as "home." Might one reduce the probability of getting into an accident by merely registering one's car in some distant place?

Opponents of government initiatives to reduce the prevalence of guns in the U.S. are fond of quoting the statistic that roughly 99.8% of the firearms in the U.S. will not be used to commit a violent crime in any given year. But does this mean that there is not much gun violence in the U.S.? The National Rifle Association reports that there are roughly 300 million privately owned firearms in the U.S., and 0.2% of 300 million is 600,000!

Beware of Extrapolation

It is all too easy to extrapolate from data. We did that earlier when we extended fits derived from linear regression beyond the data upon which the regression was done. Extrapolation should be done only when one has a sound theoretical justification for doing so. One should be especially wary of straight-line extrapolations.
Consider the plot on the left. It shows the growth of Internet usage in the United States from 1994 to 2000. As you can see, a straight line provides a pretty good fit. The plot on the right uses this fit to project the percentage of the U.S. population using the Internet in following years. The projection is a bit hard to believe. It seems unlikely that by 2009 everybody in the U.S. was using the Internet, and even less likely that by 2015 more than 140% of the U.S. population was using the Internet.

The Texas Sharpshooter Fallacy

Imagine that you are driving down a country road in Texas. You see a barn that has six targets painted on it, and a bullet hole at the very center of each target. "Yes sir," says the owner of the barn, "I never miss." "That's right," says his spouse, "there ain't a man in the state of Texas who's more accurate with a paint brush." Got it? He fired the six shots, and then painted the targets around them.

[Figure: Professor puzzles over students' chalk-throwing ability]

A classic of the genre appeared in 2001. It reported that a research team at the Royal Cornhill Hospital in Aberdeen had discovered that "anorexic women are most likely to have been born in the spring or early summer."

Eagles, John, et al., "Season of birth in females with anorexia nervosa in Northeast Scotland," International Journal of Eating Disorders, September 2001.
Between March and June there were 13% more anorexics born than average, and 30% more in June itself.

Let's look at that worrisome statistic for those women born in June. The team studied 446 women who had been diagnosed as anorexic, so the mean number of births per month was slightly more than 37. This suggests that the number born in June was 48 (37*1.3). Let's write a short program to see if we can reject the null hypothesis that this occurred purely by chance.

import random

def juneProb(numTrials):
    june48 = 0
    for trial in range(numTrials):
        june = 0
        for i in range(446):
            if random.randint(1, 12) == 6:
                june += 1
        if june >= 48:
            june48 += 1
    jProb = june48/float(numTrials)
    print('Probability of at least 48 births in June =', jProb)

Figure: Probability of anorexics being born in June

When we ran juneProb with a large number of trials, it reported that the probability of at least 48 births in June was around 0.045. It looks as if the probability of at least 48 babies being born in June purely by chance is around 4.5%. So perhaps those researchers in Aberdeen are on to something. Well, they might have been on to something had they started with the hypothesis that more babies who will become anorexic are born in June, and then run a study designed to check that hypothesis. But that is not what they did. Instead, they looked at the data and then, imitating the Texas sharpshooter, drew a circle around June. The right statistical question to have asked is: what is the probability that there was at least one month (out of 12) in which at least 48 babies were born? The program below answers that question.

def anyProb(numTrials):
    anyMonth48 = 0
    for trial in range(numTrials):
        months = [0]*12
        for i in range(446):
            months[random.randint(0, 11)] += 1
        if max(months) >= 48:
            anyMonth48 += 1
    aProb = anyMonth48/float(numTrials)
    print('Probability of at least 48 births in some month =', aProb)

Figure: Probability of anorexics being born in some month
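Before looking at what anyProb reports, note that the June probability can be cross-checked analytically, since the number of June births among 446 women is binomially distributed with p = 1/12. A minimal sketch using only the standard library (math.comb requires Python 3.8 or later); no such check appears in the original text.

from math import comb

def probAtLeast(k, n, p):
    """Probability of at least k successes in n independent
       trials, each succeeding with probability p"""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

print(probAtLeast(48, 446, 1/12))  #should be close to juneProb's estimate

The any-month probability is harder to get in closed form, because the twelve monthly counts are not independent (they must sum to 446). That is why a simulation like anyProb is the natural tool there.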
When we ran anyProb with the same number of trials, it reported that the probability of at least 48 births in some month was around 0.45. It appears that it is not so unlikely after all that the results reported in the study reflect a chance occurrence rather than a real association between birth month and anorexia. One doesn't have to come from Texas to fall victim to the Texas sharpshooter fallacy.

What we see here is that the statistical significance of a result depends upon the way the experiment was conducted. If the Aberdeen group had started out with the hypothesis that more anorexics are born in June, their result would be worth considering. But if they started off with the hypothesis that there exists a month in which an unusually large proportion of anorexics are born, their result is not very compelling.

What next steps might the Aberdeen group have taken to test their newfound hypothesis? One possibility is to conduct a prospective study. In a prospective study, one starts with a set of hypotheses and then gathers data with the potential to either refute or confirm the hypothesis. If the group conducted a new study and got similar results, one might be convinced.

Prospective studies can be expensive and time consuming to perform. In a retrospective study, one has to examine existing data in ways that reduce the likelihood of getting misleading results. One common technique, as discussed earlier, is to split the data into a training set and a holdout set. For example, they could have chosen 446/2 women at random from their data (the training set), and tallied the number of births for each month. They could have then compared that to the number of births each month for the remaining women (the holdout set).

Percentages Can Confuse

An investment advisor called a client to report that the value of his stock portfolio had risen over the last month. He admitted that there had been some ups and downs over the year, but was pleased to report that the average monthly change was +0.5%. Imagine the client's surprise when he got his statement for the year, and observed that the value of his portfolio had declined over the year.

He called his advisor, and accused him of being a liar. "It looks to me," he said, "like my portfolio declined by about 8%, and you told me that it went up by 0.5% a month." "I did not," the financial advisor replied. "I told you that the average monthly change was +0.5%." When he examined his monthly statements, the investor realized that he had not been lied to, just misled. His portfolio went down by 15% in each month during the first half of the year, and then went up by 16% in each month during the second half of the year.

When thinking about percentages, we always need to pay attention to the basis on which the percentage is computed. In this case, the declines were on a higher average basis than the increases.
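A two-line check of the arithmetic behind the story, using the 15% and 16% monthly moves given above:

#Six months of -15% followed by six months of +16%
final = (0.85**6) * (1.16**6)
print('average monthly change =', (6*-15 + 6*16)/12, '%')        #+0.5
print('change over the year   =', round((final - 1)*100, 1), '%') #about -8.1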
Percentages can be particularly misleading when applied to a small basis. You might read about a drug that has a side effect of increasing the incidence of some illness by a frightening-sounding percentage. But if the base incidence of the disease is very low, say one in 1,000,000, you might well decide that the risk of taking the drug was more than counterbalanced by the drug's positive effects.

Just Beware

It would be easy, and fun, to fill a few hundred pages with a history of statistical abuses. But by now you have probably gotten the message: it's just as easy to lie with numbers as it is to lie with words. Make sure that you understand what is actually being measured and how those "statistically significant" results were computed before you jump to conclusions.
The notion of an optimization problem provides a structured way to think about solving lots of computational problems. Whenever you set about solving a problem that involves finding the biggest, the smallest, the most, the fewest, the fastest, the least expensive, etc., there is a good chance that you can map the problem onto a classic optimization problem for which there is a known computational solution.

In general, an optimization problem has two parts:

1. An objective function that is to be maximized or minimized. For example, the airfare between Boston and Istanbul.

2. A set of constraints (possibly empty) that must be honored. For example, an upper bound on the travel time.

In this chapter we introduce the notion of an optimization problem and give a few examples. We also provide some simple algorithms that solve them. In the next chapter we discuss more efficient ways of solving an important class of optimization problems.

The main things to take away from this chapter are:

- Many problems of real importance can be simply formulated in a way that leads naturally to a computational solution.
- Reducing a seemingly new problem to an instance of a well-known problem allows one to use preexisting solutions.
- Exhaustive enumeration algorithms provide a simple, but often computationally intractable, way to search for optimal solutions.
- A greedy algorithm is often a practical approach to finding a pretty good, but not always optimal, solution to an optimization problem.
- Knapsack problems and graph problems are classes of problems to which other problems can often be reduced.

As usual, we will supplement the material on computational thinking with a few bits of Python and some tips about programming.

Knapsack Problems

It's not easy being a burglar. In addition to the obvious problems (making sure that a home is empty, picking locks, circumventing alarms, dealing with ethical quandaries, etc.), a burglar has to decide what to steal. The problem is that most homes contain more things of value than the average burglar can carry away. What's a poor burglar to do? He needs to find the set of things that provides the most value without exceeding his carrying capacity.
Suppose, for example, a burglar who has a knapsack that can hold at most 20 pounds of loot breaks into a house and finds the items in the table below. Clearly, he will not be able to fit it all in his knapsack, so he needs to decide what to take and what to leave behind.

Item      Value  Weight  Value/Weight
clock     175    10      17.5
painting   90     9      10
radio      20     4       5
vase       50     2      25
book       10     1      10
computer  200    20      10

Figure: Table of items

Greedy Algorithms

The simplest way to find an approximate solution to this problem is to use a greedy algorithm. The thief would choose the best item first, then the next best, and continue until he reached his limit. Of course, before doing this, the thief would have to decide what "best" should mean. Is the best item the most valuable, the least heavy, or maybe the item with the highest value-to-weight ratio? If he chose highest value, he would leave with just the computer, which he could fence for $200. If he chose lowest weight, he would take, in order, the book, the vase, the radio, and the painting, which would be worth a total of $170. Finally, if he decided that best meant highest value-to-weight ratio, he would start by taking the vase and the clock. That would leave three items with a value-to-weight ratio of 10, but of those only the book would still fit in the knapsack. After taking the book, he would take the remaining item that still fit, the radio. The total value of his loot would be $255.

Though greedy-by-density (value-to-weight ratio) happens to yield the best result for this data set, there is no guarantee that a greedy-by-density algorithm always finds a better solution than greedy by weight or value. More generally, there is no guarantee that any solution to this kind of knapsack problem that is found by a greedy algorithm will be optimal.

The code in the next two figures implements all three of these greedy algorithms. In the first figure we define class Item. Each Item has a name, value, and weight attribute.

For those of you too young to remember, a "knapsack" is a simple bag that people used to carry on their back, long before "backpacks" became fashionable. If you happen to have been in scouting you might remember the words of "The Happy Wanderer": "I love to go a-wandering, / Along the mountain track, / And as I go, I love to sing, / My knapsack on my back."

There is probably some deep moral lesson to be extracted from this fact, and it is probably not "greed is good."
The only interesting code is the implementation of the function greedy. By introducing the parameter keyFunction, we make greedy independent of the order in which the elements of the list are to be considered. All that is required is that keyFunction defines an ordering on the elements in items. We then use this ordering to produce a sorted list containing the same elements as items. We use the built-in Python function sorted to do this. (We use sorted rather than sort because we want to generate a new list rather than mutate the list passed to the function.) We use the reverse parameter to indicate that we want the list sorted from largest (with respect to keyFunction) to smallest.

class Item(object):
    def __init__(self, n, v, w):
        self.name = n
        self.value = float(v)
        self.weight = float(w)
    def getName(self):
        return self.name
    def getValue(self):
        return self.value
    def getWeight(self):
        return self.weight
    def __str__(self):
        result = '<' + self.name + ', ' + str(self.value)\
                 + ', ' + str(self.weight) + '>'
        return result

def value(item):
    return item.getValue()

def weightInverse(item):
    return 1.0/item.getWeight()

def density(item):
    return item.getValue()/item.getWeight()

def buildItems():
    names = ['clock', 'painting', 'radio', 'vase', 'book', 'computer']
    values = [175, 90, 20, 50, 10, 200]
    weights = [10, 9, 4, 2, 1, 20]
    Items = []
    for i in range(len(values)):
        Items.append(Item(names[i], values[i], weights[i]))
    return Items

Figure: Building a set of items with orderings
def greedy(items, maxWeight, keyFunction):
    """Assumes items a list, maxWeight >= 0,
         keyFunction maps elements of items to floats"""
    itemsCopy = sorted(items, key=keyFunction, reverse=True)
    result = []
    totalValue = 0.0
    totalWeight = 0.0
    for i in range(len(itemsCopy)):
        if (totalWeight + itemsCopy[i].getWeight()) <= maxWeight:
            result.append(itemsCopy[i])
            totalWeight += itemsCopy[i].getWeight()
            totalValue += itemsCopy[i].getValue()
    return (result, totalValue)

def testGreedy(items, constraint, keyFunction):
    taken, val = greedy(items, constraint, keyFunction)
    print('Total value of items taken =', val)
    for item in taken:
        print('  ', item)

def testGreedys(maxWeight=20):
    items = buildItems()
    print('Use greedy by value to fill knapsack of size', maxWeight)
    testGreedy(items, maxWeight, value)
    print('\nUse greedy by weight to fill knapsack of size', maxWeight)
    testGreedy(items, maxWeight, weightInverse)
    print('\nUse greedy by density to fill knapsack of size', maxWeight)
    testGreedy(items, maxWeight, density)

Figure: Using a greedy algorithm to choose items

When testGreedys() is executed it prints

Use greedy by value to fill knapsack of size 20
Total value of items taken = 200.0

Use greedy by weight to fill knapsack of size 20
Total value of items taken = 170.0

Use greedy by density to fill knapsack of size 20
Total value of items taken = 255.0

What is the algorithmic efficiency of greedy? There are two things to consider: the time complexity of the built-in function sorted, and the number of times through the for loop in the body of greedy. The number of iterations of the loop is bounded by the number of elements in items, i.e., it is O(n), where n is the length of items. However, the worst-case time for Python's built-in sorting
function is roughly O(n log n), where n is the length of the list to be sorted. Therefore the running time of greedy is O(n log n).

An Optimal Solution to the 0/1 Knapsack Problem

Suppose we decide that an approximation is not good enough, i.e., we want the best possible solution to this problem. Such a solution is called optimal, not surprising since we are solving an optimization problem. As it happens, this is an instance of a classic optimization problem, called the 0/1 knapsack problem. The 0/1 knapsack problem can be formalized as follows:

1. Each item is represented by a pair, <value, weight>.

2. The knapsack can accommodate items with a total weight of no more than w.

3. A vector, I, of length n, represents the set of available items. Each element of the vector is an item.

4. A vector, V, of length n, is used to indicate whether or not each item is taken by the burglar. If V[i] = 1, item I[i] is taken. If V[i] = 0, item I[i] is not taken.

5. Find a V that maximizes

   sum over i = 0 to n-1 of V[i]*I[i].value

   subject to the constraint that

   sum over i = 0 to n-1 of V[i]*I[i].weight <= w

Let's see what happens if we try to implement this formulation of the problem in a straightforward way:

1. Enumerate all possible combinations of items. That is to say, generate all subsets of the set of items. This is called the power set, and was discussed earlier.

2. Remove all of the combinations whose weight exceeds the allowed weight.

3. From the remaining combinations choose any one whose value is the largest.

This approach will certainly find an optimal answer. However, if the original set of items is large, it will take a very long time to run, because, as we saw earlier, the number of subsets grows exceedingly quickly with the number of items.

The figure below contains a straightforward implementation of this brute-force approach to solving the 0/1 knapsack problem. It uses the classes and functions defined above and the function genPowerset defined in an earlier chapter.

As we discussed earlier, the time complexity of the sorting algorithm, timsort, used in most Python implementations is O(n log n).

Recall that every set is a subset of itself and the empty set is a subset of every set.
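Since the definition of genPowerset falls outside this excerpt, here is a minimal sketch of one common way to write it, using the correspondence between subsets of an n-element list and n-bit integers. This particular bit-mask formulation is an illustrative reconstruction, not necessarily the book's.

def genPowerset(L):
    """Returns a list of lists, one for each subset of L"""
    powerset = []
    for i in range(2**len(L)):      #each i encodes one subset
        subset = []
        for j in range(len(L)):
            if i & (1 << j):        #bit j of i set -> include L[j]
                subset.append(L[j])
        powerset.append(subset)
    return powerset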
def chooseBest(pset, maxWeight, getVal, getWeight):
    bestVal = 0.0
    bestSet = None
    for items in pset:
        itemsVal = 0.0
        itemsWeight = 0.0
        for item in items:
            itemsVal += getVal(item)
            itemsWeight += getWeight(item)
        if itemsWeight <= maxWeight and itemsVal > bestVal:
            bestVal = itemsVal
            bestSet = items
    return (bestSet, bestVal)

def testBest(maxWeight=20):
    items = buildItems()
    pset = genPowerset(items)
    taken, val = chooseBest(pset, maxWeight, Item.getValue, Item.getWeight)
    print('Total value of items taken =', val)
    for item in taken:
        print(item)

Figure: Brute-force optimal solution to the 0/1 knapsack problem

The complexity of this implementation is O(n*2**n), where n is the length of items. The function genPowerset returns a list of lists of items. This list is of length 2**n, and the longest list in it is of length n. Therefore the outer loop in chooseBest will be executed O(2**n) times, and the number of times the inner loop will be executed is bounded by n.

Many small optimizations can be applied to speed this program up. For example, genPowerset could have had the header

def genPowerset(items, constraint, getVal, getWeight)

and returned only those combinations that meet the weight constraint. Alternatively, chooseBest could exit the inner loop as soon as the weight constraint is exceeded. While these kinds of optimizations are often worth doing, they don't address the fundamental issue. The complexity of chooseBest will still be O(n*2**n), where n is the length of items, and chooseBest will therefore still take a very long time to run when items is large.

In a theoretical sense, the problem is hopeless: the 0/1 knapsack problem is inherently exponential in the number of items. In a practical sense, however, the problem is far from hopeless, as we will discuss in the next chapter.

When testBest is run, it prints

Total value of items taken = 275.0

Notice that this solution is better than any of the solutions found by the greedy algorithms.
The essence of a greedy algorithm is making the best (as defined by some metric) local choice at each step. It makes a choice that is locally optimal. However, as this example illustrates, a series of locally optimal decisions does not always lead to a solution that is globally optimal.

Despite the fact that they do not always find the best solution, greedy algorithms are often used in practice. They are usually easier to implement and more efficient to run than algorithms guaranteed to find optimal solutions. As Ivan Boesky once said, "I think greed is healthy. You can be greedy and still feel good about yourself."

There is a variant of the knapsack problem, called the fractional (or continuous) knapsack problem, for which a greedy algorithm is guaranteed to find an optimal solution. Since the items are infinitely divisible, it always makes sense to take as much as possible of the item with the highest remaining value-to-weight ratio. Suppose, for example, that our burglar found only three things of value in the house: a sack of gold dust, a sack of silver dust, and a sack of raisins. In this case, a greedy-by-density algorithm will always find the optimal solution.

Graph Optimization Problems

Let's think about another kind of optimization problem. Suppose you had a list of the prices of all of the airline flights between each pair of cities in the United States. Suppose also that for all cities, A, B, and C, the cost of flying from A to C by way of B was the cost of flying from A to B plus the cost of flying from B to C. A few questions you might like to ask are:

- What is the smallest number of stops between some pair of cities?
- What is the least expensive airfare between some pair of cities?
- What is the least expensive airfare between some pair of cities involving no more than two stops?
- What is the least expensive way to visit some collection of cities?

All of these problems (and many others) can be easily formalized as graph problems. A graph is a set of objects called nodes (or vertices) connected by a set of edges (or arcs). If the edges are unidirectional the graph is called a directed graph or digraph. In a directed graph, if there is an edge from n1 to n2, we refer to n1 as the source or parent node and n2 as the destination or child node.

He said this, to enthusiastic applause, in a 1986 commencement address at the University of California at Berkeley Business School. A few months later he was indicted for insider trading, a charge that led to two years in prison and a $100,000,000 fine.

Computer scientists and mathematicians use the word "graph" in the sense used in this book. They typically use the word "plot" to denote the kind of graphs we saw earlier.
Graphs are typically used to represent situations in which there are interesting relations among the parts. The first documented use of graphs in mathematics was in 1735 when the Swiss mathematician Leonhard Euler used what has come to be known as graph theory to formulate and solve the Königsberg bridges problem.

Königsberg, then the capital of East Prussia, was built at the intersection of two rivers that contained a number of islands. The islands were connected to each other and to the mainland by seven bridges, as shown on the map below. For some reason, the residents of the city were obsessed with the question of whether it was possible to take a walk that crossed each bridge exactly once.

Euler's great insight was that the problem could be vastly simplified by viewing each separate landmass as a point (think "node") and each bridge as a line (think "edge") connecting two of these points. The map of the town could then be represented by the graph to the right of the map. Euler then reasoned that if a walk were to traverse each edge exactly once, it must be the case that each node in the middle of the walk (i.e., any node except the first and last node visited) must have an even number of edges to which it is connected. Since none of the nodes in this graph has an even number of edges, Euler concluded that it is impossible to traverse each bridge exactly once.

[Figure: Map of Königsberg (arrows point to bridges) and Euler's simplified map]

Of greater interest than the Königsberg bridges problem, or even Euler's theorem (which generalizes his solution to the Königsberg bridges problem), is the whole idea of using graph theory to help understand problems. For example, only one small extension to the kind of graph used by Euler is needed to model a country's highway system. If a weight is associated with each edge in a graph (or digraph) it is called a weighted graph. Using weighted graphs, the highway system can be represented as a graph in which cities are represented by nodes and the highways connecting them as edges, where each edge is labeled with the distance between the two nodes. More generally, one
can represent any road map (including those with one-way streets) by a weighted digraph. Similarly, the structure of the World Wide Web can be represented as a digraph in which the nodes are Web pages and there is an edge from node A to node B if and only if there is a link to page B on page A. Traffic patterns could be modeled by adding a weight to each edge indicating how often it is used.

There are also many less obvious uses of graphs. Biologists use graphs to model things ranging from the way proteins interact with each other to gene expression networks. Physicists use graphs to describe phase transitions. Epidemiologists use graphs to model disease trajectories. And so on.

The figure below contains classes implementing abstract types corresponding to nodes, weighted edges, and edges. Having a class for nodes may seem like overkill. After all, none of the methods in class Node performs any interesting computation. We introduced the class merely to give us the flexibility of deciding, perhaps at some later point, to introduce a subclass of Node with additional properties.

class Node(object):
    def __init__(self, name):
        """Assumes name is a string"""
        self.name = name
    def getName(self):
        return self.name
    def __str__(self):
        return self.name

class Edge(object):
    def __init__(self, src, dest):
        """Assumes src and dest are nodes"""
        self.src = src
        self.dest = dest
    def getSource(self):
        return self.src
    def getDestination(self):
        return self.dest
    def __str__(self):
        return self.src.getName() + '->' + self.dest.getName()

class WeightedEdge(Edge):
    def __init__(self, src, dest, weight=1.0):
        """Assumes src and dest are nodes, weight a float"""
        self.src = src
        self.dest = dest
        self.weight = weight
    def getWeight(self):
        return self.weight
    def __str__(self):
        return self.src.getName() + '->(' + str(self.weight) + ')'\
               + self.dest.getName()

Figure: Nodes and edges
The figure below contains implementations of the classes Digraph and Graph. One important decision is the choice of data structure used to represent a Digraph. One common representation is an n x n adjacency matrix, where n is the number of nodes in the graph. Each cell of the matrix contains information (e.g., weights) about the edges connecting the pair of nodes <i, j>. If the edges are unweighted, each entry is True if and only if there is an edge from i to j. Another common representation is an adjacency list, which we use here.

Class Digraph has two instance variables. The variable nodes is a Python list containing the names of the nodes in the Digraph. The connectivity of the nodes is represented using an adjacency list implemented as a dictionary. The variable edges is a dictionary that maps each node in the Digraph to a list of the children of that node.

Class Graph is a subclass of Digraph. It inherits all of the methods of Digraph except addEdge, which it overrides. (This is not the most space-efficient way to implement Graph, since it stores each edge twice, once for each direction in the Digraph. But it has the virtue of simplicity.)

class Digraph(object):
    #nodes is a list of the nodes in the graph
    #edges is a dict mapping each node to a list of its children
    def __init__(self):
        self.nodes = []
        self.edges = {}
    def addNode(self, node):
        if node in self.nodes:
            raise ValueError('Duplicate node')
        else:
            self.nodes.append(node)
            self.edges[node] = []
    def addEdge(self, edge):
        src = edge.getSource()
        dest = edge.getDestination()
        if not (src in self.nodes and dest in self.nodes):
            raise ValueError('Node not in graph')
        self.edges[src].append(dest)
    def childrenOf(self, node):
        return self.edges[node]
    def hasNode(self, node):
        return node in self.nodes
    def __str__(self):
        result = ''
        for src in self.nodes:
            for dest in self.edges[src]:
                result = result + src.getName() + '->'\
                         + dest.getName() + '\n'
        return result[:-1]  #omit final newline

class Graph(Digraph):
    def addEdge(self, edge):
        Digraph.addEdge(self, edge)
        rev = Edge(edge.getDestination(), edge.getSource())
        Digraph.addEdge(self, rev)

Figure: Classes Graph and Digraph
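For contrast with the adjacency list used above, here is a minimal sketch of the adjacency-matrix alternative mentioned in the text. The class name and its methods are illustrative, not something defined in the book.

class MatrixDigraph(object):
    """Digraph over nodes 0..n-1, represented by an n x n matrix.
       matrix[i][j] is True iff there is an edge from i to j."""
    def __init__(self, n):
        self.n = n
        self.matrix = [[False]*n for _ in range(n)]
    def addEdge(self, i, j):
        self.matrix[i][j] = True
    def childrenOf(self, i):
        return [j for j in range(self.n) if self.matrix[i][j]]

The matrix makes "is there an edge from i to j?" an O(1) lookup, at the cost of O(n**2) space even for sparse graphs; the adjacency list is usually the better choice when most pairs of nodes are not connected.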
You might want to stop for a minute and think about why Graph is a subclass of Digraph, rather than the other way around. In many of the examples of subclassing we have looked at, the subclass adds attributes to the superclass. For example, class WeightedEdge added a weight attribute to class Edge.

Here, Digraph and Graph have the same attributes. The only difference is the implementation of the addEdge method. Either could have been easily implemented by inheriting methods from the other, but the choice of which to make the superclass was not arbitrary. Earlier in the book we stressed the importance of obeying the substitution principle: if client code works correctly using an instance of the supertype, it should also work correctly when an instance of the subtype is substituted for the instance of the supertype. And indeed if client code works correctly using an instance of Digraph, it will work correctly if an instance of Graph is substituted for the instance of Digraph. The converse is not true. There are many algorithms that work on graphs (by exploiting the symmetry of edges) that do not work on directed graphs.

Some Classic Graph-Theoretic Problems

One of the nice things about formulating a problem using graph theory is that there are well-known algorithms for solving many optimization problems on graphs. Some of the best-known graph optimization problems are:

- Shortest path. For some pair of nodes, n1 and n2, find the shortest sequence of edges <sn, dn> (source node and destination node), such that the source node in the first edge is n1, the destination node of the last edge is n2, and for all edges e1 and e2 in the sequence, if e2 follows e1 in the sequence, the source node of e2 is the destination node of e1.

- Shortest weighted path. This is like the shortest path, except instead of choosing the shortest sequence of edges that connects two nodes, we define some function on the weights of the edges in the sequence (e.g., their sum) and minimize that value. This is the kind of problem solved by MapQuest and Google Maps when asked to compute driving directions between two points.

- Cliques. Find a set of nodes such that there is a path (or often a path not exceeding a maximum length) in the graph between each pair of nodes in the set.

- Min cut. Given two sets of nodes in a graph, a cut is a set of edges whose removal eliminates all paths from each node in one set to each node in the other. The minimum cut is the smallest set of edges whose removal accomplishes this.

This notion is quite similar to the notion of a social clique, i.e., a group of people who feel closely connected to each other and are inclined to exclude those not in the clique. See, for example, the movie Heathers.
The Spread of Disease and Min Cut

The figure below contains a pictorial representation of a weighted graph generated by the U.S. Centers for Disease Control (CDC) in the course of studying an outbreak of tuberculosis in the United States. Each node represents a person, and each node is labeled by a color indicating whether the person has active TB, tested positive for exposure to TB (i.e., a high TST reaction rate), tested negative for exposure to TB, or had not been tested. The edges represent contact between pairs of people. The weights, which are not visible in the picture, indicate whether the contact between people was "close" or "casual."

[Figure: Spread of tuberculosis]

There are many interesting questions that can be formalized using this graph. For example, is it possible that all cases stemmed from a single "index" patient? More formally, is there a node, n, such that there is a path from n to every other node in the graph with an active TB label? The answer is "almost." There is a path from the node in the middle of the graph to each active TB node except those nodes in the black circle on the right. Interestingly, subsequent investigation revealed that the person in the center of the black circle had previously been a neighbor of the putative index patient, and therefore there should have been a casual contact edge linking the two.

The edges of the graph do not capture anything related to time. Therefore, the existence of such a node does not mean that the node represents an index patient. However, the absence of such a node would indicate the absence of an index patient. We have a necessary, but not sufficient, condition.
In order to best limit the continued spread, which uninfected people should be vaccinated? This can be formalized as solving a min cut problem. Let NA be the set of active TB nodes and NO be the set of all the other nodes. Each edge in the minimum cut between these two sets will contain one person with known active TB and one person without. The people without known active TB are candidates for vaccination.

Shortest Path: Depth-First Search and Breadth-First Search

Social networks are made up of individuals and relationships between individuals. These are typically modeled as graphs in which the individuals are nodes and the edges relationships. If the relationships are symmetric, the edges are undirected; if the relationships are asymmetric the edges are directed. Some social networks model multiple kinds of relationships, in which case labels on the edges indicate the kind of relationship.

In 1990 the playwright John Guare wrote Six Degrees of Separation. The slightly dubious premise underlying the play is that "everybody on this planet is separated by only six other people." By this he meant that if one built a social network including every person on the earth using the relation "knows," the shortest path between any two individuals would pass through at most six other nodes.

A less hypothetical question is the distance using the "friend" relation between pairs of people on Facebook. For example, you might wonder if you have a friend who has a friend who has a friend who is a friend of Mick Jagger. Let's think about designing a program to answer such questions.

When Mark Zuckerberg was six years old.

The friend relation (at least on Facebook) is symmetric, e.g., if Stephanie is a friend of Andrea, Andrea is a friend of Stephanie. We will, therefore, implement the social network using type Graph. We can then define the problem of finding the shortest connection between you and Mick Jagger as: for the graph G, find the shortest sequence of nodes, path = [You, ..., Mick Jagger], such that if ni and ni+1 are consecutive nodes in path, there is an edge in G connecting ni and ni+1.

The figure below contains a recursive function that finds the shortest path between two nodes, start and end, in a Digraph. Since Graph is a subclass of Digraph, it will work for our Facebook problem.

The algorithm implemented by DFS is an example of a recursive depth-first-search (DFS) algorithm. In general, a depth-first-search algorithm begins by choosing one child of the start node. It then chooses one child of that node and so on, going deeper and deeper until it either reaches the goal node or a node with no children. The search then backtracks, returning to the most recent node with children that it has not yet visited. When all paths have been
explored, it chooses the shortest path (assuming that there is one) from the start to the goal.

The code is a bit more complicated than the algorithm we just described because it has to deal with the possibility of the graph containing cycles. It also avoids exploring paths longer than the shortest path that it has already found.

The function search calls DFS with path = [] (to indicate that the current path being explored is empty) and shortest = None (to indicate that no path from start to end has yet been found). DFS begins by choosing one child of start. It then chooses one child of that node and so on, until either it reaches the node end or a node with no unvisited children. The check if node not in path prevents the program from getting caught in a cycle. The check if shortest == None or len(path) < len(shortest) is used to decide if it is possible that continuing to search this path might yield a shorter path than the best path found so far. If so, DFS is called recursively. If it finds a path to end that is no longer than the best found so far, shortest is updated. When the last node on path has no children left to visit, the program backtracks to the previously visited node and visits the next child of that node. The function returns when all possibly shortest paths from start to end have been explored.

The function testSP in the second figure below first builds a directed graph like the one pictured on the right, and then searches for a shortest path between node 0 and node 5.
def printPath(path):
    """Assumes path is a list of nodes"""
    result = ''
    for i in range(len(path)):
        result = result + str(path[i])
        if i != len(path) - 1:
            result = result + '->'
    return result

def DFS(graph, start, end, path, shortest):
    """Assumes graph is a Digraph; start and end are nodes;
         path and shortest are lists of nodes
       Returns a shortest path from start to end in graph"""
    path = path + [start]
    print('Current DFS path:', printPath(path))
    if start == end:
        return path
    for node in graph.childrenOf(start):
        if node not in path:  #avoid cycles
            if shortest == None or len(path) < len(shortest):
                newPath = DFS(graph, node, end, path, shortest)
                if newPath != None:
                    shortest = newPath
    return shortest

def search(graph, start, end):
    """Assumes graph is a Digraph; start and end are nodes
       Returns a shortest path from start to end in graph"""
    return DFS(graph, start, end, [], None)

Figure: Depth-first-search shortest-path algorithm

def testSP():
    nodes = []
    for name in range(6):  #create 6 nodes
        nodes.append(Node(str(name)))
    g = Digraph()
    for n in nodes:
        g.addNode(n)
    g.addEdge(Edge(nodes[0], nodes[1]))
    g.addEdge(Edge(nodes[1], nodes[2]))
    g.addEdge(Edge(nodes[2], nodes[3]))
    g.addEdge(Edge(nodes[2], nodes[4]))
    g.addEdge(Edge(nodes[3], nodes[4]))
    g.addEdge(Edge(nodes[3], nodes[5]))
    g.addEdge(Edge(nodes[0], nodes[2]))
    g.addEdge(Edge(nodes[1], nodes[0]))
    g.addEdge(Edge(nodes[3], nodes[1]))
    g.addEdge(Edge(nodes[4], nodes[0]))
    sp = search(g, nodes[0], nodes[5])
    print('Shortest path found by DFS:', printPath(sp))

Figure: Test depth-first-search code
When executed, testSP produces the output

Current DFS path: 0
Current DFS path: 0->1
Current DFS path: 0->1->2
Current DFS path: 0->1->2->3
Current DFS path: 0->1->2->3->4
Current DFS path: 0->1->2->3->5
Current DFS path: 0->1->2->4
Current DFS path: 0->2
Current DFS path: 0->2->3
Current DFS path: 0->2->3->4
Current DFS path: 0->2->3->5
Current DFS path: 0->2->3->1
Current DFS path: 0->2->4
Shortest path found by DFS: 0->2->3->5

Notice that after exploring the path 0->1->2->3->4, it backs up to node 3 and explores the path 0->1->2->3->5. After saving that as the shortest successful path so far, it backs up to node 2 and explores the path 0->1->2->4. When it reaches the end of that path (node 4), it backs up all the way to node 0 and investigates the path starting with the edge from 0 to 2. And so on.

The DFS algorithm implemented above finds the path with the minimum number of edges. If the edges have weights, it will not necessarily find the path that minimizes the sum of the weights of the edges. However, it is easily modified to do so.

Of course, there are other ways to traverse a graph than depth-first. Another common approach is breadth-first search (BFS). In a breadth-first traversal one first visits all children of the start node. If none of those is the end node, one visits all children of each of those nodes. And so on. Unlike depth-first search, which is usually implemented recursively, breadth-first search is usually implemented iteratively. BFS explores many paths simultaneously, adding one node to each path on each iteration. Since it generates the paths in ascending order of length, the first path found with the goal as its last node is guaranteed to have a minimum number of edges.

The figure below contains code that uses breadth-first search to find the shortest path in a directed graph. The variable pathQueue is used to store all of the paths currently being explored. Each iteration starts by removing a path from pathQueue and assigning that path to tmpPath. If the last node in tmpPath is end, tmpPath is returned. Otherwise, a set of new paths is created, each of which extends tmpPath by adding one of its children. Each of these new paths is then added to pathQueue.
knapscak and graph optimization problems def bfs(graphstartend)"""assumes graph is digraphstart and end are nodes returns shortest path from start to end in graph""initpath [startpathqueue [initpathwhile len(pathqueue! #get and remove oldest element in pathqueue tmppath pathqueue pop( print 'current bfs path:'printpath(tmppathlastnode tmppath[- if lastnode =endreturn tmppath for nextnode in graph childrenof(lastnode)if nextnode not in tmppathnewpath tmppath [nextnodepathqueue append(newpathreturn none figure breadth-first-search shortest path when the lines sp bfs(gnodes[ ]nodes[ ]print 'shortest path found by bfs:'printpath(spare added at the end of testsp and the function is executed it prints current dfs path current dfs path -> current dfs path -> -> current dfs path -> -> -> current dfs path -> -> -> -> current dfs path -> -> -> -> current dfs path -> -> -> current dfs path -> current dfs path -> -> current dfs path -> -> -> current dfs path -> -> -> current dfs path -> -> -> current dfs path -> -> shortest path found by dfs -> -> -> current bfs path current bfs path -> current bfs path -> current bfs path -> -> current bfs path -> -> current bfs path -> -> current bfs path -> -> -> current bfs path -> -> -> current bfs path -> -> -> current bfs path -> -> -> shortest path found by bfs -> -> ->
Comfortingly, each algorithm found a path of the same length. In this case, they found the same path. However, if a graph contains more than one shortest path between a pair of nodes, DFS and BFS will not necessarily find the same shortest path.

As mentioned above, BFS is a convenient way to search for a path with the fewest edges, because the first time a path is found, it is guaranteed to be such a path.

Finger exercise: Consider a digraph with weighted edges. Is the first path found by BFS guaranteed to minimize the sum of the weights of the edges?
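The earlier claim that DFS is easily modified to minimize the sum of edge weights can be made concrete. Here is a minimal sketch; it assumes nonnegative weights and a hypothetical method graph.weightOf(src, dest) returning the weight of an edge, which is our invention rather than part of the Digraph class used above:

def weightedDFS(graph, start, end, path, pathWeight,
                shortest, shortestWeight):
    """Like DFS, but compares total edge weight instead of the
       number of edges. Assumes nonnegative edge weights and a
       hypothetical graph.weightOf(src, dest) method."""
    path = path + [start]
    if start == end:
        return path, pathWeight
    for node in graph.childrenOf(start):
        if node not in path: #avoid cycles
            newWeight = pathWeight + graph.weightOf(start, node)
            #prune: with nonnegative weights, extending a path
            #can only increase its total weight
            if shortest == None or newWeight < shortestWeight:
                newPath, newPathWeight = weightedDFS(graph, node, end,
                                                     path, newWeight,
                                                     shortest,
                                                     shortestWeight)
                if newPath != None:
                    shortest, shortestWeight = newPath, newPathWeight
    return shortest, shortestWeight

def weightedSearch(graph, start, end):
    return weightedDFS(graph, start, end, [], 0, None, None)[0]

The structure is the same as DFS; only the pruning test changes, from comparing path lengths to comparing accumulated weights.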
Dynamic programming was invented by Richard Bellman in the early 1950s. Don't try to infer anything about the technique from its name. As Bellman described it, the name "dynamic programming" was chosen to hide from governmental sponsors "the fact that I was really doing mathematics... [the phrase dynamic programming] was something not even a Congressman could object to." (As quoted in Stuart Dreyfus, "Richard Bellman on the Birth of Dynamic Programming," Operations Research, vol. 50, no. 1, 2002.)

Dynamic programming is a method for efficiently solving problems that exhibit the characteristics of overlapping subproblems and optimal substructure. Fortunately, many optimization problems exhibit these characteristics.

A problem has optimal substructure if a globally optimal solution can be found by combining optimal solutions to local subproblems. We've already looked at a number of such problems. Merge sort, for example, exploits the fact that a list can be sorted by first sorting sublists and then merging the solutions.

A problem has overlapping subproblems if an optimal solution involves solving the same problem multiple times. Merge sort does not exhibit this property. Even though we are performing a merge many times, we are merging different lists each time.

It's not immediately obvious, but the 0/1 knapsack problem exhibits both of these properties. Before looking at that, however, we will digress to look at a problem where the optimal substructure and overlapping subproblems are more obvious.

Fibonacci sequences, revisited

Earlier we looked at a straightforward recursive implementation of the Fibonacci function, shown here again:

def fib(n):
    """Assumes n is an int >= 0
       Returns Fibonacci of n"""
    if n == 0 or n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)

Figure: Recursive implementation of Fibonacci function
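To get a feel for how much work this definition does, it can be instrumented with a call counter. The counter and this variant are our additions, not the book's code:

numFibCalls = 0

def countingFib(n):
    """fib instrumented to count how many times it is invoked"""
    global numFibCalls
    numFibCalls += 1
    if n == 0 or n == 1:
        return 1
    return countingFib(n-1) + countingFib(n-2)

for i in (10, 20, 30):
    numFibCalls = 0
    result = countingFib(i)
    print 'fib(' + str(i) + ') =', result, '; calls =', numFibCalls

Running this shows the number of calls growing in step with the value returned, which is the behavior analyzed next.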
While this implementation of the recurrence is obviously correct, it is terribly inefficient. Try, for example, running fib(120), but don't wait for it to complete. The complexity of the implementation is a bit hard to derive, but it is roughly O(fib(n)). That is, its growth is proportional to the growth in the value of the result, and the growth rate of the Fibonacci sequence is substantial. For example, fib(120) is 8,670,007,398,507,948,658,051,921. If each recursive call took a nanosecond, fib(120) would take about 250,000 years to finish.

Let's try and figure out why this implementation takes so long. Given the tiny amount of code in the body of fib, it's clear that the problem must be the number of times that fib calls itself. As an example, look at the tree of calls associated with the invocation fib(6).

[Figure: Tree of calls for recursive Fibonacci, showing the invocation fib(6) expanding into repeated calls of fib on smaller arguments]

Notice that we are computing the same values over and over again. For example, fib gets called with 3 three times, and each of these calls provokes four additional calls of fib. It doesn't require a genius to think that it might be a good idea to record the value returned by the first call, and then look it up rather than compute it each time it is needed. This is called memoization, and is the key idea behind dynamic programming.

The figure below contains an implementation of Fibonacci based on this idea. The function fastFib has a parameter, memo, that it uses to keep track of the numbers it has already evaluated. The parameter has a default value, the empty dictionary, so that clients of fastFib don't have to worry about supplying an initial value for memo. When fastFib is called with an n > 1, it attempts to look up n in memo. If it is not there (because this is the first time fastFib has been called with that value), an exception is raised. When this happens, fastFib uses the normal Fibonacci recurrence, and then stores the result in memo.
def fastFib(n, memo = {}):
    """Assumes n is an int >= 0, memo used only by recursive calls
       Returns Fibonacci of n"""
    if n == 0 or n == 1:
        return 1
    try:
        return memo[n]
    except KeyError:
        result = fastFib(n-1, memo) + fastFib(n-2, memo)
        memo[n] = result
        return result

Figure: Implementing Fibonacci using a memo

If you try running fastFib, you will see that it is indeed quite fast: fastFib(120) returns almost instantly. What is the complexity of fastFib? It calls fastFib exactly once for each value from 0 to n. Therefore, under the assumption that dictionary lookup can be done in constant time, the time complexity of fastFib(n) is O(n).

Though cute and pedagogically interesting, this is not the best way to implement Fibonacci. There is a simple linear-time iterative implementation.
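That iterative implementation is not shown in the book at this point; one possible sketch, using the same convention (fib(0) == fib(1) == 1) as the code above:

def fibIter(n):
    """Assumes n is an int >= 0
       Returns Fibonacci of n, computed iteratively in O(n) time
       and constant space"""
    if n == 0 or n == 1:
        return 1
    prev, curr = 1, 1 #fib(0) and fib(1)
    for i in range(n - 1):
        prev, curr = curr, prev + curr
    return curr

It keeps only the two most recent values of the sequence, so it avoids both the repeated work of the naive recursion and the growing dictionary of the memoized version.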
Dynamic programming and the 0/1 knapsack problem

One of the optimization problems we looked at earlier was the 0/1 knapsack problem. Recall that we looked at a greedy algorithm that ran in n log n time, but was not guaranteed to find an optimal solution. We also looked at a brute-force algorithm that was guaranteed to find an optimal solution, but ran in exponential time. Finally, we discussed the fact that the problem is inherently exponential in the size of the input. In the worst case, one cannot find an optimal solution without looking at all possible answers.

Fortunately, the situation is not as bad as it seems. Dynamic programming provides a practical method for solving most 0/1 knapsack problems in a reasonable amount of time. As a first step in deriving such a solution, we begin with an exponential solution based on exhaustive enumeration. The key idea is to think about exploring the space of possible solutions by constructing a rooted binary tree that enumerates all states that satisfy the weight constraint.

A rooted binary tree is an acyclic directed graph in which there is exactly one node with no parents. This is called the root. Each non-root node has exactly one parent. Each node has at most two children. A childless node is called a leaf.

Each node in the search tree for the 0/1 knapsack problem is labeled with a quadruple that denotes a partial solution to the knapsack problem. The elements of the quadruple are:

- a set of items to be taken,
- the list of items for which a decision has not been made,
- the total value of the items in the set of items to be taken (this is merely an optimization, since the value could be computed from the set), and
- the remaining space in the knapsack (again, this is an optimization, since it is merely the difference between the weight allowed and the weight of all the items taken so far).

The tree is built top-down starting with the root. One element is selected from the still-to-be-considered items. If there is room for that item in the knapsack, a node is constructed that reflects the consequence of choosing to take that item. By convention, we draw that node as the left child. The right child shows the consequences of choosing not to take that item. The process is then applied recursively until either the knapsack is full or there are no more items to consider. Because each edge represents a decision (to take or not to take an item), such trees are called decision trees.

The table below describes a set of items. The figure after it is a decision tree for deciding which of those items to take under the assumption that the knapsack has a maximum weight of 5.

    Name    Value    Weight
    a       6        3
    b       7        3
    c       8        2
    d       9        5

Figure: Table of items with values and weights

It may seem odd to put the root of a tree at the top, but that is the way that mathematicians and computer scientists usually draw them. Perhaps it is evidence that those folks do not spend enough time contemplating nature.

Decision trees, which need not be binary, provide a structured way to explore the consequences of making a series of sequential decisions. They are used extensively in many fields.
[Figure: Decision tree for knapsack problem]

The root of the tree (node 0) has a label <{}, [a,b,c,d], 0, 5>, indicating that no items have been taken, all items remain to be considered, the value of the items taken is 0, and a weight of 5 is still available. Node 1 indicates that a has been taken, [b,c,d] remain to be considered, the value of the items taken is 6, and the knapsack can hold another 2 pounds. There is no node to the left of node 1, since item b, which weighs 3 pounds, would not fit in the knapsack.

In the figure, the numbers that precede the colon in each node indicate one order in which the nodes could be generated. This particular ordering is called left-first depth-first. At each node we attempt to generate a left node. If that is impossible, we attempt to generate a right node. If that too is impossible, we back up one node (to the parent) and repeat the process. Eventually, we find ourselves having generated all descendants of the root, and the process halts. When the process halts, each combination of items that could fit in the knapsack has been generated, and any leaf node with the greatest value represents an optimal solution. Notice that for each leaf node, either the second element is the empty list (indicating that there are no more items to consider taking) or the fourth element is 0 (indicating that there is no room left in the knapsack).

Unsurprisingly (especially if you read the previous section), the natural implementation of a depth-first tree search is recursive. The figure below contains such an implementation. It uses the class Item from an earlier chapter (a sketch of that class follows the argument descriptions below). The function maxVal returns two values, the set of items chosen and the total value of those items. It is called with two arguments, corresponding to the second and fourth elements of the labels of the nodes in the tree:

- toConsider. Those items that nodes higher up in the tree (corresponding to earlier calls in the recursive call stack) have not yet considered.
- avail. The amount of space still available.
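The class Item is not reproduced at this point in the book. A minimal sketch consistent with how maxVal and the test code below use it; treat the exact method bodies as assumptions:

class Item(object):
    def __init__(self, n, v, w):
        """Assumes n a string, v and w numbers"""
        self.name = n
        self.value = float(v)
        self.weight = float(w)
    def getName(self):
        return self.name
    def getValue(self):
        return self.value
    def getWeight(self):
        return self.weight
    def __str__(self):
        return '<' + self.name + ', ' + str(self.value) + ', '\
               + str(self.weight) + '>'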
Notice that the implementation of maxVal does not build the decision tree and then look for an optimal node. Instead, it uses the local variable result to record the best solution found so far.

def maxVal(toConsider, avail):
    """Assumes toConsider a list of items, avail a weight
       Returns a tuple of the total value of a solution to the
         0/1 knapsack problem and the items of that solution"""
    if toConsider == [] or avail == 0:
        result = (0, ())
    elif toConsider[0].getWeight() > avail:
        #explore right branch only
        result = maxVal(toConsider[1:], avail)
    else:
        nextItem = toConsider[0]
        #explore left branch
        withVal, withToTake = maxVal(toConsider[1:],
                                     avail - nextItem.getWeight())
        withVal += nextItem.getValue()
        #explore right branch
        withoutVal, withoutToTake = maxVal(toConsider[1:], avail)
        #choose better branch
        if withVal > withoutVal:
            result = (withVal, withToTake + (nextItem,))
        else:
            result = (withoutVal, withoutToTake)
    return result

def smallTest():
    names = ['a', 'b', 'c', 'd']
    vals = [6, 7, 8, 9]
    weights = [3, 3, 2, 5]
    Items = []
    for i in range(len(vals)):
        Items.append(Item(names[i], vals[i], weights[i]))
    val, taken = maxVal(Items, 5)
    for item in taken:
        print item
    print 'Total value of items taken =', val

Figure: Using a decision tree to solve a knapsack problem

When smallTest (which uses the values in the table above) is run, it prints a result indicating that the leaf corresponding to taking items b and c is an optimal solution:

    <c, 8.0, 2.0>
    <b, 7.0, 3.0>
    Total value of items taken = 15.0

If you run this code on any of the examples we have looked at, you will find that it produces an optimal answer. In fact, it will always produce an optimal answer, if it gets around to producing any answer at all.

The code in the next figure makes it convenient to test maxVal. It randomly generates a list of Items of a specified size. Try bigTest(10). Now try
bigTest(40). After you get tired of waiting for it to return, stop it and ask yourself what is going on.

def buildManyItems(numItems, maxVal, maxWeight):
    items = []
    for i in range(numItems):
        items.append(Item(str(i),
                          random.randint(1, maxVal),
                          random.randint(1, maxWeight)))
    return items

def bigTest(numItems):
    items = buildManyItems(numItems, 10, 10)
    val, taken = maxVal(items, 40)
    print 'Items taken:'
    for item in taken:
        print item
    print 'Total value of items taken =', val

Figure: Testing the decision tree-based implementation

Let's think about the size of the tree we are exploring. Since at each level of the tree we are deciding to keep or not keep one item, the maximum depth of the tree is len(items). At level 0 we have only one node, at level 1 up to two nodes, at level 2 up to four nodes, at level 3 up to eight nodes. At level 40 we have up to 2**40 nodes. No wonder it takes a long time to run!

What should we do about this? Let's start by asking whether this program has anything in common with our first implementation of Fibonacci. In particular, is there optimal substructure and are there overlapping subproblems?

Optimal substructure is visible both in the decision tree and in the code. Each parent node combines the solutions reached by its children to derive an optimal solution for the subtree rooted at that parent. This is reflected in the code by the statements following the comment #choose better branch.

Are there also overlapping subproblems? At first glance, the answer seems to be "no." At each level of the tree we have a different set of available items to consider. This implies that if common subproblems do exist, they must be at the same level of the tree. And indeed, at each level of the tree each node has the same set of items to consider taking. However, we can see by looking at the labels in the decision tree that each node at a level represents a different set of choices about the items considered higher in the tree.

Think about what problem is being solved at each node. The problem being solved is finding the optimal items to take from those left to consider, given the remaining available weight. The available weight depends upon the total weight of the items taken, but not on which items are taken or the total value of the items taken. So, for example, the node reached by taking a and then declining b, and the node reached by declining a and taking b, are actually solving the same problem: deciding which elements of [c,d] should be taken, given that the available weight is 2.
The code in the figure below exploits the optimal substructure and overlapping subproblems to provide a dynamic programming solution to the 0/1 knapsack problem. An extra parameter, memo, has been added to keep track of solutions to subproblems that have already been solved. It is implemented using a dictionary with a key constructed from the length of toConsider and the available weight. The expression len(toConsider) is a compact way of representing the items still to be considered. This works because items are always removed from the same end (the front) of the list toConsider.

def fastMaxVal(toConsider, avail, memo = {}):
    """Assumes toConsider a list of items, avail a weight
         memo used only by recursive calls
       Returns a tuple of the total value of a solution to the
         0/1 knapsack problem and the items of that solution"""
    if (len(toConsider), avail) in memo:
        result = memo[(len(toConsider), avail)]
    elif toConsider == [] or avail == 0:
        result = (0, ())
    elif toConsider[0].getWeight() > avail:
        #explore right branch only
        result = fastMaxVal(toConsider[1:], avail, memo)
    else:
        nextItem = toConsider[0]
        #explore left branch
        withVal, withToTake =\
                 fastMaxVal(toConsider[1:],
                            avail - nextItem.getWeight(), memo)
        withVal += nextItem.getValue()
        #explore right branch
        withoutVal, withoutToTake = fastMaxVal(toConsider[1:],
                                               avail, memo)
        #choose better branch
        if withVal > withoutVal:
            result = (withVal, withToTake + (nextItem,))
        else:
            result = (withoutVal, withoutToTake)
    memo[(len(toConsider), avail)] = result
    return result

Figure: Dynamic programming solution to knapsack problem

The table below shows the number of calls made when we ran the code on problems of various sizes.
[Figure: Performance of dynamic programming solution; a table whose columns show len(items), the number of items selected, and the number of calls of fastMaxVal]

The growth is hard to quantify, but it is clearly far less than exponential. But how can this be, since we know that the 0/1 knapsack problem is inherently exponential in the number of items? Have we found a way to overturn fundamental laws of the universe? No, but we have discovered that computational complexity can be a subtle notion.

The running time of fastMaxVal is governed by the number of distinct <toConsider, avail> pairs generated. This is because the decision about what to do next depends only upon the items still available and the total weight of the items already taken.

The number of possible values of toConsider is bounded by len(items). The number of possible values of avail is more difficult to characterize. It is bounded from above by the maximum number of distinct totals of weights of the items that the knapsack can hold. If the knapsack can hold at most n items (based on the capacity of the knapsack and the weights of the available items), avail can take on at most 2**n different values. In principle, this could be a rather large number. However, in practice, it is not usually so large. Even if the knapsack has a large capacity, if the weights of the items are chosen from a reasonably small set of possible weights, many sets of items will have the same total weight, greatly reducing the running time.

This algorithm falls into a complexity class called pseudo-polynomial. A careful explanation of this concept is beyond the scope of this book. Roughly speaking, fastMaxVal is exponential in the number of bits needed to represent the possible values of avail.

OK, "discovered" may be too strong a word. People have known this for a long time.
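One way to produce a table like the one above is to instrument fastMaxVal with a call counter; a sketch, assuming fastMaxVal has been modified to execute global numCalls; numCalls += 1 at the top of its body (the counter is our instrumentation, not part of the book's code):

def countCalls():
    """Prints problem size versus number of calls of fastMaxVal"""
    global numCalls
    for numItems in (4, 8, 16, 32, 64, 128, 256):
        numCalls = 0
        items = buildManyItems(numItems, 10, 10)
        #pass a fresh memo so successive runs don't share
        #cached results
        val, taken = fastMaxVal(items, 40, {})
        print len(items), 'items,', len(taken), 'taken,',\
              numCalls, 'calls'

Passing an explicit empty dictionary matters here: because memo's default value is a mutable object that persists across top-level calls, successive runs with different item lists would otherwise reuse each other's cached results and produce wrong answers as well as misleading counts.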
To see what happens when the values of avail are chosen from a considerably larger space, change the call to fastMaxVal in bigTest to

    val, taken = fastMaxVal(items, 1000)

Finding a solution now takes millions of calls of fastMaxVal when the number of items is 1024.

To see what happens when the weights are chosen from an enormous space, we can choose the possible weights from the positive reals rather than the positive integers. To do this, replace the line

    items.append(Item(str(i),
                      random.randint(1, maxVal),
                      random.randint(1, maxWeight)))

in buildManyItems by the line

    items.append(Item(str(i),
                      random.randint(1, maxVal),
                      random.randint(1, maxWeight)*random.random()))

Don't hold your breath waiting for this last test to finish. Dynamic programming may be a miraculous technique in the common sense of the word (extraordinary and bringing welcome consequences), but it is not capable of performing miracles in the liturgical sense.

Dynamic programming and divide-and-conquer

Like divide-and-conquer algorithms, dynamic programming is based upon solving independent subproblems and then combining those solutions. There are, however, some important differences.

Divide-and-conquer algorithms are based upon finding subproblems that are substantially smaller than the original problem. For example, merge sort works by dividing the problem size in half at each step. In contrast, dynamic programming involves solving problems that are only slightly smaller than the original problem. For example, computing the 19th Fibonacci number is not a substantially smaller problem than computing the 20th Fibonacci number.

Another important distinction is that the efficiency of divide-and-conquer algorithms does not depend upon structuring the algorithm so that the same problems are solved repeatedly. In contrast, dynamic programming is efficient only when the number of distinct subproblems is significantly smaller than the total number of subproblems.
The amount of digital data in the world has been growing at a rate that defies human comprehension. The world's data storage capacity has doubled about every three years since the 1980s. During the time it will take you to read this chapter, approximately 10**18 bits of data will be added to the world's store. It's not easy to relate to a number that large. One way to think about it is that 10**18 Canadian pennies would have a surface area roughly twice that of the Earth.

Of course, more data does not always lead to more useful information. Evolution is a slow process, and the ability of the human mind to assimilate data has, alas, not doubled every three years. One approach that the world is using to attempt to exploit what has come to be known as "big data" is statistical machine learning.

Machine learning is hard to define. One of the earliest definitions was proposed by the American electrical engineer and computer scientist Arthur Samuel, who defined it as a "field of study that gives computers the ability to learn without being explicitly programmed." Of course, in some sense, every useful program learns something. For example, an implementation of Newton's method learns the roots of a polynomial.

Humans learn things in two ways: memorization and generalization. We use memorization to accumulate individual facts. In England, for example, primary school students might learn a list of English monarchs. Humans use generalization to deduce new facts from old facts. A student of political science, for example, might observe the behavior of a large number of politicians and generalize to conclude that all politicians are likely to make decisions intended to enhance their chances of staying in office.

When computer scientists speak about machine learning, they most often mean the field of writing programs that automatically learn to make useful inferences from implicit patterns in data. For example, linear regression (discussed in an earlier chapter) learns a curve that is a model of a collection of examples. That model can then be used to make predictions about previously unseen examples. In general, machine learning involves observing a set of examples that represent incomplete information about some statistical phenomenon, and then attempting to infer something about the process that generated those examples. The examples are frequently called training data.

Samuel is probably best known as the author of a program that played checkers. The program, which he started working on in the 1950s and continued to work on into the 1970s, was impressive for its time, though not particularly good by modern standards. However, while working on it Samuel invented several techniques that are still used today. Among other things, Samuel's checker-playing program was quite possibly the first program ever written that improved based upon "experience."
Suppose, for example, you were given the following two sets of people:

    A: {Abraham Lincoln, George Washington, Charles de Gaulle}
    B: {Benjamin Harrison, James Madison, Louis Napoleon}

Now, suppose that you were provided with the following partial descriptions of each of them:

    Abraham Lincoln: American, President, 193 cm tall
    George Washington: American, President, 189 cm tall
    Benjamin Harrison: American, President, 168 cm tall
    James Madison: American, President, 163 cm tall
    Louis Napoleon: French, President, 169 cm tall
    Charles de Gaulle: French, President, 196 cm tall

Based on this incomplete information about these historical figures, you might infer that the process that assigned these examples to the set labeled A or the set labeled B involved separating tall presidents from shorter ones.

The incomplete information is typically called a feature vector. Each element of the vector describes some aspect (i.e., feature) of the example.

There are a large number of different approaches to machine learning, but all try to learn a model that is a generalization of the provided examples. All have three components:

- a representation of the model,
- an objective function for assessing the goodness of the model, and
- an optimization method for learning a model that minimizes or maximizes the value of the objective function.

Broadly speaking, machine learning algorithms can be thought of as either supervised or unsupervised.

In supervised learning, we start with a set of feature vector/label pairs. The goal is to derive from these examples a rule that predicts the label associated with a previously unseen feature vector. For example, given the sets A and B, a learning algorithm might infer that all tall presidents should be labeled A and all short presidents labeled B. When asked to assign a label to

    Thomas Jefferson: American, President, 189 cm tall

it would then choose label A.

Supervised machine learning is broadly used in practice for such tasks as detecting fraudulent use of credit cards and recommending movies to people. The best algorithms are quite sophisticated, and understanding them requires a level of mathematical sophistication well beyond that assumed for this book. Consequently, we will not cover them here.

Much of the machine learning literature uses the word "class" rather than "label." Since we use the word "class" for something else in this book, we will stick to using "label" for this concept.
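The kind of rule such a learner might infer from A and B can be mimicked with a one-line threshold classifier. A toy sketch; the function name and the particular threshold are ours, not the book's:

def labelByHeight(height):
    """A stand-in for a learned rule: labels an example 'A' (tall
       presidents) or 'B' (short presidents) by thresholding height
       in cm. 179 is the midpoint between the tallest B example
       (169 cm) and the shortest A example (189 cm)."""
    if height >= 179:
        return 'A'
    return 'B'

print labelByHeight(193) #Abraham Lincoln: prints 'A'
print labelByHeight(163) #James Madison: prints 'B'
print labelByHeight(189) #Thomas Jefferson: prints 'A'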
In unsupervised learning, we are given a set of feature vectors but no labels. The goal of unsupervised learning is to uncover latent structure in the set of feature vectors. For example, given the set of presidential feature vectors, an unsupervised learning algorithm might separate the presidents into tall and short, or perhaps into American and French. The most popular unsupervised learning techniques are designed to find clusters of similar feature vectors. Geneticists, for example, use clustering to find groups of related genes. Many popular clustering methods are surprisingly simple. We will present the most widely used algorithm later in this chapter. First, however, we want to say a few words about feature extraction.

Feature vectors

The concept of signal-to-noise ratio (SNR) is used in many branches of engineering and science. The precise definition varies across applications, but the basic idea is simple. Think of it as the ratio of useful input to irrelevant input. In a restaurant, the signal might be the voice of your dinner date, and the noise the voices of the other diners. If we were trying to predict which students would do well in a programming course, previous programming experience and mathematical aptitude would be part of the signal, but gender merely noise. Separating the signal from the noise is not always easy. And when it is done poorly, the noise can be a distraction that obscures the truth in the signal.

The purpose of feature extraction is to separate those features in the available data that contribute to the signal from those that are merely noise. Failure to do an adequate job of this introduces two kinds of problems:

1. Irrelevant features can lead to a bad model. The danger of this is particularly high when the dimensionality of the data (i.e., the number of different features) is large relative to the number of samples.

2. Irrelevant features can greatly slow the learning process. Machine learning algorithms are often computationally intensive, and complexity grows with both the number of examples and the number of features.

The goal of feature extraction is to reduce the vast amount of information that might be available in examples to information from which it will be productive to generalize. Imagine, for example, that your goal is to learn a model that will predict whether a person likes to drink wine. Some attributes, e.g., age and the nation in which they live, are likely to be relevant. Other attributes, e.g., whether they are left-handed, are less likely to be relevant.

Feature extraction is difficult. In the context of supervised learning, one can try to select those features that are correlated with the labels of the examples. In

Unless your dinner date is exceedingly boring. In which case, your dinner date's conversation becomes the noise, and the conversation at the next table the signal.
unsupervised learning, the problem is harder. Typically, we choose features based upon our intuition about which features might be relevant to the kinds of structure we would like to find.

Consider the following table of feature vectors and the label (reptile or not) with which each vector is associated:

    Name             Egg-laying  Scales  Poisonous  Cold-blooded  # Legs  Reptile
    Cobra            True        True    True       True          0       Yes
    Rattlesnake      True        True    True       True          0       Yes
    Boa constrictor  False       True    False      True          0       Yes
    Alligator        True        True    False      True          4       Yes
    Dart frog        True        False   True       False         4       No
    Salmon           True        True    False      True          0       No
    Python           True        True    False      True          0       Yes

Figure: Name, features, and labels for assorted animals

A supervised machine learning algorithm (or a human), given only the information about cobras, cannot do much more than to remember the fact that a cobra is a reptile. Now, let's add the information about rattlesnakes. We can begin to generalize, and might infer the rule that an animal is a reptile if it lays eggs, has scales, is poisonous, is cold-blooded, and has no legs.

Now, suppose we are asked to decide if a boa constrictor is a reptile. We might answer "no," because a boa constrictor is neither poisonous nor egg-laying. But this would be the wrong answer. Of course, it is hardly surprising that attempting to generalize from two examples might lead us astray. Once we include the boa constrictor in our training data, we might formulate the new rule that an animal is a reptile if it has scales, is cold-blooded, and is legless. In doing so, we are discarding the features egg-laying and poisonous as irrelevant to the classification problem.

If we use the new rule to classify the alligator, we conclude incorrectly that since it has legs it is not a reptile. Once we include the alligator in the training data, we reformulate the rule to allow reptiles to have either no legs or four legs. When we look at the dart frog, we correctly conclude that it is not a reptile, since it is not cold-blooded. However, when we use our current rule to classify the salmon, we incorrectly conclude that a salmon is a reptile. We can add yet more complexity to our rule, to separate salmon from alligators, but it's a losing battle. There is no way to modify our rule so that it will correctly classify both salmon and pythons, since the feature vectors of these two species are identical.

This kind of problem is more common than not in machine learning. It is quite rare to have feature vectors that contain enough information to classify things perfectly. In this case, the problem is that we don't have enough features. If we
had included the fact that reptile eggs have amnios, we could devise a rule that separates reptiles from fish. Unfortunately, in most practical applications of machine learning it is not possible to construct feature vectors that allow for perfect discrimination.

Does this mean that we should give up because all of the available features are mere noise? No. In this case the features scales and cold-blooded are necessary conditions for being a reptile, but not sufficient conditions. The rule "has scales and is cold-blooded" will not yield any false negatives, i.e., any animal classified as a non-reptile will indeed not be a reptile. However, it will yield some false positives, i.e., some of the animals classified as reptiles will not be reptiles.

Distance metrics

In the table above we described animals using four binary features and one integer feature. Suppose we want to use these features to evaluate the similarity of two animals, e.g., to ask whether a boa constrictor is more similar to a rattlesnake or to a dart frog. The first step in doing this kind of comparison is converting the features for each animal into a sequence of numbers. If we say True = 1 and False = 0, we get the following feature vectors:

    Rattlesnake:     [1,1,1,1,0]
    Boa constrictor: [0,1,0,1,0]
    Dart frog:       [1,0,1,0,4]

There are many different ways to compare the similarity of vectors of numbers. The most commonly used metrics for comparing equal-length vectors are based on the Minkowski distance:

$$\mathit{distance}(V_1, V_2, p) = \left(\sum_{i=1}^{\mathit{len}} \left|V_{1_i} - V_{2_i}\right|^p\right)^{1/p}$$

where len is the length of the vectors.

The parameter p defines the kinds of paths that can be followed in traversing the distance between the vectors V1 and V2. This can be most easily visualized if the vectors are of length two and represent Cartesian coordinates. Imagine a circle at a corner of a street grid, a star three blocks straight up from it, and a cross two blocks up and two blocks over. Is the circle closer to the cross or to the star? It depends. If we can travel in a straight line, the cross is closer. The Pythagorean theorem tells us that the cross is the square root of 8 units from the circle, about 2.8 units, whereas we can easily see that the star is 3 units from the circle.

Amnios are protective outer layers that allow eggs to be laid on land rather than in the water.

This question is not quite as silly as it sounds. A naturalist and a toxicologist (or someone looking to enhance the effectiveness of a blow dart) might give different answers to this question.
These straight-line distances are called Euclidean distances, and correspond to using the Minkowski distance with p = 2. But imagine that the lines in the picture correspond to streets, and that one has to stay on the streets to get from one place to another. In that case, the star remains 3 units from the circle, but the cross is now 4 units away. These distances are called Manhattan distances, and correspond to using the Minkowski distance with p = 1. The figure below contains an implementation of the Minkowski distance.

def minkowskiDist(v1, v2, p):
    """Assumes v1 and v2 are equal-length arrays of numbers
       Returns Minkowski distance of order p between v1 and v2"""
    dist = 0.0
    for i in range(len(v1)):
        dist += abs(v1[i] - v2[i])**p
    return dist**(1.0/p)

Figure: Minkowski distance

The next figure contains the class Animal. It defines the distance between two animals as the Euclidean distance between the feature vectors associated with the animals.

class Animal(object):
    def __init__(self, name, features):
        """Assumes name a string; features a list of numbers"""
        self.name = name
        self.features = pylab.array(features)

    def getName(self):
        return self.name

    def getFeatures(self):
        return self.features

    def distance(self, other):
        """Assumes other an Animal
           Returns the Euclidean distance between feature vectors
           of self and other"""
        return minkowskiDist(self.getFeatures(),
                             other.getFeatures(), 2)

Figure: Class Animal

Manhattan Island is the most densely populated borough of New York City. On most of the island, the streets are laid out in a rectangular grid, so using the Minkowski distance with p = 1 provides a good approximation of the distance one has to walk to get from one place (say the Museum of Modern Art on 53rd Street) to another (say the American Folk Art Museum on Columbus Avenue). Driving in Manhattan is a totally different story.
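To see how the choice of p matters for these particular vectors, one can evaluate minkowskiDist under both metrics; a small check (the printed values follow directly from the definition):

rattlesnake = [1, 1, 1, 1, 0]
dartFrog = [1, 0, 1, 0, 4]
for p in (1, 2):
    #p = 1 is Manhattan distance, p = 2 is Euclidean distance
    print 'p =', p, 'distance =',\
          round(minkowskiDist(rattlesnake, dartFrog, p), 3)
#prints 6.0 for p = 1 and 4.243 for p = 2

Notice how much the single large feature (the number of legs) contributes under either metric, a point the text returns to shortly.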
The next figure contains a function that compares a list of animals to each other and produces a table showing the pairwise distances.

def compareAnimals(animals, precision):
    """Assumes animals is a list of animals, precision an int >= 0
       Builds a table of Euclidean distance between each animal"""
    #Get labels for columns and rows
    columnLabels = []
    for a in animals:
        columnLabels.append(a.getName())
    rowLabels = columnLabels[:]
    tableVals = []
    #Get distances between pairs of animals
    #For each row
    for a1 in animals:
        row = []
        #For each column
        for a2 in animals:
            if a1 == a2:
                row.append('--')
            else:
                distance = a1.distance(a2)
                row.append(str(round(distance, precision)))
        tableVals.append(row)
    #Produce table
    table = pylab.table(rowLabels = rowLabels,
                        colLabels = columnLabels,
                        cellText = tableVals,
                        cellLoc = 'center',
                        loc = 'center',
                        colWidths = [0.2]*len(animals))
    table.scale(1, 2.5)
    pylab.axis('off') #don't display x and y-axes
    pylab.savefig('distances')

Figure: Build a table of distances between pairs of animals

The code uses a pylab plotting facility that we have not previously used: table. The table function produces a plot that (surprise!) looks like a table. The keyword arguments rowLabels and colLabels are used to supply the labels (in this example the names of the animals) for the rows and columns. The keyword argument cellText is used to supply the values appearing in the cells of the table. In the example, cellText is bound to tableVals, which is a list of lists of strings. Each element in tableVals is a list of the values for the cells in one row of the table. The keyword argument cellLoc is used to specify where in each cell the text should appear, and the keyword argument loc is used to specify where in the figure the table itself should appear. The last keyword parameter used in the example is colWidths. It is bound to a list of floats giving the width (in inches) of each column in the table. The code table.scale(1, 2.5) instructs pylab to leave the horizontal width of the cells unchanged, but to increase the height of the cells by a factor of 2.5 (so the tables look prettier).
If we run the code

    rattlesnake = Animal('rattlesnake', [1,1,1,1,0])
    boa = Animal('boa\nconstrictor', [0,1,0,1,0])
    dartFrog = Animal('dart frog', [1,0,1,0,4])
    animals = [rattlesnake, boa, dartFrog]
    compareAnimals(animals, 3)

it produces a figure containing the table

                     rattlesnake  boa constrictor  dart frog
    rattlesnake          --            1.414         4.243
    boa constrictor     1.414           --           4.472
    dart frog           4.243          4.472          --

As you probably expected, the distance between the rattlesnake and the boa constrictor is less than that between either of the snakes and the dart frog. Notice, by the way, that the dart frog does seem to be a bit closer to the rattlesnake than to the boa.

Now, let's add to the bottom of the above code the lines

    alligator = Animal('alligator', [1,1,0,1,4])
    animals.append(alligator)
    compareAnimals(animals, 3)

It produces the table

                     rattlesnake  boa constrictor  dart frog  alligator
    rattlesnake          --            1.414         4.243      4.123
    boa constrictor     1.414           --           4.472      4.123
    dart frog           4.243          4.472          --        1.732
    alligator           4.123          4.123         1.732       --

Perhaps you're surprised that the alligator is considerably closer to the dart frog than to either the rattlesnake or the boa constrictor. Take a minute to think about why.

The feature vector for the alligator differs from that of the rattlesnake in two places: whether it is poisonous and the number of legs. The feature vector for the alligator differs from that of the dart frog in three places: whether it is poisonous, whether it has scales, and whether it is cold-blooded. Yet according to our distance metric the alligator is more like the dart frog than like the rattlesnake. What's going on?

The root of the problem is that the different features have different ranges of values. All but one of the features range between 0 and 1, but the number of legs ranges from 0 to 4. This means that when we calculate the Euclidean distance the number of legs gets a disproportionate weight. Let's see what
happens if we turn the number of legs into a binary feature, with a value of 0 if the animal is legless and 1 otherwise:

                     rattlesnake  boa constrictor  dart frog  alligator
    rattlesnake          --            1.414         1.732      1.414
    boa constrictor     1.414           --           2.236      1.414
    dart frog           1.732          2.236          --        1.732
    alligator           1.414          1.414         1.732       --

This looks a lot more plausible.

Of course, it is not always convenient to use only binary features. In a later section we will present a more general approach to dealing with differences in scale among features.

Clustering

Clustering can be defined as the process of organizing objects into groups whose members are similar in some way. A key issue is defining the meaning of "similar." Consider a plot showing the height and weight of a number of people, and whether or not each is wearing a striped shirt. If we want to cluster people by height, there are two obvious clusters, delimited by a horizontal line. If we want to cluster people by weight, there are two different obvious clusters, delimited by a vertical line. If we want to cluster people based on their shirt, there is yet a third clustering. Notice, by the way, that this last division is not linear, i.e., we cannot separate the people wearing striped shirts from the others using a single straight line.

Clustering is an optimization problem. The goal is to find a set of clusters that optimizes an objective function, subject to some set of constraints. Given a distance metric that can be used to decide how close two examples are to each other, we need to define an objective function that minimizes the distance between examples in the same cluster, i.e., minimizes the dissimilarity of the examples within a cluster. As we will see later, the exact definition of the objective function can greatly influence the outcome.

A good measure of how close the examples within a single cluster, c, are to each other is variance. To compute the variance of the examples within a cluster, we
first compute the mean of the feature vectors of all the examples in the cluster. If V is a list of feature vectors, each of which is an array of numbers, the mean (more precisely the Euclidean mean) is the value of the expression sum(V)/float(len(V)). Given the mean and a metric for computing the distance between feature vectors, the variance of a cluster is

$$\mathit{variance}(c) = \sum_{e \in c} \mathit{distance}(\mathit{mean}(c), e)^2$$

Notice that the variance is not normalized by the size of the cluster, so clusters with more points are likely to look less cohesive according to this measure. If one wants to compare the coherence of two clusters of different sizes, one needs to divide the variance of each by the size of the cluster.

The definition of variance within a single cluster, c, can be extended to define a dissimilarity metric for a set of clusters, C:

$$\mathit{dissimilarity}(C) = \sum_{c \in C} \mathit{variance}(c)$$

Notice that since we don't divide the variance by the size of the cluster, a large incoherent cluster increases the value of dissimilarity(C) more than a small incoherent cluster does.

So, is the optimization problem to find a set of clusters, C, such that dissimilarity(C) is minimized? Not exactly. It can easily be minimized by putting each example in its own cluster. We need to add some constraint. For example, we could put a constraint on the distance between clusters or require that the maximum number of clusters is some constant k.

In general, solving this optimization problem is computationally prohibitive for most interesting problems. Consequently, people rely on greedy algorithms that provide approximate solutions. Later in this chapter we present one such algorithm, k-means clustering. But first we will introduce some abstractions that are useful for implementing that algorithm (and other clustering algorithms as well).
Types Example and Cluster

The class Example will be used to build the samples to be clustered. Associated with each example is a name, a feature vector, and an optional label. The distance method returns the Euclidean distance between two examples.

class Example(object):
    def __init__(self, name, features, label = None):
        #Assumes features is an array of numbers
        self.name = name
        self.features = features
        self.label = label

    def dimensionality(self):
        return len(self.features)

    def getFeatures(self):
        return self.features[:]

    def getLabel(self):
        return self.label

    def getName(self):
        return self.name

    def distance(self, other):
        return minkowskiDist(self.features, other.getFeatures(), 2)

    def __str__(self):
        return self.name + ':' + str(self.features) + ':'\
               + str(self.label)

Figure: Class Example

The class Cluster is slightly more complex. Think of a cluster as a set of examples. The two interesting methods in Cluster are computeCentroid and variance. Think of the centroid of a cluster as its center of mass. The method computeCentroid returns an example with a feature vector equal to the Euclidean mean of the feature vectors of the examples in the cluster. The method variance provides a measure of the coherence of the cluster.
class Cluster(object):
    def __init__(self, examples, exampleType):
        """Assumes examples is a list of examples of type exampleType"""
        self.examples = examples
        self.exampleType = exampleType
        self.centroid = self.computeCentroid()

    def update(self, examples):
        """Replace the examples in the cluster by new examples
           Return how much the centroid has changed"""
        oldCentroid = self.centroid
        self.examples = examples
        if len(examples) > 0:
            self.centroid = self.computeCentroid()
            return oldCentroid.distance(self.centroid)
        else:
            return 0.0

    def members(self):
        for e in self.examples:
            yield e

    def size(self):
        return len(self.examples)

    def getCentroid(self):
        return self.centroid

    def computeCentroid(self):
        dim = self.examples[0].dimensionality()
        totVals = pylab.array([0.0]*dim)
        for e in self.examples:
            totVals += e.getFeatures()
        centroid = self.exampleType('centroid',
                                    totVals/float(len(self.examples)))
        return centroid

    def variance(self):
        totDist = 0.0
        for e in self.examples:
            totDist += (e.distance(self.centroid))**2
        return totDist**0.5

    def __str__(self):
        names = []
        for e in self.examples:
            names.append(e.getName())
        names.sort()
        result = 'Cluster with centroid '\
                 + str(self.centroid.getFeatures()) + ' contains:\n  '
        for e in names:
            result = result + e + ', '
        return result[:-2]

Figure: Class Cluster
K-means clustering

K-means clustering is probably the most widely used clustering method. Its goal is to partition a set of examples into k clusters such that each example is in the cluster whose centroid is the closest centroid to that example, and the dissimilarity of the set of clusters is minimized.

Unfortunately, finding an optimal solution to this problem on a large dataset is computationally intractable. Fortunately, there is an efficient greedy algorithm that can be used to find a useful approximation. It is described by the pseudocode

    randomly choose k examples as initial centroids
    while true:
        1) create k clusters by assigning each example to closest centroid
        2) compute k new centroids by averaging the examples in each cluster
        3) if none of the centroids differ from the previous iteration:
               return the current set of clusters

The complexity of step 1 is O(k*n*d), where k is the number of clusters, n is the number of examples, and d the time required to compute the distance between a pair of examples. The complexity of step 2 is O(n), and the complexity of step 3 is O(k). Hence, the complexity of a single iteration is O(k*n*d). If the examples are compared using the Minkowski distance, d is linear in the length of the feature vector. Of course, the complexity of the entire algorithm depends upon the number of iterations. That is not easy to characterize, but suffice it to say that it is usually small.

One problem with the k-means algorithm is that it is nondeterministic: the value returned depends upon the initial set of randomly chosen centroids. If a particularly unfortunate set of initial centroids is chosen, the algorithm might settle into a local optimum that is far from the global optimum. In practice, this problem is typically addressed by running k-means multiple times with randomly chosen initial centroids. We then choose the solution with the minimum dissimilarity of clusters.

The figure below contains a straightforward translation of the pseudocode describing k-means into Python. It uses random.sample(examples, k) to get the initial centroids. This invocation returns a list of k randomly chosen distinct elements from the list examples.

Though k-means clustering is probably the most commonly used clustering method, it is not the most appropriate method in all situations. Two other widely used methods, not covered in this book, are hierarchical clustering and EM-clustering.

The most widely used k-means algorithm is attributed to James McQueen, and was first published in 1967. However, other approaches to k-means clustering were used as early as the 1950s.

Unfortunately, in many applications we need to use a distance metric, e.g., earth-movers distance or dynamic-time-warping distance, that has a higher computational complexity.
def kmeans(examples, exampleType, k, verbose):
    """Assumes examples is a list of examples of type exampleType,
         k is a positive int, verbose is a Boolean
       Returns a list containing k clusters. If verbose is True it
         prints result of each iteration of k-means"""
    #Get k randomly chosen initial centroids
    initialCentroids = random.sample(examples, k)

    #Create a singleton cluster for each centroid
    clusters = []
    for e in initialCentroids:
        clusters.append(Cluster([e], exampleType))

    #Iterate until centroids do not change
    converged = False
    numIterations = 0
    while not converged:
        numIterations += 1
        #Create a list containing k distinct empty lists
        newClusters = []
        for i in range(k):
            newClusters.append([])

        #Associate each example with closest centroid
        for e in examples:
            #Find the centroid closest to e
            smallestDistance = e.distance(clusters[0].getCentroid())
            index = 0
            for i in range(1, k):
                distance = e.distance(clusters[i].getCentroid())
                if distance < smallestDistance:
                    smallestDistance = distance
                    index = i
            #Add e to the list of examples for the appropriate cluster
            newClusters[index].append(e)

        #Update each cluster; check if a centroid has changed
        converged = True
        for i in range(len(clusters)):
            if clusters[i].update(newClusters[i]) > 0.0:
                converged = False
        if verbose:
            print 'Iteration #' + str(numIterations)
            for c in clusters:
                print c
            print '' #add blank line
    return clusters

Figure: K-means clustering

The next figure contains a function, trykmeans, that calls kmeans multiple times and selects the result with the lowest dissimilarity.
def dissimilarity(clusters):
    totDist = 0.0
    for c in clusters:
        totDist += c.variance()
    return totDist

def trykmeans(examples, exampleType, numClusters, numTrials,
              verbose = False):
    """Calls kmeans numTrials times and returns the result with the
       lowest dissimilarity"""
    best = kmeans(examples, exampleType, numClusters, verbose)
    minDissimilarity = dissimilarity(best)
    for trial in range(1, numTrials):
        clusters = kmeans(examples, exampleType, numClusters, verbose)
        currDissimilarity = dissimilarity(clusters)
        if currDissimilarity < minDissimilarity:
            best = clusters
            minDissimilarity = currDissimilarity
    return best

Figure: Finding the best k-means clustering

A contrived example

The figure below contains code that generates, plots, and clusters examples drawn from two distributions.

The function genDistribution generates a list of n examples with two-dimensional feature vectors. The values of the elements of these feature vectors are drawn from normal distributions.

The function plotSamples plots the feature vectors of a set of examples. It uses another pylab plotting feature that we have not yet seen: the function annotate is used to place text next to points on the plot. The first argument is the text, the second argument the point with which the text is associated, and the third argument the location of the text relative to the point with which it is associated.

The function contrivedTest uses genDistribution to create two distributions of ten examples each with the same standard deviation but different means, plots the examples using plotSamples, and then clusters them using trykmeans.
def genDistribution(xMean, xSD, yMean, ySD, n, namePrefix):
    samples = []
    for s in range(n):
        x = random.gauss(xMean, xSD)
        y = random.gauss(yMean, ySD)
        samples.append(Example(namePrefix + str(s), [x, y]))
    return samples

def plotSamples(samples, marker):
    xVals, yVals = [], []
    for s in samples:
        x = s.getFeatures()[0]
        y = s.getFeatures()[1]
        pylab.annotate(s.getName(), xy = (x, y),
                       xytext = (x+0.13, y-0.07),
                       fontsize = 'x-large')
        xVals.append(x)
        yVals.append(y)
    pylab.plot(xVals, yVals, marker)

def contrivedTest(numTrials, k, verbose):
    random.seed(0)
    xMean = 3
    xSD = 1
    yMean = 5
    ySD = 1
    n = 10
    d1Samples = genDistribution(xMean, xSD, yMean, ySD, n, 'A')
    plotSamples(d1Samples, 'k^')
    d2Samples = genDistribution(xMean+3, xSD, yMean+1, ySD, n, 'B')
    plotSamples(d2Samples, 'ro')
    clusters = trykmeans(d1Samples + d2Samples, Example, k,
                         numTrials, verbose)
    print 'Final result'
    for c in clusters:
        print '', c

Figure: A test of k-means

When executed, the call contrivedTest(1, 2, True) produced the plot in the figure

[Figure: Examples from two distributions]
and printed

    [verbose printout of five k-means iterations, each showing the two clusters' centroids and members, ending with a final result in which one cluster contains exactly the A examples and the other exactly the B examples]

Notice that the initial (randomly chosen) centroids led to a highly skewed clustering in which a single cluster contained all but one of the points. By the fifth iteration, however, the centroids had moved to places such that the points from the two distributions were cleanly separated into two clusters. Given that a straight line can be used to separate the points generated from the first distribution from those generated from the second distribution, it is not terribly surprising that k-means converged on this clustering.

When we tried 50 trials rather than 1, by calling contrivedTest(50, 2, False), it printed

    [final result: two clusters whose memberships do not exactly match the two distributions]

This indicates that the solution found using 1 trial, despite perfectly separating the examples by the distribution from which they were chosen, was not as good
(with respect to minimizing the objective function) as one of the solutions found using 50 trials.

Finger exercise: Draw lines on the plot of the two distributions to show the separations found by our two attempts to cluster the points. Do you agree that the solution found using 50 trials is better than the one found using 1 trial?

One of the key issues in using k-means clustering is choosing k. Consider the points in the plot produced by the function contrivedTest2 below, which generates and clusters points from three overlapping Gaussian distributions.

def contrivedTest2(numTrials, k, verbose):
    random.seed(0)
    xMean = 3
    xSD = 1
    yMean = 5
    ySD = 1
    n = 8
    d1Samples = genDistribution(xMean, xSD, yMean, ySD, n, 'A')
    plotSamples(d1Samples, 'k^')
    d2Samples = genDistribution(xMean+3, xSD, yMean, ySD, n, 'B')
    plotSamples(d2Samples, 'ro')
    d3Samples = genDistribution(xMean, xSD, yMean+3, ySD, n, 'C')
    plotSamples(d3Samples, 'gd')
    clusters = trykmeans(d1Samples + d2Samples + d3Samples,
                         Example, k, numTrials, verbose)
    print 'Final result'
    for c in clusters:
        print '', c

Figure: Generating points from three distributions

The invocation contrivedTest2(40, 2, False) prints

    [final result: two clusters]
The invocation contrivedTest2(40, 3, False) prints

    [final result: three clusters]

And the invocation contrivedTest2(40, 6, False) prints

    [final result: six clusters]

The last clustering is the tightest fit, i.e., the clustering has the lowest dissimilarity. Does this mean that it is the "best" fit? Recall that when we looked at linear regression, we observed that by increasing the degree of the polynomial we got a more complex model that provided a tighter fit to the data. We also observed that when we increased the degree of the polynomial we ran the risk of finding a model with poor predictive value, because it overfit the data.

Choosing the right value for k is exactly analogous to choosing the right degree polynomial for a linear regression. By increasing k, we can decrease dissimilarity, at the risk of overfitting. (When k is equal to the number of examples to be clustered, the dissimilarity is zero!) If we have some information about how the examples to be clustered were generated, e.g., chosen from m distributions, we can use that information to choose k. Absent such information, there are a variety of heuristic procedures for choosing k. Going into them is beyond the scope of this book.
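One simple heuristic, sometimes called the elbow method, can nevertheless be sketched with the functions already defined in this chapter: plot the dissimilarity of the best clustering found for each of several values of k, and look for the point where the curve stops dropping sharply. The heuristic and this function are our addition, not the book's:

def elbowPlot(examples, maxK, numTrials):
    """Plots dissimilarity of the best clustering found for each
       k in 1..maxK; a bend ('elbow') in the curve suggests a
       reasonable choice of k"""
    kVals, dissims = [], []
    for k in range(1, maxK + 1):
        clusters = trykmeans(examples, Example, k, numTrials)
        kVals.append(k)
        dissims.append(dissimilarity(clusters))
    pylab.plot(kVals, dissims, 'bo-')
    pylab.xlabel('k')
    pylab.ylabel('dissimilarity of best clustering')

Because dissimilarity always decreases as k grows, the interesting information is in the shape of the curve rather than in its values.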
A less contrived example

Different species of mammals have different eating habits. Some species (e.g., elephants and beavers) eat only plants, others (e.g., lions and tigers) eat only meat, and some (e.g., pigs and humans) eat anything they can get into their mouths. The vegetarian species are called herbivores, the meat eaters are called carnivores, and those species that eat both plants and animals are called omnivores.

Over the millennia, evolution (or some other mysterious process) has equipped species with teeth suitable for consumption of their preferred foods. That raises the question of whether clustering mammals based on their dentition produces clusters that have some relation to their diets.

The table below shows the contents of a file listing some species of mammals, their dental formulas (the first 8 numbers), their average adult weight in pounds, and a code indicating their preferred diet. The comments at the top describe the items associated with each mammal, e.g., the first item following the name is the number of top incisors.

    #Name
    #top incisors
    #top canines
    #top premolars
    #top molars
    #bottom incisors
    #bottom canines
    #bottom premolars
    #bottom molars
    #weight
    #Label: 0=herbivore, 1=carnivore, 2=omnivore
    [one comma-separated line per species follows, giving the eight
     dental counts, the weight, and the diet label for each of:
     badger, bear, beaver, brown bat, cat, cougar, cow, deer, dog,
     fox, fur seal, grey seal, guinea pig, elk, human, jaguar,
     kangaroo, lion, mink, mole, moose, mouse, porcupine, pig,
     rabbit, raccoon, rat, red bat, sea lion, skunk, squirrel,
     woodchuck, wolf]

We included the information about weight because the author has been told on more than one occasion that there is a relationship between his weight and his eating habits.

The figure below contains a function, readMammalData, for reading a file formatted in this way and processing the contents of the file to produce a set of examples representing the information in the file. It first processes the header information at the start of the file to get a count of the number of features to be associated with each example. It then uses the lines corresponding to each species to build three lists:

- speciesNames is a list of the names of the mammals.
- labelList is a list of the labels associated with the mammals.
- featureVals is a list of lists. Each element of featureVals contains the list of values, one for each mammal, for a single feature. The value of the expression featureVals[i][j] is the ith feature of the jth mammal.
The last part of readMammalData uses the values in featureVals to create a list of feature vectors, one for each mammal. (The code could be simplified by not constructing featureVals and instead directly constructing the feature vectors for each mammal. We chose not to do that in anticipation of an enhancement to readMammalData that we make later in this section.)

def readMammalData(fName):
    dataFile = open(fName, 'r')
    numFeatures = 0
    #Process lines at top of file
    for line in dataFile: #Find number of features
        if line[0:6] == '#Label': #indicates end of features
            break
        if line[0:5] != '#Name':
            numFeatures += 1

    #Produce featureVals, speciesNames, and labelList
    featureVals, speciesNames, labelList = [], [], []
    for i in range(numFeatures):
        featureVals.append([])

    #Continue processing lines in file, starting after comments
    for line in dataFile:
        #remove newline, then split on commas
        dataLine = line[:-1].split(',')
        speciesNames.append(dataLine[0])
        classLabel = float(dataLine[-1])
        labelList.append(classLabel)
        for i in range(numFeatures):
            featureVals[i].append(float(dataLine[i+1]))

    #Use featureVals to build list containing the feature vectors
    #for each mammal
    featureVectorList = []
    for mammal in range(len(speciesNames)):
        featureVector = []
        for feature in range(numFeatures):
            featureVector.append(featureVals[feature][mammal])
        featureVectorList.append(featureVector)
    return featureVectorList, labelList, speciesNames

Figure: Read and process file
def buildmammalexamples(featurelist, labellist, speciesnames):
    examples = []
    for i in range(len(speciesnames)):
        features = pylab.array(featurelist[i])
        example = Example(speciesnames[i], features, labellist[i])
        examples.append(example)
    return examples

def testteeth(numclusters, numtrials):
    features, labels, species = readmammaldata('dentalformulas.txt')
    examples = buildmammalexamples(features, labels, species)
    bestclustering = trykmeans(examples, Example, numclusters, numtrials)
    for c in bestclustering:
        names = ''
        for p in c.members():
            names += p.getname() + ', '
        print '\n' + names[:-2] #remove trailing comma and space
        herbivores, carnivores, omnivores = 0, 0, 0
        for p in c.members():
            if p.getlabel() == 0:
                herbivores += 1
            elif p.getlabel() == 1:
                carnivores += 1
            else:
                omnivores += 1
        print herbivores, 'herbivores,', carnivores, 'carnivores,', omnivores, 'omnivores'

Figure: clustering animals

When we executed the code testteeth( , ), it printed:

cow, elk, moose, sea lion
 herbivores,  carnivores,  omnivores

badger, cougar, dog, fox, guinea pig, jaguar, kangaroo, mink, mole, mouse, porcupine, pig, rabbit, raccoon, rat, red bat, skunk, squirrel, woodchuck, wolf
 herbivores,  carnivores,  omnivores

bear, deer, fur seal, grey seal, human, lion
 herbivores,  carnivores,  omnivores

So much for our conjecture that the clustering would be related to the eating habits of the various species. A cursory inspection suggests that we have a clustering totally dominated by the weights of the animals. The problem is that the range of weights is much larger than the range of any of the other features. Therefore, when the Euclidean distance between examples is computed, the only feature that truly matters is weight. We encountered a similar problem earlier when we found that the distance between animals was dominated by the number of legs. We solved the problem there by turning the number of legs into a binary feature (legged or legless). That was fine for that data set, because all of the animals happened to have either zero or four legs. Here, however, there is no way to binarize weight without losing a great deal of information.
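To see how thoroughly a wide-range feature swamps the others, consider the sketch below. It is not from the book; the tooth counts and weights are illustrative stand-ins, not values from the data file.

def euclidean(v1, v2):
    # square root of the sum of the squared componentwise differences
    return sum((a - b)**2 for a, b in zip(v1, v2))**0.5

cow   = [0, 0, 3, 3, 3, 1, 3, 3, 1000.0]   # 8 tooth counts + weight in pounds
deer  = [0, 0, 3, 3, 4, 0, 3, 3, 300.0]
mouse = [1, 0, 0, 3, 1, 0, 0, 3, 0.1]

print(euclidean(cow, deer))    # ~700.0: essentially just the weight difference
print(euclidean(deer, mouse))  # ~299.9: again dominated by weight

However different the dentitions, each tooth-count coordinate contributes at most a few units, while the weight coordinate contributes hundreds; scaling the features, as discussed next, removes this imbalance.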
This is a common problem, which is often addressed by scaling the features so that each feature has a mean of 0 and a standard deviation of 1, as done by the function scalefeatures in the figure below.

def scalefeatures(vals):
    """assumes vals is a sequence of numbers"""
    result = pylab.array(vals)
    mean = sum(result)/float(len(result))
    result = result - mean
    sd = stddev(result)
    result = result/sd
    return result

Figure: scaling attributes

To see the effect of scalefeatures, let's look at the code below.

v1, v2 = [], []
for i in range( ):
    v1.append(random.gauss( , ))
    v2.append(random.gauss( , ))
v1 = scalefeatures(v1)
v2 = scalefeatures(v2)
print 'v1 mean =', round(sum(v1)/len(v1), ), 'v1 standard deviation', round(stddev(v1), )
print 'v2 mean =', round(sum(v2)/len(v2), ), 'v2 standard deviation', round(stddev(v2), )

The code generates two normal distributions with different means and different standard deviations. It then scales each and prints the means and standard deviations of the results. When run, it prints a mean close to 0 and a standard deviation of 1 for each scaled distribution. It's easy to see why the statement result = result - mean ensures that the mean of the returned array will always be close to 0. That the standard deviation will always be 1 is not obvious; it can be shown by a long and tedious chain of algebraic manipulations, which we will not bore you with.

Figure contains a version of readmammaldata that allows scaling of features. The new version of the function testteeth in the same figure shows the result of clustering with and without scaling.

A normal distribution with a mean of 0 and a standard deviation of 1 is called a standard normal distribution. We say "close", because floating point numbers are only an approximation to the reals and the result will not always be exactly 0.
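Here is an independent, self-contained check of the same idea; a minimal sketch that does not rely on pylab or the book's stddev helper (the function name scale and all numeric constants are illustrative). The underlying algebra is simply that subtracting a constant shifts the mean to 0, and that std(X/s) = std(X)/s, so dividing by the standard deviation rescales it to 1.

import random

def scale(vals):
    # z-score scaling: subtract the mean, then divide by the standard deviation
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean)**2 for v in vals) / len(vals))**0.5
    return [(v - mean) / sd for v in vals]

zs = scale([random.gauss(50, 10) for _ in range(100000)])
m = sum(zs) / len(zs)
sd = (sum((z - m)**2 for z in zs) / len(zs))**0.5
print(round(m, 4), round(sd, 4))   # values very close to 0 and 1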
def readmammaldata(fname, scale):
    """assumes scale is a boolean; if True, features are scaled"""
    #start of code is the same as in the previous version
    #use featurevals to build a list containing the feature vectors
    #for each mammal; scale features, if needed
    if scale:
        for i in range(numfeatures):
            featurevals[i] = scalefeatures(featurevals[i])
    #remainder of code is the same as in the previous version

def testteeth(numclusters, numtrials, scale):
    features, labels, species = readmammaldata('dentalformulas.txt', scale)
    examples = buildmammalexamples(features, labels, species)
    #remainder of code is the same as in the previous version

Figure: code that allows scaling of features

When we execute the code

print 'cluster without scaling'
testteeth( , , False)
print '\ncluster with scaling'
testteeth( , , True)

it prints:

cluster without scaling
cow, elk, moose, sea lion
 herbivores,  carnivores,  omnivores
badger, cougar, dog, fox, guinea pig, jaguar, kangaroo, mink, mole, mouse, porcupine, pig, rabbit, raccoon, rat, red bat, skunk, squirrel, woodchuck, wolf
 herbivores,  carnivores,  omnivores
bear, deer, fur seal, grey seal, human, lion
 herbivores,  carnivores,  omnivores

cluster with scaling
cow, deer, elk, moose
 herbivores,  carnivores,  omnivores
guinea pig, kangaroo, mouse, porcupine, rabbit, rat, squirrel, woodchuck
 herbivores,  carnivores,  omnivores
badger, bear, cougar, dog, fox, fur seal, grey seal, human, jaguar, lion, mink, mole, pig, raccoon, red bat, sea lion, skunk, wolf
 herbivores,  carnivores,  omnivores
The clustering with scaling does not perfectly partition the animals based upon their eating habits, but it is certainly correlated with what the animals eat. It does a good job of separating the carnivores from the herbivores, but there is no obvious pattern in where the omnivores appear. This suggests that perhaps features other than dentition and weight might be needed to separate omnivores from herbivores and carnivores.

Wrapping up

In this chapter we've barely scratched the surface of machine learning. We've tried to give you a taste of the kind of thinking involved in using machine learning, in the hope that you will find ways to pursue the topic on your own. The same could be said about many of the other topics presented in this book. We've covered a lot more ground than is typical of introductory computer science courses. You probably found some topics less interesting than others; but we do hope that you encountered at least a few topics you are looking forward to learning more about.

Eye position might be a useful feature, since both omnivores and carnivores typically have eyes in the front of their head, whereas the eyes of herbivores are typically located more towards the side. Among the mammals, only mothers of humans have eyes in the back of their head.
Common operations on numerical types

i + j is the sum of i and j.
i - j is i minus j.
i * j is the product of i and j.
i // j is integer division.
i / j is i divided by j. In Python 2.7, when i and j are both of type int, the result is also an int; otherwise the result is a float.
i % j is the remainder when the int i is divided by the int j.
i ** j is i raised to the power j.
x += i is equivalent to x = x + i. *= and -= work the same way.

Comparison and Boolean operators

x == y returns True if x and y are equal.
x != y returns True if x and y are not equal.
<, >, <=, >= have their usual meanings.
a and b is True if both a and b are True, and False otherwise.
a or b is True if at least one of a or b is True, and False otherwise.
not a is True if a is False, and False if a is True.

Common operations on sequence types

seq[i] returns the ith element in the sequence.
len(seq) returns the length of the sequence.
seq1 + seq2 concatenates the two sequences.
n * seq returns a sequence that repeats seq n times.
seq[start:end] returns a slice of the sequence.
e in seq tests whether e is contained in the sequence.
e not in seq tests whether e is not contained in the sequence.
for e in seq iterates over the elements of the sequence.

Common string methods

s.count(s1) counts how many times the string s1 occurs in s.
s.find(s1) returns the index of the first occurrence of the substring s1 in s; -1 if s1 is not in s.
s.rfind(s1) same as find, but starts from the end of s.
s.index(s1) same as find, but raises an exception if s1 is not in s.
s.rindex(s1) same as index, but starts from the end of s.
s.lower() converts all uppercase letters to lowercase.
s.replace(old, new) replaces all occurrences of the string old with the string new.
s.rstrip() removes trailing white space.
s.split(d) splits s using d as a delimiter. Returns a list of substrings of s.
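A few of the string operations above in action; a small illustrative snippet (print is written in its Python 3 function form so the lines also run on current interpreters, although the reference above follows Python 2 conventions):

s = 'Computation, and programming'
print(s.count('m'))           # 3: 'm' occurs three times in s
print(s.find('and'))          # 13: index of the first occurrence of 'and'
print(s.lower())              # 'computation, and programming'
print(s.replace('and', '&'))  # 'Computation, & programming'
print('padded   '.rstrip())   # 'padded': trailing whitespace removed
print(s.split(','))           # ['Computation', ' and programming']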
Common list methods

L.append(e) adds the object e to the end of L.
L.count(e) returns the number of times that e occurs in L.
L.insert(i, e) inserts the object e into L at index i.
L.extend(L1) appends the items in list L1 to the end of L.
L.remove(e) deletes the first occurrence of e from L.
L.index(e) returns the index of the first occurrence of e in L.
L.pop(i) removes and returns the item at index i; i defaults to -1.
L.sort() has the side effect of sorting the elements of L.
L.reverse() has the side effect of reversing the order of the elements in L.

Common operations on dictionaries

len(d) returns the number of items in d.
d.keys() returns a list containing the keys in d.
d.values() returns a list containing the values in d.
k in d returns True if key k is in d.
d[k] returns the item in d with key k. Raises KeyError if k is not in d.
d.get(k, v) returns d[k] if k is in d, and v otherwise.
d[k] = v associates the value v with the key k. If there is already a value associated with k, that value is replaced.
del d[k] removes the element with key k from d. Raises KeyError if k is not in d.
for k in d iterates over the keys in d.

Comparison of common non-scalar types

type  | type of index    | type of element | examples of literals             | mutable
str   | int              | characters      | '', 'a', 'abc'                   | no
tuple | int              | any type        | (), ( ,), ('abc', )              | no
list  | int              | any type        | [], [ ], ['abc', ]               | yes
dict  | hashable objects | any type        | {}, {'a': }, {'a': , 'b': }      | yes

Common input/output mechanisms

raw_input(msg) prints msg and then returns the value entered as a string.
print s1, ..., sn prints strings s1 to sn with a space between each.
open('filename', 'w') creates a file for writing.
open('filename', 'r') opens an existing file for reading.
open('filename', 'a') opens an existing file for appending.
filehandle.read() returns a string containing the contents of the file.
filehandle.readline() returns the next line in the file.
filehandle.readlines() returns a list containing the lines of the file.
filehandle.write(s) writes the string s to the end of the file.
filehandle.writelines(L) writes each element of L to the file.
filehandle.close() closes the file.
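The dictionary and file operations above, exercised together in a short illustrative snippet (the file name demo.txt is arbitrary; print appears in its Python 3 form):

d = {'a': 1}
d['b'] = 2                    # associate a value with a key
print(len(d), 'b' in d)       # 2 True
print(d.get('c', 0))          # 0: 'c' is absent, so the default is returned
del d['a']

fh = open('demo.txt', 'w')    # create the file for writing
fh.write('line 1\n')
fh.writelines(['line 2\n', 'line 3\n'])
fh.close()
fh = open('demo.txt', 'r')
print(fh.readlines())         # ['line 1\n', 'line 2\n', 'line 3\n']
fh.close()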
__init__ __lt__ built--in method __name__ built--in method __str__ abs built--in function abstract data type see data abstraction abstraction abstraction barrier acceleration due to gravity algorithm aliasing testing for al--khwarizmimuhammad ibn musa american folk art museum annotatepylab plotting anscombef append method approximate solutions arange function arc of graph archimedes arguments array type operators assert statement assertions assignment statement multiple mutation versus unpacking multiple returned values babbagecharles bachelierlouis backtracking bar chart baseball bellmanrichard benford' law bernoullijacob bernoulli' theorem bible big notation see computational complexity binary feature binary number binary search binary search debugging technique binary tree bindingof names bisection search bit bizarre looking plot black--box testing see testingblack--box blocks of code boeskyivan boolean expression compound short--circuit evaluation boxgeorge branching programs breadth--first search (bfs) break statement brownrita mae brownrobert brownian motion buffon bug covert intermittent origin of word overt persistent built--in functions abs help id input isinstance len list map max min range raw_input round sorted sum
index type xrange byte ++ cartesian coordinates case--sensitivity causal nondeterminism centroid child node churchalonzo church--turing thesis chutes and ladders class variable classes - __init__ method __name__ method __str__ method abstract attribute attribute reference class variable data attribute defining definition dot notation inheritance instance instance variable instantiation isinstance function isinstance vs type method attribute overriding attributes printing instances self subclass superclass type hierarchy type vs isinstance client close method for files clu clustering coefficient of variation command see statement comment in programs compiler complexity classes - computation computational complexity - amortized analysis asymptotic notation average--case best--case big notation big theta notation constant expected--case exponential inherently exponential linear logarithmic log--linear lower bound polynomial pseudo polynomial quadratic rules of thumb for expressing tight bound time--space tradeoff upper bound worst--case concatenation (+appendvs lists sequence types tuples conceptual complexity conjunct copenhagen doctrine copy standard library module correlation craps cross validation data abstraction - datetime standard library module debugging - stochastic programs decimal numbers decision tree - decomposition decrementing function deepcopy function default parameter values
index defensive programming dental formula depth--first search (dfs) destination node deterministic program dict type - adding an element allowable keys deleting an element keys keys method values method dictionary see dict type dijkstraedsger dimensionalityof data disjunct dispersion dissimilarity metric distributions bell curve see distributionsnormal benford' empirical rule for normal gaussian see distributionsnormal memoryless property normal - uniform divide--and--conquer algorithms divide--and--conquer problem solving docstring don' pass line dot notation dr pangloss dynamic programming - dynamic--time--warping earth--movers distance edge of graph efficient programs einsteinalbert elastic limit of springs elif else encapsulation eniac error bars escape character euclid euclidean distance euclidean mean eulerleonhard except block exceptions - built--in assertionerror indexerror nameerror typeerror valueerror built--in class handling - raising try-except unhandled exhaustive enumeration algorithms square root algorithm exponential decay exponential growth expression extend method extending list factorial iterative implementation recursive implementation false negative false positive feature extraction feature vector fibonacci poem fibonacci sequence - dynamic programming implementation recursive implementation file system files - appending close method file handle open function reading write method
index writing first--class values fitting curve to data - coefficient of determination ( ) exponential with polyfit least--squares objective function linear regression objective function, overfitting polyfit fixed--program computers float type see floating point floating point - exponent precision reals vs rounded value rounding errors significant digits floppy disk flow of control for loop for statement generators franklinbenjamin function actual parameter arguments as object - as parameter call class as parameter default parameter values defining invocation keyword argument positional parameter binding gambler' fallacy gaussian distribution see distributionsnormal generalization generator geometric distribution geometric progression glass--box testing see testingglass--box global optimum global statement global variable graph - adjacency list representation adjacency matrix representation breadth--first search (bfs) depth--first search (dfs) digraph directed graph edge graph theory node problems cliques min cut shortest path - shortest weighted path weighted grauntjohn gravityacceleration due to greedy algorithm guess--and--check algorithms halting problem hamlet hand simulation hashing - collision hash buckets hash function hash tables probability of collisions help built--in function helper functions heron of alexandria higher--order functions higher--order programming histogram hoarec holdout set holmessherlock hooke' law hoppergrace murray hormone replacement therapy housing prices huffdarrell id built--in function idle
index edit menu file menu if statement immutable type import statement in operator indentation of code independent events indexing for sequence types indirection induction inductive definition inferential statistics information hiding input input built--in function raw_input vs instanceof class integrated development environment (ide) interface interpreter introduction to algorithms isinstance built--in function iteration for loop over integers over lists java juliet julius caesar kennedyjoseph keyon plot see plotting in pylablegend function keyword argument keywords --means clustering - knapsack problem - / brute--force solution dynamic programming solution fractional (or continuous) knight capital group knowledgedeclarative vs imperative knuthdonald konigsberg bridges problem label keyword argument lambda abstraction lampsonbutler laplacepierre--simon law of large numbers leafof tree least squares fit len built--in function lengthfor sequence types leonardo of pisa lexical scoping librarystandard pythonsee also standard libarbary modules linear regression liskovbarbara list built--in function list comprehension list type - (concatenationoperator cloning comprehension copying indexing internal representation literals local optimum local variable log function logarithmbase of logarithmic axis logarithmic scaling loop loop invariant lt operator lurking variable machine code machine learning supervised unsupervised manhattan distance manhattan project
In [ ]:
x = -1
label = 'pos' if x > 0 else 'neg'
print(label)

neg

In [ ]:
print('pos' if x > 0 else 'neg')

neg

Basic types

Strings

Strings in Python are immutable:

In [ ]:
string = 'my string'
string[0] = 't'

TypeError: 'str' object does not support item assignment

In [ ]:
string.replace('m', 't')
Out[ ]: 'ty string'

The original string is unchanged:

In [ ]:
string
Out[ ]: 'my string'

A string is iterable:

In [ ]:
for s in 'my string':
    print(s)

m
y
...

Formatting of strings
from datetime import date 'today is str(date today()out[ ]'today is in [ ]'today is {and number {format(date today()[ ]out[ ]'today is and number [ -strings have been introduced in python in [ ]print( 'today is {date today()}'today is check if substring is in string in [ ]if 'subin 'substring'print('true'true there are already many built-in functions for handling strings in python in [ ]dir(list
['__add__''__class__''__contains__''__delattr__''__delitem__''__dir__''__doc__''__eq__''__format__''__ge__''__getattribute__''__getitem__''__gt__''__hash__''__iadd__''__imul__''__init__''__init_subclass__''__iter__''__le__''__len__''__lt__''__mul__''__ne__''__new__''__reduce__''__reduce_ex__''__repr__''__reversed__''__rmul__''__setattr__''__setitem__''__sizeof__''__str__''__subclasshook__''append''clear''copy''count''extend''index''insert''pop''remove''reverse''sort'in [ ]dir(strout[ ]['__add__''__class__''__contains__''__delattr__''__dir__''__doc__''__eq__''__format__''__ge__''__getattribute__''__getitem__''__getnewargs__''__gt__''__hash__''__init__''__init_subclass__''__iter__''__le__'
'__len__''__lt__''__mod__''__mul__''__ne__''__new__''__reduce__''__reduce_ex__''__repr__''__rmod__''__rmul__''__setattr__''__sizeof__''__str__''__subclasshook__''capitalize''casefold''center''count''encode''endswith''expandtabs''find''format''format_map''index''isalnum''isalpha''isdecimal''isdigit''isidentifier''islower''isnumeric''isprintable''isspace''istitle''isupper''join''ljust''lower''lstrip''maketrans''partition''replace''rfind''rindex''rjust''rpartition''rsplit''rstrip''split''splitlines''startswith''strip''swapcase''title''translate''upper''zfill'in [ ]'my first sentenceupper(out[ ]'my first sentence
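A few more of the built-in methods from the listing above in action (a small illustrative snippet, not from the original notebook):

words = 'one, two ,three'.split(',')
cleaned = [w.strip() for w in words]         # strip removes surrounding whitespace
print(cleaned)                               # ['one', 'two', 'three']
print('-'.join(cleaned))                     # 'one-two-three'
print('my first sentence'.startswith('my'))  # True
print('my first sentence'.title())           # 'My First Sentence'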
Enum is a data type which links a name to an index. Enums are useful to represent a closed set of options.

In [ ]:
from enum import Enum

class QHBrowserAction(Enum):
    QUERY_BUTTON_CLICKED = 1
    SAVE_BUTTON_CLICKED = 2
    DATE_CHANGED = 3
    QH_NAME_CHANGED = 4
    SLIDER_MOVED = 5

a = QHBrowserAction.DATE_CHANGED
a.name, a.value
Out[ ]: ('DATE_CHANGED', 3)

In [ ]:
a_next = QHBrowserAction(a.value + 1)
a_next
Out[ ]: <QHBrowserAction.QH_NAME_CHANGED: 4>

In [ ]:
if a_next == QHBrowserAction.QH_NAME_CHANGED:
    print('in state {}'.format(a_next.value))

in state 4

Containers

Container data types in Python are dedicated to storing multiple variables of various types. The basic container types are: lists, tuples, sets, dictionaries.

Lists

In [ ]:
my_list = [1, 'a', True]
my_list
Out[ ]: [1, 'a', True]

Lists are 0-indexed and elements are accessed by square brackets:

In [ ]:
my_list[0]
Out[ ]: 1

Lists are mutable:
In [ ]:
my_list[1] = 2
my_list
Out[ ]: [1, 2, True]

In order to extend a list one can either append:

In [ ]:
my_list.append(3)
my_list
Out[ ]: [1, 2, True, 3]

or simply concatenate:

In [ ]:
my_list + [4, 'b']
Out[ ]: [1, 2, True, 3, 4, 'b']

or append elements in place:

In [ ]:
my_list += [4]
my_list

In [ ]:
my_list = my_list + [5]   # one shall not do that
my_list

Be careful with the last assignment: my_list + [5] builds a brand-new list, so every element has to be copied before the name is rebound. This is very inefficient for large lists, whereas += extends the existing list in place.

How to append a list at the end?

In [ ]:
my_list.append([6, 'c'])
my_list

This adds the whole list as a single element, which is not quite what we wanted.

In [ ]:
my_list.extend([7, 'd'])
my_list
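The difference between += and rebinding with + can be made visible with is; a tiny sketch, not from the original notebook:

a = [1, 2]
alias = a
a += [3]                  # in-place extend: a is still the same list object
print(alias, a is alias)  # [1, 2, 3] True

b = [1, 2]
alias = b
b = b + [3]               # builds a new list, then rebinds the name b
print(alias, b is alias)  # [1, 2] False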
import itertools list [[ , , ][ , , ][ ][ , ]merged list(itertools chain(*list )merged out[ ][[ ][ ][ ][ ]which one to choose in order to add elements efficientlylist comprehension old-fashioned way in [ ]my_list [for in range( )my_list append(imy_list out[ ][ one-line list comprehension in [ ]abs( () - out[ ]true in [ ]my_list [ /( + for in range( )my_list out[ ][ in [ ]my_list [ for in range( if my_list out[ ][ generator comprehension
In [ ]:
x = (i**2 for i in range( ))
print(x)

<generator object <genexpr> at 0x...>

Once the generator is exhausted, calling next raises an exception:

In [ ]:
next(x)

StopIteration traceback (most recent call last)
----> next(x)
StopIteration:

In [ ]:
import datetime
str(datetime.datetime.now())

In [ ]:
print(datetime.datetime.now())
for x in ((i + 1)**2 for i in range(int(1e7))):
    x**(-1/2)
print(datetime.datetime.now())

In [ ]:
print(datetime.datetime.now())
lst = [(i + 1)**2 for i in range(int(1e7))]
for x in lst:
    x**(-1/2)
print(datetime.datetime.now())

A generator returns values on demand; there is no need to create a whole table first and then iterate over it.

In [ ]:
x = iter(range(10))
next(x)
Out[ ]: 0

In [ ]:
x = (i**2 for i in range(5))
list(x)
Out[ ]: [0, 1, 4, 9, 16]

Filter, map, reduce
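Before the built-in examples, a minimal sketch of a generator function written with yield; the countdown name is hypothetical, but the on-demand behavior is exactly that of the generator expressions above, and such a generator can feed directly into filter:

def countdown(n):
    # a generator function: each next() resumes here and yields one value
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(next(g))   # 3
print(list(g))   # [2, 1]: the remaining values, produced on demand
print(list(filter(lambda x: x % 2 == 0, countdown(10))))  # [10, 8, 6, 4, 2]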
my_list [- - - - - filter(lambda xx> my_listout[ ]filter returns an iterable generator generator is very important concept in pythonin [ ]for el in filter(lambda xx> ,my_list)print(el in [ ]list(filter(lambda xx> my_list)out[ ][ map in [ ]print(my_listlist(map(lambda xabs( )my_list)[- - - - - out[ ][ map can be applied to many lists in [ ]lst [ , , , , lst [ , , , list(map(lambda xyx+ylst lst )out[ ][ reduce in [ ]sum([ , , , , , , , , , , ]out[ ]
In [ ]:
from functools import reduce
reduce(lambda x, y: x + y, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
Out[ ]: 66

This is the familiar sum $1 + 2 + \dots + n = \frac{n(n + 1)}{2}$.

Iterating over lists:

In [ ]:
i = 0
for el in [-5, -4, -3, -2, -1]:
    print(i, el)
    i += 1

0 -5
1 -4
2 -3
3 -2
4 -1

Iterating with index:

In [ ]:
for index, el in enumerate([-5, -4, -3, -2, -1]):
    print(index, el)

0 -5
1 -4
2 -3
3 -2
4 -1

Iterating over two (many) lists:

In [ ]:
letters = ['a', 'b', 'c', 'd']
numbers = [1, 2, 3, 4]
for l, n in zip(letters, numbers):
    print(l, n)

a 1
b 2
c 3
d 4
list(zip(lettersnumbers)out[ ][(' ' )(' ' )(' ' )(' ' )in [ ]dict(zip(lettersnumbers)out[ ]{' ' ' ' ' ' ' ' in [ ]help(ziphelp on class zip in module builtinsclass zip(objectzip(iter [,iter ]]--zip object return zip object whose __next__(method returns tuple where the -th element comes from the -th iterable argument the __next__(method continues until the shortest iterable in the argument sequence is exhausted and then it raises stopiteration methods defined here__getattribute__(selfname/return getattr(selfname__iter__(self/implement iter(self__new__(*args**kwargsfrom builtins type create and return new object see help(typefor accurate signature __next__(self/implement next(self__reduce__return state information for pickling copying lists in [ ] [ [ 'aprint(xy[' ' [' ' in [ ] copy(out[ ][
[ copy( [ 'aprint(xy[ [' ' in [ ] [[ ' '] copy(equivalent to [: [ 'aprint(xy[[ ' '] [' ' in [ ] [[ ' '] copy( [ ][ 'bprint(xy[[' '' '] [[' '' '] the reason for this behavior is that python performs shallow copy in [ ]from copy import deepcopy [[ ' '] deepcopy(xy[ ][ 'bprint(xy[[ ' '] [[' '' '] sorting lists inplace operations in [ ] [ sort(print(xnone list sort(is an inplace operation in generalinplace operations are efficient as they do not create new copy in memory in [ ] [ sort(print( [ list sorted does create new variable in [ ] [ sorted(xprint( [
[ sorted(xprint( [ in [ ] [ is sorted(xout[ ]false how to sort in reverted order in [ ] [ sort(reverse=trueprint( [ sort nested lists in [ ]employees [( 'john')( 'emily')( 'david')( 'mark')( 'andrew')employees sort(key=lambda xx[ ]employees out[ ][( 'andrew')( 'mark')( 'john')( 'emily')( 'david')in [ ]employees [( 'john')( 'emily')( 'david')( 'mark')( 'andrew')employees sort(key=lambda xx[ ]employees out[ ][( 'andrew')( 'david')( 'emily')( 'john')( 'mark')also with reversed order in [ ]employees [( 'john')( 'emily')( 'david')( 'mark')( 'andrew')employees sort(key=lambda xx[ ]reverse=trueemployees out[ ][( 'david')( 'emily')( 'john')( 'mark')( 'andrew')list extras
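As a follow-up to the key-based sorting above: operator.itemgetter is a standard-library alternative to key=lambda x: x[i]. The employee tuples below are illustrative, not the notebook's values:

from operator import itemgetter

employees = [(1200, 'john'), (800, 'emily'), (1500, 'david')]
print(sorted(employees, key=itemgetter(1)))                # by name, like key=lambda x: x[1]
print(sorted(employees, key=itemgetter(0), reverse=True))  # by salary, descending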
my_list *[' 'my_list out[ ][' '' '' '' '' 'in [ ] in [ , , , , out[ ]true in [ ] [' ' [' ' = out[ ]true in [ ] (' ' (' ' is out[ ]true tuples tuplessimilarly to lists can stores elements of different types in [ ]my_tuple ( , , my_tuple out[ ]( in [ ]my_tuple[ out[ ] unlike the liststuples are immutable in [ ]my_tuple[ ]= typeerror traceback (most recent call lastin ---- my_tuple[ ]= typeerror'tupleobject does not support item assignment
In [ ]:
tuple([1, 2, 3])
Out[ ]: (1, 2, 3)

Sets

A set contains only unique elements; duplicates are discarded. (The set itself is mutable; it is the elements that must be immutable, i.e., hashable. The immutable variant of a set is frozenset.)

In [ ]:
{1, 2, 3, 3}
Out[ ]: {1, 2, 3}

In [ ]:
{1, 1, 2, 2, 3}
Out[ ]: {1, 2, 3}

So this is a neat way of obtaining the unique elements of a list:

In [ ]:
my_list = [1, 2, 2, 3]
set(my_list)
Out[ ]: {1, 2, 3}

or a tuple:

In [ ]:
my_tuple = (1, 2, 2)
set(my_tuple)
Out[ ]: {1, 2}

One can perform set operations on sets ;-)

In [ ]:
a = {1, 2, 3}
b = {2, 3, 4}
print(f'a + b = {a.union(b)}')
print(f'a - b = {a - b}')
print(f'a * b = {a.intersection(b)}')
print(f'a * {{}} = {a.intersection({})}')

a + b = {1, 2, 3, 4}
a - b = {1}
a * b = {2, 3}
a * {} = set()
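A short sketch (not from the original notebook) of the mutable/immutable distinction just mentioned: plain sets support in-place updates, while frozenset is the immutable, hashable variant and can therefore serve as a dictionary key:

s = {1, 2, 3}
s.add(4)                           # plain sets are mutable: in-place update
fs = frozenset(s)                  # immutable, hashable variant
print(fs | {5})                    # set operations still work on frozensets
d = {fs: 'ok'}                     # a frozenset can be a dict key; a plain set cannot
print(d[frozenset({1, 2, 3, 4})])  # 'ok'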
pm {'system''source''i_meas''i_ref'signals pm {'system''source'signals out[ ]{'i_meas''i_ref'in [ ]for in signalsprint(si_meas i_ref in [ ]help(sethelp on class set in module builtinsclass set(objectset(-new empty set object set(iterable-new set object build an unordered collection of unique elements methods defined here__and__(selfvalue/return self&value __contains__x __contains__(yy in __eq__(selfvalue/return self==value __ge__(selfvalue/return self>=value __getattribute__(selfname/return getattr(selfname__gt__(selfvalue/return self>value __iand__(selfvalue/return self&=value __init__(self/*args**kwargsinitialize self see help(type(self)for accurate signature __ior__(selfvalue/return self|=value __isub__(selfvalue/return self-=value __iter__(self/implement iter(self__ixor__(selfvalue/return self^=value __le__(selfvalue/return self<=value __len__(self/return len(self
return len(self__lt__(selfvalue/return self<value __ne__(selfvalue/return self!=value __new__(*args**kwargsfrom builtins type create and return new object see help(typefor accurate signature __or__(selfvalue/return self|value __rand__(selfvalue/return value&self __reduce__return state information for pickling __repr__(self/return repr(self__ror__(selfvalue/return value|self __rsub__(selfvalue/return value-self __rxor__(selfvalue/return value^self __sizeof__s __sizeof__(-size of in memoryin bytes __sub__(selfvalue/return self-value __xor__(selfvalue/return self^value addadd an element to set this has no effect if the element is already present clearremove all elements from this set copyreturn shallow copy of set differencereturn the difference of two or more sets as new set ( all elements that are in this set but not the others difference_updateremove all elements of another set from this set discardremove an element from set if it is member if the element is not memberdo nothing intersectionreturn the intersection of two sets as new set ( all elements that are in both sets intersection_updateupdate set with the intersection of itself and another
isdisjointreturn true if two sets have null intersection issubsetreport whether another set contains this set issupersetreport whether this set contains another set popremove and return an arbitrary set element raises keyerror if the set is empty removeremove an element from setit must be member if the element is not memberraise keyerror symmetric_differencereturn the symmetric difference of two sets as new set ( all elements that are in exactly one of the sets symmetric_difference_updateupdate set with the symmetric difference of itself and another unionreturn the union of sets as new set ( all elements that are in either set updateupdate set with the union of itself and others data and other attributes defined here__hash__ none in [ ]signals[ typeerror traceback (most recent call lastin ---- signals[ typeerror'setobject does not support indexing in [ ]next(iter(signals)out[ ]'i_measin [ ]list(signals)[ out[ ]'i_measunpacking variables
firstsecond [ print(firstsecondvalueerror traceback (most recent call lastin ---- firstsecond [ print(firstsecondvalueerrortoo many values to unpack (expected in [ ]firstsecond ( print(firstsecond in [ ]firstsecond { print(firstsecond in [ ]employees [( 'john')( 'emily')( 'david')( 'mark')( 'andrew')for employee_idemployee_name in employeesprint(employee_idemployee_name john emily david mark andrew dictionaries in [ ]empty_set {type(empty_setout[ ]dict in [ ]empty_set set(type(empty_setout[ ]set in [ ]my_dict {' ' ' ' ' ' ' ' my_dict out[ ]{' ' ' ' ' ' ' '
In [ ]:
my_dict['a']
Out[ ]: 1

In [ ]:
for key in my_dict:
    print(key)

a
...

In [ ]:
for key, value in my_dict.items():
    print(key, value)

a 1
...

A summary of Python containers:

| feature | list | tuple | dict | set |
| purpose | an ordered collection of variables | an ordered collection of variables | an ordered collection of key, value pairs | a collection of variables |
| duplication of values | yes | yes | unique keys, duplicate values | no |
| mutability | yes | no | yes | no |
| creation | [1, 2, 3] | (1, 2, 3) | {'a': 1} | {1, 2, 3} |
| empty container | [] | () | {} | set() |
| comprehension | [i for i in range(5)] | tuple(i for i in range(5)) | {k: v for k, v in zip(['a'], [1])} | {i for i in range(5)} |
| accessing an element | lst[0] | tpl[0] | dct['key'] | not possible |

Functions

In [ ]:
# lambda functions
f = lambda x: x**2
f(2)
Out[ ]: 4

In [ ]:
def f(x):
    return x**2
f(2)
Out[ ]: 4

Arguments
def (abc= )return + + ( , out[ ] in [ ] ( , out[ ] if the number of arguments matchesone can pass list in [ ]lst [ , , (*lstout[ ] or dictionary (provided that key names match the argument namesvery useful for methods with multiple argumentse plottingquerying databasesetc in [ ]dct {' ' ' ' ' ' (**dctout[ ] in ]query_params {'db'"nxcals""signal""i_meas""t_start""today""t_end""tomorrow"call_db(**query_paramsquery_params['db''pmcall_db(**query_paramsdefault argument values in [ ]def (abdc= )return + + + in [ ]def (*args)print(len(args)return args[ ]*args[ ]*args[ ( ' ' out[ ]'aaaaaaaaaa
def (**kwargs)return kwargs[' 'kwargs[' ' ( = = = out[ ] in [ ]def (arg*args**kwargs)return arg sum(argskwargs[' ' ( = out[ ] in [ ]def (ab* )return + + ( , , typeerror traceback (most recent call lastin def (ab* ) return + + ---- ( , , typeerrorf(takes positional arguments but were given in [ ] ( , ,scaling= out[ ] function passed as an argument in [ ]def ( )return ** def (funcx)return func(xg( , out[ ] function can return multiple valuesin fact it returns tuple in [ ]def ()return ' '' ''sf(out[ ](' '' '' '
first list( ()print(firstprint(secondin [ ]first[ in [ ]first out[ ][' ' ' ' recursion factorial of an integer $nis given as\begin{equationnn*( - )*( - )*( - ) \end{equationfor example\begin{equation \end{equationin [ ]def factorial( )if = return elsereturn *factorial( - factorial( out[ ] in [ ]factorial( out[ ] in [ ]factorial(- error:root:internal python error in the inspect module below is the traceback from this internal error
file "/usr/local/lib/swan/ipython/core/interactiveshell py"line in run_code exec(code_objself user_global_nsself user_nsfile ""line in factorial(- file ""line in factorial return *factorial( - file ""line in factorial return *factorial( - file ""line in factorial return *factorial( - [previous line repeated more timesfile ""line in factorial if = recursionerrormaximum recursion depth exceeded in comparison during handling of the above exceptionanother exception occurredtraceback (most recent call last)file "/usr/local/lib/swan/ipython/core/interactiveshell py"line in showtraceback stb value _render_traceback_(attributeerror'recursionerrorobject has no attribute '_render_traceback_during handling of the above exceptionanother exception occurredtraceback (most recent call last)file "/cvmfs/sft cern ch/lcg/releases/python/- / -centos -gcc -opt/lib/python /genericpath "line in exists os stat(pathfilenotfounderror[errno no such file or directory'during handling of the above exceptionanother exception occurredtraceback (most recent call last)file "/usr/local/lib/swan/ipython/core/ultratb py"line in get_records return _fixed_getinnerframes(etbnumber_of_lines_of_contexttb_offsetfile "/usr/local/lib/swan/ipython/core/ultratb py"line in wrapped return (*args**kwargsfile "/usr/local/lib/swan/ipython/core/ultratb py"line in _fixed_getinnerframes records fix_frame_records_filenames(inspect getinnerframes(etbcontext)file "/cvmfs/sft cern ch/lcg/releases/python/- / -centos -gcc -opt/lib/python /inspect py"line in getinnerframes frameinfo (tb tb_frame,getframeinfo(tbcontextfile "/cvmfs/sft cern ch/lcg/releases/python/- / -centos -gcc -opt/lib/python /inspect py"line in getframeinfo filename getsourcefile(frameor getfile(framefile "/cvmfs/sft cern ch/lcg/releases/python/- / -centos -gcc -opt/lib/python /inspect py"line in getsourcefile if os path exists(filename)file "/cvmfs/sft cern ch/lcg/releases/python/- / -centos -gcc -opt/lib/python /genericpath "line in exists os stat(pathkeyboardinterrupt in [ ]def factorial( )if not isinstance(nintor < raise valueerror("argument is not positive integer"if = return elsereturn *factorial( - factorial( out[ ]
in [ ]def flatten_nested_lists( )result [for el in xif isinstance(el(listtuple))result extend(flatten_nested_lists(el)elseresult append(elreturn result in [ ]lst [ lst [ lst append(lst lst typeerror traceback (most recent call lastin lst [ lst [ ---- lst append(*lst lst typeerrorappend(takes exactly one argument ( givenin [ ]lst [ [ , ][ [ ]]flatten_nested_lists(lstout[ ][ fibonacci in [ ]def fib( )if = return elif = return elsereturn fib( - fib( - [fib(ifor in range( )out[ ][ how many times do we calculate fib( )
In [ ]:
arguments = []

def fib(n):
    arguments.append(n)
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

[fib(i) for i in range(10)]

In [ ]:
counts = {i: arguments.count(i) for i in range(max(arguments) + 1)}
counts

Memoization

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. (source: Wikipedia)

In [ ]:
# memoization for fibonacci
memo = {0: 0, 1: 1}
arguments = []

def fib(n):
    arguments.append(n)
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

[fib(i) for i in range(10)]

In [ ]:
counts = {i: arguments.count(i) for i in range(max(arguments) + 1)}
counts
In [ ]:
sum(counts.values())

Decorators

Decorators are functions dedicated to enhancing the functionality of a given function, e.g., checking parameter inputs or formatting input.

In [ ]:
def argument_test_natural_number(f):
    def helper(x):
        if type(x) is int and x > 0:
            return f(x)
        else:
            raise Exception("argument is not an integer")
    return helper

def factorial(n):
    if n == 1:
        return 1
    else:
        return n * factorial(n - 1)

factorial = argument_test_natural_number(factorial)
factorial(5)
Out[ ]: 120

In [ ]:
factorial(-1)

Exception traceback (most recent call last)
----> factorial(-1)
  in helper(x)
----> raise Exception("argument is not an integer")
Exception: argument is not an integer
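One possible cleanup of the decorator above using the standard library (a sketch, not the notebook's code): functools.wraps preserves the wrapped function's name and docstring, the @-syntax replaces the manual reassignment, and functools.lru_cache is a ready-made memoizing decorator that achieves what the hand-written memo dictionary did:

from functools import lru_cache, wraps

def argument_test_natural_number(f):
    @wraps(f)                  # keeps f.__name__, __doc__, etc. on the wrapper
    def helper(x):
        if isinstance(x, int) and x > 0:
            return f(x)
        raise ValueError("argument is not a positive integer")
    return helper

@argument_test_natural_number  # same as factorial = argument_test_natural_number(factorial)
def factorial(n):
    return 1 if n == 1 else n * factorial(n - 1)

@lru_cache(maxsize=None)       # built-in memoization
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(factorial(5), fib(30))   # 120 832040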