Proof. The cdf of $Z = (X - \mu)/\sigma$ is given by
$$\mathbb{P}[Z \leq z] = \mathbb{P}\big[(X-\mu)/\sigma \leq z\big] = \mathbb{P}[X \leq \mu + \sigma z] = \int_{-\infty}^{\mu+\sigma z} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\,\mathrm{d}y = \Phi(z),$$
where we make the change of variable $y = (x-\mu)/\sigma$ in the fourth equation. Hence, $Z \sim \mathcal{N}(0,1)$. □

The rescaling procedure in the theorem above is called standardization. It follows that any $X \sim \mathcal{N}(\mu, \sigma^2)$ can be written as $X = \mu + \sigma Z$, where $Z \sim \mathcal{N}(0,1)$. In other words, any normal random variable can be viewed as an affine transformation (a linear transformation plus a constant) of a standard normal random variable.

We now generalize this to $n$ dimensions. Let $Z_1, \dots, Z_n$ be independent and standard normal random variables. The joint pdf of $\mathbf{Z} = [Z_1,\dots,Z_n]^\top$ is given by
$$f_{\mathbf{Z}}(\mathbf{z}) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z_i^2} = (2\pi)^{-n/2}\, e^{-\frac{1}{2}\mathbf{z}^\top\mathbf{z}}, \quad \mathbf{z} \in \mathbb{R}^n.$$
We write $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_n)$, where $\mathbf{I}_n$ is the $n$-dimensional identity matrix. Consider the affine transformation
$$\mathbf{X} = \boldsymbol{\mu} + \mathbf{B}\mathbf{Z}$$
for some $m \times n$ matrix $\mathbf{B}$ and $m$-dimensional vector $\boldsymbol{\mu}$. By the affine transformation rules for expectation vectors and covariance matrices, $\mathbf{X}$ has expectation vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma} = \mathbf{B}\mathbf{B}^\top$. We say that $\mathbf{X}$ has a multivariate normal or multivariate Gaussian distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, and we write $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$. The following theorem states that any affine combination of independent multivariate normal random vectors is again multivariate normal.

Theorem (Affine Transformation of Normal Random Vectors). Let $\mathbf{X}_1, \dots, \mathbf{X}_r$ be independent $m_i$-dimensional normal random vectors, with $\mathbf{X}_i \sim \mathcal{N}(\boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$, $i = 1,\dots,r$. Then, for any $m$-dimensional vector $\mathbf{a}$ and $m \times m_i$ matrices $\mathbf{B}_1, \dots, \mathbf{B}_r$,
$$\mathbf{a} + \sum_{i=1}^r \mathbf{B}_i\mathbf{X}_i \sim \mathcal{N}\Big(\mathbf{a} + \sum_{i=1}^r \mathbf{B}_i\boldsymbol{\mu}_i,\; \sum_{i=1}^r \mathbf{B}_i\boldsymbol{\Sigma}_i\mathbf{B}_i^\top\Big).$$

Proof. Denote the $m$-dimensional random vector on the left-hand side by $\mathbf{Y}$. By definition, each $\mathbf{X}_i$ can be written as $\boldsymbol{\mu}_i + \mathbf{A}_i\mathbf{Z}_i$, where the $\{\mathbf{Z}_i\}$ are independent (because the $\{\mathbf{X}_i\}$ are independent), so that
$$\mathbf{Y} = \mathbf{a} + \sum_{i=1}^r \mathbf{B}_i(\boldsymbol{\mu}_i + \mathbf{A}_i\mathbf{Z}_i) = \mathbf{a} + \sum_{i=1}^r \mathbf{B}_i\boldsymbol{\mu}_i + \sum_{i=1}^r \mathbf{B}_i\mathbf{A}_i\mathbf{Z}_i,$$
which is an affine combination of independent standard normal random vectors. Hence, $\mathbf{Y}$ is multivariate normal. Its expectation vector and covariance matrix follow easily from the affine transformation rules. □
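The affine construction above translates directly into a recipe for sampling from $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$: factor $\boldsymbol{\Sigma} = \mathbf{B}\mathbf{B}^\top$ (e.g., via Cholesky) and set $\mathbf{X} = \boldsymbol{\mu} + \mathbf{B}\mathbf{Z}$. The following minimal numpy sketch (not from the original text; the mean, covariance, and sample size are arbitrary example values) checks the recipe against the sample mean and sample covariance.

import numpy as np

rng = np.random.default_rng(12345)
mu = np.array([1.0, 2.0])                 # example mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])            # example covariance matrix
B = np.linalg.cholesky(Sigma)             # lower Cholesky factor: Sigma = B B^T

Z = rng.standard_normal((2, 100000))      # columns are iid N(0, I_2) vectors
X = mu[:, None] + B @ Z                   # affine transformation X = mu + B Z

print(X.mean(axis=1))                     # approximately mu
print(np.cov(X))                          # approximately Sigma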
The next theorem shows that the distribution of a subvector of a multivariate normal random vector is again normal.

Theorem (Marginal Distributions of Normal Random Vectors). Let $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ be an $n$-dimensional normal random vector. Decompose $\mathbf{X}$, $\boldsymbol{\mu}$, and $\boldsymbol{\Sigma}$ as
$$\mathbf{X} = \begin{bmatrix}\mathbf{X}_p\\ \mathbf{X}_q\end{bmatrix}, \qquad \boldsymbol{\mu} = \begin{bmatrix}\boldsymbol{\mu}_p\\ \boldsymbol{\mu}_q\end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix}\boldsymbol{\Sigma}_p & \boldsymbol{\Sigma}_r\\ \boldsymbol{\Sigma}_r^\top & \boldsymbol{\Sigma}_q\end{bmatrix},$$
where $\boldsymbol{\Sigma}_p$ is the upper-left $p \times p$ corner of $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}_q$ is the lower-right $q \times q$ corner of $\boldsymbol{\Sigma}$. Then $\mathbf{X}_p \sim \mathcal{N}(\boldsymbol{\mu}_p, \boldsymbol{\Sigma}_p)$.

Proof. We give a proof assuming $\boldsymbol{\Sigma}$ is positive definite. Let $\mathbf{B}\mathbf{B}^\top = \boldsymbol{\Sigma}$ be the (lower) Cholesky decomposition of $\boldsymbol{\Sigma}$. We can write
$$\begin{bmatrix}\mathbf{X}_p\\ \mathbf{X}_q\end{bmatrix} = \begin{bmatrix}\boldsymbol{\mu}_p\\ \boldsymbol{\mu}_q\end{bmatrix} + \begin{bmatrix}\mathbf{B}_p & \mathbf{O}\\ \mathbf{C}_r & \mathbf{C}_q\end{bmatrix}\begin{bmatrix}\mathbf{Z}_p\\ \mathbf{Z}_q\end{bmatrix},$$
where $\mathbf{Z}_p$ and $\mathbf{Z}_q$ are independent $p$- and $q$-dimensional standard normal random vectors. In particular, $\mathbf{X}_p = \boldsymbol{\mu}_p + \mathbf{B}_p\mathbf{Z}_p$, which means that $\mathbf{X}_p \sim \mathcal{N}(\boldsymbol{\mu}_p, \mathbf{B}_p\mathbf{B}_p^\top)$, and $\boldsymbol{\Sigma}_p = \mathbf{B}_p\mathbf{B}_p^\top$. □

By relabeling the elements of $\mathbf{X}$, we see that the theorem implies that any subvector of $\mathbf{X}$ has a multivariate normal distribution; for example, $\mathbf{X}_q \sim \mathcal{N}(\boldsymbol{\mu}_q, \boldsymbol{\Sigma}_q)$. The following theorem shows that not only the marginal distributions of a normal random vector are normal, but also its conditional distributions.

Theorem (Conditional Distributions of Normal Random Vectors). Let $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ be an $n$-dimensional normal random vector with $\det(\boldsymbol{\Sigma}) > 0$. If $\mathbf{X}$ is decomposed as in the previous theorem, then
$$(\mathbf{X}_q \mid \mathbf{X}_p = \mathbf{x}_p) \sim \mathcal{N}\big(\boldsymbol{\mu}_q + \boldsymbol{\Sigma}_r^\top\boldsymbol{\Sigma}_p^{-1}(\mathbf{x}_p - \boldsymbol{\mu}_p),\; \boldsymbol{\Sigma}_q - \boldsymbol{\Sigma}_r^\top\boldsymbol{\Sigma}_p^{-1}\boldsymbol{\Sigma}_r\big).$$
As a consequence, $\mathbf{X}_p$ and $\mathbf{X}_q$ are independent if and only if they are uncorrelated, that is, if $\boldsymbol{\Sigma}_r = \mathbf{O}$ (zero matrix).

Proof. From the Cholesky construction in the previous proof we see that $\mathbf{X}_p = \boldsymbol{\mu}_p + \mathbf{B}_p\mathbf{Z}_p$ and $\mathbf{X}_q = \boldsymbol{\mu}_q + \mathbf{C}_r\mathbf{Z}_p + \mathbf{C}_q\mathbf{Z}_q$. Consequently,
$$(\mathbf{X}_q \mid \mathbf{X}_p = \mathbf{x}_p) = \boldsymbol{\mu}_q + \mathbf{C}_r\mathbf{B}_p^{-1}(\mathbf{x}_p - \boldsymbol{\mu}_p) + \mathbf{C}_q\mathbf{Z}_q,$$
where $\mathbf{Z}_q$ is a $q$-dimensional standard normal random vector. It follows that $\mathbf{X}_q$, conditional on $\mathbf{X}_p = \mathbf{x}_p$, has a $\mathcal{N}\big(\boldsymbol{\mu}_q + \mathbf{C}_r\mathbf{B}_p^{-1}(\mathbf{x}_p - \boldsymbol{\mu}_p),\, \mathbf{C}_q\mathbf{C}_q^\top\big)$ distribution. The proof of the first statement is completed by observing that $\boldsymbol{\Sigma}_r = \mathbf{B}_p\mathbf{C}_r^\top$, so that
$$\mathbf{C}_r\mathbf{B}_p^{-1} = \boldsymbol{\Sigma}_r^\top\mathbf{B}_p^{-\top}\mathbf{B}_p^{-1} = \boldsymbol{\Sigma}_r^\top\boldsymbol{\Sigma}_p^{-1} \quad\text{and}\quad \mathbf{C}_q\mathbf{C}_q^\top = \boldsymbol{\Sigma}_q - \mathbf{C}_r\mathbf{C}_r^\top = \boldsymbol{\Sigma}_q - \boldsymbol{\Sigma}_r^\top\boldsymbol{\Sigma}_p^{-1}\boldsymbol{\Sigma}_r.$$
If $\mathbf{X}_p$ and $\mathbf{X}_q$ are independent, then they are obviously uncorrelated, as
$$\boldsymbol{\Sigma}_r = \mathbb{E}\,(\mathbf{X}_p - \boldsymbol{\mu}_p)(\mathbf{X}_q - \boldsymbol{\mu}_q)^\top = \mathbb{E}(\mathbf{X}_p - \boldsymbol{\mu}_p)\;\mathbb{E}(\mathbf{X}_q - \boldsymbol{\mu}_q)^\top = \mathbf{O}.$$
Conversely, if $\boldsymbol{\Sigma}_r = \mathbf{O}$, then the conditional distribution of $\mathbf{X}_q$ given $\mathbf{X}_p = \mathbf{x}_p$ is the same as the unconditional distribution of $\mathbf{X}_q$, namely $\mathcal{N}(\boldsymbol{\mu}_q, \boldsymbol{\Sigma}_q)$. In other words, $\mathbf{X}_q$ is independent of $\mathbf{X}_p$. □
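As a quick illustration of the conditional-distribution formula, the following sketch (not from the original text; all numerical values are arbitrary examples) computes the conditional mean and covariance of $\mathbf{X}_q$ given $\mathbf{X}_p = \mathbf{x}_p$ for a two-dimensional case with $p = q = 1$.

import numpy as np

# partitioned mean and covariance (example values)
mu_p, mu_q = np.array([0.0]), np.array([1.0])
Sig_p = np.array([[1.0]])
Sig_r = np.array([[0.8]])          # upper-right block Cov(X_p, X_q)
Sig_q = np.array([[2.0]])

x_p = np.array([1.5])              # observed value of X_p

# conditional mean and covariance of X_q given X_p = x_p
mu_cond = mu_q + Sig_r.T @ np.linalg.solve(Sig_p, x_p - mu_p)
Sig_cond = Sig_q - Sig_r.T @ np.linalg.solve(Sig_p, Sig_r)
print(mu_cond, Sig_cond)           # [2.2] [[1.36]]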
The next few results are about the relationships between the normal, chi-squared, Student $t$, and $\mathsf{F}$ distributions, defined in the table of common distributions. Recall that the chi-squared family of distributions, denoted $\chi^2_n$, are simply the $\mathsf{Gamma}(n/2, 1/2)$ distributions, where the parameter $n \in \{1, 2, \dots\}$ is called the degrees of freedom.

Theorem (Relationship Between the Normal and $\chi^2$ Distributions). If $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is an $n$-dimensional normal random vector with $\det(\boldsymbol{\Sigma}) > 0$, then
$$(\mathbf{X} - \boldsymbol{\mu})^\top\boldsymbol{\Sigma}^{-1}(\mathbf{X} - \boldsymbol{\mu}) \sim \chi^2_n.$$

Proof. Let $\mathbf{B}\mathbf{B}^\top$ be the Cholesky decomposition of $\boldsymbol{\Sigma}$, where $\mathbf{B}$ is invertible. Since $\mathbf{X}$ can be written as $\boldsymbol{\mu} + \mathbf{B}\mathbf{Z}$, where $\mathbf{Z} = [Z_1,\dots,Z_n]^\top$ is a vector of independent standard normal random variables, we have
$$(\mathbf{X} - \boldsymbol{\mu})^\top\boldsymbol{\Sigma}^{-1}(\mathbf{X} - \boldsymbol{\mu}) = \mathbf{Z}^\top\mathbf{B}^\top(\mathbf{B}\mathbf{B}^\top)^{-1}\mathbf{B}\mathbf{Z} = \mathbf{Z}^\top\mathbf{Z} = \sum_{i=1}^n Z_i^2 =: Y.$$
Using the independence of $Z_1, \dots, Z_n$, the moment generating function of $Y$ is
$$\mathbb{E}\,e^{sY} = \mathbb{E}\,e^{s(Z_1^2 + \cdots + Z_n^2)} = \big(\mathbb{E}\,e^{sZ^2}\big)^n,$$
where the moment generating function of $Z^2$ is
$$\mathbb{E}\,e^{sZ^2} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{sz^2}\,e^{-z^2/2}\,\mathrm{d}z = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(1-2s)z^2/2}\,\mathrm{d}z = (1 - 2s)^{-1/2}, \quad s < 1/2,$$
so that $\mathbb{E}\,e^{sY} = (1-2s)^{-n/2}$, which is the moment generating function of the $\mathsf{Gamma}(n/2, 1/2)$ distribution, that is, the $\chi^2_n$ distribution. The result now follows from the uniqueness of the moment generating function. □

A consequence of this theorem is that if $\mathbf{X} = [X_1,\dots,X_n]^\top$ is $n$-dimensional standard normal, then the squared length $\|\mathbf{X}\|^2 = X_1^2 + \cdots + X_n^2$ has a $\chi^2_n$ distribution. If instead $X_i \sim \mathcal{N}(\mu_i, 1)$, $i = 1,\dots,n$, then $\|\mathbf{X}\|^2$ is said to have a noncentral $\chi^2_n$ distribution. This distribution depends on the $\{\mu_i\}$ only through the norm $\|\boldsymbol{\mu}\|$; we write $\|\mathbf{X}\|^2 \sim \chi^2_n(\theta)$, where $\theta = \|\boldsymbol{\mu}\|^2$ is the noncentrality parameter.
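A small simulation makes the first theorem above concrete. The sketch below (not from the original text; dimension, mean, and sample size are arbitrary choices) samples from $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and compares the quadratic form $(\mathbf{X}-\boldsymbol{\mu})^\top\boldsymbol{\Sigma}^{-1}(\mathbf{X}-\boldsymbol{\mu})$ with the $\chi^2_n$ distribution.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 3
mu = np.array([1.0, -1.0, 0.5])
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)        # a positive definite covariance matrix
B = np.linalg.cholesky(Sigma)

Z = rng.standard_normal((n, 100000))
X = mu[:, None] + B @ Z                # columns ~ N(mu, Sigma)
D = X - mu[:, None]
Y = np.sum(D * np.linalg.solve(Sigma, D), axis=0)   # (X-mu)^T Sigma^{-1} (X-mu)

print(Y.mean(), chi2.mean(n))                   # both close to n = 3
print(np.quantile(Y, 0.95), chi2.ppf(0.95, n))  # 95% quantiles agree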
Such distributions frequently occur when considering projections of multivariate normal random vectors, as summarized in the following theorem.

Theorem (Relationship Between the Normal and Noncentral $\chi^2$ Distributions). Let $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{I}_n)$ be an $n$-dimensional normal random vector and let $\mathcal{V}_k \subset \mathcal{V}_m$ be linear subspaces of dimensions $k$ and $m$, respectively, with $k < m \leq n$. Let $\mathbf{X}_k$ and $\mathbf{X}_m$ be the orthogonal projections of $\mathbf{X}$ onto $\mathcal{V}_k$ and $\mathcal{V}_m$, and let $\boldsymbol{\mu}_k$ and $\boldsymbol{\mu}_m$ be the corresponding projections of $\boldsymbol{\mu}$. Then the following holds:

1. The random vectors $\mathbf{X}_k$, $\mathbf{X}_m - \mathbf{X}_k$, and $\mathbf{X} - \mathbf{X}_m$ are independent.
2. $\|\mathbf{X}_k\|^2 \sim \chi^2_k(\|\boldsymbol{\mu}_k\|^2)$, $\|\mathbf{X}_m - \mathbf{X}_k\|^2 \sim \chi^2_{m-k}(\|\boldsymbol{\mu}_m - \boldsymbol{\mu}_k\|^2)$, and $\|\mathbf{X} - \mathbf{X}_m\|^2 \sim \chi^2_{n-m}(\|\boldsymbol{\mu} - \boldsymbol{\mu}_m\|^2)$.

Proof. Let $\mathbf{v}_1, \dots, \mathbf{v}_n$ be an orthonormal basis of $\mathbb{R}^n$ such that $\mathbf{v}_1,\dots,\mathbf{v}_k$ spans $\mathcal{V}_k$ and $\mathbf{v}_1,\dots,\mathbf{v}_m$ spans $\mathcal{V}_m$. The orthogonal projection matrices onto $\mathcal{V}_k$ and $\mathcal{V}_m$ are $\mathbf{P}_j = \sum_{i=1}^{j}\mathbf{v}_i\mathbf{v}_i^\top$, $j = k, m$; note that $\mathbf{P}_n$ would simply be the identity matrix. Let $\mathbf{V} := [\mathbf{v}_1,\dots,\mathbf{v}_n]^\top$ and define $\mathbf{Z} := [Z_1,\dots,Z_n]^\top = \mathbf{V}\mathbf{X}$. Recall that any orthogonal transformation such as $\mathbf{x} \mapsto \mathbf{V}\mathbf{x}$ is length preserving, that is, $\|\mathbf{V}\mathbf{x}\| = \|\mathbf{x}\|$. To prove the first statement of the theorem, note that $\mathbf{V}\mathbf{X}_k = \mathbf{V}\mathbf{P}_k\mathbf{X} = [Z_1,\dots,Z_k,0,\dots,0]^\top$. It follows that $\mathbf{V}(\mathbf{X}_m - \mathbf{X}_k) = [0,\dots,0,Z_{k+1},\dots,Z_m,0,\dots,0]^\top$ and $\mathbf{V}(\mathbf{X} - \mathbf{X}_m) = [0,\dots,0,Z_{m+1},\dots,Z_n]^\top$. Moreover, being a linear transformation of a normal random vector, $\mathbf{Z}$ is also normal, with covariance matrix $\mathbf{V}\mathbf{V}^\top = \mathbf{I}_n$; in particular, the $\{Z_i\}$ are independent. This shows that $\mathbf{X}_k$, $\mathbf{X}_m - \mathbf{X}_k$, and $\mathbf{X} - \mathbf{X}_m$ are independent as well. Next, observe that $\|\mathbf{X}_k\|^2 = \|\mathbf{V}\mathbf{X}_k\|^2 = \|\mathbf{Z}_k\|^2$, where $\mathbf{Z}_k := [Z_1,\dots,Z_k]^\top$. The latter vector has independent components with variances 1, and its squared norm therefore has (by definition) a $\chi^2_k(\theta)$ distribution. The noncentrality parameter is $\theta = \|\mathbb{E}\,\mathbf{Z}_k\|^2 = \|\mathbb{E}\,\mathbf{X}_k\|^2 = \|\boldsymbol{\mu}_k\|^2$, again by the length-preserving property of orthogonal transformations. This shows that $\|\mathbf{X}_k\|^2 \sim \chi^2_k(\|\boldsymbol{\mu}_k\|^2)$; the distributions of $\|\mathbf{X}_m - \mathbf{X}_k\|^2$ and $\|\mathbf{X} - \mathbf{X}_m\|^2$ follow by analogy. □

This theorem is frequently used in the statistical analysis of normal linear models. In typical situations $\boldsymbol{\mu}$ lies in the subspace $\mathcal{V}_m$ or even $\mathcal{V}_k$, in which case $\|\mathbf{X}_m - \mathbf{X}_k\|^2 \sim \chi^2_{m-k}$ and $\|\mathbf{X} - \mathbf{X}_m\|^2 \sim \chi^2_{n-m}$, independently. The scaled quotient
$$\frac{\|\mathbf{X}_m - \mathbf{X}_k\|^2/(m-k)}{\|\mathbf{X} - \mathbf{X}_m\|^2/(n-m)}$$
then turns out to have an $\mathsf{F}$ distribution, a consequence of the following theorem.

Theorem (Relationship Between the $\chi^2$ and $\mathsf{F}$ Distributions). Let $U \sim \chi^2_m$ and $V \sim \chi^2_n$ be independent. Then $\frac{U/m}{V/n} \sim \mathsf{F}(m, n)$.

Proof. For notational simplicity, let $c = m/2$ and $d = n/2$. The pdf of $W := U/V$ is given by
$$f_W(w) = \int_0^\infty f_U(wv)\, v\, f_V(v)\,\mathrm{d}v.$$
Substituting the pdfs of the corresponding gamma
distributions, we have
$$f_W(w) = \int_0^\infty \frac{(\tfrac12)^c (wv)^{c-1} e^{-wv/2}}{\Gamma(c)}\; v\; \frac{(\tfrac12)^d v^{d-1} e^{-v/2}}{\Gamma(d)}\,\mathrm{d}v = \frac{w^{c-1}(\tfrac12)^{c+d}}{\Gamma(c)\,\Gamma(d)}\int_0^\infty v^{c+d-1} e^{-(1+w)v/2}\,\mathrm{d}v = \frac{\Gamma(c+d)}{\Gamma(c)\,\Gamma(d)}\,\frac{w^{c-1}}{(1+w)^{c+d}},$$
where the last equality follows from the fact that the integrand is equal to $\Gamma(c+d)\big(\frac{1+w}{2}\big)^{-(c+d)}$ times the density of the $\mathsf{Gamma}(\alpha, \lambda)$ distribution with $\alpha = c + d$ and $\lambda = (1+w)/2$. The density of $Z := \frac{U/m}{V/n} = \frac{n}{m}W$ is then given by $f_Z(z) = f_W(zm/n)\,m/n$. The proof is completed by comparing the resulting expression with the pdf of the $\mathsf{F}(m,n)$ distribution given in the table of common distributions. □

Corollary (Relationship Between the Normal, $\chi^2$, $t$, and $\mathsf{F}$ Distributions). Let $Z \sim \mathcal{N}(0,1)$ and $V \sim \chi^2_n$ be independent. Then
$$T := \frac{Z}{\sqrt{V/n}} \sim t_n.$$

Proof. Let $U = Z^2$. Because $U \sim \chi^2_1$, we have by the previous theorem that $T^2 = \frac{U}{V/n} \sim \mathsf{F}(1, n)$. The result now follows from the symmetry around 0 of the pdf of $T$ and the fact that the square of a $t_n$ random variable has an $\mathsf{F}(1,n)$ distribution. □
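The corollary can be checked numerically. The following sketch (not from the original text; $n$ and the sample size are arbitrary) simulates $T = Z/\sqrt{V/n}$ and compares quantiles of $T$ and $T^2$ with those of the $t_n$ and $\mathsf{F}(1,n)$ distributions.

import numpy as np
from scipy.stats import t, f

rng = np.random.default_rng(1)
n = 5
N = 200000
Z = rng.standard_normal(N)
V = rng.chisquare(n, N)
T = Z / np.sqrt(V / n)                 # should be t_n distributed

# simulated quantiles of T and T**2 versus t_n and F(1, n)
print(np.quantile(T, [0.5, 0.9]), t.ppf([0.5, 0.9], n))
print(np.quantile(T**2, 0.9), f.ppf(0.9, 1, n))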
Convergence of Random Variables

Recall that a random variable is a function from the sample space $\Omega$ to $\mathbb{R}$. If we have a sequence of random variables $X_1, X_2, \dots$ defined on the same sample space $\Omega$, then one can consider the pointwise convergence
$$\lim_{n\to\infty} X_n(\omega) = X(\omega) \quad \text{for all } \omega \in \Omega,$$
in which case we say that $X_n$ converges surely to $X$. More interesting types of convergence take the probability measure $\mathbb{P}$ into account.

Definition (Convergence in Probability). The sequence of random variables $X_1, X_2, \dots$ converges in probability to a random variable $X$ if, for all $\varepsilon > 0$,
$$\lim_{n\to\infty}\mathbb{P}[|X_n - X| > \varepsilon] = 0.$$
We denote convergence in probability as $X_n \xrightarrow{\;\mathbb{P}\;} X$.

Convergence in probability refers only to the distribution of $X_n - X$. Instead, if the sequence is defined on a common probability space, then we can consider the following mode of convergence, which uses the joint distribution of the sequence of random variables.

Definition (Almost Sure Convergence). The sequence of random variables $X_1, X_2, \dots$ converges almost surely to a random variable $X$ if for every $\varepsilon > 0$
$$\lim_{n\to\infty}\mathbb{P}\Big[\sup_{k \geq n}|X_k - X| > \varepsilon\Big] = 0.$$
We denote almost sure convergence as $X_n \xrightarrow{\;a.s.\;} X$. Note that, in accordance with these definitions, $X_n \xrightarrow{a.s.} X$ is equivalent to $\sup_{k \geq n}|X_k - X| \xrightarrow{\mathbb{P}} 0$ as $n \to \infty$.

Example (Convergence in Probability Versus Almost Sure Convergence). Since the event $\{|X_n - X| > \varepsilon\}$ is contained in $\{\sup_{k\geq n}|X_k - X| > \varepsilon\}$, we can conclude that almost sure convergence implies convergence in probability. However, the converse is not true in general. For instance, consider an independent sequence $X_1, X_2, \dots$ with marginal distributions $\mathbb{P}[X_n = 1] = 1/n = 1 - \mathbb{P}[X_n = 0]$. Clearly, $X_n \xrightarrow{\mathbb{P}} 0$. However, for any $n$ and any $\varepsilon \in (0,1)$, we have
$$\mathbb{P}\Big[\sup_{k\geq n}|X_k| \leq \varepsilon\Big] \leq \mathbb{P}[X_n = 0,\, X_{n+1} = 0,\, \dots] = \lim_{m\to\infty}\prod_{k=n}^m \mathbb{P}[X_k = 0] \;\;(\text{using independence})$$
$$= \lim_{m\to\infty}\frac{n-1}{n}\cdot\frac{n}{n+1}\cdots\frac{m-1}{m} = \lim_{m\to\infty}\frac{n-1}{m} = 0.$$
It follows that $\mathbb{P}[\sup_{k\geq n}|X_k| > \varepsilon] = 1$ for any $n$. In other words, it is not true that $X_n \xrightarrow{a.s.} 0$.

Another important type of convergence is useful when we are interested in estimating expectations or multidimensional integrals via Monte Carlo methodology.

Definition (Convergence in Distribution). The sequence of random variables $X_1, X_2, \dots$ is said to converge in distribution to a random variable $X$ with distribution function $F(x) = \mathbb{P}[X \leq x]$, provided that
$$\lim_{n\to\infty}\mathbb{P}[X_n \leq x] = F(x) \quad \text{for all } x \text{ such that } \lim_{a \to x} F(a) = F(x).$$
We denote convergence in distribution as either $X_n \xrightarrow{\;d\;} X$ or $F_{X_n} \xrightarrow{\;d\;} F$.
The generalization to random vectors replaces the condition above with
$$\lim_{n\to\infty}\mathbb{P}[\mathbf{X}_n \in A] = \mathbb{P}[\mathbf{X} \in A] \quad \text{for all } A \subseteq \mathbb{R}^n \text{ such that } \mathbb{P}[\mathbf{X} \in \partial A] = 0,$$
where $\partial A$ denotes the boundary of the set $A$.

A useful tool for demonstrating convergence in distribution is the characteristic function $\psi_{\mathbf{X}}$ of a random vector $\mathbf{X}$, defined as the expectation
$$\psi_{\mathbf{X}}(\mathbf{t}) := \mathbb{E}\, e^{\mathrm{i}\,\mathbf{t}^\top\mathbf{X}}, \quad \mathbf{t} \in \mathbb{R}^n,$$
where $\mathrm{i} = \sqrt{-1}$. The moment generating function is a special case of the characteristic function, evaluated at $\mathbf{t} = -\mathrm{i}\mathbf{s}$. Note that while the moment generating function of a random variable may not exist, its characteristic function always exists. The characteristic function of a random vector is closely related to the Fourier transform of its pdf.

Example (Characteristic Function of a Multivariate Gaussian Random Vector). Using the density of the multivariate standard normal distribution given earlier, the characteristic function of $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_n)$ is
$$\psi_{\mathbf{Z}}(\mathbf{t}) = \mathbb{E}\,e^{\mathrm{i}\mathbf{t}^\top\mathbf{Z}} = (2\pi)^{-n/2}\int_{\mathbb{R}^n} e^{\mathrm{i}\mathbf{t}^\top\mathbf{z}}\, e^{-\frac12\|\mathbf{z}\|^2}\,\mathrm{d}\mathbf{z} = e^{-\|\mathbf{t}\|^2/2}\,(2\pi)^{-n/2}\int_{\mathbb{R}^n} e^{-\frac12\|\mathbf{z} - \mathrm{i}\mathbf{t}\|^2}\,\mathrm{d}\mathbf{z} = e^{-\|\mathbf{t}\|^2/2}.$$
Hence, the characteristic function of the random vector $\mathbf{X} = \boldsymbol{\mu} + \mathbf{B}\mathbf{Z}$, with multivariate normal distribution $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, $\boldsymbol{\Sigma} = \mathbf{B}\mathbf{B}^\top$, is given by
$$\psi_{\mathbf{X}}(\mathbf{t}) = \mathbb{E}\,e^{\mathrm{i}\mathbf{t}^\top(\boldsymbol{\mu} + \mathbf{B}\mathbf{Z})} = e^{\mathrm{i}\mathbf{t}^\top\boldsymbol{\mu}}\;\mathbb{E}\,e^{\mathrm{i}(\mathbf{B}^\top\mathbf{t})^\top\mathbf{Z}} = e^{\mathrm{i}\mathbf{t}^\top\boldsymbol{\mu}}\,\psi_{\mathbf{Z}}(\mathbf{B}^\top\mathbf{t}) = e^{\mathrm{i}\mathbf{t}^\top\boldsymbol{\mu} - \frac12\|\mathbf{B}^\top\mathbf{t}\|^2} = e^{\mathrm{i}\mathbf{t}^\top\boldsymbol{\mu} - \frac12\mathbf{t}^\top\boldsymbol{\Sigma}\mathbf{t}}.$$

The importance of the characteristic function is mainly derived from the following result, for which a proof can be found, for example, in [?].

Theorem (Characteristic Functions and Convergence in Distribution). Suppose that $\psi_{\mathbf{X}_1}, \psi_{\mathbf{X}_2}, \dots$ are the characteristic functions of a sequence of random vectors $\mathbf{X}_1, \mathbf{X}_2, \dots$, and $\psi_{\mathbf{X}}$ is the characteristic function of a random vector $\mathbf{X}$. Then the following three statements are equivalent:

1. $\lim_{n\to\infty}\psi_{\mathbf{X}_n}(\mathbf{t}) = \psi_{\mathbf{X}}(\mathbf{t})$ for all $\mathbf{t} \in \mathbb{R}^n$;
2. $\mathbf{X}_n \xrightarrow{\;d\;} \mathbf{X}$;
3. $\lim_{n\to\infty}\mathbb{E}\,h(\mathbf{X}_n) = \mathbb{E}\,h(\mathbf{X})$ for all bounded continuous functions $h$.
Example (Convergence in Distribution). Define the random variables
$$Y_n := \sum_{k=1}^n \frac{X_k}{2^k}, \quad n = 1, 2, \dots,$$
where $X_1, X_2, \dots \sim_{\text{iid}} \mathsf{Ber}(1/2)$. We now show that $Y_n \xrightarrow{d} \mathsf{U}(0,1)$. First, note that
$$\mathbb{E}\exp(\mathrm{i}tY_n) = \prod_{k=1}^n \mathbb{E}\exp(\mathrm{i}tX_k/2^k) = \prod_{k=1}^n \frac{1 + \exp(\mathrm{i}t/2^k)}{2}.$$
Second, from the collapsing product
$$\big(1 - \exp(\mathrm{i}t/2^n)\big)\prod_{k=1}^n\big(1 + \exp(\mathrm{i}t/2^k)\big) = 1 - \exp(\mathrm{i}t),$$
we have
$$\mathbb{E}\exp(\mathrm{i}tY_n) = \frac{1 - \exp(\mathrm{i}t)}{2^n\big(1 - \exp(\mathrm{i}t/2^n)\big)}.$$
Since $2^n(1 - e^{\mathrm{i}t/2^n}) \to -\mathrm{i}t$ as $n \to \infty$, it follows that $\lim_{n\to\infty}\mathbb{E}\exp(\mathrm{i}tY_n) = (\exp(\mathrm{i}t) - 1)/(\mathrm{i}t)$, which we recognize as the characteristic function of the $\mathsf{U}(0,1)$ distribution.

Yet another mode of convergence is the following.

Definition (Convergence in $L^p$-Norm). The sequence of random variables $X_1, X_2, \dots$ converges in $L^p$-norm ($p \geq 1$) to a random variable $X$ if
$$\lim_{n\to\infty}\mathbb{E}\,|X_n - X|^p = 0.$$
We denote convergence in $L^p$-norm as $X_n \xrightarrow{\;L_p\;} X$. The case $p = 2$ corresponds to convergence in mean squared error.

The following example illustrates that convergence in $L^p$-norm is qualitatively different from convergence in distribution.

Example (Comparison of Modes of Convergence). Define $X_n := 1 - X$, $n = 1, 2, \dots$, where $X$ has a uniform distribution on the interval $(0,1)$. Clearly, $X_n \xrightarrow{d} X$, as each $X_n$ itself has a $\mathsf{U}(0,1)$ distribution. However, $\mathbb{E}|X_n - X|^p = \mathbb{E}|1 - 2X|^p = 1/(p+1) \neq 0$, and so the sequence does not converge in $L^p$-norm. In addition, $\mathbb{P}[|X_n - X| > \varepsilon]$ does not converge to 0, and so $X_n$ does not converge in probability either. Thus, in general, $X_n \xrightarrow{d} X$ implies neither $X_n \xrightarrow{L_p} X$ nor $X_n \xrightarrow{\mathbb{P}} X$. We mention, however, that if $X_n \xrightarrow{d} c$ for some constant $c$, then $X_n \xrightarrow{\mathbb{P}} c$ as well. To see this, note that $X_n \xrightarrow{d} c$ stands for $\lim_n F_{X_n}(x) = \mathbb{1}\{x \geq c\}$ for all $x \neq c$. In other words, we can write
$$\mathbb{P}[|X_n - c| > \varepsilon] \leq F_{X_n}(c - \varepsilon) + 1 - F_{X_n}(c + \varepsilon) \to 0 + 1 - 1 = 0,$$
which shows that $X_n \xrightarrow{\mathbb{P}} c$ by definition.
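The convergence $Y_n \xrightarrow{d} \mathsf{U}(0,1)$ in the first example above is easy to observe by simulation. A minimal sketch (not from the original text; $n$, the sample size, and the use of a Kolmogorov-Smirnov test are arbitrary choices):

import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
n, N = 30, 10000
X = rng.integers(0, 2, size=(N, n))        # Ber(1/2) variables
Y = X @ (0.5 ** np.arange(1, n + 1))       # Y_n = sum_k X_k / 2^k

print(kstest(Y, 'uniform'))                # large p-value: consistent with U(0,1)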
Definition (Complete Convergence). The sequence of random variables $X_1, X_2, \dots$ is said to converge completely to $X$ if for all $\varepsilon > 0$
$$\sum_{n=1}^\infty \mathbb{P}[|X_n - X| > \varepsilon] < \infty.$$
We denote complete convergence as $X_n \xrightarrow{\;cpl.\;} X$.

Example (Complete and Almost Sure Convergence). We show that complete convergence implies almost sure convergence. We can bound the criterion for almost sure convergence as follows:
$$\mathbb{P}\Big[\sup_{k\geq n}|X_k - X| > \varepsilon\Big] = \mathbb{P}\Big[\bigcup_{k\geq n}\{|X_k - X| > \varepsilon\}\Big] \leq \sum_{k\geq n}\mathbb{P}[|X_k - X| > \varepsilon] \quad \text{(union bound)}.$$
By the definition of complete convergence, the right-hand side is the tail of a convergent series and therefore goes to 0 as $n \to \infty$. Hence, by definition, $X_n \xrightarrow{a.s.} X$.

The next theorem shows how the different types of convergence are related to each other. In the diagram below, an implication labeled $p > q \geq 1$ means that $L^p$-norm convergence implies $L^q$-norm convergence under the assumption that $p > q \geq 1$.

Theorem (Modes of Convergence). The most general relationships among the various modes of convergence for numerical random variables are shown in the following hierarchical diagram:
$$X_n \xrightarrow{\;cpl.\;} X \;\Longrightarrow\; X_n \xrightarrow{\;a.s.\;} X \;\Longrightarrow\; X_n \xrightarrow{\;\mathbb{P}\;} X \;\Longrightarrow\; X_n \xrightarrow{\;d\;} X,$$
$$X_n \xrightarrow{\;L_p\;} X \;\overset{p > q \geq 1}{\Longrightarrow}\; X_n \xrightarrow{\;L_q\;} X \;\Longrightarrow\; X_n \xrightarrow{\;\mathbb{P}\;} X.$$
Proof. First, we show that $X_n \xrightarrow{\mathbb{P}} X \Rightarrow X_n \xrightarrow{d} X$. Consider the distribution function of $X_n$:
$$F_{X_n}(x) = \mathbb{P}[X_n \leq x] = \mathbb{P}[X_n \leq x,\, |X_n - X| \leq \varepsilon] + \mathbb{P}[X_n \leq x,\, |X_n - X| > \varepsilon] \leq F_X(x + \varepsilon) + \mathbb{P}[|X_n - X| > \varepsilon].$$
Now, in the argument above we can switch the roles of $X_n$ and $X$ (there is symmetry) to deduce the analogous result $F_X(x) \leq F_{X_n}(x + \varepsilon) + \mathbb{P}[|X_n - X| > \varepsilon]$. Replacing $x$ with $x - \varepsilon$ in this last inequality and putting everything together gives
$$F_X(x - \varepsilon) - \mathbb{P}[|X_n - X| > \varepsilon] \leq F_{X_n}(x) \leq F_X(x + \varepsilon) + \mathbb{P}[|X_n - X| > \varepsilon].$$
Taking $n \to \infty$ on both sides yields, for any $\varepsilon > 0$,
$$F_X(x - \varepsilon) \leq \lim_{n\to\infty} F_{X_n}(x) \leq F_X(x + \varepsilon).$$
Since $F_X$ is continuous at $x$ by assumption, we can take $\varepsilon \downarrow 0$ to conclude that $\lim_n F_{X_n}(x) = F_X(x)$.

Second, we show that $X_n \xrightarrow{L_p} X \Rightarrow X_n \xrightarrow{L_q} X$ for $p > q \geq 1$. Since the function $x \mapsto x^{q/p}$ is concave for $p > q$, Jensen's inequality yields $\mathbb{E}\,|W|^q = \mathbb{E}\,(|W|^p)^{q/p} \leq (\mathbb{E}\,|W|^p)^{q/p}$ for any random variable $W$. In other words,
$$\big(\mathbb{E}|X_n - X|^q\big)^{1/q} \leq \big(\mathbb{E}|X_n - X|^p\big)^{1/p} \to 0,$$
proving the statement.

Third, we show that $X_n \xrightarrow{L_q} X \Rightarrow X_n \xrightarrow{\mathbb{P}} X$. First note that for any random variable $Y$ we can write
$$\mathbb{E}|Y| = \mathbb{E}\big[|Y|\,\mathbb{1}\{|Y| > \varepsilon\}\big] + \mathbb{E}\big[|Y|\,\mathbb{1}\{|Y| \leq \varepsilon\}\big] \geq \mathbb{E}\big[|Y|\,\mathbb{1}\{|Y| > \varepsilon\}\big] \geq \varepsilon\,\mathbb{P}[|Y| > \varepsilon].$$
Therefore, we obtain Chebyshev's inequality:
$$\mathbb{P}[|Y| > \varepsilon] \leq \frac{\mathbb{E}|Y|}{\varepsilon}.$$
Using Chebyshev's inequality (applied to $|X_n - X|^q$) and $X_n \xrightarrow{L_q} X$, we can write
$$\mathbb{P}[|X_n - X| > \varepsilon] = \mathbb{P}[|X_n - X|^q > \varepsilon^q] \leq \frac{\mathbb{E}|X_n - X|^q}{\varepsilon^q} \to 0.$$
Hence, by definition, $X_n \xrightarrow{\mathbb{P}} X$.

Finally, $X_n \xrightarrow{cpl.} X \Rightarrow X_n \xrightarrow{a.s.} X \Rightarrow X_n \xrightarrow{\mathbb{P}} X$ was proved in the two examples above. □

Finally, we will make use of the following theorem.

Theorem (Slutsky). Let $g(\mathbf{x}, \mathbf{y})$ be a continuous scalar function of vectors $\mathbf{x}$ and $\mathbf{y}$, and suppose that $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$ and $\mathbf{Y}_n \xrightarrow{\mathbb{P}} \mathbf{c}$ for some finite constant $\mathbf{c}$. Then
$$g(\mathbf{X}_n, \mathbf{Y}_n) \xrightarrow{\;d\;} g(\mathbf{X}, \mathbf{c}).$$
Proof. We prove the theorem for scalar $X_n$ and $Y_n$; the proof for random vectors is analogous. First, we show that $\mathbf{Z}_n := [X_n, Y_n]^\top \xrightarrow{d} \mathbf{Z} := [X, c]^\top$, using the equivalence between convergence in distribution and pointwise convergence of characteristic functions. In other words, we wish to show that, as $n \to \infty$,
$$\psi_{X_n,Y_n}(t_1, t_2) = \mathbb{E}\,e^{\mathrm{i}(t_1X_n + t_2Y_n)} \;\to\; \mathbb{E}\,e^{\mathrm{i}t_1X}\,e^{\mathrm{i}t_2c} = \psi_{X,c}(t_1, t_2).$$
To show this limit, consider
$$|\psi_{X_n,Y_n}(t_1,t_2) - \psi_{X,c}(t_1,t_2)| \leq |\psi_{X_n,Y_n}(t_1,t_2) - \psi_{X_n,c}(t_1,t_2)| + |\psi_{X_n,c}(t_1,t_2) - \psi_{X,c}(t_1,t_2)|$$
$$= \big|\mathbb{E}\,e^{\mathrm{i}t_1X_n}\big(e^{\mathrm{i}t_2Y_n} - e^{\mathrm{i}t_2c}\big)\big| + \big|e^{\mathrm{i}t_2c}\big|\,\big|\mathbb{E}\big(e^{\mathrm{i}t_1X_n} - e^{\mathrm{i}t_1X}\big)\big| \leq \mathbb{E}\big|e^{\mathrm{i}t_2(Y_n - c)} - 1\big| + |\psi_{X_n}(t_1) - \psi_X(t_1)|.$$
Since $X_n \xrightarrow{d} X$, the equivalence theorem implies that $\psi_{X_n}(t_1) \to \psi_X(t_1)$, so the second term goes to zero. For the first term, we use the fact that
$$|e^{\mathrm{i}x} - 1| = \Big|\int_0^x e^{\mathrm{i}\theta}\,\mathrm{d}\theta\Big| \leq |x|$$
to obtain the bound
$$\mathbb{E}\big|e^{\mathrm{i}t_2(Y_n-c)} - 1\big| \leq \mathbb{E}\big[\big|e^{\mathrm{i}t_2(Y_n-c)} - 1\big|\,\mathbb{1}\{|Y_n - c| > \varepsilon\}\big] + \mathbb{E}\big[|t_2|\,|Y_n - c|\,\mathbb{1}\{|Y_n - c| \leq \varepsilon\}\big] \leq 2\,\mathbb{P}[|Y_n - c| > \varepsilon] + |t_2|\,\varepsilon.$$
The first term goes to 0 as $n \to \infty$, and since $\varepsilon > 0$ is arbitrary, we can let $\varepsilon \downarrow 0$ to conclude that $\lim_n |\psi_{X_n,Y_n}(t_1,t_2) - \psi_{X,c}(t_1,t_2)| = 0$. In other words, $\mathbf{Z}_n \xrightarrow{d} \mathbf{Z}$, and by the continuity of $g$ we have $g(\mathbf{Z}_n) \xrightarrow{d} g(\mathbf{Z})$; that is, $g(X_n, Y_n) \xrightarrow{d} g(X, c)$. □

Example (Necessity of Slutsky's Condition). The condition that $Y_n$ converges in probability to a constant cannot be relaxed. For example, suppose that $g(x, y) = x + y$, $X_n \xrightarrow{d} X \sim \mathcal{N}(0,1)$, and $Y_n \xrightarrow{d} Y \sim \mathcal{N}(0,1)$. Then our intuition tempts us to incorrectly conclude that $X_n + Y_n \xrightarrow{d} \mathcal{N}(0, 2)$. This intuition is false, because we can have $Y_n = -X_n$ for all $n$, so that $X_n + Y_n = 0$, while both $X$ and $Y$ have the same marginal distribution (in this case standard normal).

Law of Large Numbers and Central Limit Theorem

Two main results in probability are the law of large numbers and the central limit theorem. Both are limit theorems involving sums of independent random variables. In particular, consider a sequence $X_1, X_2, \dots$ of iid random variables with finite expectation $\mu$ and finite variance $\sigma^2$. For each $n$ define $\overline{X}_n := (X_1 + \cdots + X_n)/n$. What can we say about the (random) sequence of averages $\overline{X}_1, \overline{X}_2, \dots$? By the rules for expectation and variance, we have $\mathbb{E}\,\overline{X}_n = \mu$ and $\mathbb{V}\mathrm{ar}\,\overline{X}_n = \sigma^2/n$. Hence, as $n$ increases, the variance of the (random) average $\overline{X}_n$ goes to 0.
This means that, by definition, the average $\overline{X}_n$ converges to $\mu$ in $L^2$-norm as $n \to \infty$; that is, $\overline{X}_n \xrightarrow{L_2} \mu$. In fact, to obtain convergence in probability the variance need not be finite; it is sufficient to assume that $\mathbb{E}|X| < \infty$.

Theorem (Weak Law of Large Numbers). If $X_1, \dots, X_n$ are iid with finite expectation $\mu$, then for all $\varepsilon > 0$
$$\lim_{n\to\infty}\mathbb{P}[|\overline{X}_n - \mu| > \varepsilon] = 0.$$
In other words, $\overline{X}_n \xrightarrow{\mathbb{P}} \mu$.

The theorem has a natural generalization for random vectors: namely, if $\mathbb{E}\|\mathbf{X}\| < \infty$, then $\|\overline{\mathbf{X}}_n - \boldsymbol{\mu}\| \xrightarrow{\mathbb{P}} 0$, where $\|\cdot\|$ is the Euclidean norm. We give a proof in the scalar case.

Proof. Let $Z_k := X_k - \mu$ for all $k$, so that $\mathbb{E}\,Z = 0$. We thus need to show that $\overline{Z}_n \xrightarrow{\mathbb{P}} 0$. We use the properties of the characteristic function $\psi_Z$ of $Z$. Due to the iid assumption, we have
$$\psi_{\overline{Z}_n}(t) = \mathbb{E}\,e^{\mathrm{i}t\overline{Z}_n} = \mathbb{E}\prod_{i=1}^n e^{\mathrm{i}tZ_i/n} = \prod_{i=1}^n \mathbb{E}\,e^{\mathrm{i}tZ_i/n} = \big[\psi_Z(t/n)\big]^n.$$
An application of Taylor's theorem in the neighborhood of 0 yields $\psi_Z(t/n) = \psi_Z(0) + \psi_Z'(0)\,t/n + o(t/n) = 1 + o(t/n)$, where we used $\psi_Z(0) = 1$ and $\psi_Z'(0) = \mathrm{i}\,\mathbb{E}Z = 0$. Hence,
$$\lim_{n\to\infty}\psi_{\overline{Z}_n}(t) = \lim_{n\to\infty}\big[1 + o(t/n)\big]^n = 1.$$
The characteristic function of a random variable that always equals zero is identically 1; therefore, the equivalence theorem for characteristic functions implies that $\overline{Z}_n \xrightarrow{d} 0$. However, as shown in the earlier example, convergence in distribution to a constant implies convergence in probability. Hence, $\overline{Z}_n \xrightarrow{\mathbb{P}} 0$. □
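A short simulation illustrates the weak law of large numbers: the running averages of iid draws settle down around the expectation. A minimal sketch (not from the original text; the exponential distribution with mean 2 and the seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)
X = rng.exponential(scale=2.0, size=10**6)     # iid with mean mu = 2
Xbar = np.cumsum(X) / np.arange(1, X.size + 1) # running averages
for n in [10, 1000, 100000, 1000000]:
    print(n, Xbar[n - 1])                      # approaches 2 as n grows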
There is also a stronger version of the law of large numbers, as follows.

Theorem (Strong Law of Large Numbers). If $X_1, \dots, X_n$ are iid with expectation $\mu$ and $\mathbb{E}X^2 < \infty$, then for every $\varepsilon > 0$
$$\lim_{n\to\infty}\mathbb{P}\Big[\sup_{k\geq n}|\overline{X}_k - \mu| > \varepsilon\Big] = 0.$$
In other words, $\overline{X}_n \xrightarrow{a.s.} \mu$.

Proof. First, note that any random variable $X$ can be written as the difference of two nonnegative random variables, $X = X^+ - X^-$, where $X^+ := \max\{X, 0\}$ and $X^- := -\min\{X, 0\}$. Thus, without loss of generality, we may assume that the random variables in the theorem are nonnegative. Second, from the sequence $\{\overline{X}_n\}$ we pick the subsequence $\{\overline{X}_{n^2}\}_{n=1}^\infty$. From Chebyshev's inequality and the iid condition, we have
$$\sum_{n=1}^\infty \mathbb{P}[|\overline{X}_{n^2} - \mu| > \varepsilon] \leq \sum_{n=1}^\infty \frac{\mathbb{V}\mathrm{ar}\,\overline{X}_{n^2}}{\varepsilon^2} = \sum_{n=1}^\infty \frac{\sigma^2}{n^2\varepsilon^2} < \infty.$$
Therefore, by definition, $\overline{X}_{n^2} \xrightarrow{cpl.} \mu$, and from the modes-of-convergence theorem we conclude that $\overline{X}_{n^2} \xrightarrow{a.s.} \mu$. Third, for any arbitrary $n$ we can find a $k$ such that $(k-1)^2 \leq n \leq k^2$. For such a $k$ and nonnegative $X$'s it holds that
$$\frac{(k-1)^2}{k^2}\,\overline{X}_{(k-1)^2} \leq \overline{X}_n \leq \frac{k^2}{(k-1)^2}\,\overline{X}_{k^2}.$$
Since $\overline{X}_{(k-1)^2}$ and $\overline{X}_{k^2}$ converge almost surely to $\mu$, and $(k-1)^2/k^2 \to 1$, as $k$ (and hence $n$) goes to infinity, we conclude that $\overline{X}_n \xrightarrow{a.s.} \mu$. □

Note that the condition $\mathbb{E}X^2 < \infty$ in the theorem can be weakened to $\mathbb{E}|X| < \infty$, and the iid condition on the variables $\{X_n\}$ can be relaxed to mere pairwise independence. The corresponding proof, however, is significantly more difficult.

The central limit theorem describes the approximate distribution of $\overline{X}_n$, and it applies to both continuous and discrete random variables. Loosely, it states that the average of a large number of iid random variables approximately has a normal distribution. Specifically, the random variable $\overline{X}_n$ has a distribution that is approximately normal, with expectation $\mu$ and variance $\sigma^2/n$.

Theorem (Central Limit Theorem). If $X_1, \dots, X_n$ are iid with finite expectation $\mu$ and finite variance $\sigma^2$, then for all $x \in \mathbb{R}$
$$\lim_{n\to\infty}\mathbb{P}\Big[\frac{X_1 + \cdots + X_n - n\mu}{\sigma\sqrt{n}} \leq x\Big] = \Phi(x),$$
where $\Phi$ is the cdf of the standard normal distribution.

Proof. Let $Z_k := (X_k - \mu)/\sigma$ for all $k$, so that $\mathbb{E}Z = 0$ and $\mathbb{V}\mathrm{ar}\,Z = 1$. We thus need to show that $S_n := (Z_1 + \cdots + Z_n)/\sqrt{n} \xrightarrow{d} \mathcal{N}(0,1)$. We again use the properties of the characteristic function. Let $\psi_Z$ be the characteristic function of an iid copy $Z$. Then, due to the iid assumption, a calculation similar to the one in the proof of the weak law of large numbers yields
$$\psi_{S_n}(t) = \mathbb{E}\,e^{\mathrm{i}tS_n} = \big[\psi_Z(t/\sqrt{n})\big]^n.$$
An application of Taylor's theorem in the neighborhood of 0 yields
$$\psi_Z(t/\sqrt{n}) = \psi_Z(0) + \psi_Z'(0)\,\frac{t}{\sqrt{n}} + \frac{\psi_Z''(0)}{2}\,\frac{t^2}{n} + o(t^2/n).$$
Since $\psi_Z'(0) = \frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}\,e^{\mathrm{i}tZ}\big|_{t=0} = \mathrm{i}\,\mathbb{E}Z = 0$ and $\psi_Z''(0) = \mathrm{i}^2\,\mathbb{E}Z^2 = -1$, we have
$$\psi_{S_n}(t) = \big[\psi_Z(t/\sqrt{n})\big]^n = \Big[1 - \frac{t^2}{2n} + o(t^2/n)\Big]^n \;\to\; e^{-t^2/2}.$$
From the earlier example we recognize $e^{-t^2/2}$ as the characteristic function of the standard normal distribution. Thus, from the equivalence theorem for characteristic functions, we conclude that $S_n \xrightarrow{d} \mathcal{N}(0,1)$. □

The figure below shows the central limit theorem in action. The left part shows the pdfs of $S_n$ for several values of $n$ for the case where the $\{X_i\}$ have a $\mathsf{U}[0,1]$ distribution; the right part shows the same for the $\mathsf{Exp}(1)$ distribution. In both cases, we clearly see convergence to a bell-shaped curve, characteristic of the normal distribution.

[Figure: Illustration of the central limit theorem for (left) the uniform distribution and (right) the exponential distribution.]

The multivariate version of the central limit theorem is the basis for many asymptotic (in the size of the training set) results in machine learning and data science.

Theorem (Multivariate Central Limit Theorem). Let $\mathbf{X}_1, \dots, \mathbf{X}_n$ be iid random vectors with expectation vector $\boldsymbol{\mu}$ and finite covariance matrix $\boldsymbol{\Sigma}$. Define $\overline{\mathbf{X}}_n := (\mathbf{X}_1 + \cdots + \mathbf{X}_n)/n$. Then
$$\sqrt{n}\,(\overline{\mathbf{X}}_n - \boldsymbol{\mu}) \xrightarrow{\;d\;} \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}).$$
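The scalar central limit theorem is equally easy to observe numerically: standardized sums of iid random variables have approximately standard normal probabilities. A minimal sketch (not from the original text; the exponential distribution, as in the right panel of the figure, and the sizes are arbitrary choices):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, N = 100, 100000
X = rng.exponential(size=(N, n))          # Exp(1): mu = 1, sigma = 1
S = (X.sum(axis=1) - n) / np.sqrt(n)      # standardized sums

# compare empirical probabilities with the standard normal cdf
for x in [-1.0, 0.0, 1.0, 2.0]:
    print(x, (S <= x).mean(), norm.cdf(x))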
One application of the multivariate central limit theorem is as follows. Suppose that a parameter of interest, $\boldsymbol{\theta}^*$ say, is the unique solution of the system of equations
$$\mathbb{E}\,\boldsymbol{\psi}(\mathbf{X}; \boldsymbol{\theta}) = \mathbf{0},$$
where $\boldsymbol{\psi}$ is a vector-valued (or multivalued) function and the distribution of $\mathbf{X}$ does not depend on $\boldsymbol{\theta}$. An M-estimator of $\boldsymbol{\theta}^*$, denoted $\widehat{\boldsymbol{\theta}}_n$, is the solution to the system of equations that results from approximating the expectation with respect to $\mathbf{X}$ by an average of iid copies $\mathbf{X}_1, \dots, \mathbf{X}_n$:
$$\overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}) := \frac{1}{n}\sum_{i=1}^n \boldsymbol{\psi}(\mathbf{X}_i; \boldsymbol{\theta}).$$
Thus, $\overline{\boldsymbol{\psi}}_n(\widehat{\boldsymbol{\theta}}_n) = \mathbf{0}$.

Theorem (M-Estimator). The M-estimator $\widehat{\boldsymbol{\theta}}_n$ is asymptotically normal: as $n \to \infty$,
$$\sqrt{n}\,(\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*) \xrightarrow{\;d\;} \mathcal{N}\big(\mathbf{0},\, \mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-\top}\big),$$
where $\mathbf{A} := \mathbb{E}\,\frac{\partial\boldsymbol{\psi}}{\partial\boldsymbol{\theta}}(\mathbf{X}; \boldsymbol{\theta}^*)$ and $\mathbf{B} := \mathbb{E}\,\boldsymbol{\psi}(\mathbf{X}; \boldsymbol{\theta}^*)\,\boldsymbol{\psi}(\mathbf{X}; \boldsymbol{\theta}^*)^\top$ is the covariance matrix of $\boldsymbol{\psi}(\mathbf{X}; \boldsymbol{\theta}^*)$.

Proof. We give a proof under the simplifying assumption that $\widehat{\boldsymbol{\theta}}_n$ is a unique root; that is, for any $\widetilde{\boldsymbol{\theta}}$ and $\varepsilon > 0$, there exists a $\delta > 0$ such that $\|\widetilde{\boldsymbol{\theta}} - \widehat{\boldsymbol{\theta}}_n\| > \varepsilon$ implies that $\|\overline{\boldsymbol{\psi}}_n(\widetilde{\boldsymbol{\theta}})\| > \delta$. First, we argue that $\widehat{\boldsymbol{\theta}}_n \xrightarrow{\mathbb{P}} \boldsymbol{\theta}^*$; that is, $\mathbb{P}[\|\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*\| > \varepsilon] \to 0$. From the multivariate extension of the weak law of large numbers, we have that $\overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}^*) \xrightarrow{\mathbb{P}} \mathbb{E}\,\boldsymbol{\psi}(\mathbf{X}; \boldsymbol{\theta}^*) = \mathbf{0}$. Therefore, using the uniqueness of the root $\widehat{\boldsymbol{\theta}}_n$, we can show that $\widehat{\boldsymbol{\theta}}_n \xrightarrow{\mathbb{P}} \boldsymbol{\theta}^*$ via the bound
$$\mathbb{P}[\|\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*\| > \varepsilon] \leq \mathbb{P}[\|\overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}^*)\| > \delta] \to 0.$$
Second, we take a Taylor expansion of each component of the vector $\overline{\boldsymbol{\psi}}_n(\widehat{\boldsymbol{\theta}}_n)$ around $\boldsymbol{\theta}^*$ to obtain
$$\mathbf{0} = \overline{\boldsymbol{\psi}}_n(\widehat{\boldsymbol{\theta}}_n) = \overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}^*) + \mathbf{J}_n(\widetilde{\boldsymbol{\theta}}_n)(\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*),$$
where $\mathbf{J}_n(\boldsymbol{\theta})$ is the Jacobian matrix of $\overline{\boldsymbol{\psi}}_n$ at $\boldsymbol{\theta}$, and $\widetilde{\boldsymbol{\theta}}_n$ lies on the line segment joining $\widehat{\boldsymbol{\theta}}_n$ and $\boldsymbol{\theta}^*$. Rearranging the last equation and multiplying both sides by $\sqrt{n}$ gives
$$-\mathbf{J}_n(\widetilde{\boldsymbol{\theta}}_n)\,\sqrt{n}\,(\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*) = \sqrt{n}\,\overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}^*).$$
By the multivariate central limit theorem, $\sqrt{n}\,\overline{\boldsymbol{\psi}}_n(\boldsymbol{\theta}^*)$ converges in distribution to $\mathcal{N}(\mathbf{0}, \mathbf{B})$. Therefore,
$$-\mathbf{J}_n(\widetilde{\boldsymbol{\theta}}_n)\,\sqrt{n}\,(\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*) \xrightarrow{\;d\;} \mathcal{N}(\mathbf{0}, \mathbf{B}).$$
The weak law of large numbers applied to the iid random matrices $\big\{\frac{\partial\boldsymbol{\psi}}{\partial\boldsymbol{\theta}}(\mathbf{X}_i; \boldsymbol{\theta}^*)\big\}$ shows that $\mathbf{J}_n(\boldsymbol{\theta}^*) \xrightarrow{\mathbb{P}} \mathbf{A}$. Moreover, since $\widetilde{\boldsymbol{\theta}}_n \xrightarrow{\mathbb{P}} \boldsymbol{\theta}^*$ and $\mathbf{J}_n$ is continuous in $\boldsymbol{\theta}$, we have that $\mathbf{J}_n(\widetilde{\boldsymbol{\theta}}_n) \xrightarrow{\mathbb{P}} \mathbf{A}$. Therefore, by Slutsky's theorem,
$$\sqrt{n}\,(\widehat{\boldsymbol{\theta}}_n - \boldsymbol{\theta}^*) \xrightarrow{\;d\;} \mathcal{N}\big(\mathbf{0}, \mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-\top}\big). \;\square$$
The result holds under far less stringent assumptions.
Laplace's Approximation

Finally, we mention Laplace's approximation, which shows how integrals or expectations behave under a normal distribution with vanishingly small variance.

Theorem (Laplace's Approximation). Suppose that $\boldsymbol{\theta}_n \to \boldsymbol{\theta}^*$, where $\boldsymbol{\theta}^*$ lies in the interior of the open set $\Theta \subseteq \mathbb{R}^p$, and that $\boldsymbol{\Sigma}_n$ is a covariance matrix such that $n\boldsymbol{\Sigma}_n \to \boldsymbol{\Sigma}$. Let $h: \Theta \to \mathbb{R}$ be a continuous function with $h(\boldsymbol{\theta}^*) \neq 0$. Then, as $n \to \infty$,
$$\int_\Theta h(\boldsymbol{\theta})\,\frac{\exp\big(-\frac12(\boldsymbol{\theta} - \boldsymbol{\theta}_n)^\top\boldsymbol{\Sigma}_n^{-1}(\boldsymbol{\theta} - \boldsymbol{\theta}_n)\big)}{\sqrt{|2\pi\boldsymbol{\Sigma}_n|}}\,\mathrm{d}\boldsymbol{\theta} \;\to\; h(\boldsymbol{\theta}^*).$$

Proof (sketch, for a bounded domain $\Theta$). The left-hand side can be written as an expectation with respect to the $\mathcal{N}(\boldsymbol{\theta}_n, \boldsymbol{\Sigma}_n)$ distribution:
$$\int_\Theta h(\boldsymbol{\theta})\,\frac{\exp\big(-\frac12(\boldsymbol{\theta} - \boldsymbol{\theta}_n)^\top\boldsymbol{\Sigma}_n^{-1}(\boldsymbol{\theta} - \boldsymbol{\theta}_n)\big)}{\sqrt{|2\pi\boldsymbol{\Sigma}_n|}}\,\mathrm{d}\boldsymbol{\theta} = \mathbb{E}\big[h(\mathbf{X}_n)\,\mathbb{1}\{\mathbf{X}_n \in \Theta\}\big],$$
where $\mathbf{X}_n \sim \mathcal{N}(\boldsymbol{\theta}_n, \boldsymbol{\Sigma}_n)$. Let $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_p)$. Then $\boldsymbol{\theta}_n + \boldsymbol{\Sigma}_n^{1/2}\mathbf{Z}$ has the same distribution as $\mathbf{X}_n$, and $\boldsymbol{\theta}_n + \boldsymbol{\Sigma}_n^{1/2}\mathbf{Z} \to \boldsymbol{\theta}^*$ as $n \to \infty$ (since $\boldsymbol{\Sigma}_n \to \mathbf{O}$). By continuity of $h(\boldsymbol{\theta})\,\mathbb{1}\{\boldsymbol{\theta} \in \Theta\}$ in the interior of $\Theta$, as $n \to \infty$,
$$\mathbb{E}\big[h(\mathbf{X}_n)\,\mathbb{1}\{\mathbf{X}_n \in \Theta\}\big] = \mathbb{E}\big[h(\boldsymbol{\theta}_n + \boldsymbol{\Sigma}_n^{1/2}\mathbf{Z})\,\mathbb{1}\{\boldsymbol{\theta}_n + \boldsymbol{\Sigma}_n^{1/2}\mathbf{Z} \in \Theta\}\big] \to h(\boldsymbol{\theta}^*)\,\mathbb{1}\{\boldsymbol{\theta}^* \in \Theta\}.$$
Since $\boldsymbol{\theta}^*$ lies in the interior of $\Theta$, we have $\mathbb{1}\{\boldsymbol{\theta}^* \in \Theta\} = 1$, completing the proof. □

As an application of Laplace's approximation, we can show the following.

Theorem (Approximation of Integrals). Suppose that $r: \Theta \to \mathbb{R}$ is twice continuously differentiable with a unique global minimum at $\boldsymbol{\theta}^*$, and $h: \Theta \to \mathbb{R}$ is continuous with $h(\boldsymbol{\theta}^*) \neq 0$. Then, as $n \to \infty$,
$$\frac{1}{n}\ln\int_\Theta h(\boldsymbol{\theta})\,e^{-n\,r(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta} \;\to\; -r(\boldsymbol{\theta}^*).$$
More generally, if $r_n: \Theta \to \mathbb{R}$ has a unique global minimum $\boldsymbol{\theta}_n$ and $r_n(\boldsymbol{\theta}_n) \to r(\boldsymbol{\theta}^*)$, then
$$\frac{1}{n}\ln\int_\Theta h(\boldsymbol{\theta})\,e^{-n\,r_n(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta} \;\to\; -r(\boldsymbol{\theta}^*).$$
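Before turning to the proof sketch, here is a quick numerical check of the first limit (not from the original text; the functions $r$ and $h$ below are arbitrary choices satisfying the assumptions): the quantity $\frac1n\ln\int h(\theta)\,e^{-n r(\theta)}\,\mathrm{d}\theta$ should approach $-r(\theta^*) = 0$.

import numpy as np
from scipy.integrate import quad

r = lambda th: (th - 1.0)**2 / 2        # unique global minimum r(1) = 0
h = lambda th: 2.0 + np.sin(th)         # continuous, h(theta*) != 0

for n in [10, 100, 1000]:
    I, _ = quad(lambda th: h(th) * np.exp(-n * r(th)), -10, 10, points=[1.0])
    print(n, np.log(I) / n)             # tends (slowly) to -r(theta*) = 0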
Proof. We only sketch the proof of the first limit. Let $\mathbf{H}(\boldsymbol{\theta})$ be the Hessian matrix of $r$ at $\boldsymbol{\theta}$. By Taylor's theorem we can write
$$r(\boldsymbol{\theta}) = r(\boldsymbol{\theta}^*) + \underbrace{\nabla r(\boldsymbol{\theta}^*)^\top(\boldsymbol{\theta} - \boldsymbol{\theta}^*)}_{=\,0} + \tfrac12(\boldsymbol{\theta} - \boldsymbol{\theta}^*)^\top\mathbf{H}(\widetilde{\boldsymbol{\theta}})(\boldsymbol{\theta} - \boldsymbol{\theta}^*),$$
where $\widetilde{\boldsymbol{\theta}}$ is a point that lies on the line segment joining $\boldsymbol{\theta}^*$ and $\boldsymbol{\theta}$. Since $\boldsymbol{\theta}^*$ is a unique global minimum, there must be a small enough neighborhood of $\boldsymbol{\theta}^*$, say $\Theta^*$, such that $r$ is a strictly (also known as strongly) convex function on $\Theta^*$. In other words, $\mathbf{H}(\boldsymbol{\theta})$ is a positive definite matrix for all $\boldsymbol{\theta} \in \Theta^*$, and there exists a smallest positive eigenvalue $l$ such that $\mathbf{x}^\top\mathbf{H}(\boldsymbol{\theta})\mathbf{x} \geq l\,\|\mathbf{x}\|^2$ for all $\mathbf{x}$. In addition, since the maximum eigenvalue of $\mathbf{H}(\boldsymbol{\theta})$ is a continuous function of $\boldsymbol{\theta} \in \Theta^*$ and is bounded, there must exist a constant $u$ such that $\mathbf{x}^\top\mathbf{H}(\boldsymbol{\theta})\mathbf{x} \leq u\,\|\mathbf{x}\|^2$ for all $\mathbf{x}$. In other words, denoting $r^* := r(\boldsymbol{\theta}^*)$, we have the bounds
$$\tfrac{l}{2}\,\|\boldsymbol{\theta} - \boldsymbol{\theta}^*\|^2 \;\leq\; r(\boldsymbol{\theta}) - r^* \;\leq\; \tfrac{u}{2}\,\|\boldsymbol{\theta} - \boldsymbol{\theta}^*\|^2, \quad \boldsymbol{\theta} \in \Theta^*.$$
Therefore,
$$e^{-n r^*}\int_{\Theta^*} h(\boldsymbol{\theta})\,e^{-\frac{nu}{2}\|\boldsymbol{\theta}-\boldsymbol{\theta}^*\|^2}\,\mathrm{d}\boldsymbol{\theta} \;\leq\; \int_{\Theta^*} h(\boldsymbol{\theta})\,e^{-n r(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta} \;\leq\; e^{-n r^*}\int_{\Theta^*} h(\boldsymbol{\theta})\,e^{-\frac{nl}{2}\|\boldsymbol{\theta}-\boldsymbol{\theta}^*\|^2}\,\mathrm{d}\boldsymbol{\theta}.$$
An application of Laplace's approximation to both bounding integrals (with covariance matrices $\mathbf{I}_p/(nu)$ and $\mathbf{I}_p/(nl)$, respectively) shows that each is of the order of a constant times $n^{-p/2}\,e^{-n r^*}$, and hence
$$\frac1n\ln\int_{\Theta^*} h(\boldsymbol{\theta})\,e^{-n r(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta} \;\to\; -r^*.$$
Thus, the proof will be complete once we show that $\int_{\Theta^c} h(\boldsymbol{\theta})\,e^{-n r(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta}$, with $\Theta^c := \Theta \setminus \Theta^*$, is asymptotically negligible compared to $\int_{\Theta^*} h(\boldsymbol{\theta})\,e^{-n r(\boldsymbol{\theta})}\,\mathrm{d}\boldsymbol{\theta}$. Since $\boldsymbol{\theta}^*$ is a global minimum that lies outside $\Theta^c$, there must exist a constant $c_0 > 0$ such that $r(\boldsymbol{\theta}) - r^* \geq c_0$ for all $\boldsymbol{\theta} \in \Theta^c$. Therefore,
$$\int_{\Theta^c} h(\boldsymbol{\theta})\,e^{-n(r(\boldsymbol{\theta}) - r^*)}\,e^{-n r^*}\,\mathrm{d}\boldsymbol{\theta} \;\leq\; e^{-n r^*}\,e^{-(n-1)c_0}\int_{\Theta^c} h(\boldsymbol{\theta})\,e^{-(r(\boldsymbol{\theta}) - r^*)}\,\mathrm{d}\boldsymbol{\theta},$$
and the last expression is of order $o\big(n^{-p/2}\,e^{-n r^*}\big)$, concluding the proof. □

Markov Chains

Definition (Markov Chain). A Markov chain is a collection $\{X_t, t = 0, 1, 2, \dots\}$ of random variables (or random vectors) whose futures are conditionally independent of their pasts given their present values. That is,
$$\mathbb{P}[X_{t+1} \in A \mid X_0 = x_0, \dots, X_t = x_t] = \mathbb{P}[X_{t+1} \in A \mid X_t = x_t]$$
for all $t$, all events $A$, and all possible values $x_0, \dots, x_t$.

In other words, the conditional distribution of the future variable $X_{t+1}$, given the entire past $\{X_u, u \leq t\}$, is the same as the conditional distribution of $X_{t+1}$ given only the present $X_t$. This property is called the Markov property.
The index $t$ in $X_t$ is usually seen as a "time" or "step" parameter. The index set $\{0, 1, 2, \dots\}$ in the definition above was chosen out of convenience; it can be replaced by any countable index set. We restrict ourselves to time-homogeneous Markov chains, that is, Markov chains for which the conditional pdf $f_{X_{t+1}\mid X_t}(y \mid x)$ does not depend on $t$; we abbreviate this pdf as $q(y \mid x)$. The $\{q(y \mid x)\}$ are called the (one-step) transition densities of the Markov chain. Note that the random variables or vectors $\{X_t\}$ may be discrete (e.g., taking values in some set $\{1, \dots, r\}$) or continuous (e.g., taking values in an interval $[a, b]$ or in $\mathbb{R}^d$). In particular, in the discrete case each $q(y \mid x)$ is a probability: $q(y \mid x) = \mathbb{P}[X_{t+1} = y \mid X_t = x]$. The distribution of $X_0$ is called the initial distribution of the Markov chain. The one-step transition densities and the initial distribution completely specify the distribution of the random vector $[X_0, \dots, X_t]^\top$. Namely, we have by the product rule and the Markov property that the joint pdf is given by
$$f_{X_0,\dots,X_t}(x_0, \dots, x_t) = f_{X_0}(x_0)\,q(x_1 \mid x_0)\,q(x_2 \mid x_1)\cdots q(x_t \mid x_{t-1}).$$

A Markov chain is said to be ergodic if the probability distribution of $X_t$ converges to a fixed distribution as $t \to \infty$. Ergodicity is a property of many Markov chains. Intuitively, the probability of encountering the Markov chain in state $x$ at a time $t$ far into the future should not depend on $t$, provided that the Markov chain can reach every state from any other state (such Markov chains are said to be irreducible) and does not "escape" to infinity. Thus, for an ergodic Markov chain the pdf $f_{X_t}(x)$ converges to a fixed limiting pdf $f(x)$ as $t \to \infty$, irrespective of the starting state. For the discrete case, $f(x)$ corresponds to the long-run fraction of time that the Markov process visits $x$. Under mild conditions (such as irreducibility), the limiting pdf $f(x)$ can be found by solving the global balance equations:
$$f(x) = \sum_y f(y)\,q(x \mid y) \;\text{ (discrete case)}, \qquad f(x) = \int f(y)\,q(x \mid y)\,\mathrm{d}y \;\text{ (continuous case)}.$$
For the discrete case, the rationale behind this is as follows: since $f(x)$ is the long-run proportion of time that the Markov chain spends in $x$, the proportion of transitions out of $x$ is $f(x)$. This should be balanced with the proportion of transitions into state $x$, which is $\sum_y f(y)\,q(x \mid y)$.

One is often interested in a stronger type of balance equations. Imagine that we have taken a video of the evolution of the Markov chain, which we may run in forward and reverse time. If we cannot determine whether the video is running forward or backward (that is, we cannot detect any systematic "looping", which would indicate in which direction time is flowing), the chain is said to be time-reversible or simply reversible. Although not every Markov chain is reversible, each ergodic Markov chain, when run backwards, gives another Markov chain, the reverse Markov chain, with transition densities $\widetilde{q}(y \mid x) = f(y)\,q(x \mid y)/f(x)$. To see this, first observe that $f(x)$ is the long-run proportion of time spent in $x$ for both the original and the reverse Markov chain. Secondly, the "probability flux" from $x$ to $y$ in the reversed chain must be equal to the probability flux from $y$ to $x$ in the original chain, meaning $f(x)\,\widetilde{q}(y \mid x) = f(y)\,q(x \mid y)$, which yields the
stated transition densities for the reversed chain. In particular, for a reversible Markov chain we have
$$f(x)\,q(y \mid x) = f(y)\,q(x \mid y) \quad \text{for all } x, y.$$
These are the detailed (or local) balance equations. Note that the detailed balance equations imply the global balance equations. Hence, if a Markov chain is irreducible and there exists a pdf $f$ such that the detailed balance equations hold, then $f(x)$ must be the limiting pdf. In the discrete state space case, an additional condition is that the chain must be aperiodic, meaning that the return times to the same state cannot always be a multiple of some integer $d \geq 2$.

Example (Random Walk on a Graph). Consider a Markov chain that performs a "random walk" on the graph in the figure below, at each step jumping from the current vertex (node) to one of the adjacent vertices, with equal probability. Clearly, this Markov chain is reversible; it is also irreducible and aperiodic. Let $f(x)$ denote the limiting probability that the chain is in vertex $x$, and let $\deg(x)$ denote the number of neighbors of vertex $x$, so that $q(y \mid x) = 1/\deg(x)$ for each neighbor $y$ of $x$. The detailed balance equations then read $f(x)/\deg(x) = f(y)/\deg(y)$ for all adjacent vertices $x$ and $y$, and together with $\sum_x f(x) = 1$ it follows that the limiting probability of each vertex is proportional to its degree:
$$f(x) = \frac{\deg(x)}{\sum_y \deg(y)}.$$

[Figure: The random walk on this graph is reversible.]
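The degree-proportional limiting distribution is easy to confirm by simulating the walk. The sketch below is not from the original text; the small example graph is an arbitrary choice (it is not the graph in the figure). It compares empirical visit frequencies with $\deg(x)/\sum_y \deg(y)$.

import numpy as np

rng = np.random.default_rng(5)
# adjacency lists of a small undirected graph (example graph)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
deg = np.array([len(adj[v]) for v in range(4)])

T = 10**5
visits = np.zeros(4)
x = 0
for _ in range(T):
    x = rng.choice(adj[x])    # jump to a uniformly chosen neighbor
    visits[x] += 1

print(visits / T)             # empirical visit frequencies
print(deg / deg.sum())        # stationary pdf: f(x) proportional to deg(x)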
Statistics

Statistics deals with the gathering, summarization, analysis, and interpretation of data. The two main branches of statistics are the following.

1. Classical or frequentist statistics: here the observed data $\tau$ is viewed as the outcome of random data $T$ described by a probabilistic model. Usually the model is specified up to a (multidimensional) parameter; that is, $T \sim g(\cdot\,; \boldsymbol{\theta})$ for some $\boldsymbol{\theta} \in \Theta$. The statistical inference is then purely concerned with the model and in particular with the parameter $\boldsymbol{\theta}$. For example, on the basis of the data one may wish to (a) estimate the parameter, (b) perform statistical tests on the parameter, or (c) validate the model.

2. Bayesian statistics: in this approach we average over all possible values of the parameter $\boldsymbol{\theta}$, using a user-specified weight function (prior pdf) $g(\boldsymbol{\theta})$, and obtain the model $g(\tau) = \int g(\tau \mid \boldsymbol{\theta})\,g(\boldsymbol{\theta})\,\mathrm{d}\boldsymbol{\theta}$. For practical computations, this means that we can treat $\boldsymbol{\theta}$ as a random variable with pdf $g(\boldsymbol{\theta})$. Bayes' formula,
$$g(\boldsymbol{\theta} \mid \tau) \propto g(\tau \mid \boldsymbol{\theta})\,g(\boldsymbol{\theta}),$$
is then used to learn $\boldsymbol{\theta}$ from the observed data $\tau$.

Example (IID Sample). The most fundamental statistical model is where the data $T = \{X_1, \dots, X_n\}$ is such that the random variables $X_1, \dots, X_n$ are assumed to be independent and identically distributed,
$$X_1, \dots, X_n \sim_{\text{iid}} \mathsf{Dist},$$
according to some known or unknown distribution $\mathsf{Dist}$. An iid sample is often called a random sample in the statistics literature. Note that the word "sample" can refer both to a collection of random variables and to a single random variable; it should be clear from the context which meaning is being used. Often our guess or model for the true distribution is specified up to an unknown parameter $\boldsymbol{\theta} \in \Theta$. The most common model is $X_1, \dots, X_n \sim_{\text{iid}} \mathcal{N}(\mu, \sigma^2)$, in which case $\boldsymbol{\theta} = (\mu, \sigma^2)$ and $\Theta = \mathbb{R} \times \mathbb{R}_+$.

Estimation

Suppose the model $T \sim g(\cdot\,; \boldsymbol{\theta})$ for the data is completely specified up to an unknown parameter vector $\boldsymbol{\theta}$. The aim is to estimate $\boldsymbol{\theta}$ on the basis of the observed data $\tau$ only. (An alternative goal could be to estimate $\boldsymbol{\psi}(\boldsymbol{\theta})$ for some vector-valued function $\boldsymbol{\psi}$.) Specifically, the goal is to find an estimator $\widehat{T} = \widehat{T}(T)$ that is close to the unknown $\boldsymbol{\theta}$. The corresponding outcome $\widehat{T}(\tau)$ is the estimate of $\boldsymbol{\theta}$. The bias of an estimator $\widehat{T}$ of $\boldsymbol{\theta}$ is defined as $\mathbb{E}\,\widehat{T} - \boldsymbol{\theta}$; an estimator is said to be unbiased if $\mathbb{E}\,\widehat{T} = \boldsymbol{\theta}$. We often write $\widehat{\boldsymbol{\theta}}$ for both an estimator and an estimate of $\boldsymbol{\theta}$.

The mean squared error (MSE) of a real-valued estimator $\widehat{T}$ of $\theta$ is defined as $\mathrm{MSE} = \mathbb{E}(\widehat{T} - \theta)^2$. An estimator $\widehat{T}_1$ is said to be more efficient than an estimator $\widehat{T}_2$ if the MSE of $\widehat{T}_1$ is smaller than the MSE of $\widehat{T}_2$. The MSE can be written as the sum
$$\mathrm{MSE} = \big(\mathbb{E}\,\widehat{T} - \theta\big)^2 + \mathbb{V}\mathrm{ar}\,\widehat{T}.$$
The first term measures the (squared) bias and the second is the variance of the estimator; in particular, for an unbiased estimator the MSE is simply equal to its variance. For simulation purposes it is often important to include the running time of the estimator in efficiency comparisons. One way to compare two unbiased estimators $\widehat{T}_1$ and $\widehat{T}_2$ is to compare their relative time variance products,
$$R_i = t_i\,\mathbb{V}\mathrm{ar}\big(\widehat{T}_i\big), \quad i = 1, 2,$$
where $t_1$ and $t_2$ are the times required to calculate the estimators $\widehat{T}_1$ and $\widehat{T}_2$, respectively. In this scheme, $\widehat{T}_1$ is considered more efficient than $\widehat{T}_2$ if its relative time variance product is smaller. We discuss next two systematic approaches for constructing sound estimators.

Method of Moments

Suppose $x_1, \dots, x_n$ are outcomes of an iid sample $X_1, \dots, X_n \sim_{\text{iid}} g(\cdot\,; \boldsymbol{\theta})$, where $\boldsymbol{\theta} = [\theta_1, \dots, \theta_k]^\top$ is unknown. The moments of the sampling distribution can easily be estimated. Namely, if $X \sim g(\cdot\,; \boldsymbol{\theta})$, then the $r$-th moment of $X$, that is, $\mu_r(\boldsymbol{\theta}) := \mathbb{E}\,X^r$ (assuming it exists), can be estimated through the sample $r$-th moment $\frac{1}{n}\sum_{i=1}^n x_i^r$. The method of moments involves choosing the estimate $\widehat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$ such that each of the first $k$ sample and true moments are matched:
$$\mu_r(\widehat{\boldsymbol{\theta}}) = \frac{1}{n}\sum_{i=1}^n x_i^r, \quad r = 1, \dots, k.$$
In general, this set of equations is nonlinear, and so its solution often has to be found numerically.

Example (Sample Mean and Sample Variance). Suppose the data is given by $\tau = \{x_1, \dots, x_n\}$, where the $\{x_i\}$ form an iid sample from a general distribution with mean $\mu$ and variance $\sigma^2$. Matching the first two moments gives the set of equations
$$\frac{1}{n}\sum_{i=1}^n x_i = \widehat{\mu}, \qquad \frac{1}{n}\sum_{i=1}^n x_i^2 = \widehat{\sigma}^2 + \widehat{\mu}^2.$$
The method of moments estimates for $\mu$ and $\sigma^2$ are therefore the sample mean
$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$$
and
$$\widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n x_i^2 - \bar{x}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2.$$
The corresponding estimator $\overline{X}$ for $\mu$ is unbiased. However, the estimator for $\sigma^2$ is biased: $\mathbb{E}\,\widehat{\sigma}^2 = \sigma^2\,(n-1)/n$. An unbiased estimator is the sample variance
$$S^2 = \frac{1}{n-1}\sum_{i=1}^n \big(X_i - \overline{X}\big)^2.$$
Its square root $S$ is called the sample standard deviation.
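The bias factor $(n-1)/n$ in the example is visible in a small experiment. The sketch below (not from the original text; all parameters are arbitrary choices) compares the method-of-moments variance estimator with the sample variance over many repeated samples.

import numpy as np

rng = np.random.default_rng(6)
sigma2 = 4.0
n, N = 10, 100000
X = rng.normal(0.0, np.sqrt(sigma2), size=(N, n))

var_mom = X.var(axis=1, ddof=0)   # method of moments (divide by n)
var_unb = X.var(axis=1, ddof=1)   # sample variance (divide by n-1)
print(var_mom.mean())             # approx sigma2 * (n-1)/n = 3.6
print(var_unb.mean())             # approx sigma2 = 4.0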
Example (Sample Covariance Matrix). The method of moments can also be used to estimate the covariance matrix of a random vector. In particular, let $\mathbf{X}_1, \dots, \mathbf{X}_n$ be iid copies of a $d$-dimensional random vector $\mathbf{X}$ with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. As in the one-dimensional case, the moment estimator for $\boldsymbol{\mu}$ is $\overline{\mathbf{X}} = (\mathbf{X}_1 + \cdots + \mathbf{X}_n)/n$. As the covariance matrix can be written as $\boldsymbol{\Sigma} = \mathbb{E}\,(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^\top$, the method of moments yields the estimator
$$\frac{1}{n}\sum_{i=1}^n \big(\mathbf{X}_i - \overline{\mathbf{X}}\big)\big(\mathbf{X}_i - \overline{\mathbf{X}}\big)^\top.$$
Similar to the one-dimensional case, replacing the factor $1/n$ with $1/(n-1)$ gives an unbiased estimator, called the sample covariance matrix.

Maximum Likelihood Method

The concept of likelihood is central in statistics. It describes in a precise way the information about model parameters that is contained in the observed data. Let $T$ be (random) data that is modeled as a draw from the pdf $g(\cdot\,; \boldsymbol{\theta})$ (discrete or continuous) with parameter vector $\boldsymbol{\theta} \in \Theta$, and let $\tau$ be an outcome of $T$. The function
$$L(\boldsymbol{\theta}; \tau) := g(\tau; \boldsymbol{\theta}), \quad \boldsymbol{\theta} \in \Theta,$$
is called the likelihood function of $\boldsymbol{\theta}$ based on $\tau$. The (natural) logarithm of the likelihood function is called the log-likelihood function and is often denoted by the lowercase $l$. Note that $L(\boldsymbol{\theta}; \tau)$ and $g(\tau; \boldsymbol{\theta})$ have the same formula, but the first is viewed as a function of $\boldsymbol{\theta}$ for fixed $\tau$, whereas the second is viewed as a function of $\tau$ for fixed $\boldsymbol{\theta}$.

The concept of likelihood is particularly useful when $T$ is modeled as an iid sample $\{X_1, \dots, X_n\}$ from some pdf $g(\cdot\,; \boldsymbol{\theta})$. In that case, the likelihood of the data $\tau = \{x_1, \dots, x_n\}$, as a function of $\boldsymbol{\theta}$, is given by the product
$$L(\boldsymbol{\theta}; \tau) = \prod_{i=1}^n g(x_i; \boldsymbol{\theta}).$$
Let $\tau$ be an observation of $T \sim g(\cdot\,; \boldsymbol{\theta})$, and suppose that $L(\boldsymbol{\theta}; \tau)$ takes its largest value at $\boldsymbol{\theta} = \widehat{\boldsymbol{\theta}}$. In a way, this $\widehat{\boldsymbol{\theta}}$ is our best estimate for $\boldsymbol{\theta}$, as it maximizes the probability (density) of the observation $\tau$. It is called the maximum likelihood estimate (MLE) of $\boldsymbol{\theta}$. Note that $\widehat{\boldsymbol{\theta}} = \widehat{\boldsymbol{\theta}}(\tau)$ is a function of $\tau$; the corresponding random variable, also denoted $\widehat{\boldsymbol{\theta}}$, is the maximum likelihood estimator (also abbreviated as MLE).

Maximization of $L(\boldsymbol{\theta}; \tau)$ as a function of $\boldsymbol{\theta}$ is equivalent (when searching for the maximizer) to maximizing the log-likelihood $l(\boldsymbol{\theta}; \tau) = \ln L(\boldsymbol{\theta}; \tau)$, as the natural logarithm is an increasing function. This is often easier, especially when $T$ is an iid sample from some sampling distribution. For example, for $L$ of the product form above, we have
$$l(\boldsymbol{\theta}; \tau) = \sum_{i=1}^n \ln g(x_i; \boldsymbol{\theta}).$$
If $l(\boldsymbol{\theta}; \tau)$ is a differentiable function with respect to $\boldsymbol{\theta}$, the maximum is attained in the interior of $\Theta$, and there exists a unique maximum point, then we can find the MLE of $\boldsymbol{\theta}$ by solving the equations
$$\frac{\partial\, l(\boldsymbol{\theta}; \tau)}{\partial \theta_i} = 0, \quad i = 1, \dots, k.$$

Example (Bernoulli Random Sample). Suppose we have data $\tau = \{x_1, \dots, x_n\}$, with each $x_i \in \{0, 1\}$, and assume the model $X_1, \dots, X_n \sim_{\text{iid}} \mathsf{Ber}(\theta)$. Then the likelihood function is given by
$$L(\theta; \tau) = \prod_{i=1}^n \theta^{x_i}(1-\theta)^{1-x_i} = \theta^{s}(1-\theta)^{n-s},$$
where $s := x_1 + \cdots + x_n = n\bar{x}$. The log-likelihood is $l(\theta; s) = s\ln\theta + (n-s)\ln(1-\theta)$. Through differentiation with respect to $\theta$, we find the derivative
$$\frac{s}{\theta} - \frac{n-s}{1-\theta} = \frac{s - n\theta}{\theta(1-\theta)}.$$
Solving $\partial l(\theta; s)/\partial\theta = 0$ gives the ML estimate $\widehat{\theta} = s/n = \bar{x}$ and the ML estimator $\widehat{\theta} = \overline{X}$.
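As a sanity check of the Bernoulli example, the following sketch (not from the original text; the success probability, sample size, and seed are arbitrary) confirms that numerical maximization of the log-likelihood reproduces the closed-form MLE $\bar{x}$.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.binomial(1, 0.3, size=1000)         # Ber(0.3) sample
s, n = x.sum(), x.size

negloglik = lambda th: -(s * np.log(th) + (n - s) * np.log(1 - th))
res = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method='bounded')
print(res.x, s / n)                         # numerical maximizer equals x-bar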
Confidence Intervals

An essential part of any estimation procedure is to provide an assessment of the accuracy of the estimate. Indeed, without information on its accuracy, the estimate itself would be meaningless. Confidence intervals (also called interval estimates) provide a precise way of describing the uncertainty in the estimate.

Let $X_1, \dots, X_n$ be random variables with a joint distribution depending on a parameter $\theta \in \Theta$. Let $T_1 < T_2$ be statistics, that is, $T_1 = T_1(X_1, \dots, X_n)$ and $T_2 = T_2(X_1, \dots, X_n)$ are functions of the data, but not of $\theta$. The random interval $(T_1, T_2)$ is called a stochastic confidence interval for $\theta$ with confidence $1 - \alpha$ if
$$\mathbb{P}_\theta[T_1 < \theta < T_2] \geq 1 - \alpha \quad \text{for all } \theta \in \Theta.$$
If $t_1$ and $t_2$ are the observed values of $T_1$ and $T_2$, then the interval $(t_1, t_2)$ is called the (numerical) confidence interval for $\theta$ with confidence $1 - \alpha$. If the displayed inequality is merely a heuristic estimate or approximation of the true probability, then the resulting interval is called an approximate confidence interval. The probability $\mathbb{P}_\theta[T_1 < \theta < T_2]$ is called the coverage probability; for a $1-\alpha$ confidence interval it must be at least $1 - \alpha$ for every $\theta \in \Theta$. For multidimensional parameters $\boldsymbol{\theta} \in \mathbb{R}^d$, the stochastic confidence interval is replaced with a stochastic confidence region $R \subseteq \mathbb{R}^d$ such that $\mathbb{P}_{\boldsymbol{\theta}}[\boldsymbol{\theta} \in R] \geq 1 - \alpha$ for all $\boldsymbol{\theta} \in \Theta$.

Example (Approximate Confidence Interval for the Mean). Let $X_1, \dots, X_n$ be an iid sample from a distribution with mean $\mu$ and variance $\sigma^2$ (both assumed to be unknown). By the central limit theorem and the law of large numbers,
$$\frac{\overline{X} - \mu}{S/\sqrt{n}} \;\overset{\text{approx.}}{\sim}\; \mathcal{N}(0, 1)$$
for large $n$, where $S$ is the sample standard deviation. Rearranging the approximate equality
$$\mathbb{P}\big[|\overline{X} - \mu| < z_{1-\alpha/2}\,S/\sqrt{n}\big] \approx 1 - \alpha,$$
where $z_{1-\alpha/2}$ is the $1 - \alpha/2$ quantile of the standard normal distribution, yields
$$\mathbb{P}\big[\overline{X} - z_{1-\alpha/2}\,S/\sqrt{n} < \mu < \overline{X} + z_{1-\alpha/2}\,S/\sqrt{n}\big] \approx 1 - \alpha,$$
so that $\big(\overline{X} - z_{1-\alpha/2}\,S/\sqrt{n},\; \overline{X} + z_{1-\alpha/2}\,S/\sqrt{n}\big)$, abbreviated as $\overline{X} \pm z_{1-\alpha/2}\,S/\sqrt{n}$, is an approximate stochastic $(1-\alpha)$ confidence interval for $\mu$. Since this is an asymptotic result only, care should be taken when applying it to cases where the sample size is small or moderate and the sampling distribution is heavily skewed.
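The approximate confidence interval of the example can be computed in a few lines. A minimal sketch (not from the original text; the data-generating distribution is an arbitrary stand-in with mean $\mu = 4$):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
x = rng.gamma(2.0, 2.0, size=400)         # data with unknown mean mu = 4
alpha = 0.05
z = norm.ppf(1 - alpha / 2)               # 1 - alpha/2 quantile
half = z * x.std(ddof=1) / np.sqrt(x.size)
print(x.mean() - half, x.mean() + half)   # approximate 95% CI for mu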
Hypothesis Testing

Suppose the model for the data $T$ is described by a family of probability distributions that depend on a parameter $\theta \in \Theta$. The aim of hypothesis testing is to decide, on the basis of the observed data $\tau$, which of two competing hypotheses holds true: the null hypothesis $H_0: \theta \in \Theta_0$ or the alternative hypothesis $H_1: \theta \in \Theta_1$. In classical statistics the null hypothesis and the alternative hypothesis do not play equivalent roles: $H_0$ contains the "status quo" statement and is only rejected if the observed data are very unlikely to have happened under $H_0$.

The decision whether to accept or reject $H_0$ depends on the outcome of a test statistic $T$. (For simplicity, we discuss only the one-dimensional case.) Two (related) types of decision rules are generally used.

Decision Rule 1: Reject $H_0$ if $T$ falls in the critical region. Here the critical region is an appropriately chosen region in $\mathbb{R}$. In practice, a critical region is one of the following types:

left one-sided: $(-\infty, c]$;
right one-sided: $[c, \infty)$;
two-sided: $(-\infty, c_1] \cup [c_2, \infty)$.

For example, for a right one-sided test, $H_0$ is rejected if the outcome of the test statistic is too large. The endpoints $c$, $c_1$, and $c_2$ of the critical regions are called critical values.

Decision Rule 2: Reject $H_0$ if the p-value is smaller than some significance level $\alpha$. The p-value is the probability that, under $H_0$, the (random) test statistic $T$ takes a value as extreme as, or more extreme than, the one observed. In particular, if $t$ is the observed outcome of the test statistic $T$, then

left one-sided test: $p := \mathbb{P}_{H_0}[T \leq t]$;
right one-sided test: $p := \mathbb{P}_{H_0}[T \geq t]$;
two-sided test: $p := 2\min\{\mathbb{P}_{H_0}[T \leq t],\, \mathbb{P}_{H_0}[T \geq t]\}$.

The smaller the p-value, the greater the strength of the evidence against $H_0$ provided by the data. As a rule of thumb: $p < 0.10$ is suggestive evidence, $p < 0.05$ is reasonable evidence, and $p < 0.01$ is strong evidence against $H_0$.

Whether the first or the second decision rule is used, one can make two types of errors, as depicted in the following table.

Table: Type I and type II errors in hypothesis testing.

Decision     | H0 is true   | H1 is true
Accept H0    | correct      | type II error
Reject H0    | type I error | correct

The choice of the test statistic and the corresponding critical region involves a multiobjective optimization criterion, whereby both the probability of a type I error and the probability of a type II error should, ideally, be as small as possible. Unfortunately, these probabilities compete with each other: for example, if the critical region is made larger (smaller), the probability of a type II error is reduced (increased), but at the same time the probability of a type I error is increased (reduced). Since the type I error is considered more serious, Neyman and Pearson suggested the following approach: choose the critical region such that the probability of a type II error is as small as possible, while keeping the probability of a type I error below a predetermined small significance level $\alpha$.

Remark (Equivalence of Decision Rules). Note that Decision Rules 1 and 2 are equivalent in the following sense: $T$ falls in the critical region at significance level $\alpha$ if and only if the p-value is at most the significance level $\alpha$.
In other words, the p-value of the test is the smallest level of significance that would lead to the rejection of $H_0$.

In general, a statistical test involves the following steps.

1. Formulate an appropriate statistical model for the data.
2. Give the null ($H_0$) and alternative ($H_1$) hypotheses in terms of the parameters of the model.
3. Determine the test statistic (a function of the data only).
4. Determine the (approximate) distribution of the test statistic under $H_0$.
5. Calculate the outcome of the test statistic.
6. Calculate the p-value or the critical region, given a preselected significance level $\alpha$.
7. Accept or reject $H_0$.

The actual choice of an appropriate test statistic is akin to selecting a good estimator for the unknown parameter $\theta$: the test statistic should summarize the information about $\theta$ and make it possible to distinguish between the two hypotheses.

Example (Hypothesis Testing). We are given outcomes $x_1, \dots, x_m$ and $y_1, \dots, y_n$ of two simulation studies obtained via independent runs, with sample means $\bar{x}$ and $\bar{y}$ and sample standard deviations $s_X$ and $s_Y$. Thus, the $\{x_i\}$ are outcomes of iid random variables $\{X_i\}$, the $\{y_i\}$ are outcomes of iid random variables $\{Y_i\}$, and the $\{X_i\}$ and $\{Y_i\}$ are independent. We wish to assess whether the expectations $\mu_X = \mathbb{E}X_i$ and $\mu_Y = \mathbb{E}Y_i$ are the same or not. Going through the steps above:

1. The model is already specified above.
2. $H_0: \mu_X = \mu_Y$ versus $H_1: \mu_X \neq \mu_Y$.
3. Take the test statistic
$$T = \frac{\overline{X} - \overline{Y}}{\sqrt{S_X^2/m + S_Y^2/n}}.$$
4. By the central limit theorem, the statistic $T$ has, under $H_0$, approximately a standard normal distribution (assuming the variances are finite).
5. The outcome of $T$ is $t = (\bar{x} - \bar{y})/\sqrt{s_X^2/m + s_Y^2/n}$.
6. As this is a two-sided test, the p-value is $2\,\mathbb{P}_{H_0}[T \geq |t|] \approx 2\big(1 - \Phi(|t|)\big)$.
7. Here the p-value turned out to be extremely small, so there is overwhelming evidence that the two expectations are not the same.
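The calculation in the example can be reproduced as follows. Since the example's original numerical values were not retained, the summary statistics below are hypothetical placeholders.

import numpy as np
from scipy.stats import norm

# hypothetical summary statistics (the original values were lost)
m, n = 100, 100
xbar, sx = 1.35, 0.25
ybar, sy = 1.50, 0.30

t_out = (xbar - ybar) / np.sqrt(sx**2 / m + sy**2 / n)
p = 2 * norm.cdf(-abs(t_out))     # two-sided p-value
print(t_out, p)                   # a small p-value leads to rejecting H0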
Further Reading

Accessible treatises on probability and stochastic processes include [?, ?]. Kallenberg's book [?] provides a complete graduate-level overview of the foundations of modern probability. Details on the convergence of probability measures and limit theorems can be found in [?]. For an accessible introduction to mathematical statistics with simple applications, see, for example, [?]. For a more detailed overview of statistical inference, see [?]. A standard reference for classical (frequentist) statistical inference is [?].
Python Primer

Python has become the programming language of choice for many researchers and practitioners in data science and machine learning. This appendix gives a brief introduction to the language. As the language is under constant development and each year many new packages are released, we do not pretend to be exhaustive in this introduction. Instead, we hope to provide enough information for novices to get started with this beautiful and carefully thought-out language.

Getting Started

The main website for Python is https://www.python.org, where you will find documentation, a tutorial, beginners' guides, software examples, and so on. It is important to note that there are two incompatible "branches" of Python, called Python 2 and Python 3. Further development of the language will involve only Python 3, and in this appendix (and indeed the rest of the book) we only consider Python 3.

As there are many interdependent packages that are frequently used with Python, it is convenient to install a distribution; for instance, the Anaconda Python distribution, available from https://www.anaconda.com. The Anaconda installer automatically installs the most important packages and also provides a convenient interactive development environment (IDE), called Spyder. Use the Anaconda Navigator to launch Spyder or a Jupyter notebook, to install and update packages, or to open a command-line terminal.

To get started, try out the Python statements in the input boxes that follow, assuming you have installed all the necessary files and have launched Spyder. You can either type these statements at the IPython command prompt or run them as (very short)
Python programs. The output for these two modes of input can differ slightly. For example, typing a variable name in the console causes its contents to be automatically printed, whereas in a Python program this must be done explicitly by calling the print function. Selecting (highlighting) several program lines in Spyder and then pressing the F9 function key (this may depend on the keyboard and operating system) is equivalent to executing these lines one by one in the console.

In Python, data is represented as an object or a relation between objects. Basic data types are numeric types (including integers, booleans, and floats), sequence types (including strings, tuples, and lists), sets, and mappings (currently, dictionaries are the only built-in mapping type).

Strings are sequences of characters, enclosed by single or double quotes. We can print strings via the print function.

print("Hello World!")
Hello World!

For pretty-printing output, Python strings can be formatted using the format method. The bracket syntax {i} provides a placeholder for the i-th variable to be printed, with i = 0 referring to the first variable. Individual variables can be formatted separately and as desired.

print("Name: {0}, height: {1}, age: {2}".format("Bilbo", 1.27, 111))
Name: Bilbo, height: 1.27, age: 111

Lists can contain different types of objects and are created using square brackets, as in the following example:

x = [1, 'string', "another string"]   # quote type is not important
x
[1, 'string', 'another string']

Elements in lists are indexed starting from 0 and are mutable (can be changed):

x[0] = 2    # note that the first index is 0
x
[2, 'string', 'another string']

In contrast, tuples (with round brackets) are immutable (cannot be changed). Strings are immutable as well.

x = (1, 2)
x[0] = 1
TypeError: 'tuple' object does not support item assignment

Lists can be accessed via the slice notation [start:end]. It is important to note that end is the index of the first element that will not be selected, and that the first element has index 0. To gain familiarity with the slice notation, execute each of the following lines.
x = [1, 2, 3, 4, 5]
x[1:3]     # elements with index from 1 to 2
[2, 3]
x[:3]      # all elements with index less than 3
[1, 2, 3]
x[3:]      # all elements with index 3 or more
[4, 5]
x[-2:]     # the last two elements
[4, 5]

An operator is a programming language construct that performs an action on one or more operands. The action of an operator in Python depends on the type of the operand(s). For example, operators such as +, -, *, and /, which are arithmetic operators when the operands are of numeric type, can have different meanings for objects of non-numeric type (such as strings).

'hello' + 'world'    # string concatenation
'helloworld'
'hello' * 2          # string repetition
'hellohello'
[1, 2] * 2           # list repetition
[1, 2, 1, 2]
7 % 3                # remainder of 7/3
1

Some common Python operators are given in the table further below.
Python Objects

As mentioned in the previous section, data in Python is represented by objects or relations between objects. Recall that basic data types include strings and numeric types (such as integers, booleans, and floats). As Python is an object-oriented programming language, functions are objects too (everything is an object!). Each object has an identity (unique to each object and immutable, that is, it cannot be changed, once created), a type (which determines which operations can be applied to the object, and which is considered immutable), and a value (which is either mutable or immutable). The unique identity assigned to an object obj can be found by calling id, as in id(obj).

Each object has a list of attributes, and each attribute is a reference to another object. The function dir applied to an object returns the list of its attributes. For example, a string object has many useful attributes, as we shall shortly see. Functions are objects with the __call__ attribute. A class (see the Classes section below) can be thought of as a template for creating a custom type of object.

s = "hello"
d = dir(s)
print(d, flush=True)   # print the list in flushed format
['__add__', '__class__', '__contains__', '__delattr__', '__dir__',
 ...(many left out)..., 'replace', 'rfind', 'rindex', 'rjust',
 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith',
 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']

Any attribute attr of an object obj can be accessed via the dot notation: obj.attr. To find more information about any object, use the help function.

s = 'hello'
help(s.replace)
Help on built-in function replace:

replace(old, new[, count]) -> str
    Return a copy of S with all occurrences of substring old replaced
    by new. If the optional argument count is given, only the first
    count occurrences are replaced.

This shows that the attribute replace is in fact a function. An attribute that is a function is called a method. We can use the replace method to create a new string from the old one by changing certain characters.

s = 'hello'.replace('e', 'a')
print(s)
hallo

In many Python editors, pressing the TAB key after a dot, as in objectname., will bring up a list of possible attributes via the editor's autocompletion feature.
Types and Operators

Each object has a type. Three basic data types in Python are str (for strings), int (for integers), and float (for floating point numbers). The function type returns the type of an object.

t1 = type([1, 2, 3])
t2 = type((1, 2, 3))
t3 = type({1, 2, 3})
print(t1, t2, t3)
<class 'list'> <class 'tuple'> <class 'set'>

The assignment operator, =, assigns an object to a variable; e.g., x = 1. An expression is a combination of values, operators, and variables that yields another value or variable. Variable names are case sensitive and can only contain letters, numbers, and underscores; they must start with either a letter or an underscore. Note that reserved words such as True and False are case sensitive as well.

Python is a dynamically typed language, and the type of a variable at a particular point during program execution is determined by its most recent object assignment. That is, the type of a variable does not need to be explicitly declared from the outset (as is the case in C or Java); instead, the type of the variable is determined by the object that is currently assigned to it.

It is important to understand that a variable in Python is a reference to an object: think of it as a label on a shoe box. Even though the label is a simple entity, the contents of the shoe box (the object to which the variable refers) can be arbitrarily complex. Instead of moving the contents of one shoe box to another, it is much simpler to merely move the label.

x = [1, 2]
y = x                    # y refers to the same object as x
print(id(x) == id(y))    # check that the object ids are the same
x[0] = 10                # change the contents of the list that x refers to
print(y)
True
[10, 2]

x = [1, 2]
y = x                    # y refers to the same object as x
x = [3, 4]               # x now refers to a different object
print(id(x) == id(y))
print(y)
False
[1, 2]

The table below shows a selection of Python operators for numerical and logical variables.

Table: Common numerical (left) and logical (right) operators.

+    addition             ~    binary NOT
-    subtraction          &    binary AND
*    multiplication       ^    binary XOR
**   power                |    binary OR
/    division             ==   equal to
//   integer division     !=   not equal to
%    modulus
Several of the numerical operators can be combined with an assignment operator, as in x += 1, which means x = x + 1. Operators such as + and * can be defined for other data types as well, where they take on a different meaning. This is called operator overloading, an example of which is the use of * for list repetition, as we saw earlier.

Functions and Methods

Functions make it easier to divide a complex program into simpler parts. To create a function, use the following syntax:

def <function name>(<list of parameters>):
    <statements>

A function takes a list of input variables that are references to objects. Inside the function, a number of statements are executed which may modify the objects, but not the references themselves. In addition, the function may return an output object (or it returns the value None if not explicitly instructed to return output). Think again of the shoe box analogy: the input variables of a function are labels of shoe boxes, and the objects to which they refer are the contents of the shoe boxes. The following program highlights some of the subtleties of variables and objects in Python. Note that the statements within a function must be indented; this is Python's way to define where a function begins and ends.

x = [1, 2, 3]

def change_list(y):
    y.append(100)   # append an element to the list referenced by y
    y[0] = 0        # modify the first element of the same list
    y = [2, 3, 4]   # the local y now refers to a different list;
                    # the list to which x first referred does not change
    return sum(y)

print(change_list(x))
print(x)
9
[0, 2, 3, 100]

Variables that are defined inside a function only have local scope; that is, they are recognized only within that function. This allows the same variable name to be used in different functions without creating a conflict. If any variable is used within a function, Python first checks if the variable has local scope. If this is not the case (the variable has not been defined inside the function), then Python searches for that variable outside the function (the global scope). The following program illustrates several important points.
from numpy import array, square, sqrt

x = array([1.2, 2.3, 4.5])

def stat(x):
    n = len(x)   # the length of x
    meanx = sum(x)/n
    stdx = sqrt(sum(square(x - meanx))/n)
    return [meanx, stdx]

print(stat(x))
[2.6666666666666665, 1.371953151745924]

A few remarks:

1. Basic math functions such as sqrt are unknown to the standard Python interpreter and need to be imported; more on this in the Modules section below.
2. As was already mentioned, indentation is crucial: it shows where the function begins and ends.
3. No semicolons are needed to end lines (although semicolons can be used to put multiple commands on a single line), but the def line of the function definition must end with a colon (:).
4. Lists are not arrays (vectors of numbers), and vector operations cannot be performed on lists. However, the numpy module is designed specifically with efficient vector/matrix operations in mind. In the code above, we define x as a vector via an ndarray object. The functions square and sqrt are then applied to such arrays. Note that we used the default Python functions len and sum. More on numpy in its own section.
5. Running the program with stat(x) instead of print(stat(x)) in the last line will not show any output in the console.

To display the complete list of built-in functions, type (using double underscores): dir(__builtins__).

Modules

A Python module is a programming construct that is useful for organizing code into manageable parts. To each module with name module_name is associated a Python file module_name.py containing any number of definitions, e.g., of functions, classes, and variables, as well as executable statements. Modules can be imported into other programs using the syntax:

import <module name> as <alias>

where <alias> is a shorthand name for the module.
When imported into another Python file, the module name is treated as a namespace, providing a naming system where each object has its unique name. For example, different modules mod1 and mod2 can have different sum functions, but they can be distinguished by prefixing the function name with the module name via the dot notation, as in mod1.sum and mod2.sum. For example, the following code uses the sqrt function of the numpy module.

import numpy as np
np.sqrt(2)
1.4142135623730951

A Python package is simply a directory of Python modules; that is, a collection of modules with additional startup information (some of which may be found in its __path__ attribute). Python's built-in module is called __builtins__. Of the great many useful Python modules, the table below gives a few.

Table: A few useful Python modules/packages.

datetime     module for manipulating dates and times
matplotlib   MATLAB-type plotting package
numpy        fundamental package for scientific computing, including random
             number generation and linear algebra tools; defines the
             ubiquitous ndarray class
os           Python interface to the operating system
pandas       fundamental module for data analysis; defines the powerful
             DataFrame class
pytorch      machine learning library that supports GPU computation
scipy        ecosystem for mathematics, science, and engineering, containing
             many tools for numerical computing, including those for
             integration, solving differential equations, and optimization
requests     library for performing HTTP requests and interfacing with the web
seaborn      package for statistical data visualization
sklearn      easy-to-use machine learning library
statsmodels  package for the analysis of statistical models

The numpy package contains various subpackages, such as random, linalg, and fft; more details are given in its own section. When using Spyder, press Ctrl+I in front of any object to display its help file in a separate window.

As we have already seen, it is also possible to import only specific functions from a module, using the syntax: from <module name> import <function names>.

from numpy import sqrt, cos
sqrt(2)
1.4142135623730951
cos(1)
0.5403023058681398
15,235 | This avoids the tedious prefixing of functions via the (alias of the) module name. However, for large programs it is good practice to always use the prefix/alias construction, to be able to clearly ascertain precisely to which module a function being used belongs.

Flow control

Flow control in Python is similar to that of many programming languages, with conditional statements as well as while and for loops. The syntax for if-then-else flow control is as follows:

if <condition1>:
    <statements>
elif <condition2>:
    <statements>
else:
    <statements>

Here, <condition1> and <condition2> are logical conditions that are either True or False; logical conditions often involve comparison operators (such as ==, >, <=, and !=). In the example above, there is one elif part, which allows for an "else if" conditional statement. In general, there can be more than one elif part, or it can be omitted. The else part can also be omitted. The colons are essential, as are the indentations. The while and for loops have the following syntax:

while <condition>:
    <statements>

for <variable> in <collection>:
    <statements>

Above, <collection> is an iterable object (see the section on iteration below). For further control in for and while loops, one can use a break statement to exit the current loop, and the continue statement to continue with the next iteration of the loop, while abandoning any remaining statements in the current iteration. Here is an example (a further sketch combining these constructs is given after this program; the die-rolling values are illustrative).

import numpy as np
ans = 'y'
while ans != 'n':
    outcome = np.random.randint(1, 6+1)   # roll a die
    if outcome == 6:
        print("Hooray!")
        break
    else:
        print("Bad luck,", outcome)
    ans = input("Again? (y/n) ")
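To make the conditional syntax above concrete, the following minimal sketch (with made-up values) combines if/elif/else with a for loop and the continue statement:

for x in range(-3, 4):
    if x < 0:
        continue               # skip the rest of this iteration
    elif x == 0:
        print('zero')
    else:
        print('positive:', x)

zero
positive: 1
positive: 2
positive: 3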
15,236 | Iteration

Iterating over a sequence of objects, such as used in a for loop, is a common operation. To better understand how iteration works, we consider the following code.

s = "Hello"
for c in s:
    print(c, '*', end=' ')

H * e * l * l * o *

A string is an example of a Python object that can be iterated. One of the methods of a string object is __iter__. Any object that has such a method is called an iterable. Calling this method creates an iterator: an object that returns the next element in the sequence to be iterated. This is done via the method __next__.

s = "Hello"
t = s.__iter__()      # t is now an iterator; same as iter(s)
print(t.__next__())   # same as next(t)
print(t.__next__())
print(t.__next__())

H
e
l

The inbuilt functions next and iter simply call these corresponding double-underscore functions of an object. When executing a for loop, the sequence/collection over which to iterate must be an iterable. During the execution of the for loop, an iterator is created and the next function is executed until there is no next element. An iterator is also an iterable, so it can be used in a for loop as well. Lists, tuples, and strings are so-called sequence objects and are iterables, where the elements are iterated by their index. The most common iterator in Python is the range iterator, which allows iteration over a range of indices. Note that range returns a range object, not a list.

for i in range(6, 9):
    print(i, end=' ')
print(range(6, 9))

6 7 8 range(6, 9)

Similar to Python's slice operator [i:j], the iterator range(i, j) ranges from i to j, not including the index j. Two other common iterables are sets and dictionaries. Python sets are, as in mathematics, unordered collections of unique objects. Sets are defined with curly brackets { }, as opposed to round brackets for tuples and square brackets for lists. Unlike lists, sets do not have duplicate elements. Many of the usual set operations are implemented in Python, including the union (|) and intersection (&).
15,237 | A = {3, 2, 2, 4}    # duplicates are removed
B = {4, 5, 6}
for i in A:
    print(i)
print(A & B)

2
3
4
{4}

A useful way to construct lists is by list comprehension; that is, by expressions of the form

[<expression> for <element> in <list> if <condition>]

For sets a similar construction holds. In this way, lists and sets can be defined using very similar syntax as in mathematics. Compare, for example, the mathematical definition of the sets A := {1, 3, 5} (no order and no duplication of elements) and B := {x^2 : x in A} with the Python code below.

setA = {1, 3, 5}
setB = {x**2 for x in setA}
print(setB)
listA = [1, 3, 5]
listB = [x**2 for x in listA]
print(listB)

{1, 9, 25}
[1, 9, 25]

A dictionary is a set-like data structure, containing one or more key:value pairs enclosed in curly brackets. The keys are often of the same type, but do not have to be; the same holds for the values. Here is a simple example, storing the (illustrative) ages of Lord of the Rings characters in a dictionary.

dict = {'Gimly': 140, 'Frodo': 51, 'Aragorn': 88}
for key in dict:
    print(key, dict[key])

Gimly 140
Frodo 51
Aragorn 88

Classes

Recall that objects are of fundamental importance in Python; indeed, data types and functions are all objects. A class is an object type, and writing a class definition can be thought of as creating a template for a new type of object. Each class contains a number of attributes, including a number of inbuilt methods. The basic syntax for the creation of a class is:
15,238 | class <class name>:
    def __init__(self, <parameters>):
        <statements>

The main inbuilt method is __init__, which creates an instance of a class object. For example, str is a class object (string class), but str('Hello'), or simply 'Hello', creates an instance, s say, of the str class. Instance attributes are created during initialization and their values may be different for different instances. In contrast, the values of class attributes are the same for every instance. The variable self in the initialization method refers to the current instance that is being created. Here is a simple example, explaining how attributes are assigned (the default age is illustrative).

class shire_person:
    def __init__(self, name):       # initialization method
        self.name = name            # instance attribute
        self.age = 21               # instance attribute
    address = 'The Shire'           # class attribute

print(dir(shire_person)[1:5], dir(shire_person)[-2:])  # some class attributes
p1 = shire_person('Sam')            # create an instance
p2 = shire_person('Frodo')          # create another instance
print(p1.__dict__)                  # list of instance attributes
p2.race = 'Hobbit'                  # add another attribute to this instance
p2.age = 33                         # change an instance attribute
print(p2.__dict__)
print(getattr(p1, 'address'))       # content of p1's class attribute

['__delattr__', '__dict__', '__dir__', '__doc__'] ['__weakref__', 'address']
{'name': 'Sam', 'age': 21}
{'name': 'Frodo', 'age': 33, 'race': 'Hobbit'}
The Shire

It is good practice to create all the attributes of a class object in the __init__ method, but, as seen in the example above, attributes can be created and assigned everywhere, even outside the class definition. More generally, attributes can be added to any object that has a __dict__. An "empty" class can be created via

class <class name>:
    pass

Inheritance

Python classes can be derived from a parent class by inheritance, via the following syntax:

class <derived class name>(<parent class name>):
    <statements>
15,239 | The derived class (initially) inherits all of the attributes of the parent class. As an example, the class shire_person below inherits the attributes name, age, and address from its parent class Person. This is done using the super function, used here to refer to the parent class Person without naming it explicitly. When creating a new object of type shire_person, the __init__ method of the parent class is invoked, and an additional instance attribute shire_address is created. The dir function confirms that shire_address is an attribute only of shire_person instances (default attribute values are illustrative).

class Person:
    def __init__(self, name):
        self.name = name
        self.age = 21
        self.address = 'Middle Earth'

class shire_person(Person):
    def __init__(self, name):
        super().__init__(name)
        self.shire_address = 'Bag End'

p1 = shire_person("Frodo")
p2 = Person("Gandalf")
print(dir(p1)[-1:], dir(p1)[-4:-1])
print(dir(p2)[0:1], dir(p2)[-3:])

['shire_address'] ['address', 'age', 'name']
['__class__'] ['address', 'age', 'name']

Files

To write to or read from a file, a file first needs to be opened. The open function in Python creates a file object that is iterable, and thus can be processed in a sequential manner in a for or while loop. Here is a simple example.

fout = open('output.txt', 'w')
for i in range(0, 41):
    if i % 10 == 0:
        fout.write('{:3d}\n'.format(i))
fout.close()

The first argument of open is the name of the file. The second argument specifies if the file is opened for reading ('r'), writing ('w'), appending ('a'), and so on; see help(open). Files are written in text mode by default, but it is also possible to write in binary mode. The above program creates a file output.txt with five lines, containing the strings 0, 10, 20, 30, and 40. Note that if we had written fout.write(i) in the fourth line of the code above, an error message would be produced, as the variable i is an integer, and not a string. Recall that the expression <string>.format() is Python's way to specify the format of the output string. The formatting syntax {:3d} indicates that the output should be constrained to a specific width of three characters, each of which is a decimal value. As mentioned in the
15,240 | introduction, the bracket syntax {i} provides a placeholder for the i-th variable to be printed, with 0 being the first index. The format for the output is further specified by {i:format}, where format is typically of the form

[width][.precision][type]

In this specification, width specifies the minimum width of the output; precision specifies the number of digits to be displayed after the decimal point for floating point values of type f, or the number of digits before and after the decimal point for floating point values of type g; and type specifies the type of output. The most common types are

s   for strings,
d   for integers,
b   for binary numbers,
f   for floating point numbers (floats) in fixed-point notation,
g   for floats in general notation,
e   for floats in scientific notation.

The following illustrates some behavior of formatting on numbers (the particular values are illustrative).

'{:3d}'.format(7)          #  '  7'
'{:.3f}'.format(3.14159)   #  '3.142'
'{:.3e}'.format(3141.59)   #  '3.142e+03'
'{:.3g}'.format(3141.59)   #  '3.14e+03'
'{:6.2f}'.format(-1.5)     #  ' -1.50'
'{:b}'.format(10)          #  '1010'

The following code reads the text file output.txt line by line, and prints the output on the screen. To remove the newline character \n, we have used the strip method for strings, which removes any whitespace from the start and end of a string.

fin = open('output.txt', 'r')
for line in fin:
    line = line.strip()    # strips the newline character
    print(line)
fin.close()

0
10
20
30
40

Many more formatting options are possible.
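Returning briefly to string formatting: since Python 3.6, so-called f-strings offer a concise alternative to the format method, with the same format specifications applying after a colon inside the braces. A small sketch:

x = 3.14159
print(f'{x:8.3f}')             # same as '{:8.3f}'.format(x)
print(f'pi is roughly {x}')

   3.142
pi is roughly 3.14159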
15,241 | When dealing with file input and output it is important to always close files. Files that remain open, e.g., when a program finishes unexpectedly due to a programming error, can cause considerable system problems. For this reason it is recommended to open files via context management. The syntax is as follows.

with open('output.txt', 'w') as f:
    f.write('Hi there!')

Context management ensures that a file is correctly closed even when the program is terminated prematurely. An example is given in the next program, which outputs the ten most frequent words in Dickens' A Tale of Two Cities, which can be downloaded from the book's GitHub site as ataleof2cities.txt. Note that in the next program, the file ataleof2cities.txt must be placed in the current working directory. The current working directory can be determined via import os followed by cwd = os.getcwd().

numline = 0
dict = {}
with open('ataleof2cities.txt', encoding="utf8") as fin:
    for line in fin:
        words = line.split()
        for w in words:
            if w not in dict:
                dict[w] = 1
            else:
                dict[w] += 1
        numline += 1
sd = sorted(dict, key=dict.get, reverse=True)   # sort the dictionary
print("Number of unique words: {}\n".format(len(dict)))
print("Ten most frequent words:\n")
print("{:8} {}".format("word", "count"))
print(15*'-')
for i in range(0, 10):
    print("{:8} {}".format(sd[i], dict[sd[i]]))

The output reports the number of unique words, followed by a two-column table of the ten most frequent words (the, and, of, to, in, his, was, that, ...) with their counts, in decreasing order of frequency.
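The same counts can also be obtained more succinctly with the Counter class from Python's standard collections module, which behaves like a dictionary of counts. This is a sketch of the idea (using the same file as above), not a replacement for the program in the text:

from collections import Counter

counts = Counter()
with open('ataleof2cities.txt', encoding="utf8") as fin:
    for line in fin:
        counts.update(line.split())   # add the words on this line
print(len(counts))                    # number of unique words
print(counts.most_common(10))         # list of (word, count) pairs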
15,242 | numpy

The package numpy (module name numpy) provides the building blocks for scientific computing in Python. It contains all the standard mathematical functions, such as sin, cos, tan, etc., as well as efficient functions for random number generation, linear algebra, and statistical computation.

import numpy as np    # import the package
x = np.cos(1)
data = [1, 2, 3, 4, 5, 6]
y = np.mean(data)
z = np.std(data)
print('cos(1) = {:1.3f}, mean = {:1.3f}, std = {:1.3f}'.format(x, y, z))

cos(1) = 0.540, mean = 3.500, std = 1.708

Creating and shaping arrays

The fundamental data type in numpy is the ndarray. This data type allows for fast matrix operations via highly optimized numerical libraries such as LAPACK and BLAS, in contrast to (nested) lists. As such, numpy is often essential when dealing with large amounts of quantitative data. ndarray objects can be created in various ways. The following code creates a 2 x 3 x 4 array of zeros. Think of it as a 3-dimensional array, or two stacked 3 x 4 matrices.

a = np.zeros([2, 3, 4])   # 2 by 3 by 4 array of zeros
print(a)
print(a.shape)            # the dimensions of the array
print(type(a))            # a is an ndarray

[[[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]

 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
(2, 3, 4)
<class 'numpy.ndarray'>

We will be mostly working with 2D arrays; that is, ndarrays that represent ordinary matrices. We can also use the range method and lists to create ndarrays via the array method. Note that arange is numpy's version of range, with the difference that arange returns an ndarray object.

b = np.array(range(4))     # equivalent to np.arange(4)
c = np.array([0, 1, 2, 3])
d = np.array([[1, 2, 3], [3, 2, 1]])
print(b, '\n', c, '\n', d)

[0 1 2 3]
[0 1 2 3]
[[1 2 3]
 [3 2 1]]
15,243 | The dimension of an ndarray can be obtained via its shape method, which returns a tuple. Arrays can be reshaped via the reshape method. This does not change the current ndarray object. To make the change permanent, a new instance needs to be created.

a = np.array(range(9))   # a is an ndarray of shape (9,)
print(a.shape)
b = a.reshape(3, 3)      # b is an ndarray of shape (3,3)
print(a)
print(b)

(9,)
[0 1 2 3 4 5 6 7 8]
[[0 1 2]
 [3 4 5]
 [6 7 8]]

One shape dimension for reshape can be specified as -1. The dimension is then inferred from the other dimension(s). The T attribute of an ndarray gives its transpose. Note that the transpose of a "vector" with shape (n,) is the same vector. To distinguish between column and row vectors, reshape such a vector to an n x 1 and 1 x n array, respectively.

a = np.arange(3)        # 1D array (vector) of shape (3,)
print(a)
print(a.shape)
b = a.reshape(-1, 1)    # 2D array (matrix) of shape (3,1)
print(b)
print(b.T)
c = np.arange(4).reshape(2, 2)
print(c.T)

[0 1 2]
(3,)
[[0]
 [1]
 [2]]
[[0 1 2]]
[[0 2]
 [1 3]]

Two useful methods of joining arrays are hstack and vstack, where the arrays are joined horizontally and vertically, respectively.

a = np.ones((3, 2))
b = np.zeros((3, 2))
c = np.hstack((a, b))
print(c)
15,244 | [[1. 1. 0. 0.]
 [1. 1. 0. 0.]
 [1. 1. 0. 0.]]

Slicing

Arrays can be sliced similarly to Python lists. If an array has several dimensions, a slice for each dimension needs to be specified. Recall that Python indexing starts at '0' and ends at 'len(obj)-1'. The following program illustrates various slicing operations.

a = np.array(range(9)).reshape(3, 3)
print(a)
print(a[0])         # first row
print(a[:, 1])      # second column
print(a[0, 1])      # element in first row and second column
print(a[0:1, 1:2])  # (1,1) ndarray containing a[0,1]
print(a[1:3, -1])   # elements in 2nd and 3rd rows and last column

[[0 1 2]
 [3 4 5]
 [6 7 8]]
[0 1 2]
[1 4 7]
1
[[1]]
[5 8]

Note that ndarrays are mutable objects, so that elements can be modified directly, without having to create a new object.

a[0, 0] = 100
a[1, 1] = 200    # change two elements in the matrix above
print(a)

[[100   1   2]
 [  3 200   5]
 [  6   7   8]]

Array operations

Basic mathematical operators and functions act element-wise on ndarray objects.

x = np.array([[4, 4], [9, 16]])
y = np.array([[2, 2], [3, 4]])
print(x + y)
print(np.divide(x, y))    # same as x/y

[[ 6  6]
 [12 20]]
[[2. 2.]
 [3. 4.]]
15,245 | print(np.sqrt(x))

[[2. 2.]
 [3. 4.]]

In order to compute matrix multiplications and inner products of vectors, numpy's dot function can be used, either as a method of an ndarray instance or as a method of np.

print(np.dot(x, y))

[[20 24]
 [66 82]]

print(x.dot(y))    # same as np.dot(x,y)

[[20 24]
 [66 82]]

Since version 3.5 of Python, it is possible to multiply two ndarrays using the @ operator (which implements the np.matmul method). For matrices, this is similar to using the dot method; for higher-dimensional arrays the two methods behave differently.

print(x @ y)

[[20 24]
 [66 82]]

numpy allows arithmetic operations on arrays of different shapes (dimensions). Specifically, suppose two arrays have shapes (m1, ..., md) and (n1, ..., nd), respectively. The arrays or shapes are said to be aligned if for all i = 1, ..., d it holds that

mi = ni, or min{mi, ni} = 1, or either mi or ni (or both) are missing.

For example, shapes (1, 3) and (4, 3) are aligned, as are (2,) and (3, 2); however, (2,) and (3,) are not aligned. numpy "duplicates" the array elements across the smaller dimension to match the larger dimension. This process is called broadcasting and is carried out without actually making copies, thus providing efficient memory use. Below are some examples.

import numpy as np
a = np.arange(4).reshape(4, 1)           # (4,1) array
x = np.array([0, 1, 2]).reshape(1, 3)    # (1,3) array
print(a + x)    # shapes (4,1) and (1,3)
print(a * x)    # shapes (4,1) and (1,3)
15,246 | [[0 1 2]
 [1 2 3]
 [2 3 4]
 [3 4 5]]
[[0 0 0]
 [0 1 2]
 [0 2 4]
 [0 3 6]]

Note that above, a is duplicated column-wise and x row-wise. Broadcasting also applies to the matrix-wise operator @, as illustrated below. Here, the matrix c is duplicated across the third dimension, resulting in the two matrix multiplications b[0] @ c and b[1] @ c.

b = np.arange(8).reshape(2, 2, 2)
c = np.arange(4).reshape(2, 2)
print(b @ c)

[[[ 2  3]
  [ 6 11]]

 [[10 19]
  [14 27]]]

Functions such as sum, mean, and std can also be executed as methods of an ndarray instance. The argument axis can be passed to specify along which dimension the function is applied. By default, axis=None.

a = np.array(range(4)).reshape(2, 2)
print(a.sum(axis=0))    # summing over the rows gives the column totals

[2 4]

Random numbers

One of the sub-modules in numpy is random. It contains many functions for random variable generation.

import numpy as np
np.random.seed(123)            # set the seed for the random number generator
x = np.random.random()         # uniform on (0,1)
y = np.random.randint(5, 9)    # discrete uniform on 5,...,8
z = np.random.randn(4)         # array of four standard normals
print(x, y, '\n', z)

For more information on random variable generation in numpy, see the numpy documentation.
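As an aside: more recent versions of numpy (1.17 and later) recommend the Generator interface over the legacy functions used above. A minimal sketch of the equivalent calls (the seed value is arbitrary):

import numpy as np
rng = np.random.default_rng(123)   # a seeded Generator object
x = rng.random()                   # uniform on (0,1)
y = rng.integers(5, 9)             # discrete uniform on 5,...,8
z = rng.standard_normal(4)         # four standard normals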
15,247 | matplotlib

The main Python graphics library for 2D and 3D plotting is matplotlib, and its subpackage pyplot contains a collection of functions that make plotting in Python similar to that in MATLAB.

Creating a basic plot

The code below illustrates various possibilities for creating plots. The style and color of lines and markers can be changed, as well as the font size of the labels. The figure below shows the result (the particular data values are illustrative).

sqrtplot.py

import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 10, 0.1)
u = np.arange(0, 10)
y = np.sqrt(x)
v = u/3
plt.figure(figsize=[4, 2])                  # size of plot in inches
plt.plot(x, y, 'g--')                       # plot green dashed line
plt.plot(u, v, 'r.')                        # plot red dots
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
plt.savefig('sqrtplot.pdf', format='pdf')   # saving as pdf
plt.show()                                  # both plots will now be drawn

[Figure: A simple plot created using pyplot.]

The library matplotlib also allows the creation of subplots. The scatterplot and histogram in the next figure have been produced using the code below. When creating a histogram there are several optional arguments that affect the layout of the graph. The number of bins is determined by the parameter bins (the default is 10). Scatterplots also take a number of parameters, such as a string c, which determines the color of the dots, and alpha, which affects the transparency of the dots.
15,248 | histscat.py

import matplotlib.pyplot as plt
import numpy as np
u = np.random.randn(100)
x = np.random.randn(100)
y = np.random.randn(100)
plt.subplot(121)                       # first subplot
plt.hist(u, bins=25, facecolor='b')
plt.xlabel('Variable')
plt.ylabel('Counts')
plt.subplot(122)                       # second subplot
plt.scatter(x, y, c='b', alpha=0.5)
plt.show()

[Figure: Histogram (counts versus the variable) and scatterplot.]

One can also create three-dimensional plots, as illustrated below.

surf3dscat.py

import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D

def npdf(x, y):
    # bivariate standard normal pdf
    return np.exp(-0.5*(pow(x, 2) + pow(y, 2)))/(2*np.pi)

x = np.random.randn(100)
y = np.random.randn(100)
z = npdf(x, y)
xgrid = np.linspace(-3, 3, 100)
ygrid = np.linspace(-3, 3, 100)
Xarray, Yarray = np.meshgrid(xgrid, ygrid)
15,249 | Zarray = npdf(Xarray, Yarray)
fig = plt.figure(figsize=plt.figaspect(0.4))
ax1 = fig.add_subplot(121, projection='3d')
ax1.scatter(x, y, z, c='g')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_zlabel('$f(x,y)$')
ax2 = fig.add_subplot(122, projection='3d')
ax2.plot_surface(Xarray, Yarray, Zarray, cmap='viridis', edgecolor='none')
ax2.set_xlabel('$x$')
ax2.set_ylabel('$y$')
ax2.set_zlabel('$f(x,y)$')
plt.show()

[Figure: Three-dimensional scatter- and surface plots of f(x,y).]

pandas

The Python package pandas (module name pandas) provides various tools and data structures for data analytics, including the fundamental DataFrame class. For the code in this section we assume that pandas has been imported via import pandas as pd.

Series and DataFrame

The two main data structures in pandas are Series and DataFrame. A Series object can be thought of as a combination of a dictionary and a 1-dimensional ndarray. The syntax
15,250 | for creating a Series object is:

s = pd.Series(<data>, index=<index>)

Here, <data> is some 1-dimensional data structure, such as a 1-dimensional ndarray, a list, or a dictionary, and <index> is a list of names of the same length as <data>. When <data> is a dictionary, the index is created from the keys of the dictionary. When <data> is an ndarray and index is omitted, the default index will be [0, ..., len(data)-1].

dict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
print(pd.Series(dict))

one      1
two      2
three    3
four     4
dtype: int64

years = ['2020', '2021', '2022']
cost = [0.25, 0.50, 1.00]
print(pd.Series(cost, index=years, name='MySeries'))   # name the series

2020    0.25
2021    0.50
2022    1.00
Name: MySeries, dtype: float64

The most commonly-used data structure in pandas is the two-dimensional DataFrame, which can be thought of as pandas' implementation of a spreadsheet, or as a dictionary in which each "key" of the dictionary corresponds to a column name and the dictionary "value" is the data in that column. To create a DataFrame one can use the pandas DataFrame method, which has three main arguments: data, index (row labels), and columns (column labels).

df = pd.DataFrame(<data>, index=<index>, columns=<columns>)

If the index is not specified, the default index is [0, ..., len(data)-1]. Data can also be read directly from a CSV or Excel file, as is done later in this section; see also the sketch after the next example. If a dictionary is used to create the data frame (as below), the dictionary keys are used as the column names.

dict = {'numbers': [1, 2, 3, 4], 'squared': [1, 4, 9, 16]}
df = pd.DataFrame(dict, index=list('abcd'))
print(df)

   numbers  squared
a        1        1
b        2        4
c        3        9
d        4       16
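As just mentioned, a DataFrame can also be created directly from a file. Here is a minimal sketch, where the file name data.csv is hypothetical:

import pandas as pd
df = pd.read_csv('data.csv')   # the first row of the file supplies the column names
print(df.head())               # display the top rows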
15,251 | Manipulating data frames

Often data encoded in DataFrame or Series objects need to be extracted, altered, or combined. Getting, setting, and deleting columns works in a similar manner as for dictionaries. The following code illustrates various operations (the ages are illustrative).

ages = [6, 3, 5, 6, 5, 8, 0, 3]
d = {'Gender': ['M', 'F']*4, 'Age': ages}
df = pd.DataFrame(d)
df.at[0, 'Age'] = 60                          # change an element
df.at[1, 'Gender'] = 'Female'                 # change another element
df1 = df.drop('Age', axis=1)                  # drop a column
df2 = df1.copy()                              # create a separate copy of df1
df2['Age'] = ages                             # add the original column
dfcomb = pd.concat([df, df1, df2], axis=1)    # combine the three dfs
print(dfcomb)

   Gender  Age  Gender  Gender  Age
0       M   60       M       M    6
1  Female    3  Female  Female    3
2       M    5       M       M    5
3       F    6       F       F    6
4       M    5       M       M    5
5       F    8       F       F    8
6       M    0       M       M    0
7       F    3       F       F    3

Note that the above DataFrame object has two Age columns. The expression dfcomb['Age'] will return a DataFrame with both these columns.

Table: Useful pandas methods for data manipulation.

agg           aggregate the data using one or more functions
apply         apply a function to a column or row
astype        change the data type of a variable
concat        concatenate data objects
replace       find and replace values
read_csv      read a CSV file into a DataFrame
sort_values   sort by values along rows or columns
stack         stack a DataFrame
to_excel      write a DataFrame to an Excel file

It is important to correctly specify the data type of a variable before embarking on data summarization and visualization tasks, as Python may treat different types of objects in dissimilar ways. Common data types for entries in a DataFrame object are float, category, datetime, bool, and int. A generic object type is object.

d = {'Gender': ['M', 'F', 'F']*4,
     'Age': [6, 3, 5, 6, 5, 8, 0, 3, 6, 6, 5, 7]}
df = pd.DataFrame(d)
print(df.dtypes)
df['Gender'] = df['Gender'].astype('category')   # change the type
print(df.dtypes)
15,252 | Gender    object
Age        int64
dtype: object

Gender    category
Age          int64
dtype: object

Extracting information

Extracting statistical information from a DataFrame object is facilitated by a large collection of methods (functions) in pandas. The table below gives a selection of data inspection methods. The code below provides several examples of useful methods. The apply method allows one to apply general functions to columns or rows of a DataFrame; these operations do not change the data. The loc method allows for accessing elements (or ranges) in a data frame and acts similarly to the slicing operation for lists and arrays, with the difference that the "stop" value is included, as illustrated in the code below.

import numpy as np
import pandas as pd
np.random.seed(123)
df = pd.DataFrame(np.random.randn(3, 4), index=list('abc'),
                  columns=list('ABCD'))
print(df)
df1 = df.loc["a":"b", "B":"C"]   # create a partial data frame
print(df1)
meanA = df['A'].mean()           # mean of the 'A' column
print('Mean of column A = {:1.4f}'.format(meanA))
expA = df['A'].apply(np.exp)     # exp of all elements in the 'A' column
print(expA)

          A         B         C         D
a -1.085631  0.997345  0.282978 -1.506295
b -0.578600  1.651437 -2.426679 -0.428913
c  1.265936 -0.866740 -0.678886 -0.094709
          B         C
a  0.997345  0.282978
b  1.651437 -2.426679
Mean of column A = -0.1328
a    0.337688
b    0.560682
c    3.546480
Name: A, dtype: float64

The groupby method of a DataFrame object is useful for summarizing and displaying the data in manipulated ways. It groups data according to one or more specified columns, such that methods such as count and mean can be applied to the grouped data.
15,253 | Table: Useful pandas methods for data inspection.

columns        column names
count          counts the number of non-NA cells
crosstab       cross-tabulate two or more categories
describe       summary statistics
dtypes         data types for each column
head           display the top rows of a DataFrame
groupby        group data by column(s)
info           display information about the DataFrame
loc            access a group of rows or columns
mean           column/row mean
plot           plot of columns
std            column/row standard deviation
sum            column/row sum
tail           display the bottom rows of a DataFrame
value_counts   counts of different non-null values
var            column/row variance

df = pd.DataFrame({'W': ['a', 'b', 'a', 'b', 'a', 'b'],
                   'X': np.random.rand(6),
                   'Y': ['c', 'c', 'd', 'd', 'c', 'd'],
                   'Z': np.random.rand(6)})
print(df)
print(df.groupby('W').mean())
print(df.groupby(['W', 'Y']).mean())

To allow for multiple functions to be calculated at once, the agg method can be used. It can take a list, dictionary, or string of functions.
15,254 | print(df.groupby('W').agg([sum, np.mean]))

          X                   Z
        sum      mean       sum      mean
W
a       ...       ...       ...       ...
b       ...       ...       ...       ...

Plotting

The plot method of a DataFrame makes plots of a DataFrame using matplotlib. Different types of plot can be accessed via the kind='str' construction, where str is one of line (default), bar, hist, box, kde, and several more. Finer control, such as modifying the font, is obtained by using matplotlib directly. The following code produces the line and box plots in the figure below.

import numpy as np
import pandas as pd
import matplotlib

df = pd.DataFrame({'normal': np.random.randn(100),
                   'uniform': np.random.uniform(0, 1, 100)})
font = {'family': 'serif', 'size': 14}   # set the font
matplotlib.rc('font', **font)            # change the font
df.plot()                                # line plot (default)
df.plot(kind='box')                      # box plot
matplotlib.pyplot.show()                 # render the plots

[Figure: Line and box plot of the normal and uniform samples, using the plot method of DataFrame.]

scikit-learn

scikit-learn is an open-source machine learning and data science library for Python. The library includes a range of algorithms relating to the topics discussed in this book. It is widely used due to its simplicity and its breadth. The module name is sklearn. Below is a brief introduction to modeling data with sklearn. The full documentation can be found at https://scikit-learn.org.
15,255 | Partitioning the data

Randomly partitioning the data in order to test the model may be achieved easily with sklearn's function train_test_split. For example, suppose that the training data is described by the matrix X of explanatory variables and the vector y of responses. Then the following code splits the data set into training and testing sets, with the testing set being half of the total set.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)

As an example, the following code generates a synthetic data set and splits it into equally-sized training and test sets (the seed and sample size are illustrative).

syndat.py

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

np.random.seed(1234)
X = np.pi*(2*np.random.random(size=(400, 2)) - 1)
y = (np.cos(X[:, 0])*np.sin(X[:, 1]) >= 0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1],
           c='g', marker='o', alpha=0.5)
ax.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1],
           c='b', marker='o', alpha=0.5)
ax.scatter(X_test[y_test==0, 0], X_test[y_test==0, 1],
           c='g', marker='s', alpha=0.5)
ax.scatter(X_test[y_test==1, 0], X_test[y_test==1, 1],
           c='b', marker='s', alpha=0.5)
plt.savefig('sklearntraintest.pdf', format='pdf')
plt.show()

Standardization

In some instances it may be necessary to standardize the data. This may be done in sklearn with scaling methods such as MinMaxScaler or StandardScaler. Scaling may improve the convergence of gradient-based estimators and is useful when visualizing data on vastly different scales. For example, suppose that X is our explanatory data (e.g., stored as a numpy array), and we wish to standardize X such that each value lies between 0 and 1.
15,256 | [Figure: Example training (circles) and test (squares) set for a two-class classification problem. The explanatory variables are the (x, y) coordinates; the classes are zero (green) or one (blue).]

from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
x_scaled = min_max_scaler.fit_transform(X)

# equivalent to:
x_scaled = (X - X.min(axis=0))/(X.max(axis=0) - X.min(axis=0))

Fitting and prediction

Once the data has been partitioned and standardized if necessary, the data may be fitted to a statistical model, e.g., a classification or regression model. For example, continuing with our data from above, the following fits a model to the data and predicts the responses for the test set.

from sklearn.<some subpackage> import <SomeClassifier>
clf = SomeClassifier()               # choose an appropriate classifier
clf.fit(X_train, y_train)            # fit the data
y_prediction = clf.predict(X_test)   # predict

Specific classifiers for logistic regression, naive Bayes, linear and quadratic discriminant analysis, K-nearest neighbors, and support vector machines are given elsewhere in the book.

Testing the model

Once the model has made its prediction, we may test its effectiveness using relevant metrics. For example, for classification we may wish to produce the confusion matrix for the
15,257 | test data. The following code does this for the data shown in the figure above, using a support vector machine classifier, and prints the resulting 2 x 2 confusion matrix.

from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
y_prediction = clf.predict(X_test)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_prediction))

System calls, URL access, and speed-up

Operating system commands (whether in Windows, macOS, or Linux) for creating directories, copying or removing files, or executing programs from the system shell can be issued from within Python by using the package os. Another useful package is requests, which enables direct downloads of files and webpages from URLs. The following Python script uses both. It also illustrates a simple example of exception handling in Python.

misc.py

import os
import requests

try:
    os.mkdir('mydir')       # make a directory, if it does not yet exist
except FileExistsError:
    pass                    # otherwise, do nothing

uname = 'https://github.com/DSML-book/Programs/tree/master/Appendices/Python Primer/'
fname = 'ataleof2cities.txt'
r = requests.get(uname + fname)
print(r.text)
open('mydir/ato2c.txt', 'wb').write(r.content)  # write to a file
                                                # bytes mode is important here

The package numba can significantly speed up calculations via smart compilation. First run the following code.

jitex.py

import time
import numpy as np
from numba import jit

n = 10**8

#@jit
def myfun(s, n):
    for i in range(1, n):
15,258 |         s = s + 1/i
    return s

start = time.perf_counter()    # time.clock was removed in Python 3.8
print("Euler's constant is approximately {:9.8f}".format(myfun(0, n) - np.log(n)))
end = time.perf_counter()
print('elapsed time: {:3.2f} seconds'.format(end - start))

Euler's constant is approximately 0.57721566
elapsed time: ... seconds

Now remove the # character before the @ character in the code above, in order to activate the "just in time" compiler. This gives a many-fold speedup:

Euler's constant is approximately 0.57721566
elapsed time: ... seconds

Further reading

To learn Python, we recommend the books by Lutz and Shaw in the bibliography. However, as Python is constantly evolving, the most up-to-date references will be available from the internet.
15,259 | [ ahalta krishnamurthyp chenand melton competitive learning algorithms for vector quantization neural networks : - [ akaike new look at the statistical model identification ieee transactions on automatic control ( ): - [ aronszajn theory of reproducing kernels transactions of the american mathematical society : - [ arthur and vassilvitskii -means++the advantages of careful seeding in proceedings of the eighteenth annual acm-siam symposium on discrete algorithmspages - philadelphia society for industrial and applied mathematics [ asmussen and glynn stochastic simulationalgorithms and analysis springernew york [ bartle the elements of integration and lebesgue measure john wiley sonshoboken [ bates and watts nonlinear regression analysis and its applications john wiley sonshoboken [ berger statistical decision theory and bayesian analysis springernew yorksecond edition [ bezdek pattern recognition with fuzzy objective function algorithms plenum pressnew york [ bickel and doksum mathematical statisticsvolume pearson prentice hallupper saddle riversecond edition |
15,260 | bibliography [ billingsley probability and measure john wiley sonsnew yorkthird edition [ bishop pattern recognition and machine learning springernew york [ boggs and byrd adaptivelimited-memory bfgs algorithms for unconstrained optimization siam journal on optimization ( ): - [ botevj grotowskiand kroese kernel density estimation via diffusion annals of statistics ( ): - [ botev and kroese global likelihood optimization via the cross-entropy methodwith an application to mixture models in ingallsm rossettij smithand peterseditorsproceedings of the winter simulation conferencepages - washingtondcdecember [ botevd kroeser rubinsteinand 'ecuyer the cross-entropy method for optimization in govindaraju and raoeditorsmachine learningtheory and applicationsvolume of handbook of statisticspages - elsevier [ boydn parikhe chub peleatoand eckstein distributed optimization and statistical learning via the alternating direction method of multipliers foundations and trends in machine learning : - [ boyd and vandenberghe convex optimization cambridge university presscambridge seventh printing with corrections [ boyles on the convergence of the em algorithm journal of the royal statistical societyseries ( ): - [ breiman classification and regression trees crc pressboca raton [ breiman bagging predictors machine learning ( ): - [ breiman heuristics of instability and stabilization in model selection annals of statistics ( ): - [ breiman random forests machine learning ( ): - [ caod - dub gaop - wanand pardalos minimax problems in combinatorial optimization in - du and pardaloseditorsminimax and applicationspages - kluwerdordrecht [ casella and berger statistical inference duxbury presspacific grovesecond edition [ chung course in probability theory academic pressnew yorksecond edition |
15,261 | [ cinlar introduction to stochastic processes prentice hallenglewood cliffs [ cover and thomas elements of information theory john wiley sonsnew york [ danielw graggl kaufmanand stewart reorthogonalization and stable algorithms for updating the gram-schmidt qr factorization mathematics of computation ( ): - [ - de boerd kroeses mannorand rubinstein tutorial on the cross-entropy method annals of operations research ( ): - [ dempstern lairdand rubin maximum likelihood from incomplete data via the em algorithm journal of the royal statistical society ( ): [ devroye non-uniform random variate generation springernew york [ draper and smith applied regression analysis john wiley sonsnew yorkthird edition [ duan and kroese splitting for optimization computers operations research : - [ dudap hartand stork pattern classification john wiley sonsnew york [ efron and hastie computer age statistical inferencealgorithmsevidenceand data science cambridge university presscambridge [ efron and tibshirani an introduction to the bootstrap chapman hallnew york [ fawcett an introduction to roc analysis ( ): - june pattern recognition letters[ feller an introduction to probability theory and its applicationsvolume john wiley sonshobokensecond edition [ ferreira and menegatto eigenvalues of integral operators defined by smooth positive definite kernels integral equations and operator theory : [ fisher and seneditors the collected works of wassily hoeffding springernew york [ fishman monte carloconceptsalgorithms and applications springernew york [ fletcher practical methods of optimization john wiley sonsnew york |
15,262 | [ freund and schapire decision-theoretic generalization of on-line learning and an application to boosting comput syst sci ( ): - [ friedman greedy function approximationa gradient boosting machine annals of statistics : - [ gelman bayesian data analysis chapman hallnew yorksecond edition [ gelman and hall data analysis using regression and multilevel/hierarchical models cambridge university presscambridge [ geman and geman stochastic relaxationgibbs distribution and the bayesian restoration of images ieee transactions on pattern analysis and machine intelligence ( ): - [ gentle random number generation and monte carlo methods springernew yorksecond edition [ gilkss richardsonand spiegelhalter markov chain monte carlo in practice chapman hallnew york [ glasserman monte carlo methods in financial engineering springernew york [ golub and van loan matrix computations johns hopkins university pressbaltimorefourth edition [ goodfellowy bengioand courville deep learning mit presscambridge [ grimmett and stirzaker probability and random processes oxford university pressthird edition [ hastier tibshiraniand friedman the elements of statistical learningdata mininginferenceand prediction springernew york [ hastier tibshiraniand wainwright statistical learning with sparsitythe lasso and generalizations crc pressboca raton [ - hiriart-urruty and lemarechal springernew york fundamentals of convex analysis [ hock and schittkowski test examples for nonlinear programming codes springernew york [ kelleyjr the cutting-plane method for solving convex programs journal of the society for industrial and applied mathematics ( ): - [ jain fundamentals of digital image processing prentice hallenglewood cliffs |
15,263 | [ kallenberg foundations of modern probability springernew yorksecond edition [ karalic linear regression in regression tree leaves in proceedings of ecai- pages - hoboken john wiley sons [ kaynak methods of combining multiple classifiers and their applications to handwritten digit recognition master' thesisinstitute of graduate studies in science and engineeringbogazici university [ keilath and sayededitors fast reliable algorithms for matrices with structure siampennsylvania [ nussbaumer knaflic storytelling with dataa data visualization guide for business professionals john wiley sonshoboken [ koller and friedman probabilistic graphical modelsprinciples and techniques adaptive computation and machine learning the mit presscambridge [ kolmogorov and fomin elements of the theory of functions and functional analysis dover publicationsmineola [ kroeset breretont taimreand botev why the monte carlo method is so important today wiley interdisciplinary reviewscomputational statistics ( ): - [ kroese and chan statistical modeling and computation springer [ kroeses porotskyand rubinstein the cross-entropy method for continuous multi-extremal optimization methodology and computing in applied probability ( ): - [ kroeset taimreand botev handbook of monte carlo methods john wiley sonsnew york [ kushner and yin stochastic approximation and recursive algorithms and applications springernew yorksecond edition [ lafaye de micheauxr drouilhetand liquet the softwarefundamentals of programming and statistical analysis springernew york [ larsen and marx an introduction to mathematical statistics and its applications prentice hallnew yorkthird edition [ law and kelton simulation modeling and analysis mcgraw-hillnew yorkthird edition [ 'ecuyer unified view of ipasfand lr gradient estimation techniques management science : - |
15,264 | [ 'ecuyer good parameters and implementations for combined multiple recursive random number generators operations research ( ): [ lehmann and casella theory of point estimation springernew yorksecond edition [ lewis and payne generalized feedback shift register pseudorandom number algorithm journal of the acm ( ): - [ little and rubin statistical analysis with missing data john wiley sonshobokensecond edition [ liu and nocedal on the limited memory bfgs method for large scale optimization mathematical programming ( - ): - [ lutz learning python 'reillyfifth edition [ matsumoto and nishimura mersenne twistera -dimensionally equidistributed uniform pseudo-random number generator acm transactions on modeling and computer simulation ( ): - [ mckinney python for data analysis 'reilly mediainc second edition [ mclachlan and krishnan the em algorithm and extensions john wiley sonshobokensecond edition [ mclachlan and peel finite mixture models john wiley sonsnew york [ metropolisa rosenbluthm rosenblutha tellerand teller equations of state calculations by fast computing machines journal of chemical physics ( ): - [ micchelliy xuand zhang universal kernels journal of machine learning research : - [ michalewicz genetic algorithms data structures evolution programs springernew yorkthird edition [ monahan numerical methods of statistics cambridge university presslondon [ mroz the sensitivity of an empirical model of married women' hours of work to economic and statistical assumptions econometrica ( ): - [ murphy machine learninga probabilistic perspective the mit presscambridge [ neyman and pearson on the problem of the most efficient tests of statistical hypotheses philosophical transactions of the royal society of londonseries : - |
15,265 | [ nielsen neural networks and deep learningvolume determination press [ petersen and pedersen the matrix cookbook technical university of denmark [ quinlan learning with continuous classes in adams and sterlingeditorsproceedings ai' pages - singapore world scientific [ rasmussen and williams gaussian processes for machine learning mit presscambridge [ ripley stochastic simulation john wiley sonsnew york [ robert and casella monte carlo statistical methods springernew yorksecond edition [ ross simulation academic pressnew yorkthird edition [ ross first course in probability prentice hallenglewood cliffsseventh edition [ rubinstein the cross-entropy method for combinatorial and continuous optimization methodology and computing in applied probability : - [ rubinstein and kroese the cross-entropy methoda unified approach to combinatorial optimizationmonte-carlo simulation and machine learning springernew york [ rubinstein and kroese simulation and the monte carlo method john wiley sonsnew yorkthird edition [ ruder an overview of gradient descent optimization algorithms arxiv[ rudin functional analysis mcgraw-hillsingaporesecond edition [ salomon data compressionthe complete reference springernew york [ seber and lee linear regression analysis john wiley sonshobokensecond edition [ shalev-shwartz and ben-david understanding machine learningfrom theory to algorithms cambridge university presscambridge [ shaw learning python the hard way addison-wesleyboston [ shens kiatsupaibulz zabinskyand smith an analytically derived cooling schedule for simulated annealing journal of global optimization ( ): - |
15,266 | [ shor minimization methods for non-differentiable functions springerberlin [ silverman density estimation for statistics and data analysis chapman hallnew york [ simonoff smoothing methods in statistics springernew york [ steinwart and christmann support vector machines springernew york [ strang introduction to linear algebra wellesley-cambridge presscambridgefifth edition [ strang linear algebra and learning from data wellesley-cambridge presscambridge [ streetw wolbergand mangasarian nuclear feature extraction for breast tumor diagnosis in is& /spie international symposium on electronic imagingscience and technologysan josecapages - [ tikhomirov on the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition in selected works of kolmogorovpages - springerberlin [ van buuren flexible imputation of missing data crc pressboca ratonsecond edition [ vapnik the nature of statistical learning theory springernew york [ vapnik and ya chervonenkis on the uniform convergence of relative frequencies of events to their probabilities theory of probability and its applications ( ): - [ wahba spline models for observational data siamphiladelphia [ wasserman all of statisticsa concise course in statistical inference springer [ webb statistical pattern recognition arnoldlondon [ wendland scattered data approximation cambridge university presscambridge [ williams probability with martingales cambridge university presscambridge [ wu on the convergence properties of the em algorithm the annals of statistics ( ): - |
15,273 | Implementing SVM in Python
SVM Kernels
Pros and Cons of SVM Classifiers
Classification Algorithms - Decision Tree
    Introduction to Decision Tree
    Implementing Decision Tree Algorithm
    Building a Tree
    Implementation in Python
Classification Algorithms - Naive Bayes
    Introduction to Naive Bayes Algorithm
    Building Model using Naive Bayes in Python
    Pros & Cons
    Applications of Naive Bayes Classification
Classification Algorithms - Random Forest
    Introduction
    Working of Random Forest Algorithm
    Implementation in Python
    Pros and Cons of Random Forest
Machine Learning Algorithms - Regression
    Regression Algorithms - Overview
    Introduction to Regression
    Types of Regression Models
    Building a Regressor in Python
    Types of ML Regression Algorithms
    Applications
    Regression Algorithms - Linear Regression
    Introduction to Linear Regression
15,274 | Types of Linear Regression
    Multiple Linear Regression (MLR)
    Python Implementation
    Assumptions
Machine Learning Algorithms - Clustering
    Clustering Algorithms - Overview
    Introduction to Clustering
    Cluster Formation Methods
    Measuring Clustering Performance
    Silhouette Analysis
    Analysis of Silhouette Score
    Types of ML Clustering Algorithms
    Applications of Clustering
    Clustering Algorithms - K-Means Algorithm
    Introduction to K-Means Algorithm
    Working of K-Means Algorithm
    Implementation in Python
    Advantages and Disadvantages
    Applications of K-Means Clustering Algorithm
    Clustering Algorithms - Mean Shift Algorithm
    Introduction to Mean-Shift Algorithm
    Working of Mean-Shift Algorithm
    Implementation in Python
    Advantages and Disadvantages
    Clustering Algorithms - Hierarchical Clustering
    Introduction to Hierarchical Clustering
    Steps to Perform Agglomerative Hierarchical Clustering
15,275 | Role of Dendrograms in Agglomerative Hierarchical Clustering
Machine Learning Algorithms - KNN Algorithm
    KNN Algorithm - Finding Nearest Neighbors
    Introduction
    Working of KNN Algorithm
    Implementation in Python
    KNN as Classifier
    KNN as Regressor
    Pros and Cons of KNN
    Applications of KNN
Machine Learning Algorithms - Performance Metrics
    Performance Metrics for Classification Problems
    Performance Metrics for Regression Problems
Machine Learning with Pipelines - Automatic Workflows
    Introduction
    Challenges Accompanying ML Pipelines
    Modelling ML Pipeline and Data Preparation
    Modelling ML Pipeline and Feature Extraction
Machine Learning - Improving Performance of ML Models
    Performance Improvement with Ensembles
    Ensemble Learning Methods
    Bagging Ensemble Algorithms
    Boosting Ensemble Algorithms
    Voting Ensemble Algorithms
Machine Learning - Improving Performance of ML Model (Contd.)
    Performance Improvement with Algorithm Tuning
15,276 | Machine Learning with Python - Basics

We are living in the 'age of data', which is enriched with better computational power and more storage resources. This data or information is increasing day by day, but the real challenge is to make sense of all of it. Businesses and organizations are trying to deal with it by building intelligent systems using the concepts and methodologies from data science, data mining, and machine learning. Among them, machine learning is the most exciting field of computer science. It would not be wrong if we call machine learning the application and science of algorithms that provide sense to the data.

What is Machine Learning?

Machine learning (ML) is that field of computer science with the help of which computer systems can provide sense to data in much the same way as human beings do. In simple words, ML is a type of artificial intelligence that extracts patterns out of raw data by using an algorithm or method. The main focus of ML is to allow computer systems to learn from experience without being explicitly programmed or requiring human intervention.

Need for Machine Learning

Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate, and solve complex problems. On the other side, AI is still in its initial stage and has not surpassed human intelligence in many aspects. Then the question is: what is the need to make machines learn? The most suitable reason for doing this is "to make decisions, based on data, with efficiency and at scale".

Lately, organizations have been investing heavily in newer technologies like artificial intelligence, machine learning, and deep learning to extract the key information from data, to perform several real-world tasks, and to solve problems. We can call it data-driven decision making by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we cannot do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.

Why & When to Make Machines Learn?

We have already discussed the need for machine learning, but another question arises: in what scenarios must we make the machine learn? There can be several circumstances where we need machines to take data-driven decisions with efficiency and at a huge scale. The following are some such circumstances where making machines learn would be more effective.

Lack of human expertise

The very first scenario in which we want a machine to learn and take data-driven decisions is a domain where there is a lack of human expertise. Examples are navigation in unknown territories or on distant planets.
Dynamic Scenarios

There are some scenarios which are dynamic in nature, i.e., they keep changing over time. For such scenarios and behaviors, we want a machine to learn and take data-driven decisions. Examples include network connectivity and availability of infrastructure in an organization.

Difficulty in Translating Expertise into Computational Tasks

There can be various domains in which humans have expertise but are unable to translate this expertise into computational tasks. In such circumstances we want machine learning. Examples include the domains of speech recognition, cognitive tasks, etc.

Machine Learning Model

Before discussing the machine learning model, we need to understand the following formal definition of ML given by Professor Tom Mitchell:

"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

This definition focuses on three parameters, which are also the main components of any learning algorithm: task (T), performance (P) and experience (E). In this context, we can simplify the definition as follows: ML is a field of AI consisting of learning algorithms that improve their performance (P) at executing some task (T) over time with experience (E).
Based on the above, the following diagram represents a machine learning model:

[Diagram: a machine learning model built from its three components - Task (T), Performance (P) and Experience (E)]

Let us now discuss these components in more detail.

Task (T)

From the perspective of a problem, we may define the task T as the real-world problem to be solved. The problem can be anything, like finding the best house price in a specific location or finding the best marketing strategy, etc. On the other hand, if we talk about machine learning, the definition of task is different, because it is difficult to solve ML-based tasks with a conventional programming approach. A task T is said to be an ML-based task when it is based on a process the system must follow for operating on data points. Examples of ML-based tasks are classification, regression, structured annotation, clustering, transcription, etc.

Experience (E)

As the name suggests, experience is the knowledge gained from the data points provided to the algorithm or model. Once provided with the dataset, the model runs iteratively and learns some inherent pattern. The learning thus acquired is called experience (E). Making an analogy with human learning, we can think of this situation as a human being learning or gaining experience from various attributes like situations, relationships, etc. Supervised, unsupervised and reinforcement learning are some ways to learn or gain experience. The experience gained by our ML model or algorithm will be used to solve the task T.
Performance (P)

An ML algorithm is supposed to perform a task and gain experience with the passage of time. The measure which tells whether the ML algorithm is performing as per expectation or not is its performance (P). P is basically a quantitative metric that tells how well a model is performing the task T using its experience E. There are many metrics that help to understand ML performance, such as accuracy score, F1 score, confusion matrix, precision, recall, sensitivity, etc. (A small code sketch after the applications list below ties T, P and E together.)

Challenges in Machine Learning

While machine learning is rapidly evolving, making significant strides in areas such as cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not yet been able to overcome a number of challenges. The challenges that ML is currently facing are:

- Quality of data: Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.

- Time-consuming tasks: Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction and retrieval.

- Lack of specialists: As ML technology is still in its infancy, finding expert resources is a tough job.

- No clear objective for formulating business problems: Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.

- Issue of overfitting and underfitting: If the model is overfitting or underfitting, it cannot represent the problem well.

- Curse of dimensionality: Another challenge an ML model faces is having too many features in the data points. This can be a real hindrance.

- Difficulty in deployment: The complexity of an ML model makes it quite difficult to deploy in real life.

Applications of Machine Learning

Machine learning is the most rapidly growing technology, and according to researchers we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. The following are some real-world applications of ML:

- Emotion analysis
- Sentiment analysis
- Error detection and prevention
- Weather forecasting and prediction
- Stock market analysis and forecasting
- Speech synthesis
- Speech recognition
- Customer segmentation
- Object recognition
- Fraud detection
- Fraud prevention
- Recommendation of products to customers in online shopping
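To make the T, P, E framing concrete, here is a minimal, hedged sketch using scikit-learn's bundled iris dataset (chosen only for illustration; any labeled dataset would do): the task T is classification, the experience E is the training data, and the performance P is measured with an accuracy score.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # experience (E): labeled data samples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = KNeighborsClassifier()               # task (T): classify iris species
model.fit(X_train, y_train)                  # learning from experience
y_pred = model.predict(X_test)

print(accuracy_score(y_test, y_pred))        # performance (P): an accuracy metric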
Machine Learning with Python - Python Ecosystem

An Introduction to Python

Python is a popular object-oriented programming language having the capabilities of a high-level programming language. Its easy-to-learn syntax and portability make it popular these days. The following facts give us an introduction to Python:

- Python was developed by Guido van Rossum at Stichting Mathematisch Centrum in the Netherlands.
- It was written as the successor of a programming language named 'ABC'.
- Its first version was released in 1991.
- The name Python was picked by Guido van Rossum from a TV show named Monty Python's Flying Circus.
- It is an open source programming language, which means that we can freely download it and use it to develop programs. It can be downloaded from www.python.org.
- The Python programming language has features of both Java and C. It has the elegance of 'C' code and, on the other hand, it has classes and objects like Java for object-oriented programming.
- It is an interpreted language, which means the source code of a Python program is first converted into bytecode and then executed by the Python virtual machine.

Strengths and Weaknesses of Python

Every programming language has some strengths as well as weaknesses, and so does Python.

Strengths

According to studies and surveys, Python is the fifth most important language as well as the most popular language for machine learning and data science. It is because of the following strengths that Python has:

- Easy to learn and understand: The syntax of Python is simpler; hence it is relatively easy, even for beginners, to learn and understand the language.

- Multi-purpose language: Python is a multi-purpose programming language, because it supports structured programming, object-oriented programming as well as functional programming.
- Huge number of modules: Python has a huge number of modules covering every aspect of programming. These modules are easily available for use, hence making Python an extensible language.

- Support of an open source community: Being an open source programming language, Python is supported by a very large developer community. Due to this, bugs are easily fixed by the Python community. This characteristic makes Python very robust and adaptive.

- Scalability: Python is a scalable programming language, because it provides an improved structure for supporting large programs compared to shell scripts.

Weakness

Although Python is a popular and powerful programming language, it has the weakness of slow execution speed. The execution speed of Python is slow compared to compiled languages, because Python is an interpreted language. This can be the major area of improvement for the Python community.

Installing Python

For working in Python, we must first install it. You can perform the installation of Python in either of the following two ways:

- Installing Python individually
- Using a pre-packaged Python distribution: Anaconda

Let us discuss each of these in detail.

Installing Python Individually

If you want to install Python on your computer, then you need to download only the binary code applicable for your platform. Python distributions are available for Windows, Linux and Mac platforms. The following is a quick overview of installing Python on the above-mentioned platforms.

On Unix and Linux platforms

With the help of the following steps, we can install Python on Unix and Linux platforms:

- First, go to the official Python download page.
- Next, click on the link to download the zipped source code available for Unix/Linux.
- Now, download and extract the files.
- Next, we can edit the Modules/Setup file if we want to customize some options.
- Next, run the following commands:

./configure
make
make install
On Windows platform

With the help of the following steps, we can install Python on a Windows platform:

- First, go to the official Python download page.
- Next, click on the link for the Windows installer python-XYZ.msi file, where XYZ is the version we wish to install.
- Now, run the downloaded file. It will take us to the Python install wizard, which is easy to use.
- Now, accept the default settings and wait until the install is finished.

On Macintosh platform

For Mac OS X, Homebrew, a great and easy-to-use package installer, is recommended to install Python. In case you don't have Homebrew, you can install it with the help of the following command (the install script URL, omitted in the source, must be filled in from the Homebrew homepage):

ruby -e "$(curl -fsSL <Homebrew install script URL>)"

It can be updated with the command below:

brew update

Now, to install Python on your system, we need to run the following command:

brew install python

Using a Pre-packaged Python Distribution: Anaconda

Anaconda is a packaged compilation of Python which has all the libraries widely used in data science. We can follow these steps to set up a Python environment using Anaconda:

- Step 1: First, we need to download the required installation package from the Anaconda distribution page. Choose from Windows, Mac and Linux OS as per your requirement.

- Step 2: Next, select the Python version you want to install on your machine (the latest Python 3.x release is recommended). You will get options for both 32-bit and 64-bit graphical installers.

- Step 3: After selecting the OS and Python version, the Anaconda installer will be downloaded to your computer. Now, double click the file and the installer will install the Anaconda package.

- Step 4: To check whether it is installed or not, open a command prompt and type python.
Why Python for Data Science?

Python is the fifth most important language as well as the most popular language for machine learning and data science. The following are the features of Python that make it the preferred choice of language for data science:

Extensive set of packages

Python has an extensive and powerful set of packages which are ready to be used in various domains. It also has packages like numpy, scipy, pandas, scikit-learn etc., which are required for machine learning and data science.

Easy prototyping

Another important feature of Python that makes it the choice of language for data science is easy and fast prototyping. This feature is useful for developing new algorithms.

Collaboration features

The field of data science basically needs good collaboration, and Python provides many useful tools that make collaborating extremely easy.

One language for many domains

A typical data science project includes various domains like data extraction, data manipulation, data analysis, feature extraction, modelling, evaluation, deployment and updating the solution. As Python is a multi-purpose language, it allows the data scientist to address all of these domains from a common platform.
Components of the Python ML Ecosystem

In this section, let us discuss some core data science libraries that form the components of the Python machine learning ecosystem. These useful components make Python an important language for data science. Though there are many such components, let us discuss some of the important ones here.

Jupyter Notebook

Jupyter Notebook basically provides an interactive computational environment for developing Python-based data science applications. Jupyter notebooks were formerly known as IPython notebooks. The following are some of the features of Jupyter notebooks that make it one of the best components of the Python ML ecosystem:

- Jupyter notebooks can illustrate the analysis process step by step by arranging things like code, images, text, output etc. in a step-by-step manner.

- They help a data scientist to document the thought process while developing the analysis process.

- One can also capture the results as part of the notebook.

- With the help of Jupyter notebooks, we can share our work with peers as well.

Installation and Execution

If you are using the Anaconda distribution, then you need not install Jupyter Notebook separately, as it is already installed with it. You just need to go to the Anaconda prompt and type the following command:

C:\>jupyter notebook
After pressing enter, it will start a notebook server at localhost:8888 of your computer, as shown in the screenshot below.

[Screenshot: the Jupyter notebook server running in the browser]

Now, after clicking the New tab, you will get a list of options. Select Python 3 and it will take you to a new notebook to start working in. You will get a glimpse of it in the following screenshots.

[Screenshots: creating and opening a new Python notebook]
On the other hand, if you are using a standard Python distribution, then Jupyter Notebook can be installed using the popular Python package installer, pip:

pip install jupyter

Types of Cells in Jupyter Notebook

The following are the three types of cells in a Jupyter notebook:

- Code cells: As the name suggests, we can use these cells to write code. After writing the code/content, it is sent to the kernel that is associated with the notebook.

- Markdown cells: We can use these cells for annotating the computation process. They can contain things like text, images, LaTeX equations, HTML tags etc.

- Raw cells: The text written in them is displayed as it is. These cells are basically used to add text that we do not wish to be converted by the automatic conversion mechanism of Jupyter Notebook.

For a more detailed study of Jupyter Notebook, you can refer to its official documentation.

NumPy

NumPy is another useful component that makes Python one of the favorite languages for data science. It basically stands for Numerical Python and consists of multidimensional array objects. By using NumPy, we can perform the following important operations (a short sketch follows this list):

- Mathematical and logical operations on arrays
- Fourier transformation
- Operations associated with linear algebra
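As a quick, minimal sketch of those three kinds of operations (the array values are made up for illustration):

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

print(a + 10)            # element-wise mathematical operation
print(a > 2)             # element-wise logical operation
print(np.fft.fft(a[0]))  # discrete Fourier transform of the first row
print(np.linalg.inv(a))  # linear algebra: matrix inverse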
We can also see NumPy as a replacement of MATLAB, because NumPy is mostly used along with SciPy (Scientific Python) and Matplotlib (a plotting library).

Installation and Execution

If you are using the Anaconda distribution, then there is no need to install NumPy separately, as it is already installed with it. You just need to import the package into your Python script as follows:

import numpy as np

On the other hand, if you are using a standard Python distribution, then NumPy can be installed using the popular Python package installer, pip:

pip install numpy

After installing NumPy, you can import it into your Python script as you did above. For a more detailed study of NumPy, you can refer to its official documentation.

Pandas

Pandas is another useful Python library that makes Python one of the favorite languages for data science. Pandas is basically used for data manipulation, wrangling and analysis. It was developed by Wes McKinney in 2008. With the help of Pandas, in data processing we can accomplish the following five steps:

- Load
- Prepare
- Manipulate
- Model
- Analyze

Data Representation in Pandas

The entire representation of data in Pandas is done with the help of the following three data structures:

- Series: It is basically a one-dimensional ndarray with an axis label, which means it is like a simple array with homogeneous data, for example a collection of integers such as 1, 5, 10, 15 (illustrative values).

- DataFrame: It is the most useful data structure, used for almost all kinds of data representation and manipulation in Pandas. It is basically a two-dimensional data structure which can contain heterogeneous data. Generally, tabular data is represented by using DataFrames.
For example, the following table shows the data of students, having their names, roll numbers, ages and genders (the roll number and age values did not survive in the source and are left blank):

Name      Roll number   Age   Gender
Aarav     -             -     Male
Harshit   -             -     Male
Kanika    -             -     Female
Mayank    -             -     Male

- Panel: It is a 3-dimensional data structure containing heterogeneous data. It is very difficult to represent a panel graphically, but it can be illustrated as a container of DataFrames. (Note that Panel has been deprecated and removed in recent versions of Pandas.)

The following table gives the dimension and description of the above-mentioned data structures used in Pandas:

Data structure   Dimension   Description
Series           1D          Size-immutable, homogeneous data
DataFrame        2D          Size-mutable, heterogeneous data in tabular form
Panel            3D          Size-mutable array, container of DataFrames

We can understand these data structures in the sense that the higher-dimensional data structure is a container of the lower-dimensional data structure.

Installation and Execution

If you are using the Anaconda distribution, then there is no need to install Pandas separately, as it is already installed with it. You just need to import the package into your Python script as follows:

import pandas as pd

On the other hand, if you are using a standard Python distribution, then Pandas can be installed using the popular Python package installer, pip:

pip install pandas

After installing Pandas, you can import it into your Python script as you did above.
Example

The following is an example of creating a Series from an ndarray by using Pandas (the characters stored in the array are illustrative):

In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: data = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
In [4]: s = pd.Series(data)
In [5]: print(s)
0    a
1    b
2    c
3    d
4    e
5    f
dtype: object

For a more detailed study of Pandas, you can refer to its official documentation. A minimal DataFrame example follows below.
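Since the DataFrame is the structure used most often in practice, here is a similarly minimal, hedged sketch (column names and values are assumptions, mirroring the student table shown earlier):

import pandas as pd

# build a small tabular dataset as a DataFrame
df = pd.DataFrame({
    'Name': ['Aarav', 'Harshit', 'Kanika', 'Mayank'],
    'Gender': ['Male', 'Male', 'Female', 'Male'],
})
print(df)
print(df['Gender'].value_counts())   # a quick column-level analysis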
Scikit-learn

Another useful and most important Python library for data science and machine learning in Python is Scikit-learn. The following are some features of Scikit-learn that make it so useful:

- It is built on NumPy, SciPy and Matplotlib.
- It is open source and can be reused under a BSD license.
- It is accessible to everybody and can be reused in various contexts.
- A wide range of machine learning algorithms covering major areas of ML like classification, clustering, regression, dimensionality reduction, model selection etc. can be implemented with its help.

Installation and Execution

If you are using the Anaconda distribution, then there is no need to install Scikit-learn separately, as it is already installed with it. You just need to use the package in your Python script. For example, with the following line of script we are importing the dataset of breast cancer patients from Scikit-learn (a short usage sketch follows):

from sklearn.datasets import load_breast_cancer

On the other hand, if you are using a standard Python distribution and have NumPy and SciPy installed, then Scikit-learn can be installed using the popular Python package installer, pip:

pip install -U scikit-learn

After installing Scikit-learn, you can use it in your Python script as you have done above.
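As a hedged sketch of what that import gives you (the attributes used below are part of scikit-learn's standard dataset Bunch interface):

from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()    # a Bunch object bundling data and metadata
print(data.data.shape)         # feature matrix: (569, 30)
print(data.target.shape)       # labels: (569,)
print(data.target_names)       # ['malignant' 'benign']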
Machine Learning with Python - Methods for Machine Learning

There are various ML algorithms, techniques and methods that can be used to build models for solving real-life problems by using data. In this chapter, we are going to discuss these different kinds of methods.

Different Types of Methods

The following are various ML methods based on some broad categories:

Based on Human Supervision

In the learning process, some of the methods that are based on human supervision are as follows:

Supervised Learning

Supervised learning algorithms or methods are the most commonly used ML algorithms. This method or learning algorithm takes the data samples, i.e., the training data, along with the output labels or responses associated with each data sample during the training process. The main objective of supervised learning algorithms is to learn an association between input data samples and corresponding outputs after performing multiple training data instances.

For example, we have

x: input variables, and
y: output variable.

Now, apply an algorithm to learn the mapping function from the input to the output as follows:

Y = f(x)

The main objective would be to approximate the mapping function so well that even when we have new input data (x), we can easily predict the output variable (Y) for that new input data. It is called supervised because the whole process of learning can be thought of as being supervised by a teacher or supervisor. Examples of supervised machine learning algorithms include Decision tree, Random forest, KNN, Logistic regression etc. (A minimal code sketch of this setup follows below.)

Based on the ML tasks, supervised learning algorithms can be divided into the following two broad classes:

- Classification
- Regression
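Before looking at these two classes in detail, here is a minimal, hedged sketch of the supervised setup Y = f(x) with scikit-learn, using made-up toy data (the values and the choice of Logistic regression are assumptions for illustration):

from sklearn.linear_model import LogisticRegression

# experience: input samples x with their associated output labels y
X = [[0.0], [1.0], [2.0], [3.0]]   # input variable x
y = [0, 0, 1, 1]                   # output variable y

model = LogisticRegression()
model.fit(X, y)                    # learn the mapping Y = f(x)
print(model.predict([[1.5]]))      # predict the output for new input data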
Classification

The key objective of classification-based tasks is to predict categorical output labels or responses for the given input data. The output will be based on what the model has learned in its training phase. As we know, categorical output responses are unordered and discrete values; hence each output response will belong to a specific class or category. We will discuss classification and the associated algorithms in detail in upcoming chapters.

Regression

The key objective of regression-based tasks is to predict output labels or responses, which are continuous numeric values, for the given input data. The output will be based on what the model has learned in its training phase. Basically, regression models use the input data features (independent variables) and their corresponding continuous numeric output values (dependent or outcome variables) to learn the specific association between inputs and corresponding outputs. We will discuss regression and the associated algorithms in detail in further chapters as well.

Unsupervised Learning

As the name suggests, it is the opposite of supervised ML methods or algorithms: in unsupervised machine learning algorithms we do not have any supervisor to provide any sort of guidance. Unsupervised learning algorithms are handy in scenarios in which we do not have the liberty, as in supervised learning algorithms, of having pre-labeled training data, and we want to extract useful patterns from the input data. For example, suppose we have

x: input variables;

then there would be no corresponding output variable, and the algorithms need to discover the interesting patterns in the data for learning. Examples of unsupervised machine learning algorithms include K-means clustering, K-nearest neighbors etc.

Based on the ML tasks, unsupervised learning algorithms can be divided into the following broad classes:

- Clustering
- Association
- Dimensionality reduction

Clustering

Clustering methods are among the most useful unsupervised ML methods. These algorithms are used to find similarity as well as relationship patterns among data samples, and then to cluster those samples into groups having similarity based on features. A real-world example of clustering is grouping customers by their purchasing behavior. (A minimal clustering sketch follows after the next subsection.)

Association

Another useful unsupervised ML method is association, which is used to analyze a large dataset to find patterns representing interesting relationships between various items. It is also termed association rule mining or market basket analysis, and is mainly used to analyze customer shopping patterns.
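As promised above, here is a minimal, hedged sketch of clustering with scikit-learn's KMeans on made-up two-dimensional points (the values and the choice of two clusters are assumptions for illustration):

from sklearn.cluster import KMeans
import numpy as np

# unlabeled data samples: no output variable is provided
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)                     # discover the grouping pattern in the data
print(kmeans.labels_)             # cluster assignment of each sample
print(kmeans.cluster_centers_)    # the two discovered cluster centers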
Dimensionality Reduction

This unsupervised ML method is used to reduce the number of feature variables for each data sample by selecting a set of principal or representative features. A question arises here: why do we need to reduce the dimensionality? The reason is the problem of feature space complexity, which arises when we start analyzing and extracting millions of features from data samples. This problem is generally referred to as the "curse of dimensionality". PCA (Principal Component Analysis), K-nearest neighbors and discriminant analysis are some of the popular algorithms for this purpose.

Anomaly Detection

This unsupervised ML method is used to find occurrences of rare events or observations that generally do not occur. By using the learned knowledge, anomaly detection methods are able to differentiate between anomalous and normal data points. Some of the unsupervised algorithms, like clustering and KNN, can detect anomalies based on the data and its features.

Semi-supervised Learning

Such algorithms or methods are neither fully supervised nor fully unsupervised. They basically fall between the two, i.e., between supervised and unsupervised learning methods. These kinds of algorithms generally use a small supervised learning component, i.e., a small amount of pre-labeled, annotated data, and a large unsupervised learning component, i.e., lots of unlabeled data, for training. We can follow either of the following approaches for implementing semi-supervised learning methods:

- The first and simple approach is to build a supervised model based on the small amount of labeled and annotated data, and then to apply it to the large amounts of unlabeled data to get more labeled samples. Now, train the model on them and repeat the process.

- The second approach needs some extra effort. In this approach, we can first use unsupervised methods to cluster similar data samples, annotate these groups, and then use a combination of this information to train the model.

Reinforcement Learning

These methods are different from the previously studied methods, and are also rarely used. In this kind of learning algorithm, there is an agent that we want to train over a period of time so that it can interact with a specific environment. The agent follows a set of strategies for interacting with the environment; after observing the environment, it takes actions with regard to the environment's current state. The following are the main steps of reinforcement learning methods (a minimal sketch follows this list):

- Step 1: First, we need to prepare an agent with some initial set of strategies.
- Step 2: Then observe the environment and its current state.
- Step 3: Next, select the optimal policy with regard to the current state of the environment and perform an important action.
- Step 4: Now, the agent gets a corresponding reward or penalty in accordance with the action taken in the previous step.
- Step 5: Now, we can update the strategies if required.
- Step 6: At last, repeat steps 2-5 until the agent learns and adopts the optimal policies.
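The following is a deliberately tiny, hedged sketch of that loop: a one-state "environment" with two actions and an epsilon-greedy agent. Everything here, i.e., the environment, the reward values and the parameters, is an assumption for illustration, not a production RL algorithm:

import random

rewards = {0: 0.2, 1: 1.0}         # assumed environment: action 1 pays better
values = {0: 0.0, 1: 0.0}          # the agent's initial strategy (value estimates)
epsilon, alpha = 0.1, 0.5          # exploration rate and learning rate

for step in range(100):
    # select an action: mostly the best known one, sometimes explore
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max(values, key=values.get)
    reward = rewards[action]                             # observe reward/penalty
    values[action] += alpha * (reward - values[action])  # update the strategy

print(values)   # the agent should have learned that action 1 is optimal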
Tasks Suited for Machine Learning

The following decision flow (shown as a diagram in the original document) indicates what type of task is appropriate for various ML problems:

- Is the data correlated or redundant? If yes: dimensionality reduction.
- If not, is the data producing a category?
  - If yes: is the data labeled? If yes: classification; if no: clustering.
  - If no: is the data producing a quantity? If yes: regression; if no: bad luck.

Based on Learning Ability

In the learning process, the following are some methods that are based on learning ability:

Batch Learning

In many cases, we have end-to-end machine learning systems in which we need to train the model in one go by using the whole of the available training data. Such a learning method or algorithm is called batch or offline learning, because it is a one-time procedure and the model is trained with the data in one single batch. The following are the main steps of batch learning methods:

- Step 1: First, we need to collect all the training data to start training the model.
- Step 2: Now, start the training of the model by providing the whole training data in one go.
- Step 3: Next, stop the learning/training process once you get satisfactory results/performance.
- Step 4: Finally, deploy this trained model into production. Here, it will predict the output for new data samples.

Online Learning

It is the complete opposite of the batch or offline learning methods. In these learning methods, the training data is supplied in multiple incremental batches, called mini-batches, to the algorithm. The following are the main steps of online learning methods (a minimal sketch follows at the end of this chapter):

- Step 1: First, we need to collect all the training data for starting the training of the model.
- Step 2: Now, start the training of the model by providing a mini-batch of training data to the algorithm.
- Step 3: Next, we need to provide the mini-batches of training data in multiple increments to the algorithm.
- Step 4: It will not stop like batch learning; hence, after providing the whole training data in mini-batches, provide new data samples to it as well.
- Step 5: Finally, it will keep learning over a period of time based on the new data samples.

Based on Generalization Approach

In the learning process, the following are some methods that are based on generalization approaches:

Instance-based Learning

The instance-based learning method is one of the useful methods that build ML models by generalizing based on the input data. It is opposite to the previously studied learning methods in the sense that this kind of learning involves ML systems and methods that use the raw data points themselves to draw the outcomes for newer data samples, without building an explicit model on the training data. In simple words, instance-based learning basically starts working by looking at the input data points and then, using a similarity metric, generalizes and predicts the new data points.

Model-based Learning

In model-based learning methods, an iterative process takes place on ML models that are built based on various model parameters, called hyperparameters, and in which the input data is used to extract the features. In this kind of learning, hyperparameters are optimized based on various model validation techniques. That is why we can say that model-based learning methods use a more traditional ML approach towards generalization.
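As a minimal, hedged sketch of the batch vs. online distinction, scikit-learn's SGDClassifier supports incremental training via partial_fit; the data here is made up for illustration:

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier()

# online learning: feed the data in mini-batches instead of one go
for _ in range(5):
    X_batch = rng.rand(20, 2)                    # a mini-batch of 20 samples
    y_batch = (X_batch[:, 0] > 0.5).astype(int)  # toy labels
    model.partial_fit(X_batch, y_batch, classes=[0, 1])

print(model.predict([[0.9, 0.1]]))  # the model keeps learning as batches arrive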
Machine Learning with Python - Data Loading for ML Projects

Suppose you want to start an ML project: what is the first and most important thing you would require? It is the data that we need to load before starting any ML project. With respect to data, the most common format of data for ML projects is CSV (comma-separated values). Basically, CSV is a simple file format used to store tabular data (numbers and text), such as a spreadsheet, in plain text. In Python, we can load CSV data in different ways, but before loading CSV data we must take care of some considerations.

Considerations While Loading CSV Data

The CSV data format is the most common format for ML data, but we need to take care of the following major considerations while loading it into our ML projects:

File Header

In CSV data files, the header contains the information for each field. We must use the same delimiter for the header and for the data, because it is the header that specifies how the data fields should be interpreted. The following are the two cases related to a CSV file header which must be considered:

- Case I: When the data file has a file header, the names are automatically assigned to each column of data.
- Case II: When the data file does not have a file header, we need to assign the names to each column of data manually.

In both cases, we need to explicitly specify whether our CSV file contains a header or not.

Comments

Comments in any data file have their significance. In CSV data files, comments are indicated by a hash (#) at the start of the line. We need to consider comments while loading CSV data into ML projects, because if the file contains comments then, depending on the method we choose for loading, we may need to indicate whether to expect those comments or not.

Delimiter

In CSV data files, the comma (,) character is the standard delimiter. The role of the delimiter is to separate the values in the fields. It is important to consider the role of the delimiter while loading a CSV file into ML projects, because we can also use a different delimiter such as a tab or white space. But in the case of using a different delimiter than the standard one, we must specify it explicitly.
Quotes

In CSV data files, the double quotation (") mark is the default quote character. It is important to consider the role of quotes while loading a CSV file into ML projects, because we can also use a quote character other than the double quotation mark. But in the case of using a different quote character than the standard one, we must specify it explicitly.

Methods to Load a CSV Data File

While working with ML projects, the most crucial task is to load the data properly. The most common data format for ML projects is CSV, and it comes in various flavors and varying difficulties to parse. In this section, we are going to discuss three common approaches in Python to load a CSV data file.

Load CSV with the Python Standard Library

The first and most used approach to load a CSV data file is the Python standard library, which provides a variety of built-in modules, namely the csv module with its reader() function. The following is an example of loading a CSV data file with its help.

Example

In this example, we are using the iris flower data set, which can be downloaded into our local directory. After loading the data file, we can convert it into a NumPy array and use it for ML projects. The following is the Python script for loading the CSV data file.

First, we need to import the csv module provided by the Python standard library, as follows:

import csv

Next, we need to import the NumPy module for converting the loaded data into a NumPy array:

import numpy as np

Now, provide the full path of the CSV data file stored in our local directory (the drive and file name below are assumptions; adjust them to your own path):

path = r"C:\iris.csv"

Next, use the csv.reader() function to read the data from the CSV file:

with open(path, 'r') as f:
    reader = csv.reader(f, delimiter=',')
    headers = next(reader)
    data = list(reader)
    data = np.array(data).astype(float)
We can print the names of the headers with the following line of script:

print(headers)

The following line of script will print the shape of the data, i.e., the number of rows and columns in the file:

print(data.shape)

The next script line will give the first three lines of the data file:

print(data[:3])

Output (the values shown are those of the standard iris dataset):

['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
(150, 4)
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]]

Load CSV with NumPy

Another approach to load a CSV data file is the NumPy loadtxt() function. The following is an example of loading a CSV data file with its help.

Example

In this example, we are using the Pima Indians dataset, containing the data of diabetic patients. This is a numeric dataset with no header. It can also be downloaded into our local directory. After loading the data file, we can convert it into a NumPy array and use it for ML projects. The following is the Python script for loading the CSV data file (again, the local path is an assumption). The expected output and a sketch of the third approach follow below.

from numpy import loadtxt

path = r"C:\pima-indians-diabetes.csv"
datapath = open(path, 'r')
data = loadtxt(datapath, delimiter=",")
print(data.shape)
print(data[:3])
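If run on the standard Pima Indians Diabetes file, the shape printed would be (768, 9). The third common approach, not shown in full here, is Pandas; a minimal, hedged sketch with read_csv (using the same assumed path) would look like this:

import pandas as pd

# header=None tells Pandas the file has no header row,
# returning a DataFrame ready for analysis or conversion to NumPy
data = pd.read_csv(r"C:\pima-indians-diabetes.csv", header=None)
print(data.shape)
print(data.values[:3])   # the underlying NumPy array, as in the earlier examples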