polyreg1.py

```python
import numpy as np
from numpy.random import rand, randn
from numpy.linalg import norm, solve
import matplotlib.pyplot as plt

def generate_data(beta, sig, n):
    u = rand(n, 1)
    y = (u ** np.arange(0, 4)) @ beta + sig * randn(n, 1)
    return u, y

np.random.seed(12)
beta = np.array([[10, -140, 400, -250]]).T
n = 100
sig = 5
u, y = generate_data(beta, sig, n)
xx = np.arange(np.min(u), np.max(u) + 5e-3, 5e-3)
yy = np.polyval(np.flip(beta), xx)
plt.plot(u, y, '.', markersize=4)
plt.plot(xx, yy, '--', linewidth=3)
plt.xlabel(r'$u$')
plt.ylabel(r'$h^*(u)$')
plt.legend(['data points', 'true'])
plt.show()
```

The following code, which imports the code above, fits polynomial models with $p = 1, \ldots, \texttt{max\_p}$ parameters to the training data and plots a selection of fitted curves, as shown in the figure.

polyreg2.py

```python
from polyreg1 import *

max_p = 18
p_range = np.arange(1, max_p + 1, 1)
X = np.ones((n, 1))
betahat, trainloss = {}, {}
for p in p_range:                       # p is the number of parameters
    if p > 1:
        X = np.hstack((X, u**(p-1)))    # add column to the model matrix
    betahat[p] = solve(X.T @ X, X.T @ y)
    trainloss[p] = (norm(y - X @ betahat[p])**2) / n

p = [2, 4, 16]   # select three curves
# replot the data points and the true line, and store the plots in a list
plots = [plt.plot(u, y, 'k.', markersize=4)[0],
         plt.plot(xx, yy, 'k--', linewidth=3)[0]]
# add the three fitted curves
for i in p:
    yy = np.polyval(np.flip(betahat[i]), xx)
    plots.append(plt.plot(xx, yy)[0])
```
```python
plt.xlabel(r'$u$')
plt.ylabel(r'$h^{\mathcal{G}_p}_{\tau}(u)$')
plt.legend(plots, ('data points', 'true', '$p=2$, underfit',
                   '$p=4$, correct', '$p=16$, overfit'))
plt.savefig('polyfitpy.pdf', format='pdf')
plt.show()
```

The last code snippet, which imports the previous code, generates the test data and plots the graph of the test loss, as shown in the figure.

polyreg3.py

```python
from polyreg2 import *

# generate test data
u_test, y_test = generate_data(beta, sig, n)
MSE = []
X_test = np.ones((n, 1))
for p in p_range:
    if p > 1:
        X_test = np.hstack((X_test, u_test**(p-1)))
    y_hat = X_test @ betahat[p]          # predictions
    MSE.append(np.sum((y_test - y_hat)**2) / n)

plt.plot(p_range, MSE, 'b', p_range, MSE, 'bo')
plt.xticks(ticks=p_range)
plt.xlabel('Number of parameters $p$')
plt.ylabel('Test loss')
```

Tradeoffs in Statistical Learning

The art of machine learning in the supervised case is to make the generalization risk or the expected generalization risk as small as possible, while using as few computational resources as possible. In pursuing this goal, a suitable class $\mathcal G$ of prediction functions has to be chosen. This choice is driven by various factors, such as:
- the complexity of the class (e.g., is it rich enough to adequately approximate, or even contain, the optimal prediction function $g^*$?);
- the ease of training the learner via the optimization program;
- how accurately the training loss estimates the risk within class $\mathcal G$;
- the feature types (categorical, continuous, etc.).

As a result, the choice of a suitable function class usually involves a tradeoff between conflicting factors. For example, a learner from a simple class $\mathcal G$ can be trained very quickly, but may not approximate $g^*$ very well, whereas a learner from a rich class $\mathcal G$ that contains $g^*$ may require a lot of computing resources to train.
To better understand the relation between model complexity, computational simplicity, and estimation accuracy, it is useful to decompose the generalization risk into several parts, so that the tradeoffs between these parts can be studied. We will consider two such decompositions: the approximation-estimation tradeoff and the bias-variance tradeoff.

We can decompose the generalization risk $\ell(g_\tau^{\mathcal G})$ into the following three components:

$$\ell(g_\tau^{\mathcal G}) = \underbrace{\ell^*}_{\text{irreducible risk}} + \underbrace{\ell(g^{\mathcal G}) - \ell^*}_{\text{approximation error}} + \underbrace{\ell(g_\tau^{\mathcal G}) - \ell(g^{\mathcal G})}_{\text{statistical (estimation) error}},$$

where $\ell^* := \ell(g^*)$ is the irreducible risk and $g^{\mathcal G} := \mathrm{argmin}_{g \in \mathcal G}\,\ell(g)$ is the best learner within class $\mathcal G$. No learner can predict a new response with a smaller risk than $\ell^*$.

The second component is the approximation error; it measures the difference between the irreducible risk and the best possible risk that can be obtained by selecting the best prediction function in the selected class of functions $\mathcal G$. Determining a suitable class $\mathcal G$ and minimizing $\ell(g)$ over this class is purely a problem of numerical and functional analysis, as the training data are not present. For a fixed $\mathcal G$ that does not contain the optimal $g^*$, the approximation error cannot be made arbitrarily small and may be the dominant component in the generalization risk. The only way to reduce the approximation error is by expanding the class $\mathcal G$ to include a larger set of possible functions.

The third component is the statistical (estimation) error. It depends on the training set $\tau$ and, in particular, on how well the learner $g_\tau^{\mathcal G}$ estimates the best possible prediction function, $g^{\mathcal G}$, within class $\mathcal G$. For any sensible estimator this error should decay to zero (in probability or expectation) as the training size tends to infinity.

The approximation-estimation tradeoff pits two competing demands against each other. The first is that the class $\mathcal G$ has to be simple enough so that the statistical error is not too large. The second is that the class $\mathcal G$ has to be rich enough to ensure a small approximation error. Thus, there is a tradeoff between the approximation and estimation errors.

For the special case of the squared-error loss, the generalization risk is equal to $\ell(g_\tau^{\mathcal G}) = \mathbb E\,(g_\tau^{\mathcal G}(X) - Y)^2$, that is, the expected squared error between the predicted value $g_\tau^{\mathcal G}(X)$ and the response $Y$. Recall that in this case the optimal prediction function is given by $g^*(x) = \mathbb E[Y \mid X = x]$. The decomposition can now be interpreted as follows. The first component, $\ell^* = \mathbb E\,(Y - g^*(X))^2$, is the irreducible error, as no prediction function will yield a smaller expected squared error. The second component, the approximation error $\ell(g^{\mathcal G}) - \ell(g^*)$, is equal to $\mathbb E\,(g^{\mathcal G}(X) - g^*(X))^2$. We leave the proof (which is similar to that of the earlier theorem) as an exercise; see the exercises below. Thus, the approximation error (defined as a risk difference) can here be interpreted as the expected squared error between the optimal predicted value and the optimal predicted value within the class $\mathcal G$. For the third component, the statistical error, $\ell(g_\tau^{\mathcal G}) - \ell(g^{\mathcal G})$, there is no direct interpretation as an expected squared error unless $\mathcal G$ is the class of linear functions, that is, $g(x) = x^\top\beta$ for some vector $\beta$. In this case we can write (see the exercises) the statistical error as

$$\ell(g_\tau^{\mathcal G}) - \ell(g^{\mathcal G}) = \mathbb E\,(g_\tau^{\mathcal G}(X) - g^{\mathcal G}(X))^2,$$

an expected squared error that is colloquially called a mean squared error.
Thus, when using the squared-error loss, the generalization risk for a linear class $\mathcal G$ can be decomposed as

$$\ell(g_\tau^{\mathcal G}) = \mathbb E\,(g_\tau^{\mathcal G}(X) - Y)^2 = \ell^* + \underbrace{\mathbb E\,(g^{\mathcal G}(X) - g^*(X))^2}_{\text{approximation error}} + \underbrace{\mathbb E\,(g_\tau^{\mathcal G}(X) - g^{\mathcal G}(X))^2}_{\text{statistical error}}.$$

Note that in this decomposition the statistical error is the only term that depends on the training set.

Example (Polynomial Regression (cont.)). We continue the polynomial regression example. Here $\mathcal G = \mathcal G_p$ is the class of linear functions of $x = [1, u, \ldots, u^{p-1}]^\top$, and $g^{\mathcal G_p}(x) = x^\top\beta_p$. Conditional on $X = x$, we have $Y = g^*(x) + \varepsilon(x)$, with $\varepsilon(x) \sim \mathcal N(0, \ell^*)$, where $\ell^* = \mathbb E\,(Y - g^*(X))^2$ is the irreducible error. We wish to understand how the approximation and statistical errors behave as we change the complexity parameter $p$.

First, we consider the approximation error. Any function $g \in \mathcal G_p$ can be written as $g(x) = h(u) = \beta_1 + \beta_2 u + \cdots + \beta_p u^{p-1} = [1, u, \ldots, u^{p-1}]\,\beta$, and so $g(X)$ is distributed as $[1, U, \ldots, U^{p-1}]\,\beta$, where $U \sim \mathcal U(0, 1)$. Similarly, $g^*(X)$ is distributed as $[1, U, U^2, U^3]\,\beta^*$. It follows that an expression for the approximation error is

$$\int_0^1 \big([1, \ldots, u^{p-1}]\,\beta_p - [1, u, u^2, u^3]\,\beta^*\big)^2\,\mathrm du.$$

To minimize this error, we set the gradient with respect to $\beta$ to zero and obtain the $p$ linear equations

$$\int_0^1 [1, \ldots, u^{p-1}]^\top \big([1, \ldots, u^{p-1}]\,\beta - [1, u, u^2, u^3]\,\beta^*\big)\,\mathrm du = 0.$$

Let $H_p$ denote the $p \times p$ Hilbert matrix, which has $(i, j)$-th entry given by $\int_0^1 u^{i+j-2}\,\mathrm du = 1/(i + j - 1)$. Then the above system of linear equations can be written as $H_p\,\beta = \widetilde H\,\beta^*$, where $\widetilde H$ is the $p \times 4$ upper-left sub-block of $H_q$ and $q = \max\{p, 4\}$. The solution, which we denote by $\beta_p$, can be computed explicitly for each $p$; for $p \geq 4$ it is simply $\beta^*$ padded with zeros. Hence, the approximation error $\mathbb E\,(g^{\mathcal G_p}(X) - g^*(X))^2$ is given by

$$\int_0^1 \big([1, \ldots, u^{p-1}]\,\beta_p - [1, u, u^2, u^3]\,\beta^*\big)^2\,\mathrm du.$$
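This Hilbert-matrix computation is easy to verify numerically. The following minimal sketch (not from the original code base; it assumes the running example's true coefficients $\beta^* = [10, -140, 400, -250]^\top$ and uses scipy.linalg.hilbert) solves $H_p\,\beta_p = \widetilde H\,\beta^*$ and approximates the approximation-error integral on a fine grid:

```python
import numpy as np
from scipy.linalg import hilbert

beta_star = np.array([10., -140., 400., -250.])

def approx_error(p):
    q = max(p, 4)
    H = hilbert(q)               # Hilbert matrix: H[i, j] = 1/(i + j + 1), 0-based
    beta_p = np.linalg.solve(H[:p, :p], H[:p, :4] @ beta_star)
    # approximate the integral of (g - g*)^2 on a fine uniform grid
    u = np.linspace(0, 1, 10**5)
    g = np.vander(u, p, increasing=True) @ beta_p      # columns 1, u, ..., u^{p-1}
    gstar = np.vander(u, 4, increasing=True) @ beta_star
    return np.mean((g - gstar)**2)

for p in [1, 2, 3, 4, 5]:
    print(p, approx_error(p))    # the error drops to (numerically) zero at p = 4
```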
Notice how the approximation error becomes smaller as $p$ increases; in this particular example it is in fact zero for $p \geq 4$, since the class then contains the cubic $g^*$. In general, as the class of approximating functions becomes more complex, the approximation error goes down.

Next, we illustrate the typical behavior of the statistical error. Since $g_\tau^{\mathcal G_p}(x) = x^\top\widehat\beta$, the statistical error can be written as

$$\int_0^1 \big([1, \ldots, u^{p-1}]\,(\widehat\beta - \beta_p)\big)^2\,\mathrm du.$$

The figure below illustrates the decomposition of the generalization risk for the same training set that was used to compute the test loss earlier. Recall that a test loss gives an estimate of the generalization risk, using independent test data. Comparing the two figures, we see that in this case the two match closely. The global minimum of the statistical error is attained at $p = 4$; since the approximation error is monotonically decreasing to zero, $p = 4$ is also the global minimizer of the generalization risk.

Figure: The generalization risk for a particular training set is the sum of the irreducible error, the approximation error, and the statistical error. The approximation error decreases to zero as $p$ increases, whereas the statistical error has a tendency to increase after $p = 4$.

Note that the statistical error depends on the estimate $\widehat\beta$, which in its turn depends on the training set $\tau$. We can obtain a better understanding of the statistical error by considering its expected behavior, that is, averaged over many training sets. This is explored in the exercises.

Using again the squared-error loss, a second decomposition (for a general class $\mathcal G$) starts from $\ell(g_T^{\mathcal G}) = \ell^* + \big(\ell(g_T^{\mathcal G}) - \ell^*\big)$. Using a similar reasoning as in the proof of the earlier theorem, we have

$$\ell(g_T^{\mathcal G}) = \mathbb E\,(g_T^{\mathcal G}(X) - Y)^2 = \ell^* + \mathbb E\,(g_T^{\mathcal G}(X) - g^*(X))^2 = \ell^* + \mathbb E\,D^2(X, T),$$
where $D(x, \tau) := g_\tau^{\mathcal G}(x) - g^*(x)$. Now consider the random variable $D(x, T)$ for a random training set $T$. The expectation of its square is

$$\mathbb E\,D^2(x, T) = (\mathbb E\,D(x, T))^2 + \mathbb{V}\mathrm{ar}\,D(x, T) = \underbrace{\big(\mathbb E\,g_T^{\mathcal G}(x) - g^*(x)\big)^2}_{\text{pointwise squared bias}} + \underbrace{\mathbb{V}\mathrm{ar}\,g_T^{\mathcal G}(x)}_{\text{pointwise variance}}.$$

If we view the learner $g_T^{\mathcal G}(x)$ as a function of a random training set $T$, then the pointwise squared bias term is a measure for how close $g_T^{\mathcal G}(x)$ is on average to the true $g^*(x)$, whereas the pointwise variance term measures the deviation of $g_T^{\mathcal G}(x)$ from its expected value $\mathbb E\,g_T^{\mathcal G}(x)$. The squared bias can be reduced by making the class of functions $\mathcal G$ more complex. However, decreasing the bias by increasing the complexity often leads to an increase in the variance term. We are thus seeking learners that provide an optimal balance between the bias and the variance, as expressed via a minimal generalization risk. This is called the bias-variance tradeoff. Note that the expected generalization risk can be written as $\ell^* + \mathbb E\,D^2(X, T)$, where $X$ and $T$ are independent. It therefore decomposes as

$$\mathbb E\,\ell(g_T^{\mathcal G}) = \ell^* + \underbrace{\mathbb E\,\big(\mathbb E[g_T^{\mathcal G}(X) \mid X] - g^*(X)\big)^2}_{\text{expected squared bias}} + \underbrace{\mathbb E\,\mathbb{V}\mathrm{ar}[g_T^{\mathcal G}(X) \mid X]}_{\text{expected variance}}.$$

Estimating Risk

The most straightforward way to quantify the generalization risk is to estimate it via the test loss. However, the generalization risk depends inherently on the training set, and so different training sets may yield significantly different estimates. Moreover, when there is a limited amount of data available, reserving a substantial proportion of the data for testing rather than training may be uneconomical. In this section we consider different methods for estimating risk measures which aim to circumvent these difficulties.

In-Sample Risk

We mentioned that, due to the phenomenon of overfitting, the training loss of the learner, $\ell_\tau(g_\tau)$ (for simplicity, here we omit the superscript $\mathcal G$ from $g_\tau^{\mathcal G}$), is not a good estimate of the generalization risk $\ell(g_\tau)$ of the learner. One reason for this is that we use the same data for both training the model and assessing its risk. How should we then estimate the generalization risk or expected generalization risk?

To simplify the analysis, suppose that we wish to estimate the average accuracy of the predictions of the learner $g_\tau$ at the $n$ feature vectors $x_1, \ldots, x_n$ (these are part of the training set $\tau$). In other words, we wish to estimate the in-sample risk of the learner $g_\tau$:

$$\ell_{\mathrm{in}}(g_\tau) = \frac1n \sum_{i=1}^n \mathbb E\,\mathrm{loss}(Y_i', g_\tau(x_i)),$$

where each response $Y_i'$ is drawn from $f(y \mid x_i)$, independently. Even in this simplified setting, the training loss of the learner will be a poor estimate of the in-sample risk.
Instead, the proper way to assess the prediction accuracy of the learner at the feature vectors $x_1, \ldots, x_n$ is to draw new response values $Y_i' \sim f(y \mid x_i)$, $i = 1, \ldots, n$, independent from the responses $y_1, \ldots, y_n$ in the training data, and then estimate the in-sample risk of $g_\tau$ via

$$\frac1n \sum_{i=1}^n \mathrm{loss}(Y_i', g_\tau(x_i)).$$

For a fixed training set $\tau$, we can compare the training loss of the learner with the in-sample risk. Their difference,

$$\mathrm{op}_\tau = \ell_{\mathrm{in}}(g_\tau) - \ell_\tau(g_\tau),$$

is called the optimism (of the training loss), because it measures how much the training loss underestimates (is optimistic about) the unknown in-sample risk. Mathematically, it is simpler to work with the expected optimism,

$$\mathbb E[\mathrm{op}_T \mid X_1 = x_1, \ldots, X_n = x_n] =: \mathbb E_X\,\mathrm{op}_T,$$

where the expectation is taken over a random training set $T$, conditional on $X_i = x_i$, $i = 1, \ldots, n$. For ease of notation, we have abbreviated the expected optimism to $\mathbb E_X\,\mathrm{op}_T$, where $\mathbb E_X$ denotes the expectation operator conditional on $X_i = x_i$ for all $i$. As in the earlier example, the feature vectors are stored as the rows of an $n \times p$ matrix $X$. It turns out that the expected optimism for various loss functions can be expressed in terms of the (conditional) covariance between the observed and predicted response.

Theorem: Expected Optimism. For the squared-error loss and the 0-1 loss with 0-1 response, the expected optimism is

$$\mathbb E_X\,\mathrm{op}_T = \frac2n \sum_{i=1}^n \mathbb{C}\mathrm{ov}_X\big(g_T(x_i), Y_i\big).$$

Proof. In what follows, all expectations are taken conditional on $X_1 = x_1, \ldots, X_n = x_n$. Let $Y_i$ be the response for $x_i$ and let $\widehat Y_i = g_T(x_i)$ be the predicted value; note that the latter depends on $Y_1, \ldots, Y_n$. Also, let $Y_i'$ be an independent copy of $Y_i$ for the same $x_i$. In particular, $Y_i'$ has the same distribution as $Y_i$ and is statistically independent of all $Y_1, \ldots, Y_n$, including $Y_i$, and therefore is also independent of $\widehat Y_i$. We have

$$n\,\mathbb E_X\,\mathrm{op}_T = \sum_{i=1}^n \mathbb E_X\big[(Y_i' - \widehat Y_i)^2 - (Y_i - \widehat Y_i)^2\big] = \sum_{i=1}^n \mathbb E_X\big[-2 Y_i' \widehat Y_i + 2 Y_i \widehat Y_i\big] = 2 \sum_{i=1}^n \mathbb{C}\mathrm{ov}_X(\widehat Y_i, Y_i),$$

where we used the facts that $\mathbb E_X Y_i'^2 = \mathbb E_X Y_i^2$ and $\mathbb E_X[Y_i' \widehat Y_i] = \mathbb E_X Y_i\,\mathbb E_X \widehat Y_i$. The proof for the 0-1 loss with 0-1 response is left as an exercise.

In summary, the expected optimism indicates how much, on average, the training loss deviates from the expected in-sample risk. Since the covariance of independent random variables is zero, the expected optimism is zero if the learner $g_T$ is statistically independent from the responses $Y_1, \ldots, Y_n$.
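The theorem can be checked by simulation. Below is a minimal sketch (not from the original code base; it assumes the cubic ground truth of the running example with $p = 4$ linear features) that redraws training responses and independent copies many times and compares the average optimism with the theoretical value $2\ell^* p/n$ derived in the next example:

```python
import numpy as np

np.random.seed(0)
n, p, sig = 100, 4, 5.0
u = np.random.rand(n, 1)
X = u ** np.arange(p)                    # model matrix [1, u, u^2, u^3]
beta = np.array([[10., -140., 400., -250.]]).T
P = X @ np.linalg.solve(X.T @ X, X.T)    # projection ("hat") matrix

opt = []
for _ in range(10000):
    y = X @ beta + sig * np.random.randn(n, 1)    # training responses
    yp = X @ beta + sig * np.random.randn(n, 1)   # fresh copies Y_i'
    yhat = P @ y
    opt.append((np.sum((yp - yhat)**2) - np.sum((y - yhat)**2)) / n)

print(np.mean(opt))          # Monte Carlo estimate of the expected optimism
print(2 * sig**2 * p / n)    # theoretical value 2*l*p/n = 2.0
```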
Example (Polynomial Regression (cont.)). We continue the polynomial regression example, where the components of the response vector $Y = [Y_1, \ldots, Y_n]^\top$ are independent and normally distributed with variance $\ell^* = \sigma^2$ (the irreducible error) and expectations $\mathbb E_X Y_i = g^*(x_i) = x_i^\top\beta$. Using the formula $\widehat\beta = X^+ Y$ for the least-squares estimator, the expected optimism is

$$\frac2n \sum_{i=1}^n \mathbb{C}\mathrm{ov}_X(x_i^\top \widehat\beta, Y_i) = \frac2n \mathrm{tr}\big(\mathbb{C}\mathrm{ov}_X(X X^+ Y, Y)\big) = \frac2n \mathrm{tr}\big(X X^+ \mathbb{C}\mathrm{ov}_X(Y, Y)\big) = \frac{2\ell^*}{n}\,\mathrm{tr}(X X^+) = \frac{2\ell^* p}{n}.$$

In the last equation we used the cyclic property of the trace, $\mathrm{tr}(X X^+) = \mathrm{tr}(X^+ X) = \mathrm{tr}(I_p) = p$, assuming that $\mathrm{rank}(X) = p$. Therefore, an estimate for the in-sample risk is

$$\widehat\ell_{\mathrm{in}}(g_\tau) = \ell_\tau(g_\tau) + \frac{2\ell^* p}{n},$$

where we have assumed that the irreducible risk $\ell^*$ is known. This estimate turns out to be very close to the test loss computed earlier. Hence, instead of computing the test loss to assess the best model complexity $p$, we could simply have minimized the training loss plus the correction term $2\ell^* p/n$. In practice, $\ell^*$ also has to be estimated somehow.

Figure: In-sample risk estimate $\widehat\ell_{\mathrm{in}}(g_\tau)$ as a function of the number of parameters $p$ of the model, with the test loss superimposed as a blue dashed curve.
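In code, this correction is a one-line adjustment of the training losses already computed above. A minimal sketch (assuming the variables trainloss, p_range, sig, and n from the polyreg2.py snippet reconstructed earlier, and taking $\ell^* = \sigma^2$ as known):

```python
from polyreg2 import *   # trainloss, p_range, sig, n from the running example

ell_star = sig**2        # irreducible risk, assumed known here
in_sample = [trainloss[p] + 2 * ell_star * p / n for p in p_range]

plt.plot(p_range, in_sample, 'o-')
plt.xlabel('Number of parameters $p$')
plt.ylabel('In-sample risk estimate')
plt.show()
```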
Cross-Validation

In general, for complex function classes $\mathcal G$, it is very difficult to derive simple formulas for the approximation and statistical errors, let alone for the generalization risk or expected generalization risk. As we saw, when there is an abundance of data, the easiest way to assess the generalization risk for a given training set $\tau$ is to obtain a test set and evaluate the test loss. When a sufficiently large test set is not available but computational resources are cheap, one can instead gain direct knowledge of the expected generalization risk via a computationally intensive method called cross-validation.

The idea is to make multiple identical copies of the data set, and to partition each copy into different training and test sets, as illustrated in the figure below. Here, there are four copies of the data set (consisting of response and explanatory variables). Each copy is divided into a test set (colored blue) and a training set (colored pink). For each of these sets, we estimate the model parameters using only the training data and then predict the responses for the test set. The average loss between the predicted and observed responses is then a measure for the predictive power of the model.

Figure: An illustration of four-fold cross-validation, representing four copies of the same data set. The data in each copy is partitioned into a training set (pink) and a test set (blue). The darker columns represent the response variable and the lighter ones the explanatory variables.

In particular, suppose we partition a data set of size $n$ into $K$ folds $\mathcal C_1, \ldots, \mathcal C_K$ of sizes $n_1, \ldots, n_K$ (hence, $n_1 + \cdots + n_K = n$). Typically, $n_k \approx n/K$, $k = 1, \ldots, K$. Let $\ell_{\mathcal C_k}$ be the test loss when using $\mathcal C_k$ as test data and all remaining data, denoted $\tau_{-k}$, as training data. Each $\ell_{\mathcal C_k}$ is an unbiased estimator of the generalization risk for training set $\tau_{-k}$; that is, of $\ell(g_{\tau_{-k}})$. The $K$-fold cross-validation loss is the weighted average of these risk estimators:

$$\mathrm{CV}_K = \sum_{k=1}^K \frac{n_k}{n}\,\ell_{\mathcal C_k}(g_{\tau_{-k}}) = \frac1n \sum_{k=1}^K \sum_{i \in \mathcal C_k} \mathrm{loss}\big(g_{\tau_{-k}}(x_i), y_i\big) = \frac1n \sum_{i=1}^n \mathrm{loss}\big(g_{\tau_{-\kappa(i)}}(x_i), y_i\big),$$

where the function $\kappa: \{1, \ldots, n\} \to \{1, \ldots, K\}$ indicates to which of the $K$ folds each of the $n$ observations belongs. As the average is taken over varying training sets $\{\tau_{-k}\}$, it estimates the expected generalization risk $\mathbb E\,\ell(g_T)$, rather than the generalization risk $\ell(g_\tau)$ for the particular training set $\tau$.

Example (Polynomial Regression (cont.)). For the polynomial regression example, we can calculate a $K$-fold cross-validation loss with a nonrandom partitioning of the training set using the following code, which imports the previous code for the polynomial regression example. We omit the full plotting code.
polyregCV.py

```python
from polyreg3 import *

K_vals = [5, 10, 100]                  # numbers of folds
cv = np.zeros((len(K_vals), max_p))    # cross-validation losses
X = np.ones((n, 1))
for p in p_range:
    if p > 1:
        X = np.hstack((X, u**(p-1)))
    j = 0
    for K in K_vals:
        loss = []
        for k in range(1, K + 1):
            # integer indices of the test samples in fold k
            test_ind = ((n/K)*(k-1) + np.arange(1, n/K + 1) - 1).astype('int')
            train_ind = np.setdiff1d(np.arange(n), test_ind)

            X_train, y_train = X[train_ind, :], y[train_ind, :]
            X_test, y_test = X[test_ind, :], y[test_ind]

            # fit model and evaluate test loss
            betahat = solve(X_train.T @ X_train, X_train.T @ y_train)
            loss.append(norm(y_test - X_test @ betahat) ** 2)
        cv[j, p-1] = sum(loss) / n
        j += 1

# basic plotting
plt.plot(p_range, cv[0, :], 'k-.')
plt.plot(p_range, cv[1, :], 'r')
plt.plot(p_range, cv[2, :], 'b--')
plt.xlabel('Number of parameters $p$')
plt.ylabel('$K$-fold cross-validation loss')
plt.show()
```

Figure: $K$-fold cross-validation loss for the polynomial regression example, for $K = 5$, $10$, and $100$.
The figure shows the cross-validation loss for $K \in \{5, 10, 100\}$. The case $K = n$ (here, $K = 100$) corresponds to leave-one-out cross-validation, which can be computed more efficiently using a closed-form formula (given in a later theorem).

Modeling Data

The first step in any data analysis is to model the data in one form or another. For example, in an unsupervised learning setting with data represented by a vector $x = [x_1, \ldots, x_p]^\top$, a very general model is to assume that $x$ is the outcome of a random vector $X = [X_1, \ldots, X_p]^\top$ with some unknown pdf $f$. The model can then be refined by assuming a specific form of $f$.

When given a sequence of such data vectors $x_1, \ldots, x_n$, one of the simplest models is to assume that the corresponding random vectors $X_1, \ldots, X_n$ are independent and identically distributed (iid). We write

$$X_1, \ldots, X_n \sim_{\text{iid}} f \quad \text{or} \quad X_1, \ldots, X_n \sim_{\text{iid}} \mathrm{Dist}$$

to indicate that the random vectors form an iid sample from a sampling pdf $f$ or sampling distribution Dist. This model formalizes the notion that the knowledge about one variable does not provide extra information about another variable. The main theoretical use of independent data models is that the joint density of the random vectors $X_1, \ldots, X_n$ is simply the product of the marginal ones. Specifically,

$$f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = f(x_1) \cdots f(x_n).$$

In most models of this kind, our approximation or model for the sampling distribution is specified up to a small number of parameters; that is, $g(x)$ is of the form $g(x \mid \beta)$, which is known up to some parameter vector $\beta$. Examples for the one-dimensional case ($p = 1$) include the $\mathcal N(\mu, \sigma^2)$, $\mathrm{Bin}(n, p)$, and $\mathrm{Exp}(\lambda)$ distributions; other common sampling distributions are tabulated elsewhere in the text. Typically, the parameters are unknown and must be estimated from the data. In a nonparametric setting the whole sampling distribution would be unknown. To visualize the underlying sampling distribution from outcomes $x_1, \ldots, x_n$, one can use graphical representations such as histograms, density plots, and empirical cumulative distribution functions, as discussed in an earlier chapter.

If the order in which the data were collected (or their labeling) is not informative or relevant, then the joint pdf of $X_1, \ldots, X_n$ satisfies the symmetry

$$f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = f_{X_1, \ldots, X_n}(x_{\pi_1}, \ldots, x_{\pi_n})$$

for any permutation $\pi_1, \ldots, \pi_n$ of the integers $1, \ldots, n$. We say that the infinite sequence $X_1, X_2, \ldots$ is exchangeable if this permutational invariance holds for any finite subset of the sequence. As we shall see in the section on Bayesian learning, it is common to assume that the random vectors $X_1, \ldots, X_n$ are a subset of an exchangeable sequence and thus satisfy this invariance. Note that while iid random variables are exchangeable, the converse is not necessarily true. Thus, the assumption of an exchangeable sequence of random vectors is weaker than the assumption of iid random vectors.
The figure below illustrates the modeling tradeoffs. The keywords within the triangle represent various modeling paradigms; a few keywords have been highlighted, symbolizing their importance in modeling. The specific meaning of the keywords does not concern us here, but the point is that there are many models to choose from, depending on what assumptions are made about the data.

Figure: Illustration of the modeling dilemma. Complex models are more generally applicable, but may be difficult to analyze. Simple models may be highly tractable, but may not describe the data accurately. The triangular shape signifies that there are a great many specific models but not so many generic ones.

On the one hand, models that make few assumptions are more widely applicable, but at the same time may not be very mathematically tractable or provide insight into the nature of the data. On the other hand, very specific models may be easy to handle and interpret, but may not match the data very well. This tradeoff between the tractability and applicability of the model is very similar to the approximation-estimation tradeoff described earlier.

In the typical unsupervised setting we have a training set $\tau = \{x_1, \ldots, x_n\}$ that is viewed as the outcome of $n$ iid random variables $X_1, \ldots, X_n$ from some unknown pdf $f$. The objective is then to learn or estimate $f$ from the finite training data. To put the learning in a similar framework as for the supervised learning discussed in the preceding sections, we begin by specifying a class of probability density functions $\mathcal G_p := \{g(\cdot \mid \theta),\ \theta \in \Theta\}$, where $\theta$ is a parameter in some subset $\Theta$ of $\mathbb R^p$. We now seek the best $g$ in $\mathcal G_p$ to minimize some risk. Note that $\mathcal G_p$ may not necessarily contain the true $f$, even for very large $p$. We stress that our notation $g(x)$ has a different meaning in the supervised and unsupervised case: in the supervised case, $g$ is interpreted as a prediction function for a response $y$; in the unsupervised setting, $g$ is an approximation of a density $f$.

For each $g$ in the class $\mathcal G_p$, we measure the discrepancy between the true model $f(x)$ and the hypothesized model $g(x \mid \theta)$ using the loss function

$$\mathrm{loss}\big(f(x), g(x \mid \theta)\big) = \ln \frac{f(x)}{g(x \mid \theta)} = \ln f(x) - \ln g(x \mid \theta).$$
The expected value of this loss (that is, the risk) is thus

$$\ell(g) = \mathbb E \ln \frac{f(X)}{g(X \mid \theta)} = \int f(x) \ln \frac{f(x)}{g(x \mid \theta)}\,\mathrm dx.$$

This integral provides a fundamental way to measure the distance between two densities and is called the Kullback-Leibler (KL) divergence (sometimes called the cross-entropy distance) between $f$ and $g(\cdot \mid \theta)$. Note that the KL divergence is not symmetric in $f$ and $g(\cdot \mid \theta)$. Moreover, it is always greater than or equal to $0$ (see the exercises) and equal to $0$ when $f = g(\cdot \mid \theta)$.

Using similar notation as for the supervised learning setting, define $g^{\mathcal G_p}$ as the global minimizer of the risk in the class $\mathcal G_p$; that is, $g^{\mathcal G_p} = \mathrm{argmin}_{g \in \mathcal G_p}\,\ell(g)$. If we define

$$\theta^* = \mathrm{argmin}_\theta\,\mathbb E\,\mathrm{loss}\big(f(X), g(X \mid \theta)\big) = \mathrm{argmin}_\theta \int \big[\ln f(x) - \ln g(x \mid \theta)\big] f(x)\,\mathrm dx = \mathrm{argmax}_\theta \int f(x) \ln g(x \mid \theta)\,\mathrm dx = \mathrm{argmax}_\theta\,\mathbb E \ln g(X \mid \theta),$$

then $g^{\mathcal G_p} = g(\cdot \mid \theta^*)$ and learning $g^{\mathcal G_p}$ is equivalent to learning (or estimating) $\theta^*$. To learn $\theta^*$ from a training set $\tau = \{x_1, \ldots, x_n\}$ we then minimize the training loss

$$\frac1n \sum_{i=1}^n \mathrm{loss}\big(f(x_i), g(x_i \mid \theta)\big) = \frac1n \sum_{i=1}^n \ln f(x_i) - \frac1n \sum_{i=1}^n \ln g(x_i \mid \theta),$$

giving

$$\widehat\theta_n := \mathrm{argmax}_\theta\,\frac1n \sum_{i=1}^n \ln g(x_i \mid \theta).$$

As the logarithm is an increasing function, this is equivalent to

$$\widehat\theta_n := \mathrm{argmax}_\theta\,\prod_{i=1}^n g(x_i \mid \theta),$$

where $\prod_{i=1}^n g(x_i \mid \theta)$ is the likelihood of the data; that is, the joint density of the $\{X_i\}$ evaluated at the points $\{x_i\}$. We therefore have recovered the classical maximum likelihood estimate of $\theta$. When the risk $\ell(g(\cdot \mid \theta))$ is convex in $\theta$ over a convex set $\Theta$, we can find the maximum likelihood estimator by setting the gradient of the training loss to zero; that is, we solve

$$\frac1n \sum_{i=1}^n S(x_i \mid \theta) = 0,$$

where $S(x \mid \theta) := \frac{\partial \ln g(x \mid \theta)}{\partial \theta}$ is the gradient of $\ln g(x \mid \theta)$ with respect to $\theta$, and is often called the score.

Example (Exponential Model). Suppose we have the training data $\tau_n = \{x_1, \ldots, x_n\}$, which is modeled as a realization of $n$ positive iid random variables, $X_1, \ldots, X_n \sim_{\text{iid}} f(x)$. We select the class of approximating functions to be the parametric class
$\{g(\cdot \mid \theta): g(x \mid \theta) = \theta e^{-\theta x},\ x \geq 0,\ \theta > 0\}$. In other words, we look for the best $g^{\mathcal G}$ within the family of exponential distributions with unknown parameter $\theta$. The likelihood of the data is

$$\prod_{i=1}^n g(x_i \mid \theta) = \theta^n \exp\Big({-\theta \sum_{i=1}^n x_i}\Big) = \exp\big({-\theta\, n \bar x_n + n \ln \theta}\big),$$

and the score is $S(x \mid \theta) = -x + \theta^{-1}$. Thus, maximizing the likelihood with respect to $\theta$ is the same as maximizing $-\theta\, n \bar x_n + n \ln \theta$, or solving $\frac1n \sum_{i=1}^n S(x_i \mid \theta) = -\bar x_n + \theta^{-1} = 0$. In other words, the solution to the score equation is the maximum likelihood estimate $\widehat\theta_n = 1/\bar x_n$.

In a supervised setting, where the data is represented by a vector $x$ of explanatory variables and a response $y$, the general model is that $(x, y)$ is an outcome of $(X, Y) \sim f$ for some unknown $f$, and for a training sequence $(x_1, y_1), \ldots, (x_n, y_n)$ the default model assumption is that $(X_1, Y_1), \ldots, (X_n, Y_n) \sim_{\text{iid}} f$. As explained earlier, the analysis primarily involves the conditional pdf $f(y \mid x)$ and, in particular (when using the squared-error loss), the conditional expectation $g^*(x) = \mathbb E[Y \mid X = x]$. The resulting representation allows us to write the response at $X = x$ as a function of the feature plus an error term: $Y = g^*(x) + \varepsilon(x)$.

This leads to the simplest and most important model for supervised learning, where we choose a linear class $\mathcal G$ of prediction or guess functions and assume that it is rich enough to contain the true $g^*$. If we further assume that, conditional on $X = x$, the error term $\varepsilon$ does not depend on $x$, that is, $\mathbb E\,\varepsilon = 0$ and $\mathbb V\mathrm{ar}\,\varepsilon = \sigma^2$, then we obtain the following model.

Definition: Linear Model. In a linear model the response $Y$ depends on a $p$-dimensional explanatory variable $x = [x_1, \ldots, x_p]^\top$ via the linear relationship

$$Y = x^\top \beta + \varepsilon,$$

where $\mathbb E\,\varepsilon = 0$ and $\mathbb V\mathrm{ar}\,\varepsilon = \sigma^2$.

Note that this is a model for a single pair $(x, Y)$. The model for the training set $\{(x_i, Y_i)\}$ is simply that each $Y_i$ satisfies the relation above (with $x = x_i$) and that the $\{Y_i\}$ are independent. Gathering all responses in the vector $Y = [Y_1, \ldots, Y_n]^\top$, we can write

$$Y = X \beta + \varepsilon,$$

where $\varepsilon = [\varepsilon_1, \ldots, \varepsilon_n]^\top$ is a vector of iid copies of $\varepsilon$ and $X$ is the so-called model matrix, with rows $x_1^\top, \ldots, x_n^\top$. Linear models are fundamental building blocks of statistical learning algorithms, and a large part of a later chapter is devoted to linear regression models.

Example (Polynomial Regression (cont.)). For our running example we see that the data is described by a linear model of the form above, with the model matrix $X$ given earlier.
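The maximum likelihood recipe in the exponential-model example above is easy to check numerically. A minimal sketch (with simulated data; the true rate $\theta = 0.5$ and the optimization bounds are arbitrary choices for illustration) compares the closed-form estimate $1/\bar x_n$ with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

np.random.seed(1)
x = np.random.exponential(scale=2.0, size=1000)   # true theta = 1/scale = 0.5

# closed-form MLE from the score equation: theta = 1/mean(x)
theta_mle = 1 / x.mean()

# direct numerical maximization of n*ln(theta) - theta*sum(x)
negloglik = lambda th: -(len(x) * np.log(th) - th * x.sum())
res = minimize_scalar(negloglik, bounds=(1e-6, 10), method='bounded')

print(theta_mle, res.x)   # the two estimates agree
```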
Before we discuss a few other models in the following sections, we would like to emphasize a number of points about modeling.

- Any model for data is likely to be wrong. For example, real data (as opposed to computer-generated data) are often assumed to come from a normal distribution, which is never exactly true. However, an important advantage of using a normal distribution is that it has many nice mathematical properties, as we will see below.
- Most data models depend on a number of unknown parameters, which need to be estimated from the observed data.
- Any model for real-life data needs to be checked for suitability. An important criterion is that data simulated from the model should resemble the observed data, at least for a certain choice of model parameters.

Here are some guidelines for choosing a model.

- Think of the data as a spreadsheet or data frame, where rows represent the data units and the columns the data features (variables, groups).
- First establish the type of the features (quantitative, qualitative, discrete, continuous, etc.).
- Assess whether the data can be assumed to be independent across rows or columns.
- Decide on the level of generality of the model. For example, should we use a simple model with a few unknown parameters or a more generic model that has a large number of parameters? Simple, specific models are easier to fit to the data (low estimation error) than more general models, but the fit itself may not be accurate (high approximation error). The tradeoffs discussed earlier play an important role here.
- Decide on using a classical (frequentist) or Bayesian model. A short introduction to Bayesian learning is given below.

Multivariate Normal Models

A standard model for numerical observations $x_1, \ldots, x_n$ (forming, e.g., a column in a spreadsheet or data frame) is that they are the outcomes of iid normal random variables

$$X_1, \ldots, X_n \sim_{\text{iid}} \mathcal N(\mu, \sigma^2).$$

It is helpful to view a normally distributed random variable as a simple transformation of a standard normal random variable. To wit, if $Z$ has a standard normal distribution, then $X = \mu + \sigma Z$ has a $\mathcal N(\mu, \sigma^2)$ distribution. The generalization to $n$ dimensions is discussed in the appendix; we summarize the main points. Let $Z_1, \ldots, Z_n \sim_{\text{iid}} \mathcal N(0, 1)$. The pdf of $Z = [Z_1, \ldots, Z_n]^\top$ (that is, the joint pdf of $Z_1, \ldots, Z_n$) is given by

$$f_Z(z) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}}\,e^{-z_i^2/2} = (2\pi)^{-n/2}\,e^{-\frac12 z^\top z}, \quad z \in \mathbb R^n.$$
We write $Z \sim \mathcal N(0, I_n)$ and say that $Z$ has a standard normal distribution in $\mathbb R^n$. Let $X = \mu + B Z$ for some $m \times n$ matrix $B$ and $m$-dimensional vector $\mu$. Then $X$ has expectation vector $\mu$ and covariance matrix $\Sigma = B B^\top$. This leads to the following definition.

Definition: Multivariate Normal Distribution. An $m$-dimensional random vector $X$ that can be written in the form $X = \mu + B Z$ for some $m$-dimensional vector $\mu$ and $m \times n$ matrix $B$, with $Z \sim \mathcal N(0, I_n)$, is said to have a multivariate normal or multivariate Gaussian distribution with mean vector $\mu$ and covariance matrix $\Sigma = B B^\top$. We write $X \sim \mathcal N(\mu, \Sigma)$.

The $m$-dimensional density of a multivariate normal distribution has a very similar form to the density of the one-dimensional normal distribution and is given in the next theorem. We leave the proof as an exercise.

Theorem: Density of a Multivariate Normal Random Vector. Let $X \sim \mathcal N(\mu, \Sigma)$, where the $m \times m$ covariance matrix $\Sigma$ is invertible. Then $X$ has pdf

$$f_X(x) = \frac{1}{\sqrt{(2\pi)^m |\Sigma|}}\,e^{-\frac12 (x - \mu)^\top \Sigma^{-1} (x - \mu)}, \quad x \in \mathbb R^m.$$

Figure: pdfs of two bivariate (that is, two-dimensional) normal distributions. In both cases the mean vector is $[0, 0]^\top$ and the variances (the diagonal elements of $\Sigma$) are $1$. The correlation coefficients (or, equivalently here, the covariances) are $0$ (left) and $0.8$ (right).
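The defining transformation $X = \mu + BZ$ translates directly into a sampling routine: take $B$ to be, for instance, the Cholesky factor of the desired covariance matrix. A minimal sketch (with $\Sigma$ chosen to match the right panel of the figure above):

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0., 0.])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])        # covariance with correlation 0.8
B = np.linalg.cholesky(Sigma)         # Sigma = B @ B.T

Z = rng.standard_normal((2, 1000))    # standard normal in R^2, 1000 draws
X = mu[:, None] + B @ Z               # X ~ N(mu, Sigma)

print(np.cov(X))                      # close to Sigma
```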
The main reason why the multivariate normal distribution plays an important role in data science and machine learning is that it satisfies the following properties, the details and proofs of which can be found in the appendix:

1. Affine combinations are normal.
2. Marginal distributions are normal.
3. Conditional distributions are normal.

Normal Linear Models

Normal linear models combine the simplicity of the linear model with the tractability of the Gaussian distribution. They are the principal model for traditional statistics, and include the classic linear regression and analysis of variance models.

Definition: Normal Linear Model. In a normal linear model the response $Y$ depends on a $p$-dimensional explanatory variable $x = [x_1, \ldots, x_p]^\top$ via the linear relationship

$$Y = x^\top \beta + \varepsilon,$$

where $\varepsilon \sim \mathcal N(0, \sigma^2)$.

Thus, a normal linear model is a linear model (in the sense of the earlier definition) with normal error terms. Similarly, the corresponding normal linear model for the whole training set $\{(x_i, Y_i)\}$ has the form

$$Y = X \beta + \varepsilon,$$

where $X$ is the model matrix comprised of rows $x_1^\top, \ldots, x_n^\top$ and $\varepsilon \sim \mathcal N(0, \sigma^2 I_n)$. Consequently, $Y$ can be written as $Y = X\beta + \sigma Z$, where $Z \sim \mathcal N(0, I_n)$, so that $Y \sim \mathcal N(X\beta, \sigma^2 I_n)$. It follows that its joint density is given by

$$g(y \mid \beta, \sigma^2) = (2\pi\sigma^2)^{-n/2}\,e^{-\frac{1}{2\sigma^2}\|y - X\beta\|^2}.$$

Estimation of the parameter $\beta$ can be performed via the least-squares method, as discussed earlier. An estimate can also be obtained via the maximum likelihood method. This simply means finding the parameters $\sigma^2$ and $\beta$ that maximize the likelihood of the outcome $y$, given by the right-hand side above. It is clear that for every value of $\sigma^2$, the likelihood is maximal when $\|y - X\beta\|^2$ is minimal. As a consequence, the maximum likelihood estimate for $\beta$ is the same as the least-squares estimate. We leave it as an exercise to show that the maximum likelihood estimate of $\sigma^2$ is equal to

$$\widehat{\sigma^2} = \frac{\|y - X\widehat\beta\|^2}{n},$$

where $\widehat\beta$ is the maximum likelihood estimate (least-squares estimate, in this case) of $\beta$.
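A short sketch of fitting a normal linear model (with arbitrary illustrative parameters): the least-squares solution doubles as the maximum likelihood estimate of $\beta$, and the averaged squared residuals give the maximum likelihood estimate of $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 3, 0.5
X = np.column_stack([np.ones(n), rng.random((n, p - 1))])  # model matrix
beta = np.array([1.0, -2.0, 3.0])
y = X @ beta + sigma * rng.standard_normal(n)

betahat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares = ML for beta
sigma2hat = np.sum((y - X @ betahat)**2) / n     # ML estimate of sigma^2

print(betahat, sigma2hat)
```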
Bayesian Learning

In Bayesian unsupervised learning, we seek to approximate the unknown joint density $f(x_1, \ldots, x_n)$ of the training data $T_n = \{X_1, \ldots, X_n\}$ via a joint pdf of the form

$$\int \Big[\prod_{i=1}^n g(x_i \mid \theta)\Big]\,w(\theta)\,\mathrm d\theta,$$

where $g(\cdot \mid \theta)$ belongs to a family of parametric densities $\mathcal G_p := \{g(\cdot \mid \theta),\ \theta \in \Theta\}$ (viewed as a family of pdfs conditional on a parameter $\theta$ in some set $\Theta$) and $w(\theta)$ is a pdf that belongs to a (possibly different) family of densities $\mathcal W_q$. Note how this joint pdf satisfies the permutational invariance discussed earlier and can thus be useful as a model for training data which is part of an exchangeable sequence of random variables.

Following standard practice in a Bayesian context, instead of writing $f_X(x)$ and $f_{X \mid Y}(x \mid y)$ for the pdf of $X$ and the conditional pdf of $X$ given $Y$, one simply writes $f(x)$ and $f(x \mid y)$; if $Y$ is a different random variable, its pdf (at $y$) is denoted by $f(y)$. Thus, we will use the same symbol $g$ for different (conditional) approximating probability densities and $f$ for the different (conditional) true and unknown probability densities. Using Bayesian notation, we can write $g(\tau \mid \theta) = \prod_{i=1}^n g(x_i \mid \theta)$, and thus the approximating joint pdf can be written as

$$g(\tau) = \int g(\tau \mid \theta)\,w(\theta)\,\mathrm d\theta,$$

and the true unknown joint pdf as $f(\tau) = f(x_1, \ldots, x_n)$. Once $\mathcal G_p$ and $\mathcal W_q$ are specified, selecting an approximating function $g(x)$ of the form $g(x) = \int g(x \mid \theta)\,w(\theta)\,\mathrm d\theta$ is equivalent to selecting a suitable $w$ from $\mathcal W_q$. As before, we can use the Kullback-Leibler risk to measure the discrepancy between the proposed approximation $g(\tau)$ and the true $f(\tau)$:

$$\ell(g) = \mathbb E \ln \frac{f(T)}{g(T)} = \int f(\tau) \ln \frac{f(\tau)}{\int g(\tau \mid \theta)\,w(\theta)\,\mathrm d\theta}\,\mathrm d\tau.$$

The main difference with the earlier (iid) setting is that, since the training data $T$ is not necessarily iid (it may be exchangeable, for example), the expectation must be with respect to the joint density of $T$, not with respect to the marginal $f(x)$. Minimizing the training loss is equivalent to maximizing the likelihood of the training data $\tau$; that is, solving the optimization problem

$$\max_{w \in \mathcal W_q} \int g(\tau \mid \theta)\,w(\theta)\,\mathrm d\theta,$$

where the maximization is over an appropriate class $\mathcal W_q$ of density functions that is believed to result in the smallest KL risk.
Suppose that we have a rough guess, denoted $w_0(\theta)$, for the best $w$ that minimizes the Kullback-Leibler risk. We can always increase the resulting likelihood $L_0 := \int g(\tau \mid \theta)\,w_0(\theta)\,\mathrm d\theta$ by instead using the density $w_1(\theta) := w_0(\theta)\,g(\tau \mid \theta)/L_0$, giving a likelihood $L_1 := \int g(\tau \mid \theta)\,w_1(\theta)\,\mathrm d\theta$. To see this, write $L_1$ and $L_0$ as expectations with respect to $w_0$. In particular, we can write

$$L_0 = \mathbb E_{w_0}\,g(\tau \mid \theta) \quad \text{and} \quad L_1 = \frac{\mathbb E_{w_0}\,g^2(\tau \mid \theta)}{\mathbb E_{w_0}\,g(\tau \mid \theta)}.$$

It follows that

$$L_1 - L_0 = \frac{\mathbb V\mathrm{ar}_{w_0}[g(\tau \mid \theta)]}{L_0} \geq 0.$$

We may thus expect to obtain better predictions using $w_1$ instead of $w_0$, because $w_1$ has taken into account the observed data $\tau$ and increased the likelihood of the model. In fact, if we iterate this process (see the exercises) and create a sequence of densities $w_1, w_2, \ldots$ such that $w_t(\theta) \propto w_{t-1}(\theta)\,g(\tau \mid \theta)$, then $w_t(\theta)$ concentrates more and more of its probability mass at the maximum likelihood estimator $\widehat\theta$, and in the limit equals a (degenerate) point-mass pdf at $\widehat\theta$. In other words, in the limit we recover the maximum likelihood method: $g_\tau(x) = g(x \mid \widehat\theta)$. Thus, unless the class of densities $\mathcal W_q$ is restricted to be non-degenerate, maximizing the likelihood as much as possible leads to a degenerate choice for $w(\theta)$.

In many situations, the maximum likelihood estimate is either not an appropriate approximation to $f(x)$ (see the example below), or simply fails to exist. In such cases, given an initial non-degenerate guess $w_0(\theta) = g(\theta)$, one can obtain a more appropriate and non-degenerate approximation to $f(\tau)$ by taking $w(\theta) = w_1(\theta) \propto g(\tau \mid \theta)\,g(\theta)$, giving the following Bayesian learner of $f(x)$:

$$g_\tau(x) := \frac{\int g(x \mid \theta)\,g(\tau \mid \theta)\,g(\theta)\,\mathrm d\theta}{\int g(\tau \mid \theta)\,g(\theta)\,\mathrm d\theta},$$

where $\int g(\tau \mid \theta)\,g(\theta)\,\mathrm d\theta = g(\tau)$. Using Bayes' formula for probability densities,

$$g(\theta \mid \tau) = \frac{g(\tau \mid \theta)\,g(\theta)}{g(\tau)},$$

we can write $w_1(\theta) = g(\theta \mid \tau)$. With this notation, we have the following definitions.

Definition: Prior, Likelihood, and Posterior. Let $\tau$ and $\mathcal G := \{g(\cdot \mid \theta),\ \theta \in \Theta\}$ be the training set and family of approximating functions.
- A pdf $g(\theta)$ that reflects our a priori beliefs about $\theta$ is called the prior pdf.
- The conditional pdf $g(\tau \mid \theta)$ is called the likelihood.
- Inference about $\theta$ is given by the posterior pdf $g(\theta \mid \tau)$, which is proportional to the product of the prior and the likelihood:

$$g(\theta \mid \tau) \propto g(\tau \mid \theta)\,g(\theta).$$
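On a discretized parameter space, the proportionality $g(\theta \mid \tau) \propto g(\tau \mid \theta)\,g(\theta)$ is a few lines of code. A minimal sketch (assuming, purely for illustration, a $\mathcal N(\theta, 1)$ likelihood and a rough $\mathcal N(0, 10)$ prior on a grid of $\theta$ values):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(1.0, 1.0, size=20)          # data; likelihood is N(theta, 1)

theta = np.linspace(-3, 5, 1001)           # grid over the parameter
prior = np.exp(-theta**2 / (2 * 10))       # unnormalized N(0, 10) prior
loglik = -0.5 * ((x[:, None] - theta)**2).sum(axis=0)

post = prior * np.exp(loglik - loglik.max())     # prior x likelihood
post /= post.sum() * (theta[1] - theta[0])       # normalize to a density

print(theta[np.argmax(post)])   # posterior mode, close to the sample mean
```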
Remark (Early Stopping). The Bayes iteration above is an example of an "early stopping" heuristic for maximum likelihood optimization, where we exit after only one step. As observed above, if we keep iterating, we obtain the maximum likelihood estimate (MLE). In a sense, the Bayes rule provides a regularization of the MLE. Regularization, and the benefit of early stopping rules for it, are discussed in more detail later.

On the one hand, the initial guess $w_0(\theta) = g(\theta)$ conveys the a priori (prior to training the Bayesian learner) information about the optimal density in $\mathcal W_q$ that minimizes the KL risk. Using this prior $g(\theta)$, the Bayesian approximation to $f(x)$ is the prior predictive density

$$g(x) = \int g(x \mid \theta)\,g(\theta)\,\mathrm d\theta.$$

On the other hand, the posterior pdf conveys improved knowledge about this optimal density in $\mathcal W_q$ after training with $\tau$. Using the posterior $g(\theta \mid \tau)$, the Bayesian learner of $f(x)$ is the posterior predictive density

$$g_\tau(x) = \int g(x \mid \theta)\,g(\theta \mid \tau)\,\mathrm d\theta,$$

where we have assumed that $g(x \mid \theta, \tau) = g(x \mid \theta)$; that is, the likelihood depends on $\tau$ only through the parameter $\theta$.

The choice of the prior is typically governed by two considerations:
1. the prior should be simple enough to facilitate the computation or simulation of the posterior pdf;
2. the prior should be general enough to model ignorance of the parameter of interest.

Priors that do not convey much knowledge of the parameter are said to be uninformative; the uniform or flat prior in the example below is frequently used. For the purpose of analytical and numerical computations, we can view $\theta$ as a random vector with prior density $g(\theta)$, which after training is updated to the posterior density $g(\theta \mid \tau)$. This thinking allows us to write, for example, $g(\tau) = \int g(\tau \mid \theta)\,g(\theta)\,\mathrm d\theta$, ignoring any constants that do not depend on the argument of the densities.

Example (Normal Model). Suppose that the training data $\tau = \{x_1, \ldots, x_n\}$ is modeled using the likelihood $g(\tau \mid \theta)$ that is the pdf of

$$(X_1, \ldots, X_n \mid \theta) \sim_{\text{iid}} \mathcal N(\mu, \sigma^2), \quad \text{where } \theta := [\mu, \sigma^2]^\top.$$

Next, we need to specify the prior distribution of $\theta$ to complete the model. We can specify prior distributions for $\mu$ and $\sigma^2$ separately and then take their product to obtain the prior for the vector $\theta$ (assuming independence). A possible prior distribution for $\mu$ is

$$\mu \sim \mathcal N(\nu, \phi^2).$$
It is typical to refer to any parameters of the prior density as hyperparameters of the Bayesian model. Instead of directly giving a prior for $\sigma^2$ (or $\sigma$), it turns out to be convenient to give the following prior distribution to $1/\sigma^2$:

$$\frac{1}{\sigma^2} \sim \mathrm{Gamma}(\alpha, \beta).$$

The smaller $\alpha$ and $\beta$ are, the less informative is the prior. Under this prior, $\sigma^2$ is said to have an inverse gamma distribution ("reciprocal gamma distribution" would have been a better name): if $1/Z \sim \mathrm{Gamma}(\alpha, \beta)$, then the pdf of $Z$ is proportional to $e^{-\beta/z}/z^{\alpha+1}$ (see the exercises). The Bayesian posterior is then given by

$$g(\mu, \sigma^2 \mid \tau) \propto g(\mu)\,g(\sigma^2)\,g(\tau \mid \mu, \sigma^2) \propto \exp\Big({-\frac{(\mu - \nu)^2}{2\phi^2}}\Big)\,(\sigma^2)^{-\alpha - 1 - n/2}\,\exp\Big({-\frac{\beta}{\sigma^2}}\Big)\exp\Big({-\frac{n(\mu - \bar x_n)^2 + n S_n^2}{2\sigma^2}}\Big),$$

where $S_n^2 := \frac1n \sum_{i=1}^n (x_i - \bar x_n)^2$ is the (scaled) sample variance. All inference about $(\mu, \sigma^2)$ is then represented by the posterior pdf. To facilitate computations, it is helpful to find out whether the posterior belongs to a recognizable family of distributions. For example, the conditional pdf of $\mu$ given $\sigma^2$ and $\tau$ is

$$g(\mu \mid \sigma^2, \tau) \propto \exp\Big({-\frac{(\mu - \nu)^2}{2\phi^2} - \frac{n(\mu - \bar x_n)^2}{2\sigma^2}}\Big),$$

which after simplification can be recognized as the pdf of

$$(\mu \mid \sigma^2, \tau) \sim \mathcal N\Big(\gamma_n \bar x_n + (1 - \gamma_n)\nu,\ \gamma_n \frac{\sigma^2}{n}\Big),$$

where we have defined the weight parameter

$$\gamma_n := \frac{\phi^2}{\phi^2 + \sigma^2/n}.$$

We can then see that the posterior mean $\mathbb E[\mu \mid \sigma^2, \tau] = \gamma_n \bar x_n + (1 - \gamma_n)\nu$ is a weighted linear combination of the prior mean $\nu$ and the sample average $\bar x_n$. Further, as $n \to \infty$, the weight $\gamma_n \to 1$, and thus the posterior mean approaches the maximum likelihood estimate $\bar x_n$.

It is sometimes possible to use a prior $g(\theta)$ that is not a bona fide probability density, in the sense that $\int g(\theta)\,\mathrm d\theta = \infty$, as long as the resulting posterior $g(\theta \mid \tau) \propto g(\tau \mid \theta)\,g(\theta)$ is a proper pdf. Such a prior is called an improper prior.

Example (Normal Model (cont.)). An example of an improper prior is obtained from the prior above when we let $\phi \to \infty$ (the larger $\phi$ is, the more uninformative is the prior). Then $g(\mu)$ is a flat prior, but $\int g(\mu)\,\mathrm d\mu = \infty$, making it an improper prior. Nevertheless, the posterior is a proper density, and in particular the conditional posterior of $(\mu \mid \sigma^2, \tau)$ simplifies to

$$(\mu \mid \sigma^2, \tau) \sim \mathcal N\Big(\bar x_n, \frac{\sigma^2}{n}\Big),$$

because the weight parameter $\gamma_n$ goes to $1$ as $\phi \to \infty$.
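A small numerical illustration of this conjugate update for $\mu$ (with arbitrary illustrative data and hyperparameters, and with $\sigma^2$ treated as known for simplicity):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(2.0, 1.0, size=50)     # data
nu, phi2 = 0.0, 1.0                   # prior mean and variance for mu
sigma2 = 1.0                          # treated here as fixed and known
n, xbar = len(x), x.mean()

gamma_n = phi2 / (phi2 + sigma2 / n)  # weight parameter
post_mean = gamma_n * xbar + (1 - gamma_n) * nu
post_var = gamma_n * sigma2 / n

print(post_mean, post_var)            # xbar shrunk slightly towards nu
```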
The improper prior also allows us to simplify the posterior marginal for $\sigma^2$:

$$g(\sigma^2 \mid \tau) = \int g(\mu, \sigma^2 \mid \tau)\,\mathrm d\mu \propto (\sigma^2)^{-(n-1)/2 - 1}\,\exp\Big({-\frac{n S_n^2}{2\sigma^2}}\Big),$$

which we recognize as the density corresponding to

$$\Big(\frac{1}{\sigma^2}\,\Big|\,\tau\Big) \sim \mathrm{Gamma}\Big(\frac{n-1}{2}, \frac{n S_n^2}{2}\Big).$$

In addition to the limit $\phi \to \infty$, we can also use an improper prior for $\sigma^2$: if we take the limits $\alpha \to 0$ and $\beta \to 0$, then we obtain the improper prior $g(\sigma^2) \propto 1/\sigma^2$. In this case, the posterior marginal density for $\sigma^2$ implies that

$$\Big(\frac{n S_n^2}{\sigma^2}\,\Big|\,\tau\Big) \sim \chi^2_{n-1},$$

and the posterior marginal density for $\mu$ implies that

$$\Big(\frac{\mu - \bar x_n}{S_n/\sqrt{n-1}}\,\Big|\,\tau\Big) \sim t_{n-1}.$$

In general, deriving a simple formula for the posterior density of $\theta$ is either impossible or too tedious. Instead, the Monte Carlo methods of the next chapter can be used to simulate (approximately) from the posterior for the purposes of inference and prediction.

One way in which a distributional result such as the one above can be useful is in the construction of a credible interval $\mathcal I$ for a parameter $\mu$; that is, an interval such that the probability $\mathbb P[\mu \in \mathcal I \mid \tau]$ is equal to, say, $1 - \alpha$. For example, a symmetric $(1 - \alpha)$ credible interval for $\mu$ is

$$\Big(\bar x_n - t_{1-\alpha/2}\,\frac{S_n}{\sqrt{n-1}},\ \bar x_n + t_{1-\alpha/2}\,\frac{S_n}{\sqrt{n-1}}\Big),$$

where $t_{1-\alpha/2}$ is the $(1 - \alpha/2)$-quantile of the $t_{n-1}$ distribution. Note that the credible interval is not a random object and that the parameter $\mu$ is interpreted as a random variable with a distribution. This is unlike the case of classical confidence intervals, where the parameter is nonrandom, but the interval is (the outcome of) a random object.

As a generalization of the Bayesian credible interval we can define a $(1 - \alpha)$ credible region, which is any set $\mathcal R$ satisfying

$$\mathbb P[\theta \in \mathcal R \mid \tau] = \int_{\theta \in \mathcal R} g(\theta \mid \tau)\,\mathrm d\theta = 1 - \alpha.$$
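Given data, the credible interval above is immediate to compute. A minimal sketch (with arbitrary simulated data, using scipy.stats for the $t$ quantile):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(11)
x = rng.normal(2.0, 1.0, size=30)
n, xbar = len(x), x.mean()
Sn = np.sqrt(np.sum((x - xbar)**2) / n)   # scaled sample std, as in the text

alpha = 0.05
q = t.ppf(1 - alpha/2, n - 1)             # (1 - alpha/2)-quantile of t_{n-1}
half = q * Sn / np.sqrt(n - 1)
print(xbar - half, xbar + half)           # 95% credible interval for mu
```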
Example (Bayesian Regularization of Maximum Likelihood). Consider modeling the number of deaths during birth in a maternity ward. Suppose that the hospital data consists of $\tau = \{x_1, \ldots, x_n\}$, with $x_i = 1$ if the $i$-th baby has died during birth and $x_i = 0$ otherwise, for $i = 1, \ldots, n$. A possible Bayesian model for the data is $\theta \sim \mathcal U(0, 1)$ (uniform prior), with $(X_1, \ldots, X_n \mid \theta) \sim_{\text{iid}} \mathrm{Ber}(\theta)$. The likelihood is therefore

$$g(\tau \mid \theta) = \prod_{i=1}^n \theta^{x_i}(1 - \theta)^{1 - x_i} = \theta^s (1 - \theta)^{n - s},$$

where $s := x_1 + \cdots + x_n$ is the total number of deaths. Since $g(\theta) = 1$, the posterior pdf is

$$g(\theta \mid \tau) \propto \theta^s (1 - \theta)^{n - s}, \quad \theta \in [0, 1],$$

which is the pdf of the $\mathrm{Beta}(s + 1, n - s + 1)$ distribution; the normalization constant is $(n + 1)\binom{n}{s}$. The posterior pdf is shown in the figure for $(s, n) = (0, 100)$.

Figure: Posterior pdf for $\theta$, with $n = 100$ and $s = 0$.

It is not difficult to see that the maximum a posteriori (MAP) estimate of $\theta$ (the mode or maximizer of the posterior density) is

$$\mathrm{argmax}_\theta\ g(\theta \mid \tau) = \frac sn,$$

which agrees with the maximum likelihood estimate. The figure also shows that the left one-sided 95% credible interval for $\theta$ is $[0, 0.0292]$, where $0.0292$ is the $0.95$ quantile (rounded) of the $\mathrm{Beta}(1, 101)$ distribution. Observe that for $(s, n) = (0, 100)$ the maximum likelihood estimate $\widehat\theta = 0$ infers that deaths at birth are not possible. We know that this inference is wrong: the probability of death can never be zero; it is simply (and fortunately) too small to be inferred accurately from a sample size of $100$. In contrast to the maximum likelihood estimate, the posterior mean

$$\mathbb E[\theta \mid \tau] = \frac{s + 1}{n + 2}$$

is not zero for $(s, n) = (0, 100)$ and provides the more reasonable point estimate of $1/102$ for the probability of death.
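The quantities in this example take only a few lines to reproduce. A sketch using scipy.stats (the values below match the rounded numbers quoted in the text):

```python
import numpy as np
from scipy.stats import beta

s, n = 0, 100                      # observed deaths and sample size
post = beta(s + 1, n - s + 1)      # posterior Beta(1, 101)

map_est = s / n                    # MAP estimate (mode), equals the MLE
post_mean = (s + 1) / (n + 2)      # posterior mean, 1/102
upper = post.ppf(0.95)             # right end of the one-sided 95% interval

print(map_est, post_mean, upper)   # 0.0, 0.0098..., 0.0292...
```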
In addition, while computing a Bayesian credible interval poses no conceptual difficulties, it is not simple to derive a confidence interval for the maximum likelihood estimate of $\theta$, because the likelihood as a function of $\theta$ is not differentiable at $\theta = 0$. As a result of this lack of smoothness, the usual confidence intervals based on the normal approximation cannot be used.

We now return to the unsupervised learning setting above, but consider it from a Bayesian perspective. Recall that the Kullback-Leibler risk for an approximating function $g$ is

$$\ell(g) = \int f(\tau')\big[\ln f(\tau') - \ln g(\tau')\big]\,\mathrm d\tau',$$

where $T'$ denotes the test data. Since $\int f(\tau') \ln f(\tau')\,\mathrm d\tau'$ plays no role in minimizing the risk, we consider instead the cross-entropy risk, defined as

$$\ell(g) = -\int f(\tau') \ln g(\tau')\,\mathrm d\tau'.$$

Note that the smallest possible cross-entropy risk is $\ell^* = -\int f(\tau') \ln f(\tau')\,\mathrm d\tau'$. The expected generalization risk of the Bayesian learner can then be decomposed as

$$\mathbb E\,\ell(g_{T_n}) = \ell^* + \underbrace{\mathbb E \ln \frac{f(T')}{g(T')}}_{\text{``bias'' component}} + \underbrace{\mathbb E \ln \frac{g(T')}{g(T' \mid T_n)}}_{\text{``variance'' component}},$$

where $g_{T_n}(\tau') = g(\tau' \mid T_n) = \int g(\tau' \mid \theta)\,g(\theta \mid T_n)\,\mathrm d\theta$ is the posterior predictive density after observing $T_n$. Assuming that the sets $T_n$ and $T'_n$ are comprised of iid random variables with density $f$, we can show (see the exercises) that the expected generalization risk simplifies to

$$\mathbb E\,\ell(g_{T_n}) = \mathbb E \ln g(T_n) - \mathbb E \ln g(T_{2n}),$$

where $g(\tau_n)$ and $g(\tau_{2n})$ are the prior predictive densities of $T_n$ and $T_{2n}$, respectively. Let $\widehat\theta_n = \mathrm{argmax}_\theta\,g(\theta \mid T_n)$ be the MAP estimator and $\theta^* := \mathrm{argmax}_\theta\,\mathbb E \ln g(X \mid \theta)$. Assuming that $\widehat\theta_n$ converges to $\theta^*$ (with probability one) and that $\ln g(T_n \mid \widehat\theta_n) = \ln g(T_n \mid \theta^*) + \mathcal O(1)$, we can use the following large-sample approximation of the expected generalization risk.

Theorem: Approximating the Bayesian Cross-Entropy Risk. For $n \to \infty$, the expected cross-entropy generalization risk satisfies

$$\mathbb E\,\ell(g_{T_n}) = \mathbb E \ln g(T_n) - \mathbb E \ln g(T_{2n}),$$

where (with $p$ the dimension of the parameter vector $\theta$ and $\widehat\theta_n$ the MAP estimator)

$$\mathbb E \ln g(T_n) \simeq \mathbb E \ln g(T_n \mid \widehat\theta_n) - \frac p2 \ln n.$$
Proof. To show the second approximation, we apply a Laplace-type approximation to $\ln \int e^{-n r_n(\theta)}\,g(\theta)\,\mathrm d\theta$, where

$$r_n(\theta) := -\frac1n \sum_{i=1}^n \ln g(X_i \mid \theta).$$

This gives (with probability one)

$$\ln g(T_n) = \ln \int e^{-n r_n(\theta)}\,g(\theta)\,\mathrm d\theta \simeq -n\,r_n(\widehat\theta_n) - \frac p2 \ln n.$$

Taking expectations on both sides and using $n\,\mathbb E\,r_n(\widehat\theta_n) = n\,r(\theta^*) + \mathcal O(1)$, we deduce the second approximation. For the first statement, we derive the asymptotic approximation of $\mathbb E \ln g(T_{2n})$ by repeating the argument, replacing $n$ with $2n$ where necessary; thus, we obtain

$$\mathbb E \ln g(T_{2n}) \simeq -2n\,r(\theta^*) - \frac p2 \ln(2n),$$

and the result follows from the identity $\mathbb E\,\ell(g_{T_n}) = \mathbb E \ln g(T_n) - \mathbb E \ln g(T_{2n})$.

The results of this theorem have two major implications for model selection and assessment. First, the theorem suggests that $-\ln g(\tau_n)$ can be used as a crude (leading-order) asymptotic approximation to the expected generalization risk for large $n$ and fixed $p$. In this context, the prior predictive density $g(\tau_n)$ is usually called the model evidence or marginal likelihood for the class $\mathcal G_p$. Since the integral $g(\tau_n) = \int g(\tau_n \mid \theta)\,g(\theta)\,\mathrm d\theta$ is rarely available in closed form, the exact computation of the model evidence is typically not feasible and may require Monte Carlo estimation methods. Second, when the model evidence is difficult to compute via Monte Carlo methods or otherwise, the theorem suggests that we can use the large-sample approximation

$$-\ln g(\tau_n) \approx -\ln g(\tau_n \mid \widehat\theta_n) + \frac p2 \ln n.$$

The asymptotic approximation on the right-hand side is called the Bayesian information criterion (BIC). We prefer the class $\mathcal G_p$ with the smallest BIC. The BIC is typically used when the model evidence is difficult to compute and $n$ is sufficiently larger than $p$. For fixed $p$, and as $n$ becomes larger and larger, the BIC becomes a more and more accurate estimator of $-\ln g(\tau_n)$. Note that the BIC approximation is valid even when the true density $f \notin \mathcal G_p$.

The BIC provides an alternative to the Akaike information criterion (AIC) for model selection. However, while the BIC approximation does not assume that the true model $f$ belongs to the parametric class under consideration, the AIC assumes that $f \in \mathcal G_p$. Thus, the AIC is merely a heuristic approximation based on the asymptotic approximations in the theorem.

Although the above Bayesian theory has been presented in an unsupervised learning setting, it can be readily extended to the supervised case; we only need to relabel the training set $T_n$. In particular, when (as is typical for regression models) the training responses $Y_1, \ldots, Y_n$ are considered as random variables but the corresponding feature vectors $x_1, \ldots, x_n$ are viewed as being fixed, then $T_n$ is the collection of random responses $\{Y_1, \ldots, Y_n\}$. Alternatively, we can simply identify $T_n$ with the response vector $Y = [Y_1, \ldots, Y_n]^\top$. We will adopt this notation in the next example.
Example (Polynomial Regression (cont.)). Consider the polynomial regression example once again, but now in a Bayesian framework, where the prior knowledge on $(\sigma^2, \beta)$ is specified by $g(\sigma^2) = 1/\sigma^2$ and $g(\beta \mid \sigma^2) = \mathcal N(0, \sigma^2 D)$, and $D$ is a (matrix) hyperparameter. Let $\Sigma := (X^\top X + D^{-1})^{-1}$. Then, completing the square in the exponent, the posterior can be written as

$$g(\beta, \sigma^2 \mid y) = \frac{g(\sigma^2)\,g(\beta \mid \sigma^2)\,g(y \mid \beta, \sigma^2)}{g(y)} \propto (\sigma^2)^{-(n + p)/2 - 1} \exp\Big({-\frac{(\beta - \bar\beta)^\top \Sigma^{-1} (\beta - \bar\beta)}{2\sigma^2}}\Big) \exp\Big({-\frac{(n + 2)\,\bar\sigma^2}{2\sigma^2}}\Big),$$

where $\bar\beta := \Sigma X^\top y$ and $\bar\sigma^2 := y^\top(I_n - X \Sigma X^\top)\,y/(n + 2)$ are the MAP estimates of $\beta$ and $\sigma^2$, and $g(y)$ is the model evidence for $\mathcal G_p$:

$$g(y) = \iint g(\beta, \sigma^2, y)\,\mathrm d\beta\,\mathrm d\sigma^2 = \frac{\Gamma(n/2)}{\pi^{n/2}}\,\frac{|\Sigma|^{1/2}}{|D|^{1/2}}\,\big((n + 2)\,\bar\sigma^2\big)^{-n/2}.$$

Therefore, based on the theorem above, we have

$$\mathbb E\,\ell(g_{T_n}) \approx -\ln g(y) = \frac n2 \ln\big((n + 2)\,\bar\sigma^2\big) + \frac n2 \ln \pi - \ln \Gamma(n/2) + \frac12 \ln|D| - \frac12 \ln|\Sigma|.$$

On the other hand, minus the log-likelihood of $\tau$ can be written as

$$-\ln g(y \mid \beta, \sigma^2) = \frac n2 \ln(2\pi\sigma^2) + \frac{\|y - X\beta\|^2}{2\sigma^2},$$

so that, with the maximum likelihood estimates $\widehat\beta$ and $\widehat{\sigma^2} = \|y - X\widehat\beta\|^2/n$, the BIC approximation is

$$-\ln g(y \mid \widehat\beta, \widehat{\sigma^2}) + \frac{p + 1}{2} \ln n = \frac n2\big[\ln(2\pi) + \ln \widehat{\sigma^2} + 1\big] + \frac{p + 1}{2} \ln n,$$

where the extra $\ln n$ term in $\frac{p+1}{2} \ln n$ is due to the inclusion of $\sigma^2$ in $\theta = (\sigma^2, \beta)$. The figure below shows the model evidence and its BIC approximation, where we used a diffuse hyperparameter $D$ for the prior density of $\beta$. We can see that both approximations exhibit a pronounced minimum at $p = 4$, thus identifying the true polynomial regression model. Compare the overall qualitative shape of the cross-entropy risk estimate with the shape of the squared-error risk estimate shown earlier.
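A sketch of the BIC computation for the running example, reusing the training losses from the polyreg2.py snippet reconstructed earlier (under the squared-error loss, trainloss[p] is exactly the maximum likelihood estimate $\widehat{\sigma^2_p} = \|y - X\widehat\beta\|^2/n$ for the model with $p$ parameters):

```python
from polyreg2 import *   # n, p_range, trainloss from the running example
import numpy as np

bic = [n/2 * (np.log(2 * np.pi) + np.log(trainloss[p]) + 1)
       + (p + 1)/2 * np.log(n) for p in p_range]

print(p_range[np.argmin(bic)])   # the minimizing model order, expected p = 4

plt.plot(p_range, bic, 'o-')
plt.xlabel('Number of parameters $p$')
plt.ylabel('BIC')
plt.show()
```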
Figure: The BIC and marginal likelihood used for model selection.

It is possible to give the model complexity parameter $p$ a Bayesian treatment, in which we define a prior density on the set of all models under consideration. For example, let $g(p)$, $p = 1, \ldots, m$, be a prior density on $m$ candidate models. Treating the model complexity index $p$ as an additional parameter to $\theta$, and applying Bayes' formula, the posterior for $(\theta_p, p)$ can be written as

$$g(\theta_p, p \mid \tau) = \frac{g(\tau \mid \theta_p, p)\,g(\theta_p \mid p)\,g(p)}{g(\tau)} = \underbrace{g(\theta_p \mid p, \tau)}_{\text{posterior of } \theta_p \text{ given model } p} \times \underbrace{g(p \mid \tau)}_{\text{posterior of model } p}.$$

The model evidence for a fixed $p$ is now interpreted as the prior predictive density of $\tau$, conditional on the model $p$:

$$g(\tau \mid p) = \int g(\tau \mid \theta_p, p)\,g(\theta_p \mid p)\,\mathrm d\theta_p,$$

and the quantity $g(\tau) = \sum_{p=1}^m g(\tau \mid p)\,g(p)$ is interpreted as the marginal likelihood of all the $m$ candidate models. Finally, a simple method for model selection is to pick the index $\widehat p$ with the largest posterior probability:

$$\widehat p = \mathrm{argmax}_p\ g(p \mid \tau) = \mathrm{argmax}_p\ g(\tau \mid p)\,g(p).$$

Example (Polynomial Regression (cont.)). Let us revisit the polynomial regression example by giving the parameter $p = 1, \ldots, m$ a Bayesian treatment, with $m$ the largest number of parameters considered. We assume that the prior $g(p) = 1/m$ is flat and uninformative, so that the posterior is given by

$$g(p \mid y) \propto g(y \mid p) = \frac{\Gamma(n/2)}{\pi^{n/2}}\,\frac{|\Sigma_p|^{1/2}}{|D_p|^{1/2}}\,\big((n + 2)\,\bar\sigma_p^2\big)^{-n/2},$$
where all quantities in $g(y \mid p)$ are computed using the first $p$ columns of the model matrix $X$. The figure below shows the resulting posterior density $g(p \mid y)$. The figure also shows the posterior density $\widetilde g(p \mid y) \propto \exp(-\mathrm{BIC}_p)$, where

$$\mathrm{BIC}_p := \frac n2\big[\ln(2\pi) + \ln \widehat{\sigma^2_p} + 1\big] + \frac{p + 1}{2} \ln n$$

is derived from the BIC approximation. In both cases, there is a clear maximum at $p = 4$, suggesting that a third-degree polynomial is the most appropriate model for the data.

Figure: Posterior probabilities for each polynomial model of degree $p - 1$.

Suppose that we wish to compare two models, say model $p = 1$ and model $p = 2$. Instead of computing the posterior $g(p \mid \tau)$ explicitly, we can compare the posterior odds ratio:

$$\frac{g(p = 1 \mid \tau)}{g(p = 2 \mid \tau)} = \frac{g(p = 1)}{g(p = 2)} \times \underbrace{\frac{g(\tau \mid p = 1)}{g(\tau \mid p = 2)}}_{\text{Bayes factor } B_{1 \mid 2}}.$$

This gives rise to the Bayes factor $B_{1 \mid 2}$, whose value signifies the strength of the evidence in favor of model $1$ over model $2$. In particular, $B_{1 \mid 2} > 1$ means that the evidence in favor of model $1$ is larger.

Example (Savage-Dickey Ratio). Suppose that we have two models. Model $p = 2$ has likelihood $g(\tau \mid \mu, \nu, p = 2)$, depending on two parameters. Model $p = 1$ has the same functional form for the likelihood, but now $\nu$ is fixed to some (known) $\nu_0$; that is, $g(\tau \mid \mu, p = 1) = g(\tau \mid \mu, \nu_0, p = 2)$.
We also assume that the prior information on $\mu$ for model $p = 1$ is the same as that for model $p = 2$, conditioned on $\nu = \nu_0$; that is, we assume $g(\mu \mid p = 1) = g(\mu \mid \nu = \nu_0, p = 2)$. As model $2$ contains model $1$ as a special case, the latter is said to be nested inside model $2$. We can formally write (see also the exercises)

$$g(\tau \mid p = 1) = \int g(\tau \mid \mu, p = 1)\,g(\mu \mid p = 1)\,\mathrm d\mu = \int g(\tau \mid \mu, \nu_0, p = 2)\,g(\mu \mid \nu_0, p = 2)\,\mathrm d\mu = g(\tau \mid \nu_0, p = 2).$$

Hence, the Bayes factor simplifies to

$$B_{1 \mid 2} = \frac{g(\tau \mid p = 1)}{g(\tau \mid p = 2)} = \frac{g(\tau \mid \nu = \nu_0, p = 2)}{g(\tau \mid p = 2)} = \frac{g(\nu = \nu_0 \mid \tau, p = 2)}{g(\nu = \nu_0 \mid p = 2)}.$$

In other words, $B_{1 \mid 2}$ is the ratio of the posterior density to the prior density of $\nu$, evaluated at $\nu = \nu_0$ and both under the unrestricted model $p = 2$. This ratio of posterior to prior densities is called the Savage-Dickey density ratio.

Whether to use a classical (frequentist) or Bayesian model is largely a question of convenience. Classical inference is useful because it comes with a huge repository of ready-to-use results and requires no (subjective) prior information on the parameters. Bayesian models are useful because the whole theory is based on the elegant Bayes' formula, and uncertainty in the inference (e.g., via credible intervals) can be quantified much more naturally than with classical confidence intervals. A usual practice is to "Bayesify" a classical model, simply by adding some prior information on the parameters.

Further Reading

A popular textbook on statistical learning is [...]. Accessible treatments of mathematical statistics can be found, for example, in [...], [...], and [...]; more advanced treatments are given in [...], [...], and [...]. A good overview of modern-day statistical inference is given in [...]. Classical references on pattern classification and machine learning are [...] and [...]. For advanced learning theory, including information theory and Rademacher complexity, we refer to [...] and [...]. An applied reference for Bayesian inference is [...]. For a survey of numerical techniques relevant to computational statistics, see [...].

Exercises

1. Suppose that the loss function is the piecewise linear function

$$\mathrm{loss}(\widehat y, y) = \alpha\,(\widehat y - y)_+ + \beta\,(y - \widehat y)_+, \quad \alpha, \beta > 0,$$

where $c_+$ is equal to $c$ if $c > 0$ and zero otherwise. Show that the minimizer of the risk $\ell(g) = \mathbb E\,\mathrm{loss}(g(X), Y)$ satisfies

$$\mathbb P[Y < g^*(x) \mid X = x] = \frac{\beta}{\alpha + \beta}.$$

In other words, $g^*(x)$ is the $\beta/(\alpha + \beta)$ quantile of $Y$, conditional on $X = x$.
2. Show that, for the squared-error loss, the approximation error $\ell(g^{\mathcal G}) - \ell(g^*)$ is equal to $\mathbb E\,(g^{\mathcal G}(X) - g^*(X))^2$. [Hint: expand $\ell(g^{\mathcal G}) = \mathbb E\,(g^{\mathcal G}(X) - g^*(X) + g^*(X) - Y)^2$.]

3. Suppose $\mathcal G$ is the class of linear functions. A linear function evaluated at a feature $x$ can be described as $g(x) = \beta^\top x$ for some parameter vector $\beta$ of appropriate dimension. Denote $g^{\mathcal G}(x) = x^\top \beta^{\mathcal G}$ and $g_T^{\mathcal G}(x) = x^\top \widehat\beta$. Show that

$$\mathbb E\big[(g_T^{\mathcal G}(X) - g^{\mathcal G}(X))(g^{\mathcal G}(X) - g^*(X))\big] = \mathbb E\big[(X^\top \widehat\beta - X^\top \beta^{\mathcal G})(X^\top \beta^{\mathcal G} - g^*(X))\big] = 0.$$

Hence, deduce that the statistical error is $\ell(g_T^{\mathcal G}) - \ell(g^{\mathcal G}) = \mathbb E\,(g_T^{\mathcal G}(X) - g^{\mathcal G}(X))^2$.

4. Show that the expected-optimism formula holds for the 0-1 loss with 0-1 response.

5. Let $X$ be an $n$-dimensional normal random vector with mean vector $\mu$ and covariance matrix $\Sigma$, where the determinant of $\Sigma$ is non-zero. Show that $X$ has joint probability density

$$f_X(x) = \frac{1}{\sqrt{(2\pi)^n |\Sigma|}}\,e^{-\frac12 (x - \mu)^\top \Sigma^{-1} (x - \mu)}, \quad x \in \mathbb R^n.$$

6. Let $\widehat x = A^+ y$. Using the defining properties of the pseudo-inverse, show that for any $x \in \mathbb R^p$,

$$\|A\widehat x - y\| \leq \|Ax - y\|.$$

7. Suppose that in the polynomial regression example we select the linear class of functions $\mathcal G_p$ with $p \geq 4$. Then $g^* \in \mathcal G_p$ and the approximation error is zero, because $g^{\mathcal G_p}(x) = g^*(x) = x^\top \beta_p$, where $\beta_p = [10, -140, 400, -250, 0, \ldots, 0]^\top$. Use the tower property to show that the learner $g_T(x) = x^\top \widehat\beta$, with $\widehat\beta = X^+ Y$ and assuming $\mathrm{rank}(X) = p$, is unbiased:

$$\mathbb E\,g_T(x) = g^*(x).$$

8. (Exercise 7 continued.) Observe that the learner $g_T$ can be written as a linear combination of the response variables: $g_T(x) = x^\top X^+ Y$. Prove that for any learner of the form $x^\top A Y$, where $A \in \mathbb R^{p \times n}$ is some matrix that satisfies $\mathbb E_X[x^\top A Y] = g^*(x)$, we have

$$\mathbb V\mathrm{ar}_X[x^\top X^+ Y] \leq \mathbb V\mathrm{ar}_X[x^\top A Y],$$

with equality achieved for $A = X^+$. This is called the Gauss-Markov inequality. Hence, using the Gauss-Markov inequality, deduce that for the unconditional variance $\mathbb V\mathrm{ar}\,g_T(x) \leq \mathbb V\mathrm{ar}[x^\top A Y]$, and that $A = X^+$ also minimizes the expected generalization risk.

9. Consider the setting of the polynomial regression example. Use the fact that $\mathbb E_X\,\widehat\beta = X^+ h^*(u)$, where $h^*(u) := \mathbb E[Y] = [h^*(u_1), \ldots, h^*(u_n)]^\top$, to show that the expected in-sample risk is

$$\mathbb E_X\,\ell_{\mathrm{in}}(g_T) = \ell^* + \frac{\|h^*(u) - X X^+ h^*(u)\|^2}{n} + \frac{\ell^* p}{n}.$$

Also, show that the expected statistical error is

$$\mathbb E_X\big[\ell(g_T) - \ell(g^{\mathcal G_p})\big] = \ell^*\,\mathrm{tr}\big((X^\top X)^{-1} H_p\big) + \big(X^+ h^*(u) - \beta_p\big)^\top H_p \big(X^+ h^*(u) - \beta_p\big),$$

where $H_p$ is the Hilbert matrix defined earlier.
10. Consider the setting of the polynomial regression example. Prove that

$$\sqrt n\,(\widehat\beta_n - \beta_p) \xrightarrow{d} \mathcal N\big(0,\ \ell^* H_p^{-1} + H_p^{-1} M_p H_p^{-1}\big),$$

where $M_p := \mathbb E\big[X X^\top (g^{\mathcal G_p}(X) - g^*(X))^2\big]$ is the $p \times p$ matrix with $(i, j)$-th entry

$$\int_0^1 u^{i + j - 2}\,\big(h^{\mathcal G_p}(u) - h^*(u)\big)^2\,\mathrm du,$$

and $H_p^{-1}$ is the inverse Hilbert matrix, with $(i, j)$-th entry

$$(-1)^{i + j}\,(i + j - 1)\binom{p + i - 1}{p - j}\binom{p + j - 1}{p - i}\binom{i + j - 2}{i - 1}^2.$$

Observe that $M_p = 0$ for $p \geq 4$, so that the matrix term $M_p$ is due to choosing a restrictive class $\mathcal G_p$ that does not contain the true prediction function.

11. In the polynomial regression example we saw that the statistical error can be expressed as

$$\int_0^1 \big([1, \ldots, u^{p-1}]\,(\widehat\beta - \beta_p)\big)^2\,\mathrm du = (\widehat\beta - \beta_p)^\top H_p\,(\widehat\beta - \beta_p).$$

By Exercise 10, the random vector $Z_n := \sqrt n\,(\widehat\beta_n - \beta_p)$ has asymptotically a multivariate normal distribution with mean vector $0$ and covariance matrix $V := \ell^* H_p^{-1} + H_p^{-1} M_p H_p^{-1}$. Use this to show that the expected statistical error is asymptotically

$$\mathbb E\,(\widehat\beta_n - \beta_p)^\top H_p\,(\widehat\beta_n - \beta_p) \simeq \frac{\mathrm{tr}(V H_p)}{n} = \frac{\ell^* p + \mathrm{tr}(M_p H_p^{-1})}{n}.$$

Plot this large-sample approximation of the expected statistical error and compare it with the outcome of the statistical error. We note a subtle technical detail: in general, convergence in distribution does not imply convergence in $L^2$-norm, and so here we have implicitly assumed that $\mathbb E\,\|Z_n\|^2$ converges to the corresponding constant for the limiting distribution.

12. Consider again the polynomial regression example. The result of Exercise 11 suggests that $\widehat\beta \to \beta_p$ as $n \to \infty$, where $\beta_p$ is the solution in the class $\mathcal G_p$ given earlier. Thus, the large-sample approximation of the pointwise bias of the learner $g_T(x) = x^\top \widehat\beta$ at $x = [1, u, \ldots, u^{p-1}]^\top$ is

$$\mathbb E\,g_T(x) - g^*(x) \approx [1, u, \ldots, u^{p-1}]\,\beta_p - [1, u, u^2, u^3]\,\beta^*.$$

Use Python to reproduce the figure below, which shows the (large-sample) pointwise squared bias of the learner for $p \in \{1, 2, 3\}$. Note how the bias is larger near the endpoints $u = 0$ and $u = 1$, and explain why the areas under the curves correspond to the approximation errors.
Figure: The large-sample pointwise squared bias of the learner for $p \in \{1, 2, 3\}$. The bias is zero for $p \geq 4$.

13. For our running example we can derive a large-sample approximation of the pointwise variance of the learner $g_T(x) = x^\top \widehat\beta_n$. In particular, show that for large $n$,

$$\mathbb V\mathrm{ar}\,g_T(x) \approx \frac{\ell^*\,x^\top H_p^{-1} x + x^\top H_p^{-1} M_p H_p^{-1} x}{n}.$$

The figure below shows this (large-sample) variance of the learner for different values of the predictor $u$ and model index $p$. Observe that the variance ultimately increases in $p$, and that it is smaller at $u = 1/2$ than closer to the endpoints $u = 0$ or $u = 1$. Since the bias is also larger near the endpoints, we deduce that the pointwise mean squared error is larger near the endpoints of the interval $[0, 1]$ than near its middle. In other words, the error is much smaller in the center of the data cloud than near its periphery.

Figure: The pointwise variance of the learner for various pairs of $p$ and $u$.
14. Let $h$ be a convex function and let $X$ be a random variable. Use the subgradient definition of convexity to prove Jensen's inequality:

$$\mathbb E\,h(X) \geq h(\mathbb E\,X).$$

15. Using Jensen's inequality, show that the Kullback-Leibler divergence between probability densities $f$ and $g$ is always positive; that is,

$$\mathbb E \ln \frac{f(X)}{g(X)} \geq 0,$$

where $X \sim f$.

16. The purpose of this exercise is to prove the following Vapnik-Chervonenkis bound: for any finite class $\mathcal G$ (containing only a finite number $|\mathcal G|$ of possible functions) and a general bounded loss function, $l \leq \mathrm{loss} \leq u$, the expected statistical error is bounded from above according to

$$\mathbb E\big[\ell(g_{T_n}^{\mathcal G}) - \ell(g^{\mathcal G})\big] \leq (u - l)\,\sqrt{\frac{\ln(2|\mathcal G|)}{2n}}.$$

Note how this bound conveniently does not depend on the distribution of the training set $T_n$ (which is typically unknown), but only on the complexity (i.e., cardinality) of the class $\mathcal G$. We can break up the proof into the following four parts.

(a) For a general function class $\mathcal G$, training set $T$, risk function $\ell$, and training loss $\ell_T$, we have, by definition, $\ell(g^{\mathcal G}) \leq \ell(g)$ and $\ell_T(g_T^{\mathcal G}) \leq \ell_T(g)$ for all $g \in \mathcal G$. Show that

$$\ell(g_T^{\mathcal G}) - \ell(g^{\mathcal G}) \leq \sup_{g \in \mathcal G} \big|\ell_T(g) - \ell(g)\big| + \big(\ell_T(g^{\mathcal G}) - \ell(g^{\mathcal G})\big),$$

where we used the notation $\sup$ (supremum) for the least upper bound. Since $\mathbb E\,\ell_T(g^{\mathcal G}) = \ell(g^{\mathcal G})$, we obtain, after taking expectations on both sides of the inequality above,

$$\mathbb E\big[\ell(g_T^{\mathcal G}) - \ell(g^{\mathcal G})\big] \leq \mathbb E \sup_{g \in \mathcal G} \big|\ell_T(g) - \ell(g)\big|.$$

(b) If $X$ is a zero-mean random variable taking values in the interval $[l, u]$, then the following Hoeffding inequality states that the moment generating function satisfies

$$\mathbb E\,e^{tX} \leq \exp\Big(\frac{t^2 (u - l)^2}{8}\Big).$$

Prove this result by using the fact that the line segment joining the points $(l, e^{tl})$ and $(u, e^{tu})$ bounds the convex function $x \mapsto e^{tx}$ on $[l, u]$; that is,

$$e^{tx} \leq e^{tl}\,\frac{u - x}{u - l} + e^{tu}\,\frac{x - l}{u - l}, \quad x \in [l, u].$$

(c) Let $Z_1, \ldots, Z_{\bar n}$ be (possibly dependent and non-identically distributed) zero-mean random variables with moment generating functions that satisfy $\mathbb E\,e^{t Z_k} \leq e^{t^2 \sigma^2/2}$ for all $k$ and some parameter $\sigma$. Use Jensen's inequality to prove that for any $t > 0$,
14,833 | Exercise (Vapnik-Chervonenkis bound): The purpose of this exercise is to prove the following Vapnik-Chervonenkis bound: for any finite class G (containing only a finite number |G| of possible functions) and a general bounded loss function, 0 ≤ Loss ≤ c, the expected statistical error is bounded from above according to

    E ℓ(g_{T_n}^G) − ℓ(g^G) ≤ (c/2) √( 2 ln(2|G|) / n ).

Note how this bound conveniently does not depend on the distribution of the training set T_n (which is typically unknown), but only on the complexity (i.e., cardinality) of the class G. We can break up the proof into the following four parts.

(a) For a general function class G, training set T, risk function ℓ, and training loss ℓ_T, we have, by definition, ℓ(g^G) ≤ ℓ(g) and ℓ_T(g_T^G) ≤ ℓ_T(g) for all g ∈ G. Show that

    ℓ(g_T^G) − ℓ(g^G) ≤ sup_{g∈G} |ℓ_T(g) − ℓ(g)| + ℓ_T(g^G) − ℓ(g^G),

where we used the notation sup (supremum) for the least upper bound. Since E ℓ_T(g^G) = ℓ(g^G), we obtain, after taking expectations on both sides of the inequality above,

    E ℓ(g_T^G) − ℓ(g^G) ≤ E sup_{g∈G} |ℓ_T(g) − ℓ(g)|.

(b) If X is a zero-mean random variable taking values in the interval [l, u], then the following Hoeffding inequality states that the moment generating function satisfies

    E e^{tX} ≤ exp( t²(u − l)²/8 ),  t ∈ ℝ.

Prove this result by using the fact that the line segment joining the points (l, exp(tl)) and (u, exp(tu)) bounds the convex function x ↦ exp(tx) for x ∈ [l, u]; that is,

    e^{tx} ≤ ( (u − x) e^{tl} + (x − l) e^{tu} ) / (u − l),  x ∈ [l, u].

(c) Let Z_1, …, Z_n be (possibly dependent and non-identically distributed) zero-mean random variables with moment generating functions that satisfy E e^{tZ_k} ≤ exp(t²η²/2) for all k and some parameter η. Use Jensen's inequality to prove that for any t > 0,

    exp( t E max_k Z_k ) ≤ E exp( t max_k Z_k ) = E max_k e^{tZ_k} ≤ n exp(t²η²/2).

From this derive that

    E max_k Z_k ≤ (ln n)/t + tη²/2 ≤ η √(2 ln n),

where the last inequality follows by choosing the minimizing t = √(2 ln n)/η. Finally, show that this last inequality implies

    E max_k |Z_k| ≤ η √(2 ln(2n)).

(d) Returning to the objective of this exercise, denote the elements of G by g_1, …, g_{|G|}, and let Z_k = ℓ_{T_n}(g_k) − ℓ(g_k). By part (a), it is sufficient to bound E max_k |Z_k|. Show that the {Z_k} satisfy the conditions of (c) with η² = c²/(4n). For this you will need to apply part (b) to the random variable X = Loss(g(X), Y) − ℓ(g), where (X, Y) is a generic data point. Now complete the proof of the Vapnik-Chervonenkis bound.

Exercise: Consider the problem in the exercise above. Show that

    |ℓ_{T_n}(g_{T_n}^G) − ℓ(g^G)| ≤ sup_{g∈G} |ℓ_{T_n}(g) − ℓ(g)| + |ℓ_{T_n}(g^G) − ℓ(g^G)|.

From this, conclude that

    E |ℓ_{T_n}(g_{T_n}^G) − ℓ(g^G)| ≤ 2 E sup_{g∈G} |ℓ_{T_n}(g) − ℓ(g)|.

The last bound allows us to assess how close the training loss ℓ_{T_n}(g_{T_n}^G) is to the optimal risk ℓ(g^G) within class G.

Exercise: Show that for the normal linear model Y ∼ N(Xβ, σ²I_n), the maximum likelihood estimator of σ² is identical to the method of moments estimator.

Exercise: Let X ∼ Gamma(α, λ). Show that the pdf of Z = 1/X is equal to

    λ^α z^{−α−1} e^{−λ/z} / Γ(α),  z > 0.

Exercise: Consider the sequence w_0, w_1, …, where w_0(θ) = g(θ) is a non-degenerate initial guess and w_t(θ) ∝ w_{t−1}(θ) g(τ | θ), t ≥ 1. We assume that g(τ | θ) is not the constant function (with respect to θ) and that the maximum likelihood value g(τ | θ̂) = max_θ g(τ | θ) exists (is bounded). Let

    l_t := ∫ g(τ | θ) w_t(θ) dθ.

Show that {l_t} is a strictly increasing and bounded sequence. Hence, conclude that its limit is g(τ | θ̂).
14,834 | Exercise: Consider the Bayesian model for τ = {x_1, …, x_n} with likelihood g(τ | μ) such that (X_1, …, X_n | μ) ∼iid N(μ, σ²), where σ² is known, and prior pdf g(μ) such that μ ∼ N(a_0, 1/b_0) for some hyperparameters a_0 and b_0 (the precision of a distribution is the reciprocal of its variance). Define a sequence of densities w_t(μ), t ≥ 1, via w_t(μ) ∝ w_{t−1}(μ) g(τ | μ), starting with w_0(μ) = g(μ). Let a_t and b_t denote the mean and precision of μ under the posterior g_t(μ | τ) ∝ g(τ | μ) w_t(μ). Show that g_t(μ | τ) is a normal density with precision

    b_t = b_{t−1} + n b

and mean

    a_t = (1 − γ_t) a_{t−1} + γ_t x̄_n,  where  γ_t := n b / (b_{t−1} + n b)

and b := 1/σ² is the precision of a single observation. Hence, deduce that, as t → ∞, g_t(μ | τ) converges to a degenerate density with a point mass at x̄_n.

Exercise: Consider again the normal model with improper prior g(θ) = g(μ, σ²) ∝ 1/σ². Show that the prior predictive pdf is an improper density, but that the posterior predictive density is

    g(x | τ) ∝ ( 1 + (x − x̄_n)² / ((n + 1) V) )^{−n/2},  where  V := n⁻¹ Σᵢ (xᵢ − x̄_n)².

Deduce that

    (X − x̄_n) / √( V (n + 1)/(n − 1) )  ∼  t_{n−1}.

Exercise: Assuming that x_1, …, x_n are outcomes of iid random variables, show that the identity above holds and derive the corresponding expression for ℓ*.

Exercise: Suppose that {x_1, …, x_n} are observations of iid continuous and strictly positive random variables, and that there are two possible models for their pdf. The first model, p = 1, is

    g(x | θ) = θ exp(−θ x),  x > 0,

and the second, p = 2, is

    g(x | θ) = √(2θ/π) exp(−θ x²/2),  x > 0.

For both models, assume that the prior for θ is a gamma density,

    g(θ) = b^t θ^{t−1} exp(−b θ) / Γ(t),

with the same hyperparameters b and t. Find a formula for the Bayes factor, g_1(τ)/g_2(τ), for comparing these models.

Exercise: Suppose that we have a total of K possible models with prior probabilities g(m), m = 1, …, K. Show that the posterior probability of model m, g(m | τ), can be expressed in terms of the pairwise Bayes factors B_{j|m} := g_j(τ)/g_m(τ) as

    g(m | τ) = ( Σ_{j=1}^K B_{j|m} g(j)/g(m) )⁻¹.
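The Bayes factor in the exercises above is a ratio of two marginal likelihoods. As a small sketch of ours (not part of the exercises), the following code checks the marginal likelihood of the exponential model numerically against its closed form; the data and the hyperparameters t and b are arbitrary illustrative choices, and scipy is assumed to be available.

import numpy as np
from math import gamma as gfun
from scipy.integrate import quad
from scipy.special import gammaln

np.random.seed(0)
x = np.random.exponential(scale=2.0, size=10)   # synthetic positive data (arbitrary)
n, sx = len(x), x.sum()
t, b = 2.0, 1.0                                 # Gamma(t, b) prior hyperparameters (arbitrary)

# marginal likelihood g_1(tau) of the exponential model, by numerical integration
integrand = lambda th: th**n * np.exp(-th*sx) * b**t * th**(t-1) * np.exp(-b*th) / gfun(t)
g1_num, _ = quad(integrand, 0, np.inf)

# the same marginal likelihood in closed form: b^t Gamma(t+n) / (Gamma(t) (b+sx)^(t+n))
g1_exact = np.exp(t*np.log(b) + gammaln(t + n) - gammaln(t) - (t + n)*np.log(b + sx))
print(g1_num, g1_exact)   # the two values agree to integration accuracy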
14,835 | Exercise: Given the data τ = {x_1, …, x_n}, suppose that we use the likelihood g(τ | θ) with (X_1, …, X_n | θ) ∼iid N(μ, σ²) and parameter θ = (μ, σ²), and wish to compare the following two nested models.

(a) Model M = 1, where σ² = σ₀² is known, and this knowledge is incorporated via the prior

    g(θ | M = 1) = (2πφ²)^{−1/2} exp(−μ²/(2φ²)) δ(σ² − σ₀²),

that is, μ ∼ N(0, φ²) and the variance is fixed at σ₀² (here δ denotes the Dirac delta).

(b) Model M = 2, where both the mean and the variance are unknown, with prior

    g(θ | M = 2) = (2πφ²)^{−1/2} exp(−μ²/(2φ²)) × b^t (σ²)^{−t−1} exp(−b/σ²) / Γ(t),

that is, μ ∼ N(0, φ²) independently of σ², which has an inverse-gamma prior with hyperparameters t and b.

Show that the prior g(θ | M = 1) can be viewed as the limit of the prior g(θ | M = 2) when b = t σ₀² and t → ∞. Hence, conclude that

    g(τ | M = 1) = lim_{t→∞, b = tσ₀²} g(τ | M = 2),

and use this result to calculate the Bayes factor g(τ | M = 1)/g(τ | M = 2). Check that the formula agrees with the Savage-Dickey density ratio,

    g(σ₀² | τ, M = 2) / g(σ₀² | M = 2),

where g(σ² | τ, M = 2) and g(σ² | M = 2) are the posterior and prior density of σ², respectively, under model M = 2.
14,836 | Monte Carlo Methods

Many algorithms in machine learning and data science make use of Monte Carlo techniques. This chapter gives an introduction to the three main uses of Monte Carlo simulation: to (1) simulate random objects and processes in order to observe their behavior, (2) estimate numerical quantities by repeated sampling, and (3) solve complicated optimization problems through randomized algorithms.

Introduction

Briefly put, Monte Carlo simulation is the generation of random data by means of a computer. These data could arise from simple models, such as those described earlier, or from very complicated models describing real-life systems, such as the positions of vehicles on a complex road network, or the evolution of security prices in the stock market. In many cases, Monte Carlo simulation simply involves random sampling from certain probability distributions. The idea is to repeat the random experiment that is described by the model many times, to obtain a large quantity of data that can be used to answer questions about the model. The three main uses of Monte Carlo simulation are:

Sampling. Here the objective is to gather information about a random object by observing many realizations of it. For instance, this could be a random process that mimics the behavior of some real-life system such as a production line or telecommunications network. Another usage is found in Bayesian statistics, where Markov chains are often used to sample from a posterior distribution.

Estimation. In this case the emphasis is on estimating certain numerical quantities related to a simulation model. An example is the evaluation of multidimensional integrals via Monte Carlo techniques. This is achieved by writing the integral as the expectation of a random variable, which is then approximated by the sample mean. Appealing to the law of large numbers guarantees that this approximation will eventually converge when the sample size becomes large.

Optimization. Monte Carlo simulation is a powerful tool for the optimization of complicated objective functions. In many applications these functions are deterministic and
14,837 | randomness is introduced artificially in order to more efficiently search the domain of the objective function. Monte Carlo techniques are also used to optimize noisy functions, where the function itself is random; for example, when the objective function is the output of a Monte Carlo simulation.

The Monte Carlo method dramatically changed the way in which statistics is used in today's analysis of data. The ever-increasing complexity of data requires radically different statistical models and analysis techniques from those that were used decades ago. By using Monte Carlo techniques, the data analyst is no longer restricted to using basic (and often inappropriate) models to describe data. Now, any probabilistic model that can be simulated on a computer can serve as the basis for a statistical analysis. This Monte Carlo revolution has had an impact on both Bayesian and frequentist statistics. In particular, in frequentist statistics, Monte Carlo methods are often referred to as resampling techniques. An important example is the well-known bootstrap method [ ], where statistical quantities such as confidence intervals and P-values for statistical tests can simply be determined by simulation without the need of a sophisticated analysis of the underlying probability distributions; see, for example, [ ] for basic applications. The impact on Bayesian statistics has been even more profound, through the use of Markov chain Monte Carlo (MCMC) techniques [ ]. MCMC samplers construct a Markov process which converges in distribution to a desired (often high-dimensional) density. This convergence in distribution justifies using a finite run of the Markov process as an approximate random realization from the target density. The MCMC approach has rapidly gained popularity as a versatile heuristic approximation, partly due to its simple computer implementation and its inbuilt mechanism to trade off between computational cost and accuracy; namely, the longer one runs the Markov process, the better the approximation. Nowadays, MCMC methods are indispensable for analyzing posterior distributions for inference and model selection; see also [ ].

The following three sections elaborate on these three uses of Monte Carlo simulation in turn.

Monte Carlo Sampling

In this section we describe a variety of Monte Carlo sampling methods, from the building block of simulating uniform random numbers to MCMC samplers.

Generating Random Numbers

At the heart of any Monte Carlo method is a random number generator: a procedure that produces a stream of uniform random numbers on the interval (0, 1). Since such numbers are usually produced via deterministic algorithms, they are not truly random. However, for most applications all that is required is that such pseudo-random numbers are statistically indistinguishable from genuine random numbers that are uniformly distributed on the interval (0, 1) and are independent of each other; we write U_1, U_2, … ∼iid U(0, 1). For example, in Python the rand method of the numpy.random module is widely used for this purpose.
14,838 | Most random number generators at present are based on linear recurrence relations. One of the most important random number generators is the multiple-recursive generator (MRG) of order k, which generates a sequence of integers X_k, X_{k+1}, … via the linear recurrence

    X_t = (a_1 X_{t−1} + ··· + a_k X_{t−k}) mod m,  t = k, k+1, …,

for some modulus m and multipliers {a_i, i = 1, …, k}. Here, "mod" refers to the modulo operation: n mod m is the remainder when n is divided by m. The recurrence is initialized by specifying k "seeds", X_0, …, X_{k−1}. To yield fast algorithms, all but a few of the multipliers should be 0. When m is a large integer, one can obtain a stream of pseudo-random numbers U_k, U_{k+1}, … between 0 and 1 from the sequence X_k, X_{k+1}, … simply by setting U_t = X_t/m. It is also possible to set a small modulus, in particular m = 2. The output function for such modulo-2 generators is then typically of the form

    U_t = Σ_{i=1}^w X_{tw+i−1} 2^{−i}

for some w ≤ k. Examples of modulo-2 generators are the feedback shift register generators, the most popular of which are the Mersenne twisters; see, for example, [ ] and [ ]. MRGs with excellent statistical properties can be implemented efficiently by combining several simpler MRGs and carefully choosing their respective moduli and multipliers. One of the most successful is L'Ecuyer's combined MRG; see [ ]. From now on, we assume that the reader has a sound random number generator available.
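As a concrete illustration of the recurrence above, here is a minimal sketch of a toy MRG of order k = 3. The modulus and multipliers below are arbitrary illustrative choices, not the carefully selected constants of production generators such as the Mersenne twister or L'Ecuyer's combined MRGs.

import numpy as np

# toy multiple-recursive generator: X_t = (a1*X_{t-1} + a2*X_{t-2} + a3*X_{t-3}) mod m
m = 2**31 - 1                        # a large prime modulus (illustrative choice)
a = np.array([177786, 0, 64654])     # multipliers a_1, a_2, a_3 (illustrative)
x = [12345, 67890, 24680]            # three seeds X_0, X_1, X_2

def mrg_next():
    # compute the next state and return it as a uniform number in (0,1)
    x_new = (a[0]*x[-1] + a[1]*x[-2] + a[2]*x[-3]) % m
    x.append(x_new)
    return x_new / m

u = [mrg_next() for _ in range(5)]
print(u)   # a stream of pseudo-random numbers in (0,1)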
14,839 | Simulating Random Variables

Simulating a random variable X from an arbitrary (that is, not necessarily uniform) distribution invariably involves the following two steps:

1. Simulate uniform random numbers U_1, …, U_k on (0, 1) for some k = 1, 2, ….
2. Return X = g(U_1, …, U_k), where g is some real-valued function.

The construction of suitable functions g is as much of an art as a science. Many simulation methods may be found, for example, in [ ] and the accompanying website www.montecarlohandbook.org. Two of the most useful general procedures for generating random variables are the inverse-transform method and the acceptance-rejection method. Before we discuss these, we show one possible way to simulate standard normal random variables. (In Python we can generate standard normal random variables via the randn method of the numpy.random module.)

Example (Simulating Standard Normal Random Variables). If X and Y are independent standard normally distributed random variables (that is, X, Y ∼iid N(0, 1)), then their joint pdf is

    f(x, y) = (1/(2π)) e^{−(x²+y²)/2},  (x, y) ∈ ℝ²,

which is a radially symmetric function. In polar coordinates, the angle Θ that the random vector [X, Y]ᵀ makes with the positive x-axis is U(0, 2π) distributed (as would be expected from the radial symmetry), and the radius R has pdf f_R(r) = r e^{−r²/2}, r ≥ 0. Moreover, R and Θ are independent. We will see shortly, in the inverse-transform example below, that R has the same distribution as √(−2 ln U), with U ∼ U(0, 1). So, to simulate X, Y ∼iid N(0, 1), the idea is to first simulate R and Θ independently and then return X = R cos(Θ) and Y = R sin(Θ) as a pair of independent standard normal random variables. This leads to the Box-Muller approach for generating standard normal random variables.

Algorithm (Normal Random Variable Simulation: Box-Muller Approach)
Output: independent standard normal random variables X and Y.
1. Simulate two independent random variables, U_1 and U_2, from U(0, 1).
2. Set X = √(−2 ln U_1) cos(2π U_2) and Y = √(−2 ln U_1) sin(2π U_2).
3. Return X, Y.

Once a standard normal number generator is available, simulation from any n-dimensional normal distribution N(μ, Σ) is relatively straightforward. The first step is to find an n × n matrix B that decomposes Σ into the matrix product B Bᵀ. In fact there exist many such decompositions. One of the more important ones is the Cholesky decomposition, which is a special case of the LU decomposition; see the appendix for more information on such decompositions. In Python, the function cholesky of numpy.linalg can be used to produce such a matrix. Once the Cholesky factorization Σ = B Bᵀ is determined, it is easy to simulate X ∼ N(μ, Σ), as, by definition, it is the affine transformation μ + B Z of an n-dimensional standard normal random vector Z.

Algorithm (Normal Random Vector Simulation)
Input: μ, Σ.
Output: X ∼ N(μ, Σ).
1. Determine the Cholesky factorization Σ = B Bᵀ.
2. Simulate Z = [Z_1, …, Z_n]ᵀ by drawing Z_1, …, Z_n ∼iid N(0, 1).
3. Set X = μ + B Z.
4. Return X.

Example (Simulating from a Bivariate Normal Distribution). The Python code below draws N = 1000 iid samples from two bivariate (that is, two-dimensional) normal pdfs; the resulting point clouds are shown in the figure below.

bvnormal.py

import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt

N = 1000
r = 0.0  # correlation coefficient; change its value for the other plot
Sigma = np.array([[1, r], [r, 1]])
B = np.linalg.cholesky(Sigma)
x = B @ randn(2, N)
plt.scatter(x[0, :], x[1, :], alpha=0.4, s=4)
plt.show()
14,840 | [Figure: realizations of two bivariate normal distributions with means zero, unit variances, and two different correlation coefficients.]

In some cases, the covariance matrix has a special structure which can be exploited to create even faster generation algorithms, as illustrated in the following example.

Example (Simulating Normal Vectors in O(n²) Time). Suppose that the random vector X = [X_1, …, X_n]ᵀ represents the values at times δ, 2δ, …, nδ of a zero-mean Gaussian process that is weakly stationary, meaning that cov(X_s, X_t) depends only on t − s. Then clearly the covariance matrix of X, say A_n, is a symmetric Toeplitz matrix. Suppose for simplicity that Var X_t = 1. Then the covariance matrix is in fact a correlation matrix, and will have the following structure:

    A_n = [ 1        a_1      ···  a_{n−1}
            a_1      1        ···  a_{n−2}
            ⋮        ⋮        ⋱    ⋮
            a_{n−1}  a_{n−2}  ···  1 ].

Using the Levinson-Durbin algorithm we can compute a lower diagonal matrix L_n and a diagonal matrix D_n in O(n²) time such that L_n A_n L_nᵀ = D_n. If we simulate Z ∼ N(0, I_n), then the solution X of the linear system L_n X = D_n^{1/2} Z has the desired distribution N(0, A_n). The linear system is then solved via forward substitution.
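To make the stationary-process setting concrete, here is a small sketch of ours that samples such a Gaussian vector using a generic Cholesky factorization of a Toeplitz correlation matrix; for clarity it uses the O(n³) Cholesky route rather than the O(n²) Levinson-Durbin construction described above, and the geometrically decaying correlation a_k = 0.9^k is an arbitrary illustrative choice.

import numpy as np
from scipy.linalg import toeplitz

np.random.seed(0)
n = 500
a = 0.9 ** np.arange(n)          # first row/column: corr(X_s, X_t) = 0.9**|t-s|
A = toeplitz(a)                  # symmetric Toeplitz correlation matrix
B = np.linalg.cholesky(A)        # A = B B^T
X = B @ np.random.randn(n)       # one realization of N(0, A)

# sanity check: lag-1 sample correlation over many realizations
Xs = B @ np.random.randn(n, 2000)
print(np.corrcoef(Xs[0], Xs[1])[0, 1])   # should be close to 0.9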
14,841 | Inverse-Transform Method

Let X be a random variable with cumulative distribution function (cdf) F. Let F⁻¹ denote the inverse of F, and let U ∼ U(0, 1). Then

    P[F⁻¹(U) ≤ x] = P[U ≤ F(x)] = F(x).

This leads to the following method to simulate a random variable X with cdf F.

Algorithm (Inverse-Transform Method)
Input: cumulative distribution function F.
Output: random variable X distributed according to F.
1. Generate U from U(0, 1).
2. X = F⁻¹(U).
3. Return X.

(Every cdf F has a unique inverse function defined by F⁻¹(u) = inf{x : F(x) ≥ u}. If, for each u, the equation F(x) = u has a unique solution x, this definition coincides with the usual interpretation of the inverse function.)

The inverse-transform method works both for continuous and discrete distributions. After importing numpy as np, simulating numbers 0, 1, …, k−1 according to probabilities p_0, …, p_{k−1} can be done via

    np.min(np.where(np.cumsum(p) > np.random.rand())),

where p is the vector of the probabilities.

Example. One remaining issue in the Box-Muller example was how to simulate the radius R when we only know its density f_R(r) = r e^{−r²/2}. We can use the inverse-transform method for this, but first we need to determine its cdf. The cdf of R is, by integration of the pdf,

    F_R(r) = 1 − e^{−r²/2},  r ≥ 0,

and its inverse is found by solving F_R(r) = u in terms of r, giving

    F_R⁻¹(u) = √(−2 ln(1 − u)).

Thus, R has the same distribution as √(−2 ln(1 − U)), with U ∼ U(0, 1). Since 1 − U also has a U(0, 1) distribution, R also has the same distribution as √(−2 ln U).
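In the same spirit, here is a quick numerical check of the inverse-transform method (our own sketch, with an arbitrarily chosen target): it simulates exponential random variables via X = −ln(1 − U)/λ and compares the sample mean with the true mean 1/λ.

import numpy as np

np.random.seed(123)
lam = 2.0                       # rate of the Exp(lam) target (illustrative choice)
u = np.random.rand(10**5)
x = -np.log(1 - u) / lam        # inverse of F(x) = 1 - exp(-lam*x)
print(x.mean(), 1/lam)          # sample mean should be close to 0.5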
14,842 | Acceptance-Rejection Method

The acceptance-rejection method is used to sample from a "difficult" probability density function (pdf) f(x) by generating instead from an "easy" pdf g(x) satisfying f(x) ≤ C g(x) for some constant C ≥ 1 (for example, via the inverse-transform method), and then accepting or rejecting the drawn sample with a certain probability. The idea of the algorithm below is to generate uniformly a point (X, Y) under the graph of the function Cg, by first drawing X ∼ g and then Y ∼ U(0, C g(X)). If this point lies under the graph of f, then we accept X as a sample from f; otherwise, we try again. The efficiency of the acceptance-rejection method is usually expressed in terms of the probability of acceptance, which is 1/C.

Algorithm (Acceptance-Rejection Method)
Input: pdf f, and a pdf g and constant C such that C g(x) ≥ f(x) for all x.
Output: random variable X distributed according to pdf f.
1. found = false
2. while not found:
3.   Generate X from g.
4.   Generate U from U(0, 1), independently of X.
5.   Y = U C g(X).
6.   if Y ≤ f(X): found = true
7. Return X.

Example (Simulating Gamma Random Variables). Simulating random variables from a Gamma(α, λ) distribution is generally done via the acceptance-rejection method. Consider, for example, the gamma distribution with shape α = 1.3 and rate λ = 5.6, and its pdf

    f(x) = λ^α x^{α−1} e^{−λx} / Γ(α),  x ≥ 0,

where Γ is the gamma function Γ(α) := ∫₀^∞ e^{−x} x^{α−1} dx, α > 0, depicted by the solid curve in the figure below.

[Figure: the pdf of the Exp(4) distribution, multiplied by C = 1.2, dominates the pdf of the Gamma(1.3, 5.6) distribution.]

This pdf happens to lie completely under the graph of C g(x), where C = 1.2 and g(x) = 4 e^{−4x}, x ≥ 0, is the pdf of the exponential distribution Exp(4). Hence, we can simulate from this particular gamma distribution by accepting or rejecting a sample from the Exp(4) distribution according to step 6 of the algorithm above. Simulating from the Exp(4) distribution can be done via the inverse-transform method: simulate U ∼ U(0, 1) and return X = −ln(U)/4. The following Python code implements the acceptance-rejection algorithm for this example.
14,843 | accrejgamma.py

from math import exp, gamma, log
from numpy.random import rand

alpha = 1.3   # shape and rate parameters of the gamma target
lam = 5.6
f = lambda x: lam**alpha * x**(alpha - 1) * exp(-lam * x) / gamma(alpha)
g = lambda x: 4 * exp(-4 * x)     # Exp(4) proposal pdf
C = 1.2

found = False
while not found:
    x = -log(rand()) / 4          # draw a proposal X ~ Exp(4) via inverse transform
    if C * g(x) * rand() <= f(x):
        found = True
print(x)

Simulating Random Vectors and Processes

Techniques for generating random vectors and processes are as diverse as the class of random processes themselves; see, for example, [ ]. We highlight a few general scenarios.

When X_1, …, X_n are independent random variables with pdfs f_i, i = 1, …, n, so that their joint pdf is f(x) = f_1(x_1) ··· f_n(x_n), the random vector X = [X_1, …, X_n]ᵀ can be simply simulated by drawing each component X_i ∼ f_i individually; for example, via the inverse-transform method or acceptance-rejection.

For dependent components X_1, …, X_n we can, as a consequence of the product rule of probability, represent the joint pdf f(x) as

    f(x) = f(x_1, …, x_n) = f_1(x_1) f_2(x_2 | x_1) ··· f_n(x_n | x_1, …, x_{n−1}),

where f_1(x_1) is the marginal pdf of X_1 and f_k(x_k | x_1, …, x_{k−1}) is the conditional pdf of X_k given X_1 = x_1, …, X_{k−1} = x_{k−1}. Provided the conditional pdfs are known, one can generate X by first generating X_1, then, given X_1 = x_1, generating X_2 from f_2(x_2 | x_1), and so on, until generating X_n from f_n(x_n | x_1, …, x_{n−1}).
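As a small illustration of the product-rule approach (our own sketch, with arbitrarily chosen distributions), the following code draws a dependent pair (X_1, X_2) by first sampling X_1 ∼ Exp(1) and then X_2 | X_1 = x_1 ∼ N(x_1, 1).

import numpy as np

np.random.seed(7)
n = 10**5
x1 = np.random.exponential(1.0, size=n)   # X1 ~ f1
x2 = x1 + np.random.randn(n)              # X2 | X1 = x1 ~ N(x1, 1)
# check: E X2 = E X1 = 1 and Cov(X1, X2) = Var X1 = 1
print(x2.mean(), np.cov(x1, x2)[0, 1])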
14,844 | The latter method is particularly applicable for generating Markov chains. Recall that a Markov chain is a stochastic process {X_t, t = 0, 1, 2, …} that satisfies the Markov property, meaning that for all t and s the conditional distribution of X_{t+s} given X_u, u ≤ t, is the same as that of X_{t+s} given only X_t. As a result, each conditional density f_t(x_t | x_1, …, x_{t−1}) can be written as a one-step transition density q_t(x_t | x_{t−1}); that is, the probability density to go from state x_{t−1} to state x_t in one step. In many cases of interest the chain is time-homogeneous, meaning that the transition density q_t does not depend on t. Such Markov chains can be generated sequentially, as follows.

Algorithm (Simulate a Markov Chain)
Input: number of steps N, initial pdf f_0, transition density q.
1. Draw X_0 from the initial pdf f_0.
2. for t = 1 to N:
3.   Draw X_t from the distribution corresponding to the density q(· | X_{t−1}).
4. Return X_0, …, X_N.

Example (Markov Chain Simulation). For time-homogeneous Markov chains with a discrete state space, we can visualize the one-step transitions by means of a transition graph, where arrows indicate possible transitions between states and the labels describe the corresponding probabilities. The figure below shows (on the left) the transition graph of a Markov chain {X_t, t = 0, 1, 2, …} with state space {1, 2, 3, 4} and the one-step transition matrix P specified in the program below.

[Figure: the transition graph (left) and a typical path (right) of the Markov chain.]

In the same figure (on the right) a typical outcome (path) of the Markov chain is shown. The path was simulated using the Python program below. In this implementation the Markov chain always starts in state 1. We will revisit Markov chains, and in particular Markov chains with continuous state spaces, in the section on Markov chain Monte Carlo.

mcsim.py

import numpy as np
import matplotlib.pyplot as plt

n = 100
P = np.array([[0, 0.2, 0.5, 0.3],
              [0.5, 0, 0.5, 0],
              [0.3, 0.7, 0, 0],
              [0.1, 0, 0, 0.9]])   # illustrative transition probabilities
x = np.array(np.ones(n, dtype=int))
x[0] = 0
for t in range(0, n - 1):
    x[t + 1] = np.min(np.where(np.cumsum(P[x[t], :]) > np.random.rand()))
x = x + 1   # add 1 to all elements of the vector x (states 1,...,4)
plt.plot(np.array(range(0, n)), x, 'o')
plt.plot(np.array(range(0, n)), x, '--')
plt.show()
14,845 | Resampling

The idea behind resampling is very simple: an iid sample τ := {x_1, …, x_n} from some unknown cdf F represents our best knowledge of F if we make no further a priori assumptions about it. If it is not possible to simulate more samples from F, the best way to "repeat" the experiment is to resample from the original data by drawing from the empirical cdf F_n. That is, we draw each x_i with equal probability 1/n and repeat this N times, according to the algorithm below. As we draw here "with replacement", multiple instances of the original data points may occur in the resampled data.

Algorithm (Sampling from an Empirical Cdf)
Input: original iid sample x_1, …, x_n and sample size N.
Output: iid sample X_1*, …, X_N* from the empirical cdf.
1. for t = 1 to N:
2.   Draw U ∼ U(0, 1).
3.   Set I = ⌈nU⌉.
4.   Set X_t* = x_I.
5. Return X_1*, …, X_N*.

In step 3, ⌈nU⌉ returns the ceiling of nU; that is, it is the smallest integer larger than or equal to nU. Consequently, I is drawn uniformly at random from the set of indices {1, …, n}. By sampling from the empirical cdf we can thus (approximately) repeat the experiment that gave us the original data as many times as we like. This is useful if we want to assess the properties of certain statistics obtained from the data. For example, suppose that the original data τ gave the statistic t(τ). By resampling we can gain information about the distribution of the corresponding random variable t(T).

Example (Quotient of Uniforms). Let U_1, …, U_n, V_1, …, V_n be iid U(0, 1) random variables and define X_i = U_i/V_i, i = 1, …, n. Suppose we wish to investigate the distribution of the sample median and the sample mean of the (random) data T := {X_1, …, X_n}. Since we know the model for T exactly, we can generate a large number, N say, of independent copies of it, and for each of these copies evaluate the sample median and sample mean. The empirical cdfs of the medians and means might look like the left and right curves in the figure below, respectively. Contrary to what you might have expected, the distributions of the sample median and the sample mean do not match at all. The sample median is quite concentrated around 1, whereas the distribution of the sample mean is much more spread out.
14,846 | [Figure: empirical cdfs of the sample medians (left curve) and sample means (right curve) of the resampled data.]

Instead of sampling completely new data, we could also reuse the original data by resampling them via the algorithm above. This gives N independent copies, for which we can again plot the empirical cdfs. The results will be similar to the previous case. In fact, the figure above shows the cdfs of the resampled sample medians and sample means. The corresponding Python code is given below. The essential point of this example is that resampling of data can greatly add to the understanding of the probabilistic properties of certain measurements on the data, even if the underlying model is not known. See the exercises for further investigation of this example.

quotunif.py

import numpy as np
from numpy.random import rand, choice
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF

n = 100
N = 1000
x = rand(n)/rand(n)   # data
med = np.zeros(N)
ave = np.zeros(N)
for i in range(0, N):
    s = choice(x, n, replace=True)   # resampled data
    med[i] = np.median(s)
    ave[i] = np.mean(s)
med_cdf = ECDF(med)
ave_cdf = ECDF(ave)
plt.plot(med_cdf.x, med_cdf.y)
plt.plot(ave_cdf.x, ave_cdf.y)
plt.show()
14,847 | Markov Chain Monte Carlo

Markov chain Monte Carlo (MCMC) is a Monte Carlo sampling technique for (approximately) generating samples from an arbitrary distribution, often referred to as the target distribution. The basic idea is to run a Markov chain long enough such that its limiting distribution is close to the target distribution. Often such a Markov chain is constructed to be reversible, so that the detailed balance equations can be used. Depending on the starting position of the Markov chain, the initial random variables in the chain may have a distribution that is significantly different from the target (limiting) distribution. The random variables that are generated during this burn-in period are often discarded. The remaining random variables form an approximate and dependent sample from the target distribution.

In the next two sections we discuss two popular MCMC samplers: the Metropolis-Hastings sampler and the Gibbs sampler.

Metropolis-Hastings Sampler

The Metropolis-Hastings sampler [ ] is similar to the acceptance-rejection method in that it simulates a trial state, which is then accepted or rejected according to some random mechanism. Specifically, suppose we wish to sample from a target pdf f(x), where x takes values in some d-dimensional set. The aim is to construct a Markov chain {X_t, t = 0, 1, 2, …} in such a way that its limiting pdf is f. Suppose the Markov chain is in state x at time t. A transition of the Markov chain from state x is carried out in two phases. First, a proposal state Y is drawn from a transition density q(· | x). This state is accepted as the new state, with acceptance probability

    α(x, y) = min{ f(y) q(x | y) / ( f(x) q(y | x) ), 1 },

or rejected otherwise. In the latter case the chain remains in state x. The algorithm just described can be summarized as follows.

Algorithm (Metropolis-Hastings Sampler)
Input: initial state X_0, sample size N, target pdf f(x), proposal function q(y | x).
Output: X_1, …, X_N (dependent), approximately distributed according to f(x).
1. for t = 0 to N − 1:
2.   Draw Y ∼ q(y | X_t).  (draw a proposal)
3.   α = α(X_t, Y).  (acceptance probability, as above)
4.   Draw U ∼ U(0, 1).
5.   if U ≤ α: X_{t+1} = Y; else: X_{t+1} = X_t
6. Return X_1, …, X_N.

The fact that the limiting distribution of the Metropolis-Hastings Markov chain is equal to the target distribution (under general conditions) is a consequence of the following result.
14,848 | Theorem (Local Balance for the Metropolis-Hastings Sampler). The transition density of the Metropolis-Hastings Markov chain satisfies the detailed balance equations.

Proof: We prove the theorem for the discrete case only. Because a transition of the Metropolis-Hastings Markov chain consists of two steps, the one-step transition probability to go from x to y is not q(y | x), but

    q̃(y | x) = q(y | x) α(x, y),                    if x ≠ y,
    q̃(x | x) = 1 − Σ_{z≠x} q(z | x) α(x, z),        if x = y.

We thus need to show that

    f(x) q̃(y | x) = f(y) q̃(x | y)  for all x, y.

With the acceptance probability as defined above, we need to check this for three cases: (a) x = y; (b) x ≠ y and f(y) q(x | y) ≤ f(x) q(y | x); and (c) x ≠ y and f(y) q(x | y) > f(x) q(y | x). Case (a) holds trivially. For case (b), α(x, y) = f(y) q(x | y)/(f(x) q(y | x)) and α(y, x) = 1. Consequently,

    f(x) q̃(y | x) = f(x) q(y | x) α(x, y) = f(y) q(x | y) = f(y) q̃(x | y),

so that the balance equation holds. Similarly, for case (c) we have α(x, y) = 1 and α(y, x) = f(x) q(y | x)/(f(y) q(x | y)). It follows that

    f(x) q̃(y | x) = f(x) q(y | x) = f(y) q(x | y) α(y, x) = f(y) q̃(x | y),

so that the balance equation holds again.

Thus, if the Metropolis-Hastings Markov chain is ergodic, then its limiting pdf is f(x). A fortunate property of the algorithm, which is important in many applications, is that in order to evaluate the acceptance probability α(x, y), one only needs to know the target pdf f(x) up to a constant; that is, f(x) = c f̄(x) for some known function f̄(x) but unknown constant c.

The efficiency of the algorithm depends of course on the choice of the proposal transition density q(y | x). Ideally, we would like q(y | x) to be "close" to the target f(y), irrespective of x. We discuss two common approaches.

First, one can choose the proposal transition density q(y | x) independent of x; that is, q(y | x) = g(y) for some pdf g(y). An MCMC sampler of this type is called an independence sampler. The acceptance probability is thus

    α(x, y) = min{ f(y) g(x) / ( f(x) g(y) ), 1 }.
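To make the independence sampler concrete, here is a minimal sketch of ours (not the book's code): it targets the Gamma(2, 1) density using an Exp(1/2) proposal, both arbitrary illustrative choices, and only requires the target up to its normalization constant.

import numpy as np

np.random.seed(0)
f = lambda x: x * np.exp(-x)           # unnormalized Gamma(2,1) target
g = lambda x: 0.5 * np.exp(-0.5 * x)   # Exp(1/2) proposal pdf
N = 10**5
x = np.zeros(N)
x[0] = 1.0
for t in range(N - 1):
    y = np.random.exponential(2.0)     # proposal Y ~ g (numpy uses the mean, 1/0.5)
    alpha = min(f(y) * g(x[t]) / (f(x[t]) * g(y)), 1)
    x[t + 1] = y if np.random.rand() < alpha else x[t]
print(x[N//2:].mean())   # should be close to E X = 2 for Gamma(2,1)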
14,849 | Second, if the proposal transition density is symmetric (that is, q(y | x) = q(x | y)), then the acceptance probability has the simple form

    α(x, y) = min{ f(y)/f(x), 1 },

and the MCMC algorithm is called a random walk sampler. A typical example is when, for a given current state x, the proposal state Y is of the form Y = x + Z, where Z is generated from some spherically symmetric distribution, such as N(0, I).

We now give an example illustrating the second approach.

Example (Random Walk Sampler). Consider the two-dimensional pdf

    f(x_1, x_2) = c e^{−√(x_1² + x_2²)/4} ( sin(2√(x_1² + x_2²)) + 1 ),  −2π < x_1 < 2π, −2π < x_2 < 2π,

where c is an unknown normalization constant. The graph of this pdf (unnormalized) is depicted in the left panel of the figure below.

[Figure: left panel, the two-dimensional target pdf; right panel, points from the random walk sampler, approximately distributed according to the target pdf.]

The following Python program implements a random walk sampler to (approximately) draw N dependent samples from the pdf f. At each step, given a current state x, a proposal Y is drawn from the N(x, I) distribution; that is, Y = x + Z with Z bivariate standard normal. We see in the right panel of the figure that the sampler works correctly. The starting point for the Markov chain is chosen as (0, 0). Note that the normalization constant c is never required to be specified in the program.

rwsamp.py

import numpy as np
import matplotlib.pyplot as plt
from numpy import pi, exp, sqrt, sin
from numpy.random import rand, randn
14,850 | a = lambda x: -2*pi < x
b = lambda x: x < 2*pi
f = lambda x1, x2: (exp(-sqrt(x1**2 + x2**2)/4)*(sin(2*sqrt(x1**2 + x2**2)) + 1)
                    * a(x1) * b(x1) * a(x2) * b(x2))

N = 10000
xx = np.zeros((N, 2))
x = np.zeros((1, 2))
for i in range(1, N):
    y = x + randn(1, 2)   # proposal Y = x + Z
    alpha = np.amin((f(y[0][0], y[0][1]) / f(x[0][0], x[0][1]), 1))
    r = rand() < alpha
    x = r*y + (1 - r)*x
    xx[i, :] = x
plt.scatter(xx[:, 0], xx[:, 1], alpha=0.1, s=1)
plt.axis('equal')
plt.show()

Gibbs Sampler

The Gibbs sampler [ ] uses a somewhat different methodology from the Metropolis-Hastings algorithm and is particularly useful for generating n-dimensional random vectors. The key idea of the Gibbs sampler is to update the components of the random vector one at a time, by sampling them from conditional pdfs. Thus, Gibbs sampling can be advantageous if it is easier to sample from the conditional distributions than from the joint distribution. Specifically, suppose that we wish to sample a random vector X = [X_1, …, X_n]ᵀ according to a target pdf f(x). Let f(x_i | x_1, …, x_{i−1}, x_{i+1}, …, x_n) represent the conditional pdf of the i-th component, X_i, given the other components x_1, …, x_{i−1}, x_{i+1}, …, x_n. (In this section we employ a Bayesian notation style, using the same letter f for different conditional densities.) The Gibbs sampling algorithm is as follows.

Algorithm (Gibbs Sampler)
Input: initial point X_0, sample size N, and target pdf f.
Output: X_1, …, X_N approximately distributed according to f.
1. for t = 0 to N − 1:
2.   Draw Y_1 from the conditional pdf f(y_1 | X_{t,2}, …, X_{t,n}).
3.   for i = 2 to n:
4.     Draw Y_i from the conditional pdf f(y_i | Y_1, …, Y_{i−1}, X_{t,i+1}, …, X_{t,n}).
5.   X_{t+1} = Y = [Y_1, …, Y_n]ᵀ.
6. Return X_1, …, X_N.

There exist many variants of the Gibbs sampler, depending on the steps required to update X_t to X_{t+1}, called the cycle of the Gibbs algorithm.
14,851 | In the algorithm above, a cycle consists of steps in which the components are updated in the fixed order 1 → 2 → ··· → n. For this reason the algorithm is also called the systematic Gibbs sampler. In the random-order Gibbs sampler, the order in which the components are updated in each cycle is a random permutation of {1, …, n} (see the exercises). Other modifications are to update the components in blocks (i.e., several at the same time), or to update only a random selection of components. The variant where in each cycle only a single random component is updated is called the random Gibbs sampler. In the reversible Gibbs sampler, a cycle consists of the coordinate-wise updating 1 → 2 → ··· → n−1 → n → n−1 → ··· → 2 → 1. In all cases, except for the systematic Gibbs sampler, the resulting Markov chain {X_t} is reversible and hence its limiting distribution is precisely f(x).

Unfortunately, the systematic Gibbs Markov chain is not reversible, and so the detailed balance equations are not satisfied. However, a similar result holds, due to Hammersley and Clifford, under the so-called positivity condition: if at a point x = (x_1, …, x_n) all marginal densities f(x_i) > 0, i = 1, …, n, then the joint density f(x) > 0.

Theorem (Hammersley-Clifford Balance for the Gibbs Sampler). Let q_{1→n}(y | x) denote the transition density of the systematic Gibbs sampler, and let q_{n→1}(x | y) be the transition density of the reverse move, in the order n → n−1 → ··· → 1. Then, if the positivity condition holds,

    f(x) q_{1→n}(y | x) = f(y) q_{n→1}(x | y).

Proof: For the forward move we have

    q_{1→n}(y | x) = f(y_1 | x_2, …, x_n) f(y_2 | y_1, x_3, …, x_n) ··· f(y_n | y_1, …, y_{n−1}),

and for the reverse move,

    q_{n→1}(x | y) = f(x_n | y_1, …, y_{n−1}) f(x_{n−1} | y_1, …, y_{n−2}, x_n) ··· f(x_1 | x_2, …, x_n).

Consequently,

    q_{1→n}(y | x) / q_{n→1}(x | y) = Π_{i=1}^n  f(y_i | y_1, …, y_{i−1}, x_{i+1}, …, x_n) / f(x_i | y_1, …, y_{i−1}, x_{i+1}, …, x_n).

Writing each conditional density as a joint density divided by a marginal, the i-th ratio equals f(y_1, …, y_i, x_{i+1}, …, x_n) / f(y_1, …, y_{i−1}, x_i, …, x_n), so the numerator of the i-th factor cancels against the denominator of the (i+1)-st factor and the product telescopes to f(y)/f(x). The result follows by rearranging the last identity. The positivity condition ensures that we do not divide by 0 along the way.

Intuitively, the long-run proportion of transitions x → y for the "forward move" chain is equal to the long-run proportion of transitions y → x for the "reverse move" chain.
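The balance identity above is easy to verify numerically for a small discrete example. The sketch below (our own, with an arbitrarily chosen strictly positive pmf on {0,1} × {0,1}) computes both sides of the identity for the two-component systematic Gibbs sampler.

import numpy as np

# an arbitrary strictly positive pmf f on {0,1} x {0,1}
f = np.array([[0.1, 0.2],
              [0.3, 0.4]])

def cond1(x1, x2):   # f(x1 | x2)
    return f[x1, x2] / f[:, x2].sum()

def cond2(x2, x1):   # f(x2 | x1)
    return f[x1, x2] / f[x1, :].sum()

x, y = (0, 0), (1, 1)
# forward move x -> y: update component 1, then component 2
q_fwd = cond1(y[0], x[1]) * cond2(y[1], y[0])
# reverse move y -> x: update component 2, then component 1
q_rev = cond2(x[1], y[0]) * cond1(x[0], x[1])
print(f[x] * q_fwd, f[y] * q_rev)   # the two sides agree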
14,852 | To verify that the Markov chain for the systematic Gibbs sampler indeed has limiting pdf f(x), we need to check that the global balance equations hold. By integrating (in the continuous case) both sides of the Hammersley-Clifford identity with respect to x, we see that indeed

    ∫ f(x) q_{1→n}(y | x) dx = f(y).

Example (Gibbs Sampler for the Bayesian Normal Model). Gibbs samplers are often applied in Bayesian statistics, to sample from the posterior pdf. Consider for instance the Bayesian normal model

    f(μ, σ²) = 1/σ²,
    (X_1, …, X_n | μ, σ²) ∼iid N(μ, σ²).

Here the prior for θ = (μ, σ²) is improper. That is, it is not a pdf in itself, but by obstinately applying Bayes' formula it does yield a proper posterior pdf. In some sense it conveys the least amount of information about μ and σ². Following the same procedure as in the earlier Bayesian example, we find the posterior pdf:

    f(μ, σ² | x) ∝ (σ²)^{−n/2−1} exp( −(1/(2σ²)) Σ_{i=1}^n (x_i − μ)² ).

Note that μ and σ² here are the "variables" and x is a fixed data vector. To simulate samples μ and σ² from the posterior using the Gibbs sampler, we need the distributions of both (μ | σ², x) and (σ² | μ, x). To find f(μ | σ², x), view the right-hand side of the posterior as a function of μ only, regarding σ² as a constant. This gives

    f(μ | σ², x) ∝ exp( −(1/(2σ²)) Σ_i (x_i − μ)² ) ∝ exp( −(n μ² − 2μ Σ_i x_i)/(2σ²) ) ∝ exp( −(μ − x̄)²/(2σ²/n) ).

This shows that (μ | σ², x) has a normal distribution with mean x̄ and variance σ²/n. Similarly, to find f(σ² | μ, x), view the right-hand side of the posterior as a function of σ², regarding μ as a constant. This gives

    f(σ² | μ, x) ∝ (σ²)^{−n/2−1} exp( −(1/(2σ²)) Σ_{i=1}^n (x_i − μ)² ),

showing that (σ² | μ, x) has an inverse-gamma distribution with parameters n/2 and Σ_{i=1}^n (x_i − μ)²/2. The Gibbs sampler thus involves the repeated simulation of

    (μ | σ², x) ∼ N(x̄, σ²/n)  and  (σ² | μ, x) ∼ InvGamma( n/2, Σ_i (x_i − μ)²/2 ).

Simulating X ∼ InvGamma(α, λ) is achieved by first generating Z ∼ Gamma(α, λ) and then returning X = 1/Z.
14,853 | In our parameterization of the Gamma(α, λ) distribution, λ is the rate parameter. Many software packages instead use the scale parameter 1/λ. Be aware of this when simulating gamma random variables.

The Python script below simulates a small data set of size n = 10 from a standard normal distribution, and implements the systematic Gibbs sampler to simulate from the posterior distribution, using N = 10^5 samples.

gibbsamp.py

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
n = 10
x = np.random.randn(1, n)   # data: an iid sample from the standard normal
sample_mean = np.mean(x)
sample_var = np.var(x)
sig2 = np.var(x)
mu = sample_mean
N = 10**5
gibbs_sample = np.array(np.zeros((N, 2)))
for k in range(N):
    mu = sample_mean + np.sqrt(sig2/n)*np.random.randn()
    V = np.sum((x - mu)**2)/2
    sig2 = 1/np.random.gamma(n/2, 1/V)   # draw InvGamma(n/2, V)
    gibbs_sample[k, :] = np.array([mu, sig2])
plt.scatter(gibbs_sample[:, 0], gibbs_sample[:, 1], alpha=0.1, s=1)
plt.plot(np.mean(x), np.var(x), 'wo')
plt.show()

[Figure: left, approximate draws from the posterior pdf f(μ, σ² | x) obtained via the Gibbs sampler; right, estimate of the posterior pdf f(μ | x).]
14,854 | The left panel of the figure above shows the (μ, σ²) points generated by the Gibbs sampler. Also shown, via the white circle, is the point (x̄, s²), where x̄ is the sample mean and s² the sample variance. This posterior point cloud visualizes the considerable uncertainty in the estimates. By projecting the (μ, σ²) points onto the μ-axis, that is, by ignoring the σ² values, one obtains (approximate) samples from the posterior pdf of μ; that is, from f(μ | x). The right panel of the figure shows a kernel density estimate (see the section on kernel density estimation) of this pdf. The corresponding lower and upper sample quantiles give a credible interval for μ, which contains the true expectation 0. Similarly, an estimated credible interval for σ² can be obtained, which contains the true variance 1.

Monte Carlo Estimation

In this section we describe how Monte Carlo simulation can be used to estimate complicated integrals, probabilities, and expectations. A number of variance reduction techniques are introduced as well, including the recent cross-entropy method.

Crude Monte Carlo

The most common setting for Monte Carlo estimation is the following: suppose we wish to compute the expectation μ = E Y of some (say continuous) random variable Y with pdf f, but the integral E Y = ∫ y f(y) dy is difficult to evaluate. For example, if Y is a complicated function of other random variables, it would be difficult to obtain an exact expression for f(y). The idea of crude Monte Carlo, sometimes abbreviated as CMC, is to approximate μ by simulating many independent copies Y_1, …, Y_N of Y and then take their sample mean Ȳ as an estimator of μ. All that is needed is an algorithm to simulate such copies. By the law of large numbers, Ȳ converges to μ as N → ∞, provided the expectation of Y exists. Moreover, by the central limit theorem, Ȳ approximately has a N(μ, σ²/N) distribution for large N, provided that the variance σ² = Var Y < ∞. This enables the construction of an approximate (1 − α) confidence interval for μ:

    ( Ȳ − z_{1−α/2} S/√N,  Ȳ + z_{1−α/2} S/√N ),

where S is the sample standard deviation of the {Y_i} and z_γ denotes the γ-quantile of the N(0, 1) distribution. Instead of specifying the confidence interval, one often reports only the sample mean and the estimated standard error S/√N, or the estimated relative error S/(Ȳ√N). The basic estimation procedure for independent data is summarized in the algorithm below.

It is often the case that the output Y is a function of some underlying random vector or stochastic process; that is, Y = H(X), where H is a real-valued function and X is a random vector or process. The beauty of Monte Carlo for estimation is that the confidence interval above holds regardless of the dimension of X.
14,855 | Algorithm (Crude Monte Carlo for Independent Data)
Input: simulation algorithm for Y, sample size N, confidence level 1 − α.
Output: point estimate and approximate (1 − α) confidence interval for μ = E Y.
1. Simulate Y_1, …, Y_N iid.
2. Ȳ = (1/N) Σ_{i=1}^N Y_i
3. S² = (1/(N−1)) Σ_{i=1}^N (Y_i − Ȳ)²
4. Return Ȳ and the interval Ȳ ± z_{1−α/2} S/√N.

Example (Monte Carlo Integration). In Monte Carlo integration, simulation is used to evaluate complicated integrals. Consider, for example, the integral

    μ = ∫∫∫ √|x_1 + x_2 + x_3| e^{−(x_1² + x_2² + x_3²)/2} dx_1 dx_2 dx_3.

Defining Y = (2π)^{3/2} √|X_1 + X_2 + X_3|, with X_1, X_2, X_3 ∼iid N(0, 1), we can write μ = E Y. Using the following Python program, with a sample size of N = 10^6, we obtain an estimate close to the true value μ ≈ 17.0, together with a narrow approximate 95% confidence interval.

mcint.py

import numpy as np
from numpy import pi

c = (2*pi)**(3/2)
H = lambda x: c*np.sqrt(np.abs(np.sum(x, axis=1)))
N = 10**6
x = np.random.randn(N, 3)
y = H(x)
mY = np.mean(y)
sY = np.std(y)
RE = sY/mY/np.sqrt(N)
print('estimate = {:3.3f}, CI = ({:3.3f},{:3.3f})'.format(
      mY, mY*(1 - 1.96*RE), mY*(1 + 1.96*RE)))

Example (Estimating the Generalization Risk). We return to the bias-variance tradeoff of the polynomial regression example. Earlier we obtained estimates of the (squared-error) generalization risk as a function of the number of parameters p in the model. But how accurate are these estimates? Because we know in this case the exact model for the data, we can use Monte Carlo simulation to estimate the generalization risk (for a fixed training set) and the expected generalization risk (averaged over all training sets) precisely. All we need to do is repeat the data generation, fitting, and validation steps many times and then take averages of the results. The following Python code repeats these steps 100 times:

1. Simulate the training set of size n.
2. Fit models up to size p = max_p.
14,856 | 3. Estimate the test loss using a test set with the same sample size.

The figure below shows that there is some variation in the test losses, due to the randomness in both the training and test sets. To obtain an accurate estimate of the expected generalization risk, take the average of the test losses. We see that for p = 4 the estimate is close to the true expected generalization risk.

[Figure: independent estimates of the test loss, plotted against the number of parameters p, show some variability.]

cmctestloss.py

import numpy as np
import matplotlib.pyplot as plt
from numpy.random import rand, randn
from numpy.linalg import solve

def generate_data(beta, sig, n):
    u = rand(n, 1)
    y = (u ** np.arange(0, 4)) @ beta + sig * randn(n, 1)
    return u, y

beta = np.array([[10, -140, 400, -250]]).T   # as in the polynomial regression example
n = 100
sig = 5
betahat = {}
plt.figure(figsize=[6, 5])
max_p = 18
totMSE = np.zeros(max_p)
p_range = np.arange(1, max_p + 1, 1)
14,857 | for i in range(0, 100):
    u, y = generate_data(beta, sig, n)     # training data
    X = np.ones((n, 1))
    for p in p_range:
        if p > 1:
            X = np.hstack((X, u**(p-1)))
        betahat[p] = solve(X.T @ X, X.T @ y)
    u_test, y_test = generate_data(beta, sig, n)   # test data
    MSE = []
    X_test = np.ones((n, 1))
    for p in p_range:
        if p > 1:
            X_test = np.hstack((X_test, u_test**(p-1)))
        y_hat = X_test @ betahat[p]        # predictions
        MSE.append(np.sum((y_test - y_hat)**2/n))
    totMSE = totMSE + np.array(MSE)
    plt.plot(p_range, MSE, 'C0', alpha=0.1)
plt.plot(p_range, totMSE/100, 'r-o')
plt.xticks(ticks=p_range)
plt.xlabel('Number of parameters $p$')
plt.ylabel('Test loss')
plt.tight_layout()
plt.savefig('MSErepeatpy.pdf', format='pdf')
plt.show()

Bootstrapping

The bootstrap method [ ] combines CMC estimation with the resampling procedure described earlier. The idea is as follows: suppose we wish to estimate a number μ via some estimator Y = t(T), where T := {X_1, …, X_n} is an iid sample from some unknown cdf F. It is assumed that Y does not depend on the order of the {X_i}. To assess the quality (for example, accuracy) of the estimator Y, one could draw independent replications T_1, …, T_K of T and find sample estimates for quantities such as the variance Var Y, the bias E Y − μ, and the mean squared error E(Y − μ)². However, it may be too time-consuming or simply not feasible to obtain such replications. An alternative is to resample the original data. To reiterate, given an outcome τ = {x_1, …, x_n} of T, we simulate an iid sample T* := {X_1*, …, X_n*} from the empirical cdf F_n via the empirical-cdf sampling algorithm above (hence, the resampling size is N = n here). The rationale is that the empirical cdf F_n is close to the actual cdf F and gets closer as n gets larger. Hence, any quantities depending on F, such as E_F h(Y), where h is a function, can be approximated by E_{F_n} h(Y). The latter is usually still difficult to evaluate, but it can be simply estimated via CMC as

    (1/K) Σ_{k=1}^K h(Y_k*),

where Y_1*, …, Y_K* are independent random variables, each distributed as Y* = t(T*). This seemingly self-referent procedure is called bootstrapping, alluding to Baron von Münchhausen, who pulled himself out of a swamp by his own bootstraps.
14,858 | As an example, the bootstrap estimate of the expectation of Y is

    Ȳ* = (1/K) Σ_{k=1}^K Y_k*,

which is simply the sample mean of {Y_k*}. Similarly, the bootstrap estimate for Var Y is the sample variance

    Var* Y = (1/(K−1)) Σ_{k=1}^K (Y_k* − Ȳ*)².

Bootstrap estimators for the bias and MSE are Ȳ* − y and (1/K) Σ_{k=1}^K (Y_k* − y)², respectively. Note that for these estimators the unknown quantity μ is replaced with its original estimator y = t(τ). Confidence intervals can be constructed in the same fashion. We mention two variants: the normal method and the percentile method. In the normal method, a (1 − α) confidence interval for μ is given by

    ( Y − z_{1−α/2} S*,  Y + z_{1−α/2} S* ),

where S* is the bootstrap estimate of the standard deviation of Y; that is, the square root of the bootstrap sample variance above. In the percentile method, the upper and lower bounds of the (1 − α) confidence interval for μ are given by the (1 − α/2) and α/2 quantiles of Y, which in turn are estimated via the corresponding sample quantiles of the bootstrap sample {Y_k*}.
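Both interval variants are easy to compute once the bootstrap replicates are available. Below is a minimal sketch of ours (not the book's code), using the sample median of arbitrarily simulated data as the statistic of interest.

import numpy as np

np.random.seed(2)
x = np.random.exponential(size=50)      # original data (arbitrary example)
K = 5000
ystar = np.array([np.median(np.random.choice(x, len(x), replace=True))
                  for _ in range(K)])   # bootstrap replicates of the statistic
z = 1.96                                # 0.975 quantile of N(0,1)
# normal method
lo_n, hi_n = np.median(x) - z*ystar.std(), np.median(x) + z*ystar.std()
# percentile method
lo_p, hi_p = np.quantile(ystar, [0.025, 0.975])
print((lo_n, hi_n), (lo_p, hi_p))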
14,859 | The following example illustrates the usefulness of the bootstrap method for ratio estimation and also introduces the renewal reward process as a model for data.

Example (Bootstrapping the Ratio Estimator). A common scenario in stochastic simulation is that the output of the simulation consists of independent pairs of data (C_1, R_1), (C_2, R_2), …, where each C_i is interpreted as the length of a period of time, a so-called cycle, and R_i is the reward obtained during that cycle. Such a collection of random variables {(C_i, R_i)} is called a renewal reward process. Typically, the reward R_i depends on the cycle length C_i. Let A_t be the average reward earned by time t; that is, A_t = Σ_{i=1}^{N_t} R_i / t, where N_t = max{n : C_1 + ··· + C_n ≤ t} counts the number of complete cycles at time t. It can be shown (see the exercises) that if the expectations of the cycle length and the reward are finite, then A_t converges to the constant E R / E C as t → ∞. This ratio can thus be interpreted as the long-run average reward.

Estimation of the ratio E R / E C from data (c_1, r_1), …, (c_n, r_n) is easy: take the ratio estimator r̄/c̄. However, this estimator is not unbiased and it is not obvious how to derive confidence intervals. Fortunately, the bootstrap method can come to the rescue: simply resample the pairs {(c_i, r_i)}, obtain ratio estimators A_1*, …, A_K*, and from these compute quantities of interest such as confidence intervals.

As a concrete example, let us return to the Markov chain considered earlier. Recall that the chain starts at state 1 at time t = 0. After a certain amount of time, the process returns to state 1. These return times define natural "cycles" for this process, as from each return onwards the process behaves probabilistically exactly the same as when it started, independently of the past states X_0, …, X_{t−1}. Thus, if we define T_0 = 0 and let T_i be the i-th time that the chain returns to state 1, then we can break up the time interval into independent cycles of lengths C_i = T_i − T_{i−1}, i = 1, 2, …. Now suppose that during the i-th cycle a reward

    R_i = Σ_{t=T_{i−1}}^{T_i − 1} ϱ^{t − T_{i−1}} r(X_t)

is received, where r(i) is some fixed reward for visiting state i ∈ {1, 2, 3, 4} and ϱ ∈ (0, 1) is a discounting factor. Clearly, {(C_i, R_i)} is a renewal reward process. The figure below shows the outcomes of N = 1000 pairs (C, R), with the rewards r(1), …, r(4) and the discounting factor ϱ as specified in the program below.

[Figure: each circle represents a (cycle length, reward) pair; the varying circle sizes indicate the number of occurrences of each pair.]

The long-run average reward is estimated as r̄/c̄ for our data. But how accurate is this estimate? The next figure shows a density plot of the bootstrapped ratio estimates, where we independently resampled the data pairs K = 1000 times.

[Figure: density plot of the bootstrapped ratio estimates for the Markov chain renewal reward process.]
14,860 | The density plot indicates the region in which the true long-run average reward lies with high confidence; more precisely, a bootstrap confidence interval (percentile method) can be computed from the bootstrapped ratio estimates. The following Python script spells out the procedure.

ratioest.py

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from numba import jit

np.random.seed(123)
P = np.array([[0, 0.2, 0.5, 0.3],
              [0.5, 0, 0.5, 0],
              [0.3, 0.7, 0, 0],
              [0.1, 0, 0, 0.9]])   # one-step transition matrix, as before
r = np.array([4, 3, 10, 1])       # rewards r(1),...,r(4) (illustrative values)
rho = 0.9                         # discounting factor (illustrative value)
N = 1000
Corg = np.array(np.zeros((N, 1)))
Rorg = np.array(np.zeros((N, 1)))

@jit()  # for speed-up; see the appendix
def generate_cyclereward(N):
    for i in range(N):
        t = 1
        xreg = 1   # regenerative state (out of 1, 2, 3, 4)
        reward = r[0]
        x = np.amin(np.argwhere(np.cumsum(P[xreg - 1, :]) > np.random.rand())) + 1
        while x != xreg:
            t += 1
            reward += rho**(t - 1) * r[x - 1]
            x = np.amin(np.where(np.cumsum(P[x - 1, :]) > np.random.rand())) + 1
        Corg[i] = t
        Rorg[i] = reward
    return Corg, Rorg

Corg, Rorg = generate_cyclereward(N)
Aorg = np.mean(Rorg)/np.mean(Corg)
K = 1000
A = np.array(np.zeros((K, 1)))
C = np.array(np.zeros((N, 1)))
R = np.array(np.zeros((N, 1)))
for i in range(K):
    ind = np.ceil(N*np.random.rand(1, N)).astype(int)[0] - 1
    C = Corg[ind]
    R = Rorg[ind]
    A[i] = np.mean(R)/np.mean(C)
plt.xlabel('long-run average reward')
plt.ylabel('density')
sns.kdeplot(A.flatten(), shade=True)
plt.show()
14,861 | Variance Reduction

The estimation of performance measures in Monte Carlo simulation can be made more efficient by utilizing known information about the simulation model. Variance reduction techniques include antithetic variables, control variables, importance sampling, conditional Monte Carlo, and stratified sampling; see, for example, [ ]. We shall only deal with control variables and importance sampling here.

Suppose Y is the output of a simulation experiment. A random variable Ỹ, obtained from the same simulation run, is called a control variable for Y if Y and Ỹ are correlated (negatively or positively) and the expectation of Ỹ is known. The use of control variables for variance reduction is based on the following theorem. We leave its proof as an exercise.

Theorem (Control Variable Estimation). Let Y_1, …, Y_N be the output of N independent simulation runs, and let Ỹ_1, …, Ỹ_N be the corresponding control variables, with E Ỹ_k = μ̃ known. Let ϱ be the correlation coefficient between each Y_k and Ỹ_k. For each α ∈ ℝ the estimator

    μ̂^(c) = (1/N) Σ_{k=1}^N [ Y_k − α (Ỹ_k − μ̃) ]

is an unbiased estimator for μ = E Y. The minimal variance of μ̂^(c) is

    Var μ̂^(c) = (1/N) (1 − ϱ²) Var Y,

which is obtained for α = ϱ √(Var Y / Var Ỹ).

From this we see that, by using the optimal α, the variance of the control variate estimator is a factor 1 − ϱ² smaller than the variance of the crude Monte Carlo estimator. Thus, if Ỹ is highly correlated with Y, a significant variance reduction can be achieved. The optimal α is usually unknown, but it can be easily estimated from the sample covariance matrix of the {(Y_k, Ỹ_k)}. In the next example, we estimate the multiple integral of the earlier Monte Carlo integration example using control variables.

Example (Monte Carlo Integration with Control Variables). The random variable Y = (2π)^{3/2} √|X_1 + X_2 + X_3| is positively correlated with the random variable Ỹ = X_1² + X_2² + X_3², for the same choice of X_1, X_2, X_3 ∼iid N(0, 1), and E Ỹ = 3 is known. We can therefore use Ỹ as a control variable to estimate μ = E Y. The following Python program is based on the theorem above. It imports the crude Monte Carlo sampling code from the Monte Carlo integration example.
14,862 | mcintCV.py

from mcint import *

Yc = np.sum(x**2, axis=1)    # control variable data
yc = 3                       # true expectation of the control variable
C = np.cov(y, Yc)            # sample covariance matrix
cor = C[0][1] / np.sqrt(C[0][0] * C[1][1])
alpha = C[0][1] / C[1][1]
est = np.mean(y - alpha*(Yc - yc))
RECV = np.sqrt((1 - cor**2) * C[0][0]/N)/est   # estimated relative error
print('estimate = {:3.3f}, CI = ({:3.3f},{:3.3f}), corr = {:3.3f}'.format(
      est, est*(1 - 1.96*RECV), est*(1 + 1.96*RECV), cor))

A typical estimate of the correlation coefficient is moderate, which gives a reduction of the variance by the factor 1 − ϱ²; that is, a simulation speed-up compared with crude Monte Carlo. Although the gain is small in this case, due to the modest correlation between Y and Ỹ, little extra work was required to achieve this variance reduction.

One of the most important variance reduction techniques is importance sampling. This technique is especially useful for the estimation of very small probabilities. The standard setting is the estimation of a quantity

    μ = E_f H(X) = ∫ H(x) f(x) dx,

where H is a real-valued function and f the probability density of a random vector X, called the nominal pdf. The subscript f is added to the expectation operator to indicate that it is taken with respect to the density f. Let g be another probability density such that g(x) = 0 implies that H(x) f(x) = 0. Using the density g we can represent μ as

    μ = ∫ H(x) (f(x)/g(x)) g(x) dx = E_g [ H(X) f(X)/g(X) ].

Consequently, if X_1, …, X_N ∼iid g, then

    μ̂ = (1/N) Σ_{k=1}^N H(X_k) f(X_k)/g(X_k)

is an unbiased estimator of μ. This estimator is called the importance sampling estimator and g is called the importance sampling density. The ratio of densities, f(x)/g(x), is called the likelihood ratio.
14,863 | The importance sampling procedure is summarized as follows.

Algorithm (Importance Sampling Estimation)
Input: function H, importance sampling density g such that g(x) > 0 for all x for which H(x) f(x) ≠ 0, sample size N, confidence level 1 − α.
Output: point estimate and approximate (1 − α) confidence interval for μ = E_f H(X), where X ∼ f.
1. Simulate X_1, …, X_N ∼iid g and let Y_i = H(X_i) f(X_i)/g(X_i), i = 1, …, N.
2. Estimate μ via μ̂ = Ȳ and determine an approximate (1 − α) confidence interval as

    I = ( Ȳ − z_{1−α/2} S/√N,  Ȳ + z_{1−α/2} S/√N ),

where z_γ denotes the γ-quantile of the N(0, 1) distribution and S is the sample standard deviation of Y_1, …, Y_N.
3. Return μ̂ and the interval I.

Example (Importance Sampling). Let us examine the workings of importance sampling by estimating the area, μ say, under the graph of the function

    H(x) = e^{−√(x_1² + x_2²)/4} ( sin(2√(x_1² + x_2²)) + 1 ),  x ∈ ℝ².

We saw a similar function in the random walk sampler example (but note the different domain). A natural approach to estimate the area is to truncate the domain to the square [−b, b]², for large enough b, and to estimate the integral

    μ_b = ∫ (H(x)/f(x)) f(x) dx = E_f [ H(X)/f(X) ]

via crude Monte Carlo, where f(x) = 1/(2b)², x ∈ [−b, b]², is the pdf of the uniform distribution on [−b, b]². Here is the Python code which does just that.

impsamp.py

import numpy as np
from numpy import exp, sqrt, sin, pi, log, cos
from numpy.random import rand

H = lambda x1, x2: exp(-sqrt(x1**2 + x2**2)/4)*(sin(2*sqrt(x1**2 + x2**2)) + 1)
b = 1000
N = 10**6
X = -b + 2*b*rand(N, 1)
Y = -b + 2*b*rand(N, 1)
Z = (2*b)**2 * H(X, Y)           # H(x)/f(x), with f the uniform pdf on [-b,b]^2
estCMC = np.mean(Z).item()       # to obtain a scalar
RECMC = (np.std(Z)/estCMC/np.sqrt(N)).item()
print('estimate = {:3.3f}, CI = ({:3.3f},{:3.3f}), RE = {:3.4f}'.format(
      estCMC, estCMC*(1 - 1.96*RECMC), estCMC*(1 + 1.96*RECMC), RECMC))
14,864 | For a large truncation level b and a sample size of N = 10^6, the estimated relative error of this crude Monte Carlo estimator is noticeable. We have two sources of error here. The first is the error in approximating μ by μ_b. However, as the function H decays exponentially fast in the radius, a moderately large b is more than enough to ensure this error is negligible. The second type of error is the statistical error, due to the estimation process itself. This can be quantified by the estimated relative error, and can be reduced by increasing the sample size N.

Let us now consider an importance sampling approach in which the importance sampling pdf g is radially symmetric and decays exponentially in the radius, similar to the function H. In particular, we simulate X = (X_1, X_2) in a way akin to the Box-Muller example, by first generating a radius R ∼ Exp(λ) and an angle Θ ∼ U(0, 2π) independently, and then returning X_1 = R cos(Θ) and X_2 = R sin(Θ). By the transformation rule, we then have

    g(x) = f_{R,Θ}(r, θ)/r = λ e^{−λ√(x_1² + x_2²)} / ( 2π √(x_1² + x_2²) ).

The following code, which imports the one given above, implements the importance sampling steps, using the parameter λ given in the code.

impsamp2.py

from impsamp import *

lam = 0.1   # decay rate of the proposal, chosen smaller than 1/4 (illustrative)
g = lambda x1, x2: lam*exp(-lam*sqrt(x1**2 + x2**2))/sqrt(x1**2 + x2**2)/(2*pi)
U = rand(N, 1)
V = rand(N, 1)
R = -log(U)/lam
X = R*cos(2*pi*V)
Y = R*sin(2*pi*V)
Z = H(X, Y)/g(X, Y)
estIS = np.mean(Z).item()    # obtain a scalar
REIS = (np.std(Z)/estIS/np.sqrt(N)).item()
print('estimate = {:3.3f}, CI = ({:3.3f},{:3.3f}), RE = {:3.4f}'.format(
      estIS, estIS*(1 - 1.96*REIS), estIS*(1 + 1.96*REIS), REIS))

A typical estimate now has a much smaller estimated relative error, giving a substantial variance reduction; the corresponding approximate confidence interval is considerably narrower than in the CMC case. Of course, we could have reduced the truncation level b to improve the performance of CMC, but then the approximation error might become more significant. For the importance sampling case, the relative error is hardly affected by the threshold level, but does depend on the choice of λ. We chose λ such that the decay rate of g is slower than the decay rate of the function H, which is 1/4.

As illustrated in the above example, a main difficulty in importance sampling is how to choose the importance sampling distribution. A poor choice of g may seriously affect the accuracy of both the estimate and the confidence interval.
14,865 | The theoretically optimal choice g* for the importance sampling density minimizes the variance of μ̂ and is therefore the solution to the functional minimization program

    min_g Var_g ( H(X) f(X)/g(X) ).

It is not difficult to show (see the exercises) that if either H(x) ≥ 0 or H(x) ≤ 0 for all x, then the optimal importance sampling pdf is

    g*(x) = H(x) f(x)/μ.

Namely, in this case Var_{g*} μ̂ = Var_{g*}( H(X) f(X)/g*(X) ) = Var_{g*} μ = 0, so that the estimator μ̂ is constant under g*. An obvious difficulty is that the evaluation of the optimal importance sampling density g* is usually not possible, since g*(x) depends on the unknown quantity μ. Nevertheless, one can typically choose a good importance sampling density "close" to the minimum variance density g*.

One of the main considerations for choosing a good importance sampling pdf is that the estimator μ̂ should have finite variance. This is equivalent to the requirement that

    E_g [ H²(X) f²(X)/g²(X) ] = E_f [ H²(X) f(X)/g(X) ] < ∞.

This suggests that g should not have lighter tails than f, and that, preferably, the likelihood ratio, f/g, should be bounded.

Monte Carlo for Optimization

In this section we describe several Monte Carlo methods for optimization. Such randomized algorithms can be useful for solving optimization problems with many local optima and complicated constraints, possibly involving a mix of continuous and discrete variables. Randomized algorithms are also used to solve noisy optimization problems, in which the objective function is unknown and has to be obtained via Monte Carlo simulation.

Simulated Annealing

Simulated annealing is a Monte Carlo technique for minimization that emulates the physical state of atoms in metal when the metal is heated up and then slowly cooled down. When the cooling is performed very slowly, the atoms settle down to a minimum-energy state. Denoting the state as x and the energy of a state as S(x), the probability distribution of the (random) states is described by the Boltzmann pdf

    f(x) ∝ e^{−S(x)/(kT)},  x ∈ X,

where k is Boltzmann's constant and T is the temperature.
14,866 | Going beyond the physical interpretation, suppose that S(x) is an arbitrary function to be minimized, with x taking values in some discrete or continuous set X. The Gibbs pdf corresponding to S(x) is defined as

    f_T(x) = e^{−S(x)/T} / z_T,  x ∈ X,

provided that the normalization constant z_T := Σ_x exp(−S(x)/T) is finite. Note that this is simply the Boltzmann pdf with the Boltzmann constant removed. As T → 0, the pdf becomes more and more peaked around the set of global minimizers of S. The idea of simulated annealing is to create a sequence of points X_1, X_2, … that are approximately distributed according to pdfs f_{T_1}(x), f_{T_2}(x), …, where T_1, T_2, … is a sequence of "temperatures" that decreases (is "cooled") to 0, known as the annealing schedule. If each X_t were sampled exactly from f_{T_t}, then X_t would converge to a global minimum of S(x) as T_t → 0. However, in practice sampling is approximate and convergence to a global minimum is not assured. A generic simulated annealing algorithm is as follows.

Algorithm (Simulated Annealing)
Input: annealing schedule T_1, T_2, …, function S, initial value x_0.
Output: approximations to the global minimizer x* and minimum value S(x*).
1. Set X_0 = x_0 and t = 1.
2. while not stopping:
3.   Approximately simulate X_t from f_{T_t}(x).
4.   t = t + 1
5. Return X_t and S(X_t).

A popular annealing schedule is geometric cooling, where T_t = β T_{t−1}, t = 1, 2, …, for a given initial temperature T_0 and a cooling factor β ∈ (0, 1). Appropriate values for T_0 and β are problem-dependent, and this has traditionally required tuning on the part of the user. A possible stopping criterion is to stop after a fixed number of iterations, or when the temperature is "small enough".

Approximate sampling from a Gibbs distribution is most often carried out via Markov chain Monte Carlo. For each iteration t, the Markov chain should theoretically run for a large number of steps to accurately sample from the Gibbs pdf f_{T_t}. However, in practice, one often runs only a single step of the Markov chain before updating the temperature, as in the algorithm below. To sample from the Gibbs distribution f_T, this algorithm uses a random walk Metropolis-Hastings sampler. The acceptance probability of a proposal y is thus

    α(x, y) = min{ e^{−S(y)/T} / e^{−S(x)/T}, 1 } = min{ e^{−(S(y) − S(x))/T}, 1 }.

Hence, if S(y) ≤ S(x), then the proposal is always accepted. Otherwise, the proposal is accepted with probability exp( −(S(y) − S(x))/T ).
14,867 | Algorithm: Simulated Annealing with a Random Walk Sampler
input: objective function $S$, starting state $X_1$, initial temperature $T_1$, number of iterations $N$, symmetric proposal density $q(y \mid x)$, cooling factor $\beta$.
output: approximate minimizer and minimum value of $S$.
1. for $t = 1$ to $N - 1$ do
2.   Simulate a new state $Y$ from the symmetric proposal $q(y \mid X_t)$.
3.   if $S(Y) \leq S(X_t)$ then $X_{t+1} \leftarrow Y$
4.   else
5.     Draw $U \sim \mathcal U(0, 1)$.
6.     if $U \leq \mathrm e^{-(S(Y) - S(X_t))/T_t}$ then $X_{t+1} \leftarrow Y$ else $X_{t+1} \leftarrow X_t$.
7.   $T_{t+1} \leftarrow \beta T_t$.
8. return $X_N$ and $S(X_N)$.

Example (Simulated Annealing for Minimization). Let us minimize the "wiggly" function depicted in the lower panel of the figure below, given by a function of the form
$$S(x) = \begin{cases} -\,\mathrm e^{x^2/100}\, \sin^5\!\big(13x - x^4\big)\, \sin^2\!\big(1 - 3x^2\big), & |x| \leq 2, \\ \infty, & \text{otherwise}, \end{cases}$$
where the specific constants are illustrative and match the code that follows.

[Figure: lower panel, the "wiggly" function $S(x)$; upper panel, three (normalized) Gibbs pdfs for decreasing temperatures $T$. As the temperature decreases, the Gibbs pdf converges to the pdf that has all its mass concentrated at the minimizer of $S$.]
14,868 | The function has many local minima and maxima. The figure also illustrates the relationship between $T$ and the (unnormalized) Gibbs pdf $f_T$. The following Python code implements a slight variant of the algorithm where, instead of stopping after a fixed number of iterations, the algorithm stops when the temperature drops below a small threshold. Alternatively, it can be useful to stop when consecutive function values are closer than some small distance to each other, or when the best found function value has not changed over a fixed number of iterations. For a "current" state $x$, the proposal state $Y$ is here drawn from a normal distribution centered at $x$. We use geometric cooling and set the initial state to $x = 0$.

simann.py

import numpy as np
import matplotlib.pyplot as plt

def wiggly(x):
    # highly multimodal test function; the constants are illustrative,
    # standing in for the original example's values
    y = -np.exp(x**2/100)*np.sin(13*x - x**4)**5*np.sin(1 - 3*x**2)**2
    y[np.abs(x) > 2] = float('inf')   # restrict to a bounded interval
    return y

S = wiggly
beta = 0.999     # cooling factor (illustrative)
sig = 0.5        # proposal standard deviation (illustrative)
T = 1            # initial temperature
x = np.array([0.0])
xx = [x]
sx = S(x)
while T > 10**-3:
    T = beta*T
    y = x + sig*np.random.randn(1)                 # random walk proposal
    sy = S(y)
    alpha = min(np.exp(-(sy - sx)/T)[0], 1.0)      # acceptance probability
    if np.random.uniform() < alpha:
        x, sx = y, sy
    xx = np.hstack((xx, x))
print('minimizer = {:.3f}, minimum = {:.3f}'.format(x[0], sx[0]))
plt.plot(xx)
plt.show()

The plot produced by this code depicts a realization of the sequence of states $X_t$. After initially fluctuating wildly, the sequence settles down close to the global minimizer, and the printed minimizer and minimum approximate the global optimizer and minimum value of $S$.

[Figure: typical states $X_t$ generated by the simulated annealing algorithm, plotted against the iteration number $t$.]
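A single annealing run can stall in a local minimum when the schedule cools too quickly. A common safeguard is to repeat the run from several random starting points and keep the best result. The sketch below is our own illustration of this, not part of the book's code; it reuses wiggly from simann.py above, and all parameter defaults are illustrative.

import numpy as np
from simann import wiggly

def sa_run(S, x0, T=1.0, beta=0.999, sig=0.5, Tmin=1e-3, rng=None):
    # one simulated annealing run with a Gaussian random walk proposal
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    sx = float(S(np.array([x]))[0])
    while T > Tmin:
        T *= beta
        y = x + sig*rng.standard_normal()
        sy = float(S(np.array([y]))[0])
        # accept improvements always; accept worse moves with Boltzmann probability
        if sy <= sx or rng.uniform() < np.exp(-(sy - sx)/T):
            x, sx = y, sy
    return x, sx

rng = np.random.default_rng(123)
results = [sa_run(wiggly, rng.uniform(-2, 2), rng=rng) for _ in range(10)]
best = min(results, key=lambda r: r[1])
print('best of 10 restarts: x = {:.4f}, S(x) = {:.4f}'.format(best[0], best[1]))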
14,869 | The Cross-Entropy Method

The cross-entropy (CE) method [ ] is a simple Monte Carlo algorithm that can be used for both optimization and estimation. The basic idea of the CE method for minimizing a function $S(\mathbf x)$ on a set $\mathcal X$ is to define a parametric family of probability densities $\{g(\cdot \mid \mathbf v),\, \mathbf v \in \mathcal V\}$ on $\mathcal X$ and to iteratively update the parameter $\mathbf v$ so that $g(\cdot \mid \mathbf v)$ places more mass on states $\mathbf x$ that have smaller $S$ values than on the previous iteration. In particular, the CE algorithm has two basic phases:
- Sampling: samples $\mathbf X_1, \ldots, \mathbf X_N$ are drawn independently according to $g(\cdot \mid \mathbf v)$. The objective function $S$ is evaluated at these points.
- Updating: a new parameter $\widehat{\mathbf v}$ is selected on the basis of those $\mathbf X_i$ for which $S(\mathbf X_i) \leq \gamma$ for some level $\gamma$. These $\{\mathbf X_i\}$ form the elite sample set, $\mathcal E$.
At each iteration the level parameter $\gamma$ is chosen as the worst of the $N_{\text{elite}} = \lceil \varrho N \rceil$ best performing samples, where $\varrho \in (0,1)$ is the rarity parameter — typically $\varrho = 0.1$ or $0.01$. The parameter $\mathbf v$ is updated as a smoothed average $\alpha \widehat{\mathbf v} + (1 - \alpha)\mathbf v$, where $\alpha \in (0, 1]$ is the smoothing parameter and
$$\widehat{\mathbf v} = \underset{\mathbf v \in \mathcal V}{\mathrm{argmax}} \sum_{\mathbf X \in \mathcal E} \ln g(\mathbf X \mid \mathbf v).$$
This updating rule is the result of minimizing the Kullback–Leibler divergence between the conditional density of $\mathbf X \sim g(\cdot \mid \mathbf v)$ given $S(\mathbf X) \leq \gamma$, and $g(\cdot \mid \mathbf v)$; see [ ]. Note that the rule yields the maximum likelihood estimator (MLE) of $\mathbf v$ based on the elite samples. Hence, for many specific families of distributions, explicit solutions can be found. An important example is where $\mathbf X \sim \mathcal N(\boldsymbol\mu, \mathrm{diag}(\boldsymbol\sigma^2))$; that is, $\mathbf X$ has independent Gaussian
14,870 | components. In this case, the mean vector $\boldsymbol\mu$ and the vector of variances $\boldsymbol\sigma^2$ are simply updated via the sample mean and sample variance of the elite samples. This is known as normal updating. A generic CE procedure for minimization is given in the following algorithm.

Algorithm: Cross-Entropy Method for Minimization
input: function $S$, initial sampling parameter $\mathbf v_0$, sample size $N$, rarity parameter $\varrho$, smoothing parameter $\alpha$.
output: approximate minimum of $S$ and optimal sampling parameter.
1. Initialize: set $N_{\text{elite}} \leftarrow \lceil \varrho N \rceil$ and $t \leftarrow 0$.
2. while the stopping criterion is not met do
3.   $t \leftarrow t + 1$.
4.   Simulate an iid sample $\mathbf X_1, \ldots, \mathbf X_N$ from the density $g(\cdot \mid \mathbf v_{t-1})$.
5.   Evaluate the performances $S(\mathbf X_1), \ldots, S(\mathbf X_N)$ and sort them from smallest to largest: $S_{(1)} \leq \cdots \leq S_{(N)}$.
6.   Let $\gamma_t$ be the sample $\varrho$-quantile of the performances: $\gamma_t \leftarrow S_{(N_{\text{elite}})}$.
7.   Determine the set of elite samples $\mathcal E_t \leftarrow \{\mathbf X_i : S(\mathbf X_i) \leq \gamma_t\}$.
8.   Let $\widehat{\mathbf v}_t$ be the MLE of the elite samples: $\widehat{\mathbf v}_t \leftarrow \mathrm{argmax}_{\mathbf v} \sum_{\mathbf X \in \mathcal E_t} \ln g(\mathbf X \mid \mathbf v)$.
9.   Update the sampling parameter as $\mathbf v_t \leftarrow \alpha \widehat{\mathbf v}_t + (1 - \alpha)\mathbf v_{t-1}$.
10. return $\gamma_t$, $\mathbf v_t$.

The CE algorithm produces a sequence of pairs $(\gamma_1, \mathbf v_1), (\gamma_2, \mathbf v_2), \ldots$, such that $\gamma_t$ converges (approximately) to the minimal function value, and $g(\cdot \mid \mathbf v_t)$ to a degenerate pdf that (approximately) concentrates all its mass at a minimizer of $S$. A possible stopping condition is to stop when the sampling distribution $g(\cdot \mid \mathbf v_t)$ is sufficiently close to a degenerate distribution. For normal updating this means that the standard deviation is sufficiently small. The output of the CE algorithm could also include the overall best function value and corresponding solution. In the following example, we minimize the same function as in the simulated annealing example, but instead use the CE algorithm.

Example (Cross-Entropy Method for Minimization). In this case we take the family of normal distributions $\{\mathcal N(\mu, \sigma^2)\}$ for the sampling step of the algorithm, starting with mean $0$ and a relatively large standard deviation $\sigma_0$ (the values here follow the code below and are illustrative). The choice of the initial parameter is quite arbitrary, as long as $\sigma_0$ is large enough to sample a wide range of points. We take $N = 100$ samples at each iteration, set $\varrho = 0.1$, and keep the $N_{\text{elite}} = \lceil \varrho N \rceil = 10$ smallest ones as the elite samples. The parameters $\mu$ and $\sigma$ are then updated via the sample mean and sample standard deviation
14,871 | of the elite samples. In this case we do not use any smoothing ($\alpha = 1$). In the following Python code, the matrix Sx stores the $x$-values in the first column and the function values in the second column. The rows of this matrix are sorted in ascending order of the function values, giving the matrix sortSx. The first $N_{\text{elite}}$ rows of this sorted matrix correspond to the elite samples and their function values. The updating of $\mu$ and $\sigma$ is done via the sample mean and sample standard deviation of the elite $x$-values. The figure below shows how the pdfs of the $\mathcal N(\mu_t, \sigma_t^2)$ sampling distributions degenerate to the point mass at the global minimizer.

cemethod.py

from simann import wiggly
import numpy as np

np.set_printoptions(precision=3)
mu, sigma = 0, 3            # initial sampling parameters (illustrative)
N, Nel, eps = 100, 10, 10**-5
S = wiggly
while sigma > eps:
    X = np.random.randn(N, 1)*sigma + mu
    Sx = np.hstack((X, S(X)))
    sortSx = Sx[Sx[:, 1].argsort(), :]    # sort rows by function value
    elite = sortSx[0:Nel, 0]              # elite x-values
    mu = np.mean(elite, axis=0)
    sigma = np.std(elite, axis=0)
    print('S(mu) = {}, mu = {}, sigma = {}'.format(S(np.array([mu]))[0], mu, sigma))

A typical run shows $S(\mu_t)$ decreasing at each iteration while $\sigma_t$ shrinks towards $0$, with $\mu_t$ settling at the global minimizer.

[Figure: the normal pdfs of the first six sampling distributions, truncated to the interval $[-2, 2]$. The initial sampling distribution is $\mathcal N(0, \sigma_0^2)$.]
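The example above uses no smoothing ($\alpha = 1$). When the sampling distribution threatens to degenerate prematurely, the smoothed update $\mathbf v_t = \alpha \widehat{\mathbf v}_t + (1-\alpha)\mathbf v_{t-1}$ can help. The sketch below is our own illustration, not the book's code; it reuses wiggly from simann.py, and the choice $\alpha = 0.7$ is arbitrary.

from simann import wiggly
import numpy as np

def ce_minimize(S, mu=0.0, sigma=3.0, N=100, rho=0.1, alpha=0.7, eps=1e-5):
    # CE with smoothing: the parameters are moved only part of the way
    # towards the elite-sample MLE at each iteration
    Nel = int(np.ceil(rho*N))
    while sigma > eps:
        X = mu + sigma*np.random.randn(N, 1)
        idx = np.argsort(S(X)[:, 0])                  # ascending function values
        elite = X[idx[:Nel], 0]
        mu = alpha*np.mean(elite) + (1 - alpha)*mu        # smoothed mean
        sigma = alpha*np.std(elite) + (1 - alpha)*sigma   # smoothed std
    return mu

print('CE with smoothing found x = {:.4f}'.format(ce_minimize(wiggly)))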
14,872 | Splitting for Optimization

Minimizing a function $S(\mathbf x)$, $\mathbf x \in \mathcal X$, is closely related to drawing a random sample from a level set of the form $\{\mathbf x \in \mathcal X : S(\mathbf x) \leq \gamma\}$. Suppose $S$ has minimum value $\gamma^*$, attained at $\mathbf x^*$. As long as $\gamma \geq \gamma^*$, this level set contains the minimizer. Moreover, if $\gamma$ is close to $\gamma^*$, the volume of this level set will be small, so a randomly selected point from this set is expected to be close to $\mathbf x^*$. Thus, by gradually decreasing the level parameter $\gamma$, the level sets will gradually shrink towards the set $\{\mathbf x^*\}$. Indeed, the CE method was developed with exactly this connection in mind; see, e.g., [ ]. Note that the CE method employs a parametric sampling distribution to obtain samples from the level sets (the elite samples). In [ ], a non-parametric sampling mechanism is introduced that uses an evolving collection of particles. The resulting optimization algorithm, called splitting for continuous optimization (SCO), provides a fast and accurate way to optimize complicated continuous functions. The details of SCO are given in the following algorithm.

Algorithm: Splitting for Continuous Optimization (SCO)
input: objective function $S$, sample size $N$, rarity parameter $\varrho$, scale factor $w$, a bounded region $\mathcal B$ that is known to contain a global minimizer, and a maximum number of attempts MaxTry.
output: final iteration number $t$ and the sequence $(\mathbf x_{\text{best},1}, b_1), \ldots, (\mathbf x_{\text{best},t}, b_t)$ of best solutions and function values at each iteration.
1. Simulate $\mathcal Y_0 = \{\mathbf X_1, \ldots, \mathbf X_N\}$ uniformly on the set $\mathcal B$, set $N_{\text{elite}} \leftarrow \lceil \varrho N \rceil$ and $t \leftarrow 0$.
2. while the stopping condition is not satisfied do
3.   Determine the $N_{\text{elite}}$ smallest values, $S_{(1)} \leq \cdots \leq S_{(N_{\text{elite}})}$, of $\{S(\mathbf X) : \mathbf X \in \mathcal Y_t\}$, and store the corresponding vectors $\mathbf X_{(1)}, \ldots, \mathbf X_{(N_{\text{elite}})}$ in $\mathcal X_{t+1}$. Set $b_{t+1} \leftarrow S_{(1)}$ and $\mathbf x_{\text{best},t+1} \leftarrow \mathbf X_{(1)}$.
4.   Draw $(B_1, \ldots, B_{N_{\text{elite}}})$ uniformly at random among the binary vectors with $\sum_i B_i = N \bmod N_{\text{elite}}$, and let $R_i \leftarrow \lfloor N/N_{\text{elite}} \rfloor + B_i$ be the random splitting factor of the $i$-th elite sample, so that $\sum_i R_i = N$.
5.   for $i = 1$ to $N_{\text{elite}}$ do
6.     $\mathbf Y \leftarrow \mathbf X_{(i)}$.
7.     for $j = 1$ to $R_i$ do
8.       Draw $I \in \{1, \ldots, N_{\text{elite}}\} \setminus \{i\}$ uniformly and let $\boldsymbol\sigma_i \leftarrow w\,|\mathbf X_{(i)} - \mathbf X_{(I)}|$ (componentwise).
9.       Simulate a uniform random permutation $\pi = (\pi_1, \ldots, \pi_n)$ of $(1, \ldots, n)$.
10.      for $k = 1$ to $n$ do
11.        for Try $= 1$ to MaxTry do
12.          $\mathbf Y' \leftarrow \mathbf Y$, and replace the component $Y'_{\pi_k}$ with $Y_{\pi_k} + \sigma_{i,\pi_k} Z$, where $Z \sim \mathcal N(0, 1)$.
13.          if $S(\mathbf Y') < S(\mathbf Y)$ then $\mathbf Y \leftarrow \mathbf Y'$ and break.
14.      Add $\mathbf Y$ to $\mathcal Y_{t+1}$.
15.  $t \leftarrow t + 1$.
16. return $\{(\mathbf x_{\text{best},k}, b_k)\}$, $k = 1, \ldots, t$.
14,873 | At iteration $t = 0$, the algorithm starts with a population of particles $\mathcal Y_0 = \{\mathbf X_1, \ldots, \mathbf X_N\}$ that are uniformly generated on some bounded region $\mathcal B$, which is large enough to contain a global minimizer. The function values of all particles are sorted, and the best $N_{\text{elite}} = \lceil \varrho N \rceil$ of them form the elite particle set, exactly as in the CE method. Next, the elite particles are "split" into $\lfloor N/N_{\text{elite}} \rfloor$ children particles each, adding one extra child to some of the elite particles to ensure that the total number of children is again $N$. The purpose of the random draw of the $\{B_i\}$ in Step 4 is to randomize which elite particles receive an extra child. Steps 5–14 describe how the children of the $i$-th elite particle are generated. First, one of the other elite particles is selected uniformly at random; this defines an $n$-dimensional vector $\boldsymbol\sigma_i$ whose components are the absolute differences between the vectors $\mathbf X_{(i)}$ and $\mathbf X_{(I)}$, multiplied by a constant $w$; that is,
$$\boldsymbol\sigma_i = w\,\big|\mathbf X_{(i)} - \mathbf X_{(I)}\big| := w\,\big[\,|X_{(i),1} - X_{(I),1}|, \ldots, |X_{(i),n} - X_{(I),n}|\,\big]^\top.$$
Next, a uniform random permutation $\pi$ of $(1, \ldots, n)$ is simulated (see the exercises). Starting from a candidate child point $\mathbf Y$, each coordinate of $\mathbf Y$ is resampled, in the order determined by $\pi$, by adding a standard normal random variable to that component, multiplied by the corresponding component of $\boldsymbol\sigma_i$. If the resulting $\mathbf Y'$ has a function value that is less than that of $\mathbf Y$, then the new candidate is accepted; otherwise, the same coordinate is tried again. If no improvement is found in MaxTry attempts, the original component is retained. This process is performed for all elite samples, to produce the first-generation population $\mathcal Y_1$. The procedure is then repeated for $t = 1, 2, \ldots$ until some stopping criterion is met, e.g., when the best found function value does not change for a number of consecutive iterations, or when the total number of function evaluations exceeds some threshold. The best found function value and corresponding argument (particle) are returned at the conclusion of the algorithm. The input variable MaxTry governs how much computational time is dedicated to updating a component. In most cases we have encountered, small default values of MaxTry and a moderate scale factor $w$ work well. Empirically, a relatively high value for $\varrho$ works well, such as $\varrho = 0.8$ or even $1$; the latter case means that at each stage all samples from $\mathcal Y_{t-1}$ carry over to the elite set.
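A compact implementation of the SCO steps above is sketched below. This is our own condensed illustration, not the authors' code; the parameter defaults and the two-dimensional sphere test function are arbitrary choices.

import numpy as np

def sco(S, lower, upper, N=100, rho=0.4, w=0.5, maxtry=5, iters=50, rng=None):
    # splitting for continuous optimization: keep the elite particles,
    # split each into children, and improve children coordinate-wise
    rng = np.random.default_rng() if rng is None else rng
    n = len(lower)
    Y = rng.uniform(lower, upper, size=(N, n))   # initial population on B
    Nel = int(np.ceil(rho*N))
    best_x, best_val = None, np.inf
    for t in range(iters):
        vals = np.array([S(y) for y in Y])
        idx = np.argsort(vals)[:Nel]
        X, Xvals = Y[idx], vals[idx]             # elite particles
        if Xvals[0] < best_val:
            best_val, best_x = Xvals[0], X[0].copy()
        # splitting factors: floor(N/Nel) children each, extras randomized
        R = np.full(Nel, N // Nel)
        R[rng.permutation(Nel)[:N % Nel]] += 1
        children = []
        for i in range(Nel):
            y, sy = X[i].copy(), Xvals[i]
            for _ in range(R[i]):
                I = rng.choice([j for j in range(Nel) if j != i])
                sig = w*np.abs(X[i] - X[I])       # componentwise scales
                for k in rng.permutation(n):      # coordinate-wise updates
                    for _ in range(maxtry):
                        yp = y.copy()
                        yp[k] = y[k] + sig[k]*rng.standard_normal()
                        if (syp := S(yp)) < sy:
                            y, sy = yp, syp
                            break
                children.append(y.copy())
        Y = np.array(children)
    return best_x, best_val

sphere = lambda x: np.sum(x**2)
x, v = sco(sphere, lower=[-5, -5], upper=[5, 5], rng=np.random.default_rng(1))
print('SCO found x = {}, S(x) = {:.3e}'.format(np.round(x, 4), v))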
14,874 | Example (Test Problem). Hock and Schittkowski [ ] provide a rich source of test problems for multiextremal optimization. A challenging one asks for an $\mathbf x$ minimizing the function
$$S(\mathbf x) = \sum_{j=1}^{10} x_j \Big( c_j + \ln \frac{x_j}{x_1 + \cdots + x_{10}} \Big),$$
subject to $x_j > 0$, $j = 1, \ldots, 10$, and a set of linear equality constraints, where the constants $\{c_j\}$ are given in the table.

[Table: constants $c_1, \ldots, c_{10}$ for the test problem.]

The best known minimal value reported in [ ] was subsequently improved in [ ] using a genetic algorithm; the corresponding solution vector was completely different from the one in [ ]. A further improvement was found in [ ] using the CE method, giving a solution vector similar to that of the genetic algorithm. To obtain a solution with SCO, we first converted this constrained problem into a lower-dimensional one by defining the objective function $\widetilde S(\mathbf y) = S(\mathbf x(\mathbf y))$, where the $\{x_i\}$ are taken as functions of the $\{y_i\}$ via the equality constraints. We then adopted a penalty approach (see the section on penalty methods) by adding a penalty function to the original objective function,
$$\widetilde S_{\mathrm{pen}}(\mathbf y) = \widetilde S(\mathbf y) + C \sum_{i} \max\{-x_i, 0\},$$
where, again, the $\{x_i\}$ are defined in terms of the $\{y_i\}$ as above, and $C > 0$ is a penalty constant. Optimizing this last function with SCO, we found, in less time than the other algorithms, a slightly smaller function value, with a solution vector in line with the earlier solutions.

Noisy Optimization

In noisy optimization, the objective function is unknown, but estimates of function values are available, e.g., via simulation. For example, to find an optimal prediction function $g$ in supervised learning, the exact risk $\ell(g) = \mathbb E\, \mathrm{Loss}(Y, g(\mathbf X))$ is usually unknown and only estimates of the risk are available. Optimizing the risk is thus typically a noisy optimization problem.
14,875 | Noisy optimization features prominently in simulation studies, where the behavior of some system (e.g., vehicles on a road network) is simulated under certain parameters (e.g., the lengths of the traffic light intervals), and the aim is to choose those parameters optimally (e.g., to maximize the traffic throughput). For each parameter setting, the exact value of the objective function is unknown, but estimates can be obtained via the simulation.

In general, suppose the goal is to minimize a function $S(\mathbf x)$, where $S$ is unknown, but an estimate of $S(\mathbf x)$ can be obtained for any choice of $\mathbf x$. Because the gradient $\nabla S$ is unknown, one cannot directly apply classical optimization methods. The stochastic approximation method mimics the classical gradient descent method by replacing the deterministic gradient with an estimate $\widehat{\nabla S}(\mathbf x)$. A simple estimator for the $i$-th component of $\nabla S(\mathbf x)$ (that is, $\partial S(\mathbf x)/\partial x_i$) is the central difference estimator
$$\frac{\widehat S(\mathbf x + \mathbf e_i \delta/2) - \widehat S(\mathbf x - \mathbf e_i \delta/2)}{\delta},$$
where $\mathbf e_i$ denotes the $i$-th unit vector, and $\widehat S(\mathbf x + \mathbf e_i \delta/2)$ and $\widehat S(\mathbf x - \mathbf e_i \delta/2)$ can be any estimators of $S(\mathbf x + \mathbf e_i \delta/2)$ and $S(\mathbf x - \mathbf e_i \delta/2)$, respectively. The difference parameter $\delta > 0$ should be small enough to reduce the bias of the estimator, but large enough to keep the variance of the estimator small. To reduce the variance in the central difference estimator, it is important to have $\widehat S(\mathbf x + \mathbf e_i \delta/2)$ and $\widehat S(\mathbf x - \mathbf e_i \delta/2)$ positively correlated. This can, for example, be achieved by using common random numbers in the simulation.

In direct analogy to gradient descent methods, the stochastic approximation method produces a sequence of iterates, starting with some $\mathbf x_1$, via
$$\mathbf x_{t+1} = \mathbf x_t - \beta_t\, \widehat{\nabla S}(\mathbf x_t),$$
where $\beta_1, \beta_2, \ldots$ is a sequence of strictly positive step sizes. A generic stochastic approximation algorithm for minimizing a function $S$ is thus as follows.

Algorithm: Stochastic Approximation
input: a mechanism to estimate any gradient $\nabla S(\mathbf x)$, and step sizes $\beta_1, \beta_2, \ldots$.
output: an approximate optimizer of $S$.
1. Initialize $\mathbf x_1$; set $t \leftarrow 1$.
2. while the stopping criterion is not met do
3.   Obtain an estimated gradient $\widehat{\nabla S}(\mathbf x_t)$ of $S$ at $\mathbf x_t$.
4.   Determine a step size $\beta_t$.
5.   Set $\mathbf x_{t+1} \leftarrow \mathbf x_t - \beta_t\, \widehat{\nabla S}(\mathbf x_t)$.
6.   $t \leftarrow t + 1$.
7. return $\mathbf x_t$.

When $\widehat{\nabla S}(\mathbf x_t)$ is an unbiased estimator of $\nabla S(\mathbf x_t)$, the stochastic approximation algorithm is referred to as the Robbins–Monro algorithm. When finite differences are used to estimate $\widehat{\nabla S}(\mathbf x_t)$, as in the central difference estimator above, the resulting algorithm is known as the
14,876 | kiefer-wolfowitz algorithm in section we will see how stochastic gradient descent is employed in deep learning to minimize the training lossbased on "minibatchof training data it can be shown [ thatunder certain regularity conditions on the sequence converges to the true minimizer xwhen the step sizes decrease slowly enough to in particularwhen = bt and = kieferwolfowitz ( in practiceone rarely uses step sizes that satisfy ( )as the convergence of the sequence will be too slow to be of practical use an alternative approach to stochastic approximation is the stochastic counterpart methodalso called sample average approximation it can be applied in situations where the noisy objective function is of the form (xese(xx) xstochastic counterpart ( where is random vector that can be simulated and se(xxcan be evaluated exactly the idea is to replace the optimization of ( with that of the sample average xe (xs (xxi ) = ( where are iid copies of note that is deterministic function of and so can be optimized using any optimization algorithm solution to this sample average version is taken to be an estimator of solution xto the original problem ( example (determining good importance sampling parametersthe selection of good importance sampling parameters can be viewed as stochastic optimization problem considerfor instancethe importance sampling estimator in example recall that the nominal distribution is the uniform distribution on the square [-bb] with pdf fb ( ( ) [-bb] where is large enough to ensure that ub is close to uin that examplewe chose the importance sampling pdf is - + le gl (xfr,th (rthle-lr ( { } which depends on free parameter in the example we chose is this the best choicemaybe or would have resulted in more accurate estimate the important thing to realize is that the "effectivenessof can be measured in terms of the variance of the estimator in ( )which is given by |
14,877 | (xu (xf ( vargl (xegl ( (xn gl (xn gl (xn gl ( hencethe optimal parameter lminimizes the function (le [ (xf ( )/gl ( )]which is unknownbut can be estimated from simulation to solve this stochastic minimization problemwe first use stochastic approximation thusat each step of the algorithmthe gradient of (lis estimated from realizations of (lh (xf ( )/gl ( )where fb as in the original problem (that isthe estimation of )the parameter should be large enough to avoid any bias in the estimator of lbut also small enough to ensure small variance the following python code implements particular instance of algorithm for sampling from fb herewe used instead of as this will improve the crude monte carlo estimation of lwithout noticeably affecting the bias the gradient of (lis estimated in lines - using the central difference estimator ( notice how for the ( - / and ( + / the same random vector [ ]is used this significantly reduces the variance of the gradient estimatorsee also exercise the (xt lt given the large gradient herewe choose step size bt should be such that bt - and decrease it each step by factor of figure shows how the sequence decreases towards approximately which we take as an estimator for the optimal importance sampling parameter lstochapprox py import numpy as np from numpy import pi import matplotlib pyplot as plt = choose large enough but not too large delta lambda ( * )** np exp(-np sqrt( ** ** / *np sin ( np sqrt( ** ** + ))* ** ** < ** /( )** lambda lamlam*np exp(-np sqrt( ** ** )*lam)/np sqrt( ** ** /( pibeta *- #step size very small as the gradient is large lam = lams np array (lam ] = ** for in range ( - * *np random rand( , - * *np random rand( , laml lam delta / lamr lam delta / estl np mean( ( , )** * / ( laml)estr np mean( ( , )** * / ( lamr)#use same , gr (estr -estl)delta gradient lam lam gr*beta gradient descend lams np hstack (lams lam)beta beta * lamsize range ( (lams size)plt plot(lamsize lamsplt show ( |
14,878 | steps figure the stochastic optimization algorithm produces sequence lt that tends to an approximate estimate of the optimal importance sampling parameter nextwe estimate lusing stochastic counterpart approach as the objective function (lis of the form ( (with taking the role of and the role of )we obtain the sample average (xi ( (xi (ln = gl (xi where ~iid fb once the ~iid fb have been simulatedb (lis deterministic function of lwhich can be optimized by any means we take the most basic approach and simply evaluate the function for and select the minimizing on this grid the code is given below and figure shows (las function of the minimum value found was for minimizer which is in accordance with the value obtained via stochastic approximation the sensitivity of this estimate can be assessed from the graphfor wide range of values (say from to stays rather flat so any of these values could be used in an importance sampling procedure to estimate howeververy small values (less than and large values (greater than should be avoided our original choice of was therefore justified and we could not have done much better stochcounterpart py from stochapprox import lams np linspace ( res =[res np array (resfor in range (lams size)lam lams[inp random seed ( lambda lam*np exp(-np sqrt( ** ** )*lam)/np sqrt ( ** ** /( pi |
14,879 | =- + * *np random rand( , =- + * *np random rand( , = ( , )** * / ( ,yestcmc np mean(zres np hstack ((res estcmc )plt plot(lams resplt xlabel ( '$lambda'plt ylabel ( '$\hat{ }(lambda )$'plt ticklabel_format style ='sci 'axis=' 'scilimits =( , )plt show ( figure the stochastic counterpart method replaces the unknown ( (that isthe scaled variance of the importance sampling estimatorwith its sample averageb (lthe minimum value of is attained around third method for stochastic optimization is the cross-entropy method in particularalgorithm can easily be modified to minimize noisy functions (xese(xx)as defined in ( the only change required in the algorithm is that every function value (xbe replaced by its estimate (xdepending on the level of noise in the functionthe sample size might have to be increased considerably example (cross-entropy method for noisy optimizationto explore the use of the ce method for noisy optimizationtake the following noisy discrete optimization problem suppose there is "black boxthat contains an unknown binary sequence of bits if one feeds the black box any input vectorit will first scramble the input by independently flipping the bits (changing to and to with probability th and then return the number of bits that do not match the true (unknownbinary sequence this is illustrated in figure for |
14,880 | figure noisy optimization function as black box the input to the black box is binary vector inside the black box the digits of the input vector are scrambled by flipping bits with probability th the output is the number of bits of the scrambled vector that do not match the true (unknownbinary vector denoting by (xthe true number of matching digits for binary input vector xthe black box thus returns noisy estimate (xthe objective is to estimate the binary sequence inside the black boxby feeding it with many input vectors and observing their output orto put it in different wayto minimize (xusing (xas proxy since there are possible input vectorsit is infeasible to try all possible vectors even for moderate the following python program implements the noisy function (xfor each input bit is flipped with rather high probability th so that the output is poor indicator of how many bits actually match the true vector this true vector has at positions and at snoisy py import numpy as np def snoisy ( )takes matrix shape [ shape [ true binary vector xorg np hstack (np ones (( , // ))np zeros (( , // )))theta probability to flip the input storing the number of bits unequal to the true vector np zeros (nfor in range ( , )determine which bits to flip flip (np random uniform (size =( )theta astype (intind flip > [ ]ind - [ ]inds[ ( [ !xorgsum (return the ce code below to optimize (xis quite similar to the continuous optimization code in example howeverinstead of sampling iid random variables xn from normal distributionwe now sample iid binary vectors from berpdistribution more preciselygiven row vector of probabilities [ pn ]we independently simulate the components xn of each binary vector according to xi ber(pi ) after each iterationthe vector is updated as the (vectormean of the elite |
14,881 | samples the sample size is and the number of elite samples is the components of the initial sampling vector are all equal to / that isthe are initially uniformly sampled from the set of all binary vectors of length at each subsequent iteration the parameter vector is updated via the mean of the elite samples and evolves towards degenerate vector pwith only and sampling from such berpdistribution gives an outcome xpwhich can be taken as an estimate for the minimizer of that isthe true binary vector hidden in the black box the algorithm stops when has degenerated sufficiently figure shows the evolution of the vector of probabilities this figure may be seen as the discrete analogue of figure we see thatdespite the high noisethe ce method is able to find the true state of the black boxand hence the minimum value of figure evolution of the vector of probabilities [ pn towards the degenerate solution |
14,882 | cenoisy py from snoisy import snoisy import numpy as np rho nel int( *rho)eps np ones(ni pstart ps np zeros (( , )ps [ pstart pdist np zeros (( , )while np max(np minimum ( , - )epsi + (np random uniform (size =( , )pastype (intx_tmp np array (xcopy=truesx snoisy x_tmp ids np argsort (sx ,axis = elite [ids [ nel ,: np mean(elite ,axis = ps[ip print (pfurther reading the article [ explores why the monte carlo method is so important in today' quantitative investigations the handbook of monte carlo methods [ provides comprehensive overview of monte carlo simulation that explores the latest topicstechniquesand realworld applications popular books on simulation and the monte carlo method include [ ][ ]and [ classic reference on random variable generation is [ easy introductions to stochastic simulation are given in [ ][ ]and [ more advanced theory can be found in [ markov chain monte carlo is detailed in [ and [ the research monograph on the cross-entropy method is [ and tutorial is provided in [ range of optimization applications of the ce method is given in [ theoretical results on adaptive tuning schemes for simulated annealing may be foundfor examplein [ there are several established ways for gradient estimation these include the finite difference methodinfinitesimal perturbation analysisthe score function methodand the method of weak derivativesseefor example[ exercises we can modify the box-muller method in example to draw and uniformly on the unit disc{(xyr }in the following wayindependently draw radius and an angle th ( )and return cos(th) sin(ththe question is how to draw (ashow that the cdf of is given by fr (rr for (with fr ( and |
14,883 | fr ( for respectively(bexplain how to simulate using the inverse-transform method (csimulate independent draws of [xy]according to the method described above simple acceptance-rejection method to simulate vector in the unit -ball { rd kxk is to first generate uniformly in the hyper cube [- ] and then to accept the point only if kxk determine an analytic expression for the probability of acceptance as function of and plot this for let the random variable have pdf xf ( simulate random variable from ( )using (athe inverse-transform method(bthe acceptance-rejection methodusing the proposal density ( construct simulation algorithms for the following distributionsa (athe weib(aldistributionwith cdf ( -(lxx where and (bthe pareto(aldistributionwith pdf (xal( lx)-( + where and we wish to sample from the pdf (xx - using acceptance-rejection with the proposal pdf (xe- / / (afind the smallest for which cg(xf (xfor all (bwhat is the efficiency of this acceptance-rejection method let [xy]be uniformly distributed on the triangle with corners ( )( )and (- give the distribution of [uv]defined by the linear transformation # explain how to generate random variable from the extreme value distributionwhich has cdf - ( eexps via the inverse-transform method ( ) |
14,884 | write program that generates and displays random vectors that are uniformly distributed within the ellipse [hintconsider generating uniformly distributed samples within the circle of radius and use the fact that linear transformations preserve uniformity to transform the circle to the given ellipse suppose that xi exp(li )independentlyfor all let [ pn ]be the random permutation induced by the ordering xp xp xpn and define :xp and :xp xp - for (adetermine an matrix such that ax and show that det( (bdenote the joint pdf of and as , (xpn = lpi exp -lpi xpi {xp xpn } pn where pn is the set of all npermutations of { nuse the multivariate transformation formula ( to show that , (zpexp zi lpk li pn = = > henceconclude that the probability mass function of the random permutation isn lp pn [ pk> lpk = (cwrite pseudo-code to simulate uniform random permutation pn that issuch that [ pn! and explain how this uniform random permutation can be used to reshuffle training set tn consider the markov chain with transition graph given in figure starting in state start figure the transition graph for the markov chain {xt |
14,885 | (aconstruct computer program to simulate the markov chainand show realization for steps (bcompute the limiting probabilities that the markov chain is in state , , by solving the global balance equations ( (cverify that the exact limiting probabilities correspond to the average fraction of times that the markov process visits states , , for large number of steps as generalization of example consider random walk on an arbitrary undirected connected graph with finite vertex set for any vertex vlet (vbe the number of neighbors of -called the degree of the random walk can jump to each one of the neighbors with probability / (vand can be described by markov chain show thatif the chain is aperiodicthe limiting probability that the chain is in state is equal to ( ) ( let uv ~iid ( the reason why in example the sample mean and sample median behave very differently is that [ /vwhile the median of / is finite show thisand compute the median [hintstart by determining the cdf of / by writing it as an expectation of an indicator function consider the problem of generating samples from gamma( (adirect simulationlet ~iid ( show that ln( )/ -ln( )/ gamma( [hintderive the distribution of ln( )/ and use example (bsimulation via mcmcimplement an independence sampler to simulate from the gamma( target pdf ( - using proposal transition density ( xg( )where (yis the pdf of an exp( random variable generate samplesand compare the true cdf with the empirical cdf of the data let [xy]be random column vector with bivariate normal distribution with expectation vector [ ]and covariance matrix sa (awhat are the conditional distributions of ( xand ( )[hintuse theorem (bimplement gibbs sampler to draw samples from the bivariate distribution (usfor and and plot the resulting samples here the objective is to sample from the -dimensional pdf (xyc -(xy+ +yx for some normalization constant cusing gibbs sampler let (xyf |
14,886 | (afind the conditional pdf of given yand the conditional pdf of given (bwrite working python code that implements the gibbs sampler and outputs points that are approximately distributed according to (cdescribe how the normalization constant could be estimated via monte carlo iid simulationusing random variables xn yn exp( we wish to estimate - - / dx (xf (xdx via monte carlo simulation - / using two different approaches( defining (xand the pdf of the [- distribution and ( defining ( {- and the pdf of the ( distribution (afor both cases estimate via the estimator - (xi ( = use sample size of (bfor both cases estimate the relative error of using (cgive confidence interval for for both cases using (dfrom part ( )assess how large should be such that the relative width of the confidence interval is less than and carry out the simulation with this compare the result with the true value of consider estimation of the tail probability [ gof some random variable xwhere is large the crude monte carlo estimator of is zi = ( where xn are iid copies of and zi {xi } (ashow that is unbiasedthat ise (bexpress the relative error of ui varb re eb in terms of and (cexplain how to estimate the relative error of from outcomes xn of xn and how to construct confidence interval for (dan unbiased estimator of is said to be logarithmically efficient if ln ez gln lim ( show that the cmc estimator ( with is not logarithmically efficient |
14,887 | one of the test cases in [ involves the minimization of the hougen function implement cross-entropy and simulated annealing algorithm to carry out this optimization task in the binary knapsack problemthe goal is to solve the optimization problemmax pxx{ , } subject to the constraints ax cwhere and are vectors of non-negative numbersa (ai is an matrixand is an vector the interpretation is that or depending on whether item with value is packed into the knapsack or not nthe variable ai represents the -th attribute ( volumeweightof the -th item associated with each attribute is maximal capacitye could be the maximum volume of the knapsackc the maximum weightetc write ce program to solve the sento dat knapsack problem at le brunel ac uk/~mastjjb/jeb/orlib/files/mknap txtas described in [ let ( )( )be renewal reward processwith er and pnt ec let at = ri / be the average reward at time where nt max{ tand we have defined ni= ci as the time of the -th renewal (ashow that / -ec as (bshow that nt -as (cshow that nt / - /ec as [hintuse the fact that nt nt + for all (dshow that at -er as ec prove theorem prove that if ( the importance sampling pdf gin ( gives the zerovariance importance sampling estimator let and be random variables (not necessarily independentand suppose we wish to estimate the expected difference [ yex ey (ashow that if and are positively correlatedthe variance of is smaller than if and are independent (bsuppose now that and have cdfs and grespectivelyand are simulated via the inverse-transform methodx - ( ) - ( )with uv ( )not necessarily independent intuitivelyone might expect that |
14,888 | if and are positively correlatedthe variance of - would be smaller than if and are independent show that this is not always the case by providing counter-example (ccontinuing ( )assume now that and are continuous show that the variance of by taking common random numbers is no larger than when and are independent [hintuse the following lemma of hoeffding [ ]if (xyhave joint cdf with marginal cdfs of and being and grespectivelythen ( (xyf(xg( )dx dycov(xyprovided cov(xyexists |
14,889 | nsupervised earning when there is no distinction between response and explanatory variablesunsupervised methods are required to learn the structure of the data in this we look at various unsupervised learning techniquessuch as density estimationclusteringand principal component analysis important tools in unsupervised learning include the cross-entropy training lossmixture modelsthe expectation-maximization algorithmand the singular value decomposition introduction in contrast to supervised learningwhere an "output(responsevariable is explained by an "input(explanatoryvector xin unsupervised learning there is no response variable and the overall goal is to extract useful information and patterns from the datae in the form { xn or as matrix [ xn in essencethe objective of unsupervised learning is to learn about the underlying probability distribution of the data we start in section by setting up framework for unsupervised learning that is similar to the framework used for supervised learning in section that iswe formulate unsupervised learning in terms of risk and loss minimizationbut now involving the crossentropy riskrather than the squared-error risk in natural way this leads to fundamental learning concepts such as likelihoodfisher informationand the akaike information criterion section introduces the expectation-maximization (emalgorithm as useful method for maximizing likelihood functions when their solution cannot be found easily in closed form if the data forms an iid sample from some unknown distributionthe "empirical distributionof the data provides valuable information about the unknown distribution in section we formalize the concept of the empirical distribution ( generalization of the empirical cdfand explain how we can produce an estimate of the underlying probability density function of the data using kernel density estimators most unsupervised learning techniques focus on identifying certain traits of the underlying distributionsuch as its local maximizers related idea is to partition the data into clusters of points that are in some sense "similarto each other in section we formulate the clustering problem in terms of mixture model in particularthe data are assumed |
14,890 | to come from mixture of (usually gaussiandistributionsand the objective is to recover the parameters of the mixture distributions from the data the principal tool for parameter estimation in mixture models is the em algorithm section discusses more heuristic approach to clusteringwhere the data are grouped according to certain "cluster centers"whose positions are found by solving an optimization problem section describes how clusters can be constructed in hierarchical manner finallyin section we discuss the unsupervised learning technique called principal component analysis (pca)which is an important tool for reducing the dimensionality of the data we will revisit various unsupervised learning techniques in subsequent on supervised learning for examplecross-entropy training loss minimization will be important in logistic regression (section and classification ()and pca can be used for variable selection and dimensionality reductionto make models easier to train and increase their predictive powersee sections and risk and loss in unsupervised learning in unsupervised learningthe training data :{ xn only consists of (what are usually assumed to beindependent copies of feature vector xthere is no response data suppose our objective is to learn the unknown pdf of based on an outcome { xn of the training data convenientlywe can follow the same line of reasoning as for supervised learningdiscussed in sections table gives summary of definitions for the case of unsupervised learning compare this with table for the supervised case similar to supervised learningwe wish to find function gwhich is now probability density (continuous or discrete)that best approximates the pdf in terms of minimizing risk `( : lossf ( ) ( ))( where loss is loss function in ( )we already encountered the kullback-leibler risk `( : ln cross-entropy risk (xe ln (xe ln (xg( ( if is class of functions that contains then minimizing the kullback-leibler risk over will yield the (correctminimizer of coursethe problem is that minimization of ( depends on which is generally not known howeversince the term ln (xdoes not depend on git plays no role in the minimization of the kullback-leibler risk by removing this termwe obtain the cross-entropy risk (for discrete replace the integral with sum)`( :- ln (xz (xln (xdx ( thusminimizing the cross-entropy risk ( over all gagain gives the minimizer provided that unfortunatelysolving ( is also infeasible in generalas it still |
14,891 | table summary of definitions for unsupervised learning (xt or tn or tn lossf ( ) ( )`(ggg ` ( ` (gggt or gt ggt or gt fixed feature vector random feature vector pdf of evaluated at the point fixed training data {xi nrandom training data {xi napproximation of the pdf loss incurred when approximating (xwith (xrisk for approximation function gthat ise lossf ( ) ( )optimal approximation function in function class gthat isargmingg `(gtraining loss for approximation function (guessgthat isthe sample average estimate of `(gbased on fixed training sample the same as ` ( )but now for random training sample the learnerargmingg ` (gthat isthe optimal approximation function based on fixed training set and function class we suppress the superscript if the function class is implicit the learner for random training set depends on insteadwe seek to minimize the cross-entropy training loss` ( : lossf (xi ) (xi )ln (xi = = cross-entropy training loss ( over the class of functions gwhere { xn is an iid sample from this optimization is doable without knowing and is equivalent to solving the maximization problem max gg ln (xi ( = key step in setting up the learning procedure is to select suitable function class over which to optimize the standard approach is to parameterize with parameter th and let be the class of functions { (th)th thfor some -dimensional parameter set th for the remainder of section we will be using this function classas well as the cross-entropy risk the function th ( this called the likelihood function it gives the likelihood of the observed feature vector under (th)as function of the parameter th the natural logarithm of the likelihood function is called the log-likelihood function and its gradient with respect to th is called the score functiondenoted ( th)that isg( thln ( ths( th:th th ( th( likelihood function score function |
14,892 | risk and loss in unsupervised learning the random score ( th)with (th)is of particular interest in many casesits expectation is equal to the zero vectornamelyz ( thth eth ( thg( thdx ( th( ( thdx ( thdx th th th provided that the interchange of differentiation and integration is justified this is true for large number of distributionsincluding the normalexponentialand binomial distributions notable exceptions are distributions whose support depends on the distributional parameterfor example the ( thdistribution it is important to see whether expectations are taken with respect to (thor we use the expectation symbols eth and to distinguish the two cases fisher information matrix from now on we simply assume that the interchange of differentiation and integration is permittedseee [ for sufficient conditions the covariance matrix of the random score ( this called the fisher information matrixwhich we denote by or (thto show its dependence on th since the expected score is we have (theth [ ( ths( th) related matrix is the expected hessian matrix of ln ( th) ln ( th ln ( th th **th th ln ( th ln ( ths( th th (th: - th th th ln ( th ln ( thth th th th ( ln ( thth th ln ( thth th ln ( th( th note that the expectation here is with respect to it turns out that if (th)the two matrices are the samethat isf(thh(th)information matrix equality ( provided that we may swap the order of differentiation and integration (expectationthis result is called the information matrix equality we leave the proof as exercise the matrices (thand (thplay important roles in approximating the cross-entropy risk for large to set the scenelet gg (thbe the minimizer of the cross-entropy risk (th:- ln ( thwe assume that ras function of this well-behavedin particularthat in the neighborhood of thit is strictly convex and twice continuously differentiable (this holds truefor exampleif is gaussian densityit follows that this root of ( th)because ln ( thln ( thr(th=- - ( th) th th th |
14,893 | again provided that the order of differentiation and integration (expectationcan be swapped in the same wayh(this then the hessian matrix of let ( thn be the minimizer of the training loss ln (xi th)rtn (th: = where tn { xn is random training set let rbe the smallest possible crossentropy risktaken over all functionsclearlyr- ln ( )where similar to the supervised learning casewe can decompose the generalization risk`( ( thn ) ( thn )into ( thr( thn rr(thrr( thn (thr(the ln { { ( th approx error statistical error the following theorem specifies the asymptotic behavior of the components of the generp alization risk in the proof we assume that thn -thas theorem approximating the cross-entropy risk it holds asymptotically ( that where er( thn (thtr (thh- (th/( )( (thertn ( thn tr (thh- (th/( ( proofa taylor expansion of ( thn around thgives the statistical error (th (thn th) (thn )( thn th) ( thn (th( thn th)th { ( = where thn lies on the line segment between thand thn for large we may replace (thn with (thasby assumptionb thn converges to ththe matrix (this positive definite because (this strictly convex at thby assumptionand therefore invertible it is important to realize thn is in fact an -estimator of thin particularin the notation of theorem we that have ps sa (th)and (thconsequentlyby that same theoremd ( thn th- - (thf(thh-(th( combining ( with ( )it follows from theorem that asymptotically the expected estimation error is given by ( nextwe consider taylor expansion of rtn (tharound thn rtn ( thn rtn (thrtn ( thn (thb thn )(th thn htn (thn )(thb thn )th { = ( |
14,894 | risk and loss in unsupervised learning where htn (thn : ni= (xthi thn is the hessian of rtn (that some thn between thn and thtaking expectations on both sides of ( )we obtain thn )htn (thn )(thb thn (thertn ( thn (thb replacing htn (thn with (thfor large and using ( )we have (thb thn )htn (thn )(thb thn -tr (thh- (thn thereforeasymptotically as we have ( theorem has number of interesting consequences similar to section the training loss `tn (gtn rtn ( thn tends to underestimate the risk `( (th )because the training set tn is used to both train (that isestimate thand to estimate the risk the relation ( tells us that on average the training loss underestimates the true risk by tr( (thh- (th))/( adding equations ( and ( )yields the following asymptotic approximation to the expected generalization risk ( ( thn rtn ( thn tr (thh- (thn the first term on the right-hand side of ( can be estimated (without biasvia the training loss rtn ( thn as for the second termwe have already mentioned that when the true model gthen (thh(ththereforewhen is deemed to be sufficiently rich class of models parameterized by -dimensional vector thwe may approximate the second term as tr( (th) - (th))/ tr( )/ / this suggests the following heuristic approximation to the (expectedgeneralization riskp ( ( thn rtn ( thn multiplying both sides of ( by and substituting tr (th) - (thpwe obtain the approximationn (thn - ln (xi thn ( = akaike information criterion the right-hand side of ( is called the akaike information criterion (aicjust like ( )the aic approximation can be used to compare the difference in generalization risk of two or more learners we prefer the learner with the smallest (estimatedgeneralization risk suppose thatfor training set the training loss rt (thhas unique minimum point th which lies in the interior of th if rt (this differentiable function with respect to ththen we can find the optimal parameter th by solving rt (th (xi th th = { st (th |
14,895 | In other words, the maximum likelihood estimate $\widehat{\boldsymbol\theta}$ for $\boldsymbol\theta$ is obtained by solving for the root of the average score function; that is, by solving $S_\tau(\boldsymbol\theta) = \mathbf 0$. It is often not possible to find $\widehat{\boldsymbol\theta}$ in an explicit form. In that case one needs to solve the equation numerically. There exist many standard techniques for root-finding, e.g., via Newton's method, whereby, starting from an initial guess $\boldsymbol\theta_0$, subsequent iterates are obtained via the iterative scheme
$$\boldsymbol\theta_{t+1} = \boldsymbol\theta_t - H_\tau^{-1}(\boldsymbol\theta_t)\, S_\tau(\boldsymbol\theta_t),$$
where
$$S_\tau(\boldsymbol\theta) = \frac1n \sum_{i=1}^n \frac{\partial \ln g(\mathbf x_i \mid \boldsymbol\theta)}{\partial \boldsymbol\theta} \quad \text{and} \quad H_\tau(\boldsymbol\theta) := \frac1n \sum_{i=1}^n \frac{\partial^2 \ln g(\mathbf x_i \mid \boldsymbol\theta)}{\partial \boldsymbol\theta\, \partial \boldsymbol\theta^\top}$$
is the average Hessian matrix of $\{\ln g(\mathbf x_i \mid \boldsymbol\theta)\}_{i=1}^n$. Under $g(\cdot \mid \boldsymbol\theta)$, the expectation of $-H_\tau(\boldsymbol\theta)$ is equal to the information matrix $F(\boldsymbol\theta)$, which does not depend on the data. This suggests an alternative iterative scheme, called Fisher's scoring method:
$$\boldsymbol\theta_{t+1} = \boldsymbol\theta_t + F^{-1}(\boldsymbol\theta_t)\, S_\tau(\boldsymbol\theta_t),$$
which is not only easier to implement (if the information matrix can be readily evaluated), but is also more numerically stable.

Example (Maximum Likelihood for the Gamma Distribution). We wish to approximate the density of the $\mathsf{Gamma}(\alpha^*, \lambda^*)$ distribution for some true but unknown parameters $\alpha^*$ and $\lambda^*$, on the basis of a training set $\tau = \{x_1, \ldots, x_n\}$ of iid samples from this distribution. Choosing our approximating function $g(x \mid \alpha, \lambda)$ in the same class of gamma densities,
$$g(x \mid \alpha, \lambda) = \frac{\lambda^\alpha x^{\alpha - 1}\, \mathrm e^{-\lambda x}}{\Gamma(\alpha)},$$
with $\alpha > 0$ and $\lambda > 0$, we seek to solve $S_\tau(\alpha, \lambda) = \mathbf 0$. Taking the logarithm, the log-likelihood function is given by
$$l(x \mid \alpha, \lambda) := \alpha \ln \lambda + (\alpha - 1)\ln x - \ln \Gamma(\alpha) - \lambda x.$$
It follows that
$$\frac{\partial l}{\partial \alpha} = \ln \lambda + \ln x - \psi(\alpha), \qquad \frac{\partial l}{\partial \lambda} = \frac{\alpha}{\lambda} - x,$$
where $\psi$ is the derivative of $\ln \Gamma$, the so-called digamma function. The Hessian of $l$ does not depend on $x$, so the information matrix is simply minus this Hessian:
$$F(\alpha, \lambda) = \begin{bmatrix} \psi'(\alpha) & -1/\lambda \\ -1/\lambda & \alpha/\lambda^2 \end{bmatrix}.$$
Fisher's scoring method can now be used to solve $S_\tau(\alpha, \lambda) = \mathbf 0$, with
$$S_\tau(\alpha, \lambda) = \begin{bmatrix} \ln \lambda - \psi(\alpha) + n^{-1}\sum_{i=1}^n \ln x_i \\ \alpha/\lambda - n^{-1}\sum_{i=1}^n x_i \end{bmatrix}.$$
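A minimal implementation of Fisher's scoring method for this gamma example is sketched below. It is our own illustration, not the authors' code; the true parameter values, initial guess, and fixed number of iterations are arbitrary choices, and SciPy's digamma and polygamma functions supply ψ and ψ′.

import numpy as np
from scipy.special import digamma, polygamma

np.random.seed(1)
x = np.random.gamma(shape=2.5, scale=1/1.3, size=1000)  # alpha* = 2.5, lambda* = 1.3
mlx, mx = np.mean(np.log(x)), np.mean(x)

alpha, lam = 1.0, 1.0                                   # initial guess
for _ in range(100):
    # average score S_tau(alpha, lambda)
    S = np.array([np.log(lam) - digamma(alpha) + mlx,
                  alpha/lam - mx])
    # Fisher information matrix F(alpha, lambda)
    F = np.array([[polygamma(1, alpha), -1/lam],
                  [-1/lam, alpha/lam**2]])
    alpha, lam = np.array([alpha, lam]) + np.linalg.solve(F, S)
print('alpha = {:.4f}, lambda = {:.4f}'.format(alpha, lam))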
14,896 | expectation-maximization (emalgorithm the expectation-maximization algorithm (emis general algorithm for maximization of complicated (log-)likelihood functionsthrough the introduction of auxiliary variables to simplify the notation in this sectionwe use bayesian notation systemwhere the same symbol is used for different (conditionalprobability densities as in the previous sectiongiven independent observations { xn from some unknown pdf the objective is to find the best approximation to in function class { (th)th thby solving the maximum likelihood problemthargmax ( th)thth latent variables complete-data likelihood ( where ( th: ( thg(xn ththe key element of the em algorithm is the augmentation of the data with suitable vector of latent variableszsuch that ( thg(tz thdz the function th (tz this usually referred to as the complete-data likelihood function the choice of the latent variables is guided by the desire to make the maximization of (tz thmuch easier than that of ( thsuppose denotes an arbitrary density of the latent variables thenwe can writez ln ( thp(zln ( thdz (tz th)/ (zdz (zln ( tth)/ (zz (tz thg( tthdz (zln dz (zln (zp(zz (tz thp(zln dz (pg(tth))( (zwhere (pg(tth)is the kullback-leibler divergence from the density to (tthsince it follows that (tz thln ( thp(zln dz = (pthp(zfor all th and any density of the latent variables in other wordsl(pthis lower bound on the log-likelihood that involves the complete-data likelihood the em algorithm then aims to increase this lower bound as much as possible by starting with an initial guess th( and thenfor solving the following two steps (targmax (pth( - ) th(targmaxthth ( (tth |
14,897 | the first optimization problem can be solved explicitly namelyby ( )we have that (targmin (pg(tth( - ) (tth( - that isthe optimal density is the conditional density of the latent variables given the data and the parameter th( - the second optimization problem can be simplified by writing ( (tthq( (the (tln ( ( )where ( (th: (tln (tz this the expected complete-data log-likelihood under (tconsequentlythe maximization of ( (tthwith respect to th is equivalent to finding th(targmax ( (ththth this leads to the following generic em algorithm algorithm generic em algorithm inputdata tinitial guess th( outputapproximation of the maximum likelihood estimate while stopping criterion is not met do expectation stepfind ( ( : ( tth( - and compute the expectation ( (th: (tln (tz th ( maximization steplet th(targmaxthth ( (tht - + return th(ta possible stopping criterion is to stop when ln ( th(tln ( th( - ln ( th(tfor some small tolerance remark (properties of the em algorithmthe identity ( can be used to show that the likelihood ( th(tdoes not decrease with every iteration of the algorithm this property is one of the strengths of the algorithm for exampleit can be used to debug computer implementations of the em algorithmif the likelihood is observed to decrease at any iterationthen one has detected bug in the program the convergence of the sequence {th(tto global maximum (if it existsis highly dependent on the initial value th( andin many casesan appropriate choice of th( may not be clear typicallypractitioners run the algorithm from different random starting points over thto ascertain empirically that suitable optimum is achieved |
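Mixture models, treated later in this chapter, give a standard concrete instance of this generic EM scheme. The sketch below is our own minimal illustration, not the book's code, of EM for a two-component univariate Gaussian mixture; the data-generating parameters and initial guess are chosen arbitrarily.

import numpy as np
from scipy.stats import norm

np.random.seed(3)
# data from a two-component Gaussian mixture with weights 0.4 and 0.6
x = np.concatenate((np.random.randn(400)*0.8 - 2, np.random.randn(600)*1.2 + 1))

w, mu, sig = 0.5, np.array([-1.0, 2.0]), np.array([1.0, 1.0])  # initial guess
for _ in range(100):
    # E-step: responsibilities r_i = P(Z_i = 1 | x_i, theta)
    p1 = w*norm.pdf(x, mu[0], sig[0])
    p2 = (1 - w)*norm.pdf(x, mu[1], sig[1])
    r = p1/(p1 + p2)
    # M-step: weighted maximum likelihood updates
    w = np.mean(r)
    mu = np.array([np.sum(r*x)/np.sum(r), np.sum((1 - r)*x)/np.sum(1 - r)])
    sig = np.sqrt(np.array([np.sum(r*(x - mu[0])**2)/np.sum(r),
                            np.sum((1 - r)*(x - mu[1])**2)/np.sum(1 - r)]))
print('w = {:.3f}, mu = {}, sig = {}'.format(w, np.round(mu, 3), np.round(sig, 3)))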
14,898 | example (censored datasuppose the lifetime (in yearsof certain type of machine is modeled via (us distribution to estimate and the lifetimes of (independentmachines are recorded up to years denote these censored lifetimes by xn the {xi are thus realizations of iid random variables {xi }distributed as min{yc}where (us by the law of total probability (see ( ))the marginal pdf of each can be written asphs ( { cph(( )/ { } ( us ph(( )/ { { ph(( )/sp[ >cp[ <cwhere phs (*is the pdf of the ( distributionph is the cdf of the standard normal distributionand ph : ph it follows that the likelihood of the data { xn as function of the parameter th :[us ]is- ) exp ( ( thx ph(( )/ ps :xi < :xi = let nc be the total number of xi such that xi using nc latent variables [ znc ]we can write the joint pdfpnc :xi < (xi ui= (zi uexp min (tz thi ( ps ) / so that (tz thdz ( thwe can thus apply the em algorithm to maximize the likelihoodas follows for the (xpectation)-stepwe have for fixed thg( tthnc = (zi tth)where ( tth { cphs ( )/ph(( )/sis simply the pdf of the (us distributiontruncated to [cfor the (aximization)-stepwe compute the expectation of the complete loglikelihood with respect to fixed ( tthand use the fact that znc are iidp nc ( ) :xi < (xi uln ln( ) ln (tz th where has (us distributiontruncated to [cto maximize the last expression with respect to we set the derivative with respect to to zeroand obtainp nc ez :xi < xi un similarlysetting the derivative with respect to to zero givesp nc ( ) :xi < (xi ) in summarythe em iterates for are as follows |
14,899 | E-step. Given the current estimate $\boldsymbol\theta_t := [\mu_t, \sigma_t^2]^\top$, compute the expectations $\nu_t := \mathbb E Z$ and $\zeta_t := \mathbb E Z^2$, where $Z \sim \mathcal N(\mu_t, \sigma_t^2)$ conditional on $Z \geq c$; that is, with $a_t := (c - \mu_t)/\sigma_t$,
$$\nu_t := \mu_t + \sigma_t\, \frac{\varphi(a_t)}{\overline\Phi(a_t)}, \qquad \zeta_t := \mu_t^2 + \sigma_t^2 + \sigma_t\,(c + \mu_t)\, \frac{\varphi(a_t)}{\overline\Phi(a_t)}.$$

M-step. Update the estimate to $\boldsymbol\theta_{t+1} := [\mu_{t+1}, \sigma_{t+1}^2]^\top$ via the formulas
$$\mu_{t+1} = \frac{\sum_{i\,:\,x_i < c} x_i + n_c\, \nu_t}{n}, \qquad \sigma_{t+1}^2 = \frac{\sum_{i\,:\,x_i < c} (x_i - \mu_{t+1})^2 + n_c\,\big(\zeta_t - 2\mu_{t+1}\nu_t + \mu_{t+1}^2\big)}{n}.$$

Empirical Distribution and Density Estimation

In an earlier chapter we saw how the empirical cdf $\widehat F_n$, obtained from an iid training set $\tau = \{x_1, \ldots, x_n\}$ from an unknown distribution on $\mathbb R$, gives an estimate of the unknown cdf $F$ of this sampling distribution. The function $\widehat F_n$ is a genuine cdf, as it is right-continuous, increasing, and lies between $0$ and $1$. The corresponding discrete probability distribution is called the empirical distribution of the data. A random variable $X$ distributed according to this empirical distribution takes the values $x_1, \ldots, x_n$ with equal probability $1/n$. The concept of an empirical distribution naturally generalizes to higher dimensions: a random vector $\mathbf X$ that is distributed according to the empirical distribution of $\mathbf x_1, \ldots, \mathbf x_n$ has discrete pdf $\mathbb P[\mathbf X = \mathbf x_i] = 1/n$, $i = 1, \ldots, n$. Sampling from such a distribution — in other words, resampling the original data — was discussed earlier. The preeminent usage of such sampling is the bootstrap method. In a way, the empirical distribution is the natural answer to the unsupervised learning question: what is the underlying probability distribution of the data? However, the empirical distribution is, by definition, a discrete distribution, whereas the true sampling distribution might be continuous. For continuous data it makes sense to also consider estimation of the pdf of the data. A common approach is to estimate the density via a kernel density estimate (KDE). The most prevalent learner to carry this out is given next.

Definition (Gaussian KDE). Let $\mathbf x_1, \ldots, \mathbf x_n \in \mathbb R^d$ be the outcomes of an iid sample from a continuous pdf $f$. A Gaussian kernel density estimate of $f$ is a mixture of normal pdfs, of the form
$$g_{\tau_n}(\mathbf x \mid \sigma) = \frac1n \sum_{i=1}^n \frac{1}{(2\pi)^{d/2}\,\sigma^d}\, \mathrm e^{-\frac{\|\mathbf x - \mathbf x_i\|^2}{2\sigma^2}}, \quad \mathbf x \in \mathbb R^d,$$
where $\sigma > 0$ is called the bandwidth.
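The definition translates directly into code. The sketch below is our own one-dimensional illustration ($d = 1$), with simulated data and an arbitrarily chosen bandwidth; it overlays the estimate on a histogram of the data.

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(12)
x = np.random.exponential(size=200)   # iid training data (d = 1)
sigma = 0.3                           # bandwidth (illustrative choice)

def kde(t, x, sigma):
    # mixture of n normal pdfs, one centered at each data point
    return np.mean(np.exp(-(t - x[:, None])**2/(2*sigma**2)), axis=0) \
           / np.sqrt(2*np.pi)/sigma

tt = np.linspace(0, 6, 200)
plt.hist(x, bins=30, density=True, alpha=0.3)
plt.plot(tt, kde(tt, x, sigma))
plt.show()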